\section{Introduction} In this work we bring forward the study of the normalized Laplacian that has been established for \textsl{chemical hypergraphs}: hypergraphs with the additional structure that each vertex in a hyperedge is either an input, an output or both (in which case we say that it is a catalyst for that hyperedge). Chemical hypergraphs have been introduced in \cite{Hypergraphs} with the idea of modelling chemical reaction networks and related ones, such as metabolic networks. In this model, each vertex represents a chemical element and each hyperedge represents a chemical reaction. Furthermore, in \cite{Master-Stability}, chemical hypergraphs have been used for modelling dynamical systems with higher-order interactions. In this model, the vertices represent oscillators while the hyperedges represent the interactions on which the dynamics depends.\newline The spectrum of the normalized Laplacian $L$ reflects many structural properties of the network, and several theoretical results on the eigenvalues have been established in \cite{Hypergraphs,Sharp,MulasZhang}. Furthermore, as shown in \cite{Sharp}, by defining the vertex degree in a way that does not take catalysts into account, studying the spectrum of $L$ for chemical hypergraphs is equivalent to studying the spectrum of the \textsl{oriented hypergraphs} introduced in \cite{ReffRusnak} by Reff and Rusnak, in which catalysts are not included. Therefore, without loss of generality we can work on oriented hypergraphs. Here, in particular, we focus on the \textsl{bipartite} case and we show that the spectrum of the normalized Laplacian for bipartite chemical hypergraphs coincides with the spectrum of the \textsl{signless normalized Laplacian} that we introduce for classical hypergraphs. Furthermore, we establish the spectra of the signless normalized Laplacian for special families of such classical hypergraphs.\newline Classical hypergraphs are widely used in various disciplines.
For instance, they offer a valid model for transport networks \cite{andreotti2020eigenvalues}, neural networks (in whose context they are often called \textsl{neural codes}) \cite{neuro1,neuro2,neuro3,neuro5,neuro6,neuro7,neuro9,neuro10}, social networks \cite{ZhangLiu} and epidemiology networks \cite{BodoKatonaSimon}, to mention just some examples. It is worth noting that a simplicial complex $\mathcal{S}$ is a particular case of a hypergraph, with the additional constraint that, if a hyperedge belongs to $\mathcal{S}$, then all its subsets also belong to $\mathcal{S}$. Simplicial complexes are also widely present in applications. On the one hand, their more precise structure allows for a deeper theoretical study, compared to general hypergraphs. On the other hand, the constraints of simplicial complexes translate into constraints on the model, and this is not always convenient. Consider, for instance, a collaboration network that represents coauthoring of research papers: in this case, the fact that authors $A$, $B$ and $C$ have written a paper all together does not imply that $A$, $B$ and $C$ have all written single-author papers, nor that $A$ and $B$ have written a paper together without $C$. In this case, a hypergraph gives a better model than a simplicial complex. \newline \textbf{Structure of the paper.} In Section \ref{section:oriented} we introduce the basic definitions that are needed throughout the paper, while in Section \ref{section:twin} we introduce and discuss twin vertices. In Section \ref{section:bipartite} we prove new properties of bipartite oriented hypergraphs and we show that, from the spectral point of view, these are equivalent to classical hypergraphs with no input/output structure. Finally, in Section \ref{section:families} we investigate the spectra of new hypergraph structures that we introduce with the idea of generalizing well-known graph structures.
\section{Basic definitions}\label{section:oriented} \begin{definition}[\cite{ReffRusnak,Hypergraphs}] An \textbf{oriented hypergraph} is a pair $\Gamma=(\mathcal{V},\mathcal{H})$ such that $\mathcal{V}$ is a finite set of vertices and $\mathcal{H}$ is a set such that every element $h$ in $\mathcal{H}$ is a pair $(h_{in},h_{out})$ (input and output) of disjoint elements of $\mathcal{P}(\mathcal{V})\setminus\{\emptyset\}$. The elements of $\mathcal{H}$ are called the \textbf{oriented hyperedges}. Changing the orientation of a hyperedge $h$ means exchanging its input and output, leading to the pair $(h_{out},h_{in})$. \end{definition} \begin{definition} Given $h\in \mathcal{H}$, we say that two vertices $i$ and $j$ are \textbf{co-oriented} in $h$ if they belong to the same orientation sets of $h$; we say that they are \textbf{anti-oriented} in $h$ if they belong to different orientation sets of $h$. \end{definition} From now on, we fix an oriented hypergraph $\Gamma=(\mathcal{V},\mathcal{H})$ on $N$ vertices $v_1,\ldots,v_N$ and $M$ hyperedges $h_1,\ldots, h_M$. For simplicity, we assume that $\Gamma$ has no isolated vertices. \begin{remark}\label{remark:graphs1} Simple graphs can be seen as oriented hypergraphs such that $\# h_{in}=\# h_{out}=1$ for each $h\in \mathcal{H}$, that is, each edge has exactly one input and one output. \end{remark} \begin{definition} The \textbf{underlying hypergraph} of $\Gamma$ is $\Gamma':=(\mathcal{V},\mathcal{H}')$ where \begin{equation*} \mathcal{H}':=\{(h_{in}\cup h_{out},\emptyset):h=(h_{in},h_{out})\in \mathcal{H}\}. \end{equation*} \end{definition} \begin{definition}[\cite{Sharp}] The \textbf{degree} of a vertex $v$ is \begin{equation*} \deg(v):=\#\text{ hyperedges containing $v$.} \end{equation*} Similarly, the \textbf{cardinality} of a hyperedge $h$ is \begin{equation*} \# h:=\#(h_{in}\cup h_{out}).
\end{equation*} \end{definition} \begin{definition}[\cite{Hypergraphs,MulasZhang}] The \textbf{normalized Laplace operator} associated to $\Gamma$ is the $N\times N$ matrix \begin{equation*} L:=\id -D^{-1}A, \end{equation*}where $\id$ is the $N\times N$ identity matrix, $D$ is the \textbf{diagonal degree matrix} and $A$ is the \textbf{adjacency matrix} defined by $A_{ii}:=0$ for each $i=1,\ldots,N$ and \begin{align*} A_{ij}:=& \# \{\text{hyperedges in which }v_i \text{ and }v_j\text{ are anti-oriented}\}+\\ &-\# \{\text{hyperedges in which }v_i \text{ and }v_j\text{ are co-oriented}\} \end{align*}for $i\neq j$. \end{definition} We define the \textbf{spectrum of $\Gamma$} as the spectrum of $L$. As shown in \cite{Hypergraphs,MulasZhang}, this spectrum is given by $N$ real, nonnegative eigenvalues whose sum is $N$. We denote them by \begin{equation*} \lambda_1\leq\ldots\leq\lambda_N. \end{equation*} \begin{definition} We say that two vertices $v_i$ and $v_j$ are \textbf{adjacent}, denoted $v_i\sim v_j$, if they are contained in at least one common hyperedge. \end{definition} \begin{remark}\label{remark:graphs2} Consider a graph $\Gamma$ and let $\Gamma'$ be its underlying hypergraph. Then, the adjacency matrix $A$ of $\Gamma$ and the adjacency matrix $A'$ of $\Gamma'$ are such that $A'=-A$, while the degree matrices of $\Gamma$ and $\Gamma'$ coincide. Therefore, the normalized Laplacians of $\Gamma$ and $\Gamma'$ are \begin{equation*} L=\id -D^{-1}A \qquad \text{ and }\qquad L'=\id +D^{-1}A=2\cdot \id -L, \end{equation*}respectively. Hence, $\lambda$ is an eigenvalue for $L$ if and only if $2-\lambda$ is an eigenvalue for $L'$. \end{remark} \begin{definition} Let $\Gamma$ be an oriented hypergraph and let $\Gamma'$ be its underlying hypergraph. The \textbf{signless normalized Laplacian} of $\Gamma$ is the normalized Laplacian of $\Gamma'$.
\end{definition} \section{Twin vertices}\label{section:twin} \begin{definition}[\cite{MulasZhang}] Two vertices $v_i$ and $v_j$ are \textbf{duplicate} if $A_{ik}=A_{jk}$ for all $k$. In particular, $A_{ij}=A_{ji}=A_{ii}=0$. \end{definition} In \cite{MulasZhang} it is shown that $\hat{n}$ \textsl{duplicate vertices} produce the eigenvalue $1$ with multiplicity at least $\hat{n}-1$. Similarly, in this section we discuss \textsl{twin vertices}. \begin{definition}\label{def:twins} We say that two vertices $v_i$ and $v_j$ are \textbf{twins} if they belong to exactly the same hyperedges, with the same orientations. In particular, $A_{ij}=-\deg (v_i)=-\deg (v_j)$ and $A_{ik}=A_{jk}$ for all $k\neq i,j$. \end{definition} \begin{remark} While duplicate vertices also appear in graphs, twin vertices cannot exist for graphs, since in that case each edge has exactly one input and one output. \end{remark} We now generalize the notions of duplicate vertices and twin vertices by defining \textsl{duplicate families of twin vertices}. \begin{definition} Let $\Gamma=(\mathcal{V},\mathcal{H})$ be an oriented hypergraph. We say that a family of vertices $\mathcal{V}_1\sqcup\ldots\sqcup \mathcal{V}_l\subset \mathcal{V}$ is an \textbf{$l$-duplicate family of $t$-twin vertices} if \begin{itemize} \item For each $i\in \{1,\ldots,l\}$, $\#\mathcal{V}_i=t$ and the $t$ vertices in $\mathcal{V}_i$ are twins to each other; \item For each $i,j\in \{1,\ldots,l\}$ with $i\neq j$, for each $v_i\in \mathcal{V}_i$ and for each $v_j\in \mathcal{V}_j$, we have that $A_{ij}=0$ and $A_{ik}=A_{jk}$ for all vertices $v_k$ that are not in the $l$-family, i.e.\ $v_k\in \mathcal{V}\setminus \mathcal{V}_1\sqcup\ldots\sqcup \mathcal{V}_l$.
\end{itemize} \end{definition} \begin{proposition}\label{prop:lt} If $\Gamma$ contains an $l$-duplicate family of $t$-twin vertices, then: \begin{itemize} \item $t$ is an eigenvalue with multiplicity at least $l-1$; \item $0$ is an eigenvalue with multiplicity at least $l(t-1)$. \end{itemize} \end{proposition} \begin{proof}In order to show that $t$ is an eigenvalue with multiplicity at least $l-1$, consider the following $l-1$ functions. For $i=2,\ldots,l$, let $f_i:\mathcal{V}\rightarrow \mathbb{R}$ be such that $f_i:=1$ on $\mathcal{V}_1$, $f_i:=-1$ on $\mathcal{V}_i$ and $f_i:=0$ otherwise. Then, \begin{itemize} \item For each $v_1\in \mathcal{V}_1$, \begin{equation*} Lf_i(v_1)=1-\frac{1}{\deg v_1}\sum_{v_1\neq v_j\in \mathcal{V}_1}-\deg v_1=1+t-1=t\cdot f_i(v_1); \end{equation*} \item For each $v_i\in \mathcal{V}_i$, \begin{equation*} Lf_i(v_i)=-1-\frac{1}{\deg v_i}\sum_{v_i\neq v_j\in \mathcal{V}_i}\deg v_i=-1-(t-1)=t\cdot f_i(v_i); \end{equation*} \item For each $v_k\in \mathcal{V}\setminus \mathcal{V}_1\sqcup\ldots\sqcup \mathcal{V}_l$, \begin{equation*} Lf_i(v_k)=-\frac{1}{\deg v_k}\left(\sum_{v_1\in \mathcal{V}_1}A_{1k}-\sum_{v_i\in \mathcal{V}_i}A_{ik} \right)=0=t\cdot f_i(v_k). \end{equation*} \end{itemize} Therefore, $f_i$ is an eigenfunction for $t$. Furthermore, the functions $f_2,\ldots, f_l$ are linearly independent. Therefore, $t$ is an eigenvalue with multiplicity at least $l-1$. Similarly, in order to prove that $0$ is an eigenvalue with multiplicity at least $l(t-1)$, let $\mathcal{V}_i=\{v^i_1,\ldots,v^i_t\}$ and consider the $l(t-1)$ functions $g_j^i:\mathcal{V}\rightarrow\mathbb{R}$ defined as follows, for $i=1,\ldots,l$ and $j=2,\ldots,t$. Let $g_j^i(v_1^i):=1$, $g_j^i(v_j^i):=-1$ and $g_j^i:=0$ otherwise. Then, by \cite[Equation (5)]{Hypergraphs}, it is clear that each $g_j^i$ is an eigenfunction for $0$. Since, furthermore, these are $l(t-1)$ linearly independent functions, $0$ has multiplicity at least $l(t-1)$.
\end{proof} \begin{proposition}\label{prop:twins} If $\Gamma$ has $\hat{n}$ vertices that are twins to each other, $0$ is an eigenvalue with multiplicity at least $\hat{n}-1$. Furthermore, if $v_i$ and $v_j$ are twin vertices and $f$ is an eigenfunction for $L$ with eigenvalue $\lambda\neq 0$, then $f(v_i)=f(v_j)$. \end{proposition} \begin{proof} The first claim follows from Proposition \ref{prop:lt}, by taking $l=1$ and $t=\hat{n}$.\newline Now, assume that $v_i$ and $v_j$ are twin vertices and let $f$ be an eigenfunction for $L$ with eigenvalue $\lambda\neq 0$. Then, since $\deg v_i=\deg v_j$ and $A_{ik}=A_{jk}$ for all $k\neq i,j$, \begin{equation*} \lambda f(v_i)= Lf(v_i)=f(v_i)+f(v_j)-\frac{1}{\deg v_i}\left(\sum_{k\neq i,j}A_{ik}f(v_k)\right)=Lf(v_j)=\lambda f(v_j). \end{equation*}Since $\lambda\neq 0$, this implies that $f(v_i)=f(v_j)$. \end{proof} \section{Bipartite hypergraphs}\label{section:bipartite} \begin{definition}[\cite{Hypergraphs}]\label{def:bipartite} We say that a hypergraph $\Gamma$ is \textbf{bipartite} if one can decompose the vertex set as a disjoint union $\mathcal{V}=\mathcal{V}_1\sqcup \mathcal{V}_2$ such that, for every hyperedge $h$ of $\Gamma$, either $h$ has all its inputs in $\mathcal{V}_1$ and all its outputs in $\mathcal{V}_2$, or vice versa (Figure \ref{fig:bipartiteh}). \end{definition} \begin{figure}[t!]
\begin{center} \begin{tikzpicture} \node (v3) at (1,0) {}; \node (v2) at (1,1) {}; \node (v1) at (1,2) {}; \node (v6) at (5,0) {}; \node (v5) at (5,1) {}; \node (v4) at (5,2) {}; \begin{scope}[fill opacity=0.5] \filldraw[fill=gray!70] ($(v1)+(0,0.5)$) to[out=180,in=180] ($(v2) + (0,-0.5)$) to[out=0,in=180] ($(v5) + (0,-0.5)$) to[out=0,in=0] ($(v4) + (0,0.5)$) to[out=180,in=0] ($(v1)+(0,0.5)$); \filldraw[fill=white!70] ($(v2)+(0,0.5)$) to[out=180,in=180] ($(v3) + (0,-0.5)$) to[out=0,in=180] ($(v6) + (0,-0.5)$) to[out=0,in=0] ($(v5) + (0,0.5)$) to[out=180,in=0] ($(v2)+(0,0.5)$); \end{scope} \fill (v1) circle (0.05) node [right] {$v_1$} node [above] {\color{black}$+$}; \fill (v2) circle (0.05) node [right] {$v_2$} node [above] {\color{black}$+$} node [below] {\color{gray}$-$}; \fill (v3) circle (0.05) node [right] {$v_3$} node [below] {\color{gray}$-$}; \fill (v4) circle (0.05) node [left] {$v_4$} node [above] {\color{black}$-$}; \fill (v5) circle (0.05) node [left] {$v_5$} node [above] {\color{black}$-$}node [below] {\color{gray}$+$}; \fill (v6) circle (0.05) node [left] {$v_6$} node [below] {\color{gray}$+$}; \node at (0,2) {\color{black}$h_1$}; \node at (0,0) {\color{gray}$h_2$}; \end{tikzpicture} \end{center} \caption{A bipartite hypergraph with $\mathcal{V}_1=\{v_1,v_2,v_3\}$ and $\mathcal{V}_2=\{v_4,v_5,v_6\}$.}\label{fig:bipartiteh} \end{figure} We now give the definition of \textsl{vertex-bipartite} hypergraph that, as we shall see in Lemma \ref{lemma:vertexbipartite} below, coincides with the definition of bipartite hypergraph. \begin{definition} We say that a hypergraph $\Gamma$ is \textbf{vertex-bipartite} if one can decompose the hyperedge set as a disjoint union $\mathcal{H}=\mathcal{H}_1\sqcup \mathcal{H}_2$ such that, for every vertex $v$ of $\Gamma$, either $v$ is an input only for hyperedges in $\mathcal{H}_1$ and it is an output only for hyperedges in $\mathcal{H}_2$, or vice versa. 
\end{definition} \begin{lemma}\label{lemma:vertexbipartite} Up to changing the orientation of some hyperedges, a hypergraph is bipartite if and only if it is vertex-bipartite. \end{lemma} \begin{proof}Assume that $\Gamma$ is bipartite. Up to changing the orientation of some hyperedges, we can assume that the vertex set has a decomposition $\mathcal{V}=\mathcal{V}_1\sqcup \mathcal{V}_2$ such that each hyperedge $h$ has all its inputs in $\mathcal{V}_1$ and all its outputs in $\mathcal{V}_2$. Therefore, every vertex in $\mathcal{V}_1$ is an input only for hyperedges in $\mathcal{H}$, and every vertex in $\mathcal{V}_2$ is an output only for hyperedges in $\mathcal{H}$. It follows that the decomposition of the hyperedge set as $\mathcal{H}=\mathcal{H}\sqcup \emptyset$ gives a vertex-bipartition.\newline Now, assume that $\Gamma$ is vertex-bipartite, with $\mathcal{H}=\mathcal{H}_1\sqcup \mathcal{H}_2$. Assume, by contradiction, that $\Gamma$ is not bipartite. Then, up to changing the orientation of some hyperedges, there exist two vertices $v,w\in \mathcal{V}$ and two hyperedges $h_1,h_2\in \mathcal{H}$ such that: \begin{enumerate} \item\label{item1} $h_1$ has both $v$ and $w$ as inputs; \item\label{item2} $h_2$ has $v$ as input and $w$ as output. \end{enumerate}The fact that $v$ is an input in both $h_1$ and $h_2$ implies that $h_1$ and $h_2$ are in the same $\mathcal{H}_i$. On the other hand, the fact that $w$ is an input for $h_1$ and an output for $h_2$ implies that $h_1$ and $h_2$ do not belong to the same $\mathcal{H}_i$. This leads to a contradiction. Therefore, $\Gamma$ is bipartite. \end{proof} \begin{proposition}\label{prop_bipartite} If $\Gamma$ is bipartite, it is isospectral to its underlying hypergraph and therefore, in particular, also to every other bipartite hypergraph that has the same underlying hypergraph as $\Gamma$.
\end{proposition} \begin{proof}Since $\Gamma$ is bipartite, without loss of generality (up to switching the orientations of some hyperedges) we can assume that all the inputs are in $\mathcal{V}_1$ and all the outputs are in $\mathcal{V}_2$, with $\mathcal{V}=\mathcal{V}_1\sqcup \mathcal{V}_2$. Furthermore, by Lemma 49 in \cite{Hypergraphs}, we can move a vertex from $\mathcal{V}_1$ to $\mathcal{V}_2$ or vice versa, by letting it be always an output or always an input, without affecting the spectrum. In particular, if we move all vertices to $\mathcal{V}_1$, we obtain the underlying hypergraph of $\Gamma$. \end{proof} \begin{remark} As a consequence of Proposition \ref{prop_bipartite}, when studying the spectrum of the normalized Laplacian we can always assume, without loss of generality, that a bipartite hypergraph $\Gamma$ has only inputs. In this case, \begin{itemize} \item $A_{ij}=-\# \{h\in \mathcal{H}:v_i,v_j\in h\}$ for each $i\neq j$; \item $\sum_jA_{ij}=-\sum_{h\ni v_i}(\# h-1)$, for each $v_i\in \mathcal{V}$. \end{itemize} \end{remark} From here on we work on a hypergraph $\Gamma=(\mathcal{V},\mathcal{H})$ that has only inputs. Therefore, we focus on the signless normalized Laplacian of classical hypergraphs. \section{Families of hypergraphs}\label{section:families} \subsection{Hyperflowers}\label{section:flowers} We now introduce and study \textsl{hyperflowers}: hypergraphs in which there is a set of nodes, the \textsl{core}, which is well connected to the other vertices, and a set of \textsl{peripheral nodes}, each of which is contained in exactly one hyperedge. Hyperflowers are therefore a generalization of star graphs \cite{ANDREOTTI2018206}.
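For readers who wish to experiment with these objects, the signless normalized Laplacian is easy to compute numerically. The following sketch (the toy hypergraph and the helper name are ours, chosen purely for illustration) builds $L=\id-D^{-1}A$ for a hypergraph with only inputs and checks the facts recalled above: the eigenvalues are real, nonnegative and sum to $N$.

```python
import numpy as np
from itertools import combinations

def signless_normalized_laplacian(n, hyperedges):
    # For a hypergraph with only inputs: A_ij = -(number of hyperedges
    # containing both v_i and v_j), deg(v) = number of hyperedges
    # containing v, and L = I - D^{-1} A.
    A = np.zeros((n, n))
    deg = np.zeros(n)
    for h in hyperedges:
        for v in h:
            deg[v] += 1
        for i, j in combinations(h, 2):
            A[i, j] -= 1
            A[j, i] -= 1
    return np.eye(n) - np.diag(1.0 / deg) @ A

# A toy hypergraph on 4 vertices with three hyperedges.
H = [(0, 1, 2), (2, 3), (0, 3)]
L = signless_normalized_laplacian(4, H)
eigs = np.linalg.eigvals(L)
print(np.sort(eigs.real))  # real, nonnegative, summing to N = 4
```

Although $L$ itself is not symmetric, it is similar to $D^{-1/2}(D-A)D^{-1/2}$, and $D-A$ is a sum of rank-one positive semidefinite blocks (one per hyperedge), so the computed eigenvalues are indeed real and nonnegative.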
\begin{definition}\label{def:hyperflowers} An \textbf{$(l,r)$-hyperflower with $t$ twins} (Figure \ref{fig:hyperflowertwins}) is a hypergraph $\Gamma=(\mathcal{V},\mathcal{H})$ whose vertex set can be written as $ \mathcal{V}=U\sqcup \mathcal{W}$, where: \begin{itemize} \item $U$ is a set of $t\cdot l$ nodes $v_{11},\ldots,v_{1l},\ldots,v_{t1},\ldots,v_{tl}$ which are called \textbf{peripheral}; \item There exist $r$ disjoint sets of vertices $h_1,\ldots,h_r \in \mathcal{P}(\mathcal{W})\setminus\{\emptyset\}$ such that $$\mathcal{H}=\Big\{h:h=h_i\cup\bigcup_{z=1}^{t} \{v_{zj}\} \mbox{ for }i=1,\ldots,r \mbox{ and } j=1,\ldots,l \Big\}.$$ \end{itemize}If $t=1$, we simply say that $\Gamma$ is an \textbf{$(l,r)$-hyperflower}.\newline If $r=1$, we simply say that $\Gamma$ is an \textbf{$l$-hyperflower with $t$ twins}. \end{definition} \begin{figure}[t!] \begin{center} \includegraphics[width=6cm]{Hyperflower.jpg} \caption{A $5$-hyperflower with $3$ twins.} \label{fig:hyperflowertwins} \end{center} \end{figure} \begin{remark} The $(l,r)$-hyperflowers in Definition \ref{def:hyperflowers} are a particular case of the hyperstars in \cite{andreotti2020eigenvalues}, which also include weights and non-disjoint sets $h_1,\ldots,h_r$. Here we choose to study the particular structure of $(l,r)$-hyperflowers (and their generalizations with twins) because the strong symmetries of these structures allow for a deeper study of the spectrum. \end{remark} \begin{proposition}\label{prop:l2hyperflower} The spectrum of the $(l,2)$-hyperflower on $N$ nodes is given by: \begin{itemize} \item $0$, with multiplicity $N-l-1$; \item $1$, with multiplicity $\geq l-1$; \item $\lambda_N>1$; \item $\lambda_{N-1}=N-\lambda_N-l+1\geq 1$. \end{itemize}In the particular case in which $\#h$ is constant for each $h\in \mathcal{H}$, $\lambda_N=\frac{N-l}{2}+1$ and $\lambda_{N-1}=\frac{N-l}{2}$. \end{proposition} \begin{proof} By \cite[Corollary 3.5]{MulasZhang}, $1$ is an eigenvalue with multiplicity at least $l-1$.
Now, the $N-l$ vertices $v_{l+1},\ldots,v_N$ form two classes of twin vertices that generate the eigenvalue $0$ with multiplicity at least $N-l-2$. In particular, there exist $N-l-2$ linearly independent corresponding eigenfunctions $f_i:\mathcal{V}\rightarrow\mathbb{R}$ such that $f_i(v)=1$ for some $v\notin\{v_1,\ldots,v_l\}$, $f_i(w)=-1$ for a given $w$ twin of $v$, and $f_i=0$ otherwise. If we let $g(v_j):=1$ for each $j=1,\ldots,l$, $g(v'_1):=-1$ for exactly one $v'_1\in h_1$ and $g(v'_2):=-1$ for exactly one $v'_2\in h_2$, it is easy to see that $g$ is also an eigenfunction for $0$. Furthermore, the $f_i$'s and $g$ are all linearly independent, which implies that $0$ has multiplicity at least $N-l-1$.\newline Now, by \cite[Theorem 3.1]{Sharp}, $\lambda_N\geq \frac{\sum_{h\in \mathcal{H}}\# h}{|\mathcal{H}|}>1$. We have therefore already listed $N-1$ eigenvalues and there is only one eigenvalue $\lambda$ missing. Since $\sum_{i=1}^N\lambda_i=N$, we have that $\lambda=N-\lambda_N-l+1$. In particular, since by \cite[Theorem 3.1]{Sharp} $\lambda_N\leq\max_{h\in \mathcal{H}}\#h$ with equality if and only if $\#h$ is constant, and $\max_{h\in \mathcal{H}}\#h\leq N-l$, we have that \begin{equation*} \lambda=N-\lambda_N-l+1\geq 1, \end{equation*}with equality if and only if $\#h$ is constant and equal to $N-l$, that is, if and only if $\#h_1=\#h_2=1$. Hence, $\lambda=\lambda_{N-1}$ and we have that $\lambda_{N-1}=1$ if and only if $\#h_1=\#h_2=1$.\newline In general, if $\#h$ is constant for each $h\in \mathcal{H}$, then by \cite[Theorem 3.1]{Sharp} $\lambda_N=\#h=\frac{N-l}{2}+1$ and therefore $\lambda_{N-1}=\frac{N-l}{2}$. \end{proof} \begin{proposition}\label{prop:r1} Let $\Gamma$ be an $(l,r)$--hyperflower with peripheral vertices $v_1,\ldots,v_l$.
Let $\hat{\Gamma}:=(\hat{\mathcal{V}},\hat{\mathcal{H}})$ be the $(1,r)$--hyperflower defined by \begin{equation*} \hat{\mathcal{V}}:=\mathcal{V}\setminus\{v_2,\ldots,v_l\} \qquad\text{and}\qquad \hat{\mathcal{H}}:=\{h\in \mathcal{H}: v_2,\ldots,v_l\notin h\}. \end{equation*}Then, the spectrum of $\Gamma$ is given by: \begin{itemize} \item The $N-l+1$ eigenvalues of $\hat{\Gamma}$, with multiplicity; \item $1$, with multiplicity at least $l-1$. \end{itemize} \end{proposition} \begin{proof} By \cite[Corollary 3.5]{MulasZhang}, adding $v_2,\ldots,v_l$ to $\hat{\Gamma}$ produces the eigenvalue $1$ with multiplicity $l-1$. Therefore, it remains to show that, if $\lambda$ is an eigenvalue of $\hat{\Gamma}$, then $\lambda$ is also an eigenvalue of $\Gamma$. Let $L$ and $A$ be the Laplacian and the adjacency matrix of $\Gamma$, respectively, and let $\hat{L}$ and $\hat{A}$ be the Laplacian and the adjacency matrix of $\hat{\Gamma}$, respectively. Let also $\hat{f}$ be an eigenfunction of $\hat{\Gamma}$ corresponding to the eigenvalue $\lambda$. Then, \begin{equation*} \hat{L}\hat{f}(v_k)=\hat{f}(v_k)-\frac{1}{\deg_{\hat{\Gamma}}v_k}\sum_{v_i\in \hat{\mathcal{V}}\setminus \{v_k\}}\hat{A}_{ik}\hat{f}(v_i)=\lambda\cdot \hat{f}(v_k), \qquad\text{for all }v_k\in \hat{\mathcal{V}}. \end{equation*}Now, let $f:\mathcal{V}\rightarrow \mathbb{R}$ be such that $f:=\hat{f}$ on $\hat{\mathcal{V}}$ and $f(v_2):=\ldots:=f(v_l):=\hat{f}(v_1)$. Then, \begin{align*} Lf(v_1)&=f(v_1)-\frac{1}{\deg v_1}\sum_{v_i\in \hat{\mathcal{V}}\setminus \{v_1\}}A_{i1}f(v_i)=\hat{f}(v_1)-\frac{1}{\deg_{\hat{\Gamma}} v_1}\sum_{v_i\in \hat{\mathcal{V}}\setminus \{v_1\}}\hat{A}_{i1}\hat{f}(v_i)\\&=\hat{L}\hat{f}(v_1) =\lambda\cdot \hat{f}(v_1) =\lambda\cdot f(v_1).
\end{align*} Similarly, for $j\in \{2,\ldots,l\}$, \begin{equation*} Lf(v_j)=f(v_j)-\frac{1}{\deg v_j}\sum_{v_i\in \hat{\mathcal{V}}\setminus \{v_1\}}A_{ij}f(v_i)=\hat{f}(v_1)-\frac{1}{\deg_{\hat{\Gamma}} v_1}\sum_{v_i\in \hat{\mathcal{V}}\setminus \{v_1\}}\hat{A}_{i1}\hat{f}(v_i)=\lambda\cdot \hat{f}(v_1)=\lambda\cdot f(v_j). \end{equation*}Furthermore, for each $v_k\in \mathcal{V}\setminus\{v_1,\ldots,v_l\}$, we have that \begin{itemize} \item $\deg_{\hat{\Gamma}}(v_k)=1$ while $\deg(v_k)=l$; \item For each $v_{k'}\in \mathcal{V}\setminus\{v_1,\ldots,v_l,v_k\}$ such that $\hat{A}_{kk'}\neq 0$, $\hat{A}_{kk'}=-1$ while $A_{kk'}=-l$; \item $\hat{A}_{k1}=A_{k1}=-1$, and $A_{kj}=-1$ for each $j\in \{2,\ldots, l\}$. \end{itemize}Therefore, for each $v_k\in \mathcal{V}\setminus\{v_1,\ldots,v_l\}$, \begin{align*} Lf(v_k)&=f(v_k)-\frac{1}{\deg v_k}\left(\sum_{k'}A_{kk'}f(v_{k'})+\sum_{j=1}^lA_{kj}f(v_j)\right) \\&=\hat{f}(v_k)-\frac{1}{l}\left(\sum_{k'}(-l)\hat{f}(v_{k'})+(-1)\sum_{j=1}^l\hat{f}(v_1)\right) \\&= \hat{f}(v_k)+\sum_{k'}\hat{f}(v_{k'})+\hat{f}(v_1) \\&=\hat{L}\hat{f}(v_k)=\lambda\cdot \hat{f}(v_k)=\lambda\cdot f(v_k). \end{align*}This proves that $\lambda$ is an eigenvalue for $L$, and $f$ is a corresponding eigenfunction. \end{proof} \begin{remark} Proposition \ref{prop:r1} tells us that, in order to know the spectrum of an $(l,r)$--hyperflower, we can study the spectrum of the $(1,r)$--hyperflower obtained by deleting $l-1$ peripheral vertices and the hyperedges containing them, and then add the eigenvalue $1$ with multiplicity $l-1$ to the spectrum. \end{remark} \begin{proposition}The spectrum of the $l$-hyperflower with $t$ twins is given by: \begin{itemize} \item $0$, with multiplicity $N-l$; \item $t$, with multiplicity $l-1$; \item $\lambda_N=N-tl+t$.\end{itemize}\end{proposition} \begin{proof}Since all hyperedges have cardinality $N-tl+t$, by \cite[Theorem 3.1]{Sharp} we have that $\lambda_N=N-tl+t$.
Furthermore, by Proposition \ref{prop:lt}, $t$ is an eigenvalue with multiplicity at least $l-1$. Since, clearly, $N-tl+t>t$, we have listed $l$ eigenvalues whose sum is $N$. It follows that $0$ has multiplicity $N-l$. \end{proof} \subsection{Complete hypergraphs} \begin{definition}[\cite{MulasZhang}] We say that $\Gamma=(\mathcal{V},\mathcal{H})$ is the \textbf{$c$-complete hypergraph}, for some $c\geq 2$, if $\mathcal{V}$ has cardinality $N$ and $\mathcal{H}$ is given by all possible ${N \choose c}$ hyperedges of cardinality $c$. \end{definition} \begin{proposition}\label{prop:complete} The spectrum of the $c$-complete hypergraph is given by: \begin{itemize} \item $\frac{N-c}{N-1}$, with multiplicity $N-1$; \item $c$, with multiplicity $1$. \end{itemize} \end{proposition} \begin{proof}By \cite[Theorem 3.1]{Sharp}, $\lambda_N=c$. Now, observe that each vertex $v$ has degree $d:={N-1 \choose c-1}$, while $a:=A_{ij}=-{N-2 \choose c-2}$ is constant for all $i\neq j$. Therefore, $\frac{a}{d}=-\frac{c-1}{N-1}$ and \begin{equation*} Lf(v)=f(v)-\frac{a}{d}\left(\sum_{w\neq v}f(w)\right)=f(v)+\frac{c-1}{N-1}\left(\sum_{w\neq v}f(w)\right), \qquad \forall v\in \mathcal{V}. \end{equation*}Now, for each $i=2,\ldots,N$, let $f_i(v_1):=1$, $f_i(v_i):=-1$ and $f_i:=0$ otherwise. Then, \begin{itemize} \item $Lf_i(v_1)=1-\frac{c-1}{N-1}=\frac{N-c}{N-1}\cdot f_i(v_1)$, \item $Lf_i(v_i)=-1+\frac{c-1}{N-1}=\frac{N-c}{N-1}\cdot f_i(v_i)$, and \item $Lf_i(v_j)=0=\frac{N-c}{N-1}\cdot f_i(v_j)$ for all $j\neq 1,i$. \end{itemize}Therefore, the $f_i$'s are $N-1$ linearly independent eigenfunctions for $\frac{N-c}{N-1}$. This proves the claim. \end{proof} \begin{ex} Proposition \ref{prop:complete} tells us that the signless spectrum of the complete graph on $N$ nodes is given by $\frac{N-2}{N-1}$, with multiplicity $N-1$, and $2$ with multiplicity $1$.
By Remark \ref{remark:graphs2}, this is equivalent to saying that the spectrum of the complete graph is given by $\frac{N}{N-1}$, with multiplicity $N-1$, and $0$ with multiplicity $1$. This is a well-known result (see \cite{Chung}) and Proposition \ref{prop:complete} generalizes it. \end{ex} \subsection{Lattice Hypergraphs} \textsl{Lattice graphs}, also called \textsl{grid graphs}, are well known both in graph theory and in applications \cite{Asllani2015, Andreotti2015ModelingTF,Lattice1,Lattice2,Lattice3,Lattice4,Lattice5,Lattice6}. For instance, they model topologies used in transportation networks, such as the Manhattan street network, and crystal structures used in crystallography. These structures and their spectra are also widely used in statistical mechanics, in the study of ASEP, TASEP and SSEP models \cite{Mallick_2011, Speer1994, Schtz2017FluctuationsIS}, which have applications to the Ising model and (lattice) gas models, and which also describe the movement of ribosomes along the mRNA \cite{Gritsenko2015UnbiasedQM}. In this section we generalize the notion of lattice graph to the case of hypergraphs. \begin{definition} Given $l\in \mathbb{N}_{\geq 2}$, we define the \textbf{$l$-lattice} as the hypergraph $\Gamma=(\mathcal{V},\mathcal{H})$ on $l^2$ nodes and $2l$ hyperedges that can be drawn so that: \begin{itemize} \item The vertices form an $l\times l$ grid, and \item The hyperedges are exactly the rows and the columns of the grid (Figure \ref{fig:lattice}). \end{itemize} \end{definition} \begin{figure}[t!] \begin{center} \includegraphics[width=4cm]{Reticolo.png} \caption{A $3$-lattice.} \label{fig:lattice} \end{center} \end{figure} \begin{proposition} The spectrum of the $l$-lattice is given by: \begin{itemize} \item $0$, with multiplicity $l^2-2l+1$; \item $\frac{l}{2}$, with multiplicity $2(l-1)$; \item $l$, with multiplicity $1$. \end{itemize} \end{proposition} \begin{proof}By \cite[Theorem 3.1]{Sharp}, $\lambda_{l^2}=l$.
Furthermore, by \cite[Corollary 33]{Hypergraphs}, since the maximum number of \textsl{linearly independent hyperedges} is $2l-1$, $0$ is an eigenvalue with multiplicity $l^2-2l+1$.\newline Now, observe that $\deg v=2$ for each $v$ and \begin{equation*} A_{ij}=\begin{cases}-1 &\text{if } v_i\sim v_j\\ 0 &\text{otherwise,}\end{cases} \end{equation*}for all $i\neq j$. Therefore, \begin{equation}\label{eq:Llattice} Lf(v)=f(v)+\frac{1}{2}\left(\sum_{w\sim v}f(w)\right), \qquad \text{for all }v\in \mathcal{V}. \end{equation}Fix a row of the $l$-lattice given by the vertices $w_1,\ldots,w_l$. For $i=1,\ldots,l-1$, let $f_i:\mathcal{V}\rightarrow\mathbb{R}$ be $1$ on the neighbors of $w_i$ with respect to its row, $-1$ on the neighbors of $w_i$ with respect to its column, and $0$ otherwise. Then, by \eqref{eq:Llattice}, it is easy to check that $f_i$ is an eigenfunction for $\frac{l}{2}$. The analogous construction on a fixed column yields $l-1$ further eigenfunctions for $\frac{l}{2}$. Since these $2(l-1)$ functions are linearly independent, this proves the claim. \end{proof} \subsection{Hypercycles} \begin{definition} Fix $N$ and $l\in\{2,\ldots,\frac{N}{2}\}$. We say that $\Gamma=(\mathcal{V},\mathcal{H})$ is the \textbf{$l$-hypercycle} on $N$ nodes (Figure \ref{fig:cycle}) if $\mathcal{V}=\{v_1,\ldots,v_N\}$, $\mathcal{H}=\{h_1,\ldots,h_N\}$ and \begin{equation*} h_i=\{v_i,\ldots,v_{i+l-1}\}, \end{equation*}where we let $v_{N+i}:=v_{i}$ for each $i=1,\ldots, N$. \end{definition} \begin{theorem} The eigenvalues of the $l$-hypercycle are \begin{equation*} \lambda_i=1+\frac{\sum_{r=1}^Nm(r)\cdot \cos \left(\frac{2\pi ir}{N}\right)}{l},\qquad \text{for }i=1,\ldots,N, \end{equation*}where $m:\{0,\ldots,N\}\rightarrow\mathbb{Z}$ is such that: \begin{itemize} \item $m(r):=l-r$ for all $r\in \{1,\ldots,l-1\}$; \item $m(N-k):=m(k)=l-k$ for all $k\in \{1,\ldots,l-1\}$; \item $m:=0$ otherwise. \end{itemize} \end{theorem} \begin{proof}By construction, all vertices have degree $l$.
Therefore, by \cite[Remark 2.17]{MulasZhang}, proving the claim is equivalent to proving that the eigenvalues of the adjacency matrix are \begin{equation*} \mu_i=-\sum_{r=1}^Nm(r)\cdot \cos \left(\frac{2\pi ir}{N}\right),\qquad \text{for }i=1,\ldots,N. \end{equation*}Observe that the adjacency matrix can be written as \begin{equation*}A=-\begin{bmatrix} 0 & l-1 & l-2 &\ldots & 1 & 0 & \ldots & 0 & 1 & \ldots & l-2 &l-1\\ l-1 & 0 & l-1 & l-2 & \ldots & 1 & 0 & \ldots & 0 & 1 & \ldots & l-2\\ l-2 & l-1 & 0 & \ddots & \ddots & \ddots & \ddots & \ddots & & \ddots & \ddots & \vdots\\ \vdots & \ddots& \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & & \ddots & 1\\ 1 & & & & & & & & & & & 0\\ 0 & 1 & & & & & & & & & \ddots & \vdots\\ \vdots & 0 & \ddots & & & & & & & & \ddots & 0\\ 0 & & \ddots& & & & & & & & & 1\\ 1 & 0 & & & & & & & & \ddots & \ddots& \vdots\\ \vdots & \ddots & \ddots& & & & & & \ddots & \ddots & l-1 & l-2\\ l-2 & & \ddots & \ddots & & \ddots & \ddots & & & l-1 & 0 & l-1\\ l-1 & l-2 & \ldots & \ldots & 0 & \ldots & 0 & 1 & \ldots & \ldots & l-1 & 0 \\ \end{bmatrix}\end{equation*} Therefore, \begin{equation*}A=-\begin{bmatrix} m(0) & m(N-1) & m(N-2) &\ldots & m(1)\\ m(1) & m(0) & m(N-1) &\ldots & m(2)\\ m(2) & m(1) & m(0) &\ldots & m(3)\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ m(N-1) & m(N-2) & m(N-3) & \ldots & m(0) \end{bmatrix}\end{equation*} where \begin{itemize} \item $m(r):=l-r$ for all $r\in \{1,\ldots,l-1\}$ \item $m(N-k):=m(k)=l-k$ for all $k\in \{1,\ldots,l-1\}$ \item $m:=0$ otherwise. \end{itemize} Hence, $A$ is a (symmetric) circulant matrix. By \cite{circulant}, the eigenvalues of $A$ are \begin{equation*} \mu_i=-\sum_{r=1}^Nm(r)\cdot \cos \left(\frac{2\pi ir}{N}\right),\qquad \text{for }i=1,\ldots,N. \end{equation*}This proves the claim. \end{proof} \begin{figure}[t!] \begin{center} \includegraphics[width=5cm]{Cycle2.jpg} \caption{The $3$-hypercycle on $6$ nodes.} \label{fig:cycle} \end{center} \end{figure}
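The closed formula above is easy to check numerically against a direct computation of the spectrum. The sketch below (function names are ours, for illustration only) builds the normalized Laplacian of the $3$-hypercycle on $6$ nodes shown in Figure \ref{fig:cycle} and compares its eigenvalues with the circulant formula:

```python
import numpy as np

def hypercycle_laplacian(N, l):
    # l-hypercycle: hyperedges h_i = {v_i, ..., v_{i+l-1}}, indices mod N.
    # A_ij = -(number of hyperedges containing both vertices); every
    # vertex has degree l, so L = I - A / l.
    A = np.zeros((N, N))
    for i in range(N):
        h = [(i + k) % N for k in range(l)]
        for a in range(l):
            for b in range(a + 1, l):
                A[h[a], h[b]] -= 1
                A[h[b], h[a]] -= 1
    return np.eye(N) - A / l

def formula_eigenvalues(N, l):
    # m(r) = l - r and m(N - r) = l - r for r in {1, ..., l-1}; m = 0 otherwise.
    m = np.zeros(N + 1)
    for r in range(1, l):
        m[r] = l - r
        m[N - r] = l - r
    return np.array([1 + sum(m[r] * np.cos(2 * np.pi * i * r / N)
                             for r in range(1, N + 1)) / l
                     for i in range(1, N + 1)])

N, l = 6, 3
spec = np.sort(np.linalg.eigvalsh(hypercycle_laplacian(N, l)))
pred = np.sort(formula_eigenvalues(N, l))
print(np.allclose(spec, pred))  # True
```

For $N=6$ and $l=3$ both computations give the spectrum $\{0,0,\tfrac{1}{3},\tfrac{4}{3},\tfrac{4}{3},3\}$; note that $\lambda_N=3=\#h$, as predicted by \cite[Theorem 3.1]{Sharp} for hyperedges of constant cardinality.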
\section{Grids} \begin{table}[ht] \centering \begin{tabular}[t]{cccccc} \hline \hline $E_{\text{trans}}$ [eV]&$v$&$j$&$E_{\text{trans}}'$ [eV]&$v'$&$j'$\\ \hline 0.0&0&0&0.1&0&0\\ 0.2&2&15&0.3&2&20\\
0.4&4&30&0.5&4&40\\ 0.6&6&45&1.0&6&60\\ 1.0&9&60&1.5&9&80\\ 1.5&12&90&2.0&12&100\\ 2.0&15&120&2.5&15&125\\ 2.5&18&150&3.0&18&150\\ 3.0&21&180&3.5&21&175\\ 3.5&24&210&4.5&24&200\\ 4.5&27&240&5.5&27&220\\ 5.5&30&-&6.5&30&240\\ 6.5&33&-&7.5&33&-\\ 7.5&36&-&8.5&36&-\\ 8.5&-&-&9.5&42&-\\ 9.5&-&-&10.5&47&-\\ 10.5&-&-&-&-&-\\ 11.5&-&-&-&-&-\\ \hline \hline \end{tabular} \caption{Grid points used in this work for sampling the reactant ($P(E_{\text{trans}}), P(v), P(j)$) and product state distributions ($ P(E_{\text{trans}}'), P(v'), P(j')$).} \label{sifig:grid_table} \end{table}% \newpage \section{Statistical Evaluation} \label{si:stat_eval} For statistical evaluation, NNs in the F- and G-based models were trained on 10 independent random splits of $N_{\rm tot}$ into $N_{\rm train}$, $N_{\rm valid}$ and $N_{\rm test}$ (on Set1 for the F-based models, and on Set1 as well as Set3 for the G-based models). For each of the 10 resulting F- and G-based models, ${\rm RMSD}_{\text{NN}}$, $R^2_{\text{NN}}$ and ${\rm RMSD}_{\text{QCT}}$, $R^2_{\text{QCT}}$ values were evaluated over the test set and subsequently the corresponding mean and standard deviation were calculated. These results are displayed in Table \ref{sifig:statistical_eval}. Taking the reported standard deviations as a reference, we can be confident that any performance difference between two models larger than $\sim 0.0001$ is not solely of statistical nature.
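This aggregation amounts to computing RMSD and $R^2$ once per split and then taking their mean and standard deviation. A minimal sketch (the random arrays are hypothetical stand-ins for NN predictions and QCT references over a test set; they are not the data of this work):

```python
import numpy as np

def rmsd(y_pred, y_ref):
    """Root-mean-square deviation between predicted and reference amplitudes."""
    return np.sqrt(np.mean((y_pred - y_ref) ** 2))

def r2(y_pred, y_ref):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_ref - y_pred) ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical stand-in for 10 independent random splits: per split, stacked
# product-state-distribution amplitudes on the corresponding test set
rng = np.random.default_rng(0)
y_ref = rng.random((10, 500))                             # "QCT" reference amplitudes
y_pred = y_ref + 0.001 * rng.standard_normal((10, 500))   # "NN" predictions

rmsds = np.array([rmsd(p, r) for p, r in zip(y_pred, y_ref)])
r2s = np.array([r2(p, r) for p, r in zip(y_pred, y_ref)])
mean_rmsd, std_rmsd = rmsds.mean(), rmsds.std(ddof=1)
mean_r2, std_r2 = r2s.mean(), r2s.std(ddof=1)
```

The means and standard deviations so obtained correspond, in structure, to the entries of Table \ref{sifig:statistical_eval}.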
In particular, we expect this to apply to approaches and data sets other than the ones reported in Table \ref{sifig:statistical_eval} and will therefore refer to this estimate throughout this work.\\ \begin{table}[ht] \centering \begin{tabular}[t]{l||c|c|c} \hline \hline DTD model&F-NN&G-NN&G-NN\\ \hline Training \& test&Set1&Set1&Set3\\ ${\rm RMSD}_{\rm NN}$&$0.00072\pm0.00005$&$0.00089\pm0.00002$&$0.00092\pm0.00003$\\ ${R^2}_{\rm NN}$&$0.99948\pm0.00011$&$0.99930\pm0.00005$&$0.99905\pm0.00010$\\ ${\rm RMSD}_{\rm QCT}$&$0.00142\pm0.00004$&$0.00107\pm0.00003$&$0.00106\pm0.00004$\\ ${R^2}_{\rm QCT}$&$0.99816\pm0.00014$&$0.99901\pm0.00005$&$0.99882\pm0.00011$\\ \hline \hline \end{tabular} \caption{Performance measures (${\rm RMSD}_{\rm NN}$, ${R^2}_{\rm NN}$, ${\rm RMSD}_{\rm QCT}$ and ${R^2}_{\rm QCT}$) for statistical evaluation of F- (F-NN) and G-based models (G-NN) trained and tested on Set 1/3.} \label{sifig:statistical_eval} \end{table}% \section{Statistical Measures} The RMSD$_{\rm QCT}$ and $R^2_{\rm QCT}$ values of all G-based models and variants, applied to the different data sets are summarized in Table \ref{sitab:perfomance_measures}.\\ \begin{table}[h!] 
\centering \begin{tabular}[t]{l|cccc} \hline \hline DTD model&Training&Test&${\rm RMSD}_{\rm QCT}$&${R^2}_{\rm QCT}$\\ \hline G-NN&Set1&Set1&0.0010&0.9991\\ \hline G-NN&Set1&Set2&0.0016&0.9974\\ G-NN&Set2&Set2&0.0011&0.9990\\ G$'_{1}$-NN&Set2&Set2&0.0022&0.9878\\ G$'_{2}$-NN&Set2&Set2&0.0011&0.9990\\ G($\bm{T}$)-NN&Set2&Set2&0.0011&0.9990\\ G($\bm{\mu}$)-NN&Set2&Set2&0.0011&0.9990\\ \hline G-NN&Set3&Set3&0.0011&0.9989\\ G$'_{2}$-NN&Set3&Set3&0.0013&0.9984\\ G$'_{3}$-NN&Set3&Set3&0.0011&0.9988\\ G($\bm{w},\bm{T}$)-NN&Set3&Set3&0.0011&0.9988\\ G($\bm{\mu}$)-NN&Set3&Set3&0.0020&0.9959\\ G($\bm{\mu},\bm{\sigma}$)-NN&Set3&Set3&0.0011&0.9987\\ \hline G-NN&Set3&Set2&0.0012&0.9988\\ G$'_{3}$-NN&Set3&Set2&0.0012&0.9987\\ G($\bm{w},\bm{T}$)-NN&Set3&Set2&0.0125&0.8907\\ G($\bm{\mu},\bm{\sigma}$)-NN&Set3&Set2&0.0012&0.9988\\ \hline G-NN&Set3&Set3A&0.0009&0.9991\\ G$'_{3}$-NN&Set3&Set3A&0.0009&0.9990\\ G($\bm{\mu},\bm{\sigma}$)-NN&Set3&Set3A&0.0011&0.9988\\ \hline \hline \end{tabular} \caption{Performance measures (${\rm RMSD}_{\rm QCT}$ and ${R^2}_{\rm QCT}$) of all G-based models and variants considered in this work. Here, G$'_{i}$-NN denotes the G$'$-based model with $i$ grid points per reactant state distribution. The ``Training'' column denotes the data set on which the model was trained, whereas the ``Test'' column specifies the data set whose test set was used to calculate these performance measures. The number of significant digits being reported is based on the findings of Section \ref{si:stat_eval} in the SI.} \label{sitab:perfomance_measures} \end{table}% \newpage \section{G$'$-based models for Equilibrium Reactant State Distributions} \label{si:limits_reduced_G_based_models} G$'$-based models are variants of G-based models which use a significantly reduced number of grid points per reactant state distribution.
Consider the case of a G$'$-based model, developed for equilibrium reactant state distributions such as Set2.\\ \noindent As these are equilibrium distributions, they are characterized by a corresponding temperature once the system-specific parameters are fixed (here these are the energies of the rovibrational states of the diatom). Thus, providing the value (``amplitude'') of the distribution at a single, ``suitably chosen'' grid point uniquely determines the corresponding distribution in the absence of noise and it is expected that this suffices as input to accurately predict the corresponding product state distributions. This is supported by the finding that G($\bm{\mu}$)- and G($\bm{T}$)-based models yield accurate predictions once $\bm{\mu}$ or $\bm{T}$, respectively, is specified. However, it is important that the grid point chosen does not correspond to a crossing point between two equilibrium distributions at two different temperatures.\\ \noindent In practice, however, the reactant state distributions considered in this work suffer from noise due to finite sample statistics. In the presence of noise the grid points should be placed at locations where the difference between equilibrium distributions at different temperatures is largest. Grid points located where this difference is small, such as at the tail of these distributions, further raise the difficulty of distinguishing between equilibrium distributions at different temperatures when taking noise into account.
Consequently, the presence of noise explains why G$'$-based models developed for equilibrium reactant state distributions of Set2 suffer from a significant drop in the prediction accuracy when the number of grid points is reduced to a single point per distribution.\\ \newpage \section{Additional Figures} \begin{figure}[h] \begin{center} \includegraphics[width=0.99\textwidth]{NN_comparison_accurate_R2.png} \caption{Product state distributions obtained by explicit QCT simulations (QCT) as well as the corresponding references (-R) and predictions (-NN) obtained in the (a-c) F-based (F-R, F-NN), (d-f) K-based (K-R, K-NN) and (g-i) G-based approaches (G-R, G-NN). Furthermore, the amplitudes to construct the reference RKHS-based representations in the K- and G-based approaches are displayed (circles). The data sets considered here are from the test set of Set1 and result in predictions with the largest $R^2_{\rm NN}$ value in the test set: (a-c) $\bm{T}=($17750 K, 12500 K, 12500 K), RMSD$_{\text{NN}}=0.0002$, $R^2_{\text{NN}}=0.99997$, (d-f) $\bm{T}=($16750 K, 8000 K, 8000 K), RMSD$_{\text{NN}}=0.0006$, $R^2_{\text{NN}}=0.9997$, (g-i) $\bm{T}=($9750 K, 5000 K, 5000 K), RMSD$_{\text{NN}}=0.0005$, $R^2_{\text{NN}}=0.9998$.} \label{sifig:NN_comparison_accurate} \end{center} \end{figure} \newpage \begin{figure}[h] \begin{center} \includegraphics[width=0.99\textwidth]{NN_comparison_inaccurate_R2.png} \caption{Product state distributions obtained by explicit QCT simulations (QCT) as well as the corresponding references (-R) and predictions (-NN) obtained in the (a-c) F-based (F-R, F-NN), (d-f) K-based (K-R, K-NN) and (g-i) G-based approaches (G-R, G-NN). Furthermore, the amplitudes to construct the reference RKHS-based representations in the K- and G-based approaches are displayed (circles).
The data sets considered here are from the test set of Set1 and result in predictions with the smallest $R^2_{\rm NN}$ value in the test set: (a-c) $\bm{T}=($5000 K, 8000 K, 8000 K), RMSD$_{\text{NN}}=0.0020$, $R^2_{\text{NN}}=0.9973$, (d-f) $\bm{T}=($6750 K, 18750 K, 18750 K), RMSD$_{\text{NN}}=0.0036$, $R^2_{\text{NN}}=0.9887$, (g-i) $\bm{T}=($19000 K, 18750 K, 18750 K), RMSD$_{\text{NN}}=0.0015$, $R^2_{\text{NN}}=0.9984$.} \label{sifig:NN_comparison_inaccurate} \end{center} \end{figure} \newpage \begin{figure}[h] \includegraphics[width=0.65\textwidth]{Schwarzentruber.png} \caption{$P(v')$ and $P(j')$ obtained by explicit QCT simulations (QCT) as well as the corresponding fits to the parametric surprisal model\cite{schwartz:2018} (Model). The data sets considered here are from Set1: (a-b) $\bm{T}$ = (20000 K, 5000 K, 5000 K), (c-d) $\bm{T}$ = (5500 K, 20000 K, 20000 K). While the model closely matches the QCT data for (a-b), the QCT data for (c-d), in particular $P(v')$, are insufficiently described by the model.} \label{sifig:Schwarzentruber} \end{figure} \newpage \begin{figure}[h!] \begin{center} \includegraphics[width=0.99\textwidth]{alternative_function_based_model.png} \caption{$P(v')$ obtained by explicit QCT simulations (QCT) as well as the corresponding references obtained in the F-based approach using Eq.~6 (F-R 1) and Eq.~12 (F-R 2). The data sets considered here are from Set1: (a) $\bm{T}$ = (12500 K, 5750 K, 5750 K), (b) $\bm{T}$ = (9500 K, 16000 K, 16000 K), (c) $\bm{T}$ = (5750 K, 19250 K, 19250 K). These results illustrate that Eq.~12 leads to a better quality of fit compared to Eq.~6.} \label{sifig:alternative_function_based_model} \end{center} \end{figure} \begin{figure}[h!]
\begin{center} \includegraphics[width=0.99\textwidth]{pv_kernel_coefficients.png} \caption{(a) $P(v')$ obtained by explicit QCT simulations for ($T_{\text{trans}}=12000$ K, $T_{\text{rovib}}=5250$ K; black) and ($T_{\text{trans}}=12000$ K, $T_{\text{rovib}}=5500$ K; red) with similar shape. (b-c) Corresponding featurizations using the K-based approach. Here, the displayed features (kernel coefficients) are (b) non-standardized and (c) standardized. These results illustrate that positive and negative kernel coefficients can cancel. Hence, different combinations of kernel coefficients are able to model similarly shaped distributions, here $P(v')$.} \label{sifig:pv_kernelcoeff} \end{center} \end{figure} \newpage \begin{figure}[h] \begin{center} \includegraphics[width=0.99\textwidth]{results_set2_differentinputs.png} \caption{Product state distributions obtained from explicit QCT simulations (QCT), as well as the corresponding predictions of the G- (G-NN), G$'_{2}$- (G$'_{2}$-NN), G($\bm{T}$)- (G($\bm{T}$)-NN) and G($\bm{\mu}$)-based models (G($\bm{\mu}$)-NN). G$'_{2}$-NN uses 2 grid points per reactant state distribution (see main text). The data sets considered here are from the test set of Set2: (a-c) $\bm{T}$ = (20000 K, 7000 K, 5000 K), (d-f) $\bm{T}$ = (10000 K, 14000 K, 9000 K), (g-i) $\bm{T}$ = (5000 K, 20000 K, 8000 K).} \label{sifig:set2_differentinputs} \end{center} \end{figure} \newpage \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{LearningCurve.png} \caption{Learning curve for the G-based model trained and evaluated on Set3 with a variable size of the training and validation sets. Here, both RMSD$_{\rm NN}$ and $1-R^2_{\rm NN}$ are measures for the NN prediction accuracy. 
The NN prediction accuracy did not significantly increase when $N_{\text{train}}+N_{\text{valid}}$ was increased from 5000 to 30000 in increments of 5000.} \label{sifig:probmodel_learningcurve} \end{center} \end{figure} \begin{table}[h] \centering \begin{tabular}[t]{l|cc} \hline \hline Panels&$w_{n}/w_{\rm tot}$&$\bm{T}$ [K]\\ \hline (a-c)&2/5&$(11750, 6250, 6250)$\\ &2/5&$(7750,8250,8250)$\\ &1/5&$(16750, 19000, 19000)$\\ \hline (d-f)&1/2&$(10000, 14750, 14750)$\\ &1/2&$(20000, 18000, 13000)$\\ \hline (g-i)&2/3&$(19500, 17750,17750)$\\ &1/3&$(5000, 7000, 19000)$\\ \hline \hline \end{tabular} \caption{Normalized weights $w_{n}/w_{\rm tot}$ and sets of temperatures $\bm{T}=(T_{\rm trans}, T_{\rm vib}, T_{\rm rot})$ characterizing the data sets displayed in Figure~8.} \label{sifig:probmodel_differentinputs_R2_table} \end{table}% \newpage \begin{table}[h] \centering \begin{tabular}[t]{l|cc} \hline \hline Panels&$w_{n}/w_{\rm tot}$&$\bm{T}$ [K]\\ \hline (a-c)&39/77&$(15000,19750, 19750)$\\ &6/77&$(14250, 9500 ,9500)$\\ &32/77&$(10250, 10250, 10250)$\\ \hline (d-f)&71/611&$(15000, 19750 ,19750)$\\ &82/611&$(15250 ,12250 ,12250)$\\ &84/611&$(14250 ,18500 ,18500)$\\ &16/611&$(9500,19750 ,19750)$\\ &24/611&$(10000, 14000 ,15000)$\\ &23/611&$(20000, 11000 ,16000)$\\ &95/611&$(9750 ,20000,20000)$\\ &89/611&$(19500 ,17750 ,17750)$\\ &51/611&$(14500 ,19500,19500)$\\ &76/611&$(10250, 10250,10250)$\\ \hline (g-i)&18/77&$(17500, 7500, 7500)$\\ &32/77&$(7750 ,16000,16000)$\\ &5/77&$(10000, 18000, 8000)$\\ &22/77&$(18750 ,5000 ,5000)$\\ \hline \hline \end{tabular} \caption{Normalized weights $w_{n}/w_{\rm tot}$ and sets of temperatures $\bm{T}=(T_{\rm trans}, T_{\rm vib}, T_{\rm rot})$ characterizing the data sets displayed in Figure~9.} \label{sifig:probmodel_diverse_differentinputs_R2_table} \end{table}% \clearpage \section{Introduction} The realistic description of chemical and reactive systems with a large number of available states, such as explosions, hypersonic gas flow 
around space vehicles upon re-entry into the atmosphere, or meteorites penetrating deep into the lower parts of Earth's or a planet's atmosphere, requires an understanding of the relevant processes at a molecular level.\cite{park:1993,cummings:2003,MM.hypersonics:2020} Correctly describing the population of the available state space under such non-equilibrium conditions (e.g. high temperatures in atmospheric re-entry with $T > 10000$ K) from ground-based experiments is extremely challenging. Such high gas temperatures make the gathering of experimental data exceedingly difficult, yet such data are essential for simulations of hypersonic flight.\cite{boyd:2015} On the other hand, a comprehensive modeling of gas phase chemical reactions through explicit molecular-level simulations remains computationally challenging due to the large number of accessible states and transitions between them.\cite{grover:2019,MM.nncs:2019} There are also other, similar situations in physical chemistry such as the spectroscopy in hot environments (e.g. on the sun) for which small polyatomic molecules can populate a large number of rovibrational states\cite{HITRAN} between which transitions can take place. Exhaustively probing and enumerating all allowed transitions or creating high-dimensional analytical representations for them is usually not possible. Nevertheless, it is essential to have complete line lists available because, if specific states that are involved in important transitions are omitted, modeling of the spectroscopic bands becomes difficult or even impossible.\cite{tennyson:2014} This points towards an important requirement for such models, namely that they contain the majority of the important information while remaining sufficiently fast to evaluate.\\ \noindent In such situations, machine learning approaches can provide an alternative to address the problem of characterizing product distributions from given reactant state distributions.
In previous work\cite{MM.nncs:2019}, a model for state-to-state (STS) cross sections of an atom-diatom collision system using a neural network (NN) has been proposed. Motivated by the success of such an approach, the present work attempts to develop an NN-based distribution-to-distribution (DTD) model for the relative translational energy $E_{\rm trans}$, the vibrational $v$ and rotational $j$ states of a reactive atom-diatom collision system. In other words, given the reactant state distributions ($P(E_{\text{trans}}), P(v), P(j)$) such a model predicts the three corresponding product state distributions ($P(E_{\text{trans}}'), P(v'), P(j')$). Here, $P(v)$ and $P(j)$ are marginal distributions, i.e. $P(v)=\sum_{j} P(v,j)$ and $P(j)=\sum_{v} P(v,j)$, where $(v,j)$ labels the rovibrational state of the diatom.\cite{schwartz:2018} Hence, instead of considering all possible combinations of ($E_{\text{trans}}, v, j$) on the reactant (input) and product (output) side explicitly, one is rather interested in a description of these microscopic quantities by means of their underlying probability distributions.\\ \noindent While the state-to-state specificity is lost, such a probabilistic approach considerably reduces the computational complexity of the problem at hand. Although an STS approach is still feasible for an atom-diatom collision system with $\sim 10^{7}$ STS cross sections\cite{MM.nncs:2019}, it becomes intractable\cite{schwartzentruber:2018} even for a diatom-diatom type collision system due to the dramatic increase in the number of STS cross sections to $\sim 10^{15}$.
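The marginalization above is a straightforward sum over the joint rovibrational distribution; as a sketch (the joint distribution is a hypothetical random stand-in, with the $v$ and $j$ ranges matching those sampled in the QCT setup below):

```python
import numpy as np

# hypothetical joint rovibrational distribution P(v, j); v = 0..38, j = 0..242
rng = np.random.default_rng(1)
P_vj = rng.random((39, 243))
P_vj /= P_vj.sum()            # normalize the joint distribution

P_v = P_vj.sum(axis=1)        # marginal P(v) = sum_j P(v, j)
P_j = P_vj.sum(axis=0)        # marginal P(j) = sum_v P(v, j)
```

Since the joint distribution is normalized, both marginals sum to one as well.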
Moreover, such DTD models still allow for the prediction of quantities relevant to hypersonic flow, such as the reaction rates or the average vibrational and rotational energies.\cite{singh2:2019}\\ \noindent Here, the N + O$_{2}\rightarrow$ NO + O reaction is considered, which is relevant in the hypersonic flight regime and for which accurate, fully dimensional potential energy surfaces (PESs) are available.\cite{MM.no2:2020} The necessary reference data to train the NN-based models was obtained by running explicit quasi-classical trajectories (QCT) for reactive N + O$_2$ collisions. In particular, from a diverse set of equilibrium reactant state distributions ($P(E_{\text{trans}}), P(v), P(j)$) for N + O$_{2}$, the corresponding product distributions for NO + O are obtained by means of QCT simulations. In this work, three different approaches for learning and characterizing these distributions are pursued, including function-, kernel-, and grid-based models (F-, K-, and G-based models in the following). The microscopic description provided by such DTD models can, e.g., be used as an input or to develop models for more coarse-grained approaches, including Direct Simulation Monte Carlo\cite{dsmc} (DSMC) or computational fluid dynamics (CFD) simulations. Furthermore, the core findings of this work also carry over to applications in other areas where a DTD model is of interest, such as in demographics\cite{flaxman:2016} or economics\cite{perotti:1996}.\\ \noindent This work is structured as follows. First, the methods including three different approaches to construct NN-based DTD models are described. Then, the performance of the models is assessed for various data sets and improvements in particular related to the input features are explored. 
Finally, implications for modeling hypersonic gas flow, based on DSMC and CFD simulations, are discussed and conclusions are drawn.\\ \section{Methods} \subsection{Quasi-Classical Trajectory Calculations} \label{QCT} \noindent Explicit QCT simulations for the N + O$_{2}$ collision system were carried out on the $^4$A$'$ PES of NO$_2$ following previous work.\cite{tru79,hen11,kon16:4731,MM.cno:2018,MM.no2:2020} Specifically, the reactive channel for the N + O$_2$ $\rightarrow$ NO + O collision was considered. The $^4$A$'$ PES is chosen here because this state contributes most to the equilibrium rate.\cite{MM.no2:2020} Briefly, Hamilton's equations of motion are solved in reactant Jacobi coordinates using a fourth-order Runge-Kutta method with a time step of $\Delta t = 0.05$~fs, which guarantees conservation of the total energy and angular momentum. The initial conditions for each trajectory were randomly chosen using standard Monte Carlo sampling methods.\cite{tru79,hen11} The initial relative translational energies $E_{\rm trans}$ were sampled from Maxwell-Boltzmann distributions ($E_{\rm trans,min} = 0.0$ eV; $E_{\rm trans,max} = 19.8$ eV; $\Delta E_{\rm trans} = 0.1$ eV) and reactant vibrational $v$ and rotational $j$ states were sampled from Boltzmann distributions ($v_{\rm min}=0, v_{\rm max}=38, \Delta v = 1$; $j_{\rm min}=0, j_{\rm max}=242, \Delta j = 1$), characterized by $T_{\text{trans}}$, $T_{\text{vib}}$ and $T_{\text{rot}}$, respectively.\cite{tru79,bender:2015} The impact parameter $b$ was sampled from 0 to $b_{\rm max} = 10$ \AA\/ using stratified sampling\cite{tru79,bender:2015} with 6 equidistant strata.
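One common realization of stratified, area-weighted impact-parameter sampling is sketched below. This is an illustrative assumption for how the 6 equidistant strata could be sampled; the exact scheme of the cited references may differ in detail:

```python
import numpy as np

def sample_impact_parameters(n_per_stratum, b_max=10.0, n_strata=6, seed=2):
    """Stratified, area-weighted sampling of the impact parameter b in [0, b_max].

    Illustrative sketch (not necessarily the exact scheme of the cited
    references): within each equidistant stratum [b_lo, b_hi], b is drawn with
    probability density proportional to b (the 2D area element), i.e.
    b = sqrt(b_lo^2 + xi * (b_hi^2 - b_lo^2)) with xi uniform in [0, 1).
    """
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, b_max, n_strata + 1)
    samples = []
    for b_lo, b_hi in zip(edges[:-1], edges[1:]):
        xi = rng.random(n_per_stratum)
        samples.append(np.sqrt(b_lo**2 + xi * (b_hi**2 - b_lo**2)))
    return np.concatenate(samples)

b = sample_impact_parameters(1000)   # 6 strata x 1000 trajectories each
```

Stratification guarantees that every interval of $b$, including the rarely hit small-$b$ region, receives a fixed share of trajectories.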
The rovibrational reactant (O$_{2}$; $(v,j)$) and product diatom (NO; $(v',j')$) states are calculated following semiclassical theory of bound states.\cite{kar65:3259} The states of the product diatom are assigned using histogram binning.\\ \noindent First, models were constructed for the case $T_{\text{rovib}}=T_{\text{rot}}=T_{\text{vib}}$, for which QCT simulations were performed at $T_{\text{trans}}$ and $T_{\text{rovib}}$ ranging from 5000 K to 20000 K in increments of 250 K. This yielded 3698 sets of reactant state distributions and corresponding product state distributions which will be referred to as ``Set1''. Next, for the more general case $T_{\text{rot}} \neq T_{\text{vib}}$, further QCT simulations were performed for $T_{\text{trans}} = 5000, 10000, 15000, 20000$ K with $T_{\text{vib}}$ and $T_{\text{rot}}$ each ranging from 5000 K to 20000 K in increments of 1000 K. This gives an additional 960 data sets and a total of 4658 data sets that include both cases, $T_{\text{rot}} = T_{\text{vib}}$ and $T_{\text{rot}} \neq T_{\text{vib}}$, collectively referred to as ``Set2''.\\ \noindent The reactant and product state distributions of Set1 and Set2 constitute representative reference data to train and validate NN-based models. For both sets the temperatures $\bm{T}= (T_{\text{trans}}$, $T_{\text{vib}}$, $T_{\text{rot}}$) completely specify the reactant and product state distributions as they are related through the explicit QCT simulations. 
Hence, for brevity a specific set of reactant and product state distributions is referred to as $\bm{T}$.\\ \subsection{Generating Nonequilibrium Data Sets} \label{Generating_noneq_Datasets} In hypersonic applications it is known that quantities such as $P(E_{\text{trans}})$, $P(v)$, or $P(j)$ are typically nonequilibrium probability distributions.\cite{boyd:2015} This has also been confirmed in explicit simulation studies, starting from equilibrium energy and state distributions as is commonly done in QCT simulations.\cite{alp17:2392,MM.cno:2018} Therefore, a general DTD model should be able to correctly predict (nonequilibrium) product state distributions starting from nonequilibrium reactant state distributions. With this in mind, and with Set2 at hand, new reactant and product state distributions were generated by means of a weighted sum of the existing distributions according to \begin{equation} P(i)=\frac{1}{w_{\text{tot}}}\sum_{n=1}^N w_n \cdot P_{n}(i). \label{eq:multit} \end{equation} Here, $i \in [E_{\text{trans}},v,j,E_{\text{trans}}',v',j']$ labels the degree of freedom, $n$ labels the data set, $N \in [2,3]$ is the total number of distributions drawn from Set2, $P_{n}(i)$ are the corresponding distributions used for and obtained from the QCT simulations, and the random weights $w_{n} \in [1,2]$ determine how much each of them contributes to the total sum. The resulting distributions are scaled by $w_{\text{tot}}=\sum_{n=1}^N w_n$ to conserve probability. Such distributions then constitute Set3. It is assumed that any nonequilibrium state distribution can be represented as a decomposition in terms of a linear combination of equilibrium distributions given by Eq.~\ref{eq:multit}.
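The mixing in Eq.~\ref{eq:multit} is straightforward to implement; the sketch below combines two hypothetical normalized Boltzmann-like $P(v)$ distributions (stand-ins for members of Set2, not actual QCT output) with random weights $w_n \in [1,2]$:

```python
import numpy as np

def mix_distributions(P_list, weights):
    """Weighted sum of distributions per Eq. (multit): P(i) = (1/w_tot) sum_n w_n P_n(i)."""
    w = np.asarray(weights, dtype=float)
    P = np.asarray(P_list, dtype=float)
    return (w[:, None] * P).sum(axis=0) / w.sum()

# hypothetical stand-ins for N = 2 equilibrium distributions drawn from Set2
v = np.arange(39)
P1 = np.exp(-v / 5.0);  P1 /= P1.sum()
P2 = np.exp(-v / 12.0); P2 /= P2.sum()
w = np.random.default_rng(3).uniform(1.0, 2.0, size=2)   # random weights w_n in [1, 2]

P_mix = mix_distributions([P1, P2], w)
```

Because each $P_n$ is normalized and the sum is scaled by $w_{\rm tot}$, the mixture remains a normalized probability distribution.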
For instance, general nonequilibrium distributions for nitrogen and oxygen relaxation at high temperature ($> 8000$ K) conditions, obtained via direct molecular simulations (DMS), which is equivalent to solving the full master equation, have been successfully modelled as a weighted sum of two Boltzmann distributions.\cite{singh3:2019} Consequently, DTD models that are successfully trained and validated on Set3 are also expected to generalize well to most nonequilibrium situations encountered in practice.\\ \noindent In the following, a single data set for Set3 is generated by randomly specifying the number of distributions $N \in [2,3]$ to be combined although larger values for $N$ are possible and will be explored later. The final set of reactant and product state distributions is characterized by $N$ sets of temperatures $\bm{T}$ and corresponding normalized weights $\bm{w}=(w_{1}/w_{\rm tot},...,w_{N}/w_{\rm tot})$. The product state distributions obtained by this procedure are akin to explicit QCT simulations using Monte Carlo sampling of the reactant state distributions by sampling each of the equilibrium distributions in the corresponding weighted sum. In the following, three different possibilities for characterizing reactant and product state distributions are described.\\ \subsection{Function (F)-Based Approach} In the F-based approach, each set of relative translational energy, vibrational and rotational state distributions of reactant and product are fitted to parametrized model functions, see Figure \ref{fig:fig1} for an example. The corresponding fitting parameters in Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_p} constitute the input and output of a NN, respectively (see Section \ref{Neural Network} for details on the NN). Together with the parametrized model functions (Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_p}) this serves as a map between reactant and product state distributions, i.e., a DTD model. 
In this work the F-based approach was only applied to Set1.\\ \noindent The set of model functions used here was \begin{equation} \tilde{P}(E_{\text{trans}}) = a_{1} E_{\text{trans}}\cdot \exp(-E_{\text{trans}}/a_{2}), \label{eq:func_E_r} \end{equation} \begin{equation} \tilde{P}(v) = b_{1}\exp(-v/b_{2}), \label{eq:func_v_r} \end{equation} \begin{equation} \tilde{P}(j) = c_{1}\exp(-j/c_{2}) + c_{3}\exp(-(\ln(2c_{4}(j-c_{5})/c_{6}+1)/c_{4})^2), \label{eq:func_j_r} \end{equation} \begin{equation} \tilde{P}(E_{\text{trans}}') = d_{1}\exp(-(\ln(2d_{2}(E_{\text{trans}}'-d_{3})/d_{4}+1)/d_{2})^2), \label{eq:func_E_p} \end{equation} \begin{equation} \tilde{P}(v') = e_{1}v'^4+e_{2}v'^3+e_{3}v'^2+e_{4}v'+e_{5}, \label{eq:func_v_p} \end{equation} \begin{equation} \tilde{P}(j') = f_{1}\exp(-j'/f_{2}) + f_{3}\exp(-(\ln(2f_{4}(j'-f_{5})/f_{6}+1)/f_{4})^2), \label{eq:func_j_p} \end{equation} where $\bm{a}=(a_{1},a_{2})$ through $\bm{f}=(f_{1},...,f_{6})$ are the fitting parameters of the model functions. In total, this results in 10 and 15 fitting parameters for one set of reactant or product state distributions, respectively. For the reactant and product rotational state distributions, $P(j)$ and $P(j')$, the same model function was used. Such a parametric approach has its foundation in surprisal analysis\cite{levine:1978} which was recently used in models for hypersonics.\cite{schwartz:2018,singh1:2019,singh2:2019}\\ \noindent The reactant state distributions in Set1 are equilibrium distributions and the model functions (Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_r}) were chosen accordingly. For the product state distributions, which are typically nonequilibrium distributions, modified parametrizations were used after inspection of the results from the QCT simulations. Here, it is worth mentioning that alternative parametrizations of the model functions are possible, too, which will be briefly explored later.\\ \begin{figure}[h!]
\begin{center} \includegraphics[width=0.99\textwidth]{methods_function_based.png} \caption{Reactant state distributions for the F-based approach: $P(E_{\rm trans})$ ($T_{\text{trans}} = 12500$ K, panel (a)), $P(v)$ (panel (b)) and $P(j)$ (panel (c)) distributions ($T_{\text{rovib}} = 5750$ K) for explicit QCT simulations (black) and corresponding fits (red) obtained using Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_r}.} \label{fig:fig1} \end{center} \end{figure} \subsection{Kernel (K)-Based Approach} \label{K-based method} The representer theorem\cite{scholkopf2001generalized} states that, given $N$ grid points $x_i$, the function $f(x)$ can always be approximated as a linear combination of suitable functions \begin{equation} f(x) \approx \widetilde{f}(x) = \sum_{i = 1}^{N} c_i K(x,x_i) \label{RKHS} \end{equation} where $c_i$ are coefficients and $K(x,x_i)$ is a kernel function. The reproducing property asserts that $f(x') = \langle f(x),K(x,x') \rangle$ where $\langle \cdot \rangle$ is the scalar product, $K(x,x')$ is the kernel\cite{aronszajn:1950} and the coefficients $c_i$ are determined through matrix inversion. This leads to a reproducing kernel Hilbert space (RKHS) representation that exactly reproduces the function at the grid points $x_i$.\cite{aronszajn:1950,ho96:2584,MM.RKHS:2017} In the present work the amplitudes of the distributions at the chosen grid points are used for inter- and extrapolation based on an RKHS representation, and the coefficients $c_{i}$ serve as input and output of the NN. Hence, given the kernel coefficients for the reactant state distributions, the NN is trained to predict the coefficients of the corresponding product state distributions. Together with the associated grids, one obtains a continuous (K+NN-based) prediction of the product state distributions, i.e., a DTD model.
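A minimal sketch of such an RKHS representation, using a Gaussian kernel and Tikhonov regularization as described below (the grid, amplitudes and hyperparameter values are illustrative stand-ins, not those of Table~S1):

```python
import numpy as np

def gaussian_kernel(x, xp, sigma):
    """Gaussian kernel K(x, x') = exp(-|x - x'|^2 / (2 sigma^2)) on two grids."""
    return np.exp(-np.subtract.outer(x, xp) ** 2 / (2.0 * sigma**2))

def rkhs_fit(x_grid, y_grid, sigma, lam):
    """Kernel coefficients c from (K + lam * I) c = y (Tikhonov scheme)."""
    K = gaussian_kernel(x_grid, x_grid, sigma)
    return np.linalg.solve(K + lam * np.eye(len(x_grid)), y_grid)

def rkhs_eval(x, x_grid, c, sigma):
    """Continuous representation f~(x) = sum_i c_i K(x, x_i)."""
    return gaussian_kernel(x, x_grid, sigma) @ c

# illustrative grid and amplitudes: a smooth P(E_trans)-like shape
x_grid = np.linspace(0.0, 10.0, 15)
y_grid = x_grid * np.exp(-x_grid / 2.0)
sigma = np.diff(x_grid).mean()       # sigma set to the average grid spacing
c = rkhs_fit(x_grid, y_grid, sigma, lam=1e-10)
y_dense = rkhs_eval(np.linspace(0.0, 10.0, 200), x_grid, c, sigma)
```

With a small regularization rate $\lambda$ the representation reproduces the amplitudes at the grid points essentially exactly, while $\lambda > 0$ keeps the linear solve well behaved.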
The K-based approach was also only applied to Set1.\\ \noindent In this work, a Gaussian kernel \begin{equation}\label{gaussian_ker} K(x,x')=\exp(-|x-x'|^2/2\sigma^2), \end{equation} with hyperparameter $\sigma$ was found to perform well as a reproducing kernel. Furthermore, a variable amount of regularization as specified by the regularization rate $\lambda$ in the Tikhonov-scheme is used. The hyperparameters $\sigma$ and $\lambda$ remain to be optimized systematically. However, the present choices yielded sufficiently accurate representations. Assigning $\sigma$ to the average spacing between neighbouring grid points was found to be advantageous. Alternatively, it is also possible to choose $\sigma$ at each grid point to be equal to the larger of the two neighbouring grid spacings. In this work, the first approach is used for $P(v)$ and $P(v')$, whereas the second approach is applied for all other distributions. Moreover, such choices for $\sigma$ significantly reduced the number of grid points required for accurate RKHS approximations and the resulting kernel coefficients lead to accurate NN predictions. While the same level of accuracy for the RKHS approximations can be achieved with larger values for $\sigma$, the accuracy of the resulting NN predictions was found to deteriorate. Consequently, only the regularization rate $\lambda$ needed to be tuned and accurate RKHS approximations were obtained after a few iterations. \\ \begin{figure}[b!] 
\begin{center} \includegraphics[width=0.99\textwidth]{methods_kernel_based.png} \caption{Reactant state distributions for K- and G-based approaches: $P(E_{\rm trans})$ ($T_{\text{trans}} = 9500$ K, panel (a)), $P(v)$ (panel (b)) and $P(j)$ (panel (c)) distributions ($T_{\text{rovib}} = 16000$ K) for explicit QCT simulations (black), their RKHS representations (red lines) and the locally averaged values at the corresponding grid points (red circles) used for the G-based approach.} \label{fig:fig2} \end{center} \end{figure} \noindent The location and number of grid points for the reactant and product state distributions are largely arbitrary but should be governed by the overall shape of the distributions, see Figure \ref{fig:fig2} for an example. The grids used here are reported in Table~S1. The number of grid points for reactant and product state distributions differs because they are equilibrium and nonequilibrium distributions, respectively. Also, depending on the shape of the distributions to be represented, additional points may be required to avoid unphysical undulations in the RKHS approximations. For the system considered here, this is mainly observed for $P(v')$ which requires a denser grid than the corresponding reactant state distributions $P(v)$.\\ \noindent Instead of directly evaluating the distributions at the grid points, local averaging over neighboring data points according to \begin{equation} \bar{P}(x_{i}) = \frac{1}{2n+1}\sum\limits_{j=i-n}^{i+n} P(x_{j}), \label{eq:local_averaging} \end{equation} was performed to obtain $\bar{P}(x_{i})$. Here $n = {\rm min}(n_{\rm max},n_{\rm nb})$, with $n_{\rm nb}$ the maximum number of neighbouring data points to the left or the right. If the first and last data points are chosen as grid points they were assigned the unaveraged distribution values. The value of $n_{\text{max}}$ can differ for each of the ($E_{\text{trans}},v,j,E_{\text{trans}}',v',j'$) distributions.
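The local averaging of Eq.~\ref{eq:local_averaging}, with $n = \min(n_{\rm max}, n_{\rm nb})$, can be sketched as follows (the noisy distribution and the grid indices are hypothetical):

```python
import numpy as np

def local_average(P, grid_indices, n_max):
    """Locally averaged amplitudes P_bar(x_i) with n = min(n_max, n_nb).

    With n_nb = 0 at the first and last data points, these are assigned
    their unaveraged values, as described in the text.
    """
    P = np.asarray(P, dtype=float)
    out = []
    for i in grid_indices:
        n_nb = min(i, len(P) - 1 - i)   # neighbors available to the left/right
        n = min(n_max, n_nb)
        out.append(P[i - n : i + n + 1].mean())
    return np.array(out)

# hypothetical noisy distribution sampled on 100 points, with 5 grid points
rng = np.random.default_rng(4)
P = np.exp(-np.arange(100) / 20.0) + 0.01 * rng.standard_normal(100)
P_bar = local_average(P, grid_indices=[0, 10, 30, 60, 99], n_max=3)
```

The averaging window shrinks automatically near the boundaries, so no padding or special-casing of edge grid points is needed.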
For the K-based approach these values were $n_{\text{max},E_{\rm trans}} = 2$, $n_{\text{max},v} = 1$, $n_{\text{max},j} = 12$ for the reactant and $n_{\text{max},E_{\rm trans}'} = 3$, $n_{\text{max},v'} = 2$, and $n_{\text{max},j'} = 13$ for the product state distributions. Local averaging can be seen as an implicit regularization as it reduces the noise partially arising due to finite sample statistics in the QCT simulations.\\ \subsection{Grid (G)-Based Approach} For the G-based approach, the same grids as in the K-based approach are considered (see Table~S1). Furthermore, similar to the K-based approach, local averaging is performed with $n_{\text{max},E_{\rm trans}} = 2$, $n_{\text{max},v} = 1$, $n_{\text{max},j} = 9$ for the reactant and $n_{\text{max},E_{\rm trans}'} = 3$, $n_{\text{max},v'} = 2$, and $n_{\text{max},j'} = 10$ for the product state distributions. These values were adjusted so as to obtain accurate discrete representations of the corresponding distributions. In the G-based approach the locally averaged values of the reactant state distributions at the grid points (referred to as ``amplitudes'') directly serve as the input of a NN, see the red circles in Figure \ref{fig:fig2}. The NN then predicts the product state distributions on the corresponding grids, where the amplitudes serve as the reference. The resulting discrete product distributions are finally represented as a continuous RKHS, establishing a DTD model. Similarly to the K-based approach, a Gaussian kernel (Eq. \ref{gaussian_ker}) was used for the RKHS of the product state distributions, as this choice still yielded accurate approximations. Furthermore, the corresponding hyperparameters $\sigma$ and $\lambda$ were chosen as in the K-based approach.\\ \subsection{Neural Network} \label{Neural Network} The NN architecture for training the three models is a multilayer perceptron with two hidden layers, see Figure \ref{fig:fig4}.
The input and output layers consist of 10/43/43 input and 15/44/44 output nodes in the F-, K-, and G-based approaches. The input/output are the fitting parameters (F-based approach), kernel coefficients (K-based approach), and amplitudes (G-based approach) characterizing reactant and product state distributions, respectively. When training a NN on Set1, Set2, or Set3, the two hidden layers contain 6, 12, and 9 nodes each, respectively.\\ \begin{figure}[h!] \includegraphics[width=0.35\textwidth]{NN_architecture.png} \caption{Schematic diagram of the NN architecture. The activation vector of each hidden layer is \textbf{a$_{i}$}. The input and output vectors are $\mathbf{x}$ and $\mathbf{y}$ and the weight matrix and bias vector for each layer are $\mathbf{W}_{i}$ and $\mathbf{b}_{i}$, respectively. The activation function of the hidden layers is $\sigma(\cdot)$ and corresponds to a shifted softplus\cite{dugas:2001} function $\sigma(\cdot) = \ln(1+e^{(\cdot)})-\ln(2)$. Here, activation functions act element-wise on input vectors $(\cdot)$. The constant $\ln(2)$ centers the mean activation of the hidden nodes at zero, thereby decreasing the bias shift effect, which speeds up the learning.\cite{clevert:2015, lecun:2012} The activation function of the output layer is $C \cdot \tanh(\cdot)$\cite{glorot:2011}, where $C$ is an overall scaling with $C=8/12/8$ in the F-, K- and G-based approaches.} \label{fig:fig4} \end{figure} \noindent For training, the input and output of the NN are standardized according to $x_{i}' = (x_{i} - \bar{x}_{i})/\sigma_{i}$, where $x_{i}$, $\bar{x}_{i}$ and $\sigma_{i}$ are the $i$-th input/output, and the mean and standard deviation of their distribution over the entire set of training data.
Scaling of the input, here by means of standardization, is common practice in the data pre-processing step for machine learning tasks relying on gradient descent for optimization, as it generally yields a faster convergence rate.\cite{lecun:2012} The additional standardization of the output allows one to use a root-mean-square deviation (RMSD) loss function, as the non-standardized values can differ drastically in magnitude and spread. Thus, in particular low and high product state probabilities can be predicted with similar accuracy. It would also be possible to simply normalize the output but the additional offset gives the flexibility to use a scaled hyperbolic tangent as an activation function for the output layer which increases the NN prediction accuracy compared to other/no activation functions. The RMSD loss function $\mathcal{L}$ used here is \begin{equation} \label{loss_func} \mathcal{L} = \sqrt{\frac{1}{N}\sum\limits_{i=1}^{N} \left( y_{i} - y'_{i}\right)^2}, \end{equation} with $y_{i}$ and $y'_{i}$ the predicted and reference output values.\\ \noindent The weights and biases of the NN are initialized according to the Glorot scheme\cite{glorot2010understanding} and optimized using Adam\cite{kingma2014adam} with an exponentially decaying learning rate. The NN is trained using TensorFlow \cite{tf:2016} and the set of weights and biases resulting in the lowest loss as evaluated on the validation set are used for predictions. 
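A minimal sketch of this architecture (Figure \ref{fig:fig4}) and of the loss of Eq. \ref{loss_func} in plain NumPy is given below. The weights, biases, and layer sizes are placeholders rather than the trained model, and the gradient-based optimization with Adam is omitted:

```python
import numpy as np

def shifted_softplus(x):
    """sigma(x) = ln(1 + e^x) - ln(2), so that sigma(0) = 0."""
    return np.log1p(np.exp(x)) - np.log(2.0)

def standardize(x, mean, std):
    """Per-component standardization x' = (x - xbar) / sigma."""
    return (x - mean) / std

def forward(x, params, C=8.0):
    """Two-hidden-layer perceptron with scaled-tanh output:
    y = C * tanh(W3 a2 + b3), a_i = shifted_softplus(W_i a_{i-1} + b_i)."""
    W1, b1, W2, b2, W3, b3 = params
    a1 = shifted_softplus(W1 @ x + b1)
    a2 = shifted_softplus(W2 @ a1 + b2)
    return C * np.tanh(W3 @ a2 + b3)

def rmsd_loss(y_pred, y_ref):
    """RMSD loss, Eq. (loss_func)."""
    return np.sqrt(np.mean((y_pred - y_ref)**2))
```

Note that the scaled tanh bounds the (standardized) outputs to $[-C, C]$, which is why the overall scaling $C$ must be chosen large enough for the standardized targets.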
When training a NN using Set1 with $N_{\text{tot}}=3698$ data sets, $N_{\text{train}}=3000$ were randomly selected for training, $N_{\text{valid}}=600$ for validation and $N_{\text{test}}=98$ as a test set, whereas for Set2 $N_{\text{tot}}=4658$, $N_{\text{train}}=3600$, $N_{\text{valid}}=900$, and $N_{\text{test}}=158$.\\ \noindent To train models on reactant state distributions that can {\it not} be characterized as a single set of temperatures $\bm{T}$, Set3 was constructed by means of a weighted sum of the distributions in Set2 (see Section \ref{Generating_noneq_Datasets}). For this, 158 data sets are randomly selected from $N_{\text{tot}}=4658$ data sets of Set2. They constitute the subset from which the final test set of Set3 is generated. Here, $N_{\text{test}}=125$, see Section \ref{Generating_noneq_Datasets}. The remaining 4500 data sets make up the subset from which the data sets for training and validation are constructed. In particular, $N_{\text{train}}+N_{\text{valid}}= 5000, 10000,15000, 20000, 25000, 30000$ data sets are generated by means of the same procedure, making up the final training and validation sets of Set3 with $N_{\text{train}} = 0.8 \times (N_{\text{train}}+N_{\text{valid}})$ and $N_{\text{valid}} = 0.2 \times (N_{\text{train}}+N_{\text{valid}})$. All NNs in this work were trained on a 1.8 GHz Intel Core i7-10510U CPU with training times shorter than 10 minutes.\\ \section{Results} The results section first presents DTD models for the F-, K-, and G-based approaches for $T_{\text{vib}} = T_{\text{rot}}$. This is followed by discussing the influence of featurization, computational cost and generalizability of the approaches considered. As the G-based approach is found to perform best from a number of different perspectives, models are then trained for $T_{\text{vib}} \neq T_{\text{rot}}$ and for nonequilibrium reactant state distributions. Also, variations of the G-based approach requiring fewer input data are explored. 
Then, the findings are discussed in a broader context and conclusions are drawn.\\ \subsection{Distribution-to-Distribution Models for $T_{\text{vib}} = T_{\text{rot}}$} \label{Dist_to_Dist} First, an overall assessment of the three different approaches for describing the reactant state distributions for Set1 (i.e. $T_{\text{vib}} = T_{\text{rot}}$) is provided. As they are generated according to the typical sampling procedures followed by QCT simulations and not from direct function evaluations of equilibrium distributions, they contain noise. This is done because the entire work is concerned with a situation typically encountered in QCT simulations of reactive processes.\cite{kar65:3259} Also, it is noted that one should find - as demonstrated here - that the reactant state distributions are characterized by one parameter only, namely temperature. This, however, is an open point and not guaranteed for the product state distributions, which are nonequilibrium distributions in general.\\ \noindent For the F- and K-based approaches the reactant state distributions are first represented either as a parametrized model function or as a RKHS, respectively, and the agreement is found to be excellent (see Figures \ref{fig:fig1} and \ref{fig:fig2}). For the G-based approach this step is not required, as the input consists of the amplitudes of the reactant state distributions themselves (see Figure \ref{fig:fig2}).\\ \noindent Having established that all three (F-, K-, and G-based) approaches are suitable to describe reactant state distributions, a NN for each of the three models was trained on Set1 with $N_{\text{train}}=3000$ and $N_{\text{valid}}=600$. The quality of the final model for predicting product state distributions depends on two aspects: 1. The ability of the NN to {\it learn} and predict the product state distributions obtained from the QCT simulations; and 2.
The ability of the (F-, K- or G-based) approaches to {\it describe} these distributions.\\ \noindent {\it 1. Quality of the NN Prediction:} For the first aspect, RMSD and coefficient of determination ($R^2$) values, referred to as RMSD$_{\rm NN}$ and $R^2_{\rm NN}$, are considered as performance measures. For a single data set from the test set, these are calculated by comparing, separately for each of the three product state distributions, the normalized reference representations of each model with the corresponding normalized NN predictions on the grid for which QCT data is available, and then averaging over the resulting values. The normalization factors were calculated by numerical integration of the distributions obtained from the QCT simulations. The final RMSD and $R^2$ values are then obtained by averaging over the entire test set with $N_{\text{test}}=98$, see Table \ref{tab:Set1_perfomance_measures}.\\ \begin{table}[ht] \centering \begin{tabular}[t]{l|cc|cc} \hline \hline DTD model&${\rm RMSD}_{\rm NN}$&${R^2}_{\rm NN}$&${\rm RMSD}_{\rm QCT}$&${R^2}_{\rm QCT}$\\ \hline F-NN&0.0007&0.9996&0.0014&0.9982\\ K-NN&0.0013&0.9984&0.0014&0.9981\\ G-NN&0.0009&0.9994&0.0010&0.9991\\ \hline \hline \end{tabular} \caption{Performance measures of the F-, K- and G-based models (F-NN, K-NN and G-NN) trained and evaluated on Set1. The number of significant digits being reported is based on the findings of Table~S2.} \label{tab:Set1_perfomance_measures} \end{table}% \noindent For three different data sets from the test set, the results from explicit QCT simulations, the NN predictions, and the reference representation of the corresponding approach are shown in Figure \ref{fig:fig5}. These results are representative of NN predictions for data from the test set for each of the three approaches as they are characterized by an $R^2_{\rm NN}$ value closest to the mean $R^2_{\rm NN}$ value as evaluated over the test set.
Figures S1 and S2 show the predictions that are characterized by the highest (``accurate'' prediction) and lowest (``inaccurate'' prediction) $R^2_{\rm NN}$ value in the test set, respectively.\\ \begin{figure}[h!] \begin{center} \includegraphics[width=0.99\textwidth]{NN_comparison_representative_R2.png} \caption{Product state distributions obtained by explicit QCT simulations (QCT) as well as the corresponding references (-R) and predictions (-NN) from the (a-c) F-based (F-R, F-NN), (d-f) K-based (K-R, K-NN) and (g-i) G-based approaches (G-R, G-NN). Also, the amplitudes to construct the reference RKHS-based representations in the K- and G-based approaches are displayed (circles). The data sets considered here are from the test set of Set1 and result in predictions that are characterized by an $R^2_{\rm NN}$ value closest to the mean $R^2_{\rm NN}$ value as evaluated over the test set: (a-c) $\bm{T}=($9500 K, 16000 K, 16000 K), RMSD$_{\text{NN}}=0.0005$, $R^2_{\text{NN}}=0.9996$, (d-f) $\bm{T}=($10250 K, 19250 K, 19250 K), RMSD$_{\text{NN}}=0.0013$, $R^2_{\text{NN}}=0.9984$, (g-i) $\bm{T}=($12000 K, 9750 K, 9750 K), RMSD$_{\text{NN}}=0.0009$, $R^2_{\text{NN}}=0.9994$.} \label{fig:fig5} \end{center} \end{figure} \noindent In general, all three approaches are very accurate, as their predictions closely match the corresponding reference representations. Closer inspection of Figure \ref{fig:fig5} reveals that for these particular examples the K-based approach appears to yield slightly less accurate predictions than the other two approaches (see, for example Figure \ref{fig:fig5}f). The RMSD$_{\rm NN}$ values are 0.0007, 0.0013 and 0.0009 for the F-, K-, and G-based models respectively, and the corresponding $R^2_{\rm NN}$ values are 0.9996, 0.9984 and 0.9994. These performance measures indicate that the F-based approach yields the most accurate predictions, followed by the G- and K-based approaches.\\ \noindent {\it 2. 
Quality of the F-, K-, and G-Based Model Predictions:} For the product state distributions the prediction accuracy depends on the accuracy of the NN and the accuracy with which the representations approximate them. Figure \ref{fig:fig6} compares the final model predictions from the three approaches. The examples illustrate the variety of product state distributions in Set1. It is found that, despite the appreciable variation in shapes (in particular for $P(v')$), all three models correctly describe the product state distributions. Distributions $P(v')$ are not well represented as a single equilibrium distribution, which is typical for vibrational states at high temperatures.\cite{boyd:2015,schwartz:2018}\\ \begin{figure}[h!] \begin{center} \includegraphics[width=0.99\textwidth]{final_model_comparison.png} \caption{Product state distributions obtained by explicit QCT simulations (QCT) as well as the corresponding model predictions obtained in the (a-c) F-based (F-NN), (d-f) K-based (K-NN) and (g-i) G-based approaches (G-NN). The data sets considered here are from the test set of Set1: (a-c) $\bm{T}=$ (12500 K, 5750 K, 5750 K), (d-f) $\bm{T}=$ (9500 K, 16000 K, 16000 K), (g-i) $\bm{T}=$ (5750 K, 19250~K, 19250 K).} \label{fig:fig6} \end{center} \end{figure} \noindent Quantitative measures for the performance of the three models are the RMSD and $R^2$ values, calculated by comparing the locally averaged ($n_{\text{max},E_{\rm trans}'} = 3$, $n_{\text{max},v'} = 2$, $n_{\text{max},j'} = 10$) QCT data with the model predictions, following a procedure similar to that for RMSD$_{\rm NN}$ and $R^2_{\rm NN}$; these are called RMSD$_{\rm QCT}$ and $R^2_{\rm QCT}$, respectively. For Set1 these performance measures are reported in Table \ref{tab:Set1_perfomance_measures}. Again, all models are of high quality, with the G-based approach performing best.
The somewhat lower quality of the F-based approach when compared to the G-based approach can largely be attributed to the fits of the product state distributions. The representation of the F-based approach leads to differences, in particular for $P(v')$ (e.g., deviations for small and high $v'$ or extra undulations in Figure \ref{fig:fig6}b). However, the deviations in the F-based approach observed for high $v'$ are only partially relevant, as the accessible vibrational and rotational state space is finite in practice, here $v'_{\rm max}=47$, $j'_{\rm max}=240$. Since state space is limited, extrapolation is not always required. Considering the K- and G-based approaches, the reference representations describing product distributions are nearly identical and reproduce the QCT data very closely. Hence, the lower accuracy of the final model in the K-based approach when compared to the G-based approach can largely be attributed to its lower NN prediction accuracy. \\ \noindent For the F-based approach, finding an optimal set of model functions (Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_p}) specific to the system at hand is expected to be a difficult task. Such parametric models for nonequilibrium conditions are still a current topic of research.\cite{schwartz:2018, singh1:2019, singh2:2019} To highlight the performance of different models, a parametric model for transient vibrational and rotational state distributions based on surprisal analysis\cite{schwartz:2018} was applied to Set1. $P(v')$ and $P(j')$ distributions for two different sets of temperatures (($T_{\text{trans}} = 20000$ K, $T_{\text{rovib}} = 5000$ K) and ($T_{\text{trans}} = 5500$ K, $T_{\text{rovib}} = 20000$ K)) from QCT simulation are modelled following the parametrization of Ref. \citen{schwartz:2018} (see Figure~S3). 
While for the first set of temperatures (translationally hot and rovibrationally moderately hot) the QCT results for $P(v')$ and $P(j')$ are closely matched by the model, for the second set (translationally moderately hot and rovibrationally hot) both distributions are insufficiently described by the model, in particular for $P(v')$, which is consistent with Ref. \citen{schwartz:2018}. The fact that the shape of $P(v')$ appears to vary more widely with $\bm{T}$ than those of $P(E_{\rm trans}')$ and $P(j')$ may partly explain why developing a universally valid parametric model for $P(v')$ is more challenging. It should be emphasised that comparison between predictions based on the model and explicit QCT simulations is mandatory to validate the model function used.\\ \subsection{Sensitivity of Performance to Feature Selection} As in all machine learning tasks, feature selection for representing the raw data is crucial for the complexity and prediction accuracy of the resulting NN-based model.\cite{goodfellow:2016, bengio:2012} Here, the main difference between the three approaches lies in the features that represent reactant and product state distributions and serve as input/output of the NNs. Hence, any difference in the NN prediction accuracy is due to the features used (i.e. the ``featurization''). The features are fitting parameters (F-based), kernel coefficients (K-based), and amplitudes (G-based), and together they constitute a feature vector. Thus, a good featurization allowing for an accurate NN to be trained is characterized by the fact that similarly shaped distributions are described by similar feature vectors.\cite{faber:2015} Here, ``similarity'' is measured by an appropriate metric, such as a Euclidean norm.\\ \noindent For the F-based approach, the choice of model functions (see Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_p}) turned out to yield a satisfactory featurization.
Conversely, for the K-based approach it was necessary to increase the regularization rate $\lambda$ and to average over more neighbouring data points, which in essence smooths out sharp variations in the kernel coefficients between neighboring data sets in temperature space. In the G-based approach, accurate NN predictions were obtained through local averaging because the amplitudes are the features.\\ \noindent For the F-based approach the dependence of NN performance on feature selection was explicitly explored by choosing an alternative parametrization for \begin{equation} \tilde{P}(v') = g_{1}\exp(-(\ln(2g_{2}(v'-g_{3})/g_{4}+1)/g_{2})^2) + g_{5}\exp(-(\ln(2g_{6}(v'-g_{7})/g_{8}+1)/g_{6})^2), \label{eq:func_v_p_alternative} \end{equation} where $\bm{g}=(g_{1},...,g_{8})$ are the corresponding fitting parameters. The resulting fit (see Figure~S4) to the QCT data demonstrates that Eq. \ref{eq:func_v_p_alternative} yields a better fit than Eq. \ref{eq:func_v_p}. However, training the corresponding NN turned out to be difficult and the resulting NN predictions were highly inaccurate (see below).\\ \noindent Figure \ref{fig:pv_features} illustrates these points for $P(v')$ for three combinations of simulation temperatures with 1) $T_{\text{trans}} \sim T_{\text{rovib}}$ ($T_{\text{trans}} = 5000$ K, $T_{\text{rovib}} = 5000,5250,5500,5750$ K; black), 2) $T_{\text{trans}} < T_{\text{rovib}}$ ($T_{\text{trans}} = 5000$ K, $T_{\text{rovib}} = 10000,10250,10500,10750$ K; red), and 3) $T_{\text{trans}} > T_{\text{rovib}}$ ($T_{\text{trans}} = 12000$ K, $T_{\text{rovib}} = 5000,5250,5500,5750$ K; green). As Figure \ref{fig:pv_features}a demonstrates, the shapes of all $P(v')$ are comparable and for each color there are four largely overlapping distributions which cannot be separated because the differences in $T_{\text{rovib}}$ are too small. Using the F-based approach with Eq.
\ref{eq:func_v_p} for $P(v')$ yields parameter values that are clustered (Figure \ref{fig:pv_features}b) for the black, red, and green $P(v')$, respectively. For such an input, a robust NN can be trained. Conversely, using Eq. \ref{eq:func_v_p_alternative}, even the fitting parameters for one set of $P(v')$ spread considerably and mix with those from $P(v')$ of other temperature combinations, see Figure \ref{fig:pv_features}c. Thus, similarity in the shape of $P(v')$ does not translate into similarity of the fitting parameters used for the featurization. This is an unfavourable situation for training a NN, which compromises the prediction ability of such an F-based model. The F-based model trained with Eq. \ref{eq:func_v_p} yields an accurate prediction (${\rm RMSD}_{\rm NN}=0.0007, R^2_{\rm NN} = 0.9996$), whereas the one trained with Eq. \ref{eq:func_v_p_alternative} fails to predict $P(v')$ (RMSD$_{\rm NN}=0.0348$, $R^2_{\rm NN}=-10.6133$; i.e., this model is worse than a baseline model with $R^2 =0$). Similarly, a K-based approach can lead to considerable spread of the kernel coefficients (Figure \ref{fig:pv_features}d), which is not observed for the amplitudes in the G-based approach (Figure \ref{fig:pv_features}e). Such differences in the featurization lead to differences in the NN prediction accuracies.\\ \begin{figure}[h!] \begin{center} \includegraphics[width=0.99\textwidth]{pv_representations.png} \caption{Comparison of the featurization for three different groups of similarly shaped $P(v')$ (black, red and green). Panel a: the distributions as obtained from QCT simulations (fluctuating lines). Panels b and c: fitting parameters for the F-based approach with model functions from Eqs. \ref{eq:func_v_p} and \ref{eq:func_v_p_alternative}; panel d: kernel coefficients for the K-based approach and panel e: amplitudes for the G-based approach.
The temperatures are ($T_{\text{trans}} = 5000$ K, $T_{\text{rovib}} = 5000,5250,5500,5750$ K; black), ($T_{\text{trans}} = 5000$ K, $T_{\text{rovib}} = 10000,10250,10500,10750$ K; red), and ($T_{\text{trans}} = 12000$ K, $T_{\text{rovib}} = 5000,5250,5500,5750$ K; green). The quality of all fits in panel a is as good as in Figure \ref{fig:fig6} and all features are standardized.} \label{fig:pv_features} \end{center} \end{figure} \noindent Another difference between the NNs in the three approaches is the fact that prediction errors in the features translate into errors in the corresponding predicted product state distributions in different ways. For the G-based approach, an error in the predicted features directly translates into an error in the predicted product state distributions. This is not the case for the F- or K-based approaches. As an example, the model functions for the product state distributions in an F-based approach depend nonlinearly on the fitting parameters (features). Hence, small errors in the NN predictions can lead to large errors in the predicted distributions. This is problematic, as the NNs are trained on a loss function that measures the errors in feature space, whereas one is rather interested in the quality of the predicted product state distributions. By using a loss function that depends on errors in the predicted product distributions, this problem can be avoided. In the K-based approach this is partially resolved by the choice of a Gaussian kernel, where the hyperparameter $\sigma$ is assigned according to the procedure described in Section \ref{K-based method}.
This results in a local kernel with kernel coefficients largely determined by the amplitudes at the corresponding grid points and their nearest neighbours.\cite{bengio:2006} Consequently, errors in the predicted kernel coefficients are also restricted to impact the predicted model function locally, similar to the G-based approach.\\ \subsection{Computational Cost and Generalizability} To compare the computational cost of the final models in the F-, K- and G-based approaches, the evaluation times of the final models for 1000 randomly selected data sets (from Set1) are considered. Here, a single evaluation is defined as a prediction of the product ($E_{\text{trans}}',v',j'$) distributions at 201, 48 and 241 evenly spaced points in the intervals $E_{\text{trans}}' = 0-20$ eV, $v' = 0-47$ and $j' = 0-240$, respectively, given the reactant state distributions. The evaluation times on a 1.8 GHz Intel Core i7-10510U CPU are $(9.0 \pm 0.1)$ s, $(29.0 \pm 0.3)$ s, and $(28.9 \pm 0.3)$ s for the F-, K- and G-based models, respectively. For the F-based approach the evaluation is about three times faster than for the other two methods and is dominated by fitting the reactant state distributions to Eqs. \ref{eq:func_E_r} to \ref{eq:func_j_r}, whereas for the K- and G-based approaches the evaluation time is dominated by the evaluation of the RKHS-based representations of the product distributions given the predicted kernel coefficients or amplitudes, respectively. This may be further improved for the K- and G-based approaches by using a computationally efficient kernel toolkit.\cite{MM.RKHS:2017}\\ \noindent In terms of generality and transferability, an F-based model can {\it not} be easily generalized to distributions with widely different shapes emanating from the QCT simulations. New optimal model functions, also suitable for training a NN, would need to be found for every single system.
Conversely, with the K- and G-based approaches all desired features of the distributions can be captured by an appropriate choice of reproducing kernel and grid, which only requires fine-tuning of the corresponding hyperparameters. Compared to the K-based approach, a G-based model only requires tuning of the corresponding hyperparameters for the product state distributions. In addition, for a G-based model it is also possible to use a linear interpolation instead of a RKHS if one is not concerned with extrapolation. Then, the grid for the product state distributions needs to be chosen sufficiently dense for linear interpolation, at the cost of an increased number of NN parameters.\\ \subsection{Grid-Based Models for $T_{\text{vib}} \neq T_{\text{rot}}$} As vibrational relaxation is often slow in hypersonic flow, assuming $T_{\text{vib}} = T_{\text{rot}}$ is frequently not a good approximation.\cite{boyd:2015,bender:2015} Therefore, the G-based approach is extended to and tested for the case of $T_{\rm vib} \neq T_{\rm rot}$ using Set2. Restricting this to the G-based approach is motivated by the fact that it performed best so far, both in terms of final model accuracy and practicability.\\ \begin{figure}[htb!] \begin{center} \includegraphics[width=0.99\textwidth]{results_set2.png} \caption{Performance of the G-based approach on Set2 from training on Set1 (green) or Set2 (red). Product state distributions from explicit QCT simulations (QCT) compared with model predictions from the G-based approach by training on Set1 (G-NN (Set1)) and Set2 (G-NN (Set2)) for: (a-c) $\bm{T}$ = (20000 K, 7000 K, 5000 K), (d-f) $\bm{T}$ = (10000 K, 14000 K, 9000 K), (g-i) $\bm{T}$ = (5000 K, 20000 K, 8000 K).} \label{fig:tunequal} \end{center} \end{figure} \noindent First, predictions for Set2 were made based on the G-based model (G-NN) trained on Set1 ($T_{\text{vib}} = T_{\text{rot}}$) and compared with QCT data (QCT), see Figure \ref{fig:tunequal}.
The accuracy of this G-based model deteriorates (see Table S3 for all performance measures) as the difference between $T_{\text{vib}}$ and $T_{\text{rot}}$ increases (green lines in Figure \ref{fig:tunequal}). Consequently, a new G-based model was trained and evaluated on Set2. The resulting model predictions (red lines in Figure \ref{fig:tunequal}) are very accurate, close to the level of accuracy of the G-based model trained and evaluated on Set1. Thus, the G-based approach performs equally well for $T_{\text{vib}} = T_{\text{rot}}$ and $T_{\text{vib}} \neq T_{\text{rot}}$.\\ \noindent In an attempt to further improve the G-based approach, three alternative types of input were considered. They are all based on reducing the number of inputs, which not only decreases the computational cost but also removes redundant features, which can improve the prediction accuracy.\cite{chandra:2014} Again, continuous distributions were obtained from an RKHS of the discrete predictions. The first (G$'$-based) model used values of the reactant state distributions at a fixed but reduced number of grid points compared with the G-based model used so far (see Table S1). Next, a model using only the three temperatures characterizing the reactant state distributions $\bm{T}$ as input (G($\bm{T}$)-based model) is considered. This is meaningful because the value of $\bm{T}$ entirely specifies the equilibrium reactant state distributions. A third model used averages $\bm{\mu}=(\mu_{E_{\rm trans}},\mu_{v},\mu_{j})$ of the reactant state distributions as input (G($\bm{\mu}$)-based model). For generality, these models will be trained and evaluated on Set2, and all performance measures are summarized in Table S3.\\ \noindent A G$'$-based approach using two grid points per reactant state distribution ($E_{\text{trans}}=0.3,3.5$ eV; $v=2,12$; $j=30,150$) still allows for accurate predictions of the product state distributions.
The location of these grid points is largely arbitrary, but they should be sufficiently spaced to provide information about the distribution at different locations. However, reducing to a single grid point per reactant state distribution ($E_{\text{trans}}=0.6$ eV, $v=6$, $j=60$) leads to a significant drop in the prediction accuracy. The fact that two grid points per reactant state distribution are required for accurate predictions can mainly be attributed to the presence of noise in the distributions arising from finite sample statistics (see Section IV in the SI for further clarification). The resulting predictions for the above-mentioned models for selected data sets from the test set are displayed in Figure~S6 in the SI.\\ \noindent For the G($\bm{T}$)-based model the performance is close to the original G-based model trained and evaluated on Set2. This is expected, as $\bm{T}$ entirely specifies the equilibrium reactant state distributions and allows a NN to predict the corresponding product state distributions. Finally, providing the mean $\mu$ of each of the reactant state distributions as input in a G($\bm{\mu}$)-based model also leads to highly accurate predictions. This can be explained by the fact that the mean values $\bm{\mu}$ of the reactant equilibrium distributions are uniquely linked to the corresponding set of temperatures $\bm{T}$.\\ \noindent The results of this section highlight that all three {\it variants} of the G-based model yield levels of accuracy similar to the full G-based approach, which makes them preferable, as they are computationally less expensive. These results may be specific to reactant state distributions that can be uniquely specified by a single parameter, such as a temperature or its mean value $\mu$.
To explore this and to further demonstrate the generality of the G-based approach and its variants, a more diverse dataset for nonequilibrium conditions (Set3) was finally considered, for which the reactant state distributions are characterized by multiple sets of temperatures $\bm{T}$.\\ \subsection{Grid-based Models for Nonequilibrium Product State Distributions} As a final application, nonequilibrium DTD models are constructed for Set3, which was generated by means of a weighted sum (see Eq. \ref{eq:multit}) using Set2 (see Section \ref{Generating_noneq_Datasets}) with $N \in [2,3]$ and $w_{n} \in [1,2]$. Training and validation sets of variable sizes were considered, whereas $N_{\text{test}}=125$ throughout. First, a G-based model was trained on Set3 with $N_{\text{train}}+N_{\text{valid}}=5000$. Again, all performance measures are summarized in Table S3.\\ \noindent The predictions of this G-based model for three different data sets from the test set are shown in Figure \ref{fig:probmodel_differentinputs_R2}. In particular, the predictions for these three data sets are characterized by an $R^2_{\rm QCT}$ value closest to the mean $R^2_{\rm QCT}$ value as evaluated over the entire test set, as well as the highest and lowest $R^2_{\rm QCT}$ value in the test set, respectively. Once again, the G-based approach gives a very accurate DTD model, close to the G-based model trained and evaluated on Set2. However, the predictions of the G-based model for Set3 have a larger variance than those of the G-based model for Set2. To assess the influence of the training and validation set size on the prediction accuracy, a learning curve was computed (Figure~S7). The NN prediction accuracy does not significantly increase when $N_{\text{train}}+N_{\text{valid}}$ is increased from 5000 to 30000 in increments of 5000.
Hence, this justifies training and validating the variants of the G-based model only on $N_{\text{train}}+N_{\text{valid}} = 5000$.\\ \noindent In an attempt to further improve and reduce this model for Set3, the dependence on different amounts of input information (as was done for Set2) was tested again. Accurate predictions are still possible with amplitudes of $P(E_{\text{trans}}), P(v), P(j)$ at three different grid points ($E_{\text{trans}}=0.3,1.5,3.5$ eV; $v=2, 6, 12$; $j = 30, 60, 150$), but the prediction accuracy decreases when reducing this to two grid points ($E_{\text{trans}} = 0.3, 3.5$ eV; $v = 2,12$; $j = 30, 150$). Again it is found that with a G$'$-based approach the number of grid points characterizing the reactant state distributions can be significantly reduced compared to the grids in Table S1. This suggests that the possibility to reduce the number of input features (amplitudes) in such G-based models is a generic property which can be systematically explored.\\ \begin{figure}[th!] \begin{center} \includegraphics[width=0.99\textwidth]{probmodel_differentinputs_R2.png} \caption{Performance of G-based models and variants trained and evaluated on Set3. Product state distributions obtained from explicit QCT simulations (QCT), together with the predictions from the G- (G-NN), G$'_{3}$- (G$'_{3}$-NN), G($\bm{\mu},\bm{\sigma}$)- (G($\bm{\mu},\bm{\sigma}$)-NN) and G($\bm{w},\bm{T}$)-based (G($\bm{w},\bm{T}$)-NN) models trained on Set3. G$'_{3}$-NN uses 3 grid points per reactant state distribution (see text). The data sets considered here are from the test set of Set3. In particular, the predictions for these three data sets are characterized by (a-c) a $R^2_{\rm QCT}$ value closest to the mean $R^2_{\rm QCT}$ value as evaluated over the entire test set, as well as (d-f) the largest and (g-i) smallest $R^2_{\rm QCT}$ value in the test set, respectively. 
The normalized weights $w_{n}/w_{\rm tot}$ and sets of temperatures $\bm{T}$ characterizing the data sets displayed here are given in Table S4.} \label{fig:probmodel_differentinputs_R2} \end{center} \end{figure} \noindent Providing the mean $\mu$ and standard deviation $\sigma$ for each reactant state distribution ($E_{\text{trans}}$,$v$,$j$) in a G($\bm{\mu},\bm{\sigma}$)-based approach also yields a good model, whereas omitting the standard deviations $\bm{\sigma}$ as input information results in a G($\bm{\mu}$)-based model with a significantly lower prediction accuracy. This should be compared with the G($\bm{\mu}$)-based model for Set2, which yielded accurate predictions. The aforementioned differences of the G$'$- and G($\bm{\mu}$)-based models for Set2 and Set3 can be attributed to the fact that the reactant state distributions in Set3 show more diverse shapes compared to Set2, which makes it necessary to provide additional information to maintain a high prediction accuracy. In particular, reactant state distributions in Set3 are nonequilibrium distributions and consequently can {\it not} be uniquely specified by a single parameter, such as a temperature or its mean value $\mu$, as was the case in Set2. Rather, the G($\bm{\mu},\bm{\sigma}$)-based approach for Set3 showed that the set of reactant state distributions is characterized by specifying ($\bm{\mu},\bm{\sigma}$).\\ \noindent Keeping this in mind, extending the G($\bm{T}$)-based approach to Set3 can be achieved by providing the sets of temperatures $\bm{T}$ from which the particular reactant state distributions of Set3 were generated, together with the set of weights $\bm{w}$ with which these contributed (see Section \ref{Generating_noneq_Datasets}). This results in a G($\bm{w},\bm{T}$)-based model. To guarantee that the number of NN inputs always matches the fixed input size expected by the NN used in this work (see Section \ref{Neural Network}), zero padding was used. 
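The padding step can be sketched as follows. This is an illustrative sketch only: the ordering of the inputs (one weight followed by a $(T_{\rm trans}, T_{\rm rot}, T_{\rm vib})$ triple per component) and the maximum of three components are assumptions for illustration, not the layout actually used in the NN.

```python
import numpy as np

def pad_wT(weights, temps, max_components=3):
    """Pack a variable-length (w, T) specification into a fixed-length NN
    input vector by zero padding: one weight plus a (T_trans, T_rot, T_vib)
    triple per component; unused slots remain zero. Layout is hypothetical."""
    if len(weights) > max_components:
        raise ValueError("more components than available input slots")
    x = np.zeros(4 * max_components)
    for n, (w, T) in enumerate(zip(weights, temps)):
        x[4 * n] = w                 # weight of component n
        x[4 * n + 1:4 * n + 4] = T   # its temperature triple
    return x
```

The `ValueError` branch makes explicit the limitation discussed below: a fixed-size input cannot represent more components than there are slots.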
Such a G($\bm{w},\bm{T}$)-based model for Set3 leads to accurate predictions. The predictions of the G$'$-, G($\bm{\mu},\bm{\sigma}$)- and G($\bm{w},\bm{T}$)-based models for selected data sets from the test set are also reported in Figure \ref{fig:probmodel_differentinputs_R2}.\\ \noindent Interestingly, the G-, G$'$- and G($\bm{\mu},\bm{\sigma}$)-based models trained on Set3 can accurately predict the product state distributions when given the reactant state distributions from Set2. The performance measures (see Table S3) were calculated by considering the subset of 158 data sets of Set2 from which the test set of Set3 was generated. Even though these models were trained on reactant state distributions given as a linear combination of two to three equilibrium distributions, they can accurately generalize to equilibrium reactant state distributions. This is not the case for the G($\bm{w},\bm{T}$)-based model, which yields unreliable predictions when applied to the reactant state distributions of Set2. This can be attributed to the zero padding. In particular, such a model cannot generalize {\it at all} to reactant state distributions being composed of more than three equilibrium distributions, as this requires more NN input to be specified than there are input nodes. This point can be addressed in the future by considering a different NN architecture allowing for a variable input size.\\ \noindent Conversely, generalizing to reactant state distributions being composed of more than three equilibrium distributions is possible for the G-, G$'$- and G($\bm{\mu},\bm{\sigma}$)-based models trained on Set3. Specifically, a set of 125 reactant and product state distributions given as a linear combination of $N \in [1,10]$ distributions from Set2 (i.e., from the subset of 158 data sets of Set2 from which the test set of Set3 was generated) with integer weights $w_{n} \in [1,100]$ (see Section \ref{Generating_noneq_Datasets}) was generated, referred to as Set3A. 
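A minimal sketch of this construction, assuming only that each synthetic nonequilibrium distribution is a renormalized, integer-weighted linear combination of equilibrium distributions (function name and sampling details are illustrative):

```python
import numpy as np

def mix_distributions(dists, rng, n_min=1, n_max=10, w_min=1, w_max=100):
    """Draw one synthetic nonequilibrium distribution as an integer-weighted,
    renormalized linear combination of N equilibrium distributions.
    Illustrative sketch of the Set3A-style construction."""
    dists = np.asarray(dists, dtype=float)
    n = int(rng.integers(n_min, n_max + 1))           # number of components
    idx = rng.choice(len(dists), size=min(n, len(dists)), replace=False)
    w = rng.integers(w_min, w_max + 1, size=len(idx)).astype(float)
    mix = np.average(dists[idx], axis=0, weights=w)   # weighted combination
    return mix / mix.sum()                            # renormalize
```

Because the result is again a normalized probability distribution, it can be fed to the trained models without any further processing.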
When applied to the reactant state distributions of Set3A, these models still predict the corresponding product state distributions with high accuracy, see Table S3. The final DTD model predictions using the G-based approach as well as its variants for such data sets are shown in Figure \ref{fig:probmodel_diverse_differentinputs_R2}. Consequently, as discussed in Section \ref{Generating_noneq_Datasets}, these models are also expected to generalize well to most nonequilibrium distributions encountered in practice.\\ \begin{figure}[th!] \begin{center} \includegraphics[width=0.99\textwidth]{probmodel_diverse_differentinputs_R2.png} \caption{Performance of G-based models and variants trained on Set3 and evaluated on Set3A. Product state distributions obtained from explicit QCT simulations (QCT), together with the predictions from the G- (G-NN), G$'_{3}$- (G$'_{3}$-NN), G($\bm{\mu},\bm{\sigma}$)- (G($\bm{\mu},\bm{\sigma}$)-NN) and G($\bm{w},\bm{T}$)-based (G($\bm{w},\bm{T}$)-NN) models trained on Set3. G$'_{3}$-NN uses 3 grid points per reactant state distribution (see text). The data sets considered here are from Set3A. In particular, the predictions for these three data sets are characterized by (a-c) a $R^2_{\rm QCT}$ value closest to the mean $R^2_{\rm QCT}$ value as evaluated over the entire Set3A, as well as (d-f) the largest and (g-i) smallest $R^2_{\rm QCT}$ value in Set3A, respectively. 
The normalized weights $w_{n}/w_{\rm tot}$ and sets of temperatures $\bm{T}$ characterizing the data sets displayed here are given in Table S5.} \label{fig:probmodel_diverse_differentinputs_R2} \end{center} \end{figure} \section{Discussion and Conclusions} The present work demonstrates that machine learning of product state distributions from the corresponding reactant state distributions for reactive atom + diatom collisions based on a NN (DTD model) constitutes a promising alternative to a full but computationally very demanding (or even infeasible, e.g., for diatom + diatom type collisions) treatment by means of explicit QCT simulations. For such DTD models, only a subset of the state space of the reactant needs to be sampled, which drastically reduces the computational complexity of the problem at hand.\cite{MM.nncs:2019} In particular, DTD models for the N + O$_2$ $\rightarrow$ NO + O reaction were constructed following three distinct (F-, K- and G-based) approaches for $T_{\rm vib} = T_{\rm rot}$. Although all three approaches yield accurate predictions for the product state distributions as judged from $R^2$ and RMSD measures, the G-based approach performs best in terms of prediction accuracy, generality and practical implementation. On the other hand, the F-based approach is computationally more efficient by a factor of 3 compared with the K- and G-based approaches. For the K- and G-based approaches it is found that RMSD$_{\rm NN}$ and $R^2_{\rm NN}$ are close to RMSD$_{\rm QCT}$ and $R^2_{\rm QCT}$, respectively. This is different for the F-based approach, where RMSD$_{\rm NN}$ is smaller than RMSD$_{\rm QCT}$ by a factor of 2 (similarly for $R^2$), see Table \ref{tab:Set1_perfomance_measures}. This indicates that the parametrizations used for the F-based model can still be improved. 
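For reference, the two performance measures quoted throughout (RMSD and the coefficient of determination $R^2$) can be computed as in the following generic sketch; this is not the evaluation code of this work, whose exact conventions (e.g. how values are averaged over distributions and data sets) are not specified in this excerpt.

```python
import numpy as np

def rmsd(y_pred, y_ref):
    """Root-mean-square deviation between predicted and reference bins."""
    y_pred, y_ref = np.asarray(y_pred), np.asarray(y_ref)
    return np.sqrt(np.mean((y_pred - y_ref) ** 2))

def r_squared(y_pred, y_ref):
    """Coefficient of determination of the prediction against the reference."""
    y_pred, y_ref = np.asarray(y_pred), np.asarray(y_ref)
    ss_res = np.sum((y_ref - y_pred) ** 2)
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives RMSD $=0$ and $R^2=1$; deviations lower $R^2$ and raise the RMSD.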
In general, an F-based approach is feasible if a universally valid and accurate parametrization for the distributions can be found, which also allows for an accurate NN to be trained. However, finding such a parametrization may not always be possible. Consequently, the G-based approach is generally preferred.\\ \noindent The G-based approach and its input-reduced variants (G$'$-, G($\bm{\mu},\bm{\sigma}$)) were found to also perform well for $T_{\rm vib} \neq T_{\rm rot}$ (Set2) and nonequilibrium reactant state distributions (Set3 and Set3A). Consequently, the G$'$- and G($\bm{\mu},\bm{\sigma}$)-based models are generally preferred over the standard G-based model, as the reduced number of inputs lowers their computational cost. Moreover, G-, G$'$- and G($\bm{\mu},\bm{\sigma}$)-based models trained on Set3 are also expected to generalize well to most realistic nonequilibrium distributions. This is of particular relevance for applications in hypersonics for which nonequilibrium effects are of importance.\\ \noindent Therefore, it is also of interest to discuss the present findings in the context of the methods traditionally employed in DSMC\cite{dsmc} and CFD simulations for hypersonics. Continuum-level reaction rates are required in a multi-temperature framework usually employed in CFD solvers\cite{gnoffo1990upwind,wright1998data,nompelis2011implementation}. The exchange reaction rate is given by \begin{equation} k_{\rm exc} (\bm{T}) = \left(\cfrac{8 k_B T_{\rm trans}}{\pi \mu} \right)^{1/2} \pi b_{\rm max}^2 P_{\rm r}, \end{equation} where $k_B$ is the Boltzmann constant and $\mu$ is the reduced mass of the reactants. Here, $P_{\rm r}$ is the reaction probability, which can be obtained in a computationally inexpensive way by integrating one of the predicted product state distributions. 
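The rate expression above is straightforward to evaluate once $P_{\rm r}$ is known; a sketch in SI units follows, where the reduced mass and $b_{\rm max}$ values in the usage example are illustrative placeholders, not values taken from this work.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def k_exc(T_trans, mu, b_max, P_r):
    """Exchange rate coefficient (m^3/s): mean relative thermal speed
    sqrt(8 kB T / (pi mu)) times the geometric cross section pi*b_max^2
    times the reaction probability P_r."""
    v_mean = np.sqrt(8.0 * KB * T_trans / (np.pi * mu))
    return v_mean * np.pi * b_max ** 2 * P_r

# Illustrative numbers only: approximate N/O2 reduced mass and a guessed b_max.
k = k_exc(T_trans=10000.0, mu=1.62e-26, b_max=4.0e-10, P_r=0.01)
```

Note that $k_{\rm exc}$ is exactly linear in $P_{\rm r}$, so refining the predicted product state distributions directly rescales the rate.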
While rates derived in this manner are based on equilibrium distributions characterized by $\bm{T}$, the vibrational population is in nonequilibrium at high temperatures ($T \geq 8000$ K).\cite{schwartz:2018} Nonequilibrium effects are particularly relevant for diatomic dissociation because high vibrational states have a significantly increased dissociation probability. For instance, for the dissociation of N$_2$ (in N$_2$ + N$_2$) studied in Ref. \citen{bender:2015}, at $T_{\rm trans} =T_{\rm rot} = 10000$ K, the dissociation probability to form N$_2$ + N + N increases by a factor of 500 when $T_{\rm vib}$ is increased from 8000 K to 20000 K. Conversely, for the exchange reaction considered in the present work, at $T_{\rm trans} = T_{\rm rot} = 10000$ K, increasing $T_{\rm vib}$ from 5000 K to 18000 K results in an increase in the reaction rate by only 40 \% (see Figure \ref{fig:fig10}). Therefore, due to the weaker dependence of the exchange reaction probability on vibrational energy, a Boltzmann distribution at $T_{\rm vib}$ may be sufficient for modeling exchange rates. However, if necessary, the simple model for non-Boltzmann distributions developed in Ref. \citen{singh1:2019} can be approximated by a linear combination of Boltzmann distributions to include non-Boltzmann effects in the reaction rates for exchange reactions as well. The reactant state distributions can then be expressed as a linear combination of equilibrium distributions, as was done here for Set3, from which $P_{\rm r}$ can be calculated. 
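The building block for such a combination is the Boltzmann population itself; a minimal sketch is given below (level energies and temperatures in the usage are illustrative, and this is not the model of Ref. singh1:2019, only the generic weighted-combination idea).

```python
import numpy as np

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_pop(E, T):
    """Normalized Boltzmann population over levels with energies E (eV)."""
    w = np.exp(-np.asarray(E, dtype=float) / (KB_EV * T))
    return w / w.sum()

def non_boltzmann(E, temps, weights):
    """Approximate a non-Boltzmann population as a weighted linear
    combination of Boltzmann populations at several temperatures."""
    weights = np.asarray(weights, dtype=float)
    pops = np.array([boltzmann_pop(E, T) for T in temps])
    return (weights[:, None] * pops).sum(axis=0) / weights.sum()
```

Adding a small hot component in this way enhances the high-energy tail relative to a single low-temperature Boltzmann distribution, which is the qualitative effect the non-Boltzmann corrections aim to capture.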
Furthermore, the average vibrational energy change due to decomposition reactions, another key input required in CFD, can also be obtained by taking an appropriate moment of the product state distributions.\cite{singh2:2019}\\ \noindent As an alternative to CFD for hypersonic flow, coarse-grained master equations (ME) are used for modeling chemical kinetics.\cite{panesi2013rovibrational,magin2012coarse,andrienko2017state} Here, several rovibrational states are lumped together in groups and only the transition rates between these groups are required, which considerably speeds up such simulations. The accuracy of such an approach directly depends on the criterion with which the groups are generated, though.\cite{macdonald2018_QCT,macdonald2018construction_DMS} A DTD model as developed here constitutes a new framework for tracking the time evolution of the population in each rovibrational state in a computationally feasible manner. In the context of the present work, the DTD model can be used repeatedly to draw reactant state distributions at each time step when propagating the ME. This is similar to sequential QCT proposed in Refs. 
\cite{bruehl1988theoretical1,bruehl1988theoretical2} and the DMS method \cite{schwartzentruber:2018}, but computationally more efficient because it \textit{avoids} explicit trajectory calculations.\\ \begin{figure} \centering \includegraphics[width=3.5in]{Fig10.png} \caption{Exchange reaction rates $k_{\rm exc} (\bm{T})$ for N + O$_2$ $\rightarrow$ NO + O as a function of $T_{\rm vib}$, where $T_{\rm trans} = T_{\rm rot}$.} \label{fig:fig10} \end{figure} \noindent The product state distributions predicted from the DTD models can also be used for developing simple function-based, state-specific models for exchange reactions in DSMC.\cite{bird1976molecular,boyd2017nonequilibrium,singh1:2019,gimelshein2017modeling} Such a model can be used within DSMC to estimate state-specific exchange (forward and backward) reaction probabilities instead of the total collision energy (TCE) \cite{bird1981monte} model. Furthermore, DTD models also provide a QCT-, physics-based alternative to the phenomenological Borgnakke-Larsen model\cite{borgnakke1975statistical} which is currently employed to sample internal energy and translational energy of products formed in exchange reactions.\\ \noindent There is scope to further extend and improve the present methods. One of them concerns the application of the G-based models to predict product state distributions, which can subsequently be used as reactant state distributions for QCT simulations or DTD models. In this way, starting from a set of reactant state distributions, transient distributions can be obtained after a certain number of cycles. This will be of particular relevance for applications in hypersonics. 
Moreover, data construction schemes, such as constructing nonequilibrium distributions as a linear combination of equilibrium distributions, may prove useful for training DTD models from a small set of reactant and product state distributions obtained from explicit QCT simulations that generalize far beyond the conditions used for the reactant state distributions.\\ \noindent Overall, the present work establishes that NN-based models for distribution-to-distribution learning can be developed based on explicit trajectory-based data. This applies both to data generated from QCT and from quantum simulations, provided sufficiently converged and complete data can be generated. More generally, the approach presented in this work will also be applicable to situations in which initial distributions are mapped on final distributions by means of a deterministic algorithm such as molecular dynamics simulations. In the future it may also be of interest to consider a fourth approach to DTD learning based on a distribution regression network\cite{kou:2019}, which promises higher prediction accuracy with fewer NN parameters than the approaches investigated in this work. Moreover, it may also be interesting to explore the possibility for constructing a ``state-to-distribution'' model which would be intermediate between the DTD model and the earlier STS model\cite{MM.nncs:2019}.\\ \section*{Data and Code Availability} All data required to train the NNs has been made available on zenodo \url{https://doi.org/QQQ/zenodo/QQQ} and the code for training the DTD models is available at \url{https://github.com/MMunibas/DTD}. \section*{Acknowledgments} This work was supported by the Swiss National Science Foundation through grants 200021-117810, 200020-188724, the NCCR MUST, and the University of Basel.
\section{Introduction} \label{sec:Introduction} Diffractive hadron interactions at high scattering energies are characterised by final states consisting of two systems well separated in rapidity, which carry the quantum numbers of the initial state hadrons. Most diffractive phenomena are governed by \text{soft}, large distance processes. Despite being dominated by the strong interaction, they remain largely inaccessible to the description by Quantum Chromodynamics (QCD) in terms of quark and gluon interactions. In many cases, perturbative QCD calculations are not applicable because the typical scales involved are too low. Instead, other models have to be employed, such as Regge theory~\cite{Collins:1977jy}, in which interactions are described in terms of the coherent exchange of reggeons and the pomeron. \begin{figure}[!htb]\centering \includegraphics[width=0.33\textwidth,align=t]{d20-080f} \caption{Diffractive vector meson electroproduction.} \label{fig:intro_VMProd} \end{figure} Exclusive vector meson (VM) \text{electroproduction} $e+p \rightarrow e + \ensuremath{{\mathrm{VM}}}\xspace + Y$ is particularly suited to study diffractive phenomena. This process is illustrated in \figref{fig:intro_VMProd}. In leading order QED, the interaction of the electron with the proton is the exchange of a virtual photon which couples to the proton in a diffractive manner to produce a VM (${\ensuremath{{\rho^0}}}\xspace$, $\omega$, $\phi$, $J/\psi$, \dots) in the final state. The proton is scattered into a system $Y$, which can be a proton (\textit{elastic} scattering) or a diffractively excited system (\textit{diffractive proton dissociation}). Scales for the process are provided by the vector meson mass squared $m^2_{\ensuremath{{\mathrm{VM}}}\xspace}$, the photon virtuality $Q^2 = -q^2$, and the squared $4$-momentum transfer at the proton vertex $t$, the dependence on each of which can be studied independently. In this paper, elastic and proton-dissociative \textit{photoproduction} ($Q^2=0$) of ${\ensuremath{{\rho^0}}}\xspace$ mesons is studied using electroproduction data at small $Q^2 < 2.5~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$. 
Electroproduction of {\ensuremath{{\rho^0}}}\xspace mesons has been studied previously at HERA in both the photoproduction regime and for large $Q^2\gg \Lambda_\mathrm{QCD}^2$ (the perturbative cut-off in QCD), as well as for elastic and proton-dissociative scattering~\cite{Aaron:2009wg, Aaron:2009xp, Aktas:2006qs, Adloff:2002tb, Adloff:1999kg, Adloff:1997jd, Aid:1996ee, Aid:1996bs, Chekanov:2007zr, Chekanov:2002rm, Breitweg:1999fm, Breitweg:1999jy, Breitweg:1998nh, Breitweg:1997ed, Derrick:1996vw, Derrick:1995vq, Derrick:1995yd,Abramowicz:2011pk}. Measurements at lower photon-proton centre-of-mass energy {\ensuremath{{W_{\gamma p}}}}\xspace have been published in fixed-target interactions~\cite{Ballam:1971wq,Park:1971ts,Ballam:1972eq,Struczinski:1975ik,Egloff:1979mg,Aston:1982hr}. Most recently, a measurement of exclusive {\ensuremath{{\rho^0}}}\xspace photoproduction has been performed at the CERN LHC in ultra-peripheral lead-proton collisions~\cite{Sirunyan:2019nog}. The present measurement is based on a data set collected during the 2006/2007 HERA running period by the H1 experiment. Since {\ensuremath{{\rho^0}}}\xspace mesons decay almost exclusively into a pair of charged pions, the analysis is based on a sample of {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction events. Compared with previous HERA results, the size of the sample makes possible a much more precise measurement with a statistical precision at the percent level. It is then possible to extract up to three-dimensional kinematic distributions as a function of the invariant mass of the {\ensuremath{{\pi^+\pi^-}}}\xspace system ${\ensuremath{{m_{\pi\pi}}}}\xspace$, of ${\ensuremath{{W_{\gamma p}}}}\xspace$, and of $t$. However, the size of the dataset is such that systematic uncertainties related to the modelling of the H1 experiment become important. 
The structure of the paper is as follows: Theoretical details of {\ensuremath{{\rho^0}}}\xspace meson photoproduction are discussed in \secref{sec:theory} with a focus on the description of {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction in terms of Regge theory (\secref{sec:theo_pipiRegge}), cross section definitions (\secref{sec:theo_xSecDef}), and Monte Carlo modelling of relevant processes (\secref{sec:theo_MCModel}). In \secref{sec:experimental}, the experimental method is detailed, including a description of the H1 detector (\secref{sec:exp_detector}), the dataset underlying the analysis (\secref{sec:exp_dataSample}), the unfolding procedure applied to correct detector level distributions (\secref{sec:exp_unfolding}), and systematic uncertainties of the measurement (\secref{sec:systematics}). Results are presented in \secref{sec:results}. They encompass a measurement of the integrated {\ensuremath{{\pi^+\pi^-}}}\xspace production cross section in the fiducial phase space (\secref{sec:res_sigmaPiPiFid}), a study of the invariant {\ensuremath{{m_{\pi\pi}}}}\xspace distributions (\secref{sec:res_sigmaRhoFid}), measurements of the scattering energy (\secref{sec:res_sigmaRhoOfW}) and $t$ dependencies of the {\ensuremath{{\rho^0}}}\xspace meson production cross sections (\secref{sec:res_dSigmaRhoOft}), as well as the extraction of the leading Regge trajectory from the two-dimensional $t$ and {\ensuremath{{W_{\gamma p}}}}\xspace dependencies (\secref{sec:res_dSigmaRhoOfWt}). 
\section{Theory} \label{sec:theory} \subsection{{\ensuremath{{\pi^+\pi^-}}}\xspace and {\ensuremath{{\rho^0}}}\xspace meson photoproduction} \label{sec:theo_pipiRegge} In electron\footnote{In the following, the term ``electron'' is used indistinctly to refer to both positrons and electrons.}-proton collisions, {\ensuremath{{\pi^+\pi^-}}}\xspace and {\ensuremath{{\rho^0}}}\xspace meson photoproduction is studied in the scattering process \begin{equation} e(e) + p(p) \rightarrow e(e^\prime) + \pi^+(k_1) + \pi^-(k_2) + Y(p^\prime)\, \mathrm, \end{equation} where the 4-momenta of the participating particles are given in parentheses. The relevant kinematic variables are the electron-proton centre-of-mass energy squared \begin{equation} s = (e+p)^2 \mathrm, \end{equation} the photon virtuality, i.e.,\ the negative squared $4$-momentum transfer at the electron vertex \begin{equation} Q^2 = -q^2 = (e-e^\prime)^2 \mathrm, \end{equation} the inelasticity \begin{equation} y= (q \cdot p)/(e\cdot p) \,\mathrm, \end{equation} the $\gamma p$ centre-of-mass energy squared \begin{equation} W_{\gamma p}^2 = (q+p)^2 \mathrm, \label{eqn:theo_Wgp} \end{equation} the invariant mass of the {\ensuremath{{\pi^+\pi^-}}}\xspace system squared \begin{equation} {\ensuremath{{m_{\pi\pi}^2}}}\xspace = (k_1+k_2)^2\mathrm, \end{equation} the squared $4$-momentum transfer at the proton vertex \begin{equation} t = (p-p^\prime)^2\mathrm, \label{eqn:theo_t} \end{equation} and the invariant mass squared of the (possibly dissociated) final state proton system \begin{equation} {\ensuremath{{m_Y^2}}}\xspace = (p^\prime)^2\mathrm. \end{equation} In general, diffractive photoproduction of (light) vector mesons shares the characteristics of soft hadron-hadron scattering: In the high energy limit, the total and elastic cross sections are observed to rise slowly with increasing centre-of-mass energy. 
Differential cross sections $\ensuremath{{\mathrm{d}}} \sigma/\ensuremath{{\mathrm{d}}} t$ are \textit{peripheral}, favouring low $|t|$, \textit{forward} scattering. With increasing scattering energy, the typical $|t|$ of elastic cross sections becomes smaller, i.e.~forward peaks appear to shrink. In vector dominance models (VDM)~\cite{VDM}, the photon is modelled as a superposition of (light) vector mesons which can interact strongly with the proton to subsequently form a bound VM state. Like hadron-hadron interactions in general, VM production is dominated by colour singlet exchange in the $t$-channel. The lack of a sufficiently hard scale makes these exchanges inaccessible to perturbative QCD in a large portion of the phase space. Empirical and phenomenological models are used instead to describe the data. At low $|t| \ll 1~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$, differential cross sections are observed to follow exponential dependencies $\ensuremath{{\mathrm{d}}} \sigma/ \ensuremath{{\mathrm{d}}} t \propto \exp(bt)$. In the optical interpretation, the exponential slope $b$ is related to the transverse size of the scattered objects. Towards larger $|t|$, cross section dependencies change into a less steep power-law dependence $\ensuremath{{\mathrm{d}}} \sigma / \ensuremath{{\mathrm{d}}} t \propto |t|^a$. In Regge theory~\cite{Collins:1977jy}, the dependence of hadronic cross sections on the scattering energy ${\ensuremath{{W_{\gamma p}}}}\xspace$ and the shrinkage of the forward peak are characterised by \textit{Regge trajectories} $\alpha(t)$. The contribution of a single \textit{Regge pole} to the differential elastic cross section is $\ensuremath{{\mathrm{d}}}\sigma_{\ensuremath{{\mathrm{el}}}\xspace} / \ensuremath{{\mathrm{d}}} t({\ensuremath{{W_{\gamma p}}}}\xspace) \propto W_{\gamma p}^{4 (\alpha(t)-1)}$~\cite{Collins:1977jy}. 
At low energies ${\ensuremath{{W_{\gamma p}}}}\xspace \lesssim 10~{\ensuremath{{\mathrm{GeV}}}}\xspace$, \textit{reggeon trajectories} $\alpha_{\ensuremath{{I\!\!R}}\xspace}(t)$ dominate which are characterised by intercepts $\alpha_{\ensuremath{{I\!\!R}}\xspace}(0) < 1$, i.e.,\ they result in cross sections that fall off with increasing energy~\cite{Donnachie:1994zb}. In the high energy limit, only the contribution of what is known as the \textit{pomeron} Regge pole remains because its trajectory $\alpha_{\ensuremath{{I\!\!P}}\xspace}(t)$ has a large enough intercept $\alpha_{\ensuremath{{I\!\!P}}\xspace}(0) \gtrsim 1$ for it not to have decreased to a negligible level. The shrinkage of the elastic forward peak with increasing energy is the result of a positive slope $\alpha_{\ensuremath{{I\!\!P}}\xspace}^\prime > 0$ of the trajectory $\alpha(t)$ at $t=0$. \begin{figure}[!htb]\centering \includegraphics[width=0.33\textwidth,align=c]{d20-080f2a}\hspace*{.1\textwidth} \includegraphics[width=0.33\textwidth,align=c]{d20-080f2b} \caption{Diagram of {\ensuremath{{\rho^0}}}\xspace meson production and decay in elastic (left) and proton-dissociative (right) $ep$ scattering in the VDM and Regge picture, where the interaction is governed by soft pomeron exchange in the high energy limit.} \label{fig:intro_eptorhopDiag} \end{figure} Feynman-like diagrams which illustrate the interpretation of {\ensuremath{{\rho^0}}}\xspace meson photoproduction in the VDM/Regge picture are given in \figref{fig:intro_eptorhopDiag}. In the diagrams, the {\ensuremath{{\rho^0}}}\xspace meson is shown to decay into a pair of charged pions. This is the dominant decay channel with a branching ratio ${\ensuremath{{\mathcal{BR}}}}\xspace({\ensuremath{{\rho^0}}}\xspace \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace ) \simeq 99\%$~\cite{PDG}. The decay structure of the {\ensuremath{{\rho^0}}}\xspace meson into {\ensuremath{{\pi^+\pi^-}}}\xspace is described by two decay angles~\cite{Schilling:1973ag}. 
These also give insight into the production mechanism of the {\ensuremath{{\rho^0}}}\xspace meson. Contributions with $s$-channel helicity conservation are expected to dominate, such that the {\ensuremath{{\rho^0}}}\xspace meson retains the helicity of the photon, i.e.,\ in photoproduction the {\ensuremath{{\rho^0}}}\xspace meson is transversely polarised~\cite{Breitweg:1997ed}. While vector mesons dominate photon-proton interactions, the VDM approach does not provide a complete picture. This is particularly evident for {\ensuremath{{\rho^0}}}\xspace meson production where in the region of the {\ensuremath{{\rho^0}}}\xspace meson resonance peak also non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace production plays an important role. The non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace amplitude interferes with the resonant {\ensuremath{{\rho^0}}}\xspace meson production amplitude to produce a skewing of the Breit-Wigner resonance profile in the {\ensuremath{{\pi^+\pi^-}}}\xspace mass spectrum~\cite{Soding:1965nh}. Newer models thus aim to take a more general approach, e.g.,\ a recently developed model for {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction based on tensor pomeron exchange~\cite{Nachtmann:pipi}. That model seems to be in fair agreement with H1 data when certain model parameters are adjusted \cite{Phd:Abolz}. A more detailed investigation is beyond the scope of this paper. \subsection{Cross section definition} \label{sec:theo_xSecDef} \subsubsection{Photon flux normalisation} In this paper, photoproduction cross sections $\sigma_{{\ensuremath{{\gamma p}}}\xspace}$ are derived from the measured electron-proton cross sections $\sigma_{ep}$. 
At low $Q^2$, these can be expressed as a product of a flux of virtual photons $f^{T/L}_{\gamma/e}$ and a virtual photon-proton cross section $\sigma^{T/L}_{\gamma^* p}$: \begin{equation} \dfrac{\ensuremath{{\mathrm{d}}}^2 \sigma_{ep}}{\ensuremath{{\mathrm{d}}} y \ensuremath{{\mathrm{d}}} Q^2} =f^T_{\gamma/e}\left(y, Q^2\right) \, \sigma^T_{\gamma^* p}\left({\ensuremath{{W_{\gamma p}}}}\xspace(y,Q^2), Q^2\right) + f^L_{\gamma/e}\left(y, Q^2\right) \, \sigma^L_{\gamma^* p}\left({\ensuremath{{W_{\gamma p}}}}\xspace(y,Q^2), Q^2\right). \label{eqn:sigmaEpFactoriz} \end{equation} A distinction is made between transversely and longitudinally polarised photons, as indicated by the superscripts $T$ and $L$, respectively. In the low $Q^2$ regime studied here, the transverse component is dominant. The transverse and longitudinal photon fluxes are given in the Weizs\"acker-Williams approximation~\cite{equivPhoton} by \begin{align} f^T_{\gamma/e}(y,Q^2) &= \dfrac{\alpha_\mathrm{em}}{2\pi} \dfrac{1}{yQ^2}\left( 1 + (1-y)^2 - 2(1-y) \dfrac{Q^2_\mathrm{min}}{Q^2} \right) \intertext{ and} f^L_{\gamma/e}(y,Q^2) &= \dfrac{\alpha_\mathrm{em}}{\pi} \dfrac{1}{yQ^2}\left(1-y\right) \text, \end{align} respectively, where $\alpha_\mathrm{em}$ is the fine structure constant, $m_e$ the mass of the electron, and $Q^2_\mathrm{min} = m_e^2 y^2/(1-y)$ is the smallest kinematically accessible $Q^2$ value. 
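The two flux factors are elementary to evaluate numerically; the following sketch (units of GeV$^2$ for $Q^2$; function names are illustrative) implements them directly from the formulas above.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.035999  # fine structure constant
M_E = 0.000510998946         # electron mass in GeV

def flux_T(y, Q2):
    """Transverse photon flux f^T_{gamma/e}(y, Q^2) in the
    Weizsaecker-Williams approximation; Q2 in GeV^2."""
    Q2_min = M_E ** 2 * y ** 2 / (1.0 - y)
    return ALPHA_EM / (2.0 * np.pi) / (y * Q2) * (
        1.0 + (1.0 - y) ** 2 - 2.0 * (1.0 - y) * Q2_min / Q2)

def flux_L(y, Q2):
    """Longitudinal photon flux f^L_{gamma/e}(y, Q^2)."""
    return ALPHA_EM / np.pi / (y * Q2) * (1.0 - y)
```

At the small $Q^2$ values relevant here, the $Q^2_\mathrm{min}/Q^2$ term is a tiny correction, and the transverse flux dominates.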
In vector meson production in the VDM approach, the cross section for virtual photon interactions ($Q^2>0$) can be related to the corresponding photoproduction cross section $\sigma_{\gamma p}$ at $Q^2=0$: \begin{align} \sigma^T_{\gamma^* p}\left({\ensuremath{{W_{\gamma p}}}}\xspace, Q^2\right) &= \sigma_{\gamma p}\left({\ensuremath{{W_{\gamma p}}}}\xspace\right) \, \left( 1 + \frac{Q^2}{m^2_{\ensuremath{{\mathrm{VM}}}\xspace}} \right)^{-2} \mathrm, \\ \sigma^L_{\gamma^* p}\left({\ensuremath{{W_{\gamma p}}}}\xspace, Q^2\right) &= \sigma^T_{\gamma^* p}\left({\ensuremath{{W_{\gamma p}}}}\xspace, Q^2\right) \, \dfrac{Q^2}{m^2_{\ensuremath{{\mathrm{VM}}}\xspace}} \xi^2\mathrm. \end{align} In the equations, $m_{\ensuremath{{\mathrm{VM}}}\xspace}$ denotes the mass of the considered vector meson and $\xi^2$ is a proportionality constant, which in the following is set to unity. The real photoproduction cross section can then be factored out of \eqnref{eqn:sigmaEpFactoriz} according to \begin{equation} \dfrac{\ensuremath{{\mathrm{d}}}^2 \sigma_{ep}}{\ensuremath{{\mathrm{d}}} y \ensuremath{{\mathrm{d}}} Q^2} = \sigma_{\gamma p}\left( {\ensuremath{{W_{\gamma p}}}}\xspace(y) \right)\, \varphi_\mathrm{eff}\left(y, Q^2\right) \mathrm, \end{equation} where the so-called \textit{effective photon flux} is given by \begin{equation} \varphi_\mathrm{eff}(y, Q^2) = \dfrac{\alpha_\mathrm{em}}{2\pi} \dfrac{1}{yQ^2} \left[ 1 + (1-y)^2 - 2(1-y) \left( \dfrac{Q^2_\mathrm{min}}{Q^2}-\dfrac{Q^2}{m^2_{\ensuremath{{\mathrm{VM}}}\xspace}} \right) \right] \left( 1 + \frac{Q^2}{m^2_{\ensuremath{{\mathrm{VM}}}\xspace}} \right)^{-2}\mathrm. \label{eqn:theo_effFlux} \end{equation} In practice, measurements of $\sigma_{ep}$ are evaluated as integrals over finite regions in $Q^2$ and ${\ensuremath{{W_{\gamma p}}}}\xspace$. 
In order to extract corresponding photoproduction cross sections at $Q^2=0$ and for an appropriately chosen average energy $\langle {\ensuremath{{W_{\gamma p}}}}\xspace \rangle$, the measured values are normalised by the effective photon flux integrated over the corresponding $Q^2$ and ${\ensuremath{{W_{\gamma p}}}}\xspace$ ranges: \begin{equation} \sigma_{\gamma p}\left(\langle {\ensuremath{{W_{\gamma p}}}}\xspace \rangle\right) = \dfrac{ \sigma_{ep} }{\ensuremath{{\Phi_{\gamma/e}}}\xspace} \mathrm, \end{equation} with \begin{equation} \ensuremath{{\Phi_{\gamma/e}}}\xspace = \displaystyle \int_{\scriptsize\begin{array}{c}Q^2_{\min}<Q^2<Q^2_{\max} \\ W_{\min}<{\ensuremath{{W_{\gamma p}}}}\xspace<W_{\max}\end{array}} \varphi_\mathrm{eff}(y, Q^2) \, \ensuremath{{\mathrm{d}}} y\ensuremath{{\mathrm{d}}} Q^2 \mathrm. \label{eqn:theo_intFlux} \end{equation} \subsubsection{{\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section} \label{sec:theo_xSecRho} With the H1 detector, {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction is measured. In the kinematic region studied, there are significant contributions from the {\ensuremath{{\rho^0}}}\xspace meson resonance, non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace production, the $\omega$ meson resonance and excited $\rho$ meson states~\cite{PDG,Nachtmann:pipi}. Photoproduction of {\ensuremath{{\rho^0}}}\xspace mesons has to be disentangled from these processes by analysis of the {\ensuremath{{\pi^+\pi^-}}}\xspace mass spectrum. Here, a model is used which parametrises the spectrum in terms of {\ensuremath{{\rho^0}}}\xspace and $\omega$ meson, as well as non-resonant amplitudes~\cite{Phd:Abolz}, similar to the original proposal made by S\"oding~\cite{Soding:1965nh}. The contributions are added at the amplitude level, and interference effects are taken into account. 
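As a numerical sketch of this normalisation procedure, the effective flux and its integral over a $({\ensuremath{{W_{\gamma p}}}}\xspace, Q^2)$ region can be evaluated with a simple midpoint rule. The code below assumes the low-$Q^2$ relation $y \approx W^2/s$ and a nominal $\rho^0$ mass in the VDM propagator; the function names and grid size are our own choices:

```python
import math

ALPHA_EM = 1.0 / 137.035999  # fine structure constant
M_E = 0.000510999            # electron mass [GeV]
M_RHO = 0.775                # rho0 mass [GeV], enters the VDM propagator

def phi_eff(y, q2, m_vm=M_RHO):
    """Effective photon flux [GeV^-2] for given y and Q^2 [GeV^2]."""
    q2_min = M_E**2 * y**2 / (1.0 - y)
    bracket = (1.0 + (1.0 - y)**2
               - 2.0 * (1.0 - y) * (q2_min / q2 - q2 / m_vm**2))
    return (ALPHA_EM / (2.0 * math.pi) / (y * q2)
            * bracket * (1.0 + q2 / m_vm**2)**-2)

def integrated_flux(w_min, w_max, q2_max, s, n=200):
    """Midpoint-rule integral of phi_eff over the (y, Q^2) region,
    using the low-Q^2 approximation y = W^2/s and a log-spaced Q^2 grid
    from the kinematic minimum up to q2_max."""
    y_lo, y_hi = w_min**2 / s, w_max**2 / s
    dy = (y_hi - y_lo) / n
    total = 0.0
    for i in range(n):
        y = y_lo + (i + 0.5) * dy
        u_lo = math.log(M_E**2 * y**2 / (1.0 - y))
        du = (math.log(q2_max) - u_lo) / n
        for j in range(n):
            q2 = math.exp(u_lo + (j + 0.5) * du)
            total += phi_eff(y, q2) * q2 * du * dy  # dQ^2 = Q^2 du
    return total
```

For HERA beam energies of $E_e = 27.6~{\ensuremath{{\mathrm{GeV}}}}\xspace$ and $E_p = 920~{\ensuremath{{\mathrm{GeV}}}}\xspace$, i.e.\ $s = 4E_eE_p$, a region such as $20 < {\ensuremath{{W_{\gamma p}}}}\xspace < 80~{\ensuremath{{\mathrm{GeV}}}}\xspace$ and $Q^2 < 2.5~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$ yields an integrated flux of order $0.1$.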
The model is defined as \begin{equation} \dfrac{\ensuremath{{\mathrm{d}}} \sigma_{\pi\pi}}{\ensuremath{{\mathrm{d}}}{\ensuremath{{m_{\pi\pi}}}}\xspace }({\ensuremath{{m_{\pi\pi}}}}\xspace) = A\ \dfrac{q^3({\ensuremath{{m_{\pi\pi}}}}\xspace)}{q^3(m_\rho)} \ \bigg \vert A_{\rho,\omega}({\ensuremath{{m_{\pi\pi}}}}\xspace) + A_{\ensuremath{{\mathrm{nr}}}\xspace}({\ensuremath{{m_{\pi\pi}}}}\xspace) \bigg \vert^2\text, \label{eqn:theo_rhoSoedMass} \end{equation} where $A$ is a global normalisation factor and $q^3({\ensuremath{{m_{\pi\pi}}}}\xspace)$ describes the phase space, with $q({\ensuremath{{m_{\pi\pi}}}}\xspace) = \frac{1}{2} \sqrt{{\ensuremath{{m_{\pi\pi}^2}}}\xspace - 4m_\pi^2}$ being the momentum of one of the pions in the {\ensuremath{{\pi^+\pi^-}}}\xspace centre-of-mass frame~\cite{Jackson:1964zd}. It is normalised to the value $q^3(m_\rho)$ at the {\ensuremath{{\rho^0}}}\xspace mass. The amplitude $A_{\rho,\omega}$ takes into account the ${\ensuremath{{\rho^0}}}\xspace$ and $\omega$ meson resonance contributions, whereas the non-resonant component is modelled by $A_{\ensuremath{{\mathrm{nr}}}\xspace}({\ensuremath{{m_{\pi\pi}}}}\xspace)$. The components are considered to be fully coherent. Since additional resonances with masses above 1~{\ensuremath{{\mathrm{GeV}}}}\xspace are neglected, the model can only be applied to the region near the {\ensuremath{{\rho^0}}}\xspace meson mass peak. Models of this form have been widely used in similar past analyses because they preserve the physical amplitude structure while being parametric and thus easily applicable. More sophisticated models are designed to preserve unitarity \cite{Basdevant:1977ya} or are regularised by barrier factors at higher masses \cite{VonHippel:1972fg}. 
The combined ${\ensuremath{{\rho^0\text{-}\omega}}}\xspace$ amplitude is modelled following a parametrisation given by \cite{Akhmetshin:2001ig} \begin{equation} A_{\rho,\omega}({\ensuremath{{m_{\pi\pi}}}}\xspace) = \mathcal{BW}_{\rho}({\ensuremath{{m_{\pi\pi}}}}\xspace) \ \left( 1 + f_\omega \ensuremath{{\mathrm{e}}}^{\ensuremath{{\mathrm{i}}} \phi_{\omega} } \dfrac{ {\ensuremath{{m_{\pi\pi}^2}}}\xspace}{ m_\omega^2 } \mathcal{BW}_{\omega}({\ensuremath{{m_{\pi\pi}}}}\xspace) \right), \end{equation} where $f_\omega$ and $\phi_\omega$ are a normalisation factor and a mixing phase for the $\omega$ contribution, respectively. The $\omega$ meson is not expected to decay into a pair of charged pions directly because of the conservation of G-parity by the strong interaction. However, electromagnetic $\omega\rightarrow{\ensuremath{{\rho^0}}}\xspace$-mixing with a subsequent ${\ensuremath{{\rho^0}}}\xspace \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace$ decay is possible~\cite{OConnell:1995nse}. Both resonances are modelled by a relativistic Breit-Wigner function~\cite{Breit:1936zzb}: \begin{equation} \mathcal{BW}_{\ensuremath{{\mathrm{VM}}}\xspace}({\ensuremath{{m_{\pi\pi}}}}\xspace) = \dfrac{ m_{\ensuremath{{\mathrm{VM}}}\xspace} \Gamma_{{\ensuremath{{\mathrm{VM}}}\xspace}} }{ m^2_{\pi\pi} - m_{\ensuremath{{\mathrm{VM}}}\xspace}^2 + \ensuremath{{\mathrm{i}}} \, m_{\ensuremath{{\mathrm{VM}}}\xspace} \Gamma({\ensuremath{{m_{\pi\pi}}}}\xspace) }. \label{eqn:effingBW} \end{equation} The parameters $m_{\ensuremath{{\mathrm{VM}}}\xspace}$ and $\Gamma_{{\ensuremath{{\mathrm{VM}}}\xspace}}$ are the respective vector meson's Breit-Wigner mass and width. The Breit-Wigner function is normalised to $|\mathcal{BW}_{\ensuremath{{\mathrm{VM}}}\xspace}(m_{\ensuremath{{\mathrm{VM}}}\xspace})|=1$. 
For the {\ensuremath{{\rho^0}}}\xspace resonance a p-wave mass-dependent width~\cite{Jackson:1964zd} is used: \begin{equation} \Gamma({\ensuremath{{m_{\pi\pi}}}}\xspace) = \Gamma_{{\ensuremath{{\mathrm{VM}}}\xspace}} \dfrac{q^3({\ensuremath{{m_{\pi\pi}}}}\xspace)}{q^3(m_{\ensuremath{{\mathrm{VM}}}\xspace})} \dfrac{m_{\ensuremath{{\mathrm{VM}}}\xspace}}{{\ensuremath{{m_{\pi\pi}}}}\xspace}\mathrm{,} \label{eqn:gamma_rho} \end{equation} whereas a constant width is assumed for the very narrow $\omega$ resonance. The unknown non-resonant amplitude is parametrised by the function \begin{equation} A_{\ensuremath{{\mathrm{nr}}}\xspace}({\ensuremath{{m_{\pi\pi}}}}\xspace) = \dfrac{f_{\ensuremath{{\mathrm{nr}}}\xspace}}{ \left( {\ensuremath{{m_{\pi\pi}^2}}}\xspace - 4m_\pi^2 + \Lambda^2_{\ensuremath{{\mathrm{nr}}}\xspace} \right)^{\delta_{\ensuremath{{\mathrm{nr}}}\xspace}}} \mathrm{,} \label{eqn:theo_Anr} \end{equation} where the relative normalisation is given by $f_{\ensuremath{{\mathrm{nr}}}\xspace}$, while $\Lambda_{\ensuremath{{\mathrm{nr}}}\xspace}$ and $\delta_{\ensuremath{{\mathrm{nr}}}\xspace}$ are free model parameters. They can shape the amplitude for the modelling of a possible internal structure of the non-resonant $\gamma\pi\pi$-coupling. In similar past analyses, typically a purely real non-resonant amplitude has been assumed. Following that assumption, $f_{\ensuremath{{\mathrm{nr}}}\xspace}$ is set to be real. For $\delta_{\ensuremath{{\mathrm{nr}}}\xspace} > \frac{3}{4}$, the non-resonant contribution to the cross section (cf. 
\eqnref{eqn:theo_rhoSoedMass}) has a local maximum at \begin{equation} m_{\ensuremath{{\mathrm{nr}}}\xspace}^{\text{max}} = \dfrac{ \sqrt{\Lambda^2_{\ensuremath{{\mathrm{nr}}}\xspace} + (\frac{4}{3}\delta_{\ensuremath{{\mathrm{nr}}}\xspace}-1)\, 4m_\pi^2} }{\sqrt{\frac{4}{3}\delta_{\ensuremath{{\mathrm{nr}}}\xspace}-1}}\mathrm, \end{equation} and falls proportionally to $\left( 1/{\ensuremath{{m_{\pi\pi}^2}}}\xspace \right)^{2\delta_{\ensuremath{{\mathrm{nr}}}\xspace}-3/2}$ in the high mass region. In order to extract the {\ensuremath{{\rho^0}}}\xspace meson contribution to the {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross section, the measured {\ensuremath{{\pi^+\pi^-}}}\xspace mass distributions are fitted using \eqnref{eqn:theo_rhoSoedMass}. The {\ensuremath{{\rho^0}}}\xspace meson Breit-Wigner contribution is then conventionally defined by the integral \begin{equation} \sigma(\gamma p \rightarrow {\ensuremath{{\rho^0}}}\xspace Y) = \dfrac{A}{q^3(m_\rho)} \int_{2m_\pi}^{m_\rho + 5 \Gamma_{\rho} } \big\vert \mathcal{BW}_{\rho}(m) \big\vert^2 q^3(m) \ensuremath{{\mathrm{d}}} m. \label{eqn:theo_intRho} \end{equation} As the {\ensuremath{{\rho^0}}}\xspace meson resonance decays almost exclusively into two charged pions, this is taken to be equal to the total {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section without correcting for the ${\ensuremath{{\rho^0}}}\xspace \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace$ branching fraction. Kinematic dependencies of the {\ensuremath{{\rho^0}}}\xspace meson production cross section on the variables {\ensuremath{{W_{\gamma p}}}}\xspace, $t$, and {\ensuremath{{m_Y}}}\xspace are measured by fitting the mass distributions in bins of the respective variables, such that all model parameters may have kinematic dependencies. Physical considerations and statistical and technical limitations affect the assumed dependencies.
Physical parameters, i.e.,\ $m_{\ensuremath{{\mathrm{VM}}}\xspace}$, $\Gamma_{{\ensuremath{{\mathrm{VM}}}\xspace}}$, and $\delta_{\ensuremath{{\mathrm{nr}}}\xspace}$, are assumed to be constants. The small width of the $\omega$ meson cannot be constrained by the present data; the PDG value $\Gamma_{\omega}=8.5~{\ensuremath{{\mathrm{MeV}}}}\xspace$~\cite{PDG} is therefore assumed and kept fixed in all fits. Dependencies of $f_\omega$, $\phi_\omega$, and $f_{\ensuremath{{\mathrm{nr}}}\xspace}$ on $t$ or {\ensuremath{{W_{\gamma p}}}}\xspace also cannot be constrained with the present data. However, these parameters are allowed to depend on {\ensuremath{{m_Y}}}\xspace, i.e.,\ to be different for elastic and proton-dissociative distributions. The non-resonant background is observed to change with $t$. This is modelled by a $t$ dependence of the parameter $\Lambda_{\ensuremath{{\mathrm{nr}}}\xspace}$, which can also be different for elastic and proton-dissociative distributions. The normalisation $A$ is a free parameter in each kinematic bin. All fit set-ups with the corresponding parameter assumptions are summarised in \tabref{tab:theo_SodingPars}.
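To make the fit model concrete, the lineshape defined by the equations above can be sketched in code. This is a minimal illustration, not the fit code of the analysis; all numerical parameter values are placeholders of plausible magnitude rather than fit results:

```python
import cmath
import math

M_PI = 0.13957  # charged pion mass [GeV]

def q_momentum(m):
    """Momentum of one pion in the pi+pi- centre-of-mass frame [GeV]."""
    return 0.5 * math.sqrt(m * m - 4.0 * M_PI**2)

def gamma_rho(m, m_vm, gamma_vm):
    """p-wave mass-dependent width used for the rho0 resonance."""
    return gamma_vm * (q_momentum(m) / q_momentum(m_vm))**3 * m_vm / m

def breit_wigner(m, m_vm, gamma_vm, mass_dependent=True):
    """Relativistic Breit-Wigner amplitude, normalised to |BW(m_vm)| = 1."""
    width = gamma_rho(m, m_vm, gamma_vm) if mass_dependent else gamma_vm
    return m_vm * gamma_vm / (m * m - m_vm**2 + 1j * m_vm * width)

def a_nr(m, f_nr, lam_nr, delta_nr):
    """Non-resonant amplitude with free shape parameters lam_nr, delta_nr."""
    return f_nr / (m * m - 4.0 * M_PI**2 + lam_nr**2)**delta_nr

def dsigma_dm(m, A, m_rho=0.7685, g_rho=0.1507, f_om=0.01, phi_om=1.7,
              m_om=0.7827, g_om=0.0085, f_nr=1.0, lam_nr=0.5, delta_nr=1.0):
    """Soeding-type pi+pi- lineshape: coherent sum of the rho-omega and
    non-resonant amplitudes times the q^3 phase-space factor."""
    bw_rho = breit_wigner(m, m_rho, g_rho)
    bw_om = breit_wigner(m, m_om, g_om, mass_dependent=False)
    a_rho_om = bw_rho * (1.0 + f_om * cmath.exp(1j * phi_om)
                         * m * m / m_om**2 * bw_om)
    amp = a_rho_om + a_nr(m, f_nr, lam_nr, delta_nr)
    return A * (q_momentum(m) / q_momentum(m_rho))**3 * abs(amp)**2
```

Maximising $q^3({\ensuremath{{m_{\pi\pi}}}}\xspace)\,|A_{\ensuremath{{\mathrm{nr}}}\xspace}|^2$ numerically on a mass grid reproduces the analytic position of the non-resonant maximum quoted above.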
\begin{table}\centering \small \begin{tabular}{@{}l@{\hspace{0.5em}} l@{\hspace{1.0em}} c c c c@{}} \toprule & & \multicolumn{4}{c}{Number of free fit parameters} \\ \cmidrule{3-6} Parameter & Dependencies & $\frac{\ensuremath{{\mathrm{d}}} \sigma}{\ensuremath{{\mathrm{d}}} m}(m;\, {\ensuremath{{m_Y}}}\xspace)$ & $\frac{\ensuremath{{\mathrm{d}}} \sigma}{\ensuremath{{\mathrm{d}}} m}(m;\, {\ensuremath{{m_Y}}}\xspace,W)$ & $\frac{\ensuremath{{\mathrm{d}}}^2 \sigma}{\ensuremath{{\mathrm{d}}} m \ensuremath{{\mathrm{d}}} t}(m;\, {\ensuremath{{m_Y}}}\xspace, t)$ & $\frac{\ensuremath{{\mathrm{d}}}^2 \sigma}{\ensuremath{{\mathrm{d}}} m \ensuremath{{\mathrm{d}}} t}(m;\, {\ensuremath{{m_Y}}}\xspace, W, t)$\\ \midrule $A$ & ${\ensuremath{{m_Y}}}\xspace,\ W,\ t$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $9_{W}^{\ensuremath{{\mathrm{el}}}\xspace\vphantom{p}} + 6_{W}^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $12_{t}^{\ensuremath{{\mathrm{el}}}\xspace\vphantom{p}} + 9_{t}^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $4_{W}^{\ensuremath{{\mathrm{el}}}\xspace\vphantom{p}}\cdot7_{t}^{\ensuremath{{\mathrm{el}}}\xspace\vphantom{p}} + 4_{W}^{\ensuremath{{\mathrm{pd}}}\xspace}\cdot5_{t}^{\ensuremath{{\mathrm{pd}}}\xspace}$ \\ $m_\rho$ & - & $1$ & $1$ & $1$ & $1$ \\ $\Gamma_\rho$ & - & $1$ & $1$ & $1$ & $1$ \\ $f_\omega$ & ${\ensuremath{{m_Y}}}\xspace$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & fixed & fixed & fixed \\ $\phi_\omega$ & ${\ensuremath{{m_Y}}}\xspace$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & fixed & fixed & fixed \\ $m_\omega$ & - & $1$ & fixed & fixed & fixed \\ $\Gamma_\omega$ & - & PDG & PDG & PDG & PDG \\ $f_{\ensuremath{{\mathrm{nr}}}\xspace}$ & ${\ensuremath{{m_Y}}}\xspace$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & 
$1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ \\ $\delta_{\ensuremath{{\mathrm{nr}}}\xspace}$ & - & $1$ & $1$ & $1$ & $1$ \\ $\Lambda_{\ensuremath{{\mathrm{nr}}}\xspace}$ & ${\ensuremath{{m_Y}}}\xspace,\ t$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $1^{\ensuremath{{\mathrm{el}}}\xspace}+1^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $12_{t}^{\ensuremath{{\mathrm{el}}}\xspace\vphantom{p}} + 9_{t}^{\ensuremath{{\mathrm{pd}}}\xspace}$ & $7_{t}^{\ensuremath{{\mathrm{el}}}\xspace\vphantom{p}} + 5_{t}^{\ensuremath{{\mathrm{pd}}}\xspace}$\\ \midrule \multicolumn{2}{@{}l}{Total} & $14$ & $22$ & $47$ & $65$ \\ \bottomrule \end{tabular} \caption{% Parameter assumptions and resulting number of parameters used to fit \eqnref{eqn:theo_rhoSoedMass} to the invariant {\ensuremath{{\pi^+\pi^-}}}\xspace mass distributions. The number of parameters depends on the number of bins in the extracted cross sections. The number of bins in which a parameter is fitted freely is given and the corresponding distribution indicated by sub- and superscripts. For the fits of the {\ensuremath{{m_{\pi\pi}}}}\xspace distributions in multiple {\ensuremath{{W_{\gamma p}}}}\xspace or $t$ bins, the $\omega$ meson model parameters are fixed to the values obtained from the fit to the one-dimensional distributions. The $\omega$ meson width is always fixed to the PDG value. } \label{tab:theo_SodingPars} \end{table} \subsection{Monte Carlo modelling} \label{sec:theo_MCModel} For the purpose of quantifying detector effects, the data are modelled using Monte Carlo (MC) simulations of elastic and proton-dissociative electroproduction of ${\ensuremath{{\rho^0}}}\xspace$, $\omega(782)$, $\phi(1020)$, $\rho(1450)$, and $\rho(1700)$ vector mesons, as well as for non-resonant diffractive photon dissociation. 
The samples are all generated using the DIFFVM event generator~\cite{List:1999}, which models VM production on the principles of the equivalent photon approximation~\cite{equivPhoton}, VDM~\cite{VDM}, and pomeron exchange~\cite{ReggeTraj}. Proton dissociation is modelled by DIFFVM assuming the following dependence of the cross section on the mass of the dissociated system: \begin{equation} \dfrac{\ensuremath{{\mathrm{d}}} \sigma_{\ensuremath{{\gamma p}}}\xspace}{\ensuremath{{\mathrm{d}}} {\ensuremath{{m_Y^2}}}\xspace} = \dfrac{f({\ensuremath{{m_Y}}}\xspace)}{ ({\ensuremath{{m_Y^2}}}\xspace)^{1+\epsilon_Y} }\mathrm{.} \end{equation} Here, $\epsilon_Y = 0.0808$ and $f({\ensuremath{{m_Y}}}\xspace)$ is a phenomenological function that is fitted to the experimental data~\cite{ProtonDissoc} to parametrise the low-mass resonance structure in the region $m_p < {\ensuremath{{m_Y}}}\xspace < 1.9~{\ensuremath{{\mathrm{GeV}}}}\xspace$. For ${\ensuremath{{m_Y}}}\xspace>1.9~{\ensuremath{{\mathrm{GeV}}}}\xspace$, the function is constant, $f({\ensuremath{{m_Y}}}\xspace)=1$. In the low mass region the dissociative system is treated as an $N^*$ resonance and decays are modelled according to measured branching fractions~\cite{PDG}. For higher masses the decay is simulated using the Lund fragmentation model as implemented in JETSET~\cite{Sjostrand:1986hx}. Non-resonant photon dissociation is modelled analogously by assuming a dissociative mass $m_X$ spectrum $\ensuremath{{\mathrm{d}}} \sigma_{{\ensuremath{{\gamma p}}}\xspace}/\ensuremath{{\mathrm{d}}} m_X^2 = 1/(m_X^2)^{1+\epsilon_X}$ with $\epsilon_X = \epsilon_Y$, and simulating the decay using the Lund model.
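The continuum part of this $m_Y$ spectrum can be generated by inverse-transform sampling, as one might do in a toy simulation. A minimal sketch follows (function names are ours; the upper cut-off of $m_Y = 10~{\ensuremath{{\mathrm{GeV}}}}\xspace$ is an illustrative choice):

```python
import math
import random

EPS_Y = 0.0808  # epsilon_Y from the text

def sample_my2(u, my2_min=1.9**2, my2_max=10.0**2, eps=EPS_Y):
    """Inverse-CDF sample of m_Y^2 from dsigma/dm_Y^2 ~ (m_Y^2)^(-1-eps),
    valid in the continuum region m_Y > 1.9 GeV where f(m_Y) = 1.
    u must be uniform in [0, 1); the upper cut-off is illustrative."""
    lo, hi = my2_min**-eps, my2_max**-eps
    return (lo - u * (lo - hi))**(-1.0 / eps)

random.seed(1)
masses = [math.sqrt(sample_my2(random.random())) for _ in range(20000)]
```

Because $\epsilon_Y$ is small, the spectrum is nearly flat in $\ln {\ensuremath{{m_Y^2}}}\xspace$, so the generated masses populate the full range up to the cut-off.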
\begin{table}[htb] \centering \small \renewcommand{\arraystretch}{1.1} \begin{tabular}{ @{}l@{\hspace{2.5em}} l @{\hspace{2em}}r@{\hspace{2em}} c c@{}} \toprule & & & \multicolumn{2}{c}{Number of events} \\ \cmidrule{4-5} Process & Decay modes & ${\ensuremath{{\mathcal{BR}}}}\xspace$ $[\%]$ & elastic & $p$-dissociative \\ \midrule \multirow{2}{*}{${\ensuremath{{\rho^0}}}\xspace(770)$} & $\pi^+\pi^-$ & $ 99.0$ & \multirow{2}{*}{$10^7$} & \multirow{2}{*}{$10^7$} \\ & ${\ensuremath{{\pi^+\pi^-}}}\xspace\gamma$ & $ 1.0$ \\ \multicolumn{3}{l}{$\quad \hookrightarrow$ reweighted to describe all {\ensuremath{{\pi^+\pi^-}}}\xspace final states} &\\ \cmidrule(r){1-3} \cmidrule{4-5} \multirow{3}{*}{$\omega(782)$} & $\pi^+\pi^-\pi^0$ & $ 89.2$ & \multirow{3}{*}{$10^6$} & \multirow{3}{*}{$10^6$} \\ & $\pi^0\gamma$ & $ 8.6$ \\ & $\pi^+\pi^-$ (removed, included in signal) & $ 2.2$ \\ \cmidrule(r){1-3} \cmidrule{4-5} \multirow{5}{*}{$\phi(1020)$} & $K^+ K^-$ & $ 49.0$ & \multirow{5}{*}{$10^6$} & \multirow{5}{*}{$10^6$}\\ & $K_L K_S$ & $ 34.4$ \\ & $\pi^+\rho^-,\ \pi^-\rho^+,\ \pi^0\rho^0$ & $4.3$, $4.3$, $4.3$ \\ & $\pi^+\pi^-\pi^0$ & $ 2.4 $\\ & $\eta \gamma$ & $ 1.3 $\\ \cmidrule(r){1-3} \cmidrule{4-5} \multirow{2}{*}{$\rho(1450)$} & $\rho^0\pi^+\pi^-,\ \rho^+\pi^-\pi^0,\ \rho^-\pi^+\pi^0$ & $25.0$, $25.0$, $ 25.0 $ & \multirow{2}{*}{$10^6$} & \multirow{2}{*}{$10^6$} \\ \multirow{2}{*}{\& } & $\pi^+\pi^-\pi^+\pi^-$ & $ 15.0$ \\ \multirow{2}{*}{$\rho(1700)$} & $\pi^+\pi^-\pi^0\pi^0$ & $ 8.0$ & \multirow{2}{*}{$10^6$} & \multirow{2}{*}{$10^6$}\\ & $\pi^+\pi^-$ (removed, included in signal) & $ 2.0$ \\ \multicolumn{3}{l}{$\quad \hookrightarrow$ merged $\rho(1450):\rho(1700) = 1:1$ } & \\ \cmidrule(r){1-3} \cmidrule{4-5} \multirow{2}{*}{$\gamma$-dissoc.} & \multicolumn{2}{l}{Lund fragmentation model } & \multirow{2}{*}{$10^7$}& \multirow{2}{*}{$10^7$} \\ & \multicolumn{2}{l}{ (exclusive ${\ensuremath{{\pi^+\pi^-}}}\xspace$ removed, included in signal) } \\ \bottomrule \end{tabular} 
\caption{DIFFVM MC samples used to model the {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction dataset. All decay modes with a branching fraction $\gtrsim1\%$ are simulated~\cite{List:1999}. For the $\rho(1450)$ and $\rho(1700)$ samples a ratio of 1:1 is assumed. The {\ensuremath{{\pi^+\pi^-}}}\xspace final states are removed from background samples and included in the signal definition.} \label{tab:theo_diffVMSamples} \end{table} In \tabref{tab:theo_diffVMSamples}, details on the samples and in particular on the simulated decay modes are listed\footnote{For the simulation of the $\rho(1450)$ and $\rho(1700)$ mesons, DIFFVM was modified to account for the finite width of intermediate $\rho(770)$ resonances, and decay modes involving the final state ${\ensuremath{{\pi^+\pi^-}}}\xspace \pi^0\pi^0$ were added.}. Several of the considered processes result in an exclusive {\ensuremath{{\pi^+\pi^-}}}\xspace final state. They are simulated by DIFFVM independently, so that interference effects are not considered. However, these can be significant. For example, the interference between the {\ensuremath{{\rho^0}}}\xspace meson resonance and non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace production causes a strong skewing of the resonance lineshape. To account for these interference effects, the ${\ensuremath{{\rho^0}}}\xspace$ meson samples are reweighted to describe exclusive {\ensuremath{{\pi^+\pi^-}}}\xspace production including contributions from {\ensuremath{{\rho^0}}}\xspace, $\omega$, and a single {\ensuremath{{\rho^\prime}}}\xspace meson resonance and non-resonant production, which are all added at the amplitude level. For the reweighting, a $t$ and {\ensuremath{{m_{\pi\pi}}}}\xspace dependent lineshape is used, which is similar to the model introduced in \secref{sec:theo_xSecRho} but is extended by an additional {\ensuremath{{\rho^\prime}}}\xspace Breit-Wigner amplitude~\cite{Phd:Abolz}.
All generated events are passed through the full GEANT-3~\cite{Brun:1994aa} based simulation of the H1 apparatus and are reconstructed using the same program chain as used for the data. Trigger scaling factors are applied to correct for differences in the trigger performance between data and simulation. They are obtained from a {\ensuremath{{\pi^+\pi^-}}}\xspace electroproduction sample, which is triggered independently of the tracking devices~\cite{Phd:Abolz}. A template is constructed from all MC samples to describe the data. For a better description of the {\ensuremath{{W_{\gamma p}}}}\xspace and $t$ distributions, all samples are tuned to data~\cite{Phd:Abolz}. An additional background contribution from beam-gas events is estimated with a data-driven method. The MC normalisations are obtained from data control regions that are enriched with events from a given process through modified event selection requirements as described below. The $\rho(1450)$ and $\rho(1700)$ samples cannot be well distinguished experimentally in this analysis. They are thus combined at a $1{:}1$ ratio and treated as a single MC sample. In order to obtain normalisations for the elastic and proton-dissociative samples independently, information from the forward detector components is used as described below. Neither initial- and final-state radiation of real photons from the electron nor vacuum polarisation effects are simulated. Consequently, these effects are not corrected for in the present measurement. In a comparable phase space, their effect on the overall cross section has been estimated to be smaller than 2\%~\cite{Kurek:1996ez}. \section{Experimental Method and Data Analysis} \label{sec:experimental} \subsection{H1 detector} \label{sec:exp_detector} A detailed description of the H1 detector is given elsewhere~\cite{Abt:1996xvhi,Appuhn:1996na}. The components that are relevant for the present analysis are briefly described in the following.
A right-handed Cartesian coordinate system is used with the origin at the nominal $ep$ interaction point. The proton beam direction defines the positive $z$-axis (\textit{forward direction}). Transverse momenta are measured in the $x$-$y$ plane. Polar ($\theta$) and azimuthal ($\phi$) angles are measured with respect to this frame of reference. The interaction point is surrounded in the central region ($15^\circ < \theta < 165^\circ$) by the central tracking detector. Two large coaxial cylindrical jet chambers (CJC1 and CJC2) for precise track reconstruction form its core. They are supported by the central inner proportional chamber (CIP) used for the reconstruction of the primary vertex position on the trigger level, a $z$-drift chamber for an improved reconstruction of $z$ coordinates, and a silicon vertex detector for the reconstruction of secondary decay vertices~\cite{Pitzl:2000wz}. In the forward direction (${7^\circ < \theta < 30^\circ}$), additional coverage is provided by the forward tracking detector, a set of planar drift chambers. The tracking detectors are operated in a solenoidal magnetic field of 1.16~T. For offline track reconstruction, track helix parameters are fitted to the inner detector hits in a general broken lines fit~\cite{Blobel:2006yi}. The procedure considers multiple scattering and corrects for energy loss by ionisation in the detector material. The primary vertex position is calculated from all tracks and optimised as part of the fitting procedure. Transverse track momenta are measured with a resolution of ${\sigma(p_T)/p_T \simeq 0.002~p_T/{\ensuremath{{\mathrm{GeV}}}}\xspace \oplus 0.015}$. The CJCs also provide a measurement of the specific energy loss of charged particles by ionisation ${\ensuremath{{\dd E/\dd x}}}\xspace$ with a relative resolution of 6.5\% for long tracks. The tracking detectors are surrounded by the liquid argon (LAr) sampling calorimeter~\cite{Andrieu:1993kh}. 
It provides coverage in the region ${4^\circ < \theta < 154^\circ}$ and over the full azimuthal angle. The inner electromagnetic section of the LAr is interlaced with lead, the outer hadronic section with steel absorbers. With the LAr, the energies of electromagnetic and hadronic showers are measured with a precision of ${\sigma(E)/E \simeq 12\%/\sqrt{E/{\ensuremath{{\mathrm{GeV}}}}\xspace} \oplus 1\%}$ and ${\sigma(E)/E \simeq 50\%/\sqrt{E/{\ensuremath{{\mathrm{GeV}}}}\xspace}\oplus 2\%}$, respectively~\cite{Andrieu:1994yn}. In the backward region \linebreak (${153^\circ < \theta < 178^\circ}$), energies are measured with a spaghetti calorimeter (SpaCal) of lead absorbers interlaced with scintillating fibres~\cite{Appuhn:1996na}. Detector components positioned in the very forward direction are used in this analysis to identify proton dissociation events. These are the forward muon detectors (FMD), the PLUG calorimeter and the forward tagging system (FTS). The lead-scintillator plug calorimeter is positioned around the beampipe at $z=4.9~{\ensuremath{{\mathrm{m}}}}\xspace$ to measure the energies of particles in the pseudorapidity region $3.5 < \eta < 5.5$. The FMD is a system of six drift chambers positioned outside of the LAr and covering the range $1.9 < \eta < 3.7$. Particles at larger pseudorapidity up to $\eta \lesssim 6.5$ can still induce spurious signals via secondary particles produced in interactions with the beam transport system and detector support structures~\cite{Phd:Dirkmann}. The very forward region, $6.0 < \eta < 7.5$, is covered by an FTS station of scintillation detectors positioned around the beampipe at $z=28~{\ensuremath{{\mathrm{m}}}}\xspace$. The H1 trigger is operated in four stages. The first trigger level (L1) is implemented in dedicated hardware reading out fast signals of selected sub-detector components. Those signals are combined and refined at the second level (L2). 
A third, software-based level (L3) combines L1 and L2 information for partial event reconstruction. After full detector read-out and full event reconstruction, events are subject to a final software-based filtering (L4). The data used for the present analysis are recorded using mainly information from the fast track trigger (FTT)~\cite{Baird:2001xc}. The FTT makes it possible to measure transverse track parameters at the first trigger level and complete three-dimensional tracks at L2. This is achieved through applying pattern recognition and associative memory technology to identify predefined tracks in the hit-patterns produced by charged particles in a subset of the CJC signal wires. The instantaneous luminosity is measured by H1 with a dedicated photon detector located close to the beampipe at $z=-103~{\ensuremath{{\mathrm{m}}}}\xspace$. With it, the rate of the Bethe-Heitler process $ep \rightarrow ep\gamma$ is monitored. The integrated luminosity is measured more precisely with the main H1 detector using the elastic QED Compton process. In this process, the electron and photon in the $ep\gamma$ final state have large transverse momenta and can be reconstructed in a back-to-back topology in the SpaCal. The integrated luminosity has been measured with a total uncertainty of $2.7\%$~\cite{Aaron:2012kn} that is dominated by systematic effects. \subsection{Data sample} \label{sec:exp_dataSample} The present analysis is based on data collected by the H1 experiment during the 2006/2007 HERA running period. In that period, the accelerator was operated with positrons having an energy of $E_e = 27.6~{\ensuremath{{\mathrm{GeV}}}}\xspace$ and protons with an energy of $E_p = 920~{\ensuremath{{\mathrm{GeV}}}}\xspace$. Due to bandwidth limitations, only a subset of the H1 dataset is available for the trigger conditions relevant for this analysis, corresponding to an effective integrated luminosity of $\mathcal{L}_\mathrm{int} = 1.3~{\ensuremath{{\mathrm{pb}^{-1}}}}\xspace$. 
In the kinematic range considered in this analysis, the pions from ${\ensuremath{{\rho^0}}}\xspace \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace$ photoproduction are produced within the acceptance of the CJC and with low transverse momenta $p_T \lesssim 0.5~{\ensuremath{{\mathrm{GeV}}}}\xspace$. In the diffractive photoproduction regime, both the outgoing proton and electron avoid detection by escaping through the beampipe\footnote{In the studied energy region, the elastically scattered protons are mostly outside of the acceptance region of the H1 forward proton spectrometer (FPS) and the very forward proton spectrometer (VFPS)~\cite{FPSVFPS}.}. \subsubsection{Trigger} A dedicated, track-based {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction trigger condition was used for online event selection. Track information within the \mbox{2.3~$\mu$s} decision time of the L1 trigger was available through the FTT. For a positive trigger decision, at least two FTT tracks above a transverse momentum threshold of 160~{\ensuremath{{\mathrm{MeV}}}}\xspace and at most three tracks above a threshold of 100~{\ensuremath{{\mathrm{MeV}}}}\xspace were required. The sum of the charges of these tracks was restricted to $\pm 1$ elementary electric charge. In addition, trigger information from the CIP was used to ensure a low multiplicity interaction within the nominal interaction region along the \mbox{$z$-axis}. Vetoes on the inner forward part of the LAr calorimeter and on a scintillator wall in the forward direction were applied to suppress non-diffractive inelastic interactions. Further SpaCal and timing vetoes rejected events from beam-gas and beam-machine interactions. To control the high rate expected from the large {\ensuremath{{\rho^0}}}\xspace meson production cross section, the trigger was prescaled by an average factor of $\sim100$.
\subsubsection{Event reconstruction and selection} In order to select a sample of {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction events, a set of offline selection cuts is applied on top of the trigger requirements: \begin{itemize} \item The {\ensuremath{{\pi^+\pi^-}}}\xspace topology is ensured by requiring exactly two primary-vertex fitted, central tracks to be reconstructed. They need to satisfy some additional quality requirements, have opposite charge, and be within the acceptance region\footnote{The polar acceptance is reduced with respect to the CJC geometry to improve the performance of the {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction trigger and its MC simulation.} defined as $25^\circ < \theta < 155^\circ$ and \mbox{$p_T>0.16~{\ensuremath{{\mathrm{GeV}}}}\xspace$}. Low-momentum kaons, protons, and deuterons are suppressed using the difference between the measured energy loss ${\ensuremath{{\dd E/\dd x}}}\xspace$ of the tracks in the CJC and the expected loss for the respective particle hypothesis in a likelihood-based approach. The two tracks are then taken to be the pion candidates, and their 4-momentum vectors are calculated with the corresponding mass hypothesis. \item The photoproduction kinematic regime is ensured by vetoing events with a scattered electron candidate in the SpaCal or LAr. The SpaCal acceptance then limits the photon virtuality to $Q^2\lesssim 2.5~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$. \item The diffractive topology is ensured by requiring a large rapidity gap between the central tracks and any forward detector activity. Events with LAr clusters above a noise level of 0.6~{\ensuremath{{\mathrm{GeV}}}}\xspace in the forward region $\theta < 20^\circ$ are rejected. Information from the FTD is used to reconstruct forward tracks, and events with more than one forward track that cannot be matched to one of the central tracks are also rejected. 
The presence of a single unmatched track is permitted to reduce the sensitivity to the modelling of the forward energy flow in the forward detectors. This rapidity gap selection in particular also limits the mass of the proton-dissociative system to approximately ${\ensuremath{{m_Y}}}\xspace \lesssim 10~{\ensuremath{{\mathrm{GeV}}}}\xspace$. \item Background processes with additional neutral particles or charged particles outside of the central tracker acceptance are suppressed by cuts on the LAr and SpaCal energy. LAr and SpaCal clusters above respective noise levels of 0.6~{\ensuremath{{\mathrm{GeV}}}}\xspace and 0.3~{\ensuremath{{\mathrm{GeV}}}}\xspace are geometrically matched to the two central tracks: A cluster is associated to a track if it is within a cylinder of 60~{\ensuremath{{\mathrm{cm}}}}\xspace radius in the direction of the track upon calorimeter entry. The energies from clusters not associated to either track are summed up. Events are rejected if the total unassociated LAr or SpaCal energies exceed thresholds of 0.8~{\ensuremath{{\mathrm{GeV}}}}\xspace or 0.4~{\ensuremath{{\mathrm{GeV}}}}\xspace, respectively. This allows for a small amount of unassociated energy to account for residual noise or secondary particles produced in interactions of the pion candidates with the detector material. A further suppression of background events with additional final state particles is achieved by requiring a transverse opening angle between the two pion tracks of $\Delta \phi > 50^\circ$. \item For a reliable trigger performance and MC modelling thereof, the difference in the FTT track angles\footnote{The FTT $\phi$ angle is determined at a radial distance of $r=22~{\ensuremath{{\mathrm{cm}}}}\xspace$ from the $z$ axis.} must exceed $\Delta \phi_\mathrm{FTT} > 20^\circ$. \item The background is further reduced by rejecting out-of-time events via cuts on the LAr and CJC event timing information.
Background events from beam-gas and beam-wall interactions are suppressed by restricting the $z$ coordinate of the primary vertex to be within 25~{\ensuremath{{\mathrm{cm}}}}\xspace of the nominal interaction point. \end{itemize} The reaction $ep \rightarrow e {\ensuremath{{\pi^+\pi^-}}}\xspace Y$ is kinematically underconstrained since only the two pions in the final state are reconstructed. The mass of the {\ensuremath{{\pi^+\pi^-}}}\xspace system {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace is reconstructed from the 4-momenta of the two tracks under the pion mass hypothesis. The momentum transfer at the proton vertex $t$ and the scattering energy ${\ensuremath{{W_{\gamma p}}}}\xspace$ are reconstructed from the two pion 4-momenta: \begin{align} \trec &= -{\ensuremath{{\left(p_{T,\pi\pi}^{\mathrm{rec}}\right)^2}}}\xspace \label{eqn:exp_trec}\\ \intertext{and} {\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace &= \sqrt{2E_p \left( {\ensuremath{{E_{\pi\pi}^{\mathrm{rec}}}}}\xspace - {\ensuremath{{p_{z,\pi\pi}^{\mathrm{rec}}}}}\xspace \right)}\,\text. \label{eqn:exp_wrec} \end{align} Here, $E_p$ denotes the proton-beam energy and ${\ensuremath{{E_{\pi\pi}^{\mathrm{rec}}}}}\xspace$, ${\ensuremath{{p_{T,\pi\pi}^{\mathrm{rec}}}}}\xspace$, and ${\ensuremath{{p_{z,\pi\pi}^{\mathrm{rec}}}}}\xspace$ are the measured energy, transverse, and longitudinal 4-momentum components of the {\ensuremath{{\pi^+\pi^-}}}\xspace system. These two equations are approximations to \eqnref{eqn:theo_t} and \eqnref{eqn:theo_Wgp}, respectively. In some regions of the probed phase space, $Q^2$ may be similar in size to $t$, or ${\ensuremath{{m_Y}}}\xspace$ may be similar in size to {\ensuremath{{W_{\gamma p}}}}\xspace, such that these approximations are poor. Such effects are corrected for in the unfolding procedure discussed later in the text (cf. \secref{sec:exp_unfolding}).
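The two reconstruction formulas above can be sketched for a single event. In the following snippet the pion four-momenta and the proton-beam energy are illustrative placeholder values, not taken from the measurement:

```python
import math

# Illustrative pion four-momenta (E, px, py, pz) in GeV; placeholder values.
pi_plus = (0.90, 0.35, 0.10, 0.80)
pi_minus = (0.75, -0.30, 0.05, 0.65)

E_beam = 920.0  # assumed proton-beam energy in GeV

# Four-momentum of the pi+pi- system
E, px, py, pz = (a + b for a, b in zip(pi_plus, pi_minus))

# t_rec = -(transverse momentum of the pi+pi- system)^2
t_rec = -(px**2 + py**2)

# W_rec = sqrt(2 E_p (E - p_z))
W_rec = math.sqrt(2.0 * E_beam * (E - pz))
```

For these placeholder momenta the reconstructed values fall comfortably inside the analysis phase space cuts quoted below.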
The analysis phase space probed by this measurement is explicitly defined by detector-level cuts $15 < {\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace <90~{\ensuremath{{\mathrm{GeV}}}}\xspace$, $\trec < 3~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$, and $0.3 <{\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace < 2.3~{\ensuremath{{\mathrm{GeV}}}}\xspace$. The exclusivity requirements, which veto events with detector activity not related to the {\ensuremath{{\pi^+\pi^-}}}\xspace pair, further restrict the phase space to $Q^2 \lesssim 2.5~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$ and ${\ensuremath{{m_Y}}}\xspace\lesssim 10~{\ensuremath{{\mathrm{GeV}}}}\xspace$. The mean and median $Q^2$ in that phase space are approximately $0.02~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$ and $8\cdot10^{-6}~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$, respectively, as evaluated in the MC simulation. A total of $943\,962$ {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction event candidates pass all selection requirements. In \figref{fig:exp_ctrl}, the selected number of events is shown as a function of {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace, {\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace, and \trec. The distributions are compared to the MC model introduced in \secref{sec:theo_MCModel}. The {\ensuremath{{\rho^0}}}\xspace meson resonance at a mass of $\sim 770~{\ensuremath{{\mathrm{MeV}}}}\xspace$ clearly dominates the sample. Background contamination amounts to about 11\% and is investigated in the next section. \subsubsection{Background processes} The {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction sample includes various background contributions, even after the full event selection. The dominant background processes are the decays of diffractively produced $\omega \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace \pi^0$, $\phi\rightarrow K^+K^-$, or $\rho^\prime \rightarrow 4\pi$, as well as diffractive photon dissociation. 
Another source of background originates from interactions of the electron and proton beams with residual gas. Such \textit{reducible} background events are wrongly selected when charged kaons or protons are misidentified as pions or additional charged or neutral particles escape detection, e.g.,\ by being outside the central tracker acceptance or having energies below the calorimeter noise threshold. In addition to the {\ensuremath{{\rho^0}}}\xspace meson, some other vector mesons also decay to an exclusive {\ensuremath{{\pi^+\pi^-}}}\xspace state (cf. \tabref{tab:theo_diffVMSamples}). Rather than being treated as \textit{irreducible} background, these are included in the signal for the analysis of the {\ensuremath{{\pi^+\pi^-}}}\xspace production cross section. To study the reducible background contributions in more detail, multiple dedicated control regions are introduced: \begin{itemize} \item $\omega(782)$ mesons predominantly decay into the ${\ensuremath{{\pi^+\pi^-}}}\xspace \pi^0$ final state. The $\pi^0$ meson can be identified via energy deposits in the calorimeters that are not associated with either of the two pion tracks. An $\omega$ control region is defined by replacing the empty calorimeter selection by a cut $E_\mathrm{LAr}^\mathrm{!assoc} > 0.8~{\ensuremath{{\mathrm{GeV}}}}\xspace$ on the unassociated energy deposited in the LAr. Events with an $\omega$ meson are distinguished from those with a {\ensuremath{{\rho^\prime}}}\xspace meson by cuts on ${\ensuremath{{m_{\pi\pi}}}}\xspace < 0.55~{\ensuremath{{\mathrm{GeV}}}}\xspace$ and on the invariant mass of both tracks (assumed to be pions) and all unassociated clusters $m_\mathrm{evt} < 1.2~{\ensuremath{{\mathrm{GeV}}}}\xspace$. The $\omega$ meson purity achieved in this region is roughly 54\%. \item $\phi(1020)$ mesons predominantly decay into pairs of charged kaons. 
A $\phi$ control region is defined by replacing the $\ensuremath{{\mathrm{d}}} E/\ensuremath{{\mathrm{d}}} x$ pion identification selection cuts by a kaon selection and requiring the invariant $K^+K^-$ mass to be within 15~{\ensuremath{{\mathrm{MeV}}}}\xspace of the $\phi$ meson mass. The cut on the opening angle between the two tracks at the vertex is also removed. The $\phi$ meson purity achieved in this region is roughly 89\%. \item Excited ${\ensuremath{{\rho^\prime}}}\xspace$ mesons dominantly decay into four pions in various charge configurations. Due to the track veto in the trigger, additional tracks cannot be used to identify ${\ensuremath{{\rho^\prime}}}\xspace \rightarrow 4\pi$ events. Instead, unassociated energy deposits in the LAr $E_\mathrm{LAr}^\mathrm{!assoc} > 0.8~{\ensuremath{{\mathrm{GeV}}}}\xspace$ are required in place of the empty LAr signal selection. Events with a ${\ensuremath{{\rho^\prime}}}\xspace$ meson are distinguished from those with an $\omega$ meson by requiring $m_\mathrm{evt}>1.2~{\ensuremath{{\mathrm{GeV}}}}\xspace$. The ${\ensuremath{{\rho^\prime}}}\xspace$ meson purity achieved in this region is roughly 48\%. \item Particles from photon dissociation emerge primarily in the backward direction. A photon dissociation control region is thus defined by replacing the empty SpaCal signal requirement by a cut $ 4 < E_\mathrm{SpaCal}^\mathrm{!assoc} < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace$ on the unassociated energy deposit in the SpaCal. The lower cut removes $\omega$ and ${\ensuremath{{\rho^\prime}}}\xspace$ events, while the upper cut is retained as a veto against the scattered electron. The photon dissociation purity achieved in this region is roughly 78\%.
\end{itemize} For the {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section measurement, reducible background processes are subtracted in the unfolding procedure, where the templates for the respective diffractive background processes are taken from the DIFFVM MC samples, as described in \secref{sec:theo_MCModel}. The respective normalisation factors are determined by making use of the control regions in the unfolding process. The residual beam-gas-induced background is modelled in a data-driven approach, using events from electron and proton {\em pilot} bunches. For electron (proton) pilot bunches, there is a corresponding gap in the proton (electron) beam bunch structure, such that only interactions with rest-gas atoms may occur. The beam-gas background shape predictions estimated from pilot-bunch events are scaled to match the integrated beam current of the colliding bunches. The beam-gas-induced background amounts to about 2\%. \subsubsection{Proton dissociation tagging} \label{sec:forwardTagging} \noindent In proton-dissociative events, the proton remnants are produced in the very forward direction where the H1 detector is only sparsely instrumented. Consequently, the remnants cannot be fully reconstructed, and elastic and proton-dissociative scattering cannot be uniquely identified on an event-by-event basis. However, in many cases some of the remnants do induce signals in the forward instruments, either directly or via secondary particles that are produced in interactions with the detector or machine infrastructure. These signals are used to tag proton-dissociative events (\textit{tagging fraction}). At a much lower level, such signals can also be present in elastic scattering events (\textit{mistagging fraction}), e.g.,\ in the presence of detector noise or when the elastically scattered proton produces secondaries in interactions with the beam transport system.
By simulating the tagging and mistagging fractions of proton-dissociative and elastic scattering, the respective contributions to the cross section can be determined. The forward detectors used for tagging in this analysis are the FMD, the PLUG, and the FTS. An event is considered to be tagged by the FMD if there is at least one hit in any of the first three FMD layers, by the PLUG if there is more than one cluster above a noise level of 1.2~{\ensuremath{{\mathrm{GeV}}}}\xspace, or by the FTS if it produces at least one hit. A small contribution to the FTS signal, induced by secondary particles produced by the elastically scattered proton hitting a beam collimator, is discarded by applying acceptance cuts depending on $\trec$ and the location of hits in the FTS~\cite{Phd:Abolz}. The tagging information from the three detectors is combined by summing the total number of tags in an event: $ 0 \leq N_\mathrm{tags} \leq 3$. In turn, three tagging categories are defined: a zero-tag ($N_\mathrm{tags} = 0$), single-tag ($N_\mathrm{tags} = 1$), and multi-tag ($N_\mathrm{tags} \geq 2$) category. The respective tagging fractions for events passing the standard selection cuts are shown in \figref{fig:exp_ctrl_tagEff_ptsq} as a function of \trec. The tagging categories are used to split the dataset into three tagging control regions. The zero-tag region is dominated by $\sim 90\%$ elastic events, in the single-tag region their fraction is reduced to $\sim 64\%$, and in the multi-tag region proton dissociation dominates at $\sim 91\%$. \subsection{Unfolding} \label{sec:exp_unfolding} An unfolding procedure is applied to correct binned reconstructed detector-level distributions for various detector effects. The unfolding corrects the data for reducible background contributions, the finite resolution of reconstructed variables, and efficiency and acceptance losses.
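The folding relation underlying this correction can be illustrated with a toy two-bin example. The response matrix and yields below are invented for illustration; the analysis itself uses a regularised template fit (TUnfold) rather than the naive matrix inversion shown here:

```python
# Toy illustration of response-matrix unfolding: detector-level counts y
# relate to truth-level counts x via y = A x, where A[i][j] is the
# probability for an event from truth bin j to be reconstructed in bin i.
# All numbers are invented; the second column sums to 0.8, modelling a
# 20% efficiency loss.
A = [[0.8, 0.1],
     [0.2, 0.7]]
x_true = [1000.0, 500.0]

# Fold: detector-level expectation
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]

# Unfold by inverting the 2x2 matrix (a regularised template fit is
# better behaved for realistic, larger matrices)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]
x_hat = [sum(A_inv[i][j] * y[j] for j in range(2)) for i in range(2)]
# x_hat recovers x_true up to floating-point rounding
```

In the absence of statistical fluctuations the inversion recovers the truth exactly; with real data, regularisation tames the amplification of fluctuations by the inverse.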
Furthermore, it is set up to separate elastic and proton-dissociative scattering events. This makes possible the determination of elastic and proton-dissociative particle level distributions from which the corresponding differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections are derived. The cross sections are measured in a fiducial phase space that is slightly smaller than the analysis phase space defined by the event selection. This makes it possible to account for contributions migrating into and out of the fiducial phase space. The fiducial and analysis phase space cuts are summarised in \tabref{tab:exp_fiducialPS}. \begin{table}\centering \small \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{3pt} \begin{tabular}{@{}c@{\hspace{5em}} c@{}} \toprule \multicolumn{1}{@{}l}{Analysis phase space} & \multicolumn{1}{@{}l}{Fiducial measurement phase space} \\ \midrule \begin{tabular}[t]{ @{}r c c c r l@{} } 15.0 &$<$ & ${\ensuremath{{W_{\gamma p}}}}\xspace$ & $<$ & 90.0 & {\ensuremath{{\mathrm{GeV}}}}\xspace \\ & & $|t|$ & $<$ & 3.0 & {\ensuremath{{\mathrm{GeV}^2}}}\xspace \\ 0.3 &$<$ & ${\ensuremath{{m_{\pi\pi}}}}\xspace$ & $<$ & 2.3 & {\ensuremath{{\mathrm{GeV}}}}\xspace\\ & & $Q^2$ & $<$ & 2.5 & {\ensuremath{{\mathrm{GeV}^2}}}\xspace \\ & & \multirow{2}{*}{${\ensuremath{{m_Y}}}\xspace$} & \multirow{2}{*}{$<$} & \multirow{2}{*}{10.0} & \multirow{2}{*}{{\ensuremath{{\mathrm{GeV}}}}\xspace}\\ \end{tabular} & \begin{tabular}[t]{ @{}l r c c c r l@{}} &20.0 &$<$ & ${\ensuremath{{W_{\gamma p}}}}\xspace$ & $<$ & 80.0 & {\ensuremath{{\mathrm{GeV}}}}\xspace \\ & & & $|t|$ & $<$ & 1.5 & {\ensuremath{{\mathrm{GeV}^2}}}\xspace \\ &0.5 &$<$ & ${\ensuremath{{m_{\pi\pi}}}}\xspace$ & $<$ & 2.2 & {\ensuremath{{\mathrm{GeV}}}}\xspace\\ & & & $Q^2$ & $<$ & 2.5 & {\ensuremath{{\mathrm{GeV}^2}}}\xspace\\ \multicolumn{1}{@{}l}{elastic:} & & & ${\ensuremath{{m_Y}}}\xspace$ & $=$ & $m_p$ \\ \multicolumn{1}{@{}l}{$p$-dissociative:} & $m_p$ & $<$ & 
${\ensuremath{{m_Y}}}\xspace$ & $<$ & 10.0 & {\ensuremath{{\mathrm{GeV}}}}\xspace \\ \end{tabular}\\ \bottomrule \end{tabular} \caption{Analysis and fiducial measurement phase space. At detector level, the respective cuts are applied to {\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace, \trec, and {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace, the $Q^2$ cut is replaced by the veto on the reconstruction of the scattered electron, and the ${\ensuremath{{m_Y}}}\xspace$ cut is replaced by the rapidity gap requirement as detailed in the text. } \label{tab:exp_fiducialPS} \end{table} \subsubsection{Regularised template fit} Differential cross section measurements are performed for various distributions of the variables ${\ensuremath{{m_{\pi\pi}}}}\xspace$, {\ensuremath{{W_{\gamma p}}}}\xspace, and $t$, and combinations thereof. The beam-gas background template is subtracted from the considered data distribution and the result is used as input to the unfolding. The unfolding is performed by means of a regularised template fit within the TUnfold framework~\cite{Schmitt:2012kp}. A response matrix $\bm A$ is introduced, with elements $A_{ij}$ describing the probability that an event generated in bin $j$ of a truth-level distribution $\vec{x}$ is reconstructed in bin $i$ of a detector-level distribution $\vec{y}$. In the definition of the response matrix, elastic and proton-dissociative signal and background MC processes are included as dedicated sub-matrices. This makes it possible to separate the elastic and proton-dissociative signal components in the unfolding. Also, background subtraction is implicitly performed during the unfolding where the normalisation of the backgrounds is determined in the fit. At detector level, the response matrix is split into signal and background control regions as defined above. The signal region is further split into three and the background regions into two orthogonal forward tagging categories. 
This constrains the respective MC contributions in the template fit. Migrations into and out of the fiducial phase space are considered by including side bins in each sub-matrix both at detector and at truth level~\cite{Phd:Abolz}. These contain events passing the analysis phase space cuts but failing the fiducial cuts (cf. \tabref{tab:exp_fiducialPS}). As an illustration, the response matrix used for unfolding the one-dimensional {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace distributions is given in \figref{fig:exp_respMatrix_mpipi}. For all response matrices, most truth bins are found to be well constrained by at least one reconstruction-level bin with a purity above 50\%. As a result, the statistical correlations between bins of the unfolded one-dimensional distributions are small. The somewhat larger statistical correlations in the unfolding of multi-dimensional distributions are reduced by introducing a Tikhonov regularisation term on the curvature into the template fit~\cite{Schmitt:2012kp,tikonov}. The truth-level MC distributions, which are adjusted to data as explained in \secref{sec:theo_MCModel}, are used as a bias in the regularisation. However, regularisation is not used in the unfolding of the one-dimensional {\ensuremath{{m_{\pi\pi}}}}\xspace distributions. This makes possible the measurement of the $\omega$ meson mass and production cross section, which were found to be very sensitive to any sort of regularisation. \subsubsection{{\ensuremath{{\pi^+\pi^-}}}\xspace cross section definition} \label{sec:exp_xSecPiPi} Differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections are calculated from the number of unfolded {\ensuremath{{\pi^+\pi^-}}}\xspace events.
The double-differential cross section in {\ensuremath{{m_{\pi\pi}}}}\xspace and $t$ is defined as \begin{equation} \left[\dfrac{\ensuremath{{\mathrm{d}}}^{2} \sigma(\gamma p \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace Y) ({\ensuremath{{m_{\pi\pi}}}}\xspace, t; {\ensuremath{{W_{\gamma p}}}}\xspace)}{ \ensuremath{{\mathrm{d}}} t \, \ensuremath{{\mathrm{d}}} {\ensuremath{{m_{\pi\pi}}}}\xspace} \right]_j = \dfrac{ \hat{x}_j }{ \Delta t \, \Delta {\ensuremath{{m_{\pi\pi}}}}\xspace} \dfrac{1}{ \mathcal{L}\, \ensuremath{{\Phi_{\gamma/e}}}\xspace\left({\ensuremath{{W_{\gamma p}}}}\xspace\right) } \label{eqn:exp_xSecDef} \end{equation} in bin $j$ of an unfolded distribution and with $\hat x_j$ being the number of unfolded events. Differentiation is approximated by dividing the event yields by the {\ensuremath{{m_{\pi\pi}}}}\xspace and $t$ bin widths, $\Delta {\ensuremath{{m_{\pi\pi}}}}\xspace$ and $\Delta t$, respectively. No bin centre correction is performed for the variables {\ensuremath{{m_{\pi\pi}}}}\xspace and $t$. Instead, predictions or fit functions are integrated over the respective bin width when comparing to data. The event yields are normalised by the integrated $ep$ luminosity $\mathcal{L}$. Photoproduction cross sections are calculated by normalising the $ep$ cross sections by means of the integrated effective photon flux $\ensuremath{{\Phi_{\gamma/e}}}\xspace$, which is calculated for the relevant {\ensuremath{{W_{\gamma p}}}}\xspace ranges according to \eqnref{eqn:theo_intFlux}. \subsection{Cross section uncertainties} \label{sec:systematics} The measured cross sections are subject to statistical and systematic uncertainties. Statistical uncertainties originate from the limited size of the available data and MC samples. Systematic uncertainties originate from the MC modelling of the signal and background processes and their respective kinematic distributions, as well as from the simulation of the detector response.
Detector uncertainties are estimated by either varying the simulated detector response to MC events or by modifying the event selection procedure simultaneously in both the MC samples and data. Model uncertainties are estimated by varying the generation and reweighting parameters of the MC samples. \subsubsection{Statistical uncertainties} Two sources of statistical uncertainties are considered: uncertainties of the data distributions and uncertainties of the unfolding factors obtained from the MC samples. The data uncertainties include contributions from the subtraction of the beam-gas background. Statistical uncertainties are propagated through the unfolding as described in the TUnfold documentation~\cite{Schmitt:2012kp}. The finite detector resolution imposes correlations on the statistical uncertainties after the unfolding. \subsubsection{Experimental systematic uncertainties} In order to estimate uncertainties related to the knowledge of the H1 detector response, the unfolding is repeated with systematically varied response matrices. Two types of variations are considered for different sources of uncertainty. Variations of the MC samples on detector level are introduced and are propagated to the cross sections by repeating the unfolding with varied response matrices. Other uncertainties are estimated by modifying the selection procedure. Such variations affect both the migration matrices and the input data distributions simultaneously. For two-sided variations, the full absolute shifts between the nominal and varied unfolded distributions are used to define up and down uncertainties. For one-sided variations, the full shift is taken as a two-sided uncertainty~\cite{Phd:Abolz}. Concerning the trigger, the following uncertainties are considered: \begin{itemize} \item The trigger scale factors for the CIP and FTT correction are varied to account for statistical and shape uncertainties. 
A further uncertainty is estimated to cover the propagation of the correction factors from the electroproduction regime, where they are derived, to photoproduction~\cite{Phd:Abolz}. \item The trigger scale factors for the correction of the forward vetoes are varied to account for statistical and shape uncertainties~\cite{Phd:Abolz}. \item Pions in the very backward direction may potentially interfere with the SpaCal trigger veto via secondary particles produced in {\em nuclear interactions}. Nuclear interactions of the pions with the beam transport system and detector material are not modelled perfectly by the simulation. To estimate a potential impact, the cut on the upper track polar angle acceptance is varied between $150^\circ$ and $160^\circ$. \end{itemize} Concerning the track reconstruction and simulation, the following uncertainties are considered: \begin{itemize} \item The uncertainty of the modelling of the geometric CJC acceptance in $p_T$ is estimated by varying the acceptance cut to $p_T >0.18~{\ensuremath{{\mathrm{GeV}}}}\xspace$. \item The uncertainty for potential mismodelling of the forward tracking detectors is estimated by varying the veto on the number of forward tracks to allow either none or up to two tracks in the selection. \item The uncertainty of the MC $z$-vertex distribution is estimated by varying the MC distribution to cover small discrepancies in the mean and tails relative to the observed $z$-vertex distribution in data. \item In the particle identification, the cuts on the kaon, proton, and deuteron {\ensuremath{{\dd E/\dd x}}}\xspace rejection likelihoods are varied up and down while simultaneously varying the pion {\ensuremath{{\dd E/\dd x}}}\xspace selection likelihood down and up. \item Inhomogeneities in the magnetic field may not be fully modelled in the simulation. Nor may the modelling of nuclear interactions of the pions with the detector material be completely accurate. 
These effects are estimated to result in an uncertainty on the modelled track $p_T$ resolution of 20\%. Their impact is evaluated by smearing the reconstructed track $p_{T,\mathrm{rec}} \rightarrow p_{T,\mathrm{rec}} \pm 0.2\, \left(p_{T,\mathrm{rec}}-p_{T,\mathrm{gen}} \right)$ with respect to the generated true $p_{T,\mathrm{gen}}$ in the signal {\ensuremath{{\pi^+\pi^-}}}\xspace MC samples. The result is propagated to all kinematic variables reconstructed from the two pion 4-momenta. \item Similarly, a 20\% uncertainty is assumed on the resolution of the track $\theta$ measurement to cover potential mismodellings in the simulation. \end{itemize} Concerning the track momentum scale, the following uncertainties are considered: \begin{itemize} \item The field strength of the H1 solenoid is known to a level of 0.3\%~\cite{Abt:1996xvhi} or better. To estimate a potential impact on the absolute track $p_T$ scale, the reconstructed MC $p_T$ values are varied up and down by $\pm 0.3\%$. The variation is performed simultaneously for both tracks. \item The energy loss correction depends on a precise knowledge of the material budget, which is known at a level of 7\%. Through the energy loss correction in the track reconstruction, this uncertainty results in an average track $p_T$ uncertainty of $0.4~{\ensuremath{{\mathrm{MeV}}}}\xspace$. \end{itemize} Concerning the calorimeters, the following uncertainties are considered: \begin{itemize} \item The noise cuts on LAr and SpaCal energy clusters are independently varied up and down by $\pm 0.2~{\ensuremath{{\mathrm{GeV}}}}\xspace$ and $\pm 0.1~{\ensuremath{{\mathrm{GeV}}}}\xspace$, respectively. \item The energy scales of the LAr and SpaCal clusters are independently varied by $\pm 10\%$. These variations are applied prior to the respective cluster noise cut. \item A mismodelling of nuclear interactions may also affect the calorimeter response. 
A potential impact is estimated by varying the cut applied for associating calorimeter clusters to tracks by $\pm 10~{\ensuremath{{\mathrm{cm}}}}\xspace$. \end{itemize} Concerning the forward detectors, the following tagging uncertainties are considered: \begin{itemize} \item The tagging and mistagging fractions of the forward detectors are varied independently. The FTS tagging fraction is varied by $\pm 5\%$ in the proton-dissociative MC samples and the mistagging fraction by $\pm 50\%$ in the elastic samples. Similarly, these fractions in the FMD are varied by $\pm 5\%$ and $\pm 5\%$ and in the PLUG by $\pm 5\%$ and $\pm 100\%$, respectively. The sizes of the variations are estimated to cover observed mismodellings in the tagging fractions of the respective detectors. \end{itemize} \subsubsection{Model uncertainties} In order to estimate model uncertainties, the MC samples are modified on generator-level. Model uncertainties are propagated to the cross sections by repeating the unfolding with the varied migration matrices and MC bias distributions. Cross section uncertainties are then calculated from the shifted unfolded spectra in the same manner as for the experimental uncertainties. Generally, the unfolding approach is rather insensitive to many aspects of the MC modelling. The following model variations are considered: \begin{itemize} \item Uncertainties on the $Q^2$ and ${\ensuremath{{m_Y}}}\xspace$ dependencies of the MC samples are estimated~\cite{Alexa:2013xxa}. The $Q^2$ dependence of the MC samples is varied by applying an event weight ${(1+Q^2/m_{\ensuremath{{\mathrm{VM}}}\xspace}^2)^{\pm 0.07}}$ in agreement with experimental uncertainties~\cite{Aaron:2009xp}. The ${\ensuremath{{m_Y}}}\xspace$ dependence of the proton-dissociative sample is varied by applying a weight ${(1/{\ensuremath{{m_Y^2}}}\xspace)^{\pm 0.15}}$. 
\item The tuning parameters of the MC samples derived to describe the kinematic data distributions (\secref{sec:theo_MCModel}) are independently varied up and down~\cite{Phd:Abolz}. \item For the ${\ensuremath{{\rho^\prime}}}\xspace$ background MC samples, an uncertainty on the ratio of $\rho(1450)$ and $\rho(1700)$ is estimated by varying it from $1{:}1$ up and down to $2{:}1$ and $1{:}2$, \linebreak respectively. An uncertainty on the ${\ensuremath{{\rho^\prime}}}\xspace$ decay modes is estimated by varying \linebreak ${{\ensuremath{{\mathcal{BR}}}}\xspace({\ensuremath{{\rho^\prime}}}\xspace \rightarrow \rho^\pm \pi^\mp \pi^0) = (50\pm25)\%}$ while simultaneously scaling all other decay modes proportionally. \item A shape uncertainty on the photon-dissociative mass distribution is estimated by reweighting the distribution by $(1/m_{X}^2)^{\pm 0.15}$. \end{itemize} \subsubsection{Normalisation uncertainties} Normalisation uncertainties are directly applied to the unfolded cross sections. The following uncertainties are considered: \begin{itemize} \item The uncertainty of the track reconstruction efficiency due to modelling of the detector material is 1\% per track, leading to a 2\% normalisation uncertainty. \item The integrated luminosity of the used dataset is known at a precision of 2.7\%. \item Higher-order QED effects have been estimated to be smaller than 2\%~\cite{Kurek:1996ez}, which is taken as a normalisation uncertainty. \end{itemize} This results in a combined 3.9\% normalisation uncertainty on the measured cross sections. \subsection{Cross section fits} \label{sec:exp_fits} For the analysis, various parametric models are fitted to the measured cross section distributions. Fits are performed by varying model parameters to minimise a $\chi^2$ function.
Parametrisations of differential distributions are integrated over the respective kinematic bins and divided by their bin widths in order to match the differential cross section definition (cf. \eqnref{eqn:exp_xSecDef}). The $\chi^2$ function takes into account only the statistical uncertainties, represented by the covariance matrix. Nominal fit parameters are obtained by fitting the nominal measured cross section distributions. The corresponding $\chi^2$ value relative to the number of degrees of freedom $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace$ is used as a measure of the quality of a fit. Systematic uncertainties are then propagated through a fit via an \textit{offset method}, where each systematic cross section variation is fitted independently. The resulting shifts in the fit parameters relative to the nominal parameters are used to define the systematic parameter uncertainties. \section{Results} \label{sec:results} \noindent Elastic and proton-dissociative {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections are measured. The measurement is presented integrated over the fiducial phase space and as one-, two-, and three-dimensional cross section distributions as functions of {\ensuremath{{m_{\pi\pi}}}}\xspace, {\ensuremath{{W_{\gamma p}}}}\xspace, and $t$. Subsequently, {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections as functions of {\ensuremath{{W_{\gamma p}}}}\xspace and $t$ are extracted via fits to the invariant {\ensuremath{{\pi^+\pi^-}}}\xspace mass distributions (cf.~\eqnref{eqn:theo_rhoSoedMass}). The procedure is exemplified with the differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections {\ensuremath{{\dd \sigma(\gamma p \rightarrow \pipi Y)/\dd \mpipi}}}\xspace as functions of {\ensuremath{{m_{\pi\pi}}}}\xspace. One- and two-dimensional {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section distributions are then parametrised and interpreted using fits. 
In particular, a Regge fit of the elastic {\ensuremath{{\rho^0}}}\xspace meson production cross section as a function of {\ensuremath{{W_{\gamma p}}}}\xspace and $t$ makes possible the extraction of the effective leading Regge trajectory $\alpha(t)$. \subsection{Fiducial {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections} \label{sec:res_sigmaPiPiFid} \noindent The fiducial {\ensuremath{{\pi^+\pi^-}}}\xspace electroproduction cross section is measured in the phase space defined in \tabref{tab:exp_fiducialPS} by unfolding one-dimensional {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace distributions, integrating the event yield over all bins, and normalising the result by the integrated luminosity. It is subsequently turned into a photoproduction cross section by normalisation to the integrated effective photon flux. In the phase space considered, the integrated effective flux calculated with \eqnref{eqn:theo_intFlux} is ${\ensuremath{{\Phi_{\gamma/e}}}}\xspace = 0.1368$. This yields a {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross section of \begin{equation} \ensuremath{{\sigma(\gamma p \rightarrow \pipi Y)}}\xspace = 16.20 \pm 0.05\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+1.11}_{-1.15}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \text, \ \text{ for }\ m_p \leq {\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace \text. \end{equation} The cross section is measured at an average energy $\langle {\ensuremath{{W_{\gamma p}}}}\xspace \rangle \simeq 43~{\ensuremath{{\mathrm{GeV}}}}\xspace$ as estimated from the MC simulation.
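The normalisation chain just described (event yield divided by the integrated luminosity, then by the integrated photon flux) can be sketched as follows. Only the flux value 0.1368 is taken from the text; the yield and luminosity are hypothetical placeholders:

```python
# sigma_gamma_p = N_unfolded / (L_ep * Phi_gamma_e).
# Only phi_flux is from the text; yield and luminosity are placeholders.
phi_flux = 0.1368        # integrated effective photon flux (from the text)
lumi_ep = 1.0e6          # hypothetical integrated ep luminosity [ub^-1]
n_unfolded = 2.216e6     # hypothetical unfolded signal yield

sigma_ep = n_unfolded / lumi_ep       # ep cross section [ub]
sigma_gamma_p = sigma_ep / phi_flux   # gamma-p cross section [ub]
```

With these placeholder inputs the result comes out near the quoted fiducial value, illustrating the size of the flux normalisation.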
The elastic and proton-dissociative components of the cross section are separated through the unfolding which yields \begin{alignat}{3} \ensuremath{{\sigma(\gamma p \rightarrow \pipi p)}}\xspace =&&\ 11.&52 &&\pm 0.06\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.76}_{-0.78}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \text{ and} \\ \ensuremath{{\sigma(\gamma p \rightarrow \pipi Y)}}\xspace =&&\ 4.&68 &&\pm 0.06\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.62}_{-0.64}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \text, \ \text{ for }\ m_p < {\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace \text, \end{alignat} respectively\footnote{The fiducial cross sections can be obtained by alternatively unfolding the $\trec$, {\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace, or multi-dimensional distributions. The variations of the results are similar in size to the statistical uncertainties. Comparable statistical and systematic uncertainties are observed.}. The uncertainties of the two components are correlated with a statistical and symmetrised total Pearson correlation coefficient of $\rho_\mathrm{stat} = -0.59$, and $\rho_\mathrm{tot} = +0.30$, respectively. The measurements are statistically very precise at a level of 0.5\% (1.2\%) but have large systematic uncertainties of 6.3\% (13.2\%) for the elastic (proton-dissociative) component. The uncertainty of the elastic component is dominated by the trigger and the normalisation uncertainties of 4.1\% and 3.9\%, respectively, whereas the uncertainties associated with the tagging and the calorimeter dominate the proton-dissociative component at 8.4\% and 7.3\%, respectively. A more detailed breakdown of the cross section uncertainties is given in \tabref{tab:res_fidUncGroups}. 
\begin{table}[!ttt]\centering \small \begin{tabular}{@{}l@{\hspace{4em}} c@{\hspace{5em}} c@{}} \\ \toprule & \multicolumn{2}{c}{Relative cross section uncertainty [\%]} \\ \cmidrule{2-3} Source of uncertainty & ${{{\ensuremath{{m_Y}}}\xspace} = m_p}$ & {${m_p < {{\ensuremath{{m_Y}}}\xspace} < 10~{{\ensuremath{{\mathrm{GeV}}}}\xspace}}$}\\ \midrule Statistical & 0.5 & 1.2 \\[6pt] Trigger & 4.1 & 5.3 \\ Tracking & 1.4 & 1.3 \\ Momentum scale & 0.1 & 0.1 \\ Calorimeter & 1.5 & 7.3 \\ Tagging & 2.0 & 8.4 \\[6pt] Normalisation & 3.9 & 3.9 \\[6pt] MC model (${\ensuremath{{m_Y}}}\xspace,Q^2,$ bgr.) & 2.0 & 2.7 \\ MC model (${\ensuremath{{m_{\pi\pi}}}}\xspace,{\ensuremath{{W_{\gamma p}}}}\xspace,t$) & 0.1 & 0.4 \\ \midrule Total & 6.6 & 13.3 $\ $ \\ \bottomrule \end{tabular} \caption{Summary of the combined impact of systematic uncertainties on the fiducial {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections. The numbers are obtained by unfolding the one-dimensional {\ensuremath{{m_{\pi\pi}}}}\xspace distribution and with symmetrised systematic uncertainties.} \label{tab:res_fidUncGroups} \end{table} Photoproduction of {\ensuremath{{\pi^+\pi^-}}}\xspace mesons has a fairly large cross section and forms a sizeable contribution to the total $\gamma p$ cross section, which is known for similar {\ensuremath{{W_{\gamma p}}}}\xspace from cosmic ray experiments~\cite{Vereshkov:2003cp}. Within the restrictions of the fiducial phase space in {\ensuremath{{m_{\pi\pi}}}}\xspace, $t$, and {\ensuremath{{m_Y}}}\xspace, the elastic and proton-dissociative processes contribute to the total cross section by about 9\% and 4\%, respectively. The elastic {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross section has been measured before in $ep$ collisions at slightly higher {\ensuremath{{W_{\gamma p}}}}\xspace \cite{Aid:1996bs,Breitweg:1997ed}. 
Considering the differences in {\ensuremath{{W_{\gamma p}}}}\xspace and the energy dependence of the elastic cross section (see below), the measurements are found to be consistent. There has not been a previous measurement of the proton-dissociative {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross section in a comparable phase space. \subsection{Differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections} \label{sec:res_sigmaRhoFid} \noindent The elastic and proton-dissociative differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections \linebreak {\ensuremath{{\dd \sigma(\gamma p \rightarrow \pipi Y)/\dd \mpipi}}}\xspace are measured as functions of {\ensuremath{{m_{\pi\pi}}}}\xspace by unfolding one-dimensional {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace distributions. Numerical results are given in \apxref{apx:sec_pipiXSecTables} and the cross section distributions are displayed in \figref{fig:results_dSigma_dm}. The photoproduction of {\ensuremath{{\pi^+\pi^-}}}\xspace mesons is dominated by the {\ensuremath{{\rho^0}}}\xspace meson resonance peaking at a mass of around 770~{\ensuremath{{\mathrm{MeV}}}}\xspace. The differential cross sections fall off steeply towards higher masses with a second broad excited $\rho^\prime$ meson resonance appearing at around 1600~{\ensuremath{{\mathrm{MeV}}}}\xspace. At the {\ensuremath{{\rho^0}}}\xspace meson resonance peak, the proton-dissociative cross section is around 40\% of the elastic cross section with the difference becoming smaller towards higher masses. The uncertainties of the measurement vary slowly with {\ensuremath{{m_{\pi\pi}}}}\xspace. At the {\ensuremath{{\rho^0}}}\xspace meson peak the statistical and systematic uncertainties of the elastic differential cross section are roughly 1\% and 7\%, respectively. At the position of the $\rho^\prime$ peak they have grown to 7\% and 13\%, respectively.
The corresponding uncertainties of the differential proton-dissociative cross section are about twice as large. \subsubsection{Fit of the {\ensuremath{{m_{\pi\pi}}}}\xspace dependence} In order to extract the {\ensuremath{{\rho^0}}}\xspace meson contributions to the {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections, the {\ensuremath{{m_{\pi\pi}}}}\xspace lineshape defined in \eqnref{eqn:theo_rhoSoedMass} is fitted to the data. As this paper focusses on the analysis of {\ensuremath{{\rho^0}}}\xspace meson production, the fit is performed only in the analysis range $0.6~{\ensuremath{{\mathrm{GeV}}}}\xspace \leq {\ensuremath{{m_{\pi\pi}}}}\xspace \leq 1.0~{\ensuremath{{\mathrm{GeV}}}}\xspace$, where contributions from excited $\rho^\prime$ meson states can be neglected. The parametrisation is fitted simultaneously to the elastic and proton-dissociative distributions with the parameter settings described in \tabref{tab:theo_SodingPars}. The model describes the data well with the fit yielding $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace= 24.6/24$. The resulting model parameters are presented in \tabref{tab:results_dSigma_dm_pars}. The fitted curves are shown in \figref{fig:results_dSigma_dm_fit}, where they are compared to the measured data.
\begin{table}[!thb]\centering \small \renewcommand{\arraystretch}{1.3} \begin{tabular}{@{}l@{\hspace{4em}} d{3.4} d{1.4} l c@{\hspace{4em}} d{3.4} d{1.4} l@{}} \toprule Shared parameter & \multicolumn{1}{l}{Value} & \multicolumn{1}{l}{$\Delta_{\ensuremath{{\mathrm{stat.}}}\xspace}$} & \multicolumn{1}{l}{$\Delta_{\ensuremath{{\mathrm{syst.}}}\xspace}$} &\\ \midrule $ m_{\rho}~[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & 770.8 & 1.3 &$ ^{+2.3}_{-2.4}$ \\ $ \Gamma_{\rho}~[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & 151.3 & 2.2 &$ ^{+1.6}_{-2.8}$ \\ $ m_{\omega}~[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & 777.9 & 2.2 &$ ^{+4.3}_{-2.2}$ \\ $ \Gamma_{\omega}~[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & 8.5 & \multicolumn{2}{l}{PDG fixed} \\ $ \delta_{nr}$ & 0.76 & 0.35 &$ ^{+0.14}_{-0.08}$ \\ \midrule & \multicolumn{3}{c}{${\ensuremath{{m_Y}}}\xspace = m_p$} && \multicolumn{3}{c}{$m_p < {\ensuremath{{m_Y}}}\xspace < 10~{{\ensuremath{{\mathrm{GeV}}}}\xspace}$} \\ \cmidrule{2-4} \cmidrule{6-8} Independent parameter & \multicolumn{1}{l}{Value} & \multicolumn{1}{l}{$\Delta_{\ensuremath{{\mathrm{stat.}}}\xspace}$} & \multicolumn{1}{l}{$\Delta_{\ensuremath{{\mathrm{syst.}}}\xspace}$} && \multicolumn{1}{l}{Value} & \multicolumn{1}{l}{$\Delta_{\ensuremath{{\mathrm{stat.}}}\xspace}$} & \multicolumn{1}{l}{$\Delta_{\ensuremath{{\mathrm{syst.}}}\xspace}$}\\ \midrule $ A~[\ensuremath{{\mu\mathrm{b}}}\xspace/{\ensuremath{{\mathrm{GeV}}}}\xspace^{2}]$ & 48.4 & 0.8 & $^{+3.3}_{-3.2}$ && 20.8 & 0.4 & $^{+2.7}_{-2.6}$\\ $ f_{\omega}$ & 0.166 & 0.017 & $^{+0.008}_{-0.023}$ && 0.135 & 0.042 & $^{+0.042}_{-0.036}$\\ $ \phi_{\omega}$ & -0.53 & 0.22 & $^{+0.21}_{-0.17}$ && -0.02 & 0.34 & $^{+0.31}_{-0.19}$\\ $ f_{nr}$ & 0.189 & 0.026 & $^{+0.025}_{-0.016}$ && 0.145 & 0.025 & $^{+0.029}_{-0.014}$\\ $ \Lambda_{nr}~[{\ensuremath{{\mathrm{GeV}}}}\xspace]$ & 0.18 & 0.59 & $^{+0.20}_{-0.10}$ && 0.1 & 1.3 & $^{+0.4}_{-0.3}$\\ \bottomrule \end{tabular}\\ \caption{% Free parameters for the fit of the 
single-differential elastic and proton-dissociative cross section {\ensuremath{{\dd \sigma(\gamma p \rightarrow \pipi Y)/\dd \mpipi}}}\xspace as a function of {\ensuremath{{m_{\pi\pi}}}}\xspace. The nominal fit values and statistical and systematic uncertainties are given. A full uncertainty breakdown and statistical correlations are provided~\cite{H1_data}. The corresponding fit is shown in \figref{fig:results_dSigma_dm_fit}.} \label{tab:results_dSigma_dm_pars} \end{table} Within the fit model, the differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections obtain significant contributions from the {\ensuremath{{\rho^0}}}\xspace meson resonance and non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace production. The two corresponding amplitudes give rise to large interference terms that result in a skewing of the resonance lineshapes. At the measured {\ensuremath{{\rho^0}}}\xspace meson mass, the relative contributions of the {\ensuremath{{\rho^0}}}\xspace resonance and non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace production to the elastic differential {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross section are $[79.1 \pm 3.2~(\ensuremath{{\mathrm{stat.}}}\xspace)~{}^{+4.4}_{-1.4}~(\ensuremath{{\mathrm{syst.}}}\xspace)]\%$ and $[7.1 \pm 0.6~(\ensuremath{{\mathrm{stat.}}}\xspace)~{}^{+0.7}_{-0.5}~(\ensuremath{{\mathrm{syst.}}}\xspace)]\%$, respectively. The corresponding contributions to the proton-dissociative cross section are $[85.9 \pm 3.8~(\ensuremath{{\mathrm{stat.}}}\xspace)~{}^{+3.8}_{-3.2}~(\ensuremath{{\mathrm{syst.}}}\xspace)]\%$ and $[4.8 \pm 0.6~(\ensuremath{{\mathrm{stat.}}}\xspace)~{}^{+0.6}_{-0.5}~(\ensuremath{{\mathrm{syst.}}}\xspace)]\%$, respectively. Most noticeably, the relative non-resonant contribution is smaller for proton-dissociative than for elastic scattering resulting in a weaker skewing. 
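Within the fit model, the quoted fractions refer to the squared moduli of the individual amplitudes and therefore need not add up to 100\%; the remainder is carried by the interference terms and the small $\omega$ admixture. For the elastic cross section at the {\ensuremath{{\rho^0}}}\xspace meson mass, for example,
\begin{equation*}
1 - 0.791 - 0.071 \approx 0.14\,,
\end{equation*}
i.e.\ roughly 14\% of the differential cross section is attributable to interference and the $\omega$ contribution.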
In the analysed mass range, the non-resonant differential cross section exhibits little dependence on {\ensuremath{{m_{\pi\pi}}}}\xspace. For this reason, the two fit parameters $\delta_{\ensuremath{{\mathrm{nr}}}\xspace}$ and $\Lambda_{\ensuremath{{\mathrm{nr}}}\xspace}$ are strongly correlated and cannot be constrained very well individually. Qualitatively, the result of the fit is similar to past HERA {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction analyses~\cite{Breitweg:1997ed}. Quantitative comparisons of fit parameters are not possible because different parametrisations were used. Variants of the fit were also explored but did not significantly improve the compatibility with the data. When including a possible admixture of incoherent background, the respective contribution is found to be compatible with zero. This conclusion is largely independent of the assumed incoherent background shape. Models including barrier factors~\cite{VonHippel:1972fg} can also describe the data well. Obviously, all extracted particle masses and widths are model dependent. A detailed discussion of these effects is beyond the scope of this paper. The rich structure of the interference term near 770~{\ensuremath{{\mathrm{MeV}}}}\xspace, visible in \figref{fig:results_dSigma_dm_fit}, originates from the $\omega$ resonance. In the fit model, the $\omega$ resonance amplitude prevents the net interference contribution from vanishing at the {\ensuremath{{\rho^0}}}\xspace meson mass and shifts the position of the sign change of the interference term towards the $\omega$ meson mass. In the {\ensuremath{{\pi^+\pi^-}}}\xspace production cross section this is visible as a characteristic step. The impact of the $\omega$ meson on the measured differential cross sections is similar in size to observations made in $e^+e^- \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace$ production~\cite{ee_mrho}. In the present analysis, this effect is measured for the first time at HERA.
The masses of the ${\ensuremath{{\rho^0}}}\xspace$ and $\omega$ mesons and the width of the ${\ensuremath{{\rho^0}}}\xspace$ meson obtained from the fit are compared to the world average PDG values~\cite{PDG} in \tabref{tab:res_mgRhoComp}. The PDG lists two sets of {\ensuremath{{\rho^0}}}\xspace parameters from {\ensuremath{{\rho^0}}}\xspace meson photoproduction and production in $e^+e^-$ scattering, respectively. The value measured for the {\ensuremath{{\rho^0}}}\xspace mass is compatible with previous photoproduction measurements~\cite{Breitweg:1997ed,Abramowicz:2011pk,Bartalucci:1977cp}, confirming a slightly lower {\ensuremath{{\rho^0}}}\xspace mass parameter as compared to typical $e^+e^-$ measurements~\cite{ee_mrho}. The mass uncertainty is dominated by the uncertainties in the magnetic field and the material in the detector affecting the track $p_T$ scale. The precision of the presented value is similar to the most precise previous photoproduction measurements. The {\ensuremath{{\rho^0}}}\xspace width is consistent with both PDG values and has an uncertainty comparable to that of previous photoproduction measurements. The $\omega$ mass is also compatible with the PDG value but has a sizeable uncertainty. The observed difference between the measured and the PDG $\omega$ mass is similar to that of the ${\ensuremath{{\rho^0}}}\xspace$ mass relative to the $e^+e^-$ measurement.
\begin{table}[!ttt] \centering \small \renewcommand{\arraystretch}{1.3} \setlength{\tabcolsep}{1em} \begin{tabular}{@{}l@{\hspace{3em}} c c c c@{}} \toprule Parameter & This measurement & PDG $\gamma p$ & PDG $e^+e^-$ & $\Delta$($e^+e^-$, H1)\\ \midrule $ m_{{\ensuremath{{\rho^0}}}\xspace}[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & $770.8~{}^{+2.6}_{-2.7}$ & $769.0\pm 1.0$ & $775.26 \pm 0.25$ & $4.5 ^{+2.7}_{-2.6}$ \\ $ \Gamma_{{\ensuremath{{\rho^0}}}\xspace}~[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & $151.3~{}^{+2.7}_{-3.6}$ & $151.7\pm 2.6$ & $147.8\ \, \pm 0.9\ \,$ & \\ $ m_\omega~[{\ensuremath{{\mathrm{MeV}}}}\xspace]$ & $777.9~{}^{+4.8}_{-3.1}$ & & $782.65\pm 0.12$ & $4.8 ^{+3.1}_{-4.8}$ \\ \bottomrule \end{tabular} \caption{Comparison of measured ${\ensuremath{{\rho^0}}}\xspace$ and $\omega$ meson properties to the PDG values~\cite{PDG}. Only the total uncertainties are given. The last column gives the mass differences between the PDG $e^+e^-$ and the present measurement.} \label{tab:res_mgRhoComp} \end{table} \subsubsection{Fiducial {\ensuremath{{\rho^0}}}\xspace and $\omega$ meson photoproduction cross sections} The result of the fit is used to calculate the {\ensuremath{{\rho^0}}}\xspace meson contributions to the fiducial {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections. This is achieved by integrating the {\ensuremath{{\rho^0}}}\xspace meson resonance amplitude in the range $2m_\pi < {\ensuremath{{m_{\pi\pi}}}}\xspace < 1.53~{\ensuremath{{\mathrm{GeV}}}}\xspace$ as discussed in \secref{sec:theo_xSecRho}. 
The integration yields \begin{alignat}{3} \ensuremath{{\sigma(\gamma p \rightarrow \rho^0 p)}}\xspace =&&\ 10.&97 &&\pm 0.18\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.72}_{-0.73}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \text{ and} \\ \ensuremath{{\sigma(\gamma p \rightarrow \rho^0 Y)}}\xspace =&&\ 4.&71 &&\pm 0.09\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.60}_{-0.59}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \text, \ \text{ for }\ m_p < {\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace \text, \end{alignat} for the elastic and proton-dissociative ${\ensuremath{{\rho^0}}}\xspace$ meson photoproduction cross sections, respectively. The ratio of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections in the measurement phase space is \begin{equation} \dfrac{\ensuremath{{\sigma(\gamma p \rightarrow \rho^0 Y)}}\xspace}{\ensuremath{{\sigma(\gamma p \rightarrow \rho^0 p)}}\xspace} = 0.429 \pm 0.009\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.056}_{-0.053}\ (\ensuremath{{\mathrm{syst.}}}\xspace) \text, \ \text{ for }\ m_p < {\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace \text, \end{equation} taking into account correlations. The measured elastic {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section is consistent with previous HERA measurements~\cite{Aid:1996bs,Derrick:1995vq,Breitweg:1997ed}, when accounting for {\ensuremath{{W_{\gamma p}}}}\xspace differences between the measurements. Suitable reference measurements for the proton-dissociative cross section are not available. However, the ratio of the proton-dissociative to the elastic {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section is consistent with a previous measurement~\cite{Breitweg:1997ed}, assuming that phase space differences are covered by the large uncertainties. 
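The central value of this ratio follows directly from the two integrated {\ensuremath{{\rho^0}}}\xspace meson cross sections quoted above,
\begin{equation*}
\frac{4.71~\ensuremath{{\mu\mathrm{b}}}\xspace}{10.97~\ensuremath{{\mu\mathrm{b}}}\xspace} \approx 0.429\,,
\end{equation*}
whereas its statistical and systematic uncertainties require the correlations between the two fit results.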
The fit model \eqnref{eqn:theo_rhoSoedMass} also makes possible an indirect measurement of the fiducial $\omega$ meson photoproduction cross sections, obtained by integrating the $\omega$-{\ensuremath{{\rho^0}}}\xspace mixing amplitude over the range $2m_\pi \leq {\ensuremath{{m_{\pi\pi}}}}\xspace \leq 0.82~{\ensuremath{{\mathrm{GeV}}}}\xspace$. The result is corrected for the $\omega \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace$ branching fraction $\mathcal{BR}(\omega \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace) = 0.0153 \pm 0.0006\ (\ensuremath{{\mathrm{tot.}}}\xspace)$~\cite{PDG} to yield the cross sections \begin{alignat}{3} \sigma(\gamma p \rightarrow \omega p) =&&\ 1.&06 &&\pm 0.26\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.13}_{-0.30}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \ \text{ and} \\ \sigma(\gamma p \rightarrow \omega Y) =&&\ 0.&31 &&\pm 0.23\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.19}_{-0.16}\ (\ensuremath{{\mathrm{syst.}}}\xspace)\ \ensuremath{{\mu\mathrm{b}}}\xspace \text, \ \text{ for }\ m_p < {\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace \text. \end{alignat} The uncertainty of the branching fraction is included in the systematic uncertainty. The ratios of the $\omega$ meson to the ${\ensuremath{{\rho^0}}}\xspace$ meson photoproduction cross sections are then determined to be \begin{alignat}{2} \dfrac{ \sigma(\gamma p \rightarrow \omega p)}{\ensuremath{{\sigma(\gamma p \rightarrow \rho^0 p)}}\xspace} =&\ 0.097 &&\pm 0.020\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.011}_{-0.026}\ (\ensuremath{{\mathrm{syst.}}}\xspace) \ \text{ and} \\ \dfrac{ \sigma(\gamma p \rightarrow \omega Y)}{\ensuremath{{\sigma(\gamma p \rightarrow \rho^0 Y)}}\xspace} =&\ 0.065 &&\pm 0.041\ (\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.042}_{-0.032}\ (\ensuremath{{\mathrm{syst.}}}\xspace) \text, \ \text{ for }\ m_p < {\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace \text.
\end{alignat} The elastic $\omega$ photoproduction cross section has been directly measured at HERA in the $\omega \rightarrow {\ensuremath{{\pi^+\pi^-}}}\xspace \pi^0$ channel by the ZEUS collaboration. In a similar phase space, a compatible value for the cross section was observed~\cite{Derrick:1996yt}. The present value is also consistent with the expectation that the {\ensuremath{{\rho^0}}}\xspace and $\omega$ meson photoproduction cross sections differ by a factor $c_{\omega}/c_{\rho} \simeq 1/9$ that originates from SU(2) flavour symmetry and the quark electric charges. \subsection{Energy dependence of {\ensuremath{{\rho^0}}}\xspace meson photoproduction} \label{sec:res_sigmaRhoOfW} \noindent The energy dependencies of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections are obtained by unfolding two-dimensional distributions ${\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace \otimes {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace$ and calculating the corresponding two-dimensional {\ensuremath{{\pi^+\pi^-}}}\xspace cross sections. Numerical results are presented in \apxref{apx:sec_pipiXSecTables} and the cross section distributions are displayed in \figref{fig:results_dSigma_dmW_wmFit_el} and \figref{fig:results_dSigma_dmW_wmFit_pd} for the elastic and proton-dissociative components, respectively. To extract the {\ensuremath{{\rho^0}}}\xspace meson contributions, \eqnref{eqn:theo_rhoSoedMass} is fitted simultaneously to the {\ensuremath{{m_{\pi\pi}}}}\xspace distributions in every unfolded {\ensuremath{{W_{\gamma p}}}}\xspace bin with the parameter settings described in \tabref{tab:theo_SodingPars}. In particular, the $\omega$ meson model parameters cannot be constrained well by the fit and are fixed to the values obtained in the fit to the one-dimensional mass distributions. The fit gives $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace = 222.0/188$. 
The fitted curves are also shown in figures \ref{fig:results_dSigma_dmW_wmFit_el} and \ref{fig:results_dSigma_dmW_wmFit_pd}. The fit results are used to calculate the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections \ensuremath{{\sigma(\gamma p \rightarrow \rho^0 Y)}}\xspace as a function of {\ensuremath{{W_{\gamma p}}}}\xspace. The results are given in \tabref{tab:tab_xSec_wRho} and displayed in \figref{fig:results_sigmaRho_w_fit}. The cross sections have only weak dependencies on {\ensuremath{{W_{\gamma p}}}}\xspace, as is expected for diffractive processes at such energies. Many sources of uncertainty considered here are observed to vary with {\ensuremath{{W_{\gamma p}}}}\xspace. For the elastic cross section, the full systematic uncertainty is about 6\% in the centre of the considered phase space and increases to 10\% and 8\% at the lowest and highest considered {\ensuremath{{W_{\gamma p}}}}\xspace, respectively. The corresponding proton-dissociative uncertainties are about twice as large. \subsubsection{Fit of the energy dependence} The energy dependencies of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections in the range ${20 < {\ensuremath{{W_{\gamma p}}}}\xspace < 80~{\ensuremath{{\mathrm{GeV}}}}\xspace}$ are parametrised by a simple power law: \begin{equation} {\ensuremath{{\sigma_{\rho}}}}\xspace({\ensuremath{{W_{\gamma p}}}}\xspace) = {\ensuremath{{\sigma_{\rho}}}}\xspace(W_0)\ \left(\dfrac{{\ensuremath{{W_{\gamma p}}}}\xspace}{W_0} \right)^\delta\,. \label{eqn:results_sigma_w_fit_fcn} \end{equation} The parametrisation is fitted simultaneously to the measured elastic and proton-disso\-cia\-tive distributions assuming independent elastic and proton-dissociative fit parameters. The function is evaluated at the geometric bin centres, and the reference energy is set to $W_0 = 40~{\ensuremath{{\mathrm{GeV}}}}\xspace$.
The fit yields a very poor $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace = 33.0 / 11 $, where, however, systematic uncertainties are not considered. The resulting fit parameters are presented in \tabref{tab:results_sigmaRho_w_pars}. The fitted curves are compared to the data in \figref{fig:results_sigmaRho_w_fit}. The fitted power parameter ${\delta_{\ensuremath{{\mathrm{el}}}\xspace}=0.171\pm0.009~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.039}_{-0.026}~(\ensuremath{{\mathrm{syst.}}}\xspace)}$ characterises the slow rise of the elastic cross section with increasing scattering energy. The measured value is compatible with previous measurements, e.g.,\ by the ZEUS collaboration~\cite{Breitweg:1997ed} or the CMS collaboration ($pPb$)~\cite{Sirunyan:2019nog}, but is more precise. The fit results in a proton-dissociative parameter ${\delta_{\ensuremath{{\mathrm{pd}}}\xspace}=-0.156\pm0.026~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.081}_{-0.079}~(\ensuremath{{\mathrm{syst.}}}\xspace)}$ that is significantly different from the elastic parameter and suggests a decrease of the proton-dissociative cross section with increasing energy. However, the proton-dissociative cross section is expected to be strongly shaped by the restriction of the fiducial phase space; most notably by the requirement ${\ensuremath{{m_Y}}}\xspace < 10~{\ensuremath{{\mathrm{GeV}}}}\xspace$. Since energy is required to excite high masses {\ensuremath{{m_Y}}}\xspace, the cut suppresses the cross section more strongly for high than for low {\ensuremath{{W_{\gamma p}}}}\xspace. The phase space is not accounted for in \eqnref{eqn:results_sigma_w_fit_fcn} so that the proton-dissociative result $\delta_{\ensuremath{{\mathrm{pd}}}\xspace}$ cannot be interpreted directly in terms of a scattering amplitude.
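As an illustration of such a fit (this is not the H1 analysis code), the single power law of \eqnref{eqn:results_sigma_w_fit_fcn} can be fitted to cross section points with a standard least-squares routine; all data values in the sketch below are invented, and only the functional form and the reference energy $W_0 = 40~{\ensuremath{{\mathrm{GeV}}}}\xspace$ follow the text:

```python
# Illustrative sketch only: least-squares fit of the power law
# sigma(W) = sigma(W0) * (W / W0)**delta  to hypothetical data points.
import numpy as np
from scipy.optimize import curve_fit

W0 = 40.0  # reference energy in GeV, as chosen in the text

def power_law(W, sigma0, delta):
    # sigma0: cross section at W0 (micro-barn); delta: power parameter
    return sigma0 * (W / W0) ** delta

# hypothetical measurements: W_gamma-p in GeV, sigma in micro-barn
W = np.array([22.0, 28.0, 35.0, 45.0, 55.0, 70.0])
sigma = np.array([9.9, 10.3, 10.7, 11.2, 11.6, 12.1])
err = np.full_like(sigma, 0.3)  # assumed uncorrelated uncertainties

popt, pcov = curve_fit(power_law, W, sigma, p0=(11.0, 0.2),
                       sigma=err, absolute_sigma=True)
sigma0_fit, delta_fit = popt
sigma0_err, delta_err = np.sqrt(np.diag(pcov))
print(f"sigma(W0) = {sigma0_fit:.2f} ub, delta = {delta_fit:.3f}")
```

A complete treatment would in addition propagate the correlated systematic uncertainties, e.g.\ by offsetting the data and repeating the fit as described in the text.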
In \figref{fig:results_sigmaRho_w_hist}, the measured elastic {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross section data are compared to selected measurements by fixed-target~\cite{Ballam:1971wq,Park:1971ts,Ballam:1972eq,Struczinski:1975ik,Egloff:1979mg,Aston:1982hr}, HERA~\cite{Aid:1996bs,Derrick:1995vq,Breitweg:1997ed}, and LHC~\cite{Sirunyan:2019nog} experiments\footnote{The phase space differences between the measurements with respect to the considered $t$ ranges are found to be negligible ($\lesssim 2\%$) relative to much larger overall normalisation uncertainties.}. The present elastic data are in agreement with other measurements and connect the fixed-target data with the high energy regime. The combined data make possible the simultaneous study of reggeon exchange at small scattering energies and pomeron exchange at large energies. For describing the energy range $2 < {\ensuremath{{W_{\gamma p}}}}\xspace < 200~{\ensuremath{{\mathrm{GeV}}}}\xspace$, the energy dependence of the elastic cross section is parametrised by the sum of two power-law functions: \begin{equation} {\ensuremath{{\sigma_{\rho}}}}\xspace({\ensuremath{{W_{\gamma p}}}}\xspace) = {\ensuremath{{\sigma_{\rho}}}}\xspace(W_0)\ \left( \left(\dfrac{{\ensuremath{{W_{\gamma p}}}}\xspace}{W_0} \right)^{\delta_{\ensuremath{{I\!\!P}}\xspace}} + f_{\ensuremath{{I\!\!R}}\xspace} \left(\dfrac{{\ensuremath{{W_{\gamma p}}}}\xspace}{W_0} \right)^{\delta_{\ensuremath{{I\!\!R}}\xspace}} \right). \label{eqn:results_sigma_w_fit_fcn_DL} \end{equation} The two parameters $\delta_{\ensuremath{{I\!\!P}}\xspace}$ and $\delta_{\ensuremath{{I\!\!R}}\xspace}$ are associated with pomeron exchange at high and reggeon exchange at low {\ensuremath{{W_{\gamma p}}}}\xspace, respectively.
The parametrisation is fitted to the HERA data~\cite{Aid:1996bs,Derrick:1995vq,Breitweg:1997ed} together with data from fixed-target experiments~\cite{Ballam:1971wq,Park:1971ts,Ballam:1972eq,Struczinski:1975ik,Egloff:1979mg,Aston:1982hr} as shown in \figref{fig:results_sigmaRho_w_hist}. In the fit, previous measurements enter with their respective uncorrelated and normalisation uncertainties. For the H1~\cite{Aid:1996bs} and ZEUS measurements~\cite{Breitweg:1997ed}, normalisation uncertainties of 8\% and 9\%, respectively, are extracted from the total uncertainties. Correlations between experiments are not considered. The uncorrelated uncertainties are accounted for in the data covariance, together with the statistical uncertainties of the present measurement. The normalisation uncertainties are included by offsetting the data and repeating the fit, as is also done for systematic and normalisation uncertainties of the present data. The fit yields $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace = 84.3/43$ and results in $\delta_{\ensuremath{{I\!\!P}}\xspace} = 0.207 \pm 0.015~(\ensuremath{{\mathrm{stat.}}}\xspace) ^{+0.053}_{-0.033}~(\ensuremath{{\mathrm{syst.}}}\xspace)$ and $\delta_{\ensuremath{{I\!\!R}}\xspace} = -1.45 \pm 0.12~(\ensuremath{{\mathrm{stat.}}}\xspace) ^{+0.35}_{-0.21} ~(\ensuremath{{\mathrm{syst.}}}\xspace)$. All fit parameters are given in table \ref{tab:results_world_w_pars}. The fitted curve is compared to the data in \figref{fig:results_sigmaRho_w_hist}. The extracted parameters are consistent with the broader picture of vector meson electroproduction, in particular with the observed dependence of $\delta_{\ensuremath{{I\!\!P}}\xspace}$ on the vector meson mass~\cite{Favart:2015umi}. 
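Neglecting the $t$ dependence of the trajectories (i.e.\ shrinkage), a Regge-inspired reading of the fitted exponents via $\sigma \propto W^{4(\alpha(0)-1)}$ gives effective intercepts
\begin{equation*}
\alpha_{\ensuremath{{I\!\!P}}\xspace}(0) \simeq 1 + \tfrac{1}{4}\,\delta_{\ensuremath{{I\!\!P}}\xspace} \approx 1.05\,, \qquad \alpha_{\ensuremath{{I\!\!R}}\xspace}(0) \simeq 1 + \tfrac{1}{4}\,\delta_{\ensuremath{{I\!\!R}}\xspace} \approx 0.64\,.
\end{equation*}
This translation is illustrative only, since it ignores the $t$ integration and shrinkage effects.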
The fit further suggests a small reggeon contribution of ${f_{\ensuremath{{I\!\!R}}\xspace} = [2.0 \pm 0.7~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+2.9}_{-1.3}~(\ensuremath{{\mathrm{syst.}}}\xspace)]\%}$ at $W_0=40~{\ensuremath{{\mathrm{GeV}}}}\xspace$. For this reason, the parameter $\delta_{\ensuremath{{\mathrm{el}}}\xspace}$ from the fit with a single power-law function (\tabref{tab:results_sigmaRho_w_pars}, ${\ensuremath{{m_Y}}}\xspace=m_p$) is interpreted as an effective parameter describing both pomeron and reggeon exchange contributions. \subsection{$t$ dependence of {\ensuremath{{\rho^0}}}\xspace meson photoproduction} \label{sec:res_dSigmaRhoOft} \noindent The $t$ dependencies of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections are obtained by unfolding two-dimensional distributions $\trec \otimes {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace$ and calculating the corresponding two-dimensional {\ensuremath{{\pi^+\pi^-}}}\xspace cross sections. Numerical results are presented in \apxref{apx:sec_pipiXSecTables} and the cross section distributions are displayed in \figref{fig:results_dSigma_dmdt_mtFit_el} and \figref{fig:results_dSigma_dmdt_mtFit_pd} for the elastic and proton-dissociative components, respectively. To extract the {\ensuremath{{\rho^0}}}\xspace meson contributions, \eqnref{eqn:theo_rhoSoedMass} is fitted simultaneously to the {\ensuremath{{m_{\pi\pi}}}}\xspace distributions in every unfolded $t$ bin with the parameter settings described in \tabref{tab:theo_SodingPars}. The fit yields ${\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace = 353.4/247}$. The fitted curves are also shown in figures \ref{fig:results_dSigma_dmdt_mtFit_el} and \ref{fig:results_dSigma_dmdt_mtFit_pd}.
The fit results are used to calculate the elastic and proton-dissociative differential {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections ${\ensuremath{{\dd \sigma(\gamma p \rightarrow \myrho Y)/\dd t}}}\xspace$ as a function of $t$. The results are given in \tabref{tab:tab_xSec_tRho} and displayed in \figref{fig:results_dSigmaRho_dt_fit}. The cross sections fall off exponentially with $t$ as is expected for diffractive processes. The relative systematic uncertainties of roughly 6\% for the elastic distribution vary little with $t$ and increase moderately to 11\% in the highest $|t|$ bin. For the proton-dissociative distribution, the uncertainties increase from roughly 13\% at intermediate $|t|$ to 17\% in both the highest and lowest considered $|t|$ bins. \subsubsection{Fit of the $t$ dependence} The $t$ dependencies of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections are parametrised by the function \begin{align} \dfrac{\ensuremath{{\mathrm{d}}} {\ensuremath{{\sigma_{\rho}}}}\xspace}{\ensuremath{{\mathrm{d}}} t}(t) = \dfrac{\ensuremath{{\mathrm{d}}} {\ensuremath{{\sigma_{\rho}}}}\xspace}{\ensuremath{{\mathrm{d}}} t}(t=0)\ \left(1 - \dfrac{ b\, t}{a} \right)^{-a} \text. \label{eqn:results_dSigma_dt_fit_fcn} \end{align} It interpolates between an exponential $\ensuremath{{\mathrm{d}}} {\ensuremath{{\sigma_{\rho}}}}\xspace/\ensuremath{{\mathrm{d}}} t \propto \exp\left(b\, t \right) $ at low $|t|$ and a power-law $ \ensuremath{{\mathrm{d}}} {\ensuremath{{\sigma_{\rho}}}}\xspace / \ensuremath{{\mathrm{d}}} t \propto |t|^{-a}$ at large $|t|$. The function is fitted simultaneously to the elastic and proton-dissociative distributions assuming independent elastic and proton-dissociative model parameters. To match the cross section definition, the function is integrated over each bin and divided by the respective bin width in the fit.
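The two limiting behaviours follow directly from the parametrisation in \eqnref{eqn:results_dSigma_dt_fit_fcn}: for $|b\,t| \ll a$, expanding the logarithm gives
\begin{equation*}
\left(1 - \frac{b\,t}{a}\right)^{-a} = \exp\!\left[-a \ln\!\left(1 - \frac{b\,t}{a}\right)\right] = \exp\!\left(b\,t + \mathcal{O}(1/a)\right)\,,
\end{equation*}
while for $|b\,t| \gg a$ the constant term is negligible and the function behaves as $|t|^{-a}$ up to a constant factor.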
The proton-dissociative distribution is affected by phase space restrictions that are not accounted for in the parametrisation. The impact is particularly large at small $|t|$ because excitations with large ${\ensuremath{{m_Y}}}\xspace$ require a minimal\footnote{ $|t| \gtrsim \dfrac{{\ensuremath{{m_Y^2}}}\xspace}{W_{{\ensuremath{{\gamma p}}}\xspace}^4} \left(m_{\pi\pi}^2+Q^2\right)^2$~\cite{Abramowicz:1999eq}} $|t|$. For this reason, the lowest $|t|$ bin of the proton-dissociative distribution is not included in the fit. The fit yields ${\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace = 15.3 / 14 }$. The resulting fit parameters are presented in \tabref{tab:results_dSigmaRho_dt_pars}. The fitted curves are compared to the data in \figref{fig:results_dSigmaRho_dt_fit}. The exponential $t$ dependencies of the elastic and proton-dissociative cross sections at small $|t|$ are quantified by the parameters ${b_{\ensuremath{{\mathrm{el}}}\xspace} = 9.61\pm0.15~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.20}_{-0.15}~(\ensuremath{{\mathrm{syst.}}}\xspace)~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace}$ and ${b_{\ensuremath{{\mathrm{pd}}}\xspace} = 4.81\pm0.24~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.39}_{-0.37}~(\ensuremath{{\mathrm{syst.}}}\xspace)~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace}$, respectively. With $b_{\ensuremath{{\mathrm{pd}}}\xspace} \sim 1/2\, b_{\ensuremath{{\mathrm{el}}}\xspace}$, the elastic spectrum falls off much more steeply than the proton-dissociative spectrum. The difference between the exponential slope parameters is $b_{\ensuremath{{\mathrm{el}}}\xspace} - b_{\ensuremath{{\mathrm{pd}}}\xspace} = 4.80 \pm 0.29~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.41}_{-0.40}~(\ensuremath{{\mathrm{syst.}}}\xspace)$, taking into account correlations.
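The quoted slope difference is consistent with a naive combination of the two slopes: ignoring correlations,
\begin{equation*}
b_{\ensuremath{{\mathrm{el}}}\xspace} - b_{\ensuremath{{\mathrm{pd}}}\xspace} = (9.61 - 4.81)~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace = 4.80~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace\,, \qquad \Delta_{\ensuremath{{\mathrm{stat.}}}\xspace} \approx \sqrt{0.15^2 + 0.24^2}~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace \approx 0.28~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace\,,
\end{equation*}
close to the value of $0.29~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace$ obtained with the statistical correlations included.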
In the optical interpretation (or in an eikonal model approach~\cite{eikonalModel}), this difference reflects the difference in the size of the respective target, i.e.,\ proton in elastic vs parton in proton-dissociative interactions. The measured value is consistent with HERA measurements in {\ensuremath{{\rho^0}}}\xspace meson electroproduction~\cite{Aaron:2009xp}. There is evidence for a deviation from the purely exponential behaviour given by the finite values for $a_{\ensuremath{{\mathrm{el}}}\xspace} = 20.4\pm3.7~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+6.8}_{-5.1}~(\ensuremath{{\mathrm{syst.}}}\xspace)$ and $a_{\ensuremath{{\mathrm{pd}}}\xspace} = 8.5\pm1.7~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+2.7}_{-2.1}~(\ensuremath{{\mathrm{syst.}}}\xspace)$. The smaller $a_{\ensuremath{{\mathrm{pd}}}\xspace}$ indicates a stronger deviation for the proton-dissociative than for the elastic cross section. The present measurements of the $t$ dependencies of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections can be compared with other HERA measurements~\cite{Aid:1996bs,Breitweg:1997ed}. The measured values for the elastic slope $b_{\ensuremath{{\mathrm{el}}}\xspace}$ are somewhat lower than observed previously but seem compatible within uncertainties when the differences in energy, and the shrinkage expected as a consequence, are taken into account. The present value is the most precise of these measurements. The observed deviation of the elastic slope from the exponential behaviour is compatible with the ZEUS measurement \cite{Breitweg:1997ed}. Similar to the elastic case, the slopes $b_{\ensuremath{{\mathrm{pd}}}\xspace}$ in proton-dissociative scattering are compatible with earlier measurements. The present measurement is more precise and does account for deviations from the exponential form. Differences in the definition of the proton-dissociative phase space were not considered in these comparisons. 
\subsection{{\ensuremath{{\rho^0}}}\xspace meson photoproduction as a function of $t$ and {\ensuremath{{W_{\gamma p}}}}\xspace} \label{sec:res_dSigmaRhoOfWt} \noindent The two-dimensional $t$ and {\ensuremath{{W_{\gamma p}}}}\xspace dependencies of the elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections are obtained by unfolding three-dimensional distributions $\trec \otimes {\ensuremath{{W_{\gamma p}^{\mathrm{rec}}}}}\xspace \otimes {\ensuremath{{m_{\pi\pi}^\mathrm{rec}}}}\xspace$ and calculating the corresponding {\ensuremath{{\pi^+\pi^-}}}\xspace cross sections\footnote{The underlying response matrix has 1243 bins on detector and 882 bins on truth level.}. Numerical results are presented in \apxref{apx:sec_pipiXSecTables} and the cross section distributions are displayed in \figref{fig:results_dSigma_dmdtW_twmFit_el} and \figref{fig:results_dSigma_dmdtW_twmFit_pd} for the elastic and proton-dissociative component, respectively. To extract the {\ensuremath{{\rho^0}}}\xspace meson contributions, \eqnref{eqn:theo_rhoSoedMass} is fitted simultaneously to the {\ensuremath{{m_{\pi\pi}}}}\xspace distributions in every unfolded {\ensuremath{{W_{\gamma p}}}}\xspace and $t$ bin with the parameter settings described in \tabref{tab:theo_SodingPars}. The fit yields $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace = 804.0/607$. The fitted curves are also shown in figures \ref{fig:results_dSigma_dmdtW_twmFit_el} and \ref{fig:results_dSigma_dmdtW_twmFit_pd}. The fit results are used to calculate the elastic and proton-dissociative differential {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections ${\ensuremath{{\dd \sigma(\gamma p \rightarrow \myrho Y)/\dd t}}}\xspace$ as a function of $t$ and {\ensuremath{{W_{\gamma p}}}}\xspace. 
The results are given in \tabref{tab:tab_xSec_twRho_el} and \tabref{tab:tab_xSec_twRho_pd} for the elastic and proton-dissociative component, respectively, and are shown in \figref{fig:results_dSigmaRho_dtW_twfit}. \subsubsection{Regge fit of the $t$ and ${\ensuremath{{W_{\gamma p}}}}\xspace$ dependence} The elastic and proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections are parametrised by the function \begin{equation} \dfrac{\ensuremath{{\mathrm{d}}} {\ensuremath{{\sigma_{\rho}}}}\xspace}{\ensuremath{{\mathrm{d}}} t}(t;{\ensuremath{{W_{\gamma p}}}}\xspace) = \dfrac{\ensuremath{{\mathrm{d}}} {\ensuremath{{\sigma_{\rho}}}}\xspace}{\ensuremath{{\mathrm{d}}} t}(t;W_0) \left( \dfrac{{\ensuremath{{W_{\gamma p}}}}\xspace}{W_0} \right)^{4 (\alpha(t)-1) } \text, \label{eqn:sigma_walpha_fcn} \end{equation} with an energy dependence predicted by Regge theory and with a single leading trajectory $\alpha(t)$. The $t$ dependence at the reference energy $W_0=40~{\ensuremath{{\mathrm{GeV}}}}\xspace$ is parametrised according to \eqnref{eqn:results_dSigma_dt_fit_fcn}. Regge trajectories are expected to continue linearly only in the region of small $|t|$. Since the analysis extends to $|t| \leq 1.5~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$, a modified parametrisation based on a Fermi function is used: \begin{equation} \alpha(t) = \alpha_0 + \beta\, \left( \left(\ensuremath{{\mathrm{e}}}^{ - \frac{4\alpha_1 t}{\beta} }+1\right)^{-1} -\dfrac{1}{2}\right)\text . \label{eqn:sigma_alpha_fermi} \end{equation} It approximates a linear function $\alpha_0 + \alpha_1\, t$ for small $|t|$ and approaches a constant value ${\alpha_0 - \beta/2}$ for ${t \rightarrow -\infty}$. Such a lower bound is expected for large $|t|$ from QCD calculations~\cite{BoundReggeTraj} but deviations from a linear behaviour have not been observed yet in this reaction in the $t$ range considered here. 
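The two limits of the Fermi-function trajectory \eqnref{eqn:sigma_alpha_fermi} can be checked numerically; the sketch below uses illustrative parameter values (function names are ours):

```python
import math

def alpha(t, alpha0, alpha1, beta):
    # Fermi-function trajectory: ~ alpha0 + alpha1*t at small |t|,
    # approaching the plateau alpha0 - beta/2 for t -> -infinity.
    return alpha0 + beta * (1.0 / (math.exp(-4.0 * alpha1 * t / beta) + 1.0) - 0.5)

a0, a1, be = 1.0654, 0.233, 0.17      # illustrative values only
near = alpha(-0.01, a0, a1, be)       # close to the linear form a0 + a1*t
lin = a0 + a1 * (-0.01)
asym = alpha(-50.0, a0, a1, be)       # close to the plateau a0 - be/2
```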
The parametrisation in \eqnref{eqn:sigma_walpha_fcn} is fitted simultaneously to the elastic and proton-dissociative cross sections assuming independent elastic and proton-dissociative fit parameters. In the $\chi^2$ calculation, the function is integrated over each $t$ bin and evaluated at the geometric {\ensuremath{{W_{\gamma p}}}}\xspace bin centres to match the cross section definition. Following the argumentation discussed above, the respective lowest $|t|$ bins of the proton-dissociative cross sections are excluded from the fit. The fit yields $\ensuremath{{\chi_\mathrm{stat}^2/\mathrm{n}_\mathrm{dof}}}\xspace =31.7/32 $. The resulting fit parameters are presented in \tabref{tab:results_fPar_wtRhoFit}. The fitted curves are compared to data in \figref{fig:results_dSigmaRho_dtW_twfit}. \subsubsection{Shrinkage of the elastic differential cross section} The fit parameters in \tabref{tab:results_fPar_wtRhoFit} that describe the $t$ dependencies of the differential {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections at $W_0=40~{\ensuremath{{\mathrm{GeV}}}}\xspace$ are in agreement with the parameters from the one-dimensional fit of the $t$ dependencies that is described above. For the elastic differential cross section, the finite slope of the measured trajectory $\alpha(t)$ (see below) results in a shrinkage of the forward peak with increasing {\ensuremath{{W_{\gamma p}}}}\xspace. At small $|t|$, where the differential cross section ${\ensuremath{{\dd \sigma(\gamma p \rightarrow \myrho p)/\dd t}}}\xspace$ falls exponentially with $t$ and the trajectory is linear, the shrinkage can be expressed in terms of a {\ensuremath{{W_{\gamma p}}}}\xspace dependence of the exponential slope parameter \begin{equation} b_{\ensuremath{{\mathrm{el}}}\xspace}({\ensuremath{{W_{\gamma p}}}}\xspace) = b_{\ensuremath{{\mathrm{el}}}\xspace}\left(W_0\right) + 4\alpha_{1} \log\left(\dfrac{{\ensuremath{{W_{\gamma p}}}}\xspace}{W_0} \right)\text. 
\end{equation} The function is plotted in \figref{fig:results_fPar_wtRho_b} using the parameters extracted for the elastic cross section in the two-dimensional fit (cf. \tabref{tab:results_fPar_wtRhoFit}, $m_Y{=}m_p$). For comparison, $b({\ensuremath{{W_{\gamma p}}}}\xspace)$ is measured in every {\ensuremath{{W_{\gamma p}}}}\xspace bin by fitting the parametrisation \eqnref{eqn:results_dSigma_dt_fit_fcn} with free fit parameters $b_t$ to the $t$ dependencies in all {\ensuremath{{W_{\gamma p}}}}\xspace bins. Deviations from the exponential form are accounted for by including a single parameter $a$ common to all {\ensuremath{{W_{\gamma p}}}}\xspace bins. The result is presented in \tabref{tab:tab_fPar_bRho} and also shown in \figref{fig:results_fPar_wtRho_b}. Further data from previous HERA~\cite{Aid:1996bs,Derrick:1995vq,Breitweg:1997ed} and fixed-target~\cite{Ballam:1971wq,Park:1971ts,Ballam:1972eq,Struczinski:1975ik,Egloff:1979mg,Aston:1982hr} measurements are included in the figure, as well. The present slope values are somewhat lower than those from previous HERA measurements but are much more precise. The present data alone clearly indicate that there is shrinkage of the elastic peak with increasing energy. \subsubsection{Effective leading Regge trajectory} For visualisation of the Regge trajectory parameters, $\alpha(t)$ is measured separately in each $t$ bin by fitting a simple power law $\propto {\ensuremath{{W_{\gamma p}}}}\xspace^{4(\alpha_t - 1)}$ with free fit parameters $\alpha_t$ to the {\ensuremath{{W_{\gamma p}}}}\xspace distributions in all $t$ bins. 
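The logarithmic shrinkage relation for $b_{\ensuremath{{\mathrm{el}}}\xspace}({\ensuremath{{W_{\gamma p}}}}\xspace)$ given above can be evaluated directly; a minimal sketch using the central fitted values (not part of the analysis, names ours):

```python
import math

def b_el(w, b0=9.61, w0=40.0, alpha1=0.233):
    # Logarithmic growth of the exponential slope with energy ("shrinkage"),
    # using the central fitted values from the text as illustrative defaults.
    return b0 + 4.0 * alpha1 * math.log(w / w0)

b_low, b_high = b_el(20.0), b_el(80.0)  # edges of the measured W_gamma_p range
```

Over the measured range the slope changes by $4\alpha_1 \ln 4 \approx 1.3~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace$.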
The resulting parameters $\alpha_t$ are presented in \tabref{tab:tab_fPar_alphaRho} and displayed as a function of the $t$ bin centres $t_{bc}$\footnote{ The bin centres $t_{bc}$ are defined to satisfy $\frac{\ensuremath{{\mathrm{d}}}\sigma}{\ensuremath{{\mathrm{d}}} t}\left(t_{bc}\right) = \frac{1}{\left(t_\mathrm{max}-t_\mathrm{min}\right)} \int_{t_\mathrm{min}}^{t_\mathrm{max}} \frac{\ensuremath{{\mathrm{d}}} \sigma}{\ensuremath{{\mathrm{d}}} t}\left(t\right)\, \ensuremath{{\mathrm{d}}} t$ and calculated using \eqnref{eqn:results_dSigma_dt_fit_fcn} and the fit parameters for the elastic and proton-dissociative cross-sections given in \tabref{tab:results_dSigmaRho_dt_pars}. } in \figref{fig:results_fPar_wtRho_alpha}. The data are compared to the parametrisations $\alpha(t)$ obtained in the two-dimensional fit and to other curves as discussed below. A direct fit of a parametrisation $\alpha(t)$ to the $\alpha_t$ extracted in this way is ambiguous because of the definition of the bin centres $t_{bc}$. For the elastic cross section, the measured trajectory is linear at small $|t|$. The intercept \linebreak and slope parameters are measured at ${\alpha_{0}=1.0654 \pm 0.0044~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.0088}_{-0.0050}~(\ensuremath{{\mathrm{syst.}}}\xspace)}$ and \linebreak ${\alpha_{1} = 0.233 \pm 0.064~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.020 }_{-0.038 }~(\ensuremath{{\mathrm{syst.}}}\xspace) ~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace}$, respectively. Potential non-linearities start occurring for $t\lesssim -0.2~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$ and the trajectory is compatible with approaching a constant asymptotic value of approximately $0.98 \pm 0.04~(\ensuremath{{\mathrm{tot.}}}\xspace)$ for $t \rightarrow -\infty$. However, a linear trajectory over the full considered $t$ range is also able to describe the elastic cross section. 
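The implicit bin-centre definition quoted in the footnote above can be solved numerically. A minimal sketch (names ours), validated against the closed form that exists for a purely exponential spectrum:

```python
import math

def bin_centre(f, t_min, t_max, n=20000, tol=1e-12):
    # Find t_bc with f(t_bc) equal to the bin-averaged value of f,
    # assuming f is monotonically increasing on [t_min, t_max].
    h = (t_max - t_min) / n
    avg = (0.5 * (f(t_min) + f(t_max)) + sum(f(t_min + i * h) for i in range(1, n))) / n
    lo, hi = t_min, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < avg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For a purely exponential spectrum exp(b*t) the bin centre has a closed form.
b = 9.61
tbc = bin_centre(lambda t: math.exp(b * t), -0.5, -0.4)
tbc_exact = math.log((math.exp(-0.4 * b) - math.exp(-0.5 * b)) / (b * 0.1)) / b
```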
The resulting intercept and slope parameters are ${\alpha_{0,\mathrm{lin}} = 1.0624 \pm 0.0033~(\ensuremath{{\mathrm{stat.}}}\xspace)~{}^{+0.0083}_{-0.0052}~(\ensuremath{{\mathrm{syst.}}}\xspace)}$ and $\alpha_{1,\mathrm{lin}} = 0.175 \pm 0.026~(\ensuremath{{\mathrm{stat.}}}\xspace)~{}^{+0.021}_{-0.027}~(\ensuremath{{\mathrm{syst.}}}\xspace)~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace$, respectively. For comparison, the linear trajectory is also included in \figref{fig:results_fPar_wtRho_alpha}. In contrast, the proton-dissociative cross section is not compatible with a linear trajectory $\alpha(t)$. The fitted parametrisation falls off steeply at low $|t|$ but is constant with a value of approximately ${0.93 \pm 0.03~(\ensuremath{{\mathrm{tot.}}}\xspace)}$ for $t \lesssim -0.2~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$. However, significant shaping effects are expected from the phase space restrictions, as discussed earlier. No attempt is made here to relate these fit parameters with the underlying amplitude. In the high energy limit, the elastic cross section is expected to be governed by the soft pomeron trajectory. The canonical pomeron trajectory has been determined by Donnachie and Landshoff from $pp$ and $p\bar{p}$ scattering data as $\alpha_\mathrm{DL}(t) = 1.0808 + 0.25\,t/{\ensuremath{{\mathrm{GeV}^2}}}\xspace$~\cite{Donnachie:1983hf}. The trajectory parameters have been investigated in various vector meson photo- and electroproduction measurements. A recent analysis of HERA data with a two-tensor-pomeron model has furthermore led to a precise measurement of the soft pomeron intercept using inclusive DIS cross sections at low Bjorken-$x$ and total photoproduction cross sections~\cite{Britzger:2019lvc}. 
In {\ensuremath{{\rho^0}}}\xspace meson photoproduction, the trajectory has previously been directly extracted~\cite{Breitweg:1999jy} in an analysis of the energy dependence of the differential cross section {\ensuremath{{\dd \sigma(\gamma p \rightarrow \myrho p)/\dd t}}}\xspace in the range $8.2 < {\ensuremath{{W_{\gamma p}}}}\xspace < 94~{\ensuremath{{\mathrm{GeV}}}}\xspace$ as measured by various experiments~\cite{Aston:1982hr,Aid:1996bs,Breitweg:1997ed,Derrick:1996vw,Breitweg:1999jy}. The present analysis has the advantage that it measures the leading trajectory from a single dataset. In \figref{fig:results_alphaPom_comp}, the measured intercept and slope at $t=0$ are compared to the canonical pomeron trajectory parameters and those measured by the cited works~\cite{Britzger:2019lvc,Breitweg:1999jy}. The measured intercept seems to be comparatively low. The analysis of the one-dimensional energy dependence (\secref{sec:res_sigmaRhoOfW}) suggests the presence of a subleading reggeon contribution that indeed may have lowered the measured effective trajectory by about $0.01$ units. For the measured slope, good agreement is observed with the canonical DL value. However, there is some tension with previous HERA measurements, such as the cited ZEUS measurement. A potential explanation could arise from possible deviations from the linear form of the trajectory, which are taken into account in the present analysis. For proton-dissociative {\ensuremath{{\rho^0}}}\xspace meson photoproduction, a constant trajectory has been previously measured at HERA for higher $|t|$ than are considered here~\cite{Chekanov:2002rm}. 
\section{Summary} \label{sec:Summary} Elastic and proton-dissociative {\ensuremath{{\pi^+\pi^-}}}\xspace and {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections are measured as a function of the invariant {\ensuremath{{\pi^+\pi^-}}}\xspace mass {\ensuremath{{m_{\pi\pi}}}}\xspace, the scattering energy {\ensuremath{{W_{\gamma p}}}}\xspace, and the squared momentum transfer at the proton vertex $t$. The measurement covers a phase space of $0.5 < {\ensuremath{{m_{\pi\pi}}}}\xspace < 2.2~{\ensuremath{{\mathrm{GeV}}}}\xspace$, $20 < {\ensuremath{{W_{\gamma p}}}}\xspace < 80~{\ensuremath{{\mathrm{GeV}}}}\xspace$, and $|t|<1.5~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$. The cross sections are obtained by unfolding up to three-dimensional {\ensuremath{{\pi^+\pi^-}}}\xspace distributions. In the procedure, the elastic and proton-dissociative components are extracted simultaneously. The results are more precise than previous measurements, in particular the measurement of the proton-dissociative cross sections. The {\ensuremath{{\pi^+\pi^-}}}\xspace mass spectra are analysed with a model in which the skewing of the {\ensuremath{{\rho^0}}}\xspace meson resonance peak is attributed to interference with non-resonant {\ensuremath{{\pi^+\pi^-}}}\xspace production, as originally proposed by S\"oding in 1966. A fit of the model to the data yields ${m_\rho = 770.8 \pm 1.3~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+2.3}_{-2.4}~(\ensuremath{{\mathrm{syst.}}}\xspace) ~{\ensuremath{{\mathrm{MeV}}}}\xspace}$ and ${\Gamma_\rho = 151.3 \pm 2.2~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+1.6}_{-2.8}~(\ensuremath{{\mathrm{syst.}}}\xspace) ~{\ensuremath{{\mathrm{MeV}}}}\xspace}$ for the mass and width of the {\ensuremath{{\rho^0}}}\xspace meson, respectively. 
For the first time at HERA, the sensitivity of the data is sufficient to constrain the $\omega$ meson contribution to {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction and measure the $\omega$ meson mass at $m_\omega = 777.9 \pm 2.2~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+4.3}_{-2.2}~(\ensuremath{{\mathrm{syst.}}}\xspace) ~{\ensuremath{{\mathrm{MeV}}}}\xspace$. The fit is used to extract the {\ensuremath{{\rho^0}}}\xspace meson contributions to the elastic and proton-dissociative {\ensuremath{{\pi^+\pi^-}}}\xspace photoproduction cross sections. One- and two-dimensional {\ensuremath{{\rho^0}}}\xspace meson photoproduction cross sections as functions of {\ensuremath{{W_{\gamma p}}}}\xspace and $t$ are presented and subsequently interpreted with phenomenological models in the context of Regge theory. Precise parameters describing the $t$ and {\ensuremath{{W_{\gamma p}}}}\xspace dependencies are measured. The slope parameters describing the exponential drop of the cross sections with $t$ are measured at an average energy of $\langle {\ensuremath{{W_{\gamma p}}}}\xspace \rangle = 43~{\ensuremath{{\mathrm{GeV}}}}\xspace$ as ${b_{\ensuremath{{\mathrm{el}}}\xspace} = 9.61\pm0.15~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.20}_{-0.15}~(\ensuremath{{\mathrm{syst.}}}\xspace)~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace}$ and $b_{\ensuremath{{\mathrm{pd}}}\xspace} = 4.81\pm0.24~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.39}_{-0.37}~(\ensuremath{{\mathrm{syst.}}}\xspace)~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace$ for the elastic and proton-dissociative components, respectively. From the analysis of the elastic cross section {\ensuremath{{\dd \sigma(\gamma p \rightarrow \myrho p)/\dd t}}}\xspace as a function of {\ensuremath{{W_{\gamma p}}}}\xspace and $t$ the effective leading Regge trajectory is extracted. 
Allowing for deviations from the linear form at large $|t|$, an intercept of $\alpha(t{=}0) = 1.0654 \pm 0.0044~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.0088}_{-0.0050}~(\ensuremath{{\mathrm{syst.}}}\xspace)$ and a slope of $\alpha^\prime(t{=}0) = 0.233 \pm 0.064~(\ensuremath{{\mathrm{stat.}}}\xspace)\ {}^{+0.020 }_{-0.038 }~(\ensuremath{{\mathrm{syst.}}}\xspace) ~{\ensuremath{{\mathrm{GeV}^{-2}}}}\xspace$ are measured. Since the probed energies are low, the intercept measurement may be obscured somewhat by sub-leading reggeon contributions that cannot be constrained by the present data alone. There are some indications that the trajectory deviates from a linear behaviour for $t < -0.2~{\ensuremath{{\mathrm{GeV}^2}}}\xspace$. Within uncertainties, the data are compatible with the trajectory becoming equal to unity for large $|t|$. \section*{Acknowledgements} We are grateful to the HERA machine group whose outstanding efforts have made this experiment possible. We thank the engineers and technicians for their work in constructing and maintaining the H1 detector, our funding agencies for financial support, the DESY technical staff for continual assistance and the DESY directorate for support and for the hospitality which they extend to the non DESY members of the collaboration. We would like to give credit to all partners contributing to the EGI computing infrastructure for their support for the H1 Collaboration. We express our thanks to all those involved in securing not only the H1 data but also the software and working environment for long term use allowing the unique H1 dataset to continue to be explored in the coming years. The transfer from experiment specific to central resources with long term support, including both storage and batch systems has also been crucial to this enterprise. We therefore also acknowledge the role played by DESY-IT and all people involved during this transition and their future role in the years to come.
\subsection{Dynamic program to compute $\mathbf F(\cdot)$} \label{app:F_compute} Here we describe how to compute $\mathbf F(p,v)$ in $O(mn)$ time
and space complexity, where $p=(p_1,...,p_n)$ and $v=v_1...v_m$, via a dynamic programming approach. We first define \begin{align*} &\mathbf G^{for}(k,j)\triangleq \mathbf F(p_{[1:k]},v_{[1:j]}). \numberthis \label{eq:G} \end{align*} Using Lemma~\ref{lemma:F_decomposition} with $i=n$, we get \begin{align*} \mathbf F(p,v) = \mathbf F(p_{[n-1]},v) + p_n^{v_m} (1-p_n)^{(1-v_m)} \mathbf F(p_{[n-1]},v_{[m-1]}). \end{align*} \noindent This translates to the following dynamic program for $\mathbf G^{for}$: \begin{align*} \mathbf G^{for}(k,j) = \mathbf G^{for}(k-1,j)+ p_{k}^{v_j}(1-p_{k})^{1-v_j}&\mathbf G^{for}(k-1,j-1),\numberthis \label{eq:approx_smap_dpfor} \end{align*} with the boundary conditions $\mathbf G^{for}(k,0)=1\ \forall\ k \geq 0$ and $\mathbf G^{for}(k,j)=0\ \forall\ k<j$. The algorithm is now summarized as Alg.~\ref{alg:F_comp}. \begin{algorithm}[h!] \caption{Computing $\mathbf F(p,v)$}\label{alg:F_comp} \begin{algorithmic}[1] \State Inputs: $p \in [0,1]^n$, $v \in \{0,1\}^m$ \State Outputs: $\mathbf F(p_{[1:k]},v_{[1:j]})$ for all $k \in [n]$ and $j\in[m]$ \State Initialize $\mathbf G^{for}(k,0)=1\ \forall\ k$ and $\mathbf G^{for}(k,j)=0\ \forall\ k<j$ \For {$k = 1:n$ and $j = 1:m$} \State Use \eqref{eq:approx_smap_dpfor} to update $\mathbf G^{for}(k,j)$ \EndFor \State return $\mathbf G^{for}(k,j)\ \forall\ k,j$ \end{algorithmic} \end{algorithm} We note that a similar dynamic programming approach yields $\mathbf F(p_{[k:n]},v_{[j:m]})$ for all $k \in [n]$ and $j\in[m]$ in $O(mn)$ time and space complexity by defining \begin{align*} &\mathbf G^{rev}(k,j)\triangleq \mathbf F(p_{[k:n]},v_{[j:m]}). \end{align*} \subsection{An algebraic definition of the infiltration product.} \label{app:infil_def} We now give a more formal definition of the infiltration product (see \cite{lothaire1997combinatorics} for the equivalence of the two definitions and a more rigorous treatment). 
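As a concrete companion to Alg.~\ref{alg:F_comp}, the recurrence \eqref{eq:approx_smap_dpfor} can be sketched in Python (function names are ours) and validated against the defining sum over ordered index subsets:

```python
from itertools import combinations

def F_dp(p, v):
    # O(n*m) dynamic program following the recurrence for G^for:
    # G[k][j] = G[k-1][j] + p_k^{v_j} (1-p_k)^{1-v_j} * G[k-1][j-1],
    # with G[k][0] = 1 and G[k][j] = 0 for k < j.
    n, m = len(p), len(v)
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for k in range(n + 1):
        G[k][0] = 1.0
    for k in range(1, n + 1):
        for j in range(1, min(k, m) + 1):
            w = p[k - 1] if v[j - 1] == 1 else 1.0 - p[k - 1]
            G[k][j] = G[k - 1][j] + w * G[k - 1][j - 1]
    return G[n][m]

def F_bruteforce(p, v):
    # Direct evaluation of the defining sum over all increasing index subsets
    # of size m (exponential time; for validation on small inputs only).
    n, m = len(p), len(v)
    total = 0.0
    for S in combinations(range(n), m):
        prod = 1.0
        for j, i in enumerate(S):
            prod *= p[i] if v[j] == 1 else 1.0 - p[i]
        total += prod
    return total
```

The DP and the brute-force sum agree on small instances, and $\mathbf F$ vanishes when $m > n$, matching the boundary conditions.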
A \textit{formal series} with indeterminates (or variables) in a set $\mathcal A$ and coefficients in a commutative ring $\mathcal R$ is a mapping of $\mathcal A^*$ onto $\mathcal R$. Recall that a commutative ring is a set which forms an abelian group under an \textit{addition} operation, is a monoid under a commutative \textit{multiplication} operation, and whose multiplication distributes over addition. Here we consider $\mathbb Z$, the set of integers, as the commutative ring $\mathcal{R}$. A formal series is called a \textit{polynomial} if only a finite number of sequences are mapped to non-zero values; the remaining sequences map to zero. Consider two polynomials $\sigma,\tau: \mathcal{A}^*\rightarrow \mathbb Z$. The value taken by a sequence $w\in \mathcal A^*$ on $\sigma$ (or the coefficient of $w$ in $\sigma$) is denoted by $\langle \sigma,w\rangle \in \mathbb Z$. We also define binary addition ($\oplus$) and multiplication ($\times$) operations on the set of polynomials as follows: \begin{align} \langle \sigma\oplus \tau,w\rangle \triangleq \langle \sigma,w\rangle + \langle \tau,w \rangle \quad \forall w\in \mathcal A^*,\label{eq:polynomial_add}\\ \langle \sigma\times \tau,w\rangle \triangleq \sum_{\substack{f,g\in \mathcal A^*:\\ f.g=w}}\langle \sigma,f\rangle \langle \tau,g \rangle \quad \forall w\in \mathcal A^*.\label{eq:polynomial_prod} \end{align} For convenience, we will use the usual symbols $+$ and $.$ in place of $\oplus$ and $\times$ in this work; the intended operation will be clear from the operands. With these operations, the set of polynomials forms a non-commutative ring, denoted by $\mathbb Z\langle\mathcal A \rangle$ and also called the free $\mathbb Z$-algebra on $\mathcal A$ in ring theory. 
Note that the addition and multiplication operations defined in \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are similar to the operations defined on commutative polynomials, except that the multiplication operation under the summation in \eqref{eq:polynomial_prod} ($f.g=w$) is actually concatenation and is non-commutative. The multiplication inside the summation in \eqref{eq:polynomial_prod} is multiplication of coefficients in $\mathbb Z$ and hence commutative. It is also easy to prove that the multiplication defined in \eqref{eq:polynomial_prod} distributes over the addition defined in \eqref{eq:polynomial_add}. Thus, a polynomial in $\mathbb Z\langle\mathcal A \rangle$ can be represented as a sum of monomials in $\mathcal A^*$, each with an associated coefficient in $\mathbb Z$, i.e., $\sigma=\sum\limits_{w\in \mathcal A^*} \langle\sigma,w \rangle w$. Define the \textit{degree} of a polynomial to be the length of a longest sequence with a non-zero coefficient in the polynomial, and the \textit{number of terms} of a polynomial as the number of sequences with non-zero coefficients in the polynomial. Note that a polynomial of degree $d$ can have up to $2^{d+1}-1$ terms. With this, the \textit{infiltration product} (in general, for two polynomials) is defined as follows: \begin{align} \forall f\in \mathcal{A}^*,& \quad f\uparrow e = e\uparrow f=f.\nonumber \\ \forall f,g\in \mathcal{A}^*&,\quad \forall a,b\in \mathcal{A}, \nonumber\\ fa\uparrow gb=(f\uparrow gb)a&+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a.\nonumber \\ \forall \sigma,\tau \in \mathbb{Z}\langle\mathcal{A} \rangle, \quad &\sigma\uparrow \tau=\sum_{f,g\in \mathcal{A}^*} \langle \sigma,f \rangle \langle \tau,g \rangle (f\uparrow g). \label{def:infiltforseq} \end{align} \subsection{Proof of the channel equivalence in Theorem~\ref{thm:channel_equiv}.} \label{app:channel_equiv} We first state a lemma (introduced in \cite{Srini2019}) which is closely related to the channel equivalence. 
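The recursive definition \eqref{def:infiltforseq} lends itself to a direct implementation on words. The following minimal Python sketch (words as strings, polynomials as coefficient maps; all names are ours) also checks, on a small example, the identity ${h \choose f_1}{h \choose f_2} = \sum_w \langle f_1\uparrow f_2, w\rangle {h \choose w}$ relating subsequence counts to the infiltration product:

```python
from collections import Counter
from functools import lru_cache

@lru_cache(maxsize=None)
def infiltrate(f, g):
    # Infiltration product of two words, returned as a sorted tuple of
    # (word, coefficient) pairs, via the recursion
    # fa|gb = (f|gb)a + (fa|g)b + [a==b](f|g)a, with e as the empty word.
    if not f or not g:
        return ((g if not f else f, 1),)
    a, b = f[-1], g[-1]
    out = Counter()
    for w, c in infiltrate(f[:-1], g):
        out[w + a] += c
    for w, c in infiltrate(f, g[:-1]):
        out[w + b] += c
    if a == b:
        for w, c in infiltrate(f[:-1], g[:-1]):
            out[w + a] += c
    return tuple(sorted(out.items()))

def subseq_count(h, w):
    # Number of ways w occurs as a (not necessarily contiguous) subsequence of h,
    # i.e. the binomial coefficient (h choose w), computed by a standard DP.
    dp = [1] + [0] * len(w)
    for ch in h:
        for j in range(len(w) - 1, -1, -1):
            if w[j] == ch:
                dp[j + 1] += dp[j]
    return dp[len(w)]
```

For instance, the recursion yields $a\uparrow a = a + 2\,aa$, as expected.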
The proof of the Lemma is relegated to Appendix~\ref{app:bin_inf_lemma}. \begin{restatable}{lemma}{bininfrelation} \label{lemma:bin_inf_relation} For $h,f_1,f_2,...,f_m \in \mathcal{A}^*$,\\ $${h \choose f_1} {h \choose f_2}...{h \choose f_m}=\sum_{w\in \mathcal{A}^*}\langle f_1\uparrow f_2\uparrow ...\uparrow f_m,w \rangle{h \choose w}.$$ \end{restatable} The channel equivalence can essentially be tied to this lemma as follows: consider the two channel models in Fig.~\ref{fig:channel_equiv}. The probability of observations given the input in both cases is proportional to the number of ways of obtaining the observations given the input. \begin{itemize} \item For the $t$-trace deletion channel model in Fig.~\ref{fig:channel_equiv} (a), the number of ways to obtain the traces given the input is equal to ${X \choose \tilde{Y}^{(1)}}{X \choose \tilde{Y}^{(2)}}...{X \choose \tilde{Y}^{(t)}}$. \item For the cascade model in Fig.~\ref{fig:channel_equiv} (b), the number of ways to obtain the traces given the input is equal to $\sum_{z} {X \choose z} \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle$ (this is shown in the proof of the theorem). \end{itemize} Clearly, by Lemma~\ref{lemma:bin_inf_relation}, the above two are equal, thus proving the equivalence of the two channel models. Conversely, this equivalence provides a nice physical interpretation of Lemma~\ref{lemma:bin_inf_relation}. \channelequiv* \begin{proof} We show the probabilistic equivalence between the two channel models in Fig.~\ref{fig:channel_equiv}: 1) $t$ independent deletion channels and 2) cascade of a deletion channel with parameter $\delta^t$ with the \textit{remnant} channel with parameter $\delta$ defined as follows: an input symbol to the remnant channel is included in $k>0$ given outputs and deleted in the rest with a probability $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. 
Given this definition, we now first compute the probability of a given set of output sequences given an input sequence for the remnant channel, namely $\Pr(\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|Z)$. First, note that there can be multiple deletion patterns corresponding to outputs $\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}$ resulting from a given input $Z$. The number of such patterns is equal to $\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},Z \rangle$, which essentially follows from the definition of the infiltration product. Consider one such valid deletion pattern, i.e., a deletion pattern $\mathcal{D}$ that is a mapping of the symbols in $Z$ onto the symbols in $\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}$: $\mathcal{D}=\{(1,S_1),(2,S_2),...,(|Z|,S_{|Z|})\}$. Here $(i,S_i)$ represents the fact that $Z_i$ is not deleted in the output set $\tilde{Y}^{(S_i)}$ and is deleted in the rest. Clearly, $|S_i|>0$ from the definition of the remnant channel. Also $\sum_{i=1}^{|Z|}|S_i|=\sum_{j=1}^t|\tilde{Y}^{(j)}|$ since every symbol of each output is associated with exactly one input symbol and hence corresponds to one particular $S_i$. Thus, \begin{align*} \Pr(&\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|Z) \\&= \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},Z \rangle \Pr(\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|Z,\mathcal{D})\\ &=\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},Z \rangle \prod_{i=1}^{|Z|}\frac{(1-\delta)^{|S_i|}\delta^{t-|S_i|}}{1-\delta^t}\\ &=\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},Z \rangle \frac{(1-\delta)^{\sum|S_i|}\delta^{|Z|t-\sum |S_i|}}{(1-\delta^t)^{|Z|}}\\ &=\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},Z \rangle \frac{(1-\delta)^{\sum|\tilde{Y}^{(j)}|}\delta^{|Z|t-\sum |\tilde{Y}^{(j)}|}}{(1-\delta^t)^{|Z|}}. 
\end{align*} We can then compute the probability of the output given the input for the cascade channel as \begin{align*} \Pr(&\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|X)\\ &= \sum_{z} \Pr(\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)},Z=z|X)\\ &= \sum_{z} \Pr(Z=z|X)\Pr(\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|Z=z)\\ &= \sum_{z} \Bigg[{X \choose z} \delta^{t(|X|-|z|)}(1-\delta^t)^{|z|}\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle \frac{(1-\delta)^{\sum|\tilde{Y}^{(j)}|}\delta^{|z|t-\sum |\tilde{Y}^{(j)}|}}{(1-\delta^t)^{|z|}}\Bigg]\\ &= \left[\sum_{z} {X \choose z} \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle\right] \delta^{t|X|-\sum |\tilde{Y}^{(j)}|} {(1-\delta)^{\sum|\tilde{Y}^{(j)}|}}\\ &={X \choose \tilde{Y}^{(1)}}{X \choose \tilde{Y}^{(2)}}...{X \choose \tilde{Y}^{(t)}} \delta^{t|X|-\sum |\tilde{Y}^{(j)}|} {(1-\delta)^{\sum|\tilde{Y}^{(j)}|}}\\ &=\prod_{j=1}^t {X \choose \tilde{Y}^{(j)}} \delta^{|X|-|\tilde{Y}^{(j)}|} {(1-\delta)^{|\tilde{Y}^{(j)}|}}\\ &=\Pr(Y^{(1)}, Y^{(2)},..., Y^{(t)}|X), \end{align*} proving the equivalence. \end{proof} \subsection{Proof of Lemma~\ref{lemma:F_decomposition}.} \label{app:F_lemma_proof} \deletionMLrellemma* \begin{proof} The proof of this lemma uses a similar approach as the proof of Thm.~\ref{thm:approx_smap_postprob}. 
First, in the expression for $\mathbf F(\cdot )$, we separate out the subsets that contain index $i$: \begin{align*} \mathbf F(p,Y) &= \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j}\\ &= \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m, \\ i \notin \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j} + \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j}\\ &= \mathbf F(p_{[n]\backslash \{i\}},Y) + \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j}. \numberthis \label{eq:Flemma_proof1} \end{align*} Now the second term can be further split as, \begin{align*} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j} &= \sum_{k=1}^m\sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ \mathcal S_k = i}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j}. 
\end{align*} One could express the set $\mathcal S$ as the union $\mathcal S = \mathcal S' \cup \{i\} \cup \mathcal S''$ such that $\mathcal S' \subseteq [i-1]$ and $\mathcal S'' \subseteq [i+1:n]$ to get \begin{align*} &\sum_{k=1}^m\sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ \mathcal S_k = i}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j}\\ &= \sum_{k=1}^m\sum_{\substack{\mathcal S'|\\ \mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\sum_{\substack{\mathcal S''|\\ \mathcal S'' \subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \left( \prod\limits_{j=1}^{k-1} p_{\mathcal S'_j}^{Y_j} (1-p_{\mathcal S'_j})^{1-Y_j} \right) \left( p_{i}^{Y_k} (1-p_{i})^{1-Y_k} \right) \left( \prod\limits_{j=1}^{m-k} p_{\mathcal S''_j}^{Y_{j+k}} (1-p_{\mathcal S''_j})^{1-Y_{j+k}} \right)\\ &= \sum_{k=1}^m p_{i}^{Y_k} (1-p_{i})^{1-Y_k}\left(\sum_{\substack{\mathcal S'|\\ \mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}} \prod\limits_{j=1}^{k-1} p_{\mathcal S'_j}^{Y_j} (1-p_{\mathcal S'_j})^{1-Y_j} \right) \left( \sum_{\substack{\mathcal S''|\\ \mathcal S'' \subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \prod\limits_{j=1}^{m-k} p_{\mathcal S''_j}^{Y_{j+k}} (1-p_{\mathcal S''_j})^{1-Y_{j+k}} \right)\\ &= \sum_{k=1}^m p_{i}^{Y_k} (1-p_{i})^{1-Y_k} \mathbf F(p_{[i-1]},Y_{[k-1]}) \mathbf F(p_{[i+1:n]},Y_{[k+1:m]}). \end{align*} The $\sum_{k=1}^m$ summation in the above expression could further be split into the two cases depending on whether $Y_k=0$ or $Y_k=1$, which simplifies the term $p_{i}^{Y_k} (1-p_{i})^{1-Y_k}$ to either $1-p_i$ or $p_i$ respectively. 
Thus, \begin{align*} &\sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{Y_j} (1-p_{\mathcal S_j})^{1-Y_j}\\ &= (1-p_i)\sum_{k|Y_k=0} \mathbf F(p_{[i-1]},Y_{[k-1]}) \mathbf F(p_{[i+1:n]},Y_{[k+1:m]}) + p_i\sum_{k|Y_k=1} \mathbf F(p_{[i-1]},Y_{[k-1]}) \mathbf F(p_{[i+1:n]},Y_{[k+1:m]}).\numberthis \label{eq:Flemma_proof2} \end{align*} \noindent Plugging \eqref{eq:Flemma_proof2} in \eqref{eq:Flemma_proof1} concludes the proof of the Lemma. \end{proof} \subsection{Symbolwise posterior probabilities for the remnant channel} \label{app:remnant_postprob} Consider the remnant channel shown below, and let $Z=Z_1Z_2...Z_n$. Also let $Z_i \sim \text{Ber}(0.5)$. We aim to compute $\Pr(Z_i=1|\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)})$. \begin{figure}[!h] \centering \includegraphics[scale=0.4]{remnant_channel.pdf} \caption{The remnant channel} \end{figure} As seen in appendix~\ref{app:channel_equiv}, the input-output relation for this channel is given by: \begin{align*} \Pr(\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|Z) =\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},Z \rangle \frac{(1-\delta)^{\sum|\tilde{Y}^{(j)}|}\delta^{nt-\sum |\tilde{Y}^{(j)}|}}{(1-\delta^t)^{n}}. \end{align*} Now, one could write the symbolwise posterior probabilities for $Z$ as: \begin{align*} \Pr(Z_i=1&|\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}) = \sum_{\substack{z||z|=n,\\z_i=1}} \Pr(z|\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)})\\ &{=} \frac{1}{2^n \Pr(\tilde Y^{(1)}...,\tilde Y^{(t)})} \sum_{\substack{z||z|=n,\\z_i=1}} \Pr(\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)}|z)\\ &{=} \frac{{(1-\delta)^{\sum|\tilde{Y}^{(j)}|}\delta^{nt-\sum |\tilde{Y}^{(j)}|}}}{(1-\delta^t)^{n} 2^n \Pr(\tilde Y^{(1)},...,\tilde Y^{(t)})} \sum_{\substack{z||z|=n,\\z_i=1}} \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle. 
\numberthis \label{eq:remant_map_prob_1} \end{align*} A similar expression can be obtained for the case when $Z_i=0$ as \begin{align*} \Pr(Z_i=0&|\tilde Y^{(1)},\tilde Y^{(2)},...,\tilde Y^{(t)})\\ &{=} \frac{{(1-\delta)^{\sum|\tilde{Y}^{(j)}|}\delta^{nt-\sum |\tilde{Y}^{(j)}|}}}{(1-\delta^t)^{n} 2^n \Pr(\tilde Y^{(1)},...,\tilde Y^{(t)})} \sum_{\substack{z||z|=n,\\z_i=0}} \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle. \numberthis \label{eq:remant_map_prob_0} \end{align*} We could further simplify \eqref{eq:remant_map_prob_1} and \eqref{eq:remant_map_prob_0} using the fact that \begin{align*} \Pr(Z_i=1|\tilde Y^{(1)},...,\tilde Y^{(t)}) &= \frac{\Pr(Z_i=1|\tilde Y^{(1)},...,\tilde Y^{(t)})}{\Pr(Z_i=0|\tilde Y^{(1)},...,\tilde Y^{(t)})+\Pr(Z_i=1|\tilde Y^{(1)},...,\tilde Y^{(t)})} \\ &= \frac{\sum\limits_{\substack{z||z|=n,\\z_i=1}} \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle}{\sum\limits_{\substack{z||z|=n}} \langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle}. \numberthis \label{eq:remnant_map_prob} \end{align*} We precisely describe the algorithm which computes the terms in \eqref{eq:remnant_map_prob} in section~\ref{sec:exactsmap}, by exploiting the edit graph interpretation of the infiltration product, but give a high level idea below. The complexity of such an algorithm is $O((2n)^t)$ which is equal to the number of edges in the edit graph. Note that for a fixed number of traces, this algorithm is polynomial in the blocklength as opposed to a naive approach of iterating through all the $n$-length sequences. Recall that $\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle$ is the number of paths from origin to destination of the edit graph $\mathcal G(\tilde Y^{(1)}, ..., \tilde Y^{(t)})$ which correspond to $z$. 
Therefore, $\sum_{\substack{z||z|=n}}\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle$ is equal to the number of $n$-length paths in $\mathcal G(\tilde Y^{(1)}, ..., \tilde Y^{(t)})$ from the origin to the destination. Note that the edit graph has no cycles, so this quantity can be efficiently computed via the following dynamic program -- the number of $n$-length paths from the origin to a vertex $v$ is equal to the sum of the numbers of $(n-1)$-length paths from the origin to the in-neighbors of $v$. Such a procedure iterates over the vertex set of $\mathcal G(\tilde Y^{(1)}, ..., \tilde Y^{(t)})$ exactly once. The numerator term $\sum_{\substack{z||z|=n\\z_i=1}}\langle \tilde Y^{(1)}\uparrow \tilde Y^{(2)} \uparrow...\uparrow \tilde Y^{(t)},z \rangle$ can be interpreted in a similar way: it is equal to the number of $n$-length paths in $\mathcal G(\tilde Y^{(1)}, ..., \tilde Y^{(t)})$ from the origin to the destination such that the $i^{th}$ edge of the path corresponds to a `1'. The algorithm for this, therefore, follows a similar principle but has an extra step. For each vertex $v$, we compute \begin{itemize} \item the number of paths from the origin to $v$ of length $0,1,...,n$, \item the number of paths from $v$ to the destination of length $0,1,...,n$. \end{itemize} Next we iterate over all edges in $\mathcal G(\tilde Y^{(1)}, ..., \tilde Y^{(t)})$ corresponding to a `1' and accumulate the number of $n$-length paths which have this particular edge as their $i^{th}$ edge. Thus, this algorithm iterates over the vertex set twice and over the edge set of $\mathcal G(\tilde Y^{(1)}, ..., \tilde Y^{(t)})$ once. \subsection{Proof of Lemma~\ref{lemma:bin_inf_relation}} \label{app:bin_inf_lemma} \bininfrelation* \begin{proof} We use induction to prove the statement. The statement is trivially true when $m=1$, since $\sum_{w}{h \choose w}\langle f_1,w \rangle={h \choose f_1}$ as $\langle f,w \rangle=\mathbbm{1}_{f=w}$.
We refer the reader to \cite{lothaire1997combinatorics} for a proof of the lemma in the case $m=2$. Assume that the statement is true for $m=k \in \mathbb{Z}, k\geq 2$; we next prove it for $m=k+1$. \\ Using the induction hypothesis on the first $k$ factors, we have \begin{align} {h \choose f_1} {h \choose f_2}...{h \choose f_k}{h \choose f_{k+1}} &=\sum_{w}{h \choose w}\langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle {h \choose f_{k+1}}\nonumber\\ &=\sum_{w}\left[{h \choose w} {h \choose f_{k+1}}\right]\langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle\nonumber\\ &=\sum_{w}\left[\sum_v \langle w\uparrow f_{k+1},v \rangle {h \choose v}\right]\langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle\nonumber\\ &=\sum_{v}{h \choose v}\left[\sum_w \langle w\uparrow f_{k+1},v \rangle \langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle\right], \label{eq:prop2proof} \end{align} where the third equality applies the case $m=2$ to the product ${h \choose w} {h \choose f_{k+1}}$. To evaluate the term in the square bracket, we use \eqref{def:infiltforseq}. For the case where $\tau \in \mathcal{A}^*,\sigma \in \mathbb{Z}\langle \mathcal{A} \rangle$ in \eqref{def:infiltforseq}, we have $$\sigma\uparrow \tau=\sum_{f\in \mathcal{A}^*} \langle \sigma,f \rangle (f\uparrow \tau),$$ and thus \begin{equation} \langle \sigma \uparrow \tau,u \rangle=\sum_{f\in \mathcal{A}^*} \langle \sigma,f \rangle \langle f\uparrow \tau,u\rangle. \label{eq:prop2proof2} \end{equation} We use \eqref{eq:prop2proof2} to replace the term in the square bracket in \eqref{eq:prop2proof}, i.e., \begin{align} {h \choose f_1} {h \choose f_2}...{h \choose f_k}{h \choose f_{k+1}} =\sum_{v}{h \choose v}\langle (f_1\uparrow f_2\uparrow ...\uparrow f_k) \uparrow f_{k+1},v \rangle, \end{align} and the lemma follows from the associativity property of the infiltration product.
\end{proof} \subsection{Proof of Lemma~\ref{lemma:smapsum}} \label{app:smapsum} \begin{restatable}{lemma}{smapsum} \label{lemma:smapsum} \begin{align*} \sum_{\substack{f||f|{=}n\\f_i{=}a}}{f \choose g} = 2^{n-|g|}\Bigg(\frac{1}{2}{n-1 \choose |g|} &+\sum_{j|g_j=a}{i-1 \choose j-1}{n-i \choose |g|-j}\Bigg), \end{align*} where $j\in\Big[\max\{1,|g|+i-n\}:\min\{i,|g|\}\Big]$. \end{restatable} \begin{proof} First, observe that $${f \choose g} = \sum_{\substack{S\subseteq [n]:\\|S|=|g|}} \mathbbm 1_{f_S=g},$$ where the summation is over all ordered subsets of $[n]=\{1,2,...,n\}$ of size $|g|$ and $f_S$ corresponds to the subsequence of $f$ indexed by $S$. Thus, \begin{align*} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}}&{f \choose g} = \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \sum_{\substack{S\subseteq [n]|\\|S|=|g|}} \mathbbm 1_{f_S=g} = \sum_{\substack{S\subseteq [n]|\\|S|=|g|}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g}\\ &=\sum_{\substack{S\subseteq [n]|\\|S|=|g|\\i \notin S}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g} + \sum_{\substack{S\subseteq [n]|\\|S|=|g|\\i \in S}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g}\\ &=\sum_{\substack{S\subseteq [n]|\\|S|=|g|\\i \notin S}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g} + \sum_{j=1}^m \sum_{\substack{S\subseteq [n]|\\|S|=|g|\\S_j=i}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g}. \numberthis \label{eq:lemmasmapsum} \end{align*} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{lemma1.pdf} \vspace*{-2mm} \caption{Figure illustrating proof of Lemma~\ref{lemma:smapsum}.} \label{fig:lemmasmapsum} \end{center} \end{figure} The two terms in \eqref{eq:lemmasmapsum} can be visualized as the number of ways to fill up the blank spaces (spaces without arrows pointing to it in $f$) in Fig.~\ref{fig:lemmasmapsum}(a) and (b) respectively. 
Solving this counting problem is straightforward and yields $$\sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}}{f \choose g}=2^{n-|g|}\left(\frac{1}{2}{n-1 \choose |g|}+\sum_{j|g_j=a}{i-1 \choose j-1}{n-i \choose |g|-j}\right).$$ \end{proof} \section{Conclusions} \label{sec:conclusions} In this work we gave, to the best of our knowledge, the first results and techniques to compute posterior distributions over single and multiple deletion channels. We also provided a new perspective on maximum-likelihood decoding for the deletion channel by showing an equivalence between a discrete optimization problem and its relaxed version. In the process, we introduced a variety of tools (the relaxed binomial coefficient, the edit graph and the infiltration product) and demonstrated their use for analyzing deletion channels. We also presented numerical evaluations of our algorithms. \section{Introduction} \label{sec:intro} Sequence reconstruction over deletion channels, both with and without a codebook, has received considerable attention in the information theory as well as in the theoretical computer science literature. From an information theory perspective, reconstruction over the deletion channel, or more specifically a maximum-likelihood (ML) argument for the deletion channel, would give further insight into the capacity of the deletion channel, a long-standing open problem (see \cite{mitzenmacher2009survey}). To quote \cite{mitzenmacher2009survey} -- ``at the very least, progress in this direction would likely surpass previous results on the capacity of the deletion channels''.
Yet, there are no results on reconstruction over a deletion channel with statistical guarantees. In this work, we take steps in this direction. In this space, the problem of \textit{trace reconstruction}, as introduced in \cite{Batu2004}, has also received renewed interest in the past few years (see \cite{Holenstein2008,Peres2017,De2017,holden18,Nazarov:2017,holden2018lower,chase2019new}). The problem of trace reconstruction can be stated simply as follows: consider a sequence $X$ which is simultaneously passed through $t$ independent deletion channels to yield $t$ output subsequences (also called \textit{traces}) of $X$ (see Fig.~\ref{fig:tdeletion}). How many such traces are needed to reconstruct $X$ perfectly? A variety of upper and lower bounds for this problem have been proposed, both for worst-case and average-case reconstruction. Our problem formulation is complementary to this, as we discuss next. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2]{tdeletion.pdf} \caption{The $t$-trace deletion channel model: the sequence $X$ is passed through $t$ independent deletion channels to yield $t$ \textit{traces}. We aim to estimate $X$ from the $Y^{(i)}$s.} \label{fig:tdeletion} \end{center} \end{figure} \noindent \textbf{Problem formulation.} Given an input sequence of length $n$ (known a priori), the independently and identically distributed (i.i.d.) deletion channel deletes each input symbol independently with probability $\delta$, producing at its output a subsequence of the input sequence. Consider a sequence $X$ passed through $t$ ($t$ is fixed) such deletion channels as shown in Fig.~\ref{fig:tdeletion}. We call this the $t$-trace deletion channel model.
We ask three main questions: \begin{enumerate} \item For $t=1$ (also called the \textit{single-trace deletion channel}, see Fig.~\ref{fig:1deletion}), what is the maximum-likelihood estimate of $X$ having observed $Y$, i.e., a solution to $\argmax\limits_{x\in \{0,1\}^n}\Pr(Y|X=x)$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2]{1deletion.pdf} \caption{The single-trace deletion channel model.} \label{fig:1deletion} \end{center} \end{figure} \item For $t=1$ and $X_i \sim\ \text{ind. Ber}(p_i)$ in Fig.~\ref{fig:1deletion}, what are the posterior distributions of $X_i$ given the trace $Y$, i.e., compute $\Pr(X_i=\alpha|Y)$. \item For a fixed $t$, with $t>1$ and $X_i \sim\ \text{i.i.d. Ber}(0.5)$ in Fig.~\ref{fig:tdeletion}, what are the posterior distributions of $X_i$ given all traces $Y^{(1)}, Y^{(2)},...,Y^{(t)}$, i.e., compute $\Pr(X_i=\alpha|Y^{(1)}, Y^{(2)},...,Y^{(t)})$. \end{enumerate} We make two observations:\\ {\bf I.} An answer to 2) above does not lead to a natural solution for 3), since the deletion channel is not memoryless. In particular, for a memoryless channel, we have $Y^{(j)}_i - X_i - Y^{(k)}_i$ and hence $\Pr(X_i=\alpha|Y^{(j)}, Y^{(k)}) = \Pr(X_i=\alpha|Y^{(j)}_i, Y^{(k)}_i) \propto \Pr(X_i=\alpha|Y^{(j)}_i) \Pr(X_i=\alpha|Y^{(k)}_i)$; so one could first obtain the posterior probabilities from each independent observation and combine them afterwards. This is not the case for deletion channels, since the Markov chain $Y^{(j)}_i - X_i - Y^{(k)}_i$ no longer holds. As a result, one first needs to ``align'' all the observations in order to compute the likelihoods. \\ {\bf II.} Solving 3) naturally leads to an algorithm for trace reconstruction that selects the most likely value for each $X_i$. However, the problem formulation in 3) asks a question complementary to trace reconstruction: given a fixed (possibly small) number of traces, what is our ``best'' guess of $X$?
Unlike trace reconstruction, we are not concerned with perfect reconstruction (since perfect reconstruction may not be possible with just a few traces). We also note that error rate guarantees for our algorithms (not a part of this work) would naturally lead to upper bounds for trace reconstruction.\\ \noindent \textbf{Contributions.} Our main contributions are as follows. \begin{itemize}[leftmargin=5mm] \item We introduce mathematical tools and constructs to visualize and analyze single-trace and $t$-trace deletion error events (see Section~\ref{sec:notation}). \item For the single-trace deletion channel, we establish an equivalence between finding the optimal ML decoder and a continuous optimization problem we introduce (see Section~\ref{sec:1deletion_ML}). \item In Section~\ref{sec:1deletion}, we prove the following: \begin{theorem} For the single-trace deletion channel model with priors $X_i \sim \text{ind. Ber}(p_i)$ and observed trace $Y$, the symbolwise posterior probabilities $\Pr(X_i=1|Y)\ \forall\ i$ can be computed in $O(n^2)$ time complexity. \end{theorem} \item In Section~\ref{sec:exactsmap}, we prove the following: \begin{theorem} For the $t$-trace deletion channel model with priors $X_i \sim \text{i.i.d. Ber}(0.5)$ and observed traces $Y^{(1)},...,Y^{(t)}$, the symbolwise posterior probabilities $\Pr(X_i = 1|Y^{(1)},...,Y^{(t)})\ \forall\ i$ can be computed in $O(2^t n^{t+2})$ time complexity.\\ \end{theorem} \end{itemize} \noindent \textbf{Tools and techniques.} In terms of theoretical tools, the series of books by Lothaire (\cite{lothaire1997combinatorics,lothaire2002algebraic,lothaire2005applied}) extensively use algebraic tools for problems in the combinatorics of sequences (or \textit{words}), and our work is inspired by such techniques. We borrow some notation and leverage a few of their results in our work. 
\\ \noindent \textbf{Biological motivation.} Trace reconstruction in itself was motivated, in part, by problems in DNA sequence reconstruction. One such problem is to infer the DNA sequence of a common ancestor from samples of its descendants. Our problem definition, which considers a fixed value of $t$, fits naturally in a scenario with a fixed number of descendants, where perfect reconstruction may not be possible. Our motivation for considering this problem also comes from a recent DNA sequencing technology called \textit{nanopore sequencing}. The $t$-trace deletion channel model is a simplistic model that approximately captures the process of a DNA sequence passed through a nanopore sequencer\footnote{As seen in \cite{Mao2017},\cite{MDK17}, there are more complicated effects of the nanopore reader not captured in this simple representation.}. \\ \noindent \textbf{More related work.} Our work falls under the general umbrella of sequence reconstruction over deletion channels, where we offer, to the best of our knowledge, the first non-trivial results on maximum-likelihood and maximum a posteriori estimates for the single and multiple deletion channel. As mentioned earlier, the complementary problem of trace reconstruction falls closest to this work. The deletion channel by itself is known to be notoriously difficult to analyze. As stated earlier, the capacity of a single deletion channel is still unknown (\cite{diggavi2007capacity,diggavi2006information,diggavi2001transmission}), as are optimal coding schemes. Prior works have looked at the design of codes for deletion channels (\cite{ratzer2005marker,ratzer2000codes,thomas2017polar}); these works consider the use of a codebook (we do not). Statistical estimation over deletion channels is a difficult problem to analyze due to its highly combinatorial nature. To the best of our knowledge, as yet there are no efficient estimation algorithms over deletion channels with statistical guarantees.
Very recently, a variant of the trace reconstruction problem called \textit{coded trace reconstruction} has been proposed, motivated by portable DNA-based data storage systems using DNA nanopores (see \cite{abroshan2019coding,cheraghchi2019coded,brakensiek2019coded}), and we believe that the ideas in this work may prove useful in such a setting. There are other works on sequence assembly (see, for example, \cite{Li09fastand}, \cite{Shomorony2016}), where multiple short reads (from different segments of a sequence) are used to reconstruct the bigger sequence. This work differs from sequence assembly since we are interested in inferring the entire sequence and not just small segments of it (which are then ``stitched'' together in sequence assembly).\\ \noindent \textbf{Paper Organization.} Section~\ref{sec:notation} introduces our notation and visualization tools for the single-trace and $t$-trace channel error events; Section~\ref{sec:1deletion_ML} provides a result concerning question 1), wherein we prove the equivalence of ML decoding to solving a continuous optimization problem; Section~\ref{sec:1deletion} answers question 2) for the single-trace channel; Section~\ref{sec:exactsmap} answers question 3) for the $t$-trace deletion channel; Section~\ref{sec:Numerics} gives numerical evaluations; and Section~\ref{sec:conclusions} concludes the paper. \section{Symbolwise posterior probabilities for the $t$-trace deletion channel} \label{sec:exactsmap} In this section, we put to use the ideas and constructs introduced in Section~\ref{sec:notation} to exactly compute the symbolwise posterior probabilities given $t$ traces, which in turn gives a symbolwise MAP estimate (with uniform priors) of the input sequence. In Appendix~\ref{app:remnant_postprob}, we also provide a method to compute the symbolwise posterior probabilities for the remnant channel -- we encourage the reader to use this appendix as a warm-up.
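Before the formal development, we illustrate the edit-graph path-counting idea that underlies both the warm-up in Appendix~\ref{app:remnant_postprob} and the algorithm of this section. The following Python sketch is purely illustrative (it is not Alg.~\ref{alg:exact_smap}; the function name is our own choice, and we restrict to $t=2$ traces for readability): it computes, for every $k$, the quantity $\sum_{w||w|=k}\langle Y^{(1)} \uparrow Y^{(2)},w \rangle$ by counting $k$-edge paths from the origin to the destination of the edit graph.

```python
def edit_graph_path_counts(y1, y2):
    """Return a list N with N[k] = number of k-edge paths from the
    origin (0, 0) to the destination (len(y1), len(y2)) in the edit
    graph of the two traces y1, y2; this equals the sum over words w
    of length k of the infiltration coefficient <y1 (up) y2, w>."""
    n1, n2 = len(y1), len(y2)
    # layer[(i, j)] = number of k-edge paths from the origin to (i, j)
    layer = {(0, 0): 1}
    counts = [layer.get((n1, n2), 0)]          # k = 0
    for _ in range(n1 + n2):                   # paths have at most n1+n2 edges
        nxt = {}
        for (i, j), c in layer.items():
            if i < n1:                         # advance only in trace 1
                nxt[(i + 1, j)] = nxt.get((i + 1, j), 0) + c
            if j < n2:                         # advance only in trace 2
                nxt[(i, j + 1)] = nxt.get((i, j + 1), 0) + c
            if i < n1 and j < n2 and y1[i] == y2[j]:
                # diagonal edge: both traces advance on a matching symbol
                nxt[(i + 1, j + 1)] = nxt.get((i + 1, j + 1), 0) + c
        layer = nxt
        counts.append(layer.get((n1, n2), 0))
    return counts
```

For example, with $Y^{(1)}=Y^{(2)}=\texttt{1}$ we have $Y^{(1)}\uparrow Y^{(2)} = 2\cdot(\texttt{11})+\texttt{1}$, and the sketch returns the path counts $[0,1,2]$ for $k=0,1,2$.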
For the $t$-trace deletion channel, similar expressions arise due to the channel equivalence result of Theorem~\ref{thm:channel_equiv}. Let $\mathcal A = \{0,1\}$, and assume that $X\sim $ Uniform $\mathcal A^n$. Our goal is to compute the symbolwise posterior probabilities $\Pr(X_i{=}1|Y^{(1)},...,Y^{(t)})$, where $Y^{(j)}$ is the $j^{th}$ trace. Our proposed algorithm is provided in Alg.~\ref{alg:exact_smap} and estimates the symbolwise MAP (with uniform priors). We can directly leverage Alg.~\ref{alg:exact_smap} to reconstruct the input as follows: for each index $i$, compute $\Pr(X_i{=}1|Y^{(1)},...,Y^{(t)})$ and decide \begin{align*} \hat{X}_i = \begin{cases} 1,\quad &\text{if}\ \Pr(X_i{=}1|Y^{(1)},...,Y^{(t)})\geq 0.5 \\ 0, \quad &\text{otherwise}. \end{cases} \end{align*} Through the rest of this section, we show how to compute $\Pr(X_i{=}1|Y^{(1)},...,Y^{(t)})$\footnote{Symbolwise MAP with non-uniform priors is part of ongoing work.} in two steps: \begin{itemize} \item We first give an expression for $\Pr(X_i{=}1|Y^{(1)},...,Y^{(t)})$ which sums over a potentially exponential number of terms. \item We then show that this summation can be computed in polynomial time (polynomial in the blocklength $n$). \end{itemize} \noindent \textbf{Step 1: An expression for $\Pr(X_i{=}1|Y^{(1)},...,Y^{(t)})$.} \begin{theorem} \label{thm:exactSMAP_posteriorprob} Assume $X\sim $ Uniform $\mathcal A^n$, or equivalently $X_i \sim \text{i.i.d. Ber}(0.5)$. The posterior probability of the $i^{th}$ bit given the $t$ traces can be expressed as \begin{align*} \Pr(X_i=1|Y^{(1)},&...,Y^{(t)}) &\\ = & \Bigg[ \sum_{k=0}^n 2^{n-k-1}{n-1 \choose k} \sum_{w||w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \\ &+ \sum_{k=0}^n \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \Bigg] \Big/ \\ &\Bigg[ \sum_{k=0}^n 2^{n-k} {n \choose k} \sum_{\substack{w| |w|=k}} \langle Y^{(1)} \uparrow ...
\uparrow Y^{(t)},w \rangle \Bigg]. \numberthis \label{eq:posterior_prob} \end{align*} \end{theorem} Note that the summation index, $w| |w|{=}k$ is over all sequences $w$ of length $k$; this is an alternate expression for $w|w{\in}\mathcal A^k$. We follow this convention throughout the rest of the paper. \begin{proof} \begin{align*} \Pr(X_i=1&|Y^{(1)},...,Y^{(t)}) = \sum_{\substack{x||x|=n,\\x_i=1}} \Pr(x|Y^{(1)},...,Y^{(t)})\\ &\overset{(a)}{=} \frac{1}{2^n \Pr(Y^{(1)},...,Y^{(t)})} \sum_{\substack{x||x|=n,\\x_i=1}} \Pr(Y^{(1)},...,Y^{(t)}|x)\\ &\overset{(b)}{=} \frac{1}{2^n \Pr(Y^{(1)},...,Y^{(t)})} \sum_{\substack{x||x|=n,\\x_i=1}} \prod_{j=1}^t\Pr(Y^{(j)}|x), \end{align*} where $(a)$ uses Bayes' principle and $(b)$ is because each deletion channel acts independently. Recall that for a deletion channel with deletion probability $\delta$, $\Pr(Y|X=x)={x \choose Y}\delta^{|x|-|Y|}(1-\delta)^{|Y|}$. Also, using the fact that $\Pr(Y^{(1)},...,Y^{(t)})=\sum\limits_{\substack{x||x|=n}}\Pr(x) \Pr(Y^{(1)},...,Y^{(t)}|x)$ we have, \begin{align*} \Pr(X_i=1|Y^{(1)},...,Y^{(t)})= \frac{\sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose Y^{(1)}}...{x \choose Y^{(t)}}}{\sum\limits_{\substack{x||x|=n}} {x \choose Y^{(1)}}...{x \choose Y^{(t)}}}. \numberthis \label{eq:thmexactsmap_proofterm0} \end{align*} We first simplify the numerator $\sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose Y^{(1)}}...{x \choose Y^{(t)}}$; the denominator can be simplified using the same approach. Now, \begin{align*} \sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose Y^{(1)}}...{x \choose Y^{(t)}} &\overset{(a)}{=} \sum_{\substack{x||x|=n,\\x_i=1}} \sum_{w\in \{0,1\}^*} {x \choose w} \langle Y^{(1)} \uparrow Y^{(2)} \uparrow ... \uparrow Y^{(t)},w \rangle \\ &=\sum_{w\in \mathcal A^*} \langle Y^{(1)} \uparrow Y^{(2)} \uparrow ... \uparrow Y^{(t)},w \rangle \sum_{\substack{x||x|=n,\\x_i=1}}{x \choose w}\\ &\overset{(b)}{=}\sum_{w\in \mathcal A^*} \langle Y^{(1)} \uparrow Y^{(2)} \uparrow ... 
\uparrow Y^{(t)},w \rangle \\ & \hspace{2cm} \times 2^{n-|w|}\left(\frac{1}{2}{n-1 \choose |w|}+\sum_{j|w_j=1}{i-1 \choose j-1}{n-i \choose |w|-j}\right) \end{align*} where $(a)$ is due to Lemma~\ref{lemma:bin_inf_relation} and $(b)$ due to Lemma~\ref{lemma:smapsum} (both introduced in \cite{Srini2018}); see Appendix~\ref{app:bin_inf_lemma} and Appendix~\ref{app:smapsum} for the statement and proof. Note that Lemma~\ref{lemma:bin_inf_relation} is an alternate mathematical outlook of the channel equivalence in Theorem~\ref{thm:channel_equiv} (see Appendix~\ref{app:channel_equiv}). \noindent Therefore we have, \begin{align*} \sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose Y^{(1)}}...{x \choose Y^{(t)}} &\overset{(a)}{=} \sum_{k=0}^{\infty} 2^{n-k-1}{n-1 \choose k} \sum_{w| |w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \\ &\hspace{1cm}+ \sum_{k=0}^{\infty} \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \\ &\overset{(b)}{=} \sum_{k=0}^n 2^{n-k-1}{n-1 \choose k} \sum_{w| |w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \\ &\hspace{1cm}+ \sum_{k=0}^n \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle, \numberthis \label{eq:thmexactsmap_proofterm1} \end{align*} where in $(a)$ we first fix $|w|$ and then sum over all $w$ of the given length and $(b)$ holds because the combinatorial terms are $0$ when $k>n$. A similar analysis gives \begin{align*} \sum\limits_{x| |x|=n} &{x \choose Y^{(1)}}...{x \choose Y^{(t)}} = \sum_{k=0}^n 2^{n-k} {n \choose k} \sum_{\substack{w| |w|=k}} \langle Y^{(1)} \uparrow ... 
\uparrow Y^{(t)},w \rangle.\numberthis \label{eq:thmexactsmap_proofterm2} \end{align*} Plugging \eqref{eq:thmexactsmap_proofterm1} and \eqref{eq:thmexactsmap_proofterm2} in \eqref{eq:thmexactsmap_proofterm0}, we get the expression in Theorem~\ref{thm:exactSMAP_posteriorprob}, \begin{align*} \Pr(X_i=1|Y^{(1)},&...,Y^{(t)}) &\\ = & \Bigg[ \sum_{k=0}^n 2^{n-k-1}{n-1 \choose k} \sum_{w||w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \\ &+ \sum_{k=0}^n \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \Bigg] \Big/ \\ &\Bigg[ \sum_{k=0}^n 2^{n-k} {n \choose k} \sum_{\substack{w| |w|=k}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \Bigg]. \end{align*} \end{proof} \noindent \textbf{Step 2: Dynamic program to compute $\sum\limits_{w| |w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle$ and $\sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle$.} Note that the number of sequences $w| |w|=k$ is $O(2^k)$ so a naive evaluation is exponential in the blocklength $n$. We can, however, exploit the edit graph to come up with a dynamic program resulting in an algorithm which is polynomial in $n$. Recall that in the edit graph, $\langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle$ is equal to the number of distinct paths from the origin $(0,...,0)$ to the destination $(|Y^{(1)}|,...,|Y^{(t)}|)$ and which correspond to $w$. Hence, \begin{enumerate}[wide=0pt] \item[(a)] $\sum\limits_{w| |w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle$ is the number of distinct paths of length $k$ from origin to destination and, \item[(b)] $\sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle$ is the number of such paths of length $k$ such that the $j^{th}$ edge of the path corresponds to a `1'. 
\end{enumerate} \noindent With this interpretation, the dynamic program for (a) follows naturally -- the number of $k$-length paths from the origin to any vertex is the sum of the numbers of $(k{-}1)$-length paths from the origin to all incoming neighbors of that vertex. To make this formal, associate with each vertex $v$ a polynomial (in $\lambda$) whose coefficient of $\lambda^k$ equals the number of paths of length $k$ from the origin to $v$: we call it the ``forward-potential'' polynomial $p^{for}_v(\lambda)$ of vertex $v$; as earlier, the coefficient of $\lambda^k$ is denoted by $\langle p^{for}_v(\lambda),\lambda^k \rangle$. The dynamic program to compute $p^{for}_v(\lambda)$ for all $v$ can be expressed as: \begin{equation} p^{for}_v(\lambda) = \sum_{u|u\rightarrow v} \lambda p^{for}_u(\lambda). \end{equation} With this definition, we have $$\sum\limits_{w| |w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle =\langle p^{for}_{destination}(\lambda),\lambda^k \rangle.$$ In the example in Fig.~\ref{fig:editgraph_smap1}, one could do the following: order the vertices $(0,0)$ to $(3,3)$ lexicographically and then compute $p^{for}_v(\lambda)$ in the same order. This is valid because, due to the directed grid nature of the edit graph, all incoming neighbors of a vertex precede it in lexicographic order. We initialize $p^{for}_{(0,0)}(\lambda)=1$. For the example in Fig.~\ref{fig:editgraph_smap1}, the forward-potentials are shown in Fig.~\ref{fig:editgraph_smap2}. The complexity of this dynamic program is $O(2^tn^{t+1})$ as it goes over $O(n^t)$ vertices and for each vertex it sums $O(2^t)$ polynomials, each of degree $O(n)$. 
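To make the recursion concrete, here is a small Python sketch of the forward-potential computation for the two-trace case (our own illustration; the function names and the dictionary representation $\{k : \langle p^{for}_v(\lambda),\lambda^k\rangle\}$ are ours, not from the text), cross-checked against brute-force path enumeration:

```python
from collections import defaultdict
from itertools import product

def edit_graph(f, g):
    """Outgoing edges of the edit graph G(f, g): vertex -> [(neighbour, symbol)]."""
    edges = defaultdict(list)
    for i, j in product(range(len(f) + 1), range(len(g) + 1)):
        if i < len(f):                                   # vertical edge, symbol f_{i+1}
            edges[(i, j)].append(((i + 1, j), f[i]))
        if j < len(g):                                   # horizontal edge, symbol g_{j+1}
            edges[(i, j)].append(((i, j + 1), g[j]))
        if i < len(f) and j < len(g) and f[i] == g[j]:   # diagonal edge
            edges[(i, j)].append(((i + 1, j + 1), f[i]))
    return edges

def forward_potentials(f, g):
    """p^for_v for every vertex v, stored as {path length k: number of k-length
    paths from the origin to v}.  Vertices are processed in lexicographic order,
    which is valid because every edge points lexicographically forward."""
    edges = edit_graph(f, g)
    p = defaultdict(lambda: defaultdict(int))
    p[(0, 0)][0] = 1
    for v in sorted(edges):
        for u, _sym in edges[v]:
            for k, c in p[v].items():
                p[u][k + 1] += c
    return p

def paths_bruteforce(f, g):
    """Path-length counts from origin to destination via exhaustive DFS."""
    edges, dest = edit_graph(f, g), (len(f), len(g))
    counts = defaultdict(int)
    def dfs(v, k):
        if v == dest:
            counts[k] += 1
        for u, _sym in edges[v]:
            dfs(u, k + 1)
    dfs((0, 0), 0)
    return counts
```

Here `forward_potentials(f, g)[(len(f), len(g))]` gives, for each $k$, the number of $k$-length origin-to-destination paths, i.e., the quantity in (a) for two traces.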
\begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap2.pdf} \caption{The forward-potential $p^{for}_v(\lambda)$ at each vertex.} \label{fig:editgraph_smap2} \end{figure} We compute (b) as follows: pick an edge $(u{\rightarrow}v)$ which corresponds to `1', count the number of $(j{-}1)$-length paths from the origin to $u$ and multiply it with the number of $(k{-}j)$-length paths from $v$ to the destination -- this is exactly the number of paths of length $k$ whose $j^{th}$ edge is $(u{\rightarrow}v)$. Summing this term over all edges which correspond to `1' gives us the term in (b). Note that we have already computed the number of $k$-length paths ($\forall k$) from the origin to every vertex in $p^{for}_v(\lambda)$. We can similarly compute the number of $k$-length paths ($\forall k$) from every vertex to the destination as $p^{rev}_v(\lambda)$ -- the ``reverse-potential'' polynomial. The dynamic program for $p^{rev}_v(\lambda)$ is: \begin{equation} p^{rev}_v(\lambda) = \sum_{u|v\rightarrow u} \lambda p^{rev}_u(\lambda), \end{equation} with $p^{rev}_{destination}(\lambda)=1$. The reverse potentials for the example in Fig.~\ref{fig:editgraph_smap1} are shown in Fig.~\ref{fig:editgraph_smap3}. As in the case of the forward potentials, we first order the vertices reverse lexicographically and then invoke the dynamic program above sequentially to compute the reverse-potential polynomial at each vertex. \begin{algorithm}[t!] 
\caption{Computing the forward-potentials $p^{for}_u(\lambda)$ }\label{alg:forward_pot} \begin{algorithmic}[1] \item Input: Edit graph {$\mathcal G(Y^{(1)},...,Y^{(t)})$}\\ Outputs: $p^{for}_v(\lambda)\ \forall\ v$ \State Order the vertices from $(0,0,...,0)$ to $(|Y^{(1)}|,|Y^{(2)}|,...,|Y^{(t)}|)$ lexicographically; let the ordered list be $\mathcal V$ \State Initialise $p^{for}_{(0,...,0)}(\lambda)\gets 1$ \For{$v\ \in\ \mathcal V$} \State \textbf{assign} $p^{for}_v(\lambda)\gets \sum_{u|u\rightarrow v} \lambda p^{for}_u(\lambda)$ \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[t!] \caption{Computing the reverse-potentials $p^{rev}_u(\lambda)$ }\label{alg:reverse_pot} \begin{algorithmic}[1] \item Input: Edit graph {$\mathcal G(Y^{(1)},...,Y^{(t)})$}\\ Outputs: $p^{rev}_v(\lambda)\ \forall\ v$ \State Order the vertices from $(|Y^{(1)}|,|Y^{(2)}|,...,|Y^{(t)}|)$ to $(0,0,...,0)$ reverse lexicographically; let the ordered list be $\mathcal V$ \State Initialise $p^{rev}_{(|Y^{(1)}|,|Y^{(2)}|,...,|Y^{(t)}|)}(\lambda)\gets 1$ \For{$v\ \in\ \mathcal V$} \State \textbf{assign} $p^{rev}_v(\lambda) \gets \sum_{u|v\rightarrow u} \lambda p^{rev}_u(\lambda)$ \EndFor \end{algorithmic} \end{algorithm} \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap3.pdf} \caption{The reverse-potential $p^{rev}_v(\lambda)$ at each vertex.} \label{fig:editgraph_smap3} \end{figure} With this, the term in (b) can be expressed as: \begin{align*} \sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle =\hspace{-2mm} \sum_{\substack{(u,v)|\\s(u\rightarrow v)=1}} \hspace{-2mm}\langle p^{for}_u(\lambda),\lambda^{j-1} \rangle \langle p^{rev}_v(\lambda),\lambda^{k-j} \rangle. \end{align*} Alg.~\ref{alg:exact_smap} now summarizes the computation of the posterior probabilities. This algorithm iterates over all the edges (we have $O((2n)^t)$ of these), and also over $k$ and $j$ ($O(n)$ values each). 
The time complexity of Alg.~\ref{alg:exact_smap} hence is $O(2^tn^{t+2})$. \begin{algorithm}[t!] \caption{Symbolwise posterior probabilities with $t$ traces} \label{alg:exact_smap} \begin{algorithmic}[1] \item Input: Traces {$Y^{(1)},Y^{(2)},...,Y^{(t)}$, input length $n$}\\ Output: $\Pr(X_i=1|Y^{(1)},Y^{(2)},...,Y^{(t)})\ \forall\ i$ \State Construct edit graph $\mathcal G(Y^{(1)},...,Y^{(t)})$ \State Use Alg.~\ref{alg:forward_pot} and Alg.~\ref{alg:reverse_pot} on $\mathcal G(Y^{(1)},...,Y^{(t)})$ to calculate $p^{for}_v(\lambda)$ and $p^{rev}_v(\lambda)$ $\forall\ v$ \For{$k \in\ [0:n]$} \State \textbf{assign} $\sum\limits_{w| |w|=k} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \gets \langle p^{for}_{destination}(\lambda),\lambda^k \rangle.$ \For{each $j \in\ [1:n]$} \State Initialize $temp \leftarrow $ 0 \For{each edge $u\rightarrow v\ \in\ \mathcal G$} \If{$s(u{\rightarrow} v)=$ `1'} \State $temp\ += \langle p^{for}_u(\lambda),\lambda^{j-1} \rangle \langle p^{rev}_v(\lambda),\lambda^{k-j} \rangle$ \EndIf \EndFor \State \textbf{assign} $\sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle Y^{(1)} \uparrow ... \uparrow Y^{(t)},w \rangle \gets temp$ \EndFor \EndFor \For{$i \in\ [1:n]$} \State Use \eqref{eq:posterior_prob} to compute $\Pr(X_i=1|Y^{(1)},Y^{(2)},...,Y^{(t)})$ \EndFor \end{algorithmic} \end{algorithm} \section{Notation and Tools} \label{sec:notations} \noindent \textbf{Basic notations:} In this work, we borrow a majority of the notation and tools from \cite{lothaire1997combinatorics}, which deals with non-commutative algebra. We restate the definitions here for convenience. Calligraphic letters refer to sets, capitalized letters correspond to random variables and bold letters are used for functions. Let $\mathcal{A}$ be the set of all symbols. Throughout this work, we will focus on the case where $\mathcal{A}=\{0,1\}$, though our methods extend to arbitrarily large sets of finite size. 
Define $\mathcal{A}^n$ to be the set of all $n$-length sequences and $\mathcal{A}^*$ to be the set of all finite length sequences with symbols in $\mathcal{A}$. For a sequence $f$, $|f|$ denotes the length of $f$. For integers $i,j$, we define $[i:j] \triangleq \{i,i+1,...,j\}$ if $j\geq i$ and $[i:j] \triangleq \emptyset$ otherwise. Also define $[i] \triangleq [1:i]$.\\ \noindent \textbf{Binomial coefficient:} Given sequences $f$ and $g$ in $\mathcal{A}^*$, the number of subsequence patterns of $f$ that are equal to $g$ is called the \textit{binomial coefficient} of $g$ in $f$ and is denoted by $f \choose g$. For example, ${'apple' \choose 'ape'} = 2$ since $'ape'$ can be obtained from two (overlapping) subsequences of $'apple'$. When the alphabet $\mathcal{A}$ is of cardinality 1, ${f \choose g} = {|f| \choose |g|}$, the classical binomial coefficient with the respective lengths as its parameters. This definition can hence be thought of as a generalization of the classical binomial coefficient. We will denote by $e$ the sequence of length 0, and {define ${f \choose e} \triangleq 1\ \forall\ f\ \in\ \mathcal{A}^*$.} We also define the classical binomial coefficient ${a \choose b}\triangleq 0$ whenever $b>a$ or $b<0$ for ease of use. The binomial coefficient is central to this work and to the analysis of error events in deletion channels, because the input-output relation for a deletion channel (with deletion probability $\delta$, input $X$ and output $Y$) can be expressed as \begin{equation} \label{eq:in-out_relation} \Pr(Y=y|X=x) = {x \choose y} \delta^{|x|-|y|} (1-\delta)^{|y|}. \end{equation} The proof is straightforward -- the number of distinct error events that give rise to $y$ from $x$ is exactly the number of subsequences of $x$ which are equal to $y$. 
Each of these error events has a probability $\delta^{|x|-|y|} (1-\delta)^{|y|}$, wherein the exponent of $\delta$ corresponds to the deleted symbols and the exponent of $1-\delta$ to the undeleted symbols. Given the definition of the binomial coefficient, the maximum-likelihood (ML) estimate over a deletion channel with observed output $Y$ can be cast in the following form: \begin{align*} \argmax_{x \in \mathcal C} {x \choose Y},\numberthis \label{eq:ML_deletion} \end{align*} where $\mathcal C$ is the chosen codebook. In the case of multiple deletion channels with observed traces $Y^{(1)},...,Y^{(t)}$, the ML formulation is similar: \begin{align*} \argmax_{x \in \mathcal C} \prod_{j=1}^{t} {x \choose Y^{(j)}}.\numberthis \label{eq:ML_multiple_deletion} \end{align*} As yet, there is no known efficient way to come up with a solution for the above two formulations, even for \eqref{eq:ML_deletion} with $\mathcal C = \{0,1\}^n$ (see \cite{mitzenmacher2009survey}). In this work, we attempt to take a step in this direction by showing that a continuous relaxation of \eqref{eq:ML_deletion} is equivalent to \eqref{eq:ML_deletion}. However, an efficient algorithm to solve the optimization \eqref{eq:ML_deletion} remains open. In the context of trace reconstruction, the ultimate pursuit would be an algorithm for \eqref{eq:ML_multiple_deletion} with $\mathcal C = \{0,1\}^n$ and error analysis thereof. \noindent We now describe a function which can be thought of as a real-valued extension of the binomial coefficient. This function is used in sections~\ref{sec:1deletion_ML} and \ref{sec:1deletion}. 
Consider the function $\mathbf F(\cdot)$ defined as: \begin{definition} \label{def:f} \begin{align*} &\mathbf F: [0,1]^k \times \{0,1\}^l \rightarrow \mathbb{R},\\ \mathbf F(q, v)\triangleq &\begin{cases} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [k],\\|\mathcal S|=l}} \quad \prod\limits_{i=1}^l q_{\mathcal S_i}^{v_i} (1-q_{\mathcal S_i})^{1-v_i} & 1 \leq l\leq k \\ 1 & 0=l\leq k \\ 0 & \text{else}. \end{cases} \end{align*} \noindent An alternate definition is as follows: consider a random vector $Z\in\{0,1\}^k$ such that $Z_i\sim$ ind. Ber$(q_i)$, let $q$ be the vector of probabilities of length $k$, $\mathcal S\subseteq [k]$ a subset of the indices of size $l$, and ${v}$ a binary vector of length $l$. Then, $$\mathbf F(q,v)=\sum_{\substack{\mathcal S||\mathcal S|=l}} \Pr(Z_{\mathcal S}= v).$$ In other words, $\mathbf F(q,v)=\mathbb E_{Z\sim q}\, {Z \choose v}$, the expected number of times $v$ appears as a subsequence of $Z$. Note that if $q$ is a vector of $0$'s and $1$'s, then $Z=q$ with probability 1 and $\mathbf F( q, v)={q \choose v}$, the binomial coefficient of $v$ in $q$. Thus, $\mathbf F(\cdot)$ could be interpreted as an extension of the binomial coefficient where one of the parameters can take values in $[0,1]^k$ instead of $\{0,1\}^k$. \end{definition} Though at first sight $\mathbf F(q,v)$ sums over an exponential number of subsets, a dynamic programming approach can be used to compute it in $O(kl)$ time. The dynamic program is described in Section~\ref{sec:1deletion}. The following definitions and ideas are relevant only for Section~\ref{sec:exactsmap} and can be omitted for the sections on the single-trace deletion channel. 
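A minimal sketch of such a dynamic program (our own illustration; the function name `relaxed_binomial` is ours, not from the text) processes one coordinate of $q$ at a time:

```python
def relaxed_binomial(q, v):
    """Compute F(q, v) for q in [0,1]^k and binary v in O(|q||v|) time.

    D[j] holds F(q[:i], v[:j]) as coordinates of q are consumed one by one:
    either index i is excluded from the subset S (term D[j]), or it is the
    largest element of S and must produce symbol v[j] (term match * D[j-1]).
    """
    n, m = len(q), len(v)
    if m > n:
        return 0.0                       # the "else" case of the definition
    D = [1.0] + [0.0] * m                # D[0] = F(anything, empty sequence) = 1
    for i in range(1, n + 1):
        for j in range(min(i, m), 0, -1):      # descend so D[j-1] is still "old"
            match = q[i - 1] if v[j - 1] == 1 else 1.0 - q[i - 1]
            D[j] += match * D[j - 1]
    return D[m]

# For binary q the function reduces to the sequence binomial coefficient:
# relaxed_binomial([1, 0, 1], [1]) counts subsequences '1' of '101', giving 2.0.
```

For fractional $q$ it returns the expected subsequence count, e.g. $\mathbf F((0.5,0.5,0.5),(1)) = 1.5$ by linearity of expectation.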
Before getting to the mathematical definitions, we first state a result of ours that aids in thinking about error events in multiple deletion channels. \subsection{An alternate interpretation of the multiple deletion channel model} The events occurring in the $t$-deletion channel model can be categorized into two groups: \begin{enumerate} \item an input symbol is deleted in \textit{all} the $t$ traces, \item an input symbol is reflected in at least one of the traces. \end{enumerate} The error events of the first kind are in some sense neither ``correctable'' nor even ``detectable'', since it is impossible to tell what and where the deleted symbol could have been (although the probabilities need not be uniform). The events of the second kind, however, can still be detected, although they could likely be confused with a large set of similar events. This thought process gives rise to a natural decomposition of the $t$-deletion channel model into a cascade of two channels: the first one is a deletion channel which captures error events of the first kind, and the second one is what we call the \textit{remnant channel}, which captures events of the second kind (see Fig.~\ref{fig:channel_equiv}). More precisely, the remnant channel is defined as follows: \begin{definition} \textit{Remnant channel:} an input symbol to the remnant channel is reflected in $k>0$ given outputs and deleted in the rest with probability $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. \end{definition} It is easy to check that the probability of the union of all possible events here is $\sum_{k=1}^t {t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}=1$, validating our definition. 
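The normalization follows from the binomial theorem, since $\sum_{k=0}^t {t\choose k}\delta^{t-k}(1-\delta)^k=1$ and the excluded $k{=}0$ term equals $\delta^t$. A quick numerical check (our own sketch; function name is ours):

```python
from math import comb

def remnant_prob(t, delta, k):
    """Probability that an input symbol to the remnant channel is reflected
    in exactly k > 0 of the t outputs and deleted in the rest."""
    return delta ** (t - k) * (1 - delta) ** k / (1 - delta ** t)

# The events k = 1, ..., t partition the sample space of the remnant channel:
for t, delta in [(2, 0.1), (3, 0.5), (5, 0.9)]:
    total = sum(comb(t, k) * remnant_prob(t, delta, k) for k in range(1, t + 1))
    assert abs(total - 1.0) < 1e-12
```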
\begin{restatable}{theorem}{channelequiv} \label{thm:channel_equiv} The $t$-deletion channel model and the cascade of the deletion channel with the remnant channel shown in Fig.~\ref{fig:channel_equiv} are probabilistically equivalent, i.e., $$\Pr({Y}^{(1)},{Y}^{(2)},...,{Y}^{(t)}|X = x) = \Pr(\tilde{Y}^{(1)},\tilde{Y}^{(2)},...,\tilde{Y}^{(t)}|X = x).$$ \end{restatable} The formal proof of the theorem requires a few more definitions and is relegated to the appendix. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{channel_equiv.pdf} \caption{A channel equivalence result: the $t$-deletion channel model in (a) is probabilistically equivalent to the cascade of a deletion channel with the \textit{remnant channel} ($ \mathcal C_2$) in (b).} \label{fig:channel_equiv} \end{center} \end{figure} \noindent\textbf{Edit graph} (as defined in \cite{Gusfield1997}): We now define a graph construct which is closely related to the error events in the remnant channel. We start with a simple case and generalize subsequently. Given two sequences $f$ and $g$, we define an \textit{edit graph} in which every path connecting the ``origin'' to the ``destination'' yields a supersequence $h$ of $f,g$ that is ``covered'' by $f,g$ -- i.e., each symbol of $h$ comes from either $f$ or $g$ or both. In other words, given that $f$ and $g$ are the outputs of the remnant channel (with two outputs), each path from the origin of the edit graph to the destination corresponds to a possible input $h$ to the remnant channel and to an error event which resulted in $f$ and $g$ with input $h$. For $f$ and $g$ in $\mathcal A^*$, we form a graph $\mathcal{G}(f,g)$ with $(|f|+1)(|g|+1)$ vertices each labelled with a distinct pair $(i,j), 0\leq i\leq |f|,\ 0\leq j\leq |g|$. 
A directed edge $(i_1,j_1)\rightarrow(i_2,j_2)$ exists iff at least one of the following holds: \begin{enumerate} \item$i_2-i_1=1$ and $j_1=j_2$, or \item $j_2-j_1=1$ and $i_1=i_2$, or \item $i_2-i_1=1$, $j_2-j_1=1$ and $f_{i_2}=g_{j_2}$, \end{enumerate} where $f_i$ is the $i^{th}$ symbol of the sequence $f$. The origin is the vertex $(0,0)$ and the destination $(|f|,|g|)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap1.pdf} \caption{ Edit graph for sequences $f=$ `001' and $g=$ `101'. An easy way to think about this is to write down $f$ vertically with each symbol aligned to a vertical set of edges and $g$ horizontally likewise. A diagonal edge in a small square exists if the corresponding $f$ and $g$ symbols are equal. The thick red edges form a path from the origin to the destination; this path corresponds to the sequence `0101' -- append the corresponding symbol at the left of an edge if it's vertical or diagonal, otherwise append the symbol at the top. It is also easy to verify that 0101 is a supersequence of both $f$ and $g$, and could be obtained as a covering of $f$ and $g$; the path itself gives one such covering. This covering also corresponds to an error event in the remnant channel which would result in outputs $f$ and $g$ with input $h=$ `0101' -- more precisely, the error event is one in which the first symbol of $h$ is deleted only in $g$, the second symbol is deleted only in $f$ and the third and fourth symbols not deleted in either $f$ or $g$.} \label{fig:editgraph_smap1} \end{figure} Let $p=((i_1,j_1),(i_2,j_2),...,(i_m,j_m))$ be a path in $\mathcal{G}(f,g)$. We define $ s(p)$ to be the sequence corresponding to the path. Intuitively, $s(p)$ is formed by appending symbols in the following way: append the corresponding $f$ symbol for a vertical edge, $g$ symbol for horizontal edge, and $f$ or $g$ symbol for diagonal edge (see example Fig.~\ref{fig:editgraph_smap1}). 
Any path from $(0,0)$ to $(|f|,|g|)$ corresponds to a supersequence of $f$ and $g$ which is ``covered'' by $f$ and $g$. More formally, define $s(p)\triangleq x_1x_2...x_{m-1}$ where $$x_k = \begin{cases} f_{i_{k+1}} \quad\text{if }j_{k}=j_{k+1},\\ g_{j_{k+1}} \quad\text{if }i_{k}=i_{k+1},\\ f_{i_{k+1}} \quad\text{else.} \end{cases} $$ The construct of the edit graph can be extended to more than 2 sequences with the same idea. For sequences $f_1,f_2,...,f_t$, construct a $t$-dimensional grid with $(|f_1|+1)(|f_2|+1)...(|f_t|+1)$ vertices, labeled from $(0,0,...,0)$ to $(|f_1|,|f_2|,...,|f_t|)$. A vertex $u=(i_1,i_2,...,i_t)$ is connected to $v=(j_1,j_2,...,j_t)$ (we say $u \rightarrow v$) iff both of the following conditions are met: \begin{itemize} \item $j_l=i_l$ or $j_l=i_l+1$ $\forall\ l\in [t]$, i.e., $(i_1,...,i_t)$ and $(j_1,...,j_t)$ are vertices of a particular unit cube. Only such pairs of vertices can share an edge in the grid graph. \item Let $\mathcal T \subseteq [t]$ be the collection of indices where $j_l=i_l+1$. Then the corresponding symbols all agree, i.e., the $(i_l+1)^{th}$ symbol of $f_l$ is the same $\forall\ l \in \mathcal T$. For instance, if $\mathcal T=\{1,3,5\}$, then the $(i_1+1)^{th}$ symbol of $f_1$, the $(i_3+1)^{th}$ symbol of $f_3$ and the $(i_5+1)^{th}$ symbol of $f_5$ coincide. \end{itemize} Define the vertex $(0,...,0)$ to be the origin of this graph and the vertex $(|f_1|,...,|f_t|)$ to be the destination. If $|f_j|=O(n)\ \forall\ j$, this graph has $O(n^t)$ vertices and at most $O((2n)^t)$ edges, since each vertex has at most $2^t$ outgoing edges.\\ \noindent\textbf{Infiltration product} (introduced in \cite{lothaire1997combinatorics}): The infiltration product has been extensively used as a tool in non-commutative algebra \cite{lothaire1997combinatorics}. Here we give an edit-graph interpretation of this tool; a formal definition follows later in this section. Using the edit graph we can construct the set $\mathcal{S}(f,g)$ of possible supersequences of $f,g$ which are covered by them. 
Clearly multiple paths could yield the same supersequence, and we can count the number of distinct ways $\mathbf N(h;f,g)$ one can construct the same supersequence $h$ from $f,g$. We can informally define the \emph{infiltration product $f\uparrow g$} of $f$ and $g$ as a polynomial whose monomials are the supersequences $h$ in $\mathcal{S}(f,g)$, with coefficients $\langle f\uparrow g,h\rangle$ equal to $\mathbf N(h;f,g)$. In Fig.~\ref{fig:editgraph_smap1}, it is easy to verify that there is exactly one path corresponding to `101001' and hence $\langle 001\uparrow 101,101001 \rangle=1$; similarly $\langle 001\uparrow 101,01001 \rangle=2$. One could find these coefficients for all relevant sequences and form the polynomial as described. More examples: Let $\mathcal{A}=\{a,b\}$, then \begin{itemize}[wide=2pt] \item $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, \item $ab\uparrow ba=aba+bab+abab+2abba+2baab+baba.$ \end{itemize} The infiltration operation is commutative and associative, and the infiltration of two sequences $f\uparrow g$ is a polynomial whose monomials have length (or \textit{degree}) at most $|f|+|g|$; see \cite{lothaire1997combinatorics}. The definition of infiltration extends to two polynomials via distributivity (precisely defined later), and consequently to multiple sequences as well. For multiple sequences, infiltration has the same edit-graph interpretation: $\langle f_1\uparrow f_2 \uparrow...\uparrow f_t, w \rangle$ is the number of distinct ways of constructing $w$ as a supersequence of $f_1, f_2, ... ,f_t$ so that the construction covers $w$, i.e., construct the $t$-dimensional edit graph of $f_1, f_2, ... ,f_t$ and count the number of paths corresponding to $w$. \subsection{Formal definition of infiltration product} We now give a more formal definition of the infiltration product (see \cite{lothaire1997combinatorics} for the equivalence of the two definitions and a more rigorous treatment). 
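Before the formal treatment, the path-counting interpretation above can be checked numerically. The sketch below (our own; the function name is ours) enumerates all origin-to-destination paths of $\mathcal G(f,g)$ and tallies the sequence each path spells out, reproducing the coefficients quoted above:

```python
from collections import Counter

def infiltration(f, g):
    """<f ^ g, w> for every w, computed by enumerating all paths of the
    edit graph G(f, g) from (0, 0) to (|f|, |g|) and recording s(path)."""
    coeffs = Counter()
    def walk(i, j, w):
        if i == len(f) and j == len(g):      # reached the destination
            coeffs[w] += 1
            return
        if i < len(f):                       # vertical edge: append f's symbol
            walk(i + 1, j, w + f[i])
        if j < len(g):                       # horizontal edge: append g's symbol
            walk(i, j + 1, w + g[j])
        if i < len(f) and j < len(g) and f[i] == g[j]:   # diagonal edge
            walk(i + 1, j + 1, w + f[i])
    walk(0, 0, "")
    return coeffs
```

For instance, `infiltration('ab', 'ab')` recovers the expansion $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$ listed above; this exhaustive enumeration is only meant for small examples, not as an efficient algorithm.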
A \textit{formal series} with indeterminates (or variables) in a set $\mathcal A$ and coefficients in a commutative ring $\mathcal R$ is a mapping from $\mathcal A^*$ to $\mathcal R$. Recall that a commutative ring is a set which forms an abelian group under an \textit{addition} operation, is a monoid under a \textit{multiplication} operation which commutes, and the multiplication operation distributes over addition. Here we consider $\mathbb Z$, the set of integers, as the commutative ring $\mathcal{R}$. A formal series is called a \textit{polynomial} if only a finite number of sequences are mapped to non-zero values; the rest of the sequences map to zero. Consider two polynomials $\sigma,\tau: \mathcal{A}^*\rightarrow \mathbb Z$. The value taken by a sequence $w\in \mathcal A^*$ on $\sigma$ (or the coefficient of $w$ in $\sigma$) is denoted by $\langle \sigma,w\rangle \in \mathbb Z$. We also define binary addition ($\oplus$) and multiplication ($\times$) operations on the set of polynomials as follows: \begin{align} \langle \sigma\oplus \tau,w\rangle \triangleq \langle \sigma,w\rangle + \langle \tau,w \rangle \quad \forall w\in \mathcal A^*,\label{eq:polynomial_add}\\ \langle \sigma\times \tau,w\rangle \triangleq \sum_{\substack{f,g\in \mathcal A^*:\\ f.g=w}}\langle \sigma,f\rangle \langle \tau,g \rangle \quad \forall w\in \mathcal A^*.\label{eq:polynomial_prod} \end{align} We will use the usual symbols $+$ and $.$ in place of $\oplus$ and $\times$ in this work for convenience; the intended operation will be clear from the operands. With these operations the set of polynomials forms a non-commutative ring, denoted by $\mathbb Z\langle\mathcal A \rangle$ and also called the free $\mathbb Z$-algebra on $\mathcal A$ in ring theory. 
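These two operations are easy to make concrete with polynomials represented as dictionaries mapping monomials (sequences) to integer coefficients; the sketch below is our own illustration, with function names of our choosing:

```python
def poly_add(sigma, tau):
    """<sigma + tau, w> = <sigma, w> + <tau, w>  (coefficient-wise)."""
    out = dict(sigma)
    for w, c in tau.items():
        out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c != 0}

def poly_mul(sigma, tau):
    """<sigma . tau, w> sums <sigma, f><tau, g> over all splits w = f.g,
    where '.' on sequences is (non-commutative) concatenation."""
    out = {}
    for f, a in sigma.items():
        for g, b in tau.items():
            out[f + g] = out.get(f + g, 0) + a * b
    return out

# The product ('ab' + 2.'b') . 'a' differs from 'a' . ('ab' + 2.'b'):
sigma, tau = {'ab': 1, 'b': 2}, {'a': 1}
left, right = poly_mul(sigma, tau), poly_mul(tau, sigma)
assert left == {'aba': 1, 'ba': 2} and right == {'aab': 1, 'ab': 2}
assert left != right   # the ring is non-commutative
```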
Note that the addition and multiplication operations defined in \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are similar to the operations defined on commutative polynomials, except that the multiplication operation under the summation in \eqref{eq:polynomial_prod} ($f.g=w$) is actually concatenation and is non-commutative. The multiplication inside the summation in \eqref{eq:polynomial_prod} is multiplication in the real field and hence commutative. It is also easy to prove that the multiplication defined in \eqref{eq:polynomial_prod} distributes over the addition defined in \eqref{eq:polynomial_add}. Thus, a polynomial in $\mathbb Z\langle\mathcal A \rangle$ can be represented as a sum of monomials in $\mathcal A^*$ each with an associated coefficient in $\mathbb Z$, i.e., $\sigma=\sum\limits_{w\in \mathcal A^*} \langle\sigma,w \rangle w$. Define the \textit{degree} of a polynomial to be the length of a longest sequence with a non-zero coefficient in the polynomial, and the \textit{number of terms} of a polynomial as the number of sequences with non-zero coefficients in the polynomial. Note that a degree $d$ polynomial could have up to $2^{d+1}-1$ terms. With this, the \textit{infiltration product} (in general, for two polynomials) is defined as follows: \begin{align} \forall f\in \mathcal{A}^*,& \quad f\uparrow e = e\uparrow f=f.\nonumber \\ \forall f,g\in \mathcal{A}^*&,\quad \forall a,b\in \mathcal{A}, \nonumber\\ fa\uparrow gb=(f\uparrow gb)a&+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a.\nonumber \\ \forall \sigma,\tau \in \mathbb{Z}\langle\mathcal{A} \rangle, \quad &\sigma\uparrow \tau=\sum_{f,g\in \mathcal{A}^*} \langle \sigma,f \rangle \langle \tau,g \rangle (f\uparrow g). \label{def:infiltforseq} \end{align} \textbf{Summary of definitions and ideas introduced in this section:} \begin{itemize} \item The binomial coefficient captures the likelihood of observations for deletion channels. 
\item An extension of the binomial coefficient function where one of the parameters can take real values has been introduced; this function is pivotal for our results on the single-trace deletion channel. \item For multiple deletion channels, the error events can be categorized into two groups -- one where an input symbol is deleted in all the traces, and second, the complement of this event. This categorization results in a natural decomposition of multiple deletion channel model into a cascade model involving the remnant channel. \item The remnant channel disregards the error events where a symbol is deleted in all the traces. \item The edit graph provides a way to visualize all the possible error events and input sequences to a remnant channel given its outputs. \item The infiltration product serves the same purpose as the edit graph, but has an algebraic flavor and provides rigor for proofs and analyses. The edit graph, on the other hand, is more helpful in designing reconstruction algorithms over deletion channels. \end{itemize} \section{Notation and Tools} \label{sec:notation} \noindent \textbf{Basic notation:} We borrow some notation from \cite{lothaire1997combinatorics} which deals with non-commutative algebra; we restate them here for convenience. Calligraphic letters refer to sets, capitalized letters correspond to random variables and bold letters are used for functions. Let $\mathcal{A}$ be the set of all symbols. Throughout this work, we will focus on the case where $\mathcal{A}=\{0,1\}$, though our methods extend to arbitrarily large sets of finite size. Define $\mathcal{A}^n$ to be the set of all $n$-length sequences and $\mathcal{A}^*$ to be the set of all finite length sequences with symbols in $\mathcal{A}$. For a sequence $f$, $|f|$ denotes the length of $f$. For integers $i,j$, we define $[i:j] \triangleq \{i,i+1,...,j\}$ if $j\geq i$ and $[i:j] \triangleq \emptyset$ otherwise. We also define $[i] \triangleq [1:i]$. 
For a vector or sequence $x=(x_1,x_2,...,x_{i-1},x_i,x_{i+1},...,x_n)$, define $$x^{(i\rightarrow s)}\triangleq (x_1,x_2,...,x_{i-1},s,x_{i+1},...,x_n),$$ where the $i^{th}$ coordinate of $x$ is replaced by symbol $s$. \\ \noindent \textbf{Binomial coefficient \cite{lothaire1997combinatorics}:} Given sequences $f$ and $g$ in $\mathcal{A}^*$, the number of subsequence patterns of $f$ that are equal to $g$ is called the \textit{binomial coefficient} of $g$ in $f$ and is denoted by $f\choose g$. For example, ${'apple' \choose 'ape'} = 2$ since $'ape'$ can be obtained from two (overlapping) subsequences of $'apple'$. When the alphabet $\mathcal{A}$ is of cardinality 1, ${f \choose g} = {|f| \choose |g|}$, the classical binomial coefficient with their respective lengths as the parameters. This definition hence could be thought of as a generalization of the classical binomial coefficients. We will denote by $e$ the sequence of length 0, and {define ${f \choose e} \triangleq 1\ \forall\ f\ \in\ \mathcal{A}^*$.} We also define the classical binomial coefficient ${a \choose b}\triangleq 0,$ whenever $b>a$ or $b<0$ for ease of use. The binomial coefficient forms the backbone for the probabilistic analysis of deletion channels since the input-output relation for a deletion channel (with deletion probability $\delta$, input $X$ and output $Y$) can be expressed as \begin{equation} \label{eq:in-out_relation} \Pr(Y=y|X=x) = {x \choose y} \delta^{|x|-|y|} (1-\delta)^{|y|}. \end{equation} The proof is straightforward -- the number of distinct error events that give rise to $y$ from $x$ is exactly the number of subsequences of $x$ which are equal to $y$. Each of these error events has a probability $\delta^{|x|-|y|} (1-\delta)^{|y|}$, wherein the exponent of $\delta$ corresponds to the deleted symbols and the exponent of $1-\delta$ to the undeleted symbols. 
\\ \noindent \textbf{Maximum Likelihood (ML) estimate:} Given the definition of the binomial coefficient, the maximum-likelihood (ML) estimate over a deletion channel with observed output $Y$ can be cast in the following form: \begin{align*} \argmax_{x \in \{0,1\}^n} {x \choose Y}.\numberthis \label{eq:ML_deletion} \end{align*} In the case of multiple deletion channels with observed traces $Y^{(1)},...,Y^{(t)}$, the ML formulation is similar: \begin{align*} \argmax_{x \in \{0,1\}^n} \prod_{j=1}^{t} {x \choose Y^{(j)}}.\numberthis \label{eq:ML_multiple_deletion} \end{align*} As yet, there is no known efficient way to solve either of the above two formulations (see \cite{mitzenmacher2009survey}).\\ \noindent \textbf{Relaxed binomial coefficient.} We now introduce the function $\mathbf F(\cdot)$, which can be thought of as a real-valued relaxation of the binomial coefficient. This function is used in Sections~\ref{sec:1deletion_ML} and~\ref{sec:1deletion}. An intuitive definition is as follows: Consider a random vector $Z\in\{0,1\}^n$ such that $Z_i\sim$ ind. Ber$(p_i)$, let $p$ be the vector of probabilities of length $n$, let $\mathcal S\subseteq [n]$ denote a subset of the indices of size $m$, and let ${v}$ be a binary vector of length $m$. Then, $$\mathbf F(p,v)=\sum_{\mathcal S \subseteq [n]:\ |\mathcal S|=m} \Pr(Z_{\mathcal S}= v).$$ It is easy to see that $\mathbf F(p,v)=\mathbb E_{Z\sim p}\ {Z \choose v}$; in words, if $Z$ is sampled from a Bernoulli distribution parametrized by $p$, then $\mathbf F(p,v)$ is the expected number of times $v$ appears as a subsequence of $Z$. Clearly, if $p \in \{0,1\}^n$, then $Z=p$ with probability 1 and $\mathbf F(p,v) = {p \choose v}$.
More precisely, $\mathbf F(\cdot)$ is defined as: \begin{definition} \label{def:f} \begin{align*} &\mathbf F: [0,1]^n \times \{0,1\}^m \rightarrow \mathbb{R},\\ \mathbf F(p, v)\triangleq &\begin{cases} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m}} \quad \prod\limits_{i=1}^m p_{\mathcal S_i}^{v_i} (1-p_{\mathcal S_i})^{1-v_i} & 1 \leq m\leq n \\ 1 & 0=m\leq n \\ 0 & \text{else}. \end{cases} \end{align*} \end{definition} Though at first sight $\mathbf F(p,v)$ sums over an exponential number of subsets, a dynamic programming approach can be used to compute it in $O(nm)$ time complexity (see Appendix~\ref{app:F_compute}).\\ \noindent \textbf{Decomposition of the $t$-trace deletion channel:} The following definitions and ideas are relevant only for Section \ref{sec:exactsmap} and can be omitted for the sections on single-trace deletion channel. We first state a result of ours that aids in thinking about error events in multiple deletion channels. The events occurring in the $t$-deletion channel model can be categorized into two groups: \begin{enumerate} \item an input symbol is deleted in \textit{all} the $t$-traces, \item an input symbol is reflected in at least one of the traces. \end{enumerate} The error events of the first kind are in some sense ``not correctable'' or even ``detectable'' in any situation since it is impossible to tell with absolute certainty what and where the deleted symbol could have been (although the probabilities need not be uniform). The events of the second kind, however, can be detected and corrected in some situations. This thought process gives rise to a natural decomposition of the $t$-deletion channel model into a cascade of two channels: the first one being a deletion channel which captures error events of the first kind and the second one is what we call the \textit{remnant channel} which captures events of the second kind (see Fig.~\ref{fig:channel_equiv}). 
More precisely, we define the remnant channel as follows: \begin{definition} \textit{Remnant channel:} an input symbol to the remnant channel is reflected in $k>0$ given outputs and deleted in the rest with probability $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. \end{definition} It is easy to check that the probabilities of all possible events sum to one: $\sum_{k=1}^t {t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}=1$, since by the binomial theorem $\sum_{k=0}^{t} {t \choose k} \delta^{t-k}(1-\delta)^k=1$ and the excluded $k=0$ term equals $\delta^t$; this validates our definition. \begin{restatable}{theorem}{channelequiv} \label{thm:channel_equiv} The $t$-deletion channel model and the cascade of the deletion channel with the remnant channel shown in Fig.~\ref{fig:channel_equiv} are probabilistically equivalent, i.e., $$\Pr({Y}^{(1)},{Y}^{(2)},...,{Y}^{(t)}|X = x) = \Pr(\tilde{Y}^{(1)},\tilde{Y}^{(2)},...,\tilde{Y}^{(t)}|X = x).$$ \end{restatable} An algebraic proof of the theorem requires a few more definitions and is relegated to Appendix~\ref{app:channel_equiv}. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{channel_equiv.pdf} \caption{A channel equivalence result: the $t$-trace deletion channel model in (a) is probabilistically equivalent to the cascade of a deletion channel with the \textit{remnant channel} ($ \mathcal C_2$) in (b).} \label{fig:channel_equiv} \end{center} \end{figure} \noindent\textbf{Edit graph} (\cite{Gusfield1997}): Similar graph constructs have been defined for related problems on common supersequences and subsequences (see \cite{Nicosia2001} for example). This graph is closely related to the error events in the remnant channel. We start with a simple case and generalize subsequently. Given two sequences $f$ and $g$, we define an \textit{edit graph} in which every path connecting the ``origin'' to the ``destination'' yields a supersequence $h$ of $f,g$ that is ``covered'' by $f,g$ -- i.e., each symbol of $h$ comes from either $f$ or $g$ or both.
In other words, given that $f$ and $g$ are the outputs of the remnant channel (with two outputs), each path from the origin of the edit graph to the destination corresponds to a possible input $h$ to the remnant channel and to an error event which resulted in outputs $f,g$ with input $h$. For $f$ and $g$ in $\mathcal A^*$, we form a graph $\mathcal{G}(f,g)$ with $(|f|+1)(|g|+1)$ vertices each labelled with a distinct pair $(i,j), 0\leq i\leq |f|,\ 0\leq j\leq |g|$. A directed edge $(i_1,j_1)\rightarrow(i_2,j_2)$ exists iff at least one of the following holds: \begin{enumerate} \item$i_2-i_1=1$ and $j_1=j_2$, or \item $j_2-j_1=1$ and $i_1=i_2$, or \item $i_2-i_1=1$, $j_2-j_1=1$ and $f_{i_2}=g_{j_2}$, \end{enumerate} where $f_i$ is the $i^{th}$ symbol of the sequence $f$. The origin is the vertex $(0,0)$ and the destination $(|f|,|g|)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap1.pdf} \caption{ Edit graph for sequences $f=$ `001' and $g=$ `101'. Make a grid so the vertical edges are aligned with a symbol in $f$ and horizontal edges with $g$ as shown. A diagonal edge $(i{-}1,j{-}1)\rightarrow (i,j)$ exists if $f_i = g_j$. The thick red edges form a path from the origin to the destination; this path corresponds to $h=$`0101' -- sequentially append the corresponding symbol to which each edge is aligned. It is also easy to verify that $h$ is a supersequence of both $f$ and $g$, and could be obtained as a covering of $f$ and $g$; the path itself gives one such covering. This covering also corresponds to an error event (or a deletion pattern) in the remnant channel which would result in outputs $f$ and $g$ with input $h=$ `0101' -- the deletion pattern is shown in the figure.} \label{fig:editgraph_smap1} \end{figure} Let $p=((i_1,j_1),(i_2,j_2),...,(i_m,j_m))$ be a path in $\mathcal{G}(f,g)$. We define $ s(p)$ to be the sequence corresponding to the path. 
Intuitively, $s(p)$ is formed by appending symbols in the following way: append the corresponding $f$ symbol for a vertical edge, the $g$ symbol for a horizontal edge, and the $f$ (equivalently $g$) symbol for a diagonal edge (see the example in Fig.~\ref{fig:editgraph_smap1}). Any path from $(0,0)$ to $(|f|,|g|)$ corresponds to a supersequence of $f$ and $g$ which is covered by $f$ and $g$. More formally, define $ s(p)\triangleq x_1x_2...x_{m-1}$ where $$x_k = \begin{cases} f_{i_{k+1}} \quad\text{if }j_{k}=j_{k+1},\\ g_{j_{k+1}} \quad\text{if }i_{k}=i_{k+1},\\ f_{i_{k+1}} \quad\text{else.} \end{cases} $$ The construct of the edit graph extends to more than 2 sequences with the same idea. For sequences $f_1,f_2,...,f_t$, construct a $t$-dimensional grid with $(|f_1|+1)(|f_2|+1)...(|f_t|+1)$ vertices labeled from $(0,0,...,0)$ to $(|f_1|,|f_2|,...,|f_t|)$. A vertex $u=(i_1,i_2,...,i_t)$ is connected to $v=(j_1,j_2,...,j_t)$ (we say $u \rightarrow v$) iff both of the following conditions are met: \begin{itemize} \item $j_l=i_l$ or $j_l=i_l+1$ $\forall\ l\in [t]$, i.e., $(i_1,...,i_t)$ and $(j_1,...,j_t)$ are vertices of a particular unit cube. Only such pairs of vertices can share an edge in the grid graph. \item Let $\mathcal T \subseteq [t]$ be the collection of indices where $j_l=i_l+1$. Then the symbols ${f_l}_{j_l}$ are equal for all $l \in \mathcal T$. For example, in a 4-dimensional grid, consider the two vertices $(10,5,8,2)$ and $(10,6,9,2)$. In this case $\mathcal T = \{2,3\}$ since the second and third coordinates differ by 1. Therefore $(10,5,8,2)\rightarrow (10,6,9,2)$ iff ${f_2}_{6}={f_3}_{9}$. Note that if only one coordinate differs by 1 in the two vertices, a directed edge always exists (in other words, all non-diagonal edges exist). \end{itemize} Define the vertex $(0,...,0)$ to be the origin of this graph and the vertex $(|f_1|,...,|f_t|)$ to be the destination.
If $|f_j|=O(n)\ \forall\ j$, this graph has $O(n^t)$ vertices and at most $O((2n)^t)$ edges, since each vertex has at most $2^t-1$ outgoing edges.\\ \noindent\textbf{Infiltration product} (introduced in \cite{lothaire1997combinatorics}): The infiltration product has been extensively used in \cite{lothaire1997combinatorics} as a tool in non-commutative algebra. Here, we give an edit-graph interpretation of this tool; a formal algebraic definition of the infiltration product is in Appendix~\ref{app:infil_def}. Using the edit graph we can construct the set $\mathcal{S}(f,g)$ of possible supersequences of $f$, $g$ that are covered by the symbols in $f$ and $g$. Clearly, multiple paths could yield the same supersequence, and we can count the number of distinct ways $\mathbf N(h;f,g)$ in which one can construct the same supersequence $h$ from $f$, $g$. We can informally define the \emph{infiltration product $f\uparrow g$} of $f$ and $g$ as a polynomial whose monomials are the supersequences $h$ in $\mathcal{S}(f,g)$ and whose coefficients $\langle f\uparrow g,h\rangle$ equal $\mathbf N(h;f,g)$. For the example in Fig.~\ref{fig:editgraph_smap1}, it is easy to verify that there is exactly one path corresponding to `101001' and hence $\langle 001\uparrow 101,101001 \rangle=1$, and similarly $\langle 001\uparrow 101,01001 \rangle=2$. One could find these coefficients for all relevant sequences and form the polynomial as described. As additional examples, let $\mathcal{A}=\{a,b\}$; then \begin{itemize}[wide=2pt] \item $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, \item $ab\uparrow ba=aba+bab+abab+2abba+2baab+baba.$ \end{itemize} The infiltration operation is commutative and associative, and the infiltration of two sequences $f\uparrow g$ is a polynomial with variables of length (or \textit{degree}) at most $|f|+|g|$; see \cite{lothaire1997combinatorics}.
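A brute-force sketch of this edit-graph interpretation (our own helper, not taken from \cite{lothaire1997combinatorics}): enumerate all origin-to-destination paths of $\mathcal G(f,g)$ and tally the spelled sequences, which recovers the infiltration coefficients.

```python
from collections import Counter

def infiltration_counts(f, g):
    """For every supersequence h covered by f and g, count the number of
    origin-to-destination paths in the edit graph G(f, g) spelling h.
    By the edit-graph interpretation, counts[h] = <f infiltration g, h>."""
    counts = Counter()

    def dfs(i, j, prefix):
        if i == len(f) and j == len(g):
            counts[prefix] += 1
            return
        if i < len(f):                        # vertical edge: consume f[i]
            dfs(i + 1, j, prefix + f[i])
        if j < len(g):                        # horizontal edge: consume g[j]
            dfs(i, j + 1, prefix + g[j])
        if i < len(f) and j < len(g) and f[i] == g[j]:
            dfs(i + 1, j + 1, prefix + f[i])  # diagonal edge: shared symbol

    dfs(0, 0, '')
    return counts
```

Running it on $f=$ `001', $g=$ `101' reproduces $\langle 001\uparrow 101,101001\rangle = 1$ and $\langle 001\uparrow 101,01001\rangle = 2$, and on $f=g=$ `ab' it reproduces the polynomial $ab\uparrow ab$ listed above.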
The definition of infiltration extends to two polynomials via distributivity (precisely defined in Appendix~\ref{app:infil_def}), and consequently to multiple sequences as well. For multiple sequences, infiltration has the same edit-graph interpretation: $\langle f_1\uparrow f_2 \uparrow...\uparrow f_t, w \rangle$ is the number of distinct ways of constructing $w$ as a supersequence of $f_1, f_2, ... ,f_t$ so that the construction covers $w$, i.e., construct the $t$-dimensional edit graph of $f_1, f_2, ... ,f_t$ and count the number of paths corresponding to $w$.\\ \noindent \textbf{Summary of definitions and ideas introduced in this section:} \begin{itemize} \item The binomial coefficient captures the likelihood of observations for deletion channels. \item A real-valued extension of the binomial coefficient function is introduced; this function is pivotal to our results on the single-trace deletion channel. \item For multiple deletion channels, the error events can be categorized into two groups -- one where an input symbol is deleted in all the traces, and the other its complement. This categorization results in a natural decomposition of the multiple deletion channel model into a cascade model involving the remnant channel. \item The remnant channel disregards the error events where a symbol is deleted in all the traces. \item The edit graph provides a way to visualize and construct all the possible error events and input sequences to a remnant channel given its outputs. \item The infiltration product serves the same purpose as the edit graph, but has an algebraic flavor and provides rigor for proofs and analyses. The edit graph, on the other hand, is more helpful in designing algorithms.
\end{itemize} \section{Maximum likelihood for the single-trace deletion channel} \label{sec:1deletion_ML} Consider the ML formulation for the single-trace deletion channel given a non-empty trace $Y$ and with all possible $n$-length inputs allowed, restated here for convenience: \begin{equation} \argmax_{x\in \{0,1\}^n} {x \choose Y}. \label{eq:1deletion_ML} \end{equation} To the best of our knowledge, the only known method to solve \eqref{eq:1deletion_ML} involves iterating over all possible choices of $x$ and computing the objective value for each of the choices. Here we show that it is sufficient to solve a continuous relaxation of the above problem to obtain a solution to \eqref{eq:1deletion_ML}. Note that there could be multiple solutions to \eqref{eq:1deletion_ML}. Before going into the main result, we first state a useful lemma which factors a given coordinate $p_i$ out of $\mathbf F(p,Y)$. The proof of the lemma is relegated to the appendix. \begin{restatable}{lemma}{deletionMLrellemma} For $p=[p_1,p_2,..,p_i,...,p_n]$ and $Y=Y_1Y_2...Y_m$ with $n \geq m > 0$, we have \begin{align*} \mathbf F(p,Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]})\\ + (1-p_i) \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}). \end{align*} \label{lemma:F_decomposition} \end{restatable} \begin{theorem} The ML problem for the single-trace deletion channel admits the following equivalent continuous relaxation: \begin{equation} \max_{x\in \{0,1\}^n} {x \choose Y} = \max_{p\in [0,1]^n} \mathbf F(p,Y). \label{eq:ml_opt_equiv} \end{equation} Furthermore, given any non-integral $p^* \in [0,1]^n$ that maximizes $\mathbf F(p,Y)$, one could construct a corresponding integral solution $x^* \in \{0,1\}^n$ that maximizes $\mathbf F(p,Y)$ and consequently is also a solution to $\max_{x\in \{0,1\}^n} {x \choose Y}$.
\end{theorem} \begin{proof} As noted earlier, we have ${x \choose Y} = \mathbf F(x,Y)$. Therefore, we are interested in proving the following: \begin{align*} \max_{x\in \{0,1\}^n} \mathbf F(x,Y) \equiv \max_{p\in [0,1]^n} \mathbf F(p,Y).\numberthis \label{eq:ml_opt_equiv_proof1} \end{align*} To show this and also the second statement, we prove that given $p=(p_1,p_2,...,p_i,...,p_n)$, at least one of the following holds true: \begin{itemize} \item$\mathbf F(p^{(i\rightarrow 0)},Y) \geq \mathbf F(p,Y)$, where $p^{(i\rightarrow 0)}=(p_1,p_2,...,p_{i-1},0,p_{i+1},...,p_n)$ is the vector where the $i^{th}$ coordinate is replaced by 0. \item $\mathbf F(p^{(i\rightarrow 1)},Y) \geq \mathbf F(p,Y)$, where $p^{(i\rightarrow 1)}=(p_1,p_2,...,p_{i-1},1,p_{i+1},...,p_n)$ is the vector where the $i^{th}$ coordinate is replaced by 1. \end{itemize} Thus, if $p^*$ is an optimal solution to $\max_{p\in [0,1]^n} \mathbf F(p,Y)$ with $p^*_i\in (0,1)$, then at least one of $p^{*(i\rightarrow 0)}$ or $p^{*(i\rightarrow 1)}$ is also an optimal solution. Sequentially applying this argument for each coordinate of $p^*$ shows that there exists a point in $\{0,1\}^n$ which is also an optimal solution to $\max_{p\in [0,1]^n} \mathbf F(p,Y)$ and consequently to $\max_{x\in \{0,1\}^n} \mathbf F(x,Y)$. Now, to prove our claim, we use Lemma~\ref{lemma:F_decomposition} to factor the $p_i$ terms out of $\mathbf F(p,Y)$: \begin{align*} \mathbf F(p,Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]})\\ + (1-p_i) \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}).
\end{align*} There are three cases: \begin{enumerate} \item $$\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}) = \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}).$$ \\In this case it is easy to verify that $\mathbf F(p^{(i\rightarrow 0)},Y) = \mathbf F(p,Y) = \mathbf F(p^{(i\rightarrow 1)},Y).$ \item $$\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}) > \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}).$$ \\In this case it is easy to verify that $\mathbf F(p^{(i\rightarrow 0)},Y) \leq \mathbf F(p,Y) \leq \mathbf F(p^{(i\rightarrow 1)},Y).$ \item $$\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}) < \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}).$$ \\In this case it is easy to verify that $\mathbf F(p^{(i\rightarrow 0)},Y) \geq \mathbf F(p,Y) \geq \mathbf F(p^{(i\rightarrow 1)},Y).$ \end{enumerate} Thus, in each of the three cases, at least one of $\mathbf F(p^{(i\rightarrow 0)},Y)$ or $\mathbf F(p^{(i\rightarrow 1)},Y)$ is at least as large as $\mathbf F(p,Y)$, proving the theorem. Note that the proof gives a natural way to find an optimal lattice point $p_{lattice}^*$ given a non-lattice point $p^*$: iterate through each coordinate of $p^*$ and switch it to $0$ or $1$ by comparing $\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]})$ with $\sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1:m]}).$ \end{proof} \section{Numerical results} \label{sec:Numerics} In this section we present numerical results supporting our theoretical results.
The metric we use to measure the performance of different reconstruction algorithms is the \textit{Hamming error rate}, defined as the Hamming distance between the actual and the estimated input sequence, normalized by the input sequence length and averaged over the simulation runs.\\ \noindent \textbf{Single-trace deletion channel.} \begin{itemize}[leftmargin=3mm] \item Fig.~\ref{fig:numerics_single1} compares the error rate for reconstruction via the ML heuristic (Alg.~\ref{alg:cood_switch}) when the initialization is a random lattice point and when the initialization is a random interior point, and also the symbolwise MAP via Alg.~\ref{alg:apprx_smap_dp}, for a small blocklength $n=10$. For Alg.~\ref{alg:cood_switch}, we start with 5 random initializations and choose the one that gives the maximum objective value. As can be seen, initialization with an interior point far outperforms initialization with a lattice point, thus validating the usefulness of our relaxation. \end{itemize} \begin{figure}[t!] \centering \includegraphics[scale=0.48]{ml_int_vs_ver_error.eps} \caption{Hamming error rate for reconstruction via the coordinate-switch ML heuristic (Alg.~\ref{alg:cood_switch}) when the initialization is a random lattice point and when the initialization is a random interior point, and reconstruction via symbolwise MAP (Alg.~\ref{alg:apprx_smap_dp}), for a blocklength $n=10$.} \label{fig:numerics_single1} \end{figure} \vspace{10pt} \noindent \textbf{$t$-trace deletion channel model.} Fig.~\ref{fig:numerics1} and Fig.~\ref{fig:numerics2} show results comparing reconstruction via symbolwise MAP for the $t$-trace deletion channel model (Alg.~\ref{alg:exact_smap}), reconstruction via sequentially updating posterior probabilities over the traces (Alg.~\ref{alg:apprx_smap}), reconstruction via independently combining posteriors from each of the traces (described as Alg.~\ref{alg:ind_comb}), and a trace reconstruction algorithm from \cite{Batu2004} called the \textit{Bitwise Majority Alignment
(BMA)} reproduced here as Alg.~\ref{alg:bitwise_majority}. We next make two observations, related to two of these algorithms. \noindent \textbf{Independent posterior combination:} As pointed out in the introduction, computing the posterior probabilities for each deletion channel and combining them as if they came from independent observations does not provide a natural solution for computing the posterior probabilities for the $t$-trace deletion channel. One could, however, check how such a naive combination of posteriors compares with our reconstruction algorithms for $t$ traces. \noindent \textbf{Bitwise Majority Alignment:} BMA reconstructs the input sequence by first ``aligning'' the traces using a pointer for each trace, and then taking the majority of the pointed symbols. We note that the state-of-the-art trace reconstruction algorithms in the literature are applicable in the asymptotic regime where the blocklength $n$ and the number of traces $t$ approach $\infty$; it is not clear how to adapt such algorithms for a finite blocklength and a small number of traces. \begin{algorithm}[t!] \caption{Trace reconstruction via independent posterior combination}\label{alg:ind_comb} \begin{algorithmic}[1] \item Input: Traces {$Y^{(1)},...,Y^{(t)}$}, input length $n$ \\ Outputs: Estimate of the input $\hat X$ \State Initialize priors $p^{old} \gets (0.5,0.5,...,0.5)$ \For {$l=1:t$} \State Use Alg.~\ref{alg:apprx_smap_dp} with $p^{old}$ and $Y^{(l)}$ to compute posteriors $p^{(l),new}$ \EndFor \For {$i=1:n$} \If {$\prod_{l=1}^t p^{(l),new}_i \geq \prod_{l=1}^t (1-p^{(l),new}_i)$} $\ \hat X_i \gets 1$ \Else $\ \hat X_i \gets 0$ \EndIf \EndFor \State \textbf{return} $\hat X_1 \hat X_2 ... \hat X_n$ \end{algorithmic} \end{algorithm} \begin{algorithm}[!t] \caption{Bitwise Majority Alignment} \label{alg:bitwise_majority} \begin{algorithmic}[1] \item Input: Traces {$Y^{(1)},Y^{(2)},...,Y^{(t)}$, input length $n$}\\ Output: estimate of input $\hat X = \hat X_1 \hat X_2...\hat X_n$.
\State Initialize $c_j=1$ for $j\in [t]$. \State Initialize $\hat X_i = 1$ for $i \in [n]$. \For{$i \in\ [1:n]$} \State Let $b$ be the majority of $Y^{(j)}_{c_j}$ over $j \in [t]$ \State $\hat X_i \gets b$ \State Increment $c_j$ for each $j$ such that $Y^{(j)}_{c_j} = b$ \EndFor \State \textbf{return} $\hat X_1 \hat X_2 ... \hat X_n$ \end{algorithmic} \end{algorithm} Our numerical evaluation results are as follows. \begin{itemize}[leftmargin=3mm] \item Fig.~\ref{fig:numerics1} compares symbolwise MAP for the $t$-trace deletion channel model (Alg.~\ref{alg:exact_smap}) and the trace reconstruction heuristic via sequentially updating posterior probabilities (Alg.~\ref{alg:apprx_smap}) with BMA (Alg.~\ref{alg:bitwise_majority}) for a small blocklength $n=10$. As seen, both of our algorithms significantly outperform BMA. Moreover, the heuristic Alg.~\ref{alg:apprx_smap} is not much worse than Alg.~\ref{alg:exact_smap}, at least for this blocklength. Note that Alg.~\ref{alg:apprx_smap} is polynomial in $n$ and $t$ ($O(tn^2)$) while Alg.~\ref{alg:exact_smap} is exponential in $t$ ($O(2^tn^{t+2})$). \item Fig.~\ref{fig:numerics2} compares the reconstruction heuristic via sequentially updating posterior probabilities (Alg.~\ref{alg:apprx_smap}) with reconstruction via independently combining posteriors from each of the traces (Alg.~\ref{alg:ind_comb}) and BMA (Alg.~\ref{alg:bitwise_majority}), over a larger blocklength of $n=100$. Clearly, Alg.~\ref{alg:apprx_smap} outperforms both the others, indicating that it is an interesting heuristic for trace reconstruction. \begin{figure}[t!]
\centering \includegraphics[width=16cm]{errors_BL10.eps} \caption{Hamming error rate comparison for symbolwise MAP for the $t$-trace deletion channel model (Alg.~\ref{alg:exact_smap}), trace reconstruction heuristic via sequentially updating posterior probabilities (Alg.~\ref{alg:apprx_smap}) with BMA (Alg.~\ref{alg:bitwise_majority}) for a blocklength $n=10$.} \label{fig:numerics1} \end{figure} \item We note that with the number of traces, though the performance of all algorithms improves, the returns tend to diminish. This intuitively makes sense because not all the information in each additional trace is ``new''; some of it is already conveyed by existing traces. \item Even when the deletion probability is $\delta = 0.5$, the error rate is still less than $0.5$. This is not unexpected since the observed deleted sequence still carries some information about each of the input bits. \end{itemize} \begin{figure}[t!] \centering \includegraphics[width=16cm]{errors_BL100.eps} \caption{Hamming error rate comparison for reconstruction heuristic via sequentially updating posterior probabilities (Alg.~\ref{alg:apprx_smap}) with reconstruction via independently combining posteriors from each of the traces (Alg.~\ref{alg:ind_comb}) and BMA (Alg.~\ref{alg:bitwise_majority}) for a larger blocklength of $n=100$.} \label{fig:numerics2} \end{figure} \section{Introduction} \label{sec:intro} Sequence reconstruction over deletion channels, both with and without a codebook, has received considerable attention in the information theory as well as in the theoretical computer science literature. From an information theory perspective, reconstruction over the deletion channel, or more specifically a maximum-likelihood (ML) argument for the deletion channel, would give further insight on the capacity of the deletion channel, a long-standing open problem (see \cite{mitzenmacher2009survey}). 
To quote \cite{mitzenmacher2009survey} -- ``at the very least, progress in this direction would likely surpass previous results on the capacity of the deletion channels''. Yet, there are no results on reconstruction over a deletion channel with statistical guarantees -- in this work, we take a step in this direction. On the other hand, the problem of \textit{trace reconstruction}, as introduced in \cite{Batu2004}, has received renewed interest in the past few years (see \cite{Holenstein2008}, \cite{Peres2017}, \cite{De2017}, \cite{holden18}, \cite{Nazarov:2017}, \cite{holden2018lower}, \cite{chase2019new}). The problem of trace reconstruction can be stated simply as follows: consider a sequence $X$ which is simultaneously passed through $t$ independent deletion channels to yield $t$ deleted observations (also called \textit{traces}) of $X$ (see Fig.~\ref{fig:tdeletion}). How many such traces are needed to reconstruct $X$ perfectly? A variety of upper and lower bounds for this problem have been proposed, both for worst-case and average-case reconstruction. Our problem definition, stated in the following paragraph, is closely related to average-case trace reconstruction. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2]{tdeletion.pdf} \caption{$t$-deletion channel model: sequence $X$ passed through $t$ independent deletion channels to yield $t$ \textit{traces}. We aim to estimate $X$ from the $Y^{(i)}$s.} \label{fig:tdeletion} \end{center} \end{figure} \noindent \textbf{Problem definition.} Given an input sequence of length $n$ (known a priori), the independently and identically distributed (i.i.d.) deletion channel deletes each input symbol independently with probability $\delta$, producing at its output a subsequence of the input sequence. Consider a sequence $X$ passed through $t$ ($t$ is fixed) such deletion channels as shown in Fig.~\ref{fig:tdeletion}. We call this the $t$-deletion channel model.
We ask two questions: \begin{enumerate} \item For $t=1$ (see Fig.~\ref{fig:1deletion}, also called the single deletion channel model), and when $X_i \sim\ \text{ind. Ber}(p_i)$, compute the posterior distributions of $X_i$ given the trace $Y$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{1deletion.pdf} \caption{The single deletion channel model. We assume $X_i \sim\ \text{ind. Ber}(p_i)$.} \label{fig:1deletion} \end{center} \end{figure} \item In the $t$-deletion channel model, for a fixed $t$, assume that $X_i \sim\ \text{i.i.d. Ber}(0.5)$ and compute the posterior distributions of $X_i$ given all traces $Y^{(1)}, Y^{(2)},...,Y^{(t)}$. \end{enumerate} Note that solving 1) above does not lead to a natural solution for 2). This is because for a memoryless channel, we have the Markov chain $Y^{(j)} - X_i - Y^{(k)}$ and hence, for a uniform prior, $\Pr(X_i=\alpha|Y^{(j)}, Y^{(k)}) \propto \Pr(X_i=\alpha|Y^{(j)}) \Pr(X_i=\alpha|Y^{(k)})$; so one could independently combine the posterior probabilities from each noisy observation. This is not the case for deletion channels, since the Markov chain $Y^{(j)} - X_i - Y^{(k)}$ no longer holds. More intuitively, one needs to first ``align'' the traces for computing the likelihoods. We point out that the problem considered in 2) asks a question complementary to trace reconstruction: given a fixed (possibly small) number of traces, what is our ``best'' guess of $X$? We provide algorithms which do this. Unlike trace reconstruction, we are not concerned with perfect reconstruction (since perfect reconstruction may not be possible with just a few traces), although it should also be obvious that performance guarantees for our algorithms (not a part of this work) would naturally lead to upper bounds for trace reconstruction. The deletion channel by itself is known to be notoriously difficult to analyze.
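The breakdown of the conditional-independence factorization noted above can be checked by brute force on a toy instance (a sketch under assumed parameters $n=2$, $\delta=0.5$ and a uniform prior; all helper names are ours):

```python
from itertools import combinations, product

def subseq_count(x, y):
    """Number of subsequences of x equal to y (the sequence binomial coefficient)."""
    return sum(1 for S in combinations(range(len(x)), len(y))
               if all(x[S[j]] == y[j] for j in range(len(y))))

def channel(x, y, delta):
    """Pr(Y = y | X = x) for the i.i.d. deletion channel."""
    return subseq_count(x, y) * delta ** (len(x) - len(y)) * (1 - delta) ** len(y)

def posterior_X1(traces, n=2, delta=0.5):
    """Pr(X_1 = '1' | all traces), brute-forced over X uniform on {0,1}^n."""
    num = den = 0.0
    for bits in product('01', repeat=n):
        x = ''.join(bits)
        w = 1.0
        for y in traces:
            w *= channel(x, y, delta)  # traces are conditionally independent given x
        den += w
        if x[0] == '1':
            num += w
    return num / den

exact = posterior_X1(['1', '1'])             # joint posterior from both traces
q = posterior_X1(['1'])                      # single-trace posterior
naive = q * q / (q * q + (1 - q) * (1 - q))  # combining traces as if independent
```

Here the exact two-trace posterior is $5/6$ while the naive combination gives $9/10$, confirming that the per-trace posteriors cannot simply be multiplied as independent observations of $X_1$.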
As stated earlier, the capacity of a single deletion channel is still unknown (\cite{diggavi2007capacity,diggavi2006information,diggavi2001transmission}); as are optimal coding schemes. Recent works have looked at the design of codes for deletion channels (\cite{ratzer2005marker,ratzer2000codes,thomas2017polar}); these works consider the use of a codebook (we do not). As a result, statistical estimation over deletion channels is also a difficult problem due to its highly combinatorial nature. To the best of our knowledge, as yet there are no efficient estimation algorithms over deletion channels with statistical guarantees; not even for ML over a single deletion channel. \noindent \textbf{Biological motivation.} Trace reconstruction in itself was motivated, in part, by problems in DNA sequence reconstruction. One such problem was to infer the DNA sequence of a common ancestor from the samples of its descendants. We argue that our problem definition fits more naturally in such a scenario since perfect reconstruction may not be feasible or even possible. Our motivation for considering this problem also comes from a recent DNA sequencing technology called \textit{nanopore sequencing}. The $t$-deletion channel model is a simplistic model to approximately capture the process of a DNA sequence passed through a nanopore sequencer\footnote{As seen in \cite{Mao2017},\cite{MDK17} there are more complicated effects of the nanopore reader not captured in this simple representation.}. Very recently, a variant of the trace reconstruction problem called \textit{coded trace reconstruction} has been proposed, motivated by portable DNA-based data storage systems using DNA nanopores (see \cite{abroshan2019coding}, \cite{cheraghchi2019coded}, \cite{brakensiek2019coded}), and we believe that the ideas in this work may prove useful in such a setting.
There are other works on sequence assembly (see for example, \cite{Li09fastand}, \cite{Shomorony2016}), where multiple short reads (from different segments of a sequence) are used to reconstruct the bigger sequence. This work differs from sequence assembly since we are interested in inferring the entire sequence and not just small segments of it (which are then ``stitched'' together in sequence assembly). \noindent \textbf{Tools and techniques.} In terms of theoretical tools, the series of books by Lothaire (\cite{lothaire1997combinatorics,lothaire2002algebraic,lothaire2005applied}) extensively uses algebraic tools for problems in the combinatorics of sequences (or \textit{words}), and our work is inspired by such techniques. We borrow many of their notations and results for our work. \noindent \textbf{Contributions.} Our main contribution is to provide tools to visualize and analyze error events (described in the next section) for the multiple deletion channel model in Fig.~\ref{fig:tdeletion}. We also provide algorithms to solve the problems stated in 1) and 2) earlier in the section. \begin{itemize}[wide=5pt] \item In Section~\ref{sec:1deletion}, for the single deletion channel model, we provide an $O(n^2)$ algorithm to calculate the symbolwise posterior probabilities $\Pr(X_i=1|Y)\ \forall\ i$ when $X_i \sim \text{ind. Ber}(p_i)$. \item In Section~\ref{sec:exactsmap}, for the $t$-deletion channel model, we give an $O(2^t n^{t+2})$ algorithm to calculate the symbolwise posterior probabilities $\Pr(X_i = 1|Y_1,...,Y_t)$ when $X_i \sim \text{ind. Ber}(0.5)$. \end{itemize} \section{Notation and Tools} \label{sec:notations} \noindent \textbf{Basic notations:} In this work, we borrow a majority of the notation and tools from \cite{lothaire1997combinatorics}, which deals with non-commutative algebra. We restate the definitions here for convenience.
Calligraphic letters refer to sets, capitalized letters correspond to random variables, and bold letters are used for functions. Let $\mathcal{A}$ be the set of all symbols. Throughout this work, we will focus on the case where $\mathcal{A}=\{0,1\}$, though our methods extend to arbitrarily large sets of finite size. Define $\mathcal{A}^n$ to be the set of all $n$-length sequences and $\mathcal{A}^*$ to be the set of all finite-length sequences with symbols in $\mathcal{A}$. For a sequence $f$, $|f|$ denotes the length of $f$. For integers $i,j$, we define $[i:j] \triangleq \{i,i+1,...,j\}$ if $j\geq i$ and $[i:j] \triangleq \emptyset$ otherwise. Also define $[i] \triangleq [1:i]$.\\ \noindent \textbf{Binomial coefficient:} Given sequences $f$ and $g$ in $\mathcal{A}^*$, the number of subsequence patterns of $f$ that are equal to $g$ is called the \textit{binomial coefficient} of $g$ in $f$ and is denoted by ${f \choose g}$. For example, ${'apple' \choose 'ape'} = 2$ since $'ape'$ can be obtained from two (overlapping) subsequences of $'apple'$. When the alphabet $\mathcal{A}$ is of cardinality 1, ${f \choose g} = {|f| \choose |g|}$, the classical binomial coefficient with the respective lengths as parameters; this definition can hence be thought of as a generalization of the classical binomial coefficient. We will denote by $e$ the sequence of length 0, and {define ${f \choose e} \triangleq 1\ \forall\ f\ \in\ \mathcal{A}^*$.} For ease of use, we also define the classical binomial coefficient ${a \choose b}\triangleq 0$ whenever $b>a$ or $b<0$. The binomial coefficient is integral to this work and to the analysis of error events in deletion channels, because the input-output relation for a deletion channel (with deletion probability $\delta$, input $X$ and output $Y$) can be expressed as \begin{equation} \label{eq:in-out_relation} \Pr(Y=y|X=x) = {x \choose y} \delta^{|x|-|y|} (1-\delta)^{|y|}.
\end{equation} The proof is straightforward -- the number of distinct error events that give rise to $y$ from $x$ is exactly the number of subsequences of $x$ which are equal to $y$. Each of these error events has a probability $\delta^{|x|-|y|} (1-\delta)^{|y|}$, wherein the exponent of $\delta$ corresponds to the deleted symbols and the exponent of $1-\delta$ to the undeleted symbols. Given the definition of the binomial coefficient, the maximum-likelihood (ML) estimate over a deletion channel with observed output $Y$ can be cast in the following form: \begin{align*} \argmax_{x \in \mathcal C} {x \choose Y},\numberthis \label{eq:ML_deletion} \end{align*} where $\mathcal C$ is the chosen codebook. In the case of multiple deletion channels with observed traces $Y^{(1)},...,Y^{(t)}$, the ML formulation is similar: \begin{align*} \argmax_{x \in \mathcal C} \prod_{j=1}^{t} {x \choose Y^{(j)}}.\numberthis \label{eq:ML_multiple_deletion} \end{align*} As yet, there is no known efficient way to solve the above two formulations, even for \eqref{eq:ML_deletion} with $\mathcal C = \{0,1\}^n$ (see \cite{mitzenmacher2009survey}). In this work, we attempt to take a step in this direction by showing that a continuous relaxation of \eqref{eq:ML_deletion} is equivalent to \eqref{eq:ML_deletion}. However, an efficient algorithm to solve the optimization \eqref{eq:ML_deletion} remains open. In the context of trace reconstruction, the ultimate pursuit would be an algorithm for \eqref{eq:ML_multiple_deletion} with $\mathcal C = \{0,1\}^n$ and error analysis thereof. \noindent We now describe a function which can be thought of as a real-valued extension of the binomial coefficient. This function is used in Sections~\ref{sec:1deletion_ML} and \ref{sec:1deletion}.
Consider the function $\mathbf F(\cdot)$ defined as: \begin{definition} \label{def:f} \begin{align*} &\mathbf F: \mathbb{R}^k \times \{0,1\}^l \rightarrow \mathbb{R},\\ \mathbf F(q, v)\triangleq &\begin{cases} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [k],\\|\mathcal S|=l}} \quad \prod\limits_{i=1}^l q_{\mathcal S_i}^{v_i} (1-q_{\mathcal S_i})^{1-v_i} & 1 \leq l\leq k \\ 1 & 0=l\leq k \\ 0 & \text{else}. \end{cases} \end{align*} \noindent An alternate definition is as follows: consider a random vector $Z\in\{0,1\}^k$ such that $Z_i\sim$ ind. Ber$(q_i)$, let $q$ be the vector of probabilities of length $k$, $\mathcal S\subseteq [k]$ a subset of the indices of size $l$, and ${v}$ a binary vector of length $l$. Then, $$\mathbf F(q,v)=\sum_{\substack{\mathcal S||\mathcal S|=l}} \Pr(Z_{\mathcal S}= v).$$ Note that if $q$ is a vector of $0$'s and $1$'s, then $\mathbf F( q, v)={q \choose v}$, the binomial coefficient of $v$ in $q$. Thus, $\mathbf F(\cdot)$ can be interpreted as an extension of the binomial coefficient where one of the parameters can take values in $[0,1]^k$ instead of $\{0,1\}^k$. \end{definition} Though at first sight $\mathbf F(q,v)$ sums over an exponential number of subsets, a dynamic programming approach can be used to compute it in $O(|v|^2)$ time. The dynamic program is described in Section~\ref{sec:1deletion}. \iffalse \noindent \textbf{Edit distance:} The edit distance $d_e(f,g)$ measures similarity between two sequences of possibly different lengths \cite{Navarro2001}. $d_e(f,g)$ is the minimum number of operations needed to transform $f$ to $g$, where the permitted operations are insertion, deletion or substitution of a symbol. In this work, we quantify the performance of algorithms in Section \ref{sec:Numerics} using the edit distance metric. \fi The following definitions and ideas are relevant only for Section~\ref{sec:exactsmap} and can be omitted for the sections on the single deletion channel.
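Before moving on, Definition~\ref{def:f} can be made concrete with a minimal Python sketch (the function names and the particular prefix recursion below are our own illustration, not the algorithm deferred to Section~\ref{sec:1deletion}): both quantities are computed by conditioning, for each prefix of the first argument, on whether its last index participates in the chosen subset $\mathcal S$.

```python
def binom_seq(f, g):
    """Sequence binomial coefficient (f choose g): the number of
    subsequences of f that equal g, computed by a standard DP."""
    m = len(g)
    B = [1] + [0] * m  # B[j] = (current prefix of f choose g[:j])
    for c in f:
        for j in range(m, 0, -1):  # downwards so B[j-1] is the old prefix value
            if c == g[j - 1]:
                B[j] += B[j - 1]
    return B[m]


def relaxed_binom(q, v):
    """F(q, v) = E[(Z choose v)] for Z_i ~ ind. Ber(q_i), with v binary.
    Prefix recursion (our own illustrative derivation):
      F(q[:i], v[:j]) = F(q[:i-1], v[:j])
        + (q[i-1] if v[j-1] == 1 else 1 - q[i-1]) * F(q[:i-1], v[:j-1])."""
    k, l = len(q), len(v)
    if l > k:
        return 0.0
    F = [1.0] + [0.0] * l  # F[j] holds F(current prefix of q, v[:j])
    for i in range(1, k + 1):
        for j in range(min(i, l), 0, -1):
            match = q[i - 1] if v[j - 1] == 1 else 1.0 - q[i - 1]
            F[j] += match * F[j - 1]
    return F[l]


assert binom_seq("apple", "ape") == 2
assert relaxed_binom([0, 0, 1], [0, 1]) == binom_seq("001", "01") == 2
assert abs(relaxed_binom([0.5, 0.5], [1]) - 1.0) < 1e-12
```

On binary inputs \texttt{relaxed\_binom} agrees with \texttt{binom\_seq}, matching the reduction $\mathbf F(q,v)={q \choose v}$ noted in the definition.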
Before getting to the mathematical definitions, we first state a result of ours that aids in thinking about error events in multiple deletion channels. \subsection{An alternate interpretation of the multiple deletion channel model} The events occurring in the $t$-deletion channel model can be categorized into two groups: \begin{enumerate} \item an input symbol is deleted in \textit{all} the $t$ traces, \item an input symbol is reflected in at least one of the traces. \end{enumerate} The error events of the first kind are in some sense ``not correctable'' or even ``detectable'' since it is impossible to tell what and where the deleted symbol could have been (although the probabilities need not be uniform). The events of the second kind, however, can still be detected, although they could likely be confused with a large set of similar events. This thought process gives rise to a natural decomposition of the $t$-deletion channel model into a cascade of two channels: the first is a deletion channel, which captures error events of the first kind, and the second is what we call the \textit{remnant channel}, which captures events of the second kind (see Fig.~\ref{fig:channel_equiv}). More precisely, the remnant channel is defined as follows: \begin{definition} \textit{Remnant channel:} an input symbol to the remnant channel is reflected in $k>0$ given outputs and deleted in the rest with probability $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. \end{definition} It is easy to check that the probability of the union of all possible events here is $\sum_{k=1}^t {t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}=1$, validating our definition.
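This normalization is also easy to check numerically; the following standalone sketch (with parameter values chosen arbitrarily by us) verifies it for a few $(t,\delta)$ pairs.

```python
from math import comb


def remnant_survival_prob(t, k, delta):
    """Probability that an input symbol of the remnant channel is reflected
    in exactly k of the t outputs (summed over the comb(t, k) choices of
    which outputs), for k > 0, as in the definition above."""
    return comb(t, k) * delta ** (t - k) * (1 - delta) ** k / (1 - delta ** t)


# the events k = 1, ..., t exhaust the sample space of the remnant channel
for t, delta in [(2, 0.1), (4, 0.3), (7, 0.9)]:
    total = sum(remnant_survival_prob(t, k, delta) for k in range(1, t + 1))
    assert abs(total - 1.0) < 1e-12
```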
\begin{restatable}{theorem}{channelequiv} \label{thm:channel_equiv} The $t$-deletion channel model and the cascade of the deletion channel with the remnant channel shown in Fig.~\ref{fig:channel_equiv} are probabilistically equivalent, i.e., $$\Pr({Y}^{(1)},{Y}^{(2)},...,{Y}^{(t)}|X = x) = \Pr(\tilde{Y}^{(1)},\tilde{Y}^{(2)},...,\tilde{Y}^{(t)}|X = x).$$ \end{restatable} The formal proof of the theorem requires a few more definitions and is relegated to the appendix. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{channel_equiv.pdf} \caption{A channel equivalence result: the $t$-deletion channel model in (a) is probabilistically equivalent to the cascade of a deletion channel with the \textit{remnant channel} ($ \mathcal C_2$) in (b).} \label{fig:channel_equiv} \end{center} \end{figure} \noindent\textbf{Edit graph} (as defined in \cite{Gusfield1997}): We now define a graph construct which is closely related to the error events in the remnant channel. We start with a simple case and generalize subsequently. Define an \textit{edit graph} given two sequences $f$ and $g$, where every path connecting the ``origin'' to the ``destination'' on the edit graph yields a supersequence $h$ of $f,g$, where $h$ is ``covered'' by $f,g$ -- i.e., each symbol of $h$ comes from either $f$ or $g$ or both. In other words, given that $f$ and $g$ are the outputs of the remnant channel (with two outputs), each path from the origin of the edit graph to the destination corresponds to a possible input $h$ to the remnant channel and to an error event which resulted in $f$ and $g$ with input $h$. For $f$ and $g$ in $\mathcal A^*$, we form a graph $\mathcal{G}(f,g)$ with $(|f|+1)(|g|+1)$ vertices, each labelled with a distinct pair $(i,j), 0\leq i\leq |f|,\ 0\leq j\leq |g|$.
A directed edge $(i_1,j_1)\rightarrow(i_2,j_2)$ exists iff at least one of the following holds: \begin{enumerate} \item$i_2-i_1=1$ and $j_1=j_2$, or \item $j_2-j_1=1$ and $i_1=i_2$, or \item $i_2-i_1=1$, $j_2-j_1=1$ and $f_{i_2}=g_{j_2}$, \end{enumerate} where $f_i$ is the $i^{th}$ symbol of the sequence $f$. The origin is the vertex $(0,0)$ and the destination $(|f|,|g|)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap1.pdf} \caption{ Edit graph for sequences $f=$ `001' and $g=$ `101'. An easy way to think about this is to write down $f$ vertically with each symbol aligned to a vertical set of edges and $g$ horizontally likewise. A diagonal edge in a small square exists if the corresponding $f$ and $g$ symbols are equal. The thick red edges form a path from the origin to the destination; this path corresponds to the sequence `0101' -- append the corresponding symbol at the left of an edge if it is vertical or diagonal, otherwise append the symbol at the top. It is also easy to verify that `0101' is a supersequence of both $f$ and $g$, and could be obtained as a covering of $f$ and $g$; the path itself gives one such covering. This covering also corresponds to an error event in the remnant channel which would result in outputs $f$ and $g$ with input $h=$ `0101' -- more precisely, the error event is one in which the first symbol of $h$ is deleted only in $g$, the second symbol is deleted only in $f$, and the third and fourth symbols are deleted in neither $f$ nor $g$.} \label{fig:editgraph_smap1} \end{figure} Let $p=((i_1,j_1),(i_2,j_2),...,(i_m,j_m))$ be a path in $\mathcal{G}(f,g)$. We define $ s(p)$ to be the sequence corresponding to the path. Intuitively, $s(p)$ is formed by appending symbols in the following way: append the corresponding $f$ symbol for a vertical edge, the $g$ symbol for a horizontal edge, and the $f$ (equivalently $g$) symbol for a diagonal edge (see the example in Fig.~\ref{fig:editgraph_smap1}).
Any path from $(0,0)$ to $(|f|,|g|)$ corresponds to a supersequence of $f$ and $g$ which is ``covered'' by $f$ and $g$. More formally, define $ s(p)\triangleq x_1x_2...x_{m-1}$ where $$x_k = \begin{cases} f_{i_{k+1}} \quad\text{if }j_{k}=j_{k+1},\\ g_{j_{k+1}} \quad\text{if }i_{k}=i_{k+1},\\ f_{i_{k+1}} \quad\text{else.} \end{cases} $$ The edit graph construct can be extended to more than 2 sequences with the same idea. For sequences $f_1,f_2,...,f_t$, construct a $t$-dimensional grid with $(|f_1|+1)(|f_2|+1)\cdots(|f_t|+1)$ vertices labeled from $(0,0,...,0)$ to $(|f_1|,|f_2|,...,|f_t|)$. A vertex $u=(i_1,i_2,...,i_t)$ is connected to $v=(j_1,j_2,...,j_t)$ (we say $u \rightarrow v$) iff both of the following conditions are met: \begin{itemize} \item $j_l=i_l$ or $j_l=i_l+1$ $\forall\ l\in [t]$, i.e., $(i_1,...,i_t)$ and $(j_1,...,j_t)$ are vertices of a particular unit cube. Only these types of vertices can share an edge in the grid graph. \item Let $\mathcal T \subseteq [t]$ be the (non-empty) collection of indices where $j_l=i_l+1$. Then the corresponding symbols must all be equal, i.e., $(f_l)_{j_l}=(f_{l'})_{j_{l'}}$ for all $l,l' \in \mathcal T$. For instance, if $\mathcal T=\{1,3,5\}$, then $(f_1)_{j_1}=(f_3)_{j_3}=(f_5)_{j_5}$. \end{itemize} Define the vertex $(0,...,0)$ to be the origin of this graph and the vertex $(|f_1|,...,|f_t|)$ to be the destination. If $|f_j|=O(n)\ \forall\ j$, this graph has $O(n^t)$ vertices and at most $O((2n)^t)$ edges, since each vertex has at most $2^t$ outgoing edges.\\ \noindent\textbf{Infiltration product} (introduced in \cite{lothaire1997combinatorics}): The infiltration product has been extensively used in \cite{lothaire1997combinatorics} as a tool in non-commutative algebra. Here we give an edit-graph interpretation of this tool. We also give a formal definition later in this section. Using the edit graph we can construct the set of possible supersequences $\mathcal{S}(f,g)$ of $f,g$ which are covered by them.
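To make the two-sequence construction concrete, the edit graph and the map $p\mapsto s(p)$ can be sketched in Python (our own illustrative code; it enumerates all paths exhaustively, which is exponential and meant only for intuition, not as an efficient algorithm):

```python
from collections import Counter


def edit_graph_sequences(f, g):
    """Enumerate all origin-to-destination paths of the edit graph G(f, g)
    and return a Counter mapping each generated sequence s(p) to the number
    of distinct paths p that produce it."""
    counts = Counter()

    def walk(i, j, s):
        if i == len(f) and j == len(g):
            counts[s] += 1
            return
        if i < len(f):
            walk(i + 1, j, s + f[i])      # vertical edge, emits the f symbol
        if j < len(g):
            walk(i, j + 1, s + g[j])      # horizontal edge, emits the g symbol
        if i < len(f) and j < len(g) and f[i] == g[j]:
            walk(i + 1, j + 1, s + f[i])  # diagonal edge, the symbols merge

    walk(0, 0, "")
    return counts


paths = edit_graph_sequences("001", "101")
assert paths["0101"] >= 1                # the path highlighted in Fig. editgraph_smap1
assert all(len(s) <= 6 for s in paths)   # |s(p)| is at most |f| + |g|
```

Every key of the returned counter is a supersequence of both inputs covered by them, and the multiplicities are exactly the path counts that define the infiltration product.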
Clearly, multiple paths could yield the same supersequence, and we can count the number of distinct ways $\mathbf N(h;f,g)$ one can construct the same supersequence $h$ from $f,g$. We can informally define the \emph{infiltration product} $f\uparrow g$ of $f$ and $g$ as a polynomial whose monomials are the supersequences $h$ in $\mathcal{S}(f,g)$ and whose coefficients $\langle f\uparrow g,h\rangle$ equal $\mathbf N(h;f,g)$. In Fig.~\ref{fig:editgraph_smap1}, it is easy to verify that there is exactly one path corresponding to `101001' and hence $\langle 001\uparrow 101,101001 \rangle=1$ and similarly $\langle 001\uparrow 101,01001 \rangle=2$. One could find these coefficients for all relevant sequences and form the polynomial as described. More examples: Let $\mathcal{A}=\{a,b\}$, then \begin{itemize}[wide=2pt] \item $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, \item $ab\uparrow ba=aba+bab+abab+2abba+2baab+baba.$ \end{itemize} The infiltration operation is commutative and associative, and the infiltration $f\uparrow g$ of two sequences is a polynomial whose monomials have length (or \textit{degree}) at most $|f|+|g|$; see \cite{lothaire1997combinatorics}. The definition of infiltration extends to two polynomials via distributivity (precisely defined later in this section), and consequently to multiple sequences as well. For multiple sequences, infiltration has the same edit graph interpretation: $\langle f_1\uparrow f_2 \uparrow...\uparrow f_t, w \rangle$ is the number of distinct ways of constructing $w$ as a supersequence of $f_1, f_2, ... ,f_t$ so that the construction covers $w$, i.e., construct the $t$-dimensional edit graph of $f_1, f_2, ... ,f_t$ and count the number of paths corresponding to $w$. \subsection{Formal definition of the infiltration product} We now give a more formal definition of the infiltration product (see \cite{lothaire1997combinatorics} for the equivalence of the two definitions and a more rigorous treatment).
A \textit{formal series} with indeterminates (or variables) in a set $\mathcal A$ and coefficients in a commutative ring $\mathcal R$ is a mapping from $\mathcal A^*$ to $\mathcal R$. Recall that a commutative ring is a set which forms an abelian group under an \textit{addition} operation, is a monoid under a \textit{multiplication} operation which commutes, and the multiplication operation distributes over addition. Here we consider $\mathbb Z$, the set of integers, as the commutative ring $\mathcal{R}$. A formal series is called a \textit{polynomial} if only a finite number of sequences are mapped to non-zero values and the rest of the sequences map to zero. Consider two polynomials $\sigma,\tau: \mathcal{A}^*\rightarrow \mathbb Z$. The value taken by a sequence $w\in \mathcal A^*$ on $\sigma$ (or the coefficient of $w$ in $\sigma$) is denoted by $\langle \sigma,w\rangle \in \mathbb Z$. We also define binary addition ($\oplus$) and multiplication ($\times$) operations on the set of polynomials as follows: \begin{align} \langle \sigma\oplus \tau,w\rangle \triangleq \langle \sigma,w\rangle + \langle \tau,w \rangle \quad \forall w\in \mathcal A^*,\label{eq:polynomial_add}\\ \langle \sigma\times \tau,w\rangle \triangleq \sum_{\substack{f,g\in \mathcal A^*:\\ f.g=w}}\langle \sigma,f\rangle \langle \tau,g \rangle \quad \forall w\in \mathcal A^*.\label{eq:polynomial_prod} \end{align} We will use the usual symbols $+$ and $.$ in place of $\oplus$ and $\times$ in this work for convenience; the intended operation will be clear from the operands. With these operations the set of polynomials forms a non-commutative ring, denoted by $\mathbb Z\langle\mathcal A \rangle$ and also called the free $\mathbb Z$-algebra on $\mathcal A$ in ring theory.
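As an illustrative sketch (our own code, not from \cite{lothaire1997combinatorics}), the infiltration product of two sequences can be computed directly from the recursive rule stated below, representing a polynomial in $\mathbb Z\langle\mathcal A \rangle$ as a dictionary from sequences to integer coefficients:

```python
from collections import defaultdict
from functools import lru_cache


@lru_cache(maxsize=None)
def infiltration(f, g):
    """Infiltration product f ^ g of two sequences, as a dict mapping each
    sequence w to its coefficient <f ^ g, w>.  Implements the recursion
    fa ^ gb = (f ^ gb)a + (fa ^ g)b + 1[a=b](f ^ g)a, with f ^ e = e ^ f = f."""
    if not f:
        return {g: 1}
    if not g:
        return {f: 1}
    out = defaultdict(int)
    for w, c in infiltration(f[:-1], g).items():   # (f ^ gb) a
        out[w + f[-1]] += c
    for w, c in infiltration(f, g[:-1]).items():   # (fa ^ g) b
        out[w + g[-1]] += c
    if f[-1] == g[-1]:                             # the last symbols may merge
        for w, c in infiltration(f[:-1], g[:-1]).items():
            out[w + f[-1]] += c
    return dict(out)


# reproduces the examples given earlier in this section
assert infiltration("ab", "ab") == {"ab": 1, "aab": 2, "abb": 2, "aabb": 4, "abab": 2}
assert infiltration("001", "101")["01001"] == 2
```

The computed coefficients agree with the edit-graph path counts, in line with the equivalence of the two definitions.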
Note that the addition and multiplication operations defined in \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are similar to the operations defined on commutative polynomials, except that the multiplication operation under the summation in \eqref{eq:polynomial_prod} ($f.g=w$) is actually concatenation and is non-commutative. The multiplication inside the summation in \eqref{eq:polynomial_prod} is multiplication of integer coefficients and hence commutative. It is also easy to prove that the multiplication defined in \eqref{eq:polynomial_prod} distributes over the addition defined in \eqref{eq:polynomial_add}. Thus, a polynomial in $\mathbb Z\langle\mathcal A \rangle$ can be represented as a sum of monomials in $\mathcal A^*$ each with an associated coefficient in $\mathbb Z$, i.e., $\sigma=\sum\limits_{w\in \mathcal A^*} \langle\sigma,w \rangle w$. Define the \textit{degree} of a polynomial to be the length of a longest sequence with a non-zero coefficient in the polynomial, and the \textit{number of terms} of a polynomial as the number of sequences with non-zero coefficients in the polynomial. Note that a degree $d$ polynomial over a binary alphabet can have up to $2^{d+1}-1$ terms. With this, the \textit{infiltration product} (in general, for two polynomials) is defined as follows: \begin{align} \forall f\in \mathcal{A}^*,& \quad f\uparrow e = e\uparrow f=f.\nonumber \\ \forall f,g\in \mathcal{A}^*&,\quad \forall a,b\in \mathcal{A}, \nonumber\\ fa\uparrow gb=(f\uparrow gb)a&+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a.\nonumber \\ \forall \sigma,\tau \in \mathbb{Z}\langle\mathcal{A} \rangle, \quad &\sigma\uparrow \tau=\sum_{f,g\in \mathcal{A}^*} \langle \sigma,f \rangle \langle \tau,g \rangle (f\uparrow g). \label{def:infiltforseq} \end{align} \textbf{Summary of definitions and ideas introduced in this section:} \begin{itemize} \item The binomial coefficient captures the likelihood of observations for deletion channels.
\item An extension of the binomial coefficient function where one of the parameters can take real values has been introduced; this function is pivotal for our results on the single-trace deletion channel. \item For multiple deletion channels, the error events can be categorized into two groups -- one where an input symbol is deleted in all the traces, and second, the complement of this event. This categorization results in a natural decomposition of multiple deletion channel model into a cascade model involving the remnant channel. \item The remnant channel disregards the error events where a symbol is deleted in all the traces. \item The edit graph provides a way to visualize all the possible error events and input sequences to a remnant channel given its outputs. \item The infiltration product serves the same purpose as the edit graph, but has an algebraic flavor and provides rigor for proofs and analyses. The edit graph, on the other hand, is more helpful in designing reconstruction algorithms over deletion channels. \end{itemize} \section{Symbolwise posterior probabilities for the single-trace deletion channel} \label{sec:1deletion} We here develop an algorithm to compute the symbolwise posterior probabilities for the single-trace deletion channel when the input symbols are independently generated with arbitrary priors. Consider the single deletion channel model in Fig.~\ref{fig:1deletion}, where $X=X_1...X_n$, each input symbol is generated $X_i \sim \text{ind. Ber}\ (p_i)$, and we observe at the output the trace $Y = Y_1Y_2...Y_m$ with $m\leq n$. Define the vector of priors as $p\triangleq(p_1,p_2,...,p_n)$. We first give an $O(n^2)$ algorithm to calculate the posterior probabilities $\Pr(X_i=1|Y)$, which in turn provides the symbolwise MAP estimate for the considered model. We then show how this algorithm can be used for trace reconstruction. We take three steps to present the algorithm. \noindent \textbf{An expression for $\Pr(X_i=1|Y)$.} Let $\Pr(X_i=1)=p_i$.
As a first step, we have \vspace{6pt} \begin{align*} \Pr(X_i=1|{Y}) &= \frac{\Pr(X_i=1,Y)}{\Pr(Y)} = \frac{ \sum\limits_{\substack{ x| x_i=1}} \Pr({X=x}) \Pr(Y|X=x)}{ \sum_{\substack{x}} \Pr({X=x}) \Pr(Y|X=x)} \\ &\overset{(a)}{=} \frac{ \sum\limits_{\substack{ x| x_i=1}} \Pr({X=x}) { x \choose Y}}{ \sum_{\substack{x}} \Pr({X=x}) { x \choose Y}}, \numberthis \label{eq:approx_smap_1} \end{align*} where $(a)$ holds because for a deletion channel $\Pr(Y|X=x)={x \choose Y} \delta^{|x|-|Y|}(1-\delta)^{|Y|}$, and the factor $\delta^{n-|Y|}(1-\delta)^{|Y|}$ is common to all $x\in\{0,1\}^n$ and cancels. To proceed, we need to evaluate the summation in the numerator and the denominator. Theorem~\ref{thm:approx_smap_postprob} expresses \eqref{eq:approx_smap_1} in terms of relaxed binomial coefficient terms $\mathbf F(\cdot)$. Recall that one way to define $\mathbf F$ is $\mathbf F(p,Y) = \mathbb E_{X\sim p} {X \choose Y}$, which is exactly the denominator term in \eqref{eq:approx_smap_1}. \begin{theorem} \label{thm:approx_smap_postprob} Let $X=X_1...X_n$ where $X_i \sim \text{ind. Ber}\ (p_i)$, and let $Y$ be the observed trace when $X$ is passed through a deletion channel. Then, \begin{align*} \Pr(X_i&=1|{Y}) = \frac{p_i}{\mathbf F( p, Y)} \left( \mathbf F( p_{[n]\backslash \{i\}}, Y) + \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}) \right) \numberthis \label{eq:smap_postprob_thm} \end{align*} where $\mathbf F(\cdot)$ is as given in Definition~\ref{def:f}. \end{theorem} \begin{proof} The proof of this theorem employs the same trick used in the proof of Lemma~\ref{lemma:F_decomposition}. Let $|Y|=m$. From \eqref{eq:approx_smap_1}, \begin{align*} \Pr(X_i = 1 | Y) = \frac{ \sum\limits_{\substack{ x| x_i=1}} \Pr({X=x}) { x \choose Y}}{\mathbf F(p,Y)}.
\end{align*} Now, \begin{align*} \sum_{\substack{ x| x_i=1}} & \Pr({X=x}) { x \choose Y} =\sum_{\substack{ x|x_i=1}} \Pr({X=x}) \sum_{\substack{\mathcal S\subseteq [n]\\ |\mathcal S|=m}} \mathbbm{1}\{ x_{\mathcal S}= Y\}\\ &=\sum_{\substack{\mathcal S\subseteq [n]\\ |\mathcal S|=m}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=Y}} \Pr({X=x}).\numberthis \label{eq:smapiter1} \end{align*} We first separate the outer summation into two cases: (a) $i \notin \mathcal S$ and (b) $i\in \mathcal S$. We can express the first case as \begin{align*} &\hspace{-1cm}\sum_{\substack{\mathcal S\subseteq [n] \\ |\mathcal S|=m,i\notin \mathcal S}}\sum_{\substack{ x|x_i=1\\x_{\mathcal S}=Y}} \Pr({X=x}) =\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=Y}} \Pr({X=x})\\ &=\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} \sum_{\substack{ x|x_i=1\\ x_{\mathcal S}= Y}} \Big(\Pr(X_i=1)\Pr( X_{\mathcal S}= Y) \Pr( X_{[n]\backslash (\mathcal S\cup\{i\})}= x_{[n]\backslash (\mathcal S\cup\{i\})}) \Big)\\ &=\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} p_i \Pr( X_{\mathcal S}= Y) \left(\sum_{\substack{ x|x_i=1\\ x_{\mathcal S}= Y}} \Pr( X_{[n]\backslash (\mathcal S\cup\{i\})}= x_{[n]\backslash (\mathcal S\cup\{i\})})\right)\\ &=\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} p_i \Pr( X_{\mathcal S}= Y) \left(\sum_{(x_j|j\in [n]\backslash (\mathcal S\cup \{i\}))} \Pr( X_{[n]\backslash (\mathcal S\cup\{i\})}= x_{[n]\backslash (\mathcal S\cup\{i\})})\right)\\ &=p_i \sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} \Pr( X_{\mathcal S}= Y) = p_i \mathbf F( p_{[n]\backslash \{i\}}, Y).\numberthis \label{eq:lemma3proof1} \end{align*} For the second term, we express the set $\mathcal S$ as a union $\mathcal S = \mathcal S' \cup \{i\} \cup \mathcal S''$ such that $\mathcal S' \subseteq [i-1]$ and $\mathcal S'' \subseteq [i+1:n]$ to get:
\begin{align*} &\sum_{\substack{\mathcal S\subseteq [n]\\ |\mathcal S|=m,\\i\in \mathcal S}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=Y}} \Pr({X=x})= \sum_{k=1}^m\sum\limits_{\substack{\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ \mathcal S_k = i}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=Y}} \Pr({X=x})\\ &=\sum_{k=1}^m\sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=Y}}\mathbbm{1}_{\{Y_k=1\}} \Pr({X=x}) \\ &=\sum_{k|Y_k=1}\sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \sum_{\substack{ x|x_i=1\\ x_{\mathcal S'}= Y_{[1:k-1]}\\ x_{\mathcal S''}= Y_{[k+1:m]}}} \Bigg ( \Pr(X_i=1) \Pr( X_{\mathcal S'}= Y_{[1:k-1]}) \Pr( X_{\mathcal S''}= Y_{[k+1:m]}) \\&\hspace{7cm} \Pr( X_{[n]\backslash (\mathcal S'\cup \mathcal S''\cup \{i\})}= x_{[n]\backslash (\mathcal S'\cup \mathcal S'' \cup\{i\})})\Bigg )\\ &=p_i\sum_{k|Y_k=1}\Bigg ( \Big( \sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\Pr( X_{\mathcal S'}= Y_{[1:k-1]})\Big) \Big(\sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}}\Pr( X_{\mathcal S''}= Y_{[k+1:m]})\Big ) \\ & \hspace{5cm} \Big( \sum_{\substack{ x|x_i=1\\ x_{\mathcal S'}= Y_{[1:k-1]}\\ x_{\mathcal S''}= Y_{[k+1:m]}}} \Pr( X_{[n]\backslash (\mathcal S'\cup \mathcal S''\cup \{i\})}= x_{[n]\backslash (\mathcal S'\cup \mathcal S'' \cup\{i\})})\Big) \Bigg )\\ &=p_i\sum_{k|Y_k=1}\Bigg(\Big( \sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\Pr( X_{\mathcal S'}= Y_{[1:k-1]})\Big) \Big( \sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}}\Pr( X_{\mathcal S''}= Y_{[k+1:m]}) \Big)\Bigg)\\ &=p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).
\numberthis \label{eq:lemma3proof2} \end{align*} Plugging \eqref{eq:lemma3proof1} and \eqref{eq:lemma3proof2} into \eqref{eq:approx_smap_1} proves the theorem. \end{proof} Alg.~\ref{alg:apprx_smap_dp} summarizes the computation of $\Pr(X_i=1|Y)$. The complexity of the algorithm is $O(n^2)$, which is also the complexity of computing $\mathbf F(p,Y)$; note that $m=O(n)$ since $Y$ is a deleted version of the input. \begin{algorithm}[t!] \caption{Symbolwise posterior probabilities with one trace}\label{alg:apprx_smap_dp} \begin{algorithmic}[1] \item Input: Trace {$Y$}, priors $p$\\ Outputs: Posteriors $\Pr(X_i=1|Y)\ \forall\ i$ \State Compute $\mathbf F(p_{[1:k]},Y_{[1:j]})\ \forall\ k,j$ and $\mathbf F(p_{[k:n]},Y_{[j:m]})\ \forall\ k,j$ via Alg.~\ref{alg:F_comp} \For {$i=1:n$} \State Use \eqref{eq:smap_postprob_thm} to compute $\Pr(X_i=1|Y)$ \EndFor \end{algorithmic} \end{algorithm} \noindent\textbf{A trace reconstruction heuristic with $t$ traces.} The posterior probability computation in Alg.~\ref{alg:apprx_smap_dp} naturally gives rise to a trace reconstruction heuristic that updates the symbolwise statistics sequentially over the traces: we use Alg.~\ref{alg:apprx_smap_dp} with one trace at a time to continually update $\Pr(X_i=1|Y)$. The overall heuristic is described in Alg.~\ref{alg:apprx_smap}. The complexity of the algorithm is $O(tn^2)$ since Alg.~\ref{alg:apprx_smap} amounts to just $t$ uses of Alg.~\ref{alg:apprx_smap_dp}. \begin{algorithm}[t!]
\caption{Trace reconstruction via iterative single-trace posterior probabilities}\label{alg:apprx_smap} \begin{algorithmic}[1] \item Input: Traces {$Y^{(1)},...,Y^{(t)}$}, input length $n$ \\ Outputs: Estimate of the input $\hat X$ \State Initialize priors $p^{old}=p^{new} \gets (0.5,0.5,...,0.5)$ \For {$l=1:t$} \State Use Alg.~\ref{alg:apprx_smap_dp} with $p^{old}$ and $Y^{(l)}$ to update $p^{new}$ \State $p^{old}\gets p^{new}$ \EndFor \For {$i=1:n$} \If {$p^{new}_i\geq 0.5$} $\ \hat X_i \gets 1$ \Else $\ \hat X_i \gets 0$ \EndIf \EndFor \State \textbf{return} $\hat X_1 \hat X_2 ... \hat X_n$ \end{algorithmic} \end{algorithm} \section{On Maximum Likelihood Decoding For The Single-Trace Deletion Channel} \label{sec:1deletion_ML} We here consider the single-trace ML decoding in \eqref{eq:ML_deletion}, assuming that the output sequence $Y$ is non-empty. To the best of our knowledge, the only known method to solve \eqref{eq:ML_deletion} involves solving a combinatorial optimization, essentially iterating over all possible choices of $x$ and computing the objective value for each choice. We here show that one could equivalently solve the continuous relaxation of \eqref{eq:ML_deletion} to obtain a solution of the original problem. Before presenting the main result, we first state a useful lemma that factors a given coordinate $p_i$ out of the relaxed binomial coefficient $\mathbf F(p,Y)$ we introduced in Definition~\ref{def:f}. \begin{restatable}{lemma}{deletionMLrellemma} For $p=(p_1,p_2,...,p_i,...,p_n)$ and $Y=Y_1Y_2...Y_m$ with $n \geq m > 0$, we have \begin{align*} \mathbf F(p,Y) = \mathbf F( p_{ [n]\backslash \{i\}},Y) + p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]})\\ + (1-p_i) \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).
\end{align*} \label{lemma:F_decomposition} \end{restatable} Recall that $\mathbf F(p,Y)$ sums over all $m$-length subsets $\mathcal S$ and associates $p_{\mathcal S}$ with $Y$. Intuitively, this recursive relationship considers separately the cases where \begin{itemize} \item $i \notin \mathcal S$, \item $i \in \mathcal S$ and is associated with a particular $Y_k$ where $Y_k = 1$, \item $i \in \mathcal S$ and is associated with a particular $Y_k$ where $Y_k = 0$. \end{itemize} The detailed proof can be found in Appendix~\ref{app:F_lemma_proof}. It is clear from Lemma~\ref{lemma:F_decomposition} that $\mathbf F(p,Y)$ is affine when projected onto each coordinate $p_i$. Thus, the extrema of $\mathbf F(p,Y)$ with respect to $p_i$ must occur at an endpoint of the interval $[0,1]$ over which $p_i$ ranges, i.e., at either $p_i = 0$ or $p_i = 1$. Combining this with the fact that $\mathbf F(\cdot)$ is a relaxed version of the binomial coefficient, we observe that the maximization problem in \eqref{eq:ML_deletion} is equivalent to its real-valued relaxation. The following result makes this precise. \begin{theorem} The ML decoding problem for the single-trace deletion channel \begin{equation} \max_{x\in \{0,1\}^n} {x \choose Y} \label{eq:ml_opt_equiv1} \end{equation} is equivalent to the problem \begin{equation} \max_{p\in [0,1]^n} \mathbf F(p,Y). \label{eq:ml_opt_equiv2} \end{equation} Furthermore, given any non-integral $p^* \in [0,1]^n$ that maximizes $\mathbf F(p,Y)$, we can construct a corresponding integral solution $x^* \in \{0,1\}^n$ that maximizes $\mathbf F(x,Y)$ and consequently also maximizes ${x \choose Y}$. \label{thm:ML_relaxation} \end{theorem} \begin{proof} As noted earlier, we have ${x \choose Y} = \mathbf F(x,Y)$.
Therefore, we are interested in proving the following: \begin{align*} \max_{x\in \{0,1\}^n} \mathbf F(x,Y) \equiv \max_{p\in [0,1]^n} \mathbf F(p,Y),\numberthis \label{eq:ml_opt_equiv_proof1} \end{align*} where $\equiv$ denotes that the two problems are equivalent (have the same optimal objective value). We prove this by applying the following claim.\\ \textbf{Claim:} Given any feasible $p=(p_1,p_2,...,p_i,...,p_n)$, at least one of the following holds true: \begin{itemize} \item $\mathbf F(p^{(i\rightarrow 0)},Y) \geq \mathbf F(p,Y)$. Recall from notation that $p^{(i\rightarrow 0)}=(p_1,p_2,...,p_{i-1},0,p_{i+1},...,p_n)$ is the vector where the $i^{th}$ coordinate is replaced by $0$. \item $\mathbf F(p^{(i\rightarrow 1)},Y) \geq \mathbf F(p,Y)$. \end{itemize} Thus if $p^*$ is an optimal solution to \eqref{eq:ml_opt_equiv2} with $p^*_i\in (0,1)$, then at least one of $p^{(i\rightarrow 0)}$ or $p^{(i\rightarrow 1)}$ is also an optimal solution. Sequentially applying this argument to each coordinate shows that there exists a point in $\{0,1\}^n$ which is an optimal solution to \eqref{eq:ml_opt_equiv2} and consequently to \eqref{eq:ml_opt_equiv1}. It remains to prove our claim. We use Lemma~\ref{lemma:F_decomposition} to factor out $p_i$ terms in $\mathbf F(p,Y)$: \begin{align*} \mathbf F(p,Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]})\\ + (1-p_i) \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).
\end{align*} Now we express $\mathbf F(p^{(i\rightarrow 0)},Y)$ and $\mathbf F(p^{(i\rightarrow 1)},Y)$ as $$\mathbf F(p^{(i\rightarrow 0)},Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}),$$ $$\mathbf F(p^{(i\rightarrow 1)},Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).$$ \noindent Because $0\leq p_i\leq 1$, it directly follows that $$\min \left\{\mathbf F(p^{(i\rightarrow 0)},Y),\mathbf F(p^{(i\rightarrow 1)},Y)\right\} \leq \mathbf F(p,Y) \leq \max \left\{\mathbf F(p^{(i\rightarrow 0)},Y),\mathbf F(p^{(i\rightarrow 1)},Y)\right\},$$ thus proving our claim. Note that the proof gives a natural way to find an optimal lattice point $p_{lattice}^*$ given an optimal interior point $p^*$ by iterating through each coordinate of $p^*$ and switching it to $0$ or $1$ by comparing $\mathbf F(p^{(i\rightarrow 0)},Y)$ and $\mathbf F(p^{(i\rightarrow 1)},Y)$. \end{proof} The real-valued optimization problem in \eqref{eq:ml_opt_equiv2} falls under the umbrella of signomial optimization, which is, in general, NP-hard (see, for example, \cite{xu2014signomial}, \cite{chand2016signomial}). With a slight change of variables, \eqref{eq:ml_opt_equiv2} could also be expressed as a maximization of a convex function over a convex set. However, it is unclear if \eqref{eq:ml_opt_equiv2} is solvable in polynomial time or inherently hard to solve. \begin{algorithm}[t!]
\caption{Coordinate switch ML heuristic}\label{alg:cood_switch} \begin{algorithmic}[1] \item Input: Blocklength $n$, Trace {$Y$}, Initial point $p = (p_1,p_2,...,p_n)$ \\ Outputs: Estimated sequence $\hat X$ \State Initialize visited set $\mathcal V = \emptyset$ \While {True} \State Compute $\mathcal F_i = |\mathbf F(p^{(i\rightarrow 1)},Y)- \mathbf F(p^{(i\rightarrow 0)},Y)|\ \forall\ i$ and let $\mathcal F = (\mathcal F_1,\mathcal F_2,...,\mathcal F_n)$. \State Define the ordered list $\mathcal S =$ \texttt{argsort}$(\mathcal F)$ where \texttt{argsort}$(\mathcal F)$ returns the index set $[n]$ sorted by descending order of $\mathcal F$, i.e., $\mathcal F_{\mathcal S_1}\geq \mathcal F_{\mathcal S_2}\geq ... \geq \mathcal F_{\mathcal S_n}$. \For {$i \in \mathcal S$ (ordered traversal)} \If {$\mathbf F(p^{(i\rightarrow 1)},Y)- \mathbf F(p^{(i\rightarrow 0)},Y) \geq 0$} \State update $p \leftarrow p^{(i\rightarrow 1)}$ \Else \State update $p \leftarrow p^{(i\rightarrow 0)}$ \EndIf \EndFor \If {$p \in \mathcal{V}$} break \EndIf \State $\mathcal V = \mathcal V \cup \{p\}$ \EndWhile \State \textbf{return} $\hat X = p$ \end{algorithmic} \end{algorithm} The proof of Theorem~\ref{thm:ML_relaxation} inspires a heuristic for sequence reconstruction (see Alg.~\ref{alg:cood_switch}): \begin{itemize} \item Start from a given point $p = (p_1,...,p_n) \in [0,1]^n$. \item One round of iteration is defined as follows: fix a traversal order for the indices $\{1,2,...,n\}$. Traverse through the indices $i$ in order and make $p_i$ either 0 or 1 depending on whether $\mathbf F(p^{(i\rightarrow 0)},Y)$ or $\mathbf F(p^{(i\rightarrow 1)},Y)$ is larger. This ensures that $\mathbf F(p,Y)$ never decreases. \item At the end of the round, check if the resultant $p$ was already obtained at the end of a previous round: if so, end the algorithm (to prevent it from going into an endless cycle). Otherwise, start a new round from the resultant $p$. 
\end{itemize} Clearly the resultant $p$ at the end of a round is a lattice point since we make each $p_i$ to be 0 or 1. Therefore, the algorithm will end after a finite number of steps; in the worst case it will iterate through all $2^n$ sequences, although in practice we observe that it ends in 4-5 rounds (tested up to a blocklength of 100). We also note that the complexity of each round is $O(n^3)$ since it iterates through $n$ coordinates and for each coordinate computes $\mathbf F(\cdot)$, which is $O(n^2)$. A natural question is whether it makes a difference if Alg.~\ref{alg:cood_switch} starts from an interior point ($p = (p_1,...,p_n) \in [0,1]^n$ where $\exists\ p_i \in (0,1)$) as compared to starting from a lattice point ($p = (p_1,...,p_n) \in \{0,1\}^n$). In Section~\ref{sec:Numerics} we show that starting from an interior point results in better estimation accuracy, thus supporting the usefulness of the ML relaxation. \subsection{Proofs.} \subsubsection{Proof of Theorem~\ref{thm:channel_equiv}} \label{app:channel_equiv} The intuition behind the theorem is that the cascade model splits the error events in the $t$-trace deletion channel into two parts:\\ - When an input symbol is deleted in all the traces, which is captured by the deletion channel with parameter $\delta^t$.\\ - When an input symbol is not deleted in at least one of the traces, captured by the remnant channel. \begin{figure}[!h] \centering \includegraphics[scale=0.4]{channel_equiv_proof.pdf} \caption{The deletion error events occurring in the two channel models. Here `$-$' corresponds to a symbol being deleted and `$+$' corresponds to a transmission.
The deletion pattern $D_i$ corresponds to the input symbol $X_i$.} \end{figure} In order to prove the theorem, we need to show that the deletion patterns arising in the $t$-trace channel model and in the cascade model have the same distribution, i.e., \begin{align*} \Pr(D_1=d_1,D_2=d_2,...,D_n=d_n) = \Pr(\widetilde D_1=d_1,\widetilde D_2=d_2,...,\widetilde D_n=d_n), \end{align*} where $d_i \in \{-,+\}^t$, with $-$ corresponding to a deletion and $+$ to a transmission. Also from the definition of our channel models, the deletions act independently on each input symbol, i.e., $D_i\ \raisebox{0.05em}{\rotatebox[origin=c]{90}{$\models$}}\ D_j$ for $i\neq j$. So it is sufficient to prove that the distributions of each $D_i$ and $\widetilde D_i$ are the same. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{channel_equiv_proof_2.pdf} \caption{The error events of the cascade model, expressed in terms of the error events of its components.} \end{figure} Consider $\widetilde D_i$ -- this is influenced by $\breve {D^0_i}$, the deletion pattern in channel $\mathcal C_1$, and by $\breve D_i$, the deletion pattern in the remnant channel $\mathcal C_2$. To prove the equivalence, we consider two cases: \begin{itemize} \item $d_i = (-,-,-,...,-)$, the error event where a symbol is deleted in all the observations. It can be seen that $\Pr(D_i = d_i)$ for this case is $\delta^t$. On the other hand, to compute $\Pr(\widetilde D_i = d_i)$, we note that this event is possible if and only if $\breve {D^0_i} = -$, since by definition, the remnant channel cannot delete the input symbol in all the $t$ observations. Therefore, $\Pr(\widetilde D_i = d_i)=\Pr(\breve D^0_i = -) = \delta^t$. \item $d_i \neq (-,-,-,...,-)$, i.e., the input symbol is not deleted in at least one trace. Also let us define $k$ to be the count of $-$ in $d_i$. In this case, $\Pr(D_i = d_i)=\delta^{\text{Count(-) in } d_i}(1-\delta)^{\text{Count(+) in } d_i} = \delta^k (1-\delta)^{t-k}$.
For the cascade model, this event requires that $\breve {D^0_i} = +$ and $\breve {D_i}=d_i$. Thus, $$\Pr(\widetilde D_i=d_i)=\Pr(\breve {D^0_i} = +)\cdot \Pr(\breve {D_i}=d_i) = (1-\delta^t) \frac{\delta^k (1-\delta)^{t-k}}{1-\delta^t}=\delta^k (1-\delta)^{t-k}.$$ \end{itemize} In both cases, the distributions of $D_i$ and $\widetilde D_i$ are the same, proving the equivalence. \subsubsection{Proof of Lemma~\ref{lemma:F_decomposition}} \label{app:F_lemma_proof} \deletionMLrellemma* \begin{proof} The proof of this lemma follows a similar approach to the proof of Thm.~\ref{thm:approx_smap_postprob}. First, in the expression for $\mathbf F(\cdot)$, we separate out the subsets that contain index $i$: \begin{align*} \mathbf F(p,y) &= \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j}\\ &= \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m, \\ i \notin \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j} + \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j}\\ &= \mathbf F(p_{[n]\backslash \{i\}},y) + \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j}. \numberthis \label{eq:Flemma_proof1} \end{align*} Now the second term can be further split as \begin{align*} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j} &= \sum_{k=1}^m\sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ \mathcal S_k = i}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j}.
\end{align*} One could express the set $\mathcal S$ as the union $\mathcal S = \mathcal S' \cup \{i\} \cup \mathcal S''$ such that $\mathcal S' \subseteq [i-1]$ and $\mathcal S'' \subseteq [i+1:n]$ to get \begin{align*} &\sum_{k=1}^m\sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ \mathcal S_k = i}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j}\\ &= \sum_{k=1}^m\sum_{\substack{\mathcal S'|\\ \mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\sum_{\substack{\mathcal S''|\\ \mathcal S'' \subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \left( \prod\limits_{j=1}^{k-1} p_{\mathcal S'_j}^{y_j} (1-p_{\mathcal S'_j})^{1-y_j} \right) \left( p_{i}^{y_k} (1-p_{i})^{1-y_k} \right) \left( \prod\limits_{j=1}^{m-k} p_{\mathcal S''_j}^{y_{j+k}} (1-p_{\mathcal S''_j})^{1-y_{j+k}} \right)\\ &= \sum_{k=1}^m p_{i}^{y_k} (1-p_{i})^{1-y_k}\left(\sum_{\substack{\mathcal S'|\\ \mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}} \prod\limits_{j=1}^{k-1} p_{\mathcal S'_j}^{y_j} (1-p_{\mathcal S'_j})^{1-y_j} \right) \left( \sum_{\substack{\mathcal S''|\\ \mathcal S'' \subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \prod\limits_{j=1}^{m-k} p_{\mathcal S''_j}^{y_{j+k}} (1-p_{\mathcal S''_j})^{1-y_{j+k}} \right)\\ &= \sum_{k=1}^m p_{i}^{y_k} (1-p_{i})^{1-y_k} \mathbf F(p_{[i-1]},y_{[k-1]}) \mathbf F(p_{[i+1:n]},y_{[k+1:m]}). \end{align*} The $\sum_{k=1}^m$ summation in the above expression could further be split into the two cases depending on whether $y_k=0$ or $y_k=1$, which simplifies the term $p_{i}^{y_k} (1-p_{i})^{1-y_k}$ to either $1-p_i$ or $p_i$ respectively. 
Thus, \begin{align*} &\sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ i \in \mathcal S}} \quad \prod\limits_{j=1}^m p_{\mathcal S_j}^{y_j} (1-p_{\mathcal S_j})^{1-y_j}\\ &= (1-p_i)\sum_{k|y_k=0} \mathbf F(p_{[i-1]},y_{[k-1]}) \mathbf F(p_{[i+1:n]},y_{[k+1:m]}) + p_i\sum_{k|y_k=1} \mathbf F(p_{[i-1]},y_{[k-1]}) \mathbf F(p_{[i+1:n]},y_{[k+1:m]}).\numberthis \label{eq:Flemma_proof2} \end{align*} \noindent Plugging \eqref{eq:Flemma_proof2} in \eqref{eq:Flemma_proof1} concludes the proof of the Lemma. \end{proof} \subsubsection{Proof of Lemma~\ref{lemma:bin_inf_relation}} The following Lemma forms the backbone of the analyses for multiple traces. This lemma is also closely related to the channel equivalence in Theorem~\ref{thm:channel_equiv}. \label{app:bin_inf_lemma} \bininfrelation* \begin{proof} The channel equivalence can essentially be tied to this lemma as follows: consider the two channel models in Fig.~\ref{fig:channel_equiv}. The probability of observations given the input in both cases is proportional to the number of ways of obtaining the observations given the input. \begin{itemize} \item For the $t$-trace deletion channel model in Fig.~\ref{fig:channel_equiv} (a), the number of ways to obtain the traces given the input is equal to ${X \choose {Y}^{1}}{X \choose {Y}^{2}}...{X \choose {Y}^{t}}$. \item For the cascade model in Fig.~\ref{fig:channel_equiv} (b), the number of ways to obtain the traces given the input is equal to $\sum_{z} {X \choose z} \langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},z \rangle$, which we show below. \end{itemize} The above two expressions must be equal since the two channel models are equivalent. We now compute the probability of a given set of output sequences given an input sequence for the remnant channel, namely $\Pr(\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}|Z)$.
First, note that there can be multiple deletion patterns corresponding to outputs $\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}$ resulting from a given input $Z$. The number of such patterns is equal to $\langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},Z \rangle$, which essentially follows from the definition of the infiltration product. Consider one such valid deletion pattern, i.e., a deletion pattern $\mathcal{D}$ that is a mapping of the symbols in $Z$ onto the symbols in $\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}$: $\mathcal{D}=\{(1,S_1),(2,S_2),...,(|Z|,S_{|Z|})\}$. Here $(i,S_i)$ represents the fact that $Z_i$ is not deleted in the outputs $\{\tilde{Y}^{j}: j \in S_i\}$ and is deleted in the rest. From the definition of the remnant channel, we have $|S_i|>0$. Also $\sum_{i=1}^{|Z|}|S_i|=\sum_{j=1}^t|\tilde{Y}^{j}|$ since every symbol of each output is associated with exactly one input symbol and hence corresponds to one particular $S_i$. Thus, \begin{align*} \Pr(\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}|Z) &= \langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},Z \rangle \Pr(\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}|Z,\mathcal{D})\\ &=\langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},Z \rangle \prod_{i=1}^{|Z|}\frac{(1-\delta)^{|S_i|}\delta^{t-|S_i|}}{1-\delta^t}\\ &=\langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},Z \rangle \frac{(1-\delta)^{\sum|S_i|}\delta^{|Z|t-\sum |S_i|}}{(1-\delta^t)^{|Z|}}\\ &=\langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},Z \rangle \frac{(1-\delta)^{\sum|\tilde{Y}^{j}|}\delta^{|Z|t-\sum |\tilde{Y}^{j}|}}{(1-\delta^t)^{|Z|}}.
\end{align*} We can then compute the probability of the output given the input for the cascade channel as \begin{align*} \Pr(&\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}|X)\\ &= \sum_{z} \Pr(\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t},Z=z|X)\\ &= \sum_{z} \Pr(Z=z|X)\Pr(\tilde Y^{1},\tilde Y^{2},...,\tilde Y^{t}|Z=z)\\ &= \sum_{z} \Bigg[{X \choose z} \delta^{t(|X|-|z|)}(1-\delta^t)^{|z|}\langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},z \rangle \frac{(1-\delta)^{\sum|\tilde{Y}^{j}|}\delta^{|z|t-\sum |\tilde{Y}^{j}|}}{(1-\delta^t)^{|z|}}\Bigg]\\ &= \left[\sum_{z} {X \choose z} \langle \tilde Y^{1}\uparrow \tilde Y^{2} \uparrow...\uparrow \tilde Y^{t},z \rangle\right] \delta^{t|X|-\sum |\tilde{Y}^{j}|} {(1-\delta)^{\sum|\tilde{Y}^{j}|}}. \numberthis \label{eq:lemma2_proof_1} \end{align*} For the $t$-trace deletion channel model, we have: \begin{align*} \Pr(Y^1,Y^2,...,Y^t|X) &= \prod_{j=1}^t {X \choose Y^j} \delta^{|X|-|Y^j|}(1-\delta)^{|Y^j|} \\ &= {X \choose {Y}^{1}}{X \choose {Y}^{2}}...{X \choose {Y}^{t}} \delta^{t|X|-\sum |{Y}^{j}|} {(1-\delta)^{\sum|{Y}^{j}|}}. \numberthis \label{eq:lemma2_proof_2} \end{align*} Equating \eqref{eq:lemma2_proof_1} and \eqref{eq:lemma2_proof_2} with $X = h$ and traces as $Y^j = \tilde Y^j = f_j$ proves the Lemma. Alternatively, the statement can also be proved by induction, as we do below. The statement is trivially true when $m=1$ since $\sum_{w}{h \choose w}\langle f_1,w \rangle={h \choose f_1}$ as $\langle f,w \rangle=\mathbbm{1}_{f=w}$. We refer the reader to equation 6.3.25 in \cite{lothaire1997combinatorics} for the proof of the lemma for the case $m=2$. Assume that the statement is true for $m=k \in \mathbb{Z}, k\geq 2$. We next prove the validity when $m=k+1$.
\\ Consider \begin{align} {h \choose f_1} {h \choose f_2}...{h \choose f_k}{h \choose f_{k+1}}&=\sum_{w}{h \choose w}\langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle {h \choose f_{k+1}}\nonumber\\ &=\sum_{w}\left[{h \choose w} {h \choose f_{k+1}}\right]\langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle\nonumber\\ &=\sum_{w}\left[\sum_v \langle w\uparrow f_{k+1},v \rangle {h \choose v}\right]\langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle\nonumber\\ &=\sum_{v}{h \choose v}\left[\sum_w \langle w\uparrow f_{k+1},v \rangle \langle f_1\uparrow f_2\uparrow ...\uparrow f_k,w \rangle\right]. \label{eq:prop2proof} \end{align} To evaluate the term in the square bracket, we use \eqref{def:infiltforseq}. For the case where $\tau \in \mathcal{A}^*,\sigma \in \mathbb{Z}\langle \mathcal{A} \rangle$ in \eqref{def:infiltforseq}, we have $$\sigma\uparrow \tau=\sum_{f\in \mathcal{A}^*} \langle \sigma,f \rangle (f\uparrow \tau),$$ and thus \begin{equation} \langle \sigma \uparrow \tau,u \rangle=\sum_{f\in \mathcal{A}^*} \langle \sigma,f \rangle \langle f\uparrow \tau,u\rangle. \label{eq:prop2proof2} \end{equation} We use \eqref{eq:prop2proof2} to replace the term in the square bracket in \eqref{eq:prop2proof}, i.e., \begin{align} {h \choose f_1} {h \choose f_2}...{h \choose f_k}{h \choose f_{k+1}}\nonumber\\ =\sum_{v}{h \choose v}\langle (f_1\uparrow f_2\uparrow ...\uparrow f_k) \uparrow f_{k+1},v \rangle, \end{align} and the lemma follows from the associativity property of the infiltration product. \end{proof} \subsubsection{Proof of Lemma~\ref{lemma:smapsum}} \label{app:smapsum} \begin{restatable}{lemma}{smapsum} \label{lemma:smapsum} \begin{align*} \sum_{\substack{f||f|{=}n\\f_i{=}a}}{f \choose g} = 2^{n-|g|}\Bigg(\frac{1}{2}{n-1 \choose |g|} &+\sum_{j|g_j=a}{i-1 \choose j-1}{n-i \choose |g|-j}\Bigg), \end{align*} where $j\in\Big[\max\{1,|g|+i-n\}:\min\{i,|g|\}\Big]$. 
\end{restatable} \begin{proof} First, observe that $${f \choose g} = \sum_{\substack{S\subseteq [n]:\\|S|=|g|}} \mathbbm 1_{f_S=g},$$ where the summation is over all ordered subsets of $[n]=\{1,2,...,n\}$ of size $|g|$ and $f_S$ corresponds to the subsequence of $f$ indexed by $S$. Thus, \begin{align*} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}}&{f \choose g} = \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \sum_{\substack{S\subseteq [n]|\\|S|=|g|}} \mathbbm 1_{f_S=g} = \sum_{\substack{S\subseteq [n]|\\|S|=|g|}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g}\\ &=\sum_{\substack{S\subseteq [n]|\\|S|=|g|\\i \notin S}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g} + \sum_{\substack{S\subseteq [n]|\\|S|=|g|\\i \in S}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g}\\ &=\sum_{\substack{S\subseteq [n]|\\|S|=|g|\\i \notin S}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g} + \sum_{j=1}^{|g|} \sum_{\substack{S\subseteq [n]|\\|S|=|g|\\S_j=i}} \sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}} \mathbbm 1_{f_S=g}. \numberthis \label{eq:lemmasmapsum} \end{align*} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{lemma1.pdf} \vspace*{-2mm} \caption{Figure illustrating proof of Lemma~\ref{lemma:smapsum}.} \label{fig:lemmasmapsum} \end{center} \end{figure} The two terms in \eqref{eq:lemmasmapsum} can be visualized as the number of ways to fill up the blank spaces (spaces without arrows pointing to them in $f$) in Fig.~\ref{fig:lemmasmapsum}(a) and (b) respectively.
Solving this counting problem, we get $$\sum_{\substack{f\in\mathcal{A}^n|\\f_i=a}}{f \choose g}=2^{n-|g|}\left(\frac{1}{2}{n-1 \choose |g|}+\sum_{j|g_j=a}{i-1 \choose j-1}{n-i \choose |g|-j}\right).$$ \end{proof} \subsection{Dynamic program to compute $\mathbf F(\cdot)$ and $\nabla \mathbf F(\cdot)$} \subsubsection{Computation of $\mathbf F(p,v)$} \label{app:F_compute} We here describe how to compute $\mathbf F(p,v)$ in $O(mn)$ time and space complexity, where $p=(p_1,...,p_n)$ and $v=v_1...v_m$, via a dynamic programming approach. Note that $m \leq n$; otherwise, $\mathbf F(p,v)=0$. We first define \begin{align*} &\mathbf G^{for}(k,j)\triangleq \mathbf F(p_{[1:k]},v_{[1:j]}). \numberthis \label{eq:G} \end{align*} Using Lemma~\ref{lemma:F_decomposition} with $i=n$, we get \begin{align*} \mathbf F(p,v) = \mathbf F(p_{[n-1]},v) + p_n^{v_m} (1-p_n)^{(1-v_m)} \mathbf F(p_{[n-1]},v_{[m-1]}). \end{align*} \noindent This translates to the following dynamic program for $\mathbf G^{for}$: \begin{align*} \mathbf G^{for}(k,j) = \mathbf G^{for}(k-1,j)+ p_{k}^{v_j}(1-p_{k})^{1-v_j}&\mathbf G^{for}(k-1,j-1),\numberthis \label{eq:approx_smap_dpfor} \end{align*} with the boundary conditions $\mathbf G^{for}(k,0)=1\ \forall\ k \geq 0$ and $\mathbf G^{for}(k,j)=0\ \forall\ k<j$. The algorithm is now summarized as Alg.~\ref{alg:F_comp}. \begin{algorithm}[h!]
\caption{Computing $\mathbf F(p,v)$}\label{alg:F_comp} \begin{algorithmic}[1] \State Inputs: $p \in [0,1]^n$, $v \in \{0,1\}^m$ \State Outputs: $\mathbf F(p_{[1:k]},v_{[1:j]})$ for all $k \in [n]$ and $j\in[m]$ \State Initialize $\mathbf G^{for}(k,0)=1\ \forall\ k$ and $\mathbf G^{for}(k,j)=0\ \forall\ k<j$ \For {$k = 1:n$ and $j = 1:m$} \State Use \eqref{eq:approx_smap_dpfor} to update $\mathbf G^{for}(k,j)$ \EndFor \State return $\mathbf G^{for}(k,j)\ \forall\ k,j$ \end{algorithmic} \end{algorithm} We note that a similar dynamic programming approach yields $\mathbf F(p_{[k+1:n]},v_{[j+1:m]})$ for all $k \in [n]$ and $j\in[m]$ in $O(mn)$ time and space complexity by defining \begin{align*} &\mathbf G^{rev}(k,j)\triangleq \mathbf F(p_{[k+1:n]},v_{[j+1:m]}). \end{align*} \noindent The following dynamic program can be used for $\mathbf G^{rev}$: \begin{align*} \mathbf G^{rev}(k,j) = \mathbf G^{rev}(k+1,j)+ p_{k+1}^{v_{j+1}}(1-p_{k+1})^{1-v_{j+1}}&\mathbf G^{rev}(k+1,j+1),\numberthis \label{eq:approx_smap_dprev} \end{align*} with the boundary conditions $\mathbf G^{rev}(k,m)=1\ \forall\ k \geq 0$ and $\mathbf G^{rev}(k,j)=0\ \forall\ k,j: n-k<m-j$. \\ \subsubsection{Computation of $\nabla_p \mathbf F(p,v)$} \label{app:F_grad_comp} First, from Lemma~\ref{lemma:F_decomposition}, we have \begin{align*} \mathbf F(p,v) = \mathbf F(p_{[n]\backslash \{i\}},v) +& (1-p_i)\sum_{k|v_k=0} \mathbf F(p_{[i-1]},v_{[k-1]}) \mathbf F(p_{[i+1:n]},v_{[k+1:m]}) \\&+ p_i\sum_{k|v_k=1} \mathbf F(p_{[i-1]},v_{[k-1]}) \mathbf F(p_{[i+1:n]},v_{[k+1:m]}). \end{align*} Differentiating with respect to $p_i$, we get \begin{align*} \frac{\partial \mathbf F(p,v)}{\partial p_i} &= \sum_{k|v_k=1} \mathbf F(p_{[i-1]},v_{[k-1]}) \mathbf F(p_{[i+1:n]},v_{[k+1:m]}) - \sum_{k|v_k=0} \mathbf F(p_{[i-1]},v_{[k-1]}) \mathbf F(p_{[i+1:n]},v_{[k+1:m]}) \\ &= \sum_{k|v_k=1} \mathbf G^{for}(i{-}1,k{-}1)\mathbf G^{rev}(i,k) - \sum_{k|v_k=0}\mathbf G^{for}(i{-}1,k{-}1)\mathbf G^{rev}(i,k). 
\numberthis \label{eq:F_grad_comp} \end{align*} Thus, computing the $\mathbf G^{for}$ and $\mathbf G^{rev}$ terms is sufficient to compute the gradient. As discussed above, this computation requires $O(nm)$ operations. Given $\mathbf G^{for}$ and $\mathbf G^{rev}$, the computation of each partial derivative $\frac{\partial \mathbf F(p,v)}{\partial p_i}$ requires $O(m)$ operations, and we need to compute $n$ such partial derivatives. Thus, $\nabla_p \mathbf F(p,v)$ can be computed in $O(nm)$ time and space. \begin{algorithm}[h!] \caption{Computing $\nabla_p \mathbf F(p,v)$}\label{alg:F_grad_comp} \begin{algorithmic}[1] \State Inputs: $p \in [0,1]^n$, $v \in \{0,1\}^m$ \State Outputs: $\nabla_p \mathbf F(p,v)$ \State Initialize $\mathbf G^{for}(k,0)=1\ \forall\ k$ and $\mathbf G^{for}(k,j)=0\ \forall\ k<j$ \State Initialize $\mathbf G^{rev}(k,m)=1\ \forall\ k$ and $\mathbf G^{rev}(k,j)=0\ \forall\ k,j: n-k<m-j$ \For {$k = 1:n$ and $j = 1:m$} \State Use \eqref{eq:approx_smap_dpfor} and \eqref{eq:approx_smap_dprev} to compute $\mathbf G^{for}(k,j)$ and $\mathbf G^{rev}(k,j)$ \EndFor \For {$i = 1:n$} \State Use \eqref{eq:F_grad_comp} to compute $\frac{\partial \mathbf F(p,v)}{\partial p_i}$ \EndFor \State return $\nabla_p \mathbf F(p,v)$ \end{algorithmic} \end{algorithm} \subsection{An algebraic definition of the infiltration product} \label{app:infil_def} For completeness, we reproduce the formal definition of the infiltration product from Section 6.3 of \cite{lothaire1997combinatorics} (also see there for the equivalence of the two definitions). A \textit{formal series} with indeterminates (or variables) in a set $\mathcal A$ and coefficients in a commutative ring $\mathcal R$ is a mapping from $\mathcal A^*$ to $\mathcal R$.
Recall that a commutative ring is a set that forms an abelian group under an \textit{addition} operation, forms a monoid under a commutative \textit{multiplication} operation, and in which multiplication distributes over addition. Here we take the commutative ring $\mathcal{R}$ to be $\mathbb Z$, the set of integers. A formal series is called a \textit{polynomial} if only finitely many sequences are mapped to non-zero values. Consider two polynomials $\sigma,\tau: \mathcal{A}^*\rightarrow \mathbb Z$. The value taken by a sequence $w\in \mathcal A^*$ on $\sigma$ (or the coefficient of $w$ in $\sigma$) is denoted by $\langle \sigma,w\rangle \in \mathbb Z$. We also define binary addition ($\oplus$) and multiplication ($\times$) operations on the set of polynomials as follows: \begin{align} \langle \sigma\oplus \tau,w\rangle \triangleq \langle \sigma,w\rangle + \langle \tau,w \rangle \quad \forall w\in \mathcal A^*,\label{eq:polynomial_add}\\ \langle \sigma\times \tau,w\rangle \triangleq \sum_{\substack{f,g\in \mathcal A^*:\\ f.g=w}}\langle \sigma,f\rangle \langle \tau,g \rangle \quad \forall w\in \mathcal A^*.\label{eq:polynomial_prod} \end{align} For convenience, we will use the usual symbols $+$ and $.$ in place of $\oplus$ and $\times$ in this work; the intended operation will be clear from the operands. With these operations the set of polynomials forms a non-commutative ring, denoted by $\mathbb Z\langle\mathcal A \rangle$ and also called the free $\mathbb Z$-algebra on $\mathcal A$ in ring theory. Note that the addition and multiplication operations defined in \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are similar to the operations defined on commutative polynomials, except that the product under the summation in \eqref{eq:polynomial_prod} ($f.g=w$) is concatenation and hence non-commutative.
The multiplication inside the summation in \eqref{eq:polynomial_prod} is multiplication of coefficients in $\mathbb Z$ and hence commutative. The multiplication defined in \eqref{eq:polynomial_prod} distributes over the addition defined in \eqref{eq:polynomial_add}. Thus, a polynomial in $\mathbb Z\langle\mathcal A \rangle$ can be represented as a sum of monomials in $\mathcal A^*$, each with an associated coefficient in $\mathbb Z$, i.e., $\sigma=\sum\limits_{w\in \mathcal A^*} \langle\sigma,w \rangle w$. Define the \textit{degree} of a polynomial to be the length of a longest sequence with a non-zero coefficient in the polynomial, and the \textit{number of terms} of a polynomial to be the number of sequences with non-zero coefficients in the polynomial. Note that a degree-$d$ polynomial could have up to $2^{d+1}-1$ terms. With this, the \textit{infiltration product} (in general, for two polynomials) is defined as follows: \begin{align} \forall f\in \mathcal{A}^*,& \quad f\uparrow e = e\uparrow f=f.\nonumber \\ \forall f,g\in \mathcal{A}^*&,\quad \forall a,b\in \mathcal{A}, \nonumber\\ fa\uparrow gb=(f\uparrow gb)a&+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a.\nonumber \\ \forall \sigma,\tau \in \mathbb{Z}\langle\mathcal{A} \rangle, \quad &\sigma\uparrow \tau=\sum_{f,g\in \mathcal{A}^*} \langle \sigma,f \rangle \langle \tau,g \rangle (f\uparrow g). \label{def:infiltforseq} \end{align} \subsection{Symbolwise posterior probabilities for the remnant channel} \label{app:remnant_postprob} Consider the remnant channel shown below, and let $Z=Z_1Z_2...Z_n$ with $Z_i \sim \text{i.i.d. Ber}(0.5)$. We aim to compute $\Pr(Z_i=1|\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t)$.
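Since the computations below are phrased in terms of infiltration coefficients, a naive reference implementation of the word-level recursion above is useful for sanity checks (a Python sketch of ours, not one of the paper's algorithms; it is exponential in the word lengths and practical only for small examples, unlike the edit-graph dynamic program used later):

```python
def infiltration(f, g):
    """Infiltration product f ↑ g of two words, as a dict mapping each
    word w to its integer coefficient <f ↑ g, w>.  Implements the recursion
    fa ↑ gb = (f ↑ gb)a + (fa ↑ g)b + 1_{a=b}(f ↑ g)a,
    with f ↑ e = e ↑ f = f for the empty word e."""
    if not f:
        return {g: 1}
    if not g:
        return {f: 1}
    a, b = f[-1], g[-1]
    out = {}

    def accumulate(poly, s):
        # add poly . s: concatenate the symbol s to every word in poly
        for w, c in poly.items():
            out[w + s] = out.get(w + s, 0) + c

    accumulate(infiltration(f[:-1], g), a)           # (f ↑ gb) a
    accumulate(infiltration(f, g[:-1]), b)           # (fa ↑ g) b
    if a == b:
        accumulate(infiltration(f[:-1], g[:-1]), a)  # (f ↑ g) a
    return out
```

For instance, `infiltration("1", "1")` yields `{"11": 2, "1": 1}`, i.e., $1\uparrow 1 = 2\cdot 11 + 1$.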
\begin{figure}[!h] \centering \includegraphics[scale=0.4]{remnant_channel.pdf} \caption{The remnant channel} \end{figure} From the definition of the infiltration product, the input-output relation for this channel can be derived to be: \begin{align*} \Pr(\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t|Z) =\langle y^1\uparrow y^2 \uparrow...\uparrow y^t,Z \rangle \frac{(1-\delta)^{\sum|y^j|}\delta^{nt-\sum |y^j|}}{(1-\delta^t)^{n}}. \end{align*} Now, one could write the symbolwise posterior probabilities for $Z$ as: \begin{align*} \Pr(Z_i=1&|\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t) = \sum_{\substack{z||z|=n,\\z_i=1}} \Pr(z|\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t)\\ &{=} \frac{1}{2^n \Pr(\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t)} \sum_{\substack{z||z|=n,\\z_i=1}} \Pr(\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t|z)\\ &{=} \frac{{(1-\delta)^{\sum|y^{j}|}\delta^{nt-\sum |y^{j}|}}}{(1-\delta^t)^{n} 2^n \Pr(\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t)} \sum_{\substack{z||z|=n,\\z_i=1}} \langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle. \numberthis \label{eq:remant_map_prob_1} \end{align*} A similar expression can be obtained for the case when $Z_i=0$ as \begin{align*} \Pr(Z_i=0&|\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t)\\ &{=} \frac{{(1-\delta)^{\sum|y^{j}|}\delta^{nt-\sum |y^{j}|}}}{(1-\delta^t)^{n} 2^n \Pr(\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t)} \sum_{\substack{z||z|=n,\\z_i=0}} \langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle. 
\numberthis \label{eq:remant_map_prob_0} \end{align*} We can further simplify \eqref{eq:remant_map_prob_1} and \eqref{eq:remant_map_prob_0} using the fact that the two expressions must sum to 1, leading to \begin{align*} \Pr(Z_i=1|\tilde Y^{1}=y^1,\tilde Y^{2}=y^2,...,\tilde Y^{t}=y^t) = \frac{ \sum\limits_{\substack{z||z|=n,\\z_i=1}} \langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle}{\sum\limits_{\substack{z||z|=n}} \langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle}. \numberthis \label{eq:remnant_map_prob} \end{align*} We precisely describe the algorithm that computes the terms in \eqref{eq:remnant_map_prob} in Section~\ref{sec:exactsmap}, by exploiting the edit graph interpretation of the infiltration product, but give a high-level idea below. The complexity of such an algorithm is $O((2n)^t)$, which is of the order of the number of edges in the edit graph. Note that for a fixed number of traces, this algorithm is polynomial in the blocklength, as opposed to a naive approach of iterating through all the $n$-length sequences. Recall that $\langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle$ is the number of paths from the origin to the destination of the edit graph $\mathcal G(y^1,y^2,...,y^t)$ that correspond to $z$. Therefore, $\sum_{\substack{z||z|=n}} \langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle$ is equal to the number of $n$-length paths in $\mathcal G(y^1,y^2,...,y^t)$ from the origin to the destination. Note that the edit graph has no cycles, so this quantity can be computed efficiently via the following dynamic program: the number of $\ell$-length paths from the origin to a vertex $v$ is equal to the sum of the numbers of $(\ell-1)$-length paths from the origin to the in-neighbors of $v$. Such a procedure iterates over the vertex set of $\mathcal G(y^1,y^2,...,y^t)$ exactly once.
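The path-counting dynamic program just described can be sketched as follows (a simplified Python sketch of ours over a binary alphabet, iterating by path length rather than in topological order; we assume that an edge of the edit graph advances a nonempty subset of traces that all agree on the next symbol, and `count_paths` is our own name):

```python
from collections import defaultdict
from itertools import combinations

def count_paths(traces, n):
    """Number of n-length paths from the origin to the destination of the
    edit graph G(y^1,...,y^t), i.e., the sum over |z|=n of <y^1 ↑ ... ↑ y^t, z>."""
    t = len(traces)
    lens = tuple(len(y) for y in traces)
    counts = {(0,) * t: 1}  # length-0 paths from the origin to each vertex
    for _ in range(n):
        nxt = defaultdict(int)
        for v, c in counts.items():
            for a in "01":
                # traces whose next unconsumed symbol is a
                avail = [j for j in range(t)
                         if v[j] < lens[j] and traces[j][v[j]] == a]
                # an edge labeled a advances a nonempty subset of them
                for r in range(1, len(avail) + 1):
                    for S in combinations(avail, r):
                        w = list(v)
                        for j in S:
                            w[j] += 1
                        nxt[tuple(w)] += c
        counts = nxt
    return counts.get(lens, 0)
```

With traces $y^1=y^2=1$ there are two 2-length paths (both spelling $11$) and one 1-length diagonal path (spelling $1$), matching $1\uparrow 1 = 2\cdot 11 + 1$.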
The numerator term $\sum_{\substack{z||z|=n\\z_i=1}}\langle y^1 \uparrow y^2 \uparrow ... \uparrow y^t,z \rangle$ can be interpreted in a similar way: it is equal to the number of $n$-length paths in $\mathcal G(y^1,y^2,...,y^t)$ from the origin to the destination such that the $i^{th}$ edge of the path corresponds to a `1'. The algorithm for this, therefore, follows a similar principle but has an extra step. For each vertex $v$, we compute \begin{itemize} \item the number of paths from the origin to $v$ of length $0,1,...,n$, \item the number of paths from $v$ to the destination of length $0,1,...,n$. \end{itemize} Next, we iterate over all edges in $\mathcal G(y^1,y^2,...,y^t)$ corresponding to a `1' and accumulate the number of $n$-length paths that have this particular edge as their $i^{th}$ edge. Thus, this algorithm iterates over the vertex set twice and the edge set of $\mathcal G(y^1,y^2,...,y^t)$ once. \subsection{A heuristic for ML optimization with a single trace} The proof of Theorem~\ref{thm:ML_relaxation} inspires a heuristic for sequence reconstruction (see Alg.~\ref{alg:cood_switch}): \begin{itemize} \item Start from a given point $p = (p_1,...,p_n) \in [0,1]^n$. \item One round of iteration is defined as follows: fix a traversal order for the indices $\{1,2,...,n\}$. Traverse through the indices $i$ in order and make $p_i$ either 0 or 1 depending on whether $\mathbf F(p^{(i\rightarrow 0)},y)$ or $\mathbf F(p^{(i\rightarrow 1)},y)$ is larger. This ensures that $\mathbf F(p,y)$ never decreases. \item At the end of the round, check if the resultant $p$ was already obtained at the end of a previous round: if so, end the algorithm (to prevent it from going into an endless cycle). Otherwise, start a new round from the resultant $p$. \end{itemize} The resultant $p$ at the end of a round is a lattice point since each $p_i$ is set to 0 or 1.
Therefore, the algorithm will end after a finite number of steps; in the worst case it will iterate through all $2^n$ sequences, although in practice we observe that it ends in 4-5 rounds (tested up to a blocklength of 100). We also note that the complexity of each round is $O(n^3)$ since it iterates through $n$ coordinates and for each coordinate computes $\mathbf F(\cdot)$, which is $O(n^2)$. \begin{algorithm}[t!] \caption{Coordinate switch ML heuristic}\label{alg:cood_switch} \begin{algorithmic}[1] \State Inputs: Blocklength $n$, trace $Y=y$, initial point $p = (p_1,p_2,...,p_n)$ \State Outputs: Estimated sequence $\hat X$ \State Initialize visited set $\mathcal V = \emptyset$ \While {True} \State Compute $\mathcal F_i = |\mathbf F(p^{(i\rightarrow 1)},y)- \mathbf F(p^{(i\rightarrow 0)},y)|\ \forall\ i$ and let $\mathcal F = (\mathcal F_1,\mathcal F_2,...,\mathcal F_n)$. \State Define the ordered list $\mathcal S =$ \texttt{argsort}$(\mathcal F)$ where \texttt{argsort}$(\mathcal F)$ returns the index set $[n]$ sorted by descending order of $\mathcal F$, i.e., $\mathcal F_{\mathcal S_1}\geq \mathcal F_{\mathcal S_2}\geq ... \geq \mathcal F_{\mathcal S_n}$. \For {$i \in \mathcal S$ (ordered traversal)} \If {$\mathbf F(p^{(i\rightarrow 1)},y)- \mathbf F(p^{(i\rightarrow 0)},y) \geq 0$} \State update $p \leftarrow p^{(i\rightarrow 1)}$ \Else \State update $p \leftarrow p^{(i\rightarrow 0)}$ \EndIf \EndFor \If {$p \in \mathcal{V}$} break \EndIf \State $\mathcal V = \mathcal V \cup \{p\}$ \EndWhile \State \textbf{return} $\hat X = p$ \end{algorithmic} \end{algorithm} A natural question is whether it makes a difference if Alg.~\ref{alg:cood_switch} starts from an interior point ($p = (p_1,...,p_n) \in [0,1]^n$ where $\exists\ p_i \in (0,1)$) as compared to starting from a lattice point (for instance, $p = (y,0,...,0) \in \{0,1\}^n$, the $n$-length sequence obtained by appending zeros to $y$).
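The rounds of the heuristic can be sketched compactly, together with a direct implementation of the forward dynamic program for $\mathbf F$ (a Python sketch of ours for a binary alphabet; the function names are ours, and the sketch is illustrative rather than the paper's implementation):

```python
def F(p, v):
    """Forward DP: G(k,j) = G(k-1,j) + p_k^{v_j} (1-p_k)^{1-v_j} G(k-1,j-1),
    with boundary conditions G(k,0) = 1 and G(k,j) = 0 for k < j."""
    n, m = len(p), len(v)
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for k in range(n + 1):
        G[k][0] = 1.0
    for k in range(1, n + 1):
        for j in range(1, min(k, m) + 1):
            pv = p[k - 1] if v[j - 1] == "1" else 1 - p[k - 1]
            G[k][j] = G[k - 1][j] + pv * G[k - 1][j - 1]
    return G[n][m]

def coordinate_switch(y, n, p=None):
    """Rounds of greedy coordinate switches; F(p, y) never decreases
    since F is affine in each coordinate p_i.  Stops when a round's
    resultant lattice point was already seen."""
    p = list(p) if p is not None else [0.5] * n
    visited = set()
    while True:
        # order coordinates by the gap |F(p^(i->1), y) - F(p^(i->0), y)|
        gaps = [abs(F(p[:i] + [1] + p[i+1:], y) - F(p[:i] + [0] + p[i+1:], y))
                for i in range(n)]
        order = sorted(range(n), key=lambda i: -gaps[i])
        for i in order:
            hi = F(p[:i] + [1] + p[i+1:], y)
            lo = F(p[:i] + [0] + p[i+1:], y)
            p[i] = 1 if hi >= lo else 0
        key = tuple(p)
        if key in visited:
            return p
        visited.add(key)
```

On a lattice point $x$, $\mathbf F(x,y)$ reduces to the number of occurrences of $y$ as a subsequence of $x$; e.g., for $x=1010$ and $y=10$ it equals 3.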
It turns out that starting from an interior point results in better accuracy on both the Hamming and edit error rate metrics, thus supporting the usefulness of our ML relaxation result. In Fig.~\ref{fig:singletrace}, we compare the performance of the coordinate switch heuristic with the other trace reconstruction heuristics in Section~\ref{sec:Numerics}. We see that the coordinate switch with interior point initialization performs very similarly to the true ML sequence (obtained via exhaustive search), in terms of both the Hamming error rate as well as the edit error rate. This intuitively supports the idea that this is a good heuristic for the ML optimization problem. However, at this point the heuristic is applicable only to reconstruction from a single trace, and it is unclear how to extend it to multiple traces. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{Numerics/single_trace.pdf} \caption{Numerics for reconstruction from a single trace for a blocklength $n=20$. This plot compares the performance of the coordinate switch heuristic (abbreviated ``Coodsw. interior init.'' and ``Coodsw. lattice init.'') with other trace reconstruction algorithms from Section~\ref{sec:Numerics}. ``ML'' refers to the true ML sequence obtained via an exhaustive search over all binary sequences of length 20. The interior point initialization sets $p=(0.5,0.5,...,0.5)$, while the lattice point initialization appends zeros to the trace $y$ to obtain an $n$-length vector $p=(y,0,...,0)$.} \label{fig:singletrace} \end{figure} \subsection{Symbolwise MAP as the minimizer of Hamming error rate} \label{app:smap_hamming} Symbolwise MAP is an optimal estimator for minimizing the Hamming error rate for any channel, regardless of whether it is memoryless or not.
This fact can be seen from the following argument: consider a fixed observation $y$ (note that $y$ here can also be a collection of multiple observations; the arguments that follow remain unchanged) and suppose we aim to estimate a binary input sequence $X$; let the estimate of the input be $\hat X(y)$. Note that the estimate is a function of the observation $y$ alone. Now the Hamming error rate of any estimator given $y$ is the expectation (over all inputs) of the number of symbol mismatches divided by the blocklength, i.e., \begin{align*} \frac{1}{n}\E \left[ \sum_{i=1}^n \mathbbm{1} \{X_i \neq \hat X_i(y)\} \Big| Y=y \right ] &= \frac{1}{n} \sum_{i=1}^n \E \left [ \mathbbm{1}\{X_i \neq \hat X_i(y)\} \Big| Y=y \right ]\\ & = \frac{1}{n} \sum_{i=1}^n \Pr\left ( X_i \neq \hat X_i(y)\Big| Y=y \right ) \\ & = \frac{1}{n} \sum_{i=1}^n \Bigg( \Pr(X_i = 0|Y=y)\Pr(\hat X_i(y) = 1|X_i = 0,Y=y) \\ & \hspace{1cm}+ \Pr(X_i = 1|Y=y)\Pr(\hat X_i(y) = 0|X_i = 1,Y=y) \Bigg ). \end{align*} But $\hat X_i$ is a function of $y$ alone and hence is conditionally independent of $X_i$ given $y$, which implies the following: \begin{align*} \frac{1}{n}\E \left[ \sum_{i=1}^n \mathbbm{1} \{X_i \neq \hat X_i(y)\} \Big| Y=y \right ] & = \frac{1}{n} \sum_{i=1}^n \Bigg( \Pr(X_i = 0|Y=y)\Pr(\hat X_i(y) = 1|Y=y) \\ & \hspace{2cm}+ \Pr(X_i = 1|Y=y)\Pr(\hat X_i(y) = 0|Y=y)\Bigg ). \end{align*} To simplify notation, let the posterior probabilities be $q_i(y) \triangleq \Pr(X_i = 1|Y=y)$ and let $\alpha_i(y) \triangleq \Pr(\hat X_i(y) = 1|Y=y)$. Note that $q_i(y)$ is a property of the channel and is fixed given $y$, while $\alpha_i(y)$ depends on the design of our estimator.
With this, the above expression can be re-written as $$\frac{1}{n}\E \left[ \sum_{i=1}^n \mathbbm{1} \{X_i \neq \hat X_i(y)\} \Big| Y=y \right ] = \frac{1}{n} \sum_{i=1}^n \Bigg( (1-q_i(y)) \alpha_i(y) + q_i(y) (1-\alpha_i(y))\Bigg ).$$ The optimal assignment of $\alpha_i(y)$ to minimize this expression is $\alpha_i(y) = 1$ if $q_i(y) \geq 0.5$ and $\alpha_i(y) = 0$ otherwise, which coincides with the symbolwise MAP estimate. This proves the optimality of symbolwise MAP for minimizing the Hamming error rate given any observation $y$, for any channel. \section{Conclusions} \label{sec:conclusions} In this work we gave, to the best of our knowledge, the first results and techniques to compute posterior distributions over single and multiple deletion channels. We also provided a new perspective on maximum-likelihood estimation for the deletion channel by showing an equivalence between a discrete optimization problem and its relaxed version. In this process, we introduced a variety of tools (the relaxed binomial coefficient, edit graph and infiltration product) and demonstrated their use for analyzing deletion channels. We also presented numerical evaluations of our algorithms and showed performance improvements over existing trace reconstruction algorithms. \section{Introduction} \label{sec:intro} Sequence reconstruction over deletion channels, both with and without a codebook, has received considerable attention in the information theory as well as in the theoretical computer science literature. From an information theory perspective, reconstruction over the deletion channel, or more specifically a maximum-likelihood (ML) argument for the deletion channel, would give further insight on the capacity of the deletion channel, a long-standing open problem (see \cite{mitzenmacher2009survey}).
To quote \cite{mitzenmacher2009survey} -- ``at the very least, progress in this direction would likely surpass previous results on the capacity of the deletion channels''. Yet, there are no results on reconstruction over a deletion channel with statistical guarantees. In this work, we take steps in this direction. Relatedly, the problem of \textit{trace reconstruction}, as introduced in \cite{Batu2004}, has also received renewed interest in the past few years (see \cite{Holenstein2008,Peres2017}, \cite{De2017}, \cite{holden18}, \cite{Nazarov:2017}, \cite{holden2018lower}, \cite{chase2019new}). The problem of trace reconstruction can be stated simply as follows: consider a sequence $X$ which is simultaneously passed through $t$ independent deletion channels to yield $t$ output subsequences (also called \textit{traces}) of $X$ (see Fig.~\ref{fig:tdeletion}). How many such traces are needed to reconstruct $X$ perfectly? A variety of upper and lower bounds for this problem have been proposed, both for worst-case and average-case reconstruction. Our problem formulation is complementary to this, as we discuss next. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2]{tdeletion.pdf} \caption{The $t$-trace deletion channel model: the sequence $X$ is passed through $t$ independent deletion channels to yield $t$ \textit{traces}. We aim to estimate $X$ from the $Y^{i}$s.} \label{fig:tdeletion} \end{center} \end{figure} \noindent \textbf{Problem formulation.} Given an input sequence of length $n$ (known a priori), the independently and identically distributed (i.i.d.) deletion channel deletes each input symbol independently with probability $\delta$, producing at its output a subsequence of the input sequence. Consider a sequence $X$ passed through $t$ ($t$ is fixed) such deletion channels as shown in Fig.~\ref{fig:tdeletion}. We call this the $t$-trace deletion channel model.
We ask four main questions: \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2]{1deletion.pdf} \caption{The single-trace deletion channel model.} \label{fig:1deletion} \end{center} \end{figure} \begin{enumerate}[leftmargin = *] \item \textbf{Sequencewise maximum-likelihood with one trace:} For $t=1$ (also called \textit{single-trace deletion channel}, see Fig.~\ref{fig:1deletion}), what is the maximum-likelihood estimate of $X$ having observed $Y=y$, i.e., a solution to $\argmax\limits_{x\in \{0,1\}^n}\ \Pr(Y=y|X=x)$. \vspace{8pt} \item \textbf{Sequencewise maximum-likelihood with multiple traces:} For a fixed $t$, with $t>1$, what is the maximum-likelihood estimate of $X$ having observed $Y^{1}=y^1,Y^{2}=y^2,...,Y^{t}=y^t $, i.e., $$\argmax\limits_{x\in \{0,1\}^n}\ \Pr(Y^{1}=y^1,Y^{2}=y^2,...,Y^{t}=y^t|X=x).$$ \vspace{-12pt} \item \textbf{Symbolwise MAP with one trace:} For $t=1$ and $X_i \sim\ \text{ind. Ber}(p_i)$ in Fig.~\ref{fig:1deletion}, what are the posterior distributions of $X_i$ given the trace $Y=y$, i.e., compute $\Pr(X_i=\alpha|Y=y)$. \item \textbf{Symbolwise MAP with multiple traces:} For a fixed $t$, with $t>1$ and $X_i \sim\ \text{i.i.d. Ber}(0.5)$ in Fig.~\ref{fig:tdeletion}, what are the posterior distributions of $X_i$ given all traces $Y^{1}=y^1, Y^{2}=y^2,...,Y^{t}=y^t$, i.e., compute $\Pr(X_i=\alpha|Y^{1}=y^1, Y^{2}=y^2,...,Y^{t}=y^t)$. \end{enumerate} \vspace{5mm} \noindent We make a few notes. \begin{itemize}[leftmargin = *] \item For a channel with memory such as the deletion channel, the symbolwise MAP/ML estimate and sequencewise MAP/ML estimate are not equivalent. For example, consider $t=1$, $n =6$ in Fig.~\ref{fig:1deletion} and say we observe the trace $Y = 1010$. The symbolwise MAP estimate with uniform priors for this case can be computed to be $\hat X_{smap} = 100110$ whereas the sequencewise ML estimate is $\hat X_{ml} = 101010$. 
\item An answer to 3) above doesn't lead to a natural solution for 4), which is again due to the deletion channel possessing memory. In particular, for a memoryless channel, we have $Y^{j}_i - X_i - Y^{k}_i$ and hence $\Pr(X_i=\alpha|Y^{j}, Y^{k}) \propto \Pr(Y^{j}_i, Y^{k}_i|X_i=\alpha) = \Pr(Y^{j}_i|X_i=\alpha) \Pr(Y^{k}_i|X_i=\alpha)\propto \Pr(X_i=\alpha| Y^{j}) \Pr(X_i=\alpha | Y^{k})$; so one could first obtain the posterior probabilities from each independent observation and combine them afterwards. However, this is not the case for deletion channels since the Markov chain $Y^{j}_i - X_i - Y^{k}_i$ no longer holds. As a result, one first needs to ``align'' all the observations in order to compute the likelihoods. \item Solving 2) and 4) naturally leads to two different algorithms for average-case trace reconstruction -- one that selects the most likely sequence $X$ and the other that selects the most likely value for each symbol $X_i$. However, the problem formulations in 3) and 4) ask a question complementary to that of trace reconstruction: given a fixed (possibly small) number of traces, what is our ``best'' guess of $X$? The two formulations 2) and 4) quantify ``best'' differently. Unlike trace reconstruction, we are not concerned with perfect reconstruction (since perfect reconstruction may not be possible with just a few traces). We also note that error rate guarantees for our algorithms (not a part of this work) would naturally lead to upper bounds for trace reconstruction. \item The challenges associated with solving 1) and 2) are very different from those for 3) and 4). On the one hand, solving 1) and 2) amounts to discovering alternate, equivalent or approximate formulations of the seemingly difficult discrete optimization problems.
On the other hand, the challenge with 3) and 4) involves the design of efficient algorithms that are capable of exactly computing/approximating the symbolwise posterior probabilities, for which ``closed form'' expressions can be derived. \end{itemize} \vspace{5mm} \noindent \textbf{Contributions.} Our main contributions are as follows. \begin{itemize}[leftmargin=*] \item We introduce mathematical tools and constructs to visualize and analyze single-trace and $t$-trace deletion error events (see Section~\ref{sec:notation}). \item For the single-trace deletion channel, we establish an equivalence between finding the optimal ML decoder and a continuous optimization problem we introduce (see Section~\ref{sec:1deletion_ML}). This equivalence allows for the use of existing techniques for continuous optimization to be employed for a seemingly difficult discrete optimization problem. This continuous optimization problem also turns out to be a signomial optimization. Furthermore we also provide a polynomial time trace reconstruction heuristic with multiple traces that exploits this formulation. \item In Section~\ref{sec:1deletion}, we prove the following: \begin{theorem} For the single-trace deletion channel model with priors $X_i \sim \text{ind. Ber}(p_i)$ and observed trace $Y=y$, the symbolwise posterior probabilities $\Pr(X_i=1|Y=y)\ \forall\ i$ can be computed in $O(n^2)$ time complexity. \end{theorem} \item In Section~\ref{sec:exactsmap}, we prove the following: \begin{theorem} For the $t$-trace deletion channel model with priors $X_i \sim \text{i.i.d. 
Ber}(0.5)$ and observed traces $Y^{1}=y^1,...,Y^{t}=y^t$, the symbolwise posterior probabilities $\Pr(X_i = 1|Y^{1}=y^1,...,Y^{t}=y^t)\ \forall\ i$ can be computed in $O(2^t n^{t+2})$ time complexity.\\ \end{theorem} \end{itemize} \noindent \textbf{Tools and techniques.} In terms of theoretical tools, the series of books by Lothaire (\cite{lothaire1997combinatorics,lothaire2002algebraic,lothaire2005applied}) extensively uses algebraic tools for problems in the combinatorics of sequences (or \textit{words}), and our work is inspired by such techniques. We borrow some notation and leverage a few of their results in our work. \\ \noindent \textbf{Biological motivation.} Trace reconstruction in itself was motivated, in part, by problems in DNA sequence reconstruction. One such problem is to infer the DNA sequence of a common ancestor from samples of its descendants. Our problem definition, which considers a fixed value of $t$, fits naturally in a scenario with a fixed number of descendants where perfect reconstruction may not be possible. Our motivation for considering this problem also comes from a recent DNA sequencing technology called \textit{nanopore sequencing}. The $t$-trace deletion channel model is a simplistic model that approximately captures the process of a DNA sequence passing through a nanopore sequencer\footnote{As seen in \cite{Mao2017},\cite{MDK17}, there are more complicated effects of the nanopore reader not captured in this simple representation.}. \\ \noindent \textbf{More related work.} Our work falls under the general umbrella of sequence reconstruction over deletion channels (also see Levenshtein's work \cite{levenshtein2001efficient}), where we offer, to the best of our knowledge, the first non-trivial results on maximum likelihood and maximum a posteriori estimates for the single and multiple deletion channel. As mentioned earlier, the complementary problem of trace reconstruction falls closest to this work.
The deletion channel by itself is known to be notoriously difficult to analyze. As stated earlier, the capacity of a single deletion channel is still unknown (\cite{diggavi2007capacity,diggavi2006information,diggavi2001transmission}), as are optimal coding schemes. Prior works have looked at the design of codes for deletion channels (\cite{ratzer2005marker,ratzer2000codes,thomas2017polar}); these works consider the use of a codebook (we do not). Statistical estimation over deletion channels is a difficult problem to analyze due to its highly combinatorial nature. To the best of our knowledge, there are as yet no efficient estimation algorithms over deletion channels with statistical guarantees. Very recently, a variant of the trace reconstruction problem called \textit{coded trace reconstruction} has been proposed, motivated by portable DNA-based data storage systems using DNA nanopores (see \cite{abroshan2019coding}, \cite{cheraghchi2019coded}, \cite{brakensiek2019coded}), and we believe that the ideas in this work may prove useful in such a setting. There are other works on sequence assembly (see, for example, \cite{Li09fastand}, \cite{Shomorony2016}), where multiple short reads (from different segments of a sequence) are used to reconstruct the bigger sequence.
This work differs from sequence assembly since we are interested in inferring the entire sequence and not just small segments of it (which are then ``stitched'' together in sequence assembly).\\ \noindent \textbf{Paper Organization.} Section~\ref{sec:notation} introduces our notation and visualization tools for the single and $t$-trace channel error events; Section~\ref{sec:1deletion_ML} provides a result concerning questions 1) and 2), wherein we prove the equivalence of ML decoding in question 1) to solving a continuous optimization problem; Section~\ref{sec:1deletion} answers question 3) for the single-trace channel; Section~\ref{sec:exactsmap} answers question 4) for the $t$-deletion channel; Section~\ref{sec:Numerics} gives numerical evaluations; and Section~\ref{sec:conclusions} concludes the paper. \section{Symbolwise MAP for the $t$-trace deletion channel} \label{sec:exactsmap} In this section, we put to use the ideas and constructs introduced in Section~\ref{sec:notation} to exactly compute the symbolwise posterior probabilities given $t$ traces, which in turn gives a symbolwise MAP estimate with uniform input priors (motivated by average-case trace reconstruction). With this formulation, the symbolwise MAP with uniform priors can be seen as a minimizer of the symbol error rate in the context of average-case trace reconstruction. In Appendix~\ref{app:remnant_postprob}, we also provide a method to compute the symbolwise posterior probabilities for the remnant channel -- we encourage the reader to use this appendix as a warm-up. For the $t$-trace deletion channel, similar expressions arise due to the channel equivalence result of Theorem~\ref{thm:channel_equiv}. Let $\mathcal A = \{0,1\}$, and assume that $X\sim $ Uniform $\mathcal A^n$. Our goal is to compute the symbolwise posterior probabilities $\Pr(X_i=1|Y^1=y^1,...,Y^t=y^t)$, where $Y^j$ is the $j^{th}$ trace.
Our proposed algorithm is provided in Alg.~\ref{alg:exact_smap} and estimates the symbolwise MAP (with uniform priors). We can directly leverage Alg.~\ref{alg:exact_smap} to reconstruct the input as follows: for each index $i$, compute $\Pr(X_i =1|Y^1=y^1,...,Y^t=y^t)$ and decide \begin{align*} \hat{X}_i = \begin{cases} 1,\quad &\text{if}\ \Pr(X_i =1|Y^1=y^1,...,Y^t=y^t)\geq 0.5 \\ 0, \quad &\text{otherwise}. \end{cases} \end{align*} Through the rest of this section, we show how to compute $\Pr(X_i =1|Y^1=y^1,...,Y^t=y^t)$\footnote{Symbolwise MAP with non-uniform priors is part of on-going work.} in two steps: \begin{itemize} \item We first give an expression for $\Pr(X_i =1|Y^1=y^1,...,Y^t=y^t)$ which sums over a potentially exponential number of terms. \item We then show that this summation can be computed in polynomial time (polynomial in the blocklength $n$). \end{itemize} \noindent \textbf{Step 1: An expression for $\Pr(X_i =1|Y^1=y^1,...,Y^t=y^t)$.} \begin{theorem} \label{thm:exactSMAP_posteriorprob} Assume $X\sim $ Uniform $\mathcal A^n$ or equivalently $X_i \sim \text{i.i.d. Ber}(0.5)$. The posterior probability of the $i^{th}$ bit given the $t$ traces can be expressed as \begin{align*} \Pr(X_i =1|Y^1=y^1,...,Y^t&=y^t) \\ = & \Bigg[ \sum_{k=0}^n 2^{n-k-1}{n-1 \choose k} \sum_{w||w|=k} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \\ &+ \sum_{k=0}^n \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \Bigg] \Big/ \\ &\Bigg[ \sum_{k=0}^n 2^{n-k} {n \choose k} \sum_{\substack{w| |w|=k}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \Bigg]. \numberthis \label{eq:posterior_prob} \end{align*} \end{theorem} Note that the summation index $w| |w|{=}k$ is over all sequences $w$ of length $k$; this is an alternate expression for $w|w{\in}\mathcal A^k$. We follow this convention throughout the rest of the paper.
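Before the proof, the key intermediate identity it establishes -- that under uniform priors the posterior is a ratio of sums of products of sequence binomial coefficients, with the deletion probability $\delta$ cancelling -- can be sanity-checked by brute force for tiny $n$. The sketch below is illustrative only (all function names are ours) and is exponential in $n$:

```python
from itertools import product

def seq_binom(x, y):
    # number of subsequences of x equal to y (the sequence binomial coefficient)
    dp = [1] + [0] * len(y)
    for xi in x:
        for j in range(len(y), 0, -1):
            if xi == y[j - 1]:
                dp[j] += dp[j - 1]
    return dp[len(y)]

def posterior_bruteforce(n, traces, i, delta=0.3):
    # Pr(X_i = 1 | traces) by direct Bayes with uniform prior on {0,1}^n,
    # using Pr(Y=y|X=x) = C(x,y) delta^{n-|y|} (1-delta)^{|y|} per channel
    num = den = 0.0
    for bits in product('01', repeat=n):
        x = ''.join(bits)
        lik = 1.0
        for y in traces:
            lik *= seq_binom(x, y) * delta**(n - len(y)) * (1 - delta)**len(y)
        den += lik
        if x[i - 1] == '1':
            num += lik
    return num / den

def posterior_ratio(n, traces, i):
    # the delta-free ratio of products of binomial coefficients
    num = den = 0
    for bits in product('01', repeat=n):
        x = ''.join(bits)
        p = 1
        for y in traces:
            p *= seq_binom(x, y)
        den += p
        if x[i - 1] == '1':
            num += p
    return num / den
```

The agreement of the two functions for different values of $\delta$ reflects the fact that the $\delta$-dependent factor is the same for every candidate $x$ of length $n$ and cancels in the ratio.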
\begin{proof} \begin{align*} \Pr(X_i=1&|Y^1=y^1,...,Y^t=y^t) = \sum_{\substack{x||x|=n,\\x_i=1}} \Pr(X=x|Y^1=y^1,...,Y^t=y^t)\\ &\overset{(a)}{=} \frac{1}{2^n \Pr(Y^1=y^1,...,Y^t=y^t)} \sum_{\substack{x||x|=n,\\x_i=1}} \Pr(Y^1=y^1,...,Y^t=y^t|X=x)\\ &\overset{(b)}{=} \frac{1}{2^n \Pr(Y^1=y^1,...,Y^t=y^t)} \sum_{\substack{x||x|=n,\\x_i=1}} \prod_{j=1}^t\Pr(Y^{j}=y^j|X=x), \end{align*} where $(a)$ uses Bayes' principle and $(b)$ is because each deletion channel acts independently. Recall that for a deletion channel with deletion probability $\delta$, $\Pr(Y=y|X=x)={x \choose y}\delta^{|x|-|y|}(1-\delta)^{|y|}$. Also, using the fact that $\Pr(Y^1=y^1,...,Y^t=y^t)=\sum\limits_{\substack{x||x|=n}}\Pr(x) \Pr(Y^1=y^1,...,Y^t=y^t|X=x)$ we have, \begin{align*} \Pr(X_i=1|Y^1=y^1,...,Y^t=y^t)= \frac{\sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose y^1}...{x \choose y^t}}{\sum\limits_{\substack{x||x|=n}} {x \choose y^1}...{x \choose y^t}}. \numberthis \label{eq:thmexactsmap_proofterm0} \end{align*} We first simplify the numerator $\sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose y^1}...{x \choose y^t}$; the denominator can be simplified using the same approach. Now, \begin{align*} \sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose y^1}...{x \choose y^t} &\overset{(a)}{=} \sum_{\substack{x||x|=n,\\x_i=1}} \sum_{w\in \{0,1\}^*} {x \choose w} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \\ &=\sum_{w\in \mathcal A^*} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \sum_{\substack{x||x|=n,\\x_i=1}}{x \choose w}\\ &\overset{(b)}{=}\sum_{w\in \mathcal A^*} 2^{n-|w|} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \left(\frac{1}{2}{n-1 \choose |w|}+\sum_{j|w_j=1}{i-1 \choose j-1}{n-i \choose |w|-j}\right) \end{align*} where $(a)$ is due to Lemma~\ref{lemma:bin_inf_relation} and $(b)$ due to Lemma~\ref{lemma:smapsum} (both introduced in \cite{Srini2018}); see Appendix~\ref{app:bin_inf_lemma} and Appendix~\ref{app:smapsum} for the statement and proof. 
\noindent Therefore we have, \begin{align*} \sum\limits_{\substack{x||x|=n,\\x_i=1}} {x \choose y^1}...{x \choose y^t} &\overset{(a)}{=} \sum_{k=0}^{\infty} 2^{n-k-1}{n-1 \choose k} \sum_{w| |w|=k} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \\ &\hspace{1cm}+ \sum_{k=0}^{\infty} \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \\ &\overset{(b)}{=} \sum_{k=0}^n 2^{n-k-1}{n-1 \choose k} \sum_{w| |w|=k} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \\ &\hspace{1cm}+ \sum_{k=0}^n \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle, \numberthis \label{eq:thmexactsmap_proofterm1} \end{align*} where in $(a)$ we first fix $|w|$ and then sum over all $w$ of the given length and $(b)$ holds because the combinatorial terms are $0$ when $k>n$. A similar analysis gives \begin{align*} \sum\limits_{x| |x|=n} &{x \choose y^1}...{x \choose y^t} = \sum_{k=0}^n 2^{n-k} {n \choose k} \sum_{\substack{w| |w|=k}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle.\numberthis \label{eq:thmexactsmap_proofterm2} \end{align*} Plugging \eqref{eq:thmexactsmap_proofterm1} and \eqref{eq:thmexactsmap_proofterm2} in \eqref{eq:thmexactsmap_proofterm0}, we get the expression in Theorem~\ref{thm:exactSMAP_posteriorprob}, \begin{align*} \Pr(X_i =1|Y^1=y^1,...,&Y^t=y^t) \\ = & \Bigg[ \sum_{k=0}^n 2^{n-k-1}{n-1 \choose k} \sum_{w||w|=k} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \\ &+ \sum_{k=0}^n \sum_{j=1}^k 2^{n-k}{i-1 \choose j-1}{n-i \choose k-j} \sum_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \Bigg] \Big/ \\ &\Bigg[ \sum_{k=0}^n 2^{n-k} {n \choose k} \sum_{\substack{w| |w|=k}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \Bigg]. \end{align*} \end{proof} \noindent \textbf{Step 2: Dynamic program to compute $\sum\limits_{w| |w|=k} \langle y^1 \uparrow ... 
\uparrow y^t,w \rangle$ and $\sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle$.} Note that the number of sequences $w$ such that $|w|=k$ is $O(2^k)$, so a naive evaluation is exponential in the blocklength $n$. We can, however, exploit the edit graph to come up with a dynamic program resulting in an algorithm which is polynomial in $n$. Recall that in the edit graph, $\langle y^1 \uparrow ... \uparrow y^t,w \rangle$ is equal to the number of distinct paths from the origin $(0,...,0)$ to the destination $(|y^1|,...,|y^t|)$ that correspond to $w$. Hence, \begin{enumerate}[wide=0pt] \item[(a)] $\sum\limits_{w| |w|=k} \langle y^1 \uparrow ... \uparrow y^t,w \rangle$ is the number of distinct paths of length $k$ from origin to destination and, \item[(b)] $\sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle$ is the number of such paths of length $k$ such that the $j^{th}$ edge of the path corresponds to a `1'. \end{enumerate} \noindent With this interpretation, the dynamic program for (a) follows naturally -- the number of $k$-length paths from the origin to any vertex is the sum of the number of $(k{-}1)$-length paths from the origin to all incoming neighbors of the vertex. To make this formal, associate a polynomial (in $\lambda$) with each vertex $v$, such that the coefficient of $\lambda^k$ is equal to the number of paths of length $k$ from the origin to $v$: we call it the ``forward-potential'' polynomial $p^{for}_v(\lambda)$ for vertex $v$; as earlier, the coefficient of $\lambda^k$ is denoted by $\langle p^{for}_v(\lambda),\lambda^k \rangle $. The dynamic program to compute $p^{for}_v(\lambda)$ for all $v$ can be expressed as: \begin{equation} p^{for}_v(\lambda) = \sum_{u|u\rightarrow v} \lambda p^{for}_u(\lambda). \end{equation} With this definition, we have $$\sum\limits_{w| |w|=k} \langle y^1 \uparrow ...
\uparrow y^t,w \rangle =\langle p^{for}_{destination}(\lambda),\lambda^k \rangle.$$ In the example in Fig.~\ref{fig:editgraph_smap1}, one could do the following: order the vertices $(0,0)$ to $(3,3)$ lexicographically and then compute $p^{for}_v(\lambda)$ in the same order. Because of the directed grid nature of the edit graph, all of the incoming neighbors of a vertex precede it in this order. Also, we initialize $p^{for}_{(0,0)}(\lambda)=1$. For the example in Fig.~\ref{fig:editgraph_smap1}, the forward-potentials are shown in Fig.~\ref{fig:editgraph_smap2}. The complexity of this dynamic program is $O(2^tn^{t+1})$ as it goes over $O(n^t)$ vertices and for each vertex it sums $O(2^t)$ polynomials, each of degree $O(n)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap2.pdf} \caption{The forward-potential $p^{for}_v(\lambda)$ at each vertex.} \label{fig:editgraph_smap2} \end{figure} We compute (b) as follows: pick an edge $(u{\rightarrow}v)$ which corresponds to `1', count the number of $(j{-}1)$-length paths from the origin to $u$ and multiply it with the number of $(k{-}j)$-length paths from $v$ to the destination -- this is exactly the number of paths of length $k$ such that its $j^{th}$ edge is $(u{\rightarrow}v)$. Summing this term over all such edges which correspond to `1' gives us the term in (b). Note that we have already computed the number of $k$-length paths ($\forall k$) from the origin to every vertex in $p^{for}_v(\lambda)$. We can similarly compute the number of $k$-length paths ($\forall k$) from every vertex to the destination as $p^{rev}_v(\lambda)$ -- the ``reverse-potential'' polynomial. The dynamic program for $p^{rev}_v(\lambda)$ is: \begin{equation} p^{rev}_v(\lambda) = \sum_{u|v\rightarrow u} \lambda p^{rev}_u(\lambda), \end{equation} with $p^{rev}_{destination}(\lambda)=1$. The reverse potentials for the example in Fig.~\ref{fig:editgraph_smap1} are shown in Fig.~\ref{fig:editgraph_smap3}.
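Both dynamic programs are straightforward to prototype. The sketch below (two traces only, with coefficient lists standing in for polynomials in $\lambda$; all helper names are ours, for illustration) cross-checks the forward potential at the destination against brute-force path enumeration:

```python
from collections import Counter, defaultdict

def edit_edges(f, g):
    # outgoing edges of the 2-sequence edit graph, with the symbol each spells
    E = defaultdict(list)
    for i in range(len(f) + 1):
        for j in range(len(g) + 1):
            if i < len(f):
                E[(i, j)].append(((i + 1, j), f[i]))      # vertical edge
            if j < len(g):
                E[(i, j)].append(((i, j + 1), g[j]))      # horizontal edge
            if i < len(f) and j < len(g) and f[i] == g[j]:
                E[(i, j)].append(((i + 1, j + 1), f[i]))  # diagonal edge
    return E

def forward_potentials(f, g):
    # p[v][k] = number of k-length paths from the origin (0,0) to vertex v
    E = edit_edges(f, g)
    L = len(f) + len(g)
    p = {(i, j): [0] * (L + 1)
         for i in range(len(f) + 1) for j in range(len(g) + 1)}
    p[(0, 0)][0] = 1
    for i in range(len(f) + 1):      # lexicographic order is topological here
        for j in range(len(g) + 1):
            for v, _ in E[(i, j)]:   # push finished potentials forward
                for k in range(L):
                    p[v][k + 1] += p[(i, j)][k]
    return p

def path_lengths(f, g):
    # brute force: multiset of path lengths from the origin to the destination
    E, dest, out = edit_edges(f, g), (len(f), len(g)), Counter()
    def dfs(u, k):
        if u == dest:
            out[k] += 1
        for v, _ in E[u]:
            dfs(v, k + 1)
    dfs((0, 0), 0)
    return out
```

The reverse potentials would be computed analogously by reversing the edge direction and processing the vertices in reverse lexicographic order.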
Like in the case of the forward potential, we first order the vertices reverse lexicographically and then invoke the dynamic program above sequentially to compute the reverse-potential polynomial at each vertex. \begin{algorithm}[t!] \caption{Computing the forward-potentials $p^{for}_u(\lambda)$ }\label{alg:forward_pot} \begin{algorithmic}[1] \item Input: Edit graph {$\mathcal G(y^1,...,y^t)$}\\ Outputs: $p^{for}_v(\lambda)\ \forall\ v$ \State Order the vertices from $(0,0,...,0)$ to $(|y^1|,|y^2|,...,|y^t|)$ lexicographically; let the ordered list be $\mathcal V$ \State Initialize $p^{for}_{(0,...,0)}(\lambda)\gets 1$ \For{$v\ \in\ \mathcal V$} \State \textbf{assign} $p^{for}_v(\lambda)\gets \sum_{u|u\rightarrow v} \lambda p^{for}_u(\lambda)$ \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[t!] \caption{Computing the reverse-potentials $p^{rev}_u(\lambda)$ }\label{alg:reverse_pot} \begin{algorithmic}[1] \item Input: Edit graph {$\mathcal G(y^1,...,y^t)$}\\ Outputs: $p^{rev}_v(\lambda)\ \forall\ v$ \State Order the vertices from $(|y^1|,|y^2|,...,|y^t|)$ to $(0,0,...,0)$ reverse lexicographically; let the ordered list be $\mathcal V$ \State Initialize $p^{rev}_{(|y^1|,|y^2|,...,|y^t|)}(\lambda)\gets 1$ \For{$v\ \in\ \mathcal V$} \State \textbf{assign} $p^{rev}_v(\lambda) \gets \sum_{u|v\rightarrow u} \lambda p^{rev}_u(\lambda)$ \EndFor \end{algorithmic} \end{algorithm} \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap3.pdf} \caption{The reverse-potential $p^{rev}_v(\lambda)$ at each vertex.} \label{fig:editgraph_smap3} \end{figure} With this, the term in (b) can be expressed as: \begin{align*} \sum\limits_{\substack{w| |w|=k,\\w_j=1}}\langle y^1 \uparrow ... \uparrow y^t,w \rangle =\hspace{-2mm} \sum_{\substack{(u,v)|\\s(u\rightarrow v)=1}} \hspace{-2mm}\langle p^{for}_u(\lambda),\lambda^{j-1} \rangle \langle p^{rev}_v(\lambda),\lambda^{k-j} \rangle.
\end{align*} Alg.~\ref{alg:exact_smap} now summarizes the computation of the posterior probabilities. This algorithm iterates over all the edges (we have $O((2n)^t)$ of these), and also over $k$ and $j$ ($O(n)$ each). The time complexity of Alg.~\ref{alg:exact_smap} is hence $O(2^tn^{t+2})$. \begin{algorithm}[t!] \caption{Symbolwise MAP with $t$ traces} \label{alg:exact_smap} \begin{algorithmic}[1] \item Input: Traces {$Y^1=y^1,...,Y^t=y^t$, input length $n$}\\ Output: $\hat X = \hat X_1\hat X_2...\hat X_n$ \State Construct edit graph $\mathcal G(y^1,...,y^t)$ \State Use Alg.~\ref{alg:forward_pot} and Alg.~\ref{alg:reverse_pot} on $\mathcal G(y^1,...,y^t)$ to calculate $p^{for}_v(\lambda)$ and $p^{rev}_v(\lambda)$ $\forall\ v$ \For{$k \in\ [0:n]$} \State \textbf{assign} $\sum\limits_{w| |w|=k} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \gets \langle p^{for}_{destination}(\lambda),\lambda^k \rangle.$ \For{each $j \in\ [1:n]$} \State Initialize $temp \leftarrow $ 0 \For{each edge $u\rightarrow v\ \in\ \mathcal G$} \If{$s(u{\rightarrow} v)=$ `1'} \State $temp\ += \langle p^{for}_u(\lambda),\lambda^{j-1} \rangle \langle p^{rev}_v(\lambda),\lambda^{k-j} \rangle$ \EndIf \EndFor \State \textbf{assign} $\sum\limits_{\substack{w| |w|=k,\\w_j=1}} \langle y^1 \uparrow ... \uparrow y^t,w \rangle \gets temp$ \EndFor \EndFor \For{$i \in\ [1:n]$} \State Use \eqref{eq:posterior_prob} to compute $\Pr(X_i=1|Y^1=y^1,...,Y^t=y^t)$ \State $\hat X_i \leftarrow 1$ if $\Pr(X_i=1|Y^1=y^1,...,Y^t=y^t) \geq 0.5$ and $\hat X_i \leftarrow 0$ otherwise \EndFor \State \textbf{return} $\hat X_1\hat X_2...\hat X_n$ \end{algorithmic} \end{algorithm} \section{Notation and Tools} \label{sec:notation} \noindent \textbf{Basic notations:} In this work, we borrow a majority of the notation and tools from \cite{lothaire1997combinatorics}, which deals with non-commutative algebra. We restate the definitions here for convenience.
Calligraphic letters refer to sets, capitalized letters correspond to random variables and bold letters are used for functions. Let $\mathcal{A}$ be the set of all symbols. Throughout this work, we will focus on the case where $\mathcal{A}=\{0,1\}$, though our methods extend to arbitrarily large sets of finite size. Define $\mathcal{A}^n$ to be the set of all $n$-length sequences and $\mathcal{A}^*$ to be the set of all finite-length sequences with symbols in $\mathcal{A}$. For a sequence $f$, $|f|$ denotes the length of $f$. For integers $i,j$, we define $[i:j] \triangleq \{i,i+1,...,j\}$ if $j\geq i$ and $[i:j] \triangleq \emptyset$ otherwise. Also define $[i] \triangleq [1:i]$.\\ \noindent \textbf{Binomial coefficient:} Given sequences $f$ and $g$ in $\mathcal{A}^*$, the number of subsequence patterns of $f$ that are equal to $g$ is called the \textit{binomial coefficient} of $g$ in $f$ and is denoted by $f \choose g$. For example, ${'apple' \choose 'ape'} = 2$ since $'ape'$ can be obtained from two (overlapping) subsequences of $'apple'$. When the alphabet $\mathcal{A}$ is of cardinality 1, ${f \choose g} = {|f| \choose |g|}$, the classical binomial coefficient with the respective lengths as parameters. This definition can hence be viewed as a generalization of the classical binomial coefficient. We will denote by $e$ the sequence of length 0, and {define ${f \choose e} \triangleq 1\ \forall\ f\ \in\ \mathcal{A}^*$.} For ease of use, we also define the classical binomial coefficient ${a \choose b}\triangleq 0$ whenever $b>a$ or $b<0$. The binomial coefficient is central to this work and to the analysis of error events in deletion channels, because the input-output relation for a deletion channel (with deletion probability $\delta$, input $X$ and output $Y$) can be expressed as \begin{equation} \label{eq:in-out_relation} \Pr(Y=y|X=x) = {x \choose y} \delta^{|x|-|y|} (1-\delta)^{|y|}.
\end{equation} The proof is straightforward -- the number of distinct error events that give rise to $y$ from $x$ is exactly the number of subsequences of $x$ which are equal to $y$. Each of these error events has probability $\delta^{|x|-|y|} (1-\delta)^{|y|}$, wherein the exponent of $\delta$ corresponds to the deleted symbols and the exponent of $1-\delta$ to the undeleted symbols. Given the definition of the binomial coefficient, the maximum-likelihood (ML) estimate over a deletion channel with observed output $Y$ can be cast in the following form: \begin{align*} \argmax_{x \in \mathcal C} {x \choose Y},\numberthis \label{eq:ML_deletion} \end{align*} where $\mathcal C$ is the chosen codebook. In the case of multiple deletion channels with observed traces $Y^{(1)},...,Y^{(t)}$, the ML formulation is similar: \begin{align*} \argmax_{x \in \mathcal C} \prod_{j=1}^{t} {x \choose Y^{(j)}}.\numberthis \label{eq:ML_multiple_deletion} \end{align*} As yet, there is no known efficient way to solve the above two formulations, even for \eqref{eq:ML_deletion} with $\mathcal C = \{0,1\}^n$ (see \cite{mitzenmacher2009survey}). In this work, we attempt to take a step in this direction by showing that a continuous relaxation of \eqref{eq:ML_deletion} is equivalent to \eqref{eq:ML_deletion}. However, an efficient algorithm to solve the optimization \eqref{eq:ML_deletion} remains open. In the context of trace reconstruction, the ultimate pursuit would be an algorithm for \eqref{eq:ML_multiple_deletion} with $\mathcal C = \{0,1\}^n$ and error analysis thereof. \noindent We now describe a function which can be thought of as a real-valued extension of the binomial coefficient. This function is used in Sections~\ref{sec:1deletion_ML} and \ref{sec:1deletion}.
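As an aside, the sequence binomial coefficient appearing in \eqref{eq:in-out_relation}--\eqref{eq:ML_multiple_deletion} can itself be computed by a classical $O(|f||g|)$ dynamic program; a minimal illustrative sketch (the function name is ours):

```python
def seq_binom(f, g):
    """Number of subsequence occurrences of g in f, i.e. the generalized
    binomial coefficient (f choose g) of the text."""
    m, n = len(f), len(g)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = 1          # the empty sequence e: (f choose e) = 1
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # either skip f[i-1], or match it with g[j-1] when symbols agree
            dp[i][j] = dp[i - 1][j]
            if f[i - 1] == g[j - 1]:
                dp[i][j] += dp[i - 1][j - 1]
    return dp[m][n]
```

On a unary alphabet this reduces to the classical binomial coefficient of the lengths, e.g. `seq_binom('aaaa', 'aa')` equals $\binom{4}{2}=6$.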
Consider the function $\mathbf F(\cdot)$ defined as: \begin{definition} \label{def:f} \begin{align*} &\mathbf F: \mathbb{R}^k \times \{0,1\}^l \rightarrow \mathbb{R},\\ \mathbf F(q, v)\triangleq &\begin{cases} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [k],\\|\mathcal S|=l}} \quad \prod\limits_{i=1}^l q_{\mathcal S_i}^{v_i} (1-q_{\mathcal S_i})^{1-v_i} & 1 \leq l\leq k \\ 1 & 0=l\leq k \\ 0 & \text{else}. \end{cases} \end{align*} \noindent An alternate definition is as follows: consider a random vector $Z\in\{0,1\}^k$ such that $Z_i\sim$ ind. Ber$(q_i)$, let $q$ be the vector of probabilities of length $k$, $\mathcal S\subseteq [k]$ a subset of the indices of size $l$, and ${v}$ a binary vector of length $l$. Then, $$\mathbf F(q,v)=\sum_{\substack{\mathcal S||\mathcal S|=l}} \Pr(Z_{\mathcal S}= v).$$ Note that if $q$ is a vector of $0$'s and $1$'s, then $\mathbf F( q, v)={q \choose v}$, the binomial coefficient of $v$ in $q$. Thus, $\mathbf F(\cdot)$ can be interpreted as an extension of the binomial coefficient where one of the parameters takes values in $[0,1]^n$ instead of $\{0,1\}^n$. \end{definition} Though at first sight $\mathbf F(q,v)$ sums over an exponential number of subsets, a dynamic programming approach can be used to compute it in $O(|v|^2)$ time complexity. The dynamic program is described in Section~\ref{sec:1deletion}. \iffalse \noindent \textbf{Edit distance:} The edit distance $d_e(f,g)$ measures similarity between two sequences of possibly different lengths \cite{Navarro2001}. $d_e(f,g)$ is the minimum number of operations needed to transform $f$ to $g$, where the permitted operations are insertion, deletion or substitution of a symbol. In this work, we quantify the performance of algorithms in Section \ref{sec:Numerics} using the edit distance metric. \fi The following definitions and ideas are relevant only for Section~\ref{sec:exactsmap} and can be omitted for the sections on the single deletion channel.
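For small $k$, Definition~\ref{def:f} can also be evaluated directly from the sum over subsets, which makes the stated connection to the binomial coefficient easy to check numerically. An illustrative (exponential-time) sketch, with our own naming:

```python
from itertools import combinations

def F(q, v):
    # direct evaluation of the definition: sum over index subsets of size |v|
    k, l = len(q), len(v)
    if l == 0:
        return 1.0
    if l > k:
        return 0.0
    total = 0.0
    for S in combinations(range(k), l):   # S traversed in increasing order
        prod = 1.0
        for idx, vi in zip(S, v):
            prod *= q[idx] if vi == 1 else 1 - q[idx]
        total += prod
    return total
```

For instance, with the binary vector $q=(1,0,1)$ and $v=(1,1)$ this returns the binomial coefficient of `11' in `101', namely 1, while $q=(0.5,0.5,0.5)$ gives $3\cdot 0.25 = 0.75$.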
Before getting to the mathematical definitions, we first state a result of ours that aids in thinking about error events in multiple deletion channels. \subsection{An alternate interpretation of the multiple deletion channel model} The events occurring in the $t$-deletion channel model can be categorized into two groups: \begin{enumerate} \item an input symbol is deleted in \textit{all} the $t$ traces, \item an input symbol is reflected in at least one of the traces. \end{enumerate} The error events of the first kind are in some sense ``not correctable'' or even ``detectable'' since it is impossible to tell what and where the deleted symbol could have been (although the probabilities need not be uniform). The events of the second kind, however, can still be detected although they could likely be confused with a large set of similar events. This thought process gives rise to a natural decomposition of the $t$-deletion channel model into a cascade of two channels: the first one is a deletion channel which captures error events of the first kind, and the second one is what we call the \textit{remnant channel}, which captures events of the second kind (see Fig.~\ref{fig:channel_equiv}). More precisely, the remnant channel is defined as follows: \begin{definition} \textit{Remnant channel:} an input symbol to the remnant channel is reflected in $k>0$ given outputs and deleted in the rest with probability $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. \end{definition} It is easy to note that the probability of the union of all possible events here is $\sum_{k=1}^t {t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}=1$, validating our definition.
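The normalization claimed above is also easy to verify numerically; a small illustrative sketch (function name is ours):

```python
from math import comb

def remnant_mass(t, delta):
    # total probability of "reflected in at least one of t outputs" events:
    # sum over k >= 1 of C(t,k) delta^(t-k) (1-delta)^k / (1 - delta^t)
    return sum(comb(t, k) * delta**(t - k) * (1 - delta)**k
               for k in range(1, t + 1)) / (1 - delta**t)
```

The sum over $k\geq 1$ is the full binomial expansion of $(\delta+(1-\delta))^t=1$ minus its $k=0$ term $\delta^t$, which is exactly the normalizing factor $1-\delta^t$.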
\begin{restatable}{theorem}{channelequiv} \label{thm:channel_equiv} The $t$-deletion channel model and the cascade of the deletion channel with the remnant channel shown in Fig.~\ref{fig:channel_equiv} are probabilistically equivalent, i.e., $$\Pr({Y}^{(1)},{Y}^{(2)},...,{Y}^{(t)}|X = x) = \Pr(\tilde{Y}^{(1)},\tilde{Y}^{(2)},...,\tilde{Y}^{(t)}|X = x).$$ \end{restatable} The formal proof of the theorem requires a few more definitions and is relegated to the appendix. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{channel_equiv.pdf} \caption{A channel equivalence result: the $t$-deletion channel model in (a) is probabilistically equivalent to the cascade of a deletion channel with the \textit{remnant channel} ($ \mathcal C_2$) in (b).} \label{fig:channel_equiv} \end{center} \end{figure} \noindent\textbf{Edit graph} (as defined in \cite{Gusfield1997}): We now define a graph construct which is closely related to the error events in the remnant channel. We start with a simple case and generalize subsequently. Define an \textit{edit graph} given two sequences $f$ and $g$, where every path connecting the ``origin'' to the ``destination'' on the edit graph yields a supersequence $h$ of $f,g$ that is ``covered'' by $f,g$ -- i.e., each symbol of $h$ comes from either $f$ or $g$ or both. In other words, given that $f$ and $g$ are the outputs of the remnant channel (with two outputs), each path from the origin of the edit graph to the destination corresponds to a possible input $h$ to the remnant channel and to an error event which resulted in $f$ and $g$ with input $h$. For $f$ and $g$ in $\mathcal A^*$, we form a graph $\mathcal{G}(f,g)$ with $(|f|+1)(|g|+1)$ vertices, each labelled with a distinct pair $(i,j), 0\leq i\leq |f|,\ 0\leq j\leq |g|$.
A directed edge $(i_1,j_1)\rightarrow(i_2,j_2)$ exists iff at least one of the following holds: \begin{enumerate} \item $i_2-i_1=1$ and $j_1=j_2$, or \item $j_2-j_1=1$ and $i_1=i_2$, or \item $i_2-i_1=1$, $j_2-j_1=1$ and $f_{i_2}=g_{j_2}$, \end{enumerate} where $f_i$ is the $i^{th}$ symbol of the sequence $f$. The origin is the vertex $(0,0)$ and the destination $(|f|,|g|)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap1.pdf} \caption{Edit graph for sequences $f=$ `001' and $g=$ `101'. An easy way to think about this is to write down $f$ vertically with each symbol aligned to a vertical set of edges and $g$ horizontally likewise. A diagonal edge in a small square exists if the corresponding $f$ and $g$ symbols are equal. The thick red edges form a path from the origin to the destination; this path corresponds to the sequence `0101' -- append the corresponding symbol at the left of an edge if it is vertical or diagonal, otherwise append the symbol at the top. It is also easy to verify that `0101' is a supersequence of both $f$ and $g$, and could be obtained as a covering of $f$ and $g$; the path itself gives one such covering. This covering also corresponds to an error event in the remnant channel which would result in outputs $f$ and $g$ with input $h=$ `0101' -- more precisely, the error event is one in which the first symbol of $h$ is deleted only in $g$, the second symbol is deleted only in $f$, and the third and fourth symbols are not deleted in either $f$ or $g$.} \label{fig:editgraph_smap1} \end{figure} Let $p=((i_1,j_1),(i_2,j_2),...,(i_m,j_m))$ be a path in $\mathcal{G}(f,g)$. We define $ s(p)$ to be the sequence corresponding to the path. Intuitively, $s(p)$ is formed by appending symbols in the following way: append the corresponding $f$ symbol for a vertical edge, $g$ symbol for a horizontal edge, and $f$ or $g$ symbol for a diagonal edge (see example Fig.~\ref{fig:editgraph_smap1}).
Any path from $(0,0)$ to $(|f|,|g|)$ corresponds to a supersequence of $f$ and $g$ which is ``covered'' by $f$ and $g$. More formally, define $ s(p)\triangleq x_1x_2...x_{m-1}$ where $$x_k = \begin{cases} f_{i_{k+1}} \quad\text{if }j_{k}=j_{k+1},\\ g_{j_{k+1}} \quad\text{if }i_{k}=i_{k+1},\\ f_{i_{k+1}} \quad\text{else.} \end{cases} $$ The construct of the edit graph extends to more than 2 sequences with the same idea. For sequences $f_1,f_2,...,f_t$, construct a $t$-dimensional grid with $(|f_1|+1)(|f_2|+1)...(|f_t|+1)$ vertices labeled from $(0,0,...,0)$ to $(|f_1|,|f_2|,...,|f_t|)$. A vertex $u=(i_1,i_2,...,i_t)$ is connected to $v=(j_1,j_2,...,j_t)$ (we say $u \rightarrow v$) iff both of the following conditions are met: \begin{itemize} \item $j_l=i_l$ or $j_l=i_l+1$ $\forall\ l\in [t]$, i.e., $(i_1,...,i_t)$ and $(j_1,...,j_t)$ are vertices of a particular unit cube. Only vertices of this type can share an edge in the grid graph. \item Let $\mathcal T \subseteq [t]$ be the collection of indices where $j_l=i_l+1$. Then the corresponding symbols $(f_l)_{j_l}$ are the same $\forall\ l \in \mathcal T$. For instance, if $\mathcal T=\{1,3,5\}$, then $(f_1)_{j_1}=(f_3)_{j_3}=(f_5)_{j_5}$. \end{itemize} Define the vertex $(0,...,0)$ to be the origin of this graph and the vertex $(|f_1|,...,|f_t|)$ to be the destination. If $|f_j|=O(n)\ \forall\ j$, this graph has $O(n^t)$ vertices and at most $O((2n)^t)$ edges since each vertex has at most $2^t$ outgoing edges.\\ \noindent\textbf{Infiltration product} (introduced in \cite{lothaire1997combinatorics}): The infiltration product has been extensively used in \cite{lothaire1997combinatorics} as a tool in non-commutative algebra. Here we give an edit-graph interpretation of this tool; we also give a formal definition later in this section. Using the edit graph we can construct the set $\mathcal{S}(f,g)$ of possible supersequences of $f,g$ which are covered by them.
Clearly multiple paths could yield the same supersequence, and we can count the number of distinct ways $\mathbf N(h;f,g)$ in which one can construct the same supersequence $h$ from $f,g$. We can informally define the \emph{infiltration product $f\uparrow g$} of $f$ and $g$ as a polynomial whose monomials are the supersequences $h$ in $\mathcal{S}(f,g)$, with coefficients $\langle f\uparrow g,h\rangle$ equal to $\mathbf N(h;f,g)$. In Fig.~\ref{fig:editgraph_smap1}, it is easy to verify that there is exactly one path corresponding to `101001' and hence $\langle 001\uparrow 101,101001 \rangle=1$; similarly $\langle 001\uparrow 101,01001 \rangle=2$. One could find these coefficients for all relevant sequences and form the polynomial as described. More examples: let $\mathcal{A}=\{a,b\}$; then \begin{itemize}[wide=2pt] \item $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, \item $ab\uparrow ba=aba+bab+abab+2abba+2baab+baba.$ \end{itemize} The infiltration operation is commutative and associative, and the infiltration of two sequences $f\uparrow g$ is a polynomial with variables of length (or \textit{degree}) at most $|f|+|g|$; see \cite{lothaire1997combinatorics}. The definition of infiltration extends to two polynomials via distributivity (precisely defined later in this section), and consequently to multiple sequences as well. For multiple sequences, infiltration has the same edit-graph interpretation: $\langle f_1\uparrow f_2 \uparrow...\uparrow f_t, w \rangle$ is the number of distinct ways of constructing $w$ as a supersequence of $f_1, f_2, ... ,f_t$ so that the construction covers $w$, i.e., construct the $t$-dimensional edit graph of $f_1, f_2, ... ,f_t$ and count the number of paths corresponding to $w$. \subsection{Formal definition of the infiltration product} We now give a more formal definition of the infiltration product (see \cite{lothaire1997combinatorics} for the equivalence of the two definitions and a more rigorous treatment).
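The coefficients in the examples above can be reproduced by brute-force enumeration of edit-graph paths; an illustrative (exponential-time) sketch, with our own naming:

```python
from collections import Counter

def infiltration(f, g):
    # coefficients <f up-arrow g, w> via 2-D edit-graph path enumeration:
    # each path from (0,0) to (|f|,|g|) spells one covered supersequence w
    coeff = Counter()
    def walk(i, j, w):
        if i == len(f) and j == len(g):
            coeff[w] += 1
            return
        if i < len(f):
            walk(i + 1, j, w + f[i])          # vertical edge (symbol from f)
        if j < len(g):
            walk(i, j + 1, w + g[j])          # horizontal edge (symbol from g)
        if i < len(f) and j < len(g) and f[i] == g[j]:
            walk(i + 1, j + 1, w + f[i])      # diagonal edge (shared symbol)
    walk(0, 0, '')
    return coeff
```

For instance, `infiltration('ab', 'ab')` recovers the expansion $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, and `infiltration('001', '101')` recovers the coefficients of `01001' and `101001' stated above.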
A \textit{formal series} with indeterminates (or variables) in a set $\mathcal A$ and coefficients in a commutative ring $\mathcal R$ is a mapping from $\mathcal A^*$ to $\mathcal R$. Recall that a commutative ring is a set which forms an abelian group under an \textit{addition} operation, is a monoid under a \textit{multiplication} operation which commutes, and whose multiplication operation distributes over addition. Here we consider $\mathbb Z$, the set of integers, as the commutative ring $\mathcal{R}$. A formal series is called a \textit{polynomial} if only a finite number of sequences are mapped to non-zero values, while all remaining sequences map to zero. Consider two polynomials $\sigma,\tau: \mathcal{A}^*\rightarrow \mathbb Z$. The value taken by a sequence $w\in \mathcal A^*$ on $\sigma$ (or the coefficient of $w$ in $\sigma$) is denoted by $\langle \sigma,w\rangle \in \mathbb Z$. We also define binary addition ($\oplus$) and multiplication ($\times$) operations on the set of polynomials as follows: \begin{align} \langle \sigma\oplus \tau,w\rangle \triangleq \langle \sigma,w\rangle + \langle \tau,w \rangle \quad \forall w\in \mathcal A^*,\label{eq:polynomial_add}\\ \langle \sigma\times \tau,w\rangle \triangleq \sum_{\substack{f,g\in \mathcal A^*:\\ f.g=w}}\langle \sigma,f\rangle \langle \tau,g \rangle \quad \forall w\in \mathcal A^*.\label{eq:polynomial_prod} \end{align} For convenience, we will use the usual symbols $+$ and $.$ in place of $\oplus$ and $\times$ in this work; the intended operation will be clear from the operands. With these operations the set of polynomials forms a non-commutative ring, denoted by $\mathbb Z\langle\mathcal A \rangle$ and also called the free $\mathbb Z$-algebra on $\mathcal A$ in ring theory.
Note that the addition and multiplication operations defined in \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are similar to the operations defined on commutative polynomials, except that the product $f.g=w$ under the summation in \eqref{eq:polynomial_prod} is concatenation, which is non-commutative. The multiplication inside the summation in \eqref{eq:polynomial_prod} is multiplication in $\mathbb Z$ and hence commutative. It is also easy to prove that the multiplication defined in \eqref{eq:polynomial_prod} distributes over the addition defined in \eqref{eq:polynomial_add}. Thus, a polynomial in $\mathbb Z\langle\mathcal A \rangle$ can be represented as a sum of monomials in $\mathcal A^*$ each with an associated coefficient in $\mathbb Z$, i.e., $\sigma=\sum\limits_{w\in \mathcal A^*} \langle\sigma,w \rangle w$. Define the \textit{degree} of a polynomial to be the length of a longest sequence with a non-zero coefficient in the polynomial and the \textit{number of terms} of a polynomial as the number of sequences with non-zero coefficients in the polynomial. Note that a degree $d$ polynomial can have up to $2^{d+1}-1$ terms, the number of binary sequences of length at most $d$. With this, the \textit{infiltration product} (in general, for two polynomials) is defined as follows: \begin{align} \forall f\in \mathcal{A}^*,& \quad f\uparrow e = e\uparrow f=f.\nonumber \\ \forall f,g\in \mathcal{A}^*&,\quad \forall a,b\in \mathcal{A}, \nonumber\\ fa\uparrow gb=(f\uparrow gb)a&+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a.\nonumber \\ \forall \sigma,\tau \in \mathbb{Z}\langle\mathcal{A} \rangle, \quad &\sigma\uparrow \tau=\sum_{f,g\in \mathcal{A}^*} \langle \sigma,f \rangle \langle \tau,g \rangle (f\uparrow g). \label{def:infiltforseq} \end{align} \textbf{Summary of definitions and ideas introduced in this section:} \begin{itemize} \item The binomial coefficient captures the likelihood of observations for deletion channels.
\item An extension of the binomial coefficient function where one of the parameters can take real values has been introduced; this function is pivotal for our results on the single-trace deletion channel. \item For multiple deletion channels, the error events can be categorized into two groups -- one where an input symbol is deleted in all the traces, and second, the complement of this event. This categorization results in a natural decomposition of multiple deletion channel model into a cascade model involving the remnant channel. \item The remnant channel disregards the error events where a symbol is deleted in all the traces. \item The edit graph provides a way to visualize all the possible error events and input sequences to a remnant channel given its outputs. \item The infiltration product serves the same purpose as the edit graph, but has an algebraic flavor and provides rigor for proofs and analyses. The edit graph, on the other hand, is more helpful in designing reconstruction algorithms over deletion channels. \end{itemize} \section{Notation and Tools} \label{sec:notation} \noindent \textbf{Basic notation:} We borrow some notation from \cite{lothaire1997combinatorics} which deals with non-commutative algebra; we restate them here for convenience. Calligraphic letters refer to sets, capitalized letters correspond to random variables and bold letters are used for functions. Let $\mathcal{A}$ be the set of all symbols. Throughout this work, we will focus on the case where $\mathcal{A}=\{0,1\}$, though our methods extend to arbitrarily large sets of finite size. Define $\mathcal{A}^n$ to be the set of all $n$-length sequences and $\mathcal{A}^*$ to be the set of all finite length sequences with symbols in $\mathcal{A}$. For a sequence $f$, $|f|$ denotes the length of $f$. For integers $i,j$, we define $[i:j] \triangleq \{i,i+1,...,j\}$ if $j\geq i$ and $[i:j] \triangleq \emptyset$ otherwise. We also define $[i] \triangleq [1:i]$. 
For a vector or sequence $x=(x_1,x_2,...,x_{i-1},x_i,x_{i+1},...,x_n)$, define $$x^{(i\rightarrow s)}\triangleq (x_1,x_2,...,x_{i-1},s,x_{i+1},...,x_n),$$ where the $i^{th}$ coordinate of $x$ is replaced by symbol $s$. \\ \noindent \textbf{Binomial coefficient (section 6.3 in \cite{lothaire1997combinatorics}):} Given sequences $f$ and $g$ in $\mathcal{A}^*$, the number of subsequence patterns of $f$ that are equal to $g$ is called the \textit{binomial coefficient} of $g$ in $f$ and is denoted by $f\choose g$. For example, ${'apple' \choose 'ape'} = 2$ since $'ape'$ can be obtained from two (overlapping) subsequences of $'apple'$. This quantity has also been referred to as the \textit{embedding number} by another line of work \cite{elzinga2008algorithms}. For two sequences of lengths $n$ and $m$, the binomial coefficient can be computed using a dynamic programming approach in $O(nm)$ (see \cite{elzinga2008algorithms} or Proposition 6.3.2 in \cite{lothaire1997combinatorics}). When the alphabet $\mathcal{A}$ is of cardinality 1, ${f \choose g} = {|f| \choose |g|}$, the classical binomial coefficient with their respective lengths as the parameters. This definition hence could be thought of as a generalization of the classical binomial coefficients. We will denote by $e$ the sequence of length 0, and {define ${f \choose e} \triangleq 1\ \forall\ f\ \in\ \mathcal{A}^*$.} We also define the classical binomial coefficient ${a \choose b}\triangleq 0,$ whenever $b>a$ or $b<0$ for ease of use. The binomial coefficient forms the backbone for the probabilistic analysis of deletion channels since the input-output relation for a deletion channel (with deletion probability $\delta$, input $X$ and output $Y$) can be expressed as \begin{equation} \label{eq:in-out_relation} \Pr(Y=y|X=x) = {x \choose y} \delta^{|x|-|y|} (1-\delta)^{|y|}. 
\end{equation} The proof is straightforward -- the number of distinct error events that give rise to $y$ from $x$ is exactly the number of subsequences of $x$ which are equal to $y$. Each of these error events has a probability $\delta^{|x|-|y|} (1-\delta)^{|y|}$, wherein the exponent of $\delta$ corresponds to the deleted symbols and the exponent of $1-\delta$ to the undeleted symbols. \\ \noindent \textbf{Maximum Likelihood (ML) estimate:} Given the definition of the binomial coefficient, the maximum-likelihood (ML) estimate over a deletion channel with observed output $Y=y$ can be cast in the following form: \begin{align*} \argmax_{x \in \{0,1\}^n} {x \choose y}.\numberthis \label{eq:ML_deletion} \end{align*} In the case of multiple deletion channels with observed traces $Y^{1}=y^1,...,Y^{t}=y^t$, the ML formulation is similar: \begin{align*} \argmax_{x \in \{0,1\}^n} \prod_{j=1}^{t} {x \choose y^{j}}.\numberthis \label{eq:ML_multiple_deletion} \end{align*} As yet, there is no known efficient way to come up with a solution for either of the above two formulations (see \cite{mitzenmacher2009survey}).\\ \noindent \textbf{Relaxed binomial coefficient.} We now introduce the function $\mathbf F(\cdot)$ which can be thought of as a real-valued relaxation of the binomial coefficient. This function is used in sections~\ref{sec:1deletion_ML} and~\ref{sec:1deletion}. An intuitive definition is as follows: Consider a random vector $Z\in\{0,1\}^n$ such that $Z_i\sim$ ind. Ber$(p_i)$, and let $p$ be the vector of probabilities of length $n$. Then $\mathbf F(p,v)=\mathbb E_{Z\sim p}\ {Z \choose v}$, i.e., $\mathbf F(p,v)$ is the expected number of times $v$ appears as a subsequence of $Z$. If $p \in \{0,1\}^n$, then $Z=p$ with probability 1 and $\mathbf F(p,v) = {p \choose v}$. 
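To make this concrete, a minimal dynamic-programming sketch (the function name below is ours, not from the paper) computes $\mathbf F(p,v)$ in $O(nm)$ time; for $p\in\{0,1\}^n$ it returns exactly the binomial coefficient ${p \choose v}$:

```python
def relaxed_binomial(p, v):
    """F(p, v): expected number of embeddings of v in Z, where Z_i ~ Bernoulli(p_i).

    O(nm) dynamic program: sweeping i = 1..n, dp[j] holds F(p_{1:i}, v_{1:j});
    either symbol i of Z is unused, or it plays the role of v_j.
    """
    n, m = len(p), len(v)
    if m > n:
        return 0.0
    dp = [1.0] + [0.0] * m  # row for i = 0: F(empty prefix, v_{1:j})
    for i in range(1, n + 1):
        # Descending j so that dp[j-1] still refers to the previous row.
        for j in range(min(i, m), 0, -1):
            match = p[i - 1] if v[j - 1] == 1 else 1.0 - p[i - 1]
            dp[j] += match * dp[j - 1]
    return dp[m]
```

For instance, `relaxed_binomial([0, 0, 1], [0, 1])` evaluates to $2.0$, matching ${001 \choose 01}=2$, and `relaxed_binomial([0.5]*3, [1])` evaluates to $1.5$, the expected number of ones in three fair coin flips.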
More precisely, $\mathbf F(\cdot)$ is defined as: \begin{definition} \label{def:f} \begin{align*} &\mathbf F: [0,1]^n \times \{0,1\}^m \rightarrow \mathbb{R},\\ \mathbf F(p, v)\triangleq &\begin{cases} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [n],\\|\mathcal S|=m}} \quad \prod\limits_{i=1}^m p_{\mathcal S_i}^{v_i} (1-p_{\mathcal S_i})^{1-v_i} & 1 \leq m\leq n \\ 1 & 0=m\leq n \\ 0 & \text{else}. \end{cases} \end{align*} \end{definition} Though at first sight $\mathbf F(p,v)$ sums over an exponential number of subsets, a dynamic programming approach can be used to compute it in $O(nm)$ time (see Appendix~\ref{app:F_compute}). Note that this is the same complexity as computing the binomial coefficient.\\ \noindent \textbf{Decomposition of the $t$-trace deletion channel:} The following definitions and ideas are relevant to the results pertaining to multiple traces. We first state a result that aids in thinking about error events in multiple deletion channels. The events occurring in the $t$-deletion channel model can be categorized into two groups: \begin{enumerate} \item an input symbol is deleted in \textit{all} the $t$ traces, \item an input symbol is reflected in at least one of the traces. \end{enumerate} The error events of the first kind are, in some sense, neither correctable nor even detectable in any situation, since it is impossible to tell with absolute certainty what and where the deleted symbol could have been (although the probabilities need not be uniform). The events of the second kind, however, can be detected and corrected in some situations. This observation gives rise to a natural decomposition of the $t$-deletion channel model into a cascade of two channels: the first is a deletion channel capturing error events of the first kind, and the second is what we call the \textit{remnant channel}, which captures events of the second kind (see Fig.~\ref{fig:channel_equiv}).
More precisely, we define the remnant channel as follows: \begin{definition} \textit{Remnant channel:} an input symbol to the remnant channel is reflected in any $k>0$ uniformly random traces and deleted in the rest with a probability ${t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. Thus, the probability of an input symbol reflected in a \textit{fixed} set of $k>0$ traces is equal to $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. \end{definition} Note that the probability of the union of all possible events here is $\sum_{k=1}^t {t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}=1$, validating our definition. \vspace{-5mm} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{channel_equiv.pdf} \caption{A channel equivalence result: the $t$-trace deletion channel model in (a) is probabilistically equivalent to the cascade of a deletion channel with the \textit{remnant channel} ($\mathcal C_2$) in (b).} \label{fig:channel_equiv} \end{center} \end{figure} \vspace{-10mm} \begin{restatable}{theorem}{channelequiv} \label{thm:channel_equiv} The $t$-deletion channel model and the cascade of the deletion channel with remnant channel shown in Fig.~\ref{fig:channel_equiv} are probabilistically equivalent, i.e., $$\Pr({Y}^{1}=y^1,{Y}^{2}=y^2,...,{Y}^{t}=y^t|X = x) = \Pr(\tilde{Y}^{1}=y^1,\tilde{Y}^{2}=y^2,...,\tilde{Y}^{t}=y^t|X = x).$$ \end{restatable} A rigorous proof of this theorem for arbitrary length sequences can be found in Appendix~\ref{app:channel_equiv}. A similar, though not equivalent, decomposition has been exploited in \cite{haeupler2014repeated} albeit for the purpose of characterizing the capacity of multiple deletion channels -- there the authors consider deletion patterns which are ``undetectable''; for example, a deletion in the deletion channel $\mathcal C_1$ in the cascade model is undetectable since none of the traces will reflect that input symbol.
However, our channel decomposition result does not appear in \cite{haeupler2014repeated}.\\ \noindent\textbf{Edit graph} (\cite{Gusfield1997}): Similar graph constructs have been defined in related problems on common supersequences and subsequences (see \cite{Nicosia2001} for example). This graph is closely related to the error events in the remnant channel. We start with a simple case and generalize subsequently. Define a directed graph called \textit{edit graph} given two sequences $f$ and $g$, where every path connecting the ``origin'' to the ``destination'' on the edit graph yields a supersequence $h$ of $f,g$, where $h$ is ``covered'' by $f,g$ -- i.e., each symbol of $h$ comes from either $f$ or $g$ or both. In other words, given that $f$ and $g$ are the outputs of the remnant channel (with two outputs), each path from the origin of the edit graph to the destination corresponds to a possible input $h$ to the remnant channel and to an error event which resulted in outputs $f,g$ with input $h$. For $f$ and $g$ in $\mathcal A^*$, we form a directed graph $\mathcal{G}(f,g)$ with $(|f|+1)(|g|+1)$ vertices each labelled with a distinct pair $(i,j), 0\leq i\leq |f|,\ 0\leq j\leq |g|$. A directed edge $(i_1,j_1)\rightarrow(i_2,j_2)$ exists iff at least one of the following holds: \begin{enumerate} \item$i_2-i_1=1$ and $j_1=j_2$, or \item $j_2-j_1=1$ and $i_1=i_2$, or \item $i_2-i_1=1$, $j_2-j_1=1$ and $f_{i_2}=g_{j_2}$, \end{enumerate} where $f_i$ is the $i^{th}$ symbol of the sequence $f$. The origin is the vertex $(0,0)$ and the destination $(|f|,|g|)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap1.pdf} \caption{ Edit graph for sequences $f=$ `001' and $g=$ `101'. Make a grid so the vertical edges are aligned with a symbol in $f$ and horizontal edges with $g$ as shown. A diagonal edge $(i{-}1,j{-}1)\rightarrow (i,j)$ exists if $f_i = g_j$. 
The thick red edges form a path from the origin to the destination; this path corresponds to $h=$`0101' -- sequentially append the corresponding symbol to which each edge is aligned. It can also be verified that $h$ is a supersequence of both $f$ and $g$, and could be obtained as a covering of $f$ and $g$; the path itself gives one such covering. This covering also corresponds to an error event (or a deletion pattern) in the remnant channel which would result in outputs $f$ and $g$ with input $h=$ `0101' -- the deletion pattern is shown in the figure.} \label{fig:editgraph_smap1} \end{figure} Let $p=((i_1,j_1),(i_2,j_2),...,(i_m,j_m))$ be a path in $\mathcal{G}(f,g)$. We define $s(p)$ to be the sequence corresponding to the path. Intuitively, $s(p)$ is formed by appending symbols in the following way: append the corresponding $f$ symbol for a vertical edge, $g$ symbol for a horizontal edge, and $f$ or $g$ symbol for a diagonal edge (see the example in Fig.~\ref{fig:editgraph_smap1}). Any path from $(0,0)$ to $(|f|,|g|)$ corresponds to a supersequence of $f$ and $g$ which is covered by $f$ and $g$. More formally, define $s(p)\triangleq x_1x_2...x_{m-1}$ where $$x_k = \begin{cases} f_{i_{k+1}} \quad\text{if }j_{k}=j_{k+1},\\ g_{j_{k+1}} \quad\text{if }i_{k}=i_{k+1},\\ f_{i_{k+1}} \quad\text{else.} \end{cases} $$ The edit graph construct can be extended to more than two sequences with the same idea. For sequences $f_1,f_2,...,f_t$, construct a $t$-dimensional grid with a number of vertices $(|f_1|+1)(|f_2|+1)...(|f_t|+1)$ labeled from $(0,0,...,0)$ to $(|f_1|,|f_2|,...,|f_t|)$. A vertex $u=(i_1,i_2,...,i_t)$ is connected to $v=(j_1,j_2,...,j_t)$ (we say $u \rightarrow v$) iff both of the following conditions are met: \begin{itemize} \item $j_l=i_l$ or $j_l=i_l+1$ $\forall\ l\in [t]$, i.e., $(i_1,...,i_t)$ and $(j_1,...,j_t)$ are vertices of a particular unit cube. Only these types of vertices can share an edge in the grid graph.
\item Let $\mathcal T \subseteq [t]$ be the collection of indices where $j_l=i_l+1$. Then ${f_l}_{j_l}$ is the same $\forall\ l \in \mathcal T$. For example, in a 4-dimensional grid, consider the two vertices $(10,5,8,2)$ and $(10,6,9,2)$. In this case $\mathcal T = \{2,3\}$ since the second and third coordinates differ by 1. Therefore $(10,5,8,2)\rightarrow (10,6,9,2)$ iff ${f_2}_{6}={f_3}_{9}$. Note that if only one coordinate differs by 1 in the two vertices, a directed edge always exists (in other words, all non-diagonal edges exist). \end{itemize} Define the vertex $(0,...,0)$ to be the origin of this graph and the vertex $(|f_1|,...,|f_t|)$ to be the destination. If $|f_j|=O(n)\ \forall\ j$, this graph has $O(n^t)$ vertices and at most $O((2n)^t)$ edges since each vertex has at most $2^t-1$ outgoing edges.\\ \noindent\textbf{Infiltration product} (introduced in section 6.3 of \cite{lothaire1997combinatorics}): The infiltration product has been extensively used in \cite{lothaire1997combinatorics} as a tool in non-commutative algebra. Here, we give an edit-graph interpretation of this tool. A formal algebraic definition of the infiltration product is in Appendix~\ref{app:infil_def}. Using the edit graph we can construct the set of possible supersequences $\mathcal{S}(f,g)$ of $f$, $g$ that are covered by the symbols in $f$ and $g$. Indeed, multiple paths could yield the same supersequence, and we can count the number of distinct ways $\mathbf N(h;f,g)$ one can construct the same supersequence $h$ from $f$, $g$. We can informally define the \emph{infiltration product $f\uparrow g$} of $f$ and $g$ as a polynomial whose monomials are the supersequences $h$ in $\mathcal{S}(f,g)$ and whose coefficients $\langle f\uparrow g,h\rangle$ equal $\mathbf N(h;f,g)$.
For the example in Fig.~\ref{fig:editgraph_smap1}, there is exactly one path corresponding to `101001' and hence $\langle 001\uparrow 101,101001 \rangle=1$ and similarly $\langle 001\uparrow 101,01001 \rangle=2$. One could find these coefficients for all relevant sequences and form the polynomial as described. We now give additional examples (see 6.3.14 in \cite{lothaire1997combinatorics}). Let $\mathcal{A}=\{a,b\}$, then \begin{itemize}[wide=2pt] \item $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, \item $ab\uparrow ba=aba+bab+abab+2abba+2baab+baba.$ \end{itemize} The infiltration operation is commutative and associative, and infiltration of two sequences $f\uparrow g$ is a polynomial with variables of length (or \textit{degree}) at most $|f|+|g|$; see \cite{lothaire1997combinatorics}. The definition of infiltration extends to two polynomials via distributivity (precisely defined in Appendix~\ref{app:infil_def}), and consequently to multiple sequences as well. For multiple sequences, infiltration has the same edit graph interpretation: $\langle f_1\uparrow f_2 \uparrow...\uparrow f_t, w \rangle$ is the number of distinct ways of constructing $w$ as a supersequence of $f_1, f_2, ... ,f_t$ so that the construction covers $w$, i.e., construct the $t$-dimensional edit graph of $f_1, f_2, ... ,f_t$ and count the number of paths corresponding to $w$. 
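As an executable cross-check of this interpretation, the recursion $fa\uparrow gb=(f\uparrow gb)a+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a$ can be implemented directly (a sketch with our own helper name; exponential in the worst case, so intended only for short sequences):

```python
from collections import defaultdict
from functools import lru_cache

@lru_cache(maxsize=None)
def infiltration(f, g):
    """Return the infiltration product of strings f and g as a sorted tuple
    of (sequence, coefficient) pairs."""
    if not f:
        return ((g, 1),)  # e up g = g (also covers the empty-empty case)
    if not g:
        return ((f, 1),)  # f up e = f
    a, b = f[-1], g[-1]
    acc = defaultdict(int)
    for h, c in infiltration(f[:-1], g):      # last symbol taken from f alone
        acc[h + a] += c
    for h, c in infiltration(f, g[:-1]):      # last symbol taken from g alone
        acc[h + b] += c
    if a == b:                                # diagonal move: shared symbol
        for h, c in infiltration(f[:-1], g[:-1]):
            acc[h + a] += c
    return tuple(sorted(acc.items()))
```

This reproduces the examples above: `dict(infiltration("ab", "ab"))` gives `{'ab': 1, 'aab': 2, 'abb': 2, 'aabb': 4, 'abab': 2}`, and the coefficients of `101001` and `01001` in $001\uparrow 101$ come out as 1 and 2, respectively.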
\\ {\small \begin{center} \begin{tabular}{ |P{3cm}|P{10cm}| } \hline \multicolumn{2}{|c|}{Table of notation} \\ \hline $\mathcal A$ & A set \\ \hline $X$ & A random variable or a random vector \\ \hline $x$ & A scalar or a vector variable\\ \hline $|x|$ & Length of the sequence $x$\\ \hline $[i:j]$ & $\{i,i+1,...,j\}$\\ \hline $x^{(i\rightarrow s)}$ & $(x_1,x_2,...,x_{i-1},s,x_{i+1},...,x_n)$ \\ \hline ${f \choose g}$& Binomial coefficient: number of subsequence patterns of $f$ equal to $g$ \\ \hline $\mathbf F(p,v)$ & Relaxed binomial coefficient: $\mathbb E_{Z\sim p} {Z \choose v}$ \\ \hline $\langle f \uparrow g,h \rangle$ & Infiltration product: number of ways of obtaining sequence $h$ as a ``covered'' supersequence of $f$ and $g$ \\ \hline \end{tabular} \end{center}} \vspace{2mm} \section{Maximum likelihood for the single-trace deletion channel} \label{sec:1deletion_ML} Consider the ML formulation for the single-trace deletion channel given a non-empty trace $Y$, where all $n$-length inputs are allowed, restated here for convenience: \begin{equation} \argmax_{x\in \{0,1\}^n} {x \choose Y}. \label{eq:1deletion_ML} \end{equation} To the best of our knowledge, the only known method to solve \eqref{eq:1deletion_ML} involves iterating over all possible choices of $x$ and computing the objective value for each of the choices. We here show that it is sufficient to solve a continuous relaxation of the above problem to obtain a solution to \eqref{eq:1deletion_ML}. Note that there could be multiple solutions to \eqref{eq:1deletion_ML}. Before getting to the main result, we first state a useful lemma which factors a given coordinate $p_i$ out of $\mathbf F(p,Y)$. The proof of the lemma is relegated to the appendix.
\begin{restatable}{lemma}{deletionMLrellemma} For $p=[p_1,p_2,...,p_i,...,p_n]$ and $Y=Y_1Y_2...Y_m$ with $n \geq m > 0$, we have \begin{align*} \mathbf F(p,Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]})\\ + (1-p_i) \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}). \end{align*} \label{lemma:F_decomposition} \end{restatable} \begin{theorem} An alternative optimization to characterize ML for the single-trace deletion channel: \begin{equation} \max_{x\in \{0,1\}^n} {x \choose Y} = \max_{p\in [0,1]^n} \mathbf F(p,Y). \label{eq:ml_opt_equiv} \end{equation} Furthermore, given any non-integral $p^* \in [0,1]^n$ that maximizes $\mathbf F(p,Y)$, one could construct a corresponding integral solution $x^* \in \{0,1\}^n$ that maximizes $\mathbf F(p,Y)$ and consequently is also a solution to $\max_{x\in \{0,1\}^n} {x \choose Y}$. \end{theorem} \begin{proof} As noted earlier, we have ${x \choose Y} = \mathbf F(x,Y)$. Therefore, we are interested in proving the following: \begin{align*} \max_{x\in \{0,1\}^n} \mathbf F(x,Y) \equiv \max_{p\in [0,1]^n} \mathbf F(p,Y).\numberthis \label{eq:ml_opt_equiv_proof1} \end{align*} To show this and also the second statement, we prove that given $p=(p_1,p_2,...,p_i,...,p_n)$, at least one of the following holds true: \begin{itemize} \item $\mathbf F(p^{(i\rightarrow 0)},Y) \geq \mathbf F(p,Y)$, where $p^{(i\rightarrow 0)}=(p_1,p_2,...,p_{i-1},0,p_{i+1},...,p_n)$ is the vector where the $i^{th}$ coordinate is replaced by 0. \item $\mathbf F(p^{(i\rightarrow 1)},Y) \geq \mathbf F(p,Y)$, where $p^{(i\rightarrow 1)}=(p_1,p_2,...,p_{i-1},1,p_{i+1},...,p_n)$ is the vector where the $i^{th}$ coordinate is replaced by 1.
\end{itemize} Thus if $p^*$ is an optimal solution to $\max_{p\in [0,1]^n} \mathbf F(p,Y)$ with $p_i\in (0,1)$, then at least one of $p^{(i\rightarrow 0)}$ or $p^{(i\rightarrow 1)}$ is also an optimal solution. Sequentially applying this argument for each coordinate of $p$ shows that there exists a point in $\{0,1\}^n$ which is also an optimal solution to $\max_{p\in [0,1]^n} \mathbf F(p,Y)$ and consequently to $\max_{x\in \{0,1\}^n} \mathbf F(x,Y)$. Now to prove our claim, we use Lemma~\ref{lemma:F_decomposition} to factor out $p_i$ terms in $\mathbf F(p,Y)$: \begin{align*} \mathbf F(p,Y) = \mathbf F( p_{[n]\backslash \{i\}},Y) + p_i \sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]})\\ + (1-p_i) \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}). \end{align*} There are 3 cases \begin{enumerate} \item $$\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}) = \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).$$ \\In this case it is easy to verify that $\mathbf F(p^{(i\rightarrow 0)},Y) = \mathbf F(p,Y) = \mathbf F(p^{(i\rightarrow 1)},Y).$ \item $$\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}) > \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).$$ \\In this case it is easy to verify that $\mathbf F(p^{(i\rightarrow 0)},Y) \leq \mathbf F(p,Y) \leq \mathbf F(p^{(i\rightarrow 1)},Y).$ \item $$\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}) < \sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).$$ \\In this case it is easy to verify that $\mathbf F(p^{(i\rightarrow 0)},Y) \geq \mathbf F(p,Y) \geq \mathbf F(p^{(i\rightarrow 1)},Y).$ \end{enumerate} Thus in each of the 3 cases, we see that at least one of $\mathbf 
F(p^{(i\rightarrow 0)},Y)$ or $\mathbf F(p^{(i\rightarrow 1)},Y)$ is at least as large as $\mathbf F(p,Y)$ thus proving the theorem. Note that the proof gives a natural way to find an optimal lattice point $p_{lattice}^*$ given a non-lattice point $p^*$ by iterating through each coordinate of $p^*$ and switching them to $0$ or $1$ by comparing $\sum\limits_{k|Y_k=1}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]})$ with $\sum\limits_{k|Y_k=0}\mathbf F( p_{[1:i-1]}, Y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, Y_{[k+1,m]}).$ \end{proof} \section{Numerical results} \label{sec:Numerics} In this section we show numerics supporting our theoretical results. In all of our experiments, we generate the input sequence uniformly at random (motivated by average case trace reconstruction), and obtain the $t$ traces by passing the input through a deletion channel (with a deletion probability $\delta$) $t$ times. We then reconstruct the input from the obtained traces and measure how \textit{close} the reconstructed sequence is, to the actual input sequence. We use two metrics to measure the performance of the reconstruction algorithms: 1. \textit{Hamming error rate}, which is defined as the average Hamming distance between the actual input and the estimated sequence divided by the length of the input sequence and 2. \textit{Edit error rate}, which is defined as the average edit distance between the actual input and the estimated sequence divided by the length of the input sequence. The reason for using Hamming error rate is that our goal is to reconstruct a \textit{known-length} sequence, which has been the problem formulation throughout this work. Moreover, the Hamming error rate is also of special interest to us since the symbolwise MAP is an optimal estimator for minimizing the Hamming error rate (see Appendix~\ref{app:smap_hamming} for a proof). We also use edit error rate as it is a typical metric used in the context of insertion/deletion channels. 
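For concreteness, both metrics can be computed as follows (a sketch with our own function names; the edit distance is the standard Levenshtein dynamic program over insertions, deletions and substitutions):

```python
def hamming_error_rate(x, xhat):
    """Fraction of positions where the estimate differs; assumes equal lengths."""
    assert len(x) == len(xhat)
    return sum(a != b for a, b in zip(x, xhat)) / len(x)

def edit_error_rate(x, xhat):
    """Levenshtein distance between x and xhat, normalized by len(x)."""
    n, m = len(x), len(xhat)
    dp = list(range(m + 1))  # distances from the empty prefix of x
    for i in range(1, n + 1):
        prev, dp[0] = dp[0], i  # prev holds the previous row's dp[j-1]
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                         # delete x_i
                        dp[j - 1] + 1,                     # insert xhat_j
                        prev + (x[i - 1] != xhat[j - 1]))  # substitute/match
            prev = cur
    return dp[m] / n
```

For example, `edit_error_rate("0101", "011")` is $0.25$: a single deletion turns `0101` into `011`, and the distance is normalized by the input length $n=4$.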
{\small \begin{center} \begin{tabular}{ |P{3cm}|P{9.5cm}|P{2.5cm}| } \hline \multicolumn{3}{|c|}{List of trace reconstruction algorithms compared in this work.} \\ \hline \bf Abbreviation & \bf Description & \bf Complexity\\ \hline Ind. post. comb. & Independent posterior combination (Alg.~\ref{alg:ind_comb}) & $O(n^2t)$\\ \hline BMA & Bitwise majority alignment of \cite{Batu2004} (Alg.~\ref{alg:bitwise_majority}) & $O(nt)$ \\ \hline Trace stats. & Algorithm based on trace symbolwise statistics from \cite{Holenstein2008} (Alg.~\ref{alg:trace_statistics}) & $O(n^{3.37}+nt)$\\ \hline Grad asc. & Projected gradient ascent (Alg.~\ref{alg:grad_asc_traces}) & $O(n^2t)$\\ \hline SMAP seq. & Sequential symbolwise MAP heuristic (Alg.~\ref{alg:apprx_smap}) & $O(n^2t)$ \\ \hline SMAP exact & Exact symbolwise MAP (Alg.~\ref{alg:exact_smap}) & $O(n^{t+2}2^t)$\\ \hline \end{tabular} \end{center}} \vspace{5mm} \noindent \textbf{Baseline algorithms:} \begin{enumerate}[leftmargin = *] \item \textbf{Independent posterior combination:} As pointed out in the introduction, computing the posterior probabilities for each deletion channel and combining them as if they came from independent observations does not provide a natural solution for computing the posterior probabilities for the $t$-trace deletion channel. One could, however, check how such a naive combination of posteriors compares with our reconstruction algorithms for $t$-traces. This is detailed as Alg.~\ref{alg:ind_comb}. The complexity of this algorithm is $O(n^2t)$ since computing the posteriors takes $O(n^2)$ and we compute posteriors for $t$ traces. \item \textbf{Bitwise Majority Alignment (introduced in \cite{Batu2004}):} BMA reconstructs the input sequence by first ``aligning'' the traces using a pointer for each trace, and then taking the majority of the pointed symbols. BMA is detailed as Alg.~\ref{alg:bitwise_majority}.
From an efficiency standpoint, BMA is the best of all the algorithms since it is linear in both the blocklength and the number of traces ($O(nt)$). \item \textbf{Trace statistics algorithm:} An algorithm based on trace symbol statistics (also called mean-based algorithms and summary statistics algorithms) has been extensively studied for worst-case trace reconstruction (see \cite{Holenstein2008}, \cite{De2017}, \cite{Nazarov:2017}). In essence, the algorithm first estimates the ``trace symbol statistics'' -- $\Pr(Y_i=1)\ \forall\ i$ -- from the obtained traces and uses only these estimates to reconstruct $X$. However, it uses a new set of traces for every position $i$, thus requiring at least $n$ traces (see (3.6) and the paragraph below (3.8) in \cite{Holenstein2008}). Here we modify the algorithm to adapt it for an arbitrary number of traces; in particular, we reuse the traces while estimating $\Pr(Y_i=1)\ \forall\ i$. The algorithm is detailed in Alg.~\ref{alg:trace_statistics}. The complexity analysis for this gets tricky since it depends on the algorithm used to solve the set of $2n$ linear programs. The state-of-the-art algorithm for solving a linear program in $n$ variables takes approximately $O(n^{2.37})$ time (see \cite{cohen2019solving}); thus the complexity of the Trace statistics algorithm is $O(n^{3.37}+nt)$, where the $nt$ term corresponds to the complexity of computing $\hat p_j$. However, in our implementation we use the solver from the ``SciPy'' Python library, which uses primal-dual interior point methods for solving linear programs. The complexity of such methods is typically $O(n^3)$, making our implementation $O(n^4+nt)$. Also note that these are iterative methods and have many hidden constants (such as the number of iterations for convergence).
\end{enumerate} We note that the state-of-the-art average-case trace reconstruction algorithms in the literature are applicable in the asymptotic regime where the blocklength $n$ and the number of traces $t$ approach $\infty$; it is not clear how to adapt such algorithms for a finite blocklength and a small number of traces. It is for this reason that we chose to compare against BMA and Trace statistics algorithm, which can be easily adapted for the finite blocklength regime and for a small number of traces. It should also be noted that the performance of the above two algorithms may not be reliable with a small number of traces (as they are not designed for this regime), yet we include them owing to the lack of better baselines.\\ \begin{algorithm}[t!] \caption{Trace reconstruction via independent posterior combination}\label{alg:ind_comb} \begin{algorithmic}[1] \item Input: Traces {$Y^{1}=y^1,...,Y^{t}=y^t$}, input length $n$ \\ Outputs: Estimate of the input $\hat X$ \State Initialize priors $p^{old} \gets (0.5,0.5,...,0.5)$ \For {$l=1:t$} \State Use Alg.~\ref{alg:apprx_smap_dp} with $p^{old}$ and $y^l$ to compute posteriors $p^{l,new}$ \EndFor \For {$i=1:n$} \If {$\prod_{l=1}^t p^{l,new}_i \geq \prod_{l=1}^t (1-p^{l,new}_i)$} $\ \hat X_i \gets 1$ \Else $\ \hat X_i \gets 0$ \EndIf \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[!t] \caption{Bitwise Majority Alignment} \label{alg:bitwise_majority} \begin{algorithmic}[1] \item Input: Traces {$Y^{1}=y^1,...,Y^{t}=y^t$, input length $n$}\\ Output: estimate of input $\hat X = \hat X_1 \hat X_2...\hat X_n$. \State Initialize $c_j=1$ for $j\in [t]$. \State Initialize $\hat X_i = 1$ for $i \in [n]$. 
\For{$i \in\ [1:n]$} \State Let $b$ be the majority over all $t$ of $y^j_{c_j}$ \State $\hat X_i \gets b$ \State Increment $c_j$ for each $j$ such that $y^j_{c_j} = b$ \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[!t] \caption{Trace statistics heuristic} \label{alg:trace_statistics} \begin{algorithmic}[1] \item Input: Traces {$Y^{1}=y^1,...,Y^{t}=y^t$, input length $n$}\\ Output: estimate of input $\hat X = \hat X_1 \hat X_2...\hat X_n$. \State Append each trace $y^j$ with zeros until each of them is of length $n$. \State Assign $\hat p_j \leftarrow \frac{|\{y^l:y^l_j=1\}|}{t}$. \For{$i \in\ [1:n]$} \State Solve the 2 linear programs (3.6) in \cite{Holenstein2008} by fixing $x_i=0$ and $x_i=1$: let the optimum value in the two cases be $m_0$ and $m_1$ respectively. \State If $m_0<m_1$, assign $\hat X_i = x_i \leftarrow 0$. Else fix $\hat X_i = x_i \leftarrow 1$. \EndFor \end{algorithmic} \end{algorithm} \newpage \noindent \textbf{Algorithms introduced in this paper:} \begin{enumerate}[leftmargin = *] \item \textbf{Projected gradient ascent:} Alg.~\ref{alg:grad_asc_traces} used as described, with max iterations $M=100$ and convergence criteria $C$ set as follows: the percentage difference in $\sum_j \mathbf F(p,y^j)$ over two consecutive iterations is less than 0.1\%. \item \textbf{Symbolwise MAP sequentially used one trace at a time:} Alg.~\ref{alg:apprx_smap} used as described. \item \textbf{Exact symbolwise MAP:} Alg.~\ref{alg:exact_smap} used as described. \end{enumerate} \vspace{7mm} \textbf{Observations:} In Fig.~\ref{fig:numerics_hamming} and Fig.~\ref{fig:numerics_edit}, we compare the Hamming and edit error rates for the different algorithms described above. \begin{itemize}[leftmargin = *] \item The 3 algorithms introduced in this work outperform the 3 baselines in most cases. The Hamming error rate of Grad asc. with 2 and 3 traces is a notable exception as it does worse than Ind. post. comb. 
However, it improves rapidly as we increase the number of traces, as seen in Fig.~\ref{fig:numerics_hamming}. \item Both Ind. post. comb. as well as our SMAP seq. struggle with the problem of \textit{diminishing returns} for the Hamming error rate, as they do not improve much with the number of traces. This could indicate that considering traces one at a time fails to accumulate extrinsic information (for instance, it completely neglects the possible alignments given multiple traces); one needs to simultaneously consider multiple traces in order to accomplish this. SMAP seq., however, improves with the number of traces with respect to the edit error rate. \item The Grad asc. is the ``champion'' amongst the algorithms we compare here when it comes to the edit error rate, as illustrated by Fig.~\ref{fig:numerics_edit}. The Grad asc. was constructed with the aim of maximizing the likelihood of the observed traces, and this in turn seems to have some correlation with minimizing the edit distance -- it is not clear why this is the case. \item As seen in Fig.~\ref{fig:numerics_hamming} (a) and (b), SMAP exact has the minimum Hamming error rate. This supports the fact that the symbolwise MAP is the minimizer of the Hamming error rate. However, note that this does not necessarily minimize the edit error rate, as seen from Fig.~\ref{fig:numerics_edit} (a) and (b). \end{itemize} \begin{figure}[!t] \centering \includegraphics[scale=0.85]{Numerics/hamming_error_rates.pdf} \caption{Comparison of Hamming error rates for a blocklength $n=100$ illustrated with 2,3,5 and 10 observed traces. Note that we do not run SMAP exact. for 5 and 10 traces since its complexity grows exponentially with the number of traces. All the subplots are plotted on the same scale to aid comparability across subplots.
A few of the subplots that contain algorithms with similar error rates also include a zoomed-in inset view.} \label{fig:numerics_hamming} \end{figure} \clearpage \begin{figure}[!t] \centering \includegraphics[scale=0.85]{Numerics/edit_error_rates.pdf} \caption{Comparison of edit error rates for a blocklength $n=100$ illustrated with 2,3,5 and 10 observed traces. Note that we do not run SMAP exact. for 5 and 10 traces since its complexity grows exponentially with the number of traces. All the subplots are plotted on the same scale to aid comparability across subplots. A few of the subplots that contain algorithms with similar error rates also include a zoomed-in inset view.} \label{fig:numerics_edit} \end{figure} \clearpage \section{Introduction} \label{sec:intro} Sequence reconstruction over deletion channels, both with and without a codebook, has received considerable attention in the information theory as well as the theoretical computer science literature. From an information theory perspective, reconstruction over the deletion channel, or more specifically a maximum-likelihood (ML) argument for the deletion channel, would give further insight into the capacity of the deletion channel, a long-standing open problem (see \cite{mitzenmacher2009survey}). To quote \cite{mitzenmacher2009survey} -- ``at the very least, progress in this direction would likely surpass previous results on the capacity of the deletion channels''. Yet, there are no results on reconstruction over a deletion channel with statistical guarantees -- in this work, we take a step in this direction. On the other hand, the problem of \textit{trace reconstruction}, as introduced in \cite{Batu2004}, has received renewed interest in the past few years (see \cite{Holenstein2008}, \cite{Peres2017}, \cite{De2017}, \cite{holden18}, \cite{Nazarov:2017}, \cite{holden2018lower}, \cite{chase2019new}).
The problem of trace reconstruction can be stated simply as follows: consider a sequence $X$ which is simultaneously passed through $t$ independent deletion channels to yield $t$ deleted observations (also called \textit{traces}) of $X$ (see Fig.~\ref{fig:tdeletion}). How many such traces are needed to reconstruct $X$ perfectly? A variety of upper and lower bounds for this problem have been proposed, both for worst-case and average-case reconstruction. Our problem definition, stated in the following paragraph, is closely related to average-case trace reconstruction. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2]{tdeletion.pdf} \caption{$t$-deletion channel model: sequence $X$ passed through $t$ independent deletion channels to yield $t$ \textit{traces}. We aim to estimate $X$ from the $Y^{(i)}$s.} \label{fig:tdeletion} \end{center} \end{figure} \noindent \textbf{Problem definition.} Given an input sequence of length $n$ (known a priori), the independently and identically distributed (i.i.d.) deletion channel deletes each input symbol independently with probability $\delta$, producing at its output a subsequence of the input sequence. Consider a sequence $X$ passed through $t$ such deletion channels ($t$ is fixed) as shown in Fig.~\ref{fig:tdeletion}. We call this the $t$-deletion channel model. We ask two questions: \begin{enumerate} \item For $t=1$ (see Fig.~\ref{fig:1deletion}, also called the single deletion channel model), and when $X_i \sim\ \text{ind. Ber}(p_i)$, compute the posterior distributions of $X_i$ given the trace $Y$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{1deletion.pdf} \caption{The single deletion channel model. We assume $X_i \sim\ \text{ind. Ber}(p_i)$.} \label{fig:1deletion} \end{center} \end{figure} \item In the $t$-deletion channel model, for a fixed $t$, assume that $X_i \sim\ \text{i.i.d. Ber}(0.5)$ and compute the posterior distributions of $X_i$ given all the traces $Y^{(1)}, Y^{(2)},...,Y^{(t)}$.
\end{enumerate} Note that solving 1) above does not lead to a natural solution for 2). This is because for a memoryless channel, we have the Markov chain $Y^{(j)} - X_i - Y^{(k)}$ and hence $\Pr(X_i=\alpha|Y^{(j)}, Y^{(k)}) \propto \Pr(X_i=\alpha|Y^{(j)}) \Pr(X_i=\alpha|Y^{(k)})$; so one could independently combine the posterior probabilities from each noisy observation. This is not the case for deletion channels since the Markov chain $Y^{(j)} - X_i - Y^{(k)}$ no longer holds. More intuitively, one needs to first ``align'' the traces in order to compute the likelihoods. We point out that the problem considered in 2) asks a question complementary to trace reconstruction: given a fixed (possibly small) number of traces, what is our ``best'' guess of $X$? We provide algorithms which do this. Unlike trace reconstruction, we are not concerned with perfect reconstruction (since perfect reconstruction may not be possible with just a few traces), although it should also be noted that performance guarantees for our algorithms (not a part of this work) would naturally lead to upper bounds for trace reconstruction. The deletion channel by itself is known to be notoriously difficult to analyze. As stated earlier, the capacity of a single deletion channel is still unknown (\cite{diggavi2007capacity,diggavi2006information,diggavi2001transmission}); as are optimal coding schemes. Recent works have looked at the design of codes for deletion channels (\cite{ratzer2005marker,ratzer2000codes,thomas2017polar}); these works consider the use of a codebook (we do not). As a result, statistical estimation over deletion channels is also a difficult problem due to its highly combinatorial nature. To the best of our knowledge, as yet there are no efficient estimation algorithms over deletion channels with statistical guarantees; not even for ML over a single deletion channel. \noindent \textbf{Biological motivation.} Trace reconstruction was itself motivated, in part, by problems in DNA sequence reconstruction.
One such problem was to infer the DNA sequence of a common ancestor from samples of its descendants. We argue that our problem definition fits more naturally in such a scenario since perfect reconstruction may not be feasible or even possible. Our motivation for considering this problem also comes from a recent DNA sequencing technology called \textit{nanopore sequencing}. The $t$-deletion channel model is a simple model that approximately captures the process of a DNA sequence passing through a nanopore sequencer\footnote{As seen in \cite{Mao2017},\cite{MDK17}, there are more complicated effects of the nanopore reader not captured in this simple representation.}. Very recently, a variant of the trace reconstruction problem called \textit{coded trace reconstruction} has been proposed, motivated by portable DNA-based data storage systems using DNA nanopores (see \cite{abroshan2019coding}, \cite{cheraghchi2019coded}, \cite{brakensiek2019coded}), and we believe that the ideas in this work may prove useful in such a setting. There are other works on sequence assembly (see for example, \cite{Li09fastand}, \cite{Shomorony2016}), where multiple short reads (from different segments of a sequence) are used to reconstruct the bigger sequence. This work differs from sequence assembly since we are interested in inferring the entire sequence and not just small segments of it (which are then ``stitched'' together in sequence assembly). \noindent \textbf{Tools and techniques.} In terms of theoretical tools, the series of books by Lothaire (\cite{lothaire1997combinatorics,lothaire2002algebraic,lothaire2005applied}) extensively uses algebraic tools for problems in the combinatorics of sequences (or \textit{words}), and our work is inspired by such techniques. We borrow many of their notations and results for our work.
\noindent \textbf{Contributions.} Our main contribution is to provide tools to visualize and analyze error events (described in the next section) for the multiple deletion channel model in Fig.~\ref{fig:tdeletion}. We also provide algorithms to solve the problems stated in 1) and 2) earlier in the section. \begin{itemize}[wide=5pt] \item In Section~\ref{sec:1deletion}, for the single deletion channel model, we provide an $O(n^2)$ algorithm to calculate the symbolwise posterior probabilities $\Pr(X_i=1|Y)\ \forall\ i$ when $X_i \sim \text{ind. Ber}(p_i)$. \item In Section~\ref{sec:exactsmap}, for the $t$-deletion channel model, we give an $O(2^t n^{t+2})$ algorithm to calculate the symbolwise posterior probabilities $\Pr(X_i = 1|Y^{(1)},...,Y^{(t)})$ when $X_i \sim \text{ind. Ber}(0.5)$. \end{itemize} \section{Notation and Tools} \label{sec:notations} \noindent \textbf{Basic notations:} In this work, we borrow a majority of the notation and tools from \cite{lothaire1997combinatorics}, which deals with non-commutative algebra. We restate the definitions here for convenience. Calligraphic letters refer to sets, capitalized letters correspond to random variables and bold letters are used for functions. Let $\mathcal{A}$ be the set of all symbols. Throughout this work, we will focus on the case where $\mathcal{A}=\{0,1\}$, though our methods extend to arbitrarily large sets of finite size. Define $\mathcal{A}^n$ to be the set of all $n$-length sequences and $\mathcal{A}^*$ to be the set of all finite length sequences with symbols in $\mathcal{A}$. For a sequence $f$, $|f|$ denotes the length of $f$. For integers $i,j$, we define $[i:j] \triangleq \{i,i+1,...,j\}$ if $j\geq i$ and $[i:j] \triangleq \emptyset$ otherwise.
Also define $[i] \triangleq [1:i]$.\\ \noindent \textbf{Binomial coefficient:} Given sequences $f$ and $g$ in $\mathcal{A}^*$, the number of subsequence patterns of $f$ that are equal to $g$ is called the \textit{binomial coefficient} of $g$ in $f$ and is denoted by $f \choose g$. For example, ${'apple' \choose 'ape'} = 2$ since $'ape'$ can be obtained from two (overlapping) subsequences of $'apple'$. When the alphabet $\mathcal{A}$ is of cardinality 1, ${f \choose g} = {|f| \choose |g|}$, the classical binomial coefficient with their respective lengths as the parameters. This definition hence can be thought of as a generalization of the classical binomial coefficients. We will denote by $e$ the sequence of length 0, and {define ${f \choose e} \triangleq 1\ \forall\ f\ \in\ \mathcal{A}^*$.} We also define the classical binomial coefficient ${a \choose b}\triangleq 0$ whenever $b>a$ or $b<0$ for ease of use. The binomial coefficient is integral to this work and to the analysis of error events in deletion channels, because the input-output relation for a deletion channel (with deletion probability $\delta$, input $X$ and output $Y$) can be expressed as \begin{equation} \label{eq:in-out_relation} \Pr(Y=y|X=x) = {x \choose y} \delta^{|x|-|y|} (1-\delta)^{|y|}. \end{equation} The proof is straightforward -- the number of distinct error events that give rise to $y$ from $x$ is exactly the number of subsequences of $x$ which are equal to $y$. Each of these error events has a probability $\delta^{|x|-|y|} (1-\delta)^{|y|}$, wherein the exponent of $\delta$ corresponds to the deleted symbols and the exponent of $1-\delta$ to the undeleted symbols. Given the definition of the binomial coefficient, the maximum-likelihood (ML) estimate over a deletion channel with observed output $Y$ can be cast in the following form: \begin{align*} \argmax_{x \in \mathcal C} {x \choose Y},\numberthis \label{eq:ML_deletion} \end{align*} where $\mathcal C$ is the chosen codebook.
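The binomial coefficient $f \choose g$ can be computed by a standard dynamic program over prefixes; the following Python sketch (the function names are ours, introduced only for illustration) shows this computation together with the channel likelihood \eqref{eq:in-out_relation}.

```python
def binom_seq(f, g):
    """Generalized binomial coefficient (f choose g): the number of
    subsequence patterns of f that equal g, via dynamic programming
    in O(|f||g|) time."""
    m = len(g)
    # dp[j] = number of ways g[:j] occurs as a subsequence of the
    # prefix of f scanned so far; (f choose e) = 1 for the empty e.
    dp = [1] + [0] * m
    for a in f:
        # iterate j backwards so each symbol of f is used at most once
        for j in range(m, 0, -1):
            if a == g[j - 1]:
                dp[j] += dp[j - 1]
    return dp[m]

def deletion_likelihood(x, y, delta):
    """Pr(Y=y | X=x) for the i.i.d. deletion channel, per the
    input-output relation above."""
    return binom_seq(x, y) * delta ** (len(x) - len(y)) * (1 - delta) ** len(y)
```

For instance, `binom_seq('apple', 'ape')` returns 2, matching the example above, and `binom_seq` also returns 0 whenever $|g| > |f|$, consistent with the convention ${a \choose b} \triangleq 0$ for $b > a$.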
In the case of multiple deletion channels with observed traces $Y^{(1)},...,Y^{(t)}$, the ML formulation is similar: \begin{align*} \argmax_{x \in \mathcal C} \prod_{j=1}^{t} {x \choose Y^{(j)}}.\numberthis \label{eq:ML_multiple_deletion} \end{align*} As yet, there is no known efficient way to come up with a solution for the above two formulations, even for \eqref{eq:ML_deletion} with $\mathcal C = \{0,1\}^n$ (see \cite{mitzenmacher2009survey}). In this work, we attempt to take a step in this direction by showing that a continuous relaxation of \eqref{eq:ML_deletion} is equivalent to \eqref{eq:ML_deletion}. However, an efficient algorithm to solve the optimization \eqref{eq:ML_deletion} remains open. In the context of trace reconstruction, the ultimate pursuit would be an algorithm for \eqref{eq:ML_multiple_deletion} with $\mathcal C = \{0,1\}^n$ and error analysis thereof. \noindent We now describe a function which can be thought of as a real-valued extension of the binomial coefficient. This function is used in sections~\ref{sec:1deletion_ML} and \ref{sec:1deletion}. Consider the function $\mathbf F(\cdot)$ defined as: \begin{definition} \label{def:f} \begin{align*} &\mathbf F: \mathbb{R}^k \times \{0,1\}^l \rightarrow \mathbb{R},\\ \mathbf F(q, v)\triangleq &\begin{cases} \sum\limits_{\substack{\mathcal S|\mathcal S\subseteq [k],\\|\mathcal S|=l}} \quad \prod\limits_{i=1}^l q_{\mathcal S_i}^{v_i} (1-q_{\mathcal S_i})^{1-v_i} & 1 \leq l\leq k \\ 1 & 0=l\leq k \\ 0 & \text{else}. \end{cases} \end{align*} \noindent An alternate definition is as follows: consider a random vector $Z\in\{0,1\}^k$ such that $Z_i\sim$ ind. Ber$(q_i)$, let $q$ be the vector of probabilities of length $k$, $\mathcal S\subseteq [k]$ a subset of the indices of size $l$, and ${v}$ a binary vector of length $l$. 
Then, $$\mathbf F(q,v)=\sum_{\substack{\mathcal S||\mathcal S|=l}} \Pr(Z_{\mathcal S}= v).$$ Note that if $q$ is a vector of $0$'s and $1$'s, then $\mathbf F( q, v)={q \choose v}$, the binomial coefficient of $v$ in $q$. Thus, $\mathbf F(\cdot)$ could be interpreted as an extension of the binomial coefficient where one of the parameters can take values in $[0,1]^n$ instead of $\{0,1\}^n$. \end{definition} Though at first sight $\mathbf F(q,v)$ involves a sum over an exponential number of subsets, a dynamic programming approach can be used to compute it in $O(|v|^2)$ time complexity. The dynamic program is described in section~\ref{sec:1deletion}. \iffalse \noindent \textbf{Edit distance:} The edit distance $d_e(f,g)$ measures similarity between two sequences of possibly different lengths \cite{Navarro2001}. $d_e(f,g)$ is the minimum number of operations needed to transform $f$ to $g$, where the permitted operations are insertion, deletion or substitution of a symbol. In this work, we quantify the performance of algorithms in Section \ref{sec:Numerics} using the edit distance metric. \fi The following definitions and ideas are relevant only for Section~\ref{sec:exactsmap} and can be omitted for the sections on the single deletion channel. Before getting to the mathematical definitions, we first state a result of ours that aids in thinking about error events in multiple deletion channels. \subsection{An alternate interpretation of the multiple deletion channel model} The events occurring in the $t$-deletion channel model can be categorized into two groups: \begin{enumerate} \item an input symbol is deleted in \textit{all} the $t$ traces, \item an input symbol is reflected in at least one of the traces. \end{enumerate} The error events of the first kind are in some sense ``not correctable'' or even ``detectable'' since it is impossible to tell what and where the deleted symbol could have been (although the probabilities need not be uniform).
The events of the second kind, however, can still be detected, although they could likely be confused with a large set of similar events. This thought process gives rise to a natural decomposition of the $t$-deletion channel model into a cascade of two channels: the first one is a deletion channel which captures error events of the first kind, and the second one is what we call the \textit{remnant channel}, which captures events of the second kind (see Fig.~\ref{fig:channel_equiv}). More precisely, the remnant channel is defined as follows: \begin{definition} \textit{Remnant channel:} an input symbol to the remnant channel is reflected in $k>0$ given outputs and deleted in the rest with probability $\frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}$. \end{definition} It is easy to check that the probability of the union of all possible events here is $\sum_{k=1}^t {t \choose k} \frac{\delta^{t-k}(1-\delta)^k}{1-\delta^t}=1$, validating our definition. \begin{restatable}{theorem}{channelequiv} \label{thm:channel_equiv} The $t$-deletion channel model and the cascade of the deletion channel with the remnant channel shown in Fig.~\ref{fig:channel_equiv} are probabilistically equivalent, i.e., $$\Pr({Y}^{(1)},{Y}^{(2)},...,{Y}^{(t)}|X = x) = \Pr(\tilde{Y}^{(1)},\tilde{Y}^{(2)},...,\tilde{Y}^{(t)}|X = x).$$ \end{restatable} The formal proof of the theorem requires a few more definitions and is relegated to the appendix. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{channel_equiv.pdf} \caption{A channel equivalence result: the $t$-deletion channel model in (a) is probabilistically equivalent to the cascade of a deletion channel with the \textit{remnant channel} ($ \mathcal C_2$) in (b).} \label{fig:channel_equiv} \end{center} \end{figure} \noindent\textbf{Edit graph} (as defined in \cite{Gusfield1997}): We now define a graph construct which is closely related to the error events in the remnant channel. We start with a simple case and generalize subsequently.
Define an \textit{edit graph} given two sequences $f$ and $g$, where every path connecting the ``origin'' to the ``destination'' on the edit graph yields a supersequence $h$ of $f,g$ that is ``covered'' by $f,g$ -- i.e., each symbol of $h$ comes from either $f$ or $g$ or both. In other words, given that $f$ and $g$ are the outputs of the remnant channel (with two outputs), each path from the origin of the edit graph to the destination corresponds to a possible input $h$ to the remnant channel and to an error event which resulted in $f$ and $g$ with input $h$. For $f$ and $g$ in $\mathcal A^*$, we form a graph $\mathcal{G}(f,g)$ with $(|f|+1)(|g|+1)$ vertices, each labelled with a distinct pair $(i,j), 0\leq i\leq |f|,\ 0\leq j\leq |g|$. A directed edge $(i_1,j_1)\rightarrow(i_2,j_2)$ exists iff at least one of the following holds: \begin{enumerate} \item $i_2-i_1=1$ and $j_1=j_2$, or \item $j_2-j_1=1$ and $i_1=i_2$, or \item $i_2-i_1=1$, $j_2-j_1=1$ and $f_{i_2}=g_{j_2}$, \end{enumerate} where $f_i$ is the $i^{th}$ symbol of the sequence $f$. The origin is the vertex $(0,0)$ and the destination is the vertex $(|f|,|g|)$. \begin{figure}[!h] \centering \includegraphics[scale=0.25]{editgraph_smap1.pdf} \caption{ Edit graph for sequences $f=$ `001' and $g=$ `101'. An easy way to think about this is to write down $f$ vertically with each symbol aligned to a vertical set of edges, and $g$ horizontally likewise. A diagonal edge in a small square exists if the corresponding $f$ and $g$ symbols are equal. The thick red edges form a path from the origin to the destination; this path corresponds to the sequence `0101' -- append the corresponding symbol at the left of an edge if it is vertical or diagonal, otherwise append the symbol at the top. It is also easy to verify that 0101 is a supersequence of both $f$ and $g$, and could be obtained as a covering of $f$ and $g$; the path itself gives one such covering.
This covering also corresponds to an error event in the remnant channel which would result in outputs $f$ and $g$ with input $h=$ `0101' -- more precisely, the error event is one in which the first symbol of $h$ is deleted only in $g$, the second symbol is deleted only in $f$, and the third and fourth symbols are not deleted in either $f$ or $g$.} \label{fig:editgraph_smap1} \end{figure} Let $p=((i_1,j_1),(i_2,j_2),...,(i_m,j_m))$ be a path in $\mathcal{G}(f,g)$. We define $ s(p)$ to be the sequence corresponding to the path. Intuitively, $s(p)$ is formed by appending symbols in the following way: append the corresponding $f$ symbol for a vertical edge, the $g$ symbol for a horizontal edge, and the $f$ or $g$ symbol for a diagonal edge (see the example in Fig.~\ref{fig:editgraph_smap1}). Any path from $(0,0)$ to $(|f|,|g|)$ corresponds to a supersequence of $f$ and $g$ which is ``covered'' by $f$ and $g$. More formally, define $ s(p)\triangleq x_1x_2...x_{m-1}$ where $$x_k = \begin{cases} f_{i_{k+1}} \quad\text{if }j_{k}=j_{k+1},\\ g_{j_{k+1}} \quad\text{if }i_{k}=i_{k+1},\\ f_{i_{k+1}} \quad\text{else.} \end{cases} $$ The construct of the edit graph can be extended to more than 2 sequences with the same idea. For sequences $f_1,f_2,...,f_t$, construct a $t$-dimensional grid with $(|f_1|+1)(|f_2|+1)...(|f_t|+1)$ vertices labeled from $(0,0,...,0)$ to $(|f_1|,|f_2|,...,|f_t|)$. A vertex $u=(i_1,i_2,...,i_t)$ is connected to $v=(j_1,j_2,...,j_t)$ (we say $u \rightarrow v$) iff both of the following conditions are met: \begin{itemize} \item $j_l=i_l$ or $j_l=i_l+1$ $\forall\ l\in [t]$, i.e., $(i_1,...,i_t)$ and $(j_1,...,j_t)$ are vertices of a particular unit cube. Only such pairs of vertices can share an edge in the grid graph. \item Let $\mathcal T \subseteq [t]$ be the collection of indices where $j_l=i_l+1$. Then the corresponding symbols must be equal, i.e., $(f_l)_{j_l}$ is the same $\forall\ l \in \mathcal T$. For instance, if $\mathcal T=\{1,3,5\}$, then $(f_1)_{j_1}=(f_3)_{j_3}=(f_5)_{j_5}$.
\end{itemize} Define the vertex $(0,...,0)$ to be the origin of this graph and the vertex $(|f_1|,...,|f_t|)$ to be the destination. If $|f_j|=O(n)\ \forall\ j$, this graph has $O(n^t)$ vertices and at most $O((2n)^t)$ edges, since each vertex has at most $2^t$ outgoing edges.\\ \noindent\textbf{Infiltration product} (introduced in \cite{lothaire1997combinatorics}): The infiltration product has been extensively used in \cite{lothaire1997combinatorics} as a tool in non-commutative algebra. Here we give an edit-graph interpretation of this tool; we also give a formal definition later in this section. Using the edit graph we can construct the set $\mathcal{S}(f,g)$ of possible supersequences of $f,g$ which are covered by them. Clearly, multiple paths could yield the same supersequence, and we can count the number of distinct ways $\mathbf N(h;f,g)$ in which one can construct the same supersequence $h$ from $f,g$. We can informally define the \emph{infiltration product $f\uparrow g$} of $f$ and $g$ as a polynomial whose monomials are the supersequences $h$ in $\mathcal{S}(f,g)$, with coefficients $\langle f\uparrow g,h\rangle$ equal to $\mathbf N(h;f,g)$. In Fig.~\ref{fig:editgraph_smap1}, it is easy to verify that there is exactly one path corresponding to `101001' and hence $\langle 001\uparrow 101,101001 \rangle=1$, and similarly $\langle 001\uparrow 101,01001 \rangle=2$. One could find these coefficients for all relevant sequences and form the polynomial as described. More examples: Let $\mathcal{A}=\{a,b\}$, then \begin{itemize}[wide=2pt] \item $ab\uparrow ab=ab+2aab+2abb+4aabb+2abab$, \item $ab\uparrow ba=aba+bab+abab+2abba+2baab+baba.$ \end{itemize} The infiltration operation is commutative and associative, and the infiltration of two sequences $f\uparrow g$ is a polynomial with variables of length (or \textit{degree}) at most $|f|+|g|$; see \cite{lothaire1997combinatorics}.
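As a concrete check of the examples above, the recursion $fa\uparrow gb=(f\uparrow gb)a+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a$ stated at the end of this section can be implemented directly. The Python sketch below is our own illustration (not from the paper); a polynomial is represented as a dictionary mapping sequences to integer coefficients.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def infiltration(f, g):
    """Infiltration product f up-arrow g as {sequence: coefficient},
    via the recursion fa^gb = (f^gb)a + (fa^g)b + [a=b](f^g)a,
    with base case e^f = f^e = f."""
    if not f:
        return {g: 1}
    if not g:
        return {f: 1}
    a, b = f[-1], g[-1]
    acc = {}
    for word, coeff in infiltration(f[:-1], g).items():    # (f'^g)a term
        acc[word + a] = acc.get(word + a, 0) + coeff
    for word, coeff in infiltration(f, g[:-1]).items():    # (f^g')b term
        acc[word + b] = acc.get(word + b, 0) + coeff
    if a == b:                                             # diagonal term
        for word, coeff in infiltration(f[:-1], g[:-1]).items():
            acc[word + a] = acc.get(word + a, 0) + coeff
    return acc
```

For instance, `infiltration('ab', 'ab')` returns `{'ab': 1, 'aab': 2, 'abb': 2, 'aabb': 4, 'abab': 2}`, matching the first example above, and `infiltration('001', '101')` assigns coefficient 2 to `'01001'` and 1 to `'101001'`, matching the edit-graph computation of Fig.~\ref{fig:editgraph_smap1}.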
The definition of infiltration extends to two polynomials via distributivity (defined precisely later in this section), and consequently to multiple sequences as well. For multiple sequences, infiltration has the same edit graph interpretation: $\langle f_1\uparrow f_2 \uparrow...\uparrow f_t, w \rangle$ is the number of distinct ways of constructing $w$ as a supersequence of $f_1, f_2, ... ,f_t$ so that the construction covers $w$, i.e., construct the $t$-dimensional edit graph of $f_1, f_2, ... ,f_t$ and count the number of paths corresponding to $w$. \subsection{Formal definition of the infiltration product} We now give a more formal definition of the infiltration product (see \cite{lothaire1997combinatorics} for the equivalence of the two definitions and a more rigorous treatment). A \textit{formal series} with indeterminates (or variables) in a set $\mathcal A$ and coefficients in a commutative ring $\mathcal R$ is a mapping from $\mathcal A^*$ to $\mathcal R$. Recall that a commutative ring is a set which forms an abelian group under an \textit{addition} operation, is a monoid under a \textit{multiplication} operation which commutes, and the multiplication operation distributes over the addition. Here we consider $\mathbb Z$, the set of integers, as the commutative ring $\mathcal{R}$. A formal series is called a \textit{polynomial} if only a finite number of sequences are mapped to non-zero values, while the rest of the sequences map to zero. Consider two polynomials $\sigma,\tau: \mathcal{A}^*\rightarrow \mathbb Z$. The value taken by a sequence $w\in \mathcal A^*$ on $\sigma$ (or the coefficient of $w$ in $\sigma$) is denoted by $\langle \sigma,w\rangle \in \mathbb Z$.
We also define binary addition ($\oplus$) and multiplication ($\times$) operations on the set of polynomials as follows: \begin{align} \langle \sigma\oplus \tau,w\rangle \triangleq \langle \sigma,w\rangle + \langle \tau,w \rangle \quad \forall w\in \mathcal A^*,\label{eq:polynomial_add}\\ \langle \sigma\times \tau,w\rangle \triangleq \sum_{\substack{f,g\in \mathcal A^*:\\ f.g=w}}\langle \sigma,f\rangle \langle \tau,g \rangle \quad \forall w\in \mathcal A^*.\label{eq:polynomial_prod} \end{align} We will use the usual symbols $+$ and $.$ in place of $\oplus$ and $\times$ in this work for convenience; the meaning of the operation will be clear from the operands. With these operations the set of polynomials forms a non-commutative ring, denoted by $\mathbb Z\langle\mathcal A \rangle$ and also called the free $\mathbb Z$-algebra on $\mathcal A$ in ring theory. Note that the addition and multiplication operations defined in \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are similar to the operations defined on commutative polynomials, except that the product under the summation in \eqref{eq:polynomial_prod} ($f.g=w$) is actually concatenation and is non-commutative. The multiplication inside the summation in \eqref{eq:polynomial_prod} is multiplication of integers and hence commutative. It is also easy to prove that the multiplication defined in \eqref{eq:polynomial_prod} distributes over the addition defined in \eqref{eq:polynomial_add}. Thus, a polynomial in $\mathbb Z\langle\mathcal A \rangle$ can be represented as a sum of monomials in $\mathcal A^*$, each with an associated coefficient in $\mathbb Z$, i.e., $\sigma=\sum\limits_{w\in \mathcal A^*} \langle\sigma,w \rangle w$.
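The two ring operations \eqref{eq:polynomial_add} and \eqref{eq:polynomial_prod} are straightforward to realize with polynomials represented as dictionaries mapping sequences to integer coefficients. The Python sketch below is our own illustration (not part of the paper's algorithms) and makes the non-commutativity of the concatenation product explicit.

```python
def poly_add(sigma, tau):
    # <sigma (+) tau, w> = <sigma, w> + <tau, w> for every sequence w
    out = dict(sigma)
    for w, c in tau.items():
        out[w] = out.get(w, 0) + c
    return out

def poly_mul(sigma, tau):
    # <sigma (x) tau, w> = sum over factorizations f.g = w of
    # <sigma, f><tau, g>; equivalently, concatenate every pair of monomials
    out = {}
    for f, cf in sigma.items():
        for g, cg in tau.items():
            out[f + g] = out.get(f + g, 0) + cf * cg
    return out
```

For instance, with $\sigma = a + 2b$ and $\tau = 3b$, `poly_mul({'a': 1, 'b': 2}, {'b': 3})` yields `{'ab': 3, 'bb': 6}`, while swapping the operands yields `{'ba': 3, 'bb': 6}`, reflecting the non-commutativity of concatenation.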
Define the \textit{degree} of a polynomial to be the length of a longest sequence with a non-zero coefficient in the polynomial, and the \textit{number of terms} of a polynomial as the number of sequences with non-zero coefficients in the polynomial. Note that a degree $d$ polynomial could have up to $2^{d+1}-1$ terms. With this, the \textit{infiltration product} (in general, for two polynomials) is defined as follows: \begin{align} \forall f\in \mathcal{A}^*,& \quad f\uparrow e = e\uparrow f=f.\nonumber \\ \forall f,g\in \mathcal{A}^*&,\quad \forall a,b\in \mathcal{A}, \nonumber\\ fa\uparrow gb=(f\uparrow gb)a&+(fa\uparrow g)b+\mathbbm{1}_{a=b}(f\uparrow g)a.\nonumber \\ \forall \sigma,\tau \in \mathbb{Z}\langle\mathcal{A} \rangle, \quad &\sigma\uparrow \tau=\sum_{f,g\in \mathcal{A}^*} \langle \sigma,f \rangle \langle \tau,g \rangle (f\uparrow g). \label{def:infiltforseq} \end{align} \textbf{Summary of definitions and ideas introduced in this section:} \begin{itemize} \item The binomial coefficient captures the likelihood of observations for deletion channels. \item An extension of the binomial coefficient function where one of the parameters can take real values has been introduced; this function is pivotal for our results on the single-trace deletion channel. \item For multiple deletion channels, the error events can be categorized into two groups -- one where an input symbol is deleted in all the traces, and second, the complement of this event. This categorization results in a natural decomposition of the multiple deletion channel model into a cascade model involving the remnant channel. \item The remnant channel disregards the error events where a symbol is deleted in all the traces. \item The edit graph provides a way to visualize all the possible error events and input sequences to a remnant channel given its outputs.
\item The infiltration product serves the same purpose as the edit graph, but has an algebraic flavor and provides rigor for proofs and analyses. The edit graph, on the other hand, is more helpful in designing reconstruction algorithms over deletion channels. \end{itemize} \section{Symbolwise MAP for the single-trace deletion channel} \label{sec:1deletion} We here develop an algorithm to compute the symbolwise posterior probabilities for the single-trace deletion channel when the input symbols are independently generated with arbitrary priors. 
Consider the single deletion channel model in Fig.~\ref{fig:1deletion}, where $X=X_1...X_n$, each input symbol is generated $X_i \sim \text{ind. Ber}\ (p_i)$, and we observe the trace $Y = y = y_1y_2...y_m$ with $m\leq n$. Define the vector of priors as $p\triangleq(p_1,p_2,...,p_n)$. We first give an $O(n^2)$ algorithm to calculate the posterior probabilities $\Pr(X_i=1|Y=y)$, which in turn provides the symbolwise MAP estimate for the considered model. We then show how this algorithm can be used for trace reconstruction. We take three steps to present the algorithm. \noindent \textbf{An expression for $\Pr(X_i=1|Y=y)$.} Let $\Pr(X_i=1)=p_i$. As a first step, we have \vspace{6pt} \begin{align*} \Pr(X_i=1|{Y=y}) &= \frac{\Pr(X_i=1,Y=y)}{\Pr(Y=y)} = \frac{ \sum\limits_{\substack{ x| x_i=1}} \Pr({X=x}) \Pr(Y=y|X=x)}{ \sum_{\substack{x}} \Pr({X=x}) \Pr(Y=y|X=x)} \\ &\overset{(a)}{=} \frac{ \sum\limits_{\substack{ x| x_i=1}} \Pr({X=x}) { x \choose y}}{ \sum_{\substack{x}} \Pr({X=x}) { x \choose y}}, \numberthis \label{eq:approx_smap_1} \end{align*} where $(a)$ is because for a deletion channel $\Pr(Y=y|X=x)={x \choose y} \delta^{|x|-|y|}(1-\delta)^{|y|}$. To proceed, we need to evaluate the summation in the numerator and the denominator. Theorem~\ref{thm:approx_smap_postprob} expresses \eqref{eq:approx_smap_1} in terms of relaxed binomial coefficient terms $\mathbf F(\cdot)$. Recall that $\mathbf F(p,y) \triangleq \mathbb E_{X\sim p} {X \choose y}$, which is the denominator term in \eqref{eq:approx_smap_1}. \begin{theorem} \label{thm:approx_smap_postprob} Let $X=X_1...X_n$ where $X_i \sim \text{ind. Ber}\ (p_i)$, and let $Y=y$ be the observed trace when $X$ is passed through a deletion channel. Then, \begin{align*} \Pr(X_i&=1|Y=y) = \frac{p_i}{\mathbf F( p, y)} \left( \mathbf F( p_{[n]\backslash \{i\}}, y) + \sum\limits_{k|y_k=1}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]}) \right). 
\numberthis \label{eq:smap_postprob_thm} \end{align*} \end{theorem} \begin{proof} The proof of this theorem employs the same trick used in the proof of Lemma~\ref{lemma:F_decomposition}. From \eqref{eq:approx_smap_1}, we have \begin{align*} \Pr(X_i = 1 | Y=y) = \frac{ \sum\limits_{\substack{ x| x_i=1}} \Pr({X=x}) { x \choose y}}{\mathbf F(p,y)}. \end{align*} Now, \begin{align*} \sum_{\substack{ x| x_i=1}} & \Pr({X=x}) { x \choose y} =\sum_{\substack{ x|x_i=1}} \Pr({X=x}) \sum_{\substack{\mathcal S\subseteq [n]\\ |\mathcal S|=m}} \mathbbm{1}\{ x_{\mathcal S}= y\}\\ &=\sum_{\substack{\mathcal S\subseteq [n]\\ |\mathcal S|=m}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=y}} \Pr({X=x}).\numberthis \label{eq:smapiter1} \end{align*} We first separate the outer summation into two cases: (a) $\mathcal S|i \notin \mathcal S$ and (b) $\mathcal S|i\in \mathcal S$. We can express the first case as \begin{align*} &\hspace{-1cm}\sum_{\substack{\mathcal S\subseteq [n] \\ |\mathcal S|=m,i\notin \mathcal S}}\sum_{\substack{ x|x_i=1\\x_{\mathcal S}=y}} \Pr({X=x}) =\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=y}} \Pr({X=x})\\ &=\sum_{\substack{S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} \sum_{\substack{ x|x_i=1\\ x_{\mathcal S}= y}} \Big(\Pr(X_i=1)\Pr( X_{\mathcal S}= y) \Pr( X_{[n]\backslash \mathcal S\cup\{i\}}= x_{[n]\backslash \mathcal S\cup\{i\}}) \Big)\\ &=\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} p_i \Pr( X_{\mathcal S}= y) \left(\sum_{\substack{ x|x_i=1\\ x_{\mathcal S}= y}} \Pr( X_{[n]\backslash \mathcal S\cup\{i\}}= x_{[n]\backslash \mathcal S\cup\{i\}})\right)\\ &=\sum_{\substack{\mathcal S\subseteq [n]\backslash \{i\}\\ |\mathcal S|=m}} p_i \Pr( X_{\mathcal S}= y) \left(\sum_{(x_j|j\in [n]\backslash \mathcal S\cup \{i\})} \Pr( X_{[n]\backslash \mathcal S\cup\{i\}}= x_{[n]\backslash \mathcal S\cup\{i\}})\right)\\ &=p_i \sum_{\substack{\mathcal S\subseteq [n]\backslash 
\{i\}\\ |\mathcal S|=m}} \Pr( X_{\mathcal S}= y) = p_i \mathbf F( p_{[n]\backslash \{i\}}, y).\numberthis \label{eq:lemma3proof1} \end{align*} For the second term, we express the set $\mathcal S$ as a union $\mathcal S = \mathcal S' \cup \{i\} \cup \mathcal S''$ such that $\mathcal S' \subseteq [i-1]$ and $\mathcal S'' \subseteq [i+1:n]$ to get: \begin{align*} &\sum_{\substack{\mathcal S\subseteq [n]\\ |\mathcal S|=m,\\i\in \mathcal S}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=y}} \Pr({X=x})= \sum_{k=1}^m\sum\limits_{\substack{\mathcal S\subseteq [n],\\|\mathcal S|=m,\\ \mathcal S_k = i}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=y}} \Pr({X=x})\\ &=\sum_{k=1}^m\sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \sum_{\substack{ x|x_i=1\\x_{\mathcal S}=y}}\mathbbm{1}_{\{y_k=1\}} \Pr({X=x}) \\ &=\sum_{k:y_k=1}\sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}} \sum_{\substack{ x|x_i=1\\ x_{\mathcal S'}= y_{[1:k-1]}\\ x_{\mathcal S''}= y_{[k+1:m]}}} \Bigg ( \Pr(X_i=1) \Pr( X_{\mathcal S'}= y_{[1:k-1]}) \Pr( X_{\mathcal S''}= y_{[k+1:m]}) \\&\hspace{7cm} \Pr( X_{[n]\backslash \mathcal S'\cup \mathcal S''\cup \{i\}}= x_{[n]\backslash \mathcal S'\cup \mathcal S'' \cup\{i\}})\Bigg )\\ &=p_i\sum_{k:y_k=1}\Bigg ( \Big( \sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\Pr( X_{\mathcal S'}= y_{[1:k-1]})\Big) \Big(\sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}}\Pr( X_{\mathcal S''}= y_{[k+1:m]})\Big ) \\ & \hspace{5cm} \Big( \sum_{\substack{ x|x_i=1\\ x_{\mathcal S'}= y_{[1:k-1]}\\ x_{\mathcal S''}= y_{[k+1:m]}}} \Pr( X_{[n]\backslash \mathcal S'\cup \mathcal S''\cup \{i\}}= x_{[n]\backslash \mathcal S'\cup \mathcal S'' \cup\{i\}})\Big) \Bigg )\\ &=p_i\sum_{k|y_k=1}\Bigg(\Big( \sum_{\substack{\mathcal S'\subseteq [i-1]\\ |\mathcal S'|=k-1}}\Pr( X_{\mathcal S'}= y_{[1:k-1]})\Big) 
\Big( \sum_{\substack{\mathcal S''\subseteq [i+1:n]\\ |\mathcal S''|=m-k}}\Pr( X_{\mathcal S''}=y_{[k+1:m]}) \Big)\Bigg)\\ &=p_i \sum\limits_{k|y_k=1}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]}). \numberthis \label{eq:lemma3proof2} \end{align*} Plugging in \eqref{eq:lemma3proof1} and \eqref{eq:lemma3proof2} in \eqref{eq:approx_smap_1} proves the theorem. \end{proof} Alg.~\ref{alg:apprx_smap_dp} summarizes the computation of $\Pr(X_i=1|Y=y)$. \begin{algorithm}[t!] \caption{Symbolwise posterior probabilities with one trace}\label{alg:apprx_smap_dp} \begin{algorithmic}[1] \item Input: Trace {$Y=y$}, priors $p$\\ Outputs: Posteriors $\Pr(X_i=1|Y=y)\ \forall\ i$ \State Compute $\mathbf F(p_{[1:k]},y_{[1:j]})\ \forall\ k,j$ and $\mathbf F(p_{[k:n]},y_{[j:m]})\ \forall\ k,j$ via Alg.~\ref{alg:F_comp} \For {$i=1:n$} \State Use \eqref{eq:smap_postprob_thm} to compute $\Pr(X_i=1|Y=y)$ \EndFor \end{algorithmic} \end{algorithm} \noindent\textbf{A trace reconstruction heuristic with $t$ traces.} The posterior probability computation in Alg.~\ref{alg:apprx_smap_dp} naturally gives rise to a trace reconstruction heuristic that updates the symbolwise statistics sequentially on the traces, where we use Alg.~\ref{alg:apprx_smap_dp} with one trace at a time to continually update $\Pr(X_i=1|Y=y)$. The overall heuristic is described in Alg.~\ref{alg:apprx_smap}. Note that the algorithm first needs to compute $\mathbf F(p_{[1:k]},y_{[1:j]})\ \forall\ k,j$ and $\mathbf F(p_{[k:n]},y_{[j:m]})\ \forall\ k,j$, which requires $O(n^2)$ operations, as described in Appendix~\ref{app:F_compute}. Given this, the algorithm iterates over the $n$ indices and computes the posterior in $O(n)$ time for each index. Thus, the complexity of the algorithm is $O(n^2)$; note that $m=O(n)$ since $y$ is a deleted version of the input. \begin{algorithm}[t!] 
\caption{Trace reconstruction via iterative single-trace posterior probabilities}\label{alg:apprx_smap} \begin{algorithmic}[1] \item Input: Traces {$Y^{1}=y^1,...,Y^{t}=y^t$}, input length $n$ \\ Outputs: Estimate of the input $\hat X$ \State Initialize priors $p^{old}=p^{new} \gets (0.5,0.5,...,0.5)$ \For {$l=1:t$} \State Use Alg.~\ref{alg:apprx_smap_dp} with $p^{old}$ and $y^{l}$ to update $p^{new}$ \State $p^{old}\gets p^{new}$ \EndFor \For {$i=1:n$} \If {$p^{new}_i\geq 0.5$} $\ \hat X_i \gets 1$ \Else $\ \hat X_i \gets 0$ \EndIf \EndFor \State \textbf{return} $\hat X_1 \hat X_2 ... \hat X_n$ \end{algorithmic} \end{algorithm} \section{Sequencewise ML for the deletion channel} \label{sec:1deletion_ML} \subsection{A continuous optimization formulation for the single trace ML} We here consider the single-trace ML decoding in (\ref{eq:ML_deletion}), assuming that the output sequence $Y=y$ is non-empty. To the best of our knowledge, the only known method to solve \eqref{eq:ML_deletion} involves solving a combinatorial optimization, essentially iterating over all possible choices of $x$ and computing the objective value for each of the choices. The reason is that there seems to be no discernible pattern exhibited by the true ML sequence; as we see in the table below, the true ML sequence at times extends a few runs, and at times even introduces new runs! Here, we list a few examples of the trace and the corresponding 10-length ML sequences. \begin{center} \begin{tabular}{ | c | c| } \hline $y$ & The set of all $x_{ml}$ sequences \\ \hline 10111 & 1100111111 \\ \hline 1010 & 1101010100 \\ \hline 000100 & 0000001000, \quad 0000010000, \quad 0000011000 \\ \hline 111101 & 1111111001,\quad 1111111011 \\ \hline \end{tabular} \end{center} \vspace{5mm} In this section, we show that one could equivalently solve the continuous relaxation of \eqref{eq:ML_deletion} to obtain a solution for \eqref{eq:ML_deletion}. 
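As background for what follows, the relaxed binomial coefficient $\mathbf F(p,y)$ can be evaluated in $O(nm)$ time by a standard subsequence-counting dynamic program: splitting on whether input position $i$ is matched to $y_j$ gives $\mathbf F(p_{[1:i]},y_{[1:j]}) = \mathbf F(p_{[1:i-1]},y_{[1:j]}) + \Pr(X_i=y_j)\,\mathbf F(p_{[1:i-1]},y_{[1:j-1]})$. The following sketch is our own illustration of this recursion (not the paper's Alg.~\ref{alg:F_comp}; the function name is ours):

```python
def F(p, y):
    """Relaxed binomial coefficient F(p, y) = E_{X~p} C(X, y), where
    C(x, y) counts occurrences of y as a subsequence of x.
    p: list of Bernoulli parameters in [0,1]; y: list of bits.
    O(n*m) time via the subsequence-counting dynamic program."""
    n, m = len(p), len(y)
    dp = [1.0] + [0.0] * m        # dp[j] = F(p[:i], y[:j]) as i advances
    for i in range(n):
        for j in range(m, 0, -1):  # backwards so dp[j-1] still holds the old value
            match = p[i] if y[j - 1] == 1 else 1.0 - p[i]
            dp[j] += match * dp[j - 1]
    return dp[m]

# With a 0/1-valued p, F reduces to the binomial coefficient C(x, y):
print(F([1, 1, 0, 1], [1, 1]))   # 3.0: "11" occurs 3 times as a subsequence of "1101"
print(F([0.5, 0.5], [1]))        # 1.0: the expected number of 1s
```

Note that the recursion makes the affineness of $\mathbf F(p,y)$ in each coordinate $p_i$ (used repeatedly below) easy to check numerically.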
Before presenting the main result, we first state a useful lemma which factors a given coordinate $p_i$ out of the relaxed binomial coefficient $\mathbf F(p,y)$ we introduced in Definition~\ref{def:f}. \begin{restatable}{lemma}{deletionMLrellemma} For $p=(p_1,p_2,..,p_i,...,p_n)$ and $Y=y=y_1...y_m$ with $n \geq m > 0$, we have \begin{align*} \mathbf F(p,y) = \mathbf F( p_{ [n]\backslash \{i\}},y) + p_i \sum\limits_{k|y_k=1}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]})\\ + (1-p_i) \sum\limits_{k|y_k=0}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]}). \end{align*} \label{lemma:F_decomposition} \end{restatable} Recall that $\mathbf F(p,y)$ sums, over all $m$-length position subsets $\mathcal S$, the probability that $X_{\mathcal S}$ equals $y$ under the product distribution $p$. Intuitively, this recursive relationship considers separately the cases where \begin{itemize} \item $i \notin \mathcal S$, \item $i \in \mathcal S$ and is associated with a particular $y_k$ where $y_k = 1$, \item $i \in \mathcal S$ and is associated with a particular $y_k$ where $y_k = 0$. \end{itemize} The detailed proof can be found in Appendix~\ref{app:F_lemma_proof}. It is clear from Lemma~\ref{lemma:F_decomposition} that $\mathbf F(p,y)$ is affine as a function of each individual coordinate $p_i$. Thus, a maximum of $\mathbf F(p,y)$ over $p_i \in [0,1]$ is attained at an endpoint of the interval, i.e., at $p_i = 0$ or $p_i = 1$. Combining this with the fact that $\mathbf F(\cdot)$ is a relaxed version of the binomial coefficient, we observe that the maximization problem in \eqref{eq:ML_deletion} is equivalent to its real-valued relaxation. The following result makes this precise. \begin{theorem} The ML decoding problem for the single-trace deletion channel \begin{equation} \max_{x\in \{0,1\}^n} {x \choose y} \label{eq:ml_opt_equiv1} \end{equation} is equivalent to the problem \begin{equation} \max_{p\in [0,1]^n} \mathbf F(p,y). 
\label{eq:ml_opt_equiv2} \end{equation} Furthermore, given any non-integral $p^* \in [0,1]^n$ that maximizes $\mathbf F(p,y)$, we can construct a corresponding integral solution $x^* \in \{0,1\}^n$ that maximizes $\mathbf F(x,y)$ and consequently also maximizes ${x \choose y}$. \label{thm:ML_relaxation} \end{theorem} \begin{proof} As noted earlier, we have ${x \choose y} = \mathbf F(x,y)$. Therefore, we are interested in proving the following: \begin{align*} \max_{x\in \{0,1\}^n} \mathbf F(x,y) \equiv \max_{p\in [0,1]^n} \mathbf F(p,y),\numberthis \label{eq:ml_opt_equiv_proof1} \end{align*} where $\equiv$ means that the two problems are equivalent (they have the same optimal objective value). We prove this by applying the following claim.\\ \textbf{Claim:} Given any feasible $p=(p_1,p_2,...,p_i,...,p_n)$, at least one of the following holds true: \begin{itemize} \item $\mathbf F(p^{(i\rightarrow 0)},y) \geq \mathbf F(p,y)$. Recall from notation that $p^{(i\rightarrow 0)}=(p_1,p_2,...,p_{i-1},0,p_{i+1}...,p_n)$ is the vector where the $i^{th}$ coordinate is replaced by $0$. \item $\mathbf F(p^{(i\rightarrow 1)},y) \geq \mathbf F(p,y)$. \end{itemize} Thus if $p^*$ is an optimal solution to \eqref{eq:ml_opt_equiv2} with $p_i\in (0,1)$, then at least one of $p^{(i\rightarrow 0)}$ or $p^{(i\rightarrow 1)}$ is also an optimal solution. Sequentially applying this argument for each coordinate of $p$ shows that there exists a point in $\{0,1\}^n$ which is an optimal solution to \eqref{eq:ml_opt_equiv2} and consequently to \eqref{eq:ml_opt_equiv1}. It remains to prove our claim. We use Lemma~\ref{lemma:F_decomposition} to factor out $p_i$ terms in $\mathbf F(p,y)$: \begin{align*} \mathbf F(p,y) = \mathbf F( p_{[n]\backslash \{i\}},y) + p_i \sum\limits_{k|y_k=1}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]})\\ + (1-p_i) \sum\limits_{k|y_k=0}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]}). 
\end{align*} Now we express $\mathbf F(p^{(i\rightarrow 0)},y)$ and $\mathbf F(p^{(i\rightarrow 1)},y)$ as $$\mathbf F(p^{(i\rightarrow 0)},y) = \mathbf F( p_{[n]\backslash \{i\}},y) + \sum\limits_{k|y_k=0}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]}),$$ $$\mathbf F(p^{(i\rightarrow 1)},y) = \mathbf F( p_{[n]\backslash \{i\}},y) + \sum\limits_{k|y_k=1}\mathbf F( p_{[1:i-1]}, y_{[1:k-1]})\mathbf F( p_{[i+1:n]}, y_{[k+1,m]}).$$ \noindent Because $0\leq p_i\leq 1$, it directly follows that $$\min \left\{\mathbf F(p^{(i\rightarrow 0)},y),\mathbf F(p^{(i\rightarrow 1)},y)\right\} \leq \mathbf F(p,y) \leq \max \left\{\mathbf F(p^{(i\rightarrow 0)},y),\mathbf F(p^{(i\rightarrow 1)},y)\right\},$$ thus proving our claim. \end{proof} The real-valued optimization problem in \eqref{eq:ml_opt_equiv2} falls under the umbrella of signomial optimization, which is, in general, NP-hard (see, for example, \cite{xu2014signomial}, \cite{chand2016signomial}). A standard technique for signomial optimization uses convexification strategies to approximate the optimal value. In particular, as stated in \cite{chand2016signomial}, the main observation underlying their methods is that certifying the nonnegativity of a signomial with at most \textit{one negative coefficient} can be accomplished efficiently. However, there are two problems with this approach in relation to our work: first, when expressed as a signomial optimization problem, \textit{all} the coefficients in the ML optimization objective function are negative; and second, the objective function has an exponential number of signomial terms, as can be seen from Definition~\ref{def:f}. As a result, such strategies turn out not to be useful for the ML optimization problem. For instance, the techniques in \cite{chand2016signomial} resulted in the bound $\mathbf F(p,y) \leq {|p| \choose |y|}$ for most instances of $p$ and $y$, where $|\cdot|$ denotes the length of the vector/sequence. 
This is a trivial bound that uses no information about $p$ and $y$ other than their lengths. Moreover, with a slight change of variables, \eqref{eq:ml_opt_equiv2} could also be expressed as the maximization of a convex function over a convex set. With that said, it remains unclear whether \eqref{eq:ml_opt_equiv2} is solvable in polynomial time. \subsection{ML via gradient ascent} Given the continuous variable formulation of the ML problem in \eqref{eq:ml_opt_equiv2}, a natural heuristic to find an estimate of the ML sequence is to employ \textit{projected gradient ascent} to solve \eqref{eq:ml_opt_equiv2}. The algorithm, in short, can be described as follows (the exact algorithm is detailed as Alg.~\ref{alg:grad_asc}):\\ Step I: Start from an interior point (in our case, we start from $p=(0.5,0.5,...,0.5)$, the point corresponding to the uniform distribution).\\ Step II: Take a small step in the direction of the gradient $\nabla_p\ \mathbf F(p,y)$.\\ Step III: If the gradient step results in $p$ moving out of $[0,1]^n$, project it back onto $[0,1]^n$. Repeat Steps II and III until convergence.\\ Step IV: Declare the binary sequence closest to the final $p$ to be the reconstructed sequence. Moreover, in Appendix~\ref{app:F_grad_comp}, we show using Lemma~\ref{lemma:F_decomposition} that $\nabla_p\ \mathbf F(p,y)$ can be computed in $O(n^2)$ as a ``by-product'' of computing $\mathbf F(p,y)$. \begin{algorithm}[t!] 
\caption{Single trace projected gradient ascent for ML}\label{alg:grad_asc} \begin{algorithmic}[1] \item Input: Blocklength $n$, Trace {$Y=y$}, Initial point $p = (p_1,p_2,...,p_n)$, step-size $\epsilon$, Max iterations $M$, Convergence criteria $C$ \\ Outputs: Estimated sequence $\hat X$ \State Iteration count $j=0$ \While {$C$ is FALSE and $j<M$} \State $p \leftarrow p + \epsilon \frac{\nabla_p \mathbf F(p,y)}{\mathbf F(p,y)}$ \State Replace $p_i \leftarrow 1$ for all $i:p_i > 1$ \State Replace $p_i \leftarrow 0$ for all $i:p_i < 0$ \State $j\leftarrow j+1$ \EndWhile \State For each $i$, set $\hat X_i = \mathbbm 1 \{p_i>0.5\}$. \State \textbf{return} $\hat X = \hat X_1 \hat X_2 ... \hat X_n$ \end{algorithmic} \end{algorithm} \newpage \subsection{A heuristic for multiple traces} The continuous variable ML formulation in \eqref{eq:ml_opt_equiv2} optimizes over the distributions $p$, instead of sequences $x$. In particular, we proved the following: \begin{equation*} \max_{x\in \{0,1\}^n} {x \choose y} \equiv \max_{p\in [0,1]^n} \mathbf F(p,y) \equiv\ \max_{p\in [0,1]^n}\ \mathbb E_{Z \sim p} {Z \choose y}. \end{equation*} At this point, one could ask how this formulation extends to multiple traces $Y^1=y^1,Y^2=y^2,...,Y^t=y^t$. The following theorem gives such a continuous optimization formulation with multiple traces. \begin{theorem} The ML decoding with multiple traces \begin{equation} \max_{x\in \{0,1\}^n} {x \choose y^1}{x \choose y^2}...{x \choose y^t} \label{eq:ml_opt_traces1} \end{equation} is equivalent to \begin{equation} \max_{p\in [0,1]^n} \mathbb E_{Z\sim p} \left[{Z \choose y^1}{Z \choose y^2}...{Z \choose y^t} \right]. \label{eq:ml_opt_traces2} \end{equation} Furthermore, given any non-integral $p^* \in [0,1]^n$ that maximizes $\mathbb E_{Z\sim p} \left[{Z \choose y^1}{Z \choose y^2}...{Z \choose y^t} \right]$, we can construct a corresponding integral solution $x^* \in \{0,1\}^n$ that also maximizes ${x \choose y^1}{x \choose y^2}...{x \choose y^t} $. 
\label{thm:ML_multi_traces} \end{theorem} \begin{proof} This theorem can be proved in the same way as Theorem~\ref{thm:ML_relaxation}, by showing that \\ $\mathbb E_{Z\sim p} \left[{Z \choose y^1}{Z \choose y^2}...{Z \choose y^t} \right]$ is an affine function of each $p_i$; here we only prove this fact, and the rest of the arguments follow exactly as in the proof of Theorem~\ref{thm:ML_relaxation}. To show this we use Lemma~\ref{lemma:bin_inf_relation} stated below; this lemma is also closely related to the channel equivalence of Theorem~\ref{thm:channel_equiv} (see Appendix~\ref{app:bin_inf_lemma}). \begin{restatable}{lemma}{bininfrelation} \label{lemma:bin_inf_relation} For $h,f_1,f_2,...,f_m \in \mathcal{A}^*$,\\ $${h \choose f_1} {h \choose f_2}...{h \choose f_m}=\sum_{w\in \mathcal{A}^*}\langle f_1\uparrow f_2\uparrow ...\uparrow f_m,w \rangle{h \choose w}.$$ \end{restatable} Using Lemma~\ref{lemma:bin_inf_relation}, we now have \begin{align*} \mathbb E_{Z\sim p} \left[{Z \choose y^1}{Z \choose y^2}...{Z \choose y^t}\right] &= \mathbb E_{Z\sim p} \sum_{w\in \mathcal{A}^*}\langle y^1\uparrow y^2\uparrow ...\uparrow y^t,w \rangle{Z \choose w}\\ &= \sum_{w\in \mathcal{A}^*}\langle y^1\uparrow y^2\uparrow ...\uparrow y^t,w \rangle \mathbb E_{Z\sim p} {Z \choose w}\\ & = \sum_{w\in \mathcal{A}^*}\langle y^1\uparrow y^2\uparrow ...\uparrow y^t,w \rangle \mathbf F(p,w). \end{align*} Note that $\mathbf F(p,w)$ is affine in each $p_i$. Thus $\mathbb E_{Z\sim p} \left[{Z \choose y^1}{Z \choose y^2}...{Z \choose y^t}\right]$ is a linear combination of affine functions of each $p_i$, and hence is also affine in each $p_i$. \end{proof} The formulation of \eqref{eq:ml_opt_traces2}, by itself, is not very useful as it is unclear how to efficiently compute $\mathbb E_{Z\sim p} \left[{Z \choose y^1}{Z \choose y^2}...{Z \choose y^t}\right]$. 
Indeed, if ${Z \choose y^i}\ \raisebox{0.05em}{\rotatebox[origin=c]{90}{$\models$}}\ {Z \choose y^j}$, the expectation of products would decompose into the product $\prod_{j} \mathbb E_{Z\sim p} {Z \choose y^j} = \prod_{j} \mathbf F(p,y^j)$, and each of the terms in the product can be computed in $O(n^2)$ as detailed in Appendix~\ref{app:F_compute} -- this is however not the case as ${Z \choose y^i}$ and ${Z \choose y^j}$ are not independent. Having said that, we can now solve the maximization problem $\argmax_{p\in [0,1]^n} \prod_{j=1}^t \mathbf F(p,y^j)$ and hope that the resultant solution is also a good solution for $\argmax_{p\in [0,1]^n} \mathbb E_{Z\sim p} \left[{Z \choose y^1}...{Z \choose y^t}\right]$; Algorithm~\ref{alg:grad_asc_traces} makes this idea precise. Moreover, instead of maximizing $\prod_{j=1}^t \mathbf F(p,y^j)$, we can further simplify the gradient computations by taking the log of the objective function, i.e., we solve $\argmax_{p\in [0,1]^n} \sum_{j=1}^t \log \mathbf F(p,y^j)$. This heuristic turns out to perform well in a variety of situations, as illustrated in Section~\ref{sec:Numerics}. As for the complexity, note that Alg.~\ref{alg:grad_asc_traces} involves the computation of $t$ gradients (each of which takes $O(n^2)$) at each gradient iteration. For a fixed number of max iterations $M$, the complexity of the algorithm is $O(n^2t)$. \begin{algorithm}[t!] 
\caption{Trace reconstruction heuristic via projected gradient ascent}\label{alg:grad_asc_traces} \begin{algorithmic}[1] \item Input: Blocklength $n$, Traces {$Y^1=y^1,Y^2=y^2,...,Y^t=y^t$}, Initial point $p = (p_1,p_2,...,p_n)$, step-size $\epsilon$, Max iterations $M$, Convergence criteria $C$ \\ Outputs: Estimated sequence $\hat X$ \State Iteration count $j=0$ \While {$C$ is FALSE and $j<M$} \State $p \leftarrow p + \epsilon \sum_{l=1}^t \frac{\nabla_p \mathbf F(p,y^l)}{\mathbf F(p,y^l)}$ \State Replace $p_i \leftarrow 1$ for all $i:p_i > 1$ \State Replace $p_i \leftarrow 0$ for all $i:p_i < 0$ \State $j\leftarrow j+1$ \EndWhile \State For each $i$, set $\hat X_i = \mathbbm 1 \{p_i>0.5\}$. \State \textbf{return} $\hat X = \hat X_1 \hat X_2 ... \hat X_n$ \end{algorithmic} \end{algorithm}
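To make Lemma~\ref{lemma:bin_inf_relation} concrete, the following sketch (our own illustration; the function names are ours) computes the infiltration product directly from its defining recursion and verifies the identity ${h \choose f_1}{h \choose f_2} = \sum_{w} \langle f_1\uparrow f_2, w\rangle {h \choose w}$ on a small example:

```python
from functools import lru_cache

def C(x, y):
    """Binomial coefficient C(x, y): occurrences of y as a subsequence of x."""
    dp = [1] + [0] * len(y)
    for xi in x:
        for j in range(len(y), 0, -1):
            if y[j - 1] == xi:
                dp[j] += dp[j - 1]
    return dp[len(y)]

@lru_cache(maxsize=None)
def infiltration(f, g):
    """f (infiltration) g as a dict {word: coefficient}, via the recursion
    fa^gb = (f^gb)a + (fa^g)b + 1{a=b}(f^g)a, with f^e = e^f = f."""
    if not f:
        return {g: 1}
    if not g:
        return {f: 1}
    out = {}
    for word, c in infiltration(f[:-1], g).items():     # (f ^ gb) a
        out[word + f[-1]] = out.get(word + f[-1], 0) + c
    for word, c in infiltration(f, g[:-1]).items():     # (fa ^ g) b
        out[word + g[-1]] = out.get(word + g[-1], 0) + c
    if f[-1] == g[-1]:                                  # 1{a=b} (f ^ g) a
        for word, c in infiltration(f[:-1], g[:-1]).items():
            out[word + f[-1]] = out.get(word + f[-1], 0) + c
    return out

h, y1, y2 = '110', '10', '1'
lhs = C(h, y1) * C(h, y2)
rhs = sum(c * C(h, w) for w, c in infiltration(y1, y2).items())
print(lhs, rhs)  # both 4
```

Here $'10' \uparrow '1' = 2\cdot 110 + 10 + 101$, and both sides of the identity evaluate to $4$ for $h=110$.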
\section{Introduction} Computer-aided thoracic disease diagnosis based on chest X-ray has been significantly advanced by deep convolutional neural networks (DCNNs) in recent years~\cite{wang2017chestx
,tang2018attention,sedai2018deep,wang2018chestnet}. Most of these approaches are formulated as a multi-task binary classification problem, where a CNN is trained to predict the risk probabilities of different thoracic diseases. In clinical practice, visual localization of the lesions on chest X-ray, such as heatmaps or segmentation masks, is also preferred to provide interpretable support for the classification results. Precise lesion localization often requires training CNNs with strong supervision, such as bounding boxes~\cite{wang2017chestx}, beyond merely image-level labels. However, accurately annotating lesion locations is difficult, time-consuming, and infeasible at large scale. For example, one of the largest publicly available chest X-ray datasets, ChestX-ray14~\cite{wang2017chestx}, contains more than one hundred thousand images with image-level labels, among which fewer than one thousand images are further annotated with bounding boxes for benchmarking. Therefore, weakly supervised lesion localization on chest X-ray based on image-level labels remains a challenging but vital problem for computer-aided thoracic disease diagnosis. The recent work on Class Activation Maps (CAM)~\cite{zhou2016learning} demonstrates the excellent localization ability of CNNs trained on natural images with only image-level supervision. On chest X-ray images, CAM and its variations have also been used for lesion localization~\cite{wang2017chestx,tang2018attention,sedai2018deep,wang2018chestnet}. However, most of these approaches utilize CAM as a post-processing technique to first generate lesion localization heatmaps, then threshold the heatmap scores and generate lesion bounding boxes. We argue that it may be beneficial to leverage CAM even during training, given its excellent localization ability. 
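For reference, the CAM of~\cite{zhou2016learning} for a class $c$ is the weighted sum $M_c(i,j)=\sum_k w_k^c f_k(i,j)$ of the final convolutional feature maps, using the classification-layer weights of that class. A minimal sketch (the array shapes and names here are our own assumptions, not from the paper):

```python
import numpy as np

def class_activation_map(feature_map, fc_weights, class_idx):
    """CAM of Zhou et al.: M_c(h, w) = sum_k w_k^c * f_k(h, w).
    feature_map: (C, H, W) output of the last conv layer.
    fc_weights:  (num_classes, C) weights of the classification layer."""
    w_c = fc_weights[class_idx]                             # (C,)
    return np.tensordot(w_c, feature_map, axes=([0], [0]))  # (H, W)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((512, 7, 7))
weights = rng.standard_normal((14, 512))  # e.g. 14 thoracic disease labels
cam = class_activation_map(fmap, weights, class_idx=3)
print(cam.shape)  # (7, 7)
```

The resulting map is then typically upsampled to the input resolution and thresholded to obtain bounding boxes, as described above.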
\begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth]{fig1.png} \end{center} \caption{The framework of Probabilistic-CAM (PCAM) pooling.} \label{fig1} \end{figure} In this work, we propose a novel and simple extension to the CAM-based framework for lesion localization on chest X-ray with image-level supervision. Specifically, we propose a new global pooling operation that explicitly leverages CAM for localization during training in a probabilistic fashion, namely Probabilistic-CAM (PCAM) pooling. Figure~\ref{fig1} shows the framework of PCAM pooling. A fully convolutional backbone network first processes the input chest X-ray image and generates a feature map. Then, for a particular label of thoracic disease, e.g. ``Pneumonia'', each feature embedding within the feature map goes through a fully connected (fc) layer implemented as a $1 \times 1$ convolutional layer and generates a class activation score that monotonically measures the disease likelihood of that embedding. Unlike the standard practice that directly uses the class activation score for localization, we further bound it with the sigmoid function and interpret the output as the disease probability of each embedding. Finally, the output probability map is normalized into per-embedding attention weights, following the multiple-instance learning (MIL) framework~\cite{ilse2018attention,yao2018weakly}, which are used to pool the original feature map by weighted average pooling. The pooled embedding goes through the same fc layer introduced above and generates the image-level disease probability for training. At inference time, we directly use the probability map for lesion localization, and apply a simple probability thresholding to obtain disease regions and bounding boxes. PCAM pooling does not introduce any additional training parameters and is easy to implement. 
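A minimal forward pass of PCAM pooling as described above could be sketched as follows (our own illustration based on the text, not the authors' implementation; shapes and names are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pcam_pool(feature_map, w, b):
    """PCAM pooling for one disease label.
    feature_map: (C, H, W); w: (C,), b: scalar -- the shared 1x1-conv/fc weights.
    Returns (image-level probability, per-location probability map)."""
    scores = np.tensordot(w, feature_map, axes=([0], [0])) + b  # class activation map, (H, W)
    prob_map = sigmoid(scores)                  # per-location disease probabilities
    attn = prob_map / prob_map.sum()            # normalize to attention weights
    pooled = (feature_map * attn).sum(axis=(1, 2))  # weighted average pooling, (C,)
    image_prob = sigmoid(w @ pooled + b)        # same fc layer on the pooled embedding
    return image_prob, prob_map

rng = np.random.default_rng(0)
p_img, p_map = pcam_pool(rng.standard_normal((256, 7, 7)),
                         rng.standard_normal(256) * 0.05, 0.0)
print(0.0 <= p_img <= 1.0, p_map.shape)  # True (7, 7)
```

At inference, `p_map` would be thresholded directly to obtain lesion regions, as described in the text.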
We evaluate the efficacy of PCAM pooling for lesion localization with image-level supervision on the ChestX-ray14~\cite{wang2017chestx} dataset. A ResNet-34~\cite{he2016deep} model trained with PCAM pooling significantly outperforms the ChestX-ray14 baseline~\cite{wang2017chestx} in both the classification task and the localization task. Qualitative visual examination shows that the probability maps generated by PCAM pooling tend to have clear and sharp boundaries around lesion regions compared to the typical class activation map. \section{Related work} \subsection{Weakly supervised lesion localization} Various methods have been proposed to localize lesions on chest X-ray through image-level supervision~\cite{wang2017chestx,tang2018attention,sedai2018deep,yao2018weakly,wang2018chestnet}. Wang et al.~\cite{wang2017chestx} introduce the ChestX-ray14 dataset, together with a baseline for evaluating weakly supervised lesion localization. Instead of the typical global average pooling, Wang et al. use Log-Sum-Exp (LSE) pooling~\cite{pinheiro2015image} to encourage the model to focus more on the discriminative regions of the chest X-ray. Sedai et al.~\cite{sedai2018deep} utilize the intermediate feature maps from CNN layers at different scales, together with learned layer reference weights, to improve the localization performance on small lesions. Tang et al.~\cite{tang2018attention} combine a CNN with attention-guided curriculum learning to gradually learn distinctive convolutional features for localization. Most of these approaches utilize the standard CAM technique to localize lesions; our proposed PCAM pooling serves as an extension to the standard CAM to improve lesion localization with image-level supervision. \subsection{Global pooling}\label{sec2.2} Global average pooling is arguably the most widely used global pooling operation for image classification.
While it is less prone to overfitting, as the network tries to identify all discriminative regions of an object in natural images~\cite{zhou2016learning}, it may also fail to highlight the most discriminative regions within a chest X-ray, given that most chest X-ray images share the same anatomic structures and may only differ in fine-grained lesion regions. Therefore, different global pooling operations have also been used to analyze chest X-rays. For example, Wang et al.~\cite{wang2017chestx} use LSE pooling, which can be viewed as an adjustable operation between max pooling and average pooling, controlled by the hyper-parameter $\gamma$. We summarize the mathematical differences and correlations between different global pooling operations in Table~\ref{tab1}. In particular, $X$ with shape $(C, H, W)$ denotes the feature map from the last convolutional layer of a CNN. $x$ denotes the feature embedding of length $C$ after global pooling. $C$ denotes the channel dimension, and $H$ and $W$ denote the height and width of the feature map.
\begin{table} \centering \caption{Different types of global pooling.}\label{tab1} \begin{tabular}{|l|l|} \hline Pooling type & Formulation\\ \hline Average & $x_c = \sum_{i,j}^{H,W} w_{c,i,j} X_{c,i,j} ,\ w_{c,i,j} = \frac{1}{H*W}$ \\ Linear~\cite{wang2019comparison} & $x_c = \sum_{i,j}^{H,W} w_{c,i,j} X_{c,i,j} ,\ w_{c,i,j} = \frac{X_{c,i,j}}{\sum_{i,j}^{H,W} X_{c,i,j}}$ \\ Exponential~\cite{wang2019comparison} & $x_c = \sum_{i,j}^{H,W} w_{c,i,j} X_{c,i,j} ,\ w_{c,i,j} = \frac{\exp(X_{c,i,j})}{\sum_{i,j}^{H,W} \exp(X_{c,i,j})}$ \\ LSE~\cite{pinheiro2015image} & $x_c = \frac{1}{\gamma} \log\left[ \frac{1}{H*W} \sum_{i,j}^{H,W} \exp(\gamma X_{c,i,j}) \right]$ \\ LSE-LBA~\cite{yao2018weakly} & $s = \frac{1}{\gamma_0 + \exp(\beta)} \log\left[ \frac{1}{H*W} \sum_{i,j}^{H,W} \exp\left[(\gamma_0 + \exp(\beta)) S_{i,j}\right] \right]$ \\ Attention~\cite{ilse2018attention} & $x = \sum_{i,j}^{H,W} w_{i,j} X_{i,j} ,\ w_{i,j} = \frac{\exp\left[\mathbf{w}^\intercal\tanh(\mathbf{V} X_{i,j})\right]}{\sum_{i,j}^{H,W} \exp\left[\mathbf{w}^\intercal\tanh(\mathbf{V} X_{i,j})\right]} $ \\ \hline \end{tabular} \end{table} Global pooling operations differ mainly in how they compute the weights for feature map averaging. Note that most of the pooling operations are performed for each channel $X_{c,i,j}$ independently, except for Attention pooling~\cite{ilse2018attention}, which computes the attention weight at the embedding level $X_{i,j}$ with extra trainable parameters, i.e. $\mathbf{V}, \mathbf{w}$ in Table~\ref{tab1}. All the channels within the same embedding share the same weight. Attention pooling follows the multiple-instance learning (MIL) framework: each embedding is treated as an instance, and the chest X-ray, as a bag of instances, is positive for a given thoracic disease as long as one of its instances is positive.
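For reference, the per-channel pooling variants in Table~\ref{tab1} can be sketched in a few lines of NumPy. This is our own illustrative implementation (not the authors' code); the $(C, H, W)$ layout follows the notation above, and the LSE and exponential variants are written with the usual max-shift for numerical stability, which does not change their values:

```python
import numpy as np

def average_pool(X):
    # x_c = mean over spatial positions, per channel
    return X.mean(axis=(1, 2))

def linear_pool(X):
    # weights proportional to the activations themselves (assumes X > 0)
    w = X / X.sum(axis=(1, 2), keepdims=True)
    return (w * X).sum(axis=(1, 2))

def exponential_pool(X):
    # softmax weights over spatial positions, per channel
    e = np.exp(X - X.max(axis=(1, 2), keepdims=True))  # max-shift stabilization
    w = e / e.sum(axis=(1, 2), keepdims=True)
    return (w * X).sum(axis=(1, 2))

def lse_pool(X, gamma=1.0):
    # Log-Sum-Exp pooling; gamma -> 0 approaches average, gamma -> inf approaches max
    m = X.max(axis=(1, 2), keepdims=True)  # max-shift stabilization
    return m.squeeze() + np.log(np.exp(gamma * (X - m)).mean(axis=(1, 2))) / gamma
```

On a spatially constant feature map all four operations return the same value, while on a varying map LSE pooling lies between the average and the max, moving toward the max as $\gamma$ grows.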
PCAM pooling follows the same MIL framework, but uses a different method to compute the attention weight for each embedding, based on the localization ability of CAM and without introducing extra trainable parameters. LSE-LBA~\cite{yao2018weakly} also falls within the MIL framework, but it performs the pooling operation on the saliency map $S$ of shape $(H, W)$ instead of the feature map $X$ to obtain a saliency score $s$ for training. \section{Probabilistic-CAM (PCAM) Pooling} The main idea of PCAM pooling is to explicitly leverage the localization ability of CAM~\cite{zhou2016learning} through the global pooling operation during training. Using the same notation as Section~\ref{sec2.2}, given a fully convolutional network trained for multi-task binary classification, the class activation map of a particular thoracic disease is given by $\{s_{i,j} = \mathbf{w}^\intercal X_{i,j} + b|i,j \in H,W\}$. $X_{i,j}$ is the feature embedding of length $C$ at position $(i, j)$ of a feature map $X$ with shape $(C, H, W)$ from the last convolutional layer. $\mathbf{w}, b$ are the weights and bias of the last fc layer for binary classification. In other words, $s_{i,j}$ is the logit before the sigmoid function under the binary classification setting. $s_{i,j}$ monotonically measures the disease likelihood of $X_{i,j}$, and in the standard CAM framework it is used to generate the localization heatmap after the model is trained. PCAM pooling instead utilizes $s_{i,j}$ to guide lesion localization during training through the global pooling operation under the MIL framework~\cite{ilse2018attention}. The MIL framework assumes that the chest X-ray, as a bag of embeddings, is positive as long as one of the embeddings is positive. To measure each embedding's contribution to the whole bag, the MIL framework assigns normalized attention weights to each embedding for weighted global average pooling~\cite{ilse2018attention}. Because the numerical range of $s_{i,j}$ is unbounded, i.e.
$s_{i,j} \in (-\infty, +\infty)$ in theory, it is neither interpretable nor directly applicable for computing the attention weights. Therefore, we further bound $s_{i,j}$ with the sigmoid function, $p_{i,j} = \text{sigmoid}(s_{i,j})$, and normalize $p_{i,j}$ into the attention weights. In summary, PCAM pooling can be formulated as \begin{eqnarray}\label{eq1} x = \sum_{i,j}^{H,W} w_{i,j} X_{i,j} ,\ w_{i,j} = \frac{\text{sigmoid}(\mathbf{w}^\intercal X_{i,j} + b)}{\sum_{i,j}^{H,W} \text{sigmoid}(\mathbf{w}^\intercal X_{i,j} + b)} \end{eqnarray} where $w_{i,j}$ is the attention weight for $X_{i,j}$ and $x$ is the pooled feature embedding, which goes through the same fc layer for the final image-level classification. At inference time, we interpret the sigmoid-bounded $p_{i,j}$ as the probability of embedding $X_{i,j}$ being positive, hence the name Probabilistic-CAM, and we use the probability map $\{p_{i,j}|i,j \in H,W\}$ as the localization heatmap. We apply a simple probability threshold to the probability map to obtain regions of interest. In comparison, because $s_{i,j}$ is unbounded, with different numerical ranges in different chest X-ray images, it is usually normalized into $[0, 255]$ within each image and thresholded with some ad-hoc range, e.g. $[60, 180]$ in~\cite{wang2017chestx}, to generate regions of interest. We show in Section~\ref{sec4.3} that PCAM pooling generates localization heatmaps with better visual quality around lesion boundary regions compared to the standard CAM. \section{Experiments} \subsection{Dataset and experiments setup}\label{sec4.1} We evaluate lesion localization with image-level supervision on the ChestX-ray14~\cite{wang2017chestx} dataset, which contains 112,120 frontal-view chest X-ray images with 14 thoracic disease labels. 8 out of the 14 diseases are further annotated with 984 bounding boxes. We randomly split the official train\_valid set into $75\%$ for training and $25\%$ for validation.
On the official test set, we evaluate the classification task on the 14 diseases and the localization task on the 8 diseases that have bounding boxes. Note that the 984 bounding boxes are not used for training. We use ResNet-34~\cite{he2016deep} as the backbone network and process the input images at the original $1024 \times 1024$ scale following~\cite{wang2017chestx}. The network is trained with a batch size of 36 and a learning rate of $1e^{-4}$ for 10 epochs. We balance the binary cross entropy loss of positive and negative samples within each batch following~\cite{wang2017chestx}. For the localization task, we first apply a probability threshold of 0.9 to the probability maps from PCAM pooling, then generate bounding boxes that cover the isolated regions in the binary masks. To compare the visual quality of localization heatmaps from PCAM pooling with previous methods, we also train a ResNet-34 with LSE pooling following~\cite{wang2017chestx}. We normalize the class activation maps from LSE pooling into $[0, 255]$ for each image individually, and then apply a threshold of 180 following~\cite{wang2017chestx}. Note that the performances of LSE pooling reported in Table~\ref{tab2} and Table~\ref{tab3} are from Wang et al.~\cite{wang2017chestx}.
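The PCAM pooling operation of Eq.~\eqref{eq1} is straightforward to implement. Below is a minimal NumPy sketch for a single disease label; it is our own illustration, not the authors' code, and in practice the fc parameters $\mathbf{w}, b$ live inside the CNN rather than being passed around as arrays:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def pcam_pool(X, w, b):
    """PCAM pooling (Eq. 1) for one disease label.

    X: feature map, shape (C, H, W); w: fc weights, shape (C,); b: scalar bias.
    Returns the pooled embedding x (C,), the probability map p (H, W),
    and the image-level disease probability from the same fc layer.
    """
    s = np.einsum('c,chw->hw', w, X) + b      # class activation map s_ij
    p = sigmoid(s)                            # bounded probability map p_ij
    weights = p / p.sum()                     # normalized attention weights w_ij
    x = np.einsum('hw,chw->c', weights, X)    # weighted average pooling
    image_prob = sigmoid(w @ x + b)           # same fc layer applied to x
    return x, p, image_prob
```

At inference, `p` would be thresholded (at 0.9 in the experiments above) to obtain regions of interest.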
\subsection{Classification task}\label{sec4.2} \begin{table} \centering \caption{Classification AUCs of the 14 diseases on the ChestX-ray14 test set.}\label{tab2} \begin{tabular}{|l|c|c|c|c|c|} \hline Method & LSE~\cite{wang2017chestx} & LSE-LBA~\cite{yao2018weakly} & AGCL~\cite{tang2018attention} & ChestNet~\cite{wang2018chestnet} & PCAM pooling \\ \hline Atelectasis & 0.700 & 0.733 & 0.756 & 0.743 & \textbf{0.772} \\ Cardiomegaly & 0.810 & 0.856 & \textbf{0.887} & 0.875 & 0.864 \\ Effusion & 0.759 & 0.806 & 0.819 & 0.811 & \textbf{0.825} \\ Infiltration & 0.661 & 0.673 & 0.689 & 0.677 & \textbf{0.694} \\ Mass & 0.693 & 0.777 & \textbf{0.814} & 0.783 & 0.813 \\ Nodule & 0.669 & 0.718 & 0.755 & 0.698 & \textbf{0.783} \\ Pneumonia & 0.658 & 0.684 & \textbf{0.729} & 0.696 & 0.721 \\ Pneumothorax & 0.799 & 0.805 & 0.850 & 0.810 & \textbf{0.868} \\ Consolidation & 0.703 & 0.711 & 0.728 & 0.726 & \textbf{0.732} \\ Edema & 0.805 & 0.806 & \textbf{0.848} & 0.833 & 0.833 \\ Emphysema & 0.833 & 0.842 & 0.908 & 0.822 & \textbf{0.931} \\ Fibrosis & 0.786 & 0.743 & 0.818 & 0.804 & \textbf{0.819} \\ Pleural Thickening & 0.684 & 0.724 & 0.765 & 0.751 & \textbf{0.788} \\ Hernia & 0.872 & 0.775 & 0.875 & \textbf{0.900} & 0.784 \\ \hline \end{tabular} \end{table} Table~\ref{tab2} shows the Area Under the receiver operating characteristic Curves (AUCs) of the classification task on the 14 thoracic diseases from the ChestX-ray14 official test set. PCAM pooling outperforms most of the other state-of-the-art methods including the baseline reported in ChestX-ray14~\cite{wang2017chestx}. We suspect explicitly utilizing CAM for localization during training may also benefit the classification task. Note the results from AGCL~\cite{tang2018attention} are obtained by training with additional severity-level information. 
\subsection{Localization task}\label{sec4.3} \begin{table} \centering \caption{Localization accuracies and average false positives of the 8 diseases that have bounding boxes on the ChestX-ray14 test set.}\label{tab3} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline Method & AT & CM & EF & IF & MS & ND & PMN & PMT \\ \hline \multicolumn{9}{|c|}{Localization accuracy, IoBB $>$ 0.5} \\ \hline LSE-baseline~\cite{wang2017chestx} & 0.2833 & 0.8767 & 0.3333 & 0.4227 & 0.1411 & 0.0126 & 0.3833 & 0.1836 \\ PCAM pooling & \textbf{0.3500} & \textbf{0.9657} & \textbf{0.5359} & \textbf{0.7642} & \textbf{0.4118} & \textbf{0.0759} & \textbf{0.7667} & \textbf{0.1939} \\ \hline \multicolumn{9}{|c|}{Average false positive} \\ \hline LSE-baseline~\cite{wang2017chestx} & 1.020 & 0.563 & \textbf{0.927} & \textbf{0.659} & \textbf{0.694} & \textbf{0.619} & \textbf{1.013} & \textbf{0.529} \\ PCAM pooling & \textbf{0.867} & \textbf{0.021} & 1.137 & 1.805 & 1.000 & 1.228 & 1.200 & 1.684 \\ \hline \end{tabular} \begin{tablenotes} \small \item AT: Atelectasis, CM: Cardiomegaly, EF: Effusion, IF: Infiltration, MS: Mass, ND: Nodule, PMN: Pneumonia, PMT: Pneumothorax. \end{tablenotes} \end{table} Table~\ref{tab3} shows the localization accuracies and the average false positives of the localization task on the 8 thoracic diseases that have bounding boxes. We use the Intersection over the predicted B-Box area ratio (IoBB) to measure the overlap between predicted bounding boxes and ground truth bounding boxes annotated by radiologists, following~\cite{wang2017chestx}. A correct localization is defined as at least one predicted bounding box overlapping the ground truth bounding box with IoBB $>$ 0.5~\cite{wang2017chestx}. PCAM pooling outperforms the baseline localization accuracy~\cite{wang2017chestx} by a significant margin on all of the diseases, demonstrating its efficacy in weakly supervised lesion localization.
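The IoBB criterion used above can be sketched as a small helper (our own illustration; boxes are assumed to be given as $(x_1, y_1, x_2, y_2)$ corner coordinates):

```python
def iobb(pred, gt):
    """Intersection over the predicted Bounding-Box area.

    Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.
    Unlike IoU, the denominator is the predicted box area only.
    """
    ix = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    iy = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = ix * iy
    pred_area = (pred[2] - pred[0]) * (pred[3] - pred[1])
    return inter / pred_area

def correct_localization(pred_boxes, gt_boxes, thr=0.5):
    # correct if at least one predicted box overlaps a ground-truth box
    # with IoBB above the threshold
    return any(iobb(p, g) > thr for p in pred_boxes for g in gt_boxes)
```

Note that a small predicted box fully inside a large ground-truth box scores IoBB $= 1$, which is why the metric is paired with the average-false-positive count in Table~\ref{tab3}.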
Figure~\ref{fig2} shows a few selected examples of the probability maps generated by PCAM pooling and the class activation maps generated by LSE pooling, together with the predicted bounding boxes. Compared to the class activation maps, the probability maps are visually clearer, with sharp boundaries around lesion regions. We attribute the improved visual quality to the probabilistic interpretation of the sigmoid-bounded class activation map and to explicitly using it for training through global pooling. We notice that the probability maps generated by PCAM pooling tend to produce larger regions of interest than the class activation maps from LSE pooling, especially when the ground truth regions are small, such as ``Nodule'' in Figure~\ref{fig2}. This may explain why PCAM pooling has relatively larger average false positives than CAM with LSE pooling. \begin{figure} \centering \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{Atelectasis.jpg} \caption{Atelectasis} \label{fig2:a} \end{subfigure} ~ \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{Cardiomegaly.jpg} \caption{Cardiomegaly} \label{fig2:b} \end{subfigure} ~ \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{Effusion.jpg} \caption{Effusion} \label{fig2:c} \end{subfigure} ~ \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{Pneumonia.jpg} \caption{Pneumonia} \label{fig2:d} \end{subfigure} ~ \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{Mass.jpg} \caption{Mass} \label{fig2:e} \end{subfigure} ~ \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{Nodule.jpg} \caption{Nodule} \label{fig2:f} \end{subfigure} ~ \caption{Selected samples of localization heatmaps and their bounding boxes generated by LSE pooling and PCAM pooling on the test set of ChestX-ray14~\cite{wang2017chestx}.
In each subfigure, the left panel is the original chest X-ray with the ground truth bounding boxes (green), the middle panel is the class activation map and predicted bounding boxes (blue) from LSE pooling, and the right panel is the probability map and predicted bounding boxes (blue) from PCAM pooling.} \label{fig2} \end{figure} \section{Conclusion} In this work, we present Probabilistic-CAM (PCAM) pooling, a new global pooling operation that explicitly leverages the localization ability of CAM during training. PCAM pooling is easy to implement and does not introduce any additional training parameters. Experiments on weakly supervised lesion localization on the ChestX-ray14~\cite{wang2017chestx} dataset demonstrate its efficacy in improving both the classification task and the localization task compared to several state-of-the-art baselines. Visual inspection of the probability maps generated by PCAM pooling shows clear and sharp boundaries around lesion regions compared to the standard class activation maps. Currently, PCAM pooling tends to generate localization maps that enlarge regions of interest, which may increase false positives, especially for small lesions. Reducing this effect is a direction of our future work. \small \bibliographystyle{abbrv} \input{document.bbl} \end{document}
\section{Introduction} The space of diffeomorphisms splits into two classes: those with zero entropy and those with positive entropy (by which we always mean topological entropy). The former contains
Morse-Smale diffeomorphisms: their nonwandering set is formed by finitely many periodic points. The latter contains the systems exhibiting a transverse homoclinic orbit, \emph{i.e.} an orbit which accumulates in the past and in the future on the same periodic orbit and which persists under small perturbations: the nonwandering set is uncountable. In particular, both classes contain $C^1$-open sets. It has been proved that Morse-Smale systems and those having a transverse homoclinic intersection together define a $C^1$-dense open set~\cite{PuSa,C}. However, even in the $C^1$ context, the dynamics of systems belonging to the interface of these two classes is not well understood, while in higher topologies almost nothing is known. One goal would be to characterize the systems in the boundary of the zero entropy class and, in particular, to identify, if it exists, the universal phenomenon that generates entropy. In a more general context, our central question here is the transition between simple and complicated dynamics, seen from two different angles: the fundamental one and the applied one. The latter because the transition that we consider is the trace, on a Poincar{\'e} map, of a transition to chaos of dissipative flows in ${\mathbb R}^3$, as observed in a variety of natural and engineering contexts modeled that way. This happens both in some forced damped oscillators, for which one observes the formation of horseshoes for a Poincar{\'e} map, and in autonomous flows where the chaos is linked to a Shil'nikov bifurcation in $C^\omega$ regularity~\cite{shilnikov}, or even $C^{1+\text{Lip}}$ regularity~\cite{Tr}. We can think about two related problems when considering this central question: \smallskip -- \emph{the transition to chaos} (\emph{i.e.}, the transition from zero to positive entropy),\footnote{In dimensions $1$ and $2$, the topological entropy is continuous with respect to the $C^{\infty}$-topology by~\cite{Mi,Ka,Yo}.
In particular, there is no jump in entropy at the transition. For $C^1$ families on the interval, the transition to positive entropy requires infinitely many period doubling bifurcations \cite{BlHa}.} \smallskip -- \emph{the transition from finitely to infinitely many periods of hyperbolic periodic orbits.} \hspace{-2cm}\mbox{} \smallskip In the one-dimensional context, the natural ordering on the interval allows the development of a ``combinatorial theory'', which describes properties of orbits related to this ordering. An example of such results is Sharkovskii's hierarchy of periodic orbits~\cite{sharkovskii}; it implies in particular that any system with zero entropy only admits periodic points of period $2^n$. One paradigmatic example is the case of unimodal maps: Coullet-Tresser and, independently, Feigenbaum conjectured~\cite{CT,F} that the maps in the boundary of the zero entropy class are limits of a period doubling cascade with universal metric properties, under rather mild smoothness assumptions, and are infinitely renormalizable (see also~\cite{Chandra}). In those papers a renormalization operator was introduced\footnote{In \cite{CT}, Coullet and Tresser recognized that operator as similar to the renormalization operator introduced in Statistical Mechanics by Kenneth Wilson, following a prehistory in the context of high energy physics.} and it was shown that the numerical observations could be explained if this operator, defined on an appropriate space of functions, had a hyperbolic fixed point. The central results of the universality theory for unimodal maps have been proved by Lyubich~\cite{L} for analytic unimodal maps and extended to lower regularity in~\cite{FMP}. Partial results about multimodal maps and the associated transition to chaos have been obtained by many authors (see \emph{e.g.} \cite{MiTr} and references cited or citing).
\paragraph{a -- Mildly dissipative diffeomorphisms of the disc.} The first step towards that universal goal in higher dimension is to consider embeddings of the disc $\mathbb{D}$. These embeddings can be extended to diffeomorphisms of the two-dimensional sphere by gluing a repelling disc, as detailed in~\cite{BoFr}.\footnote{Notice that \cite{BoFr} is the first paper studying cascades of period doubling in dimensions 1 and 2.} Therefore, to simplify notation, we will call \emph{dissipative diffeomorphisms of the disc} the $C^r$ embeddings $f\colon \DD\to f(\DD)\subset \text{Interior}(\DD)$ with $r>1$, such that $|\det(Df(x))|<1$ for any $x\in \DD$. Observe that any $f$-invariant ergodic probability measure $\mu$ which is not supported on a hyperbolic sink has one negative Lyapunov exponent and another one which is non-negative. In particular, for $\mu$-almost every point $x$, there exists a well-defined one-dimensional stable manifold $W^s(x)$. We denote by $W^s_\DD(x)$ the connected component of $W^s(x)\cap \DD$ containing $x$. We strengthen the notion of dissipation: \begin{definition}\label{SD defi} A dissipative diffeomorphism of the disc is \emph{mildly dissipative} if for any ergodic measure $\mu$ not supported on a hyperbolic sink, and for $\mu$-almost every $x$, the curve $W^s_\DD(x)$ separates $\DD$.
\end{definition} That notion has been introduced for arbitrary surfaces\footnote{In \cite{CP} these systems are called \emph{strongly dissipative diffeomorphisms} since many results were only applied to systems with very small Jacobian; in the context of the disc we call them \emph{mildly dissipative}, since there are classes of diffeomorphisms with Jacobian that is not so small, such as the H\'enon maps, that satisfy the main property of the definition.} in \cite{CP}, where it is shown that mild dissipation holds for large classes of systems: for instance, it holds for $C^2$ open sets of diffeomorphisms of the disc, and for polynomial automorphisms of $\mathbb{R}^2$ whose Jacobian is sufficiently close to $0$, including the diffeomorphisms from the H\'enon family with Jacobian of modulus less than $1/4$ (up to restricting to an appropriate trapped disc). This class captures certain properties of one-dimensional maps but keeps two-dimensional features, showing all the well-known complexity of dissipative surface diffeomorphisms. The dynamics of this new class is, in some sense, intermediate between one-dimensional dynamics and general surface diffeomorphisms. \paragraph{b -- Renormalization.} As mentioned before, the essential mechanism in the transition to chaos for interval endomorphisms is the period doubling cascade, and the main universal feature of systems in the boundary of zero entropy is that they are infinitely renormalizable. A similar result can be proved for mildly dissipative diffeomorphisms of the disc that belong to the boundary of the zero entropy class. A diffeomorphism $f$ of the disc is \emph{renormalizable} if there exist a compact set $D\subset \mathbb{D}$ homeomorphic to the unit disc and an integer $k>1$ such that $f^i(D)\cap D=\emptyset$ for each $1\leq i<k$ and $f^k(D)\subset D$.
Moreover $f$ is \emph{infinitely renormalizable} if there exists an infinite nested sequence of renormalizable attracting periodic domains with arbitrarily large periods. For instance \cite{GvST} built a $C^\infty$-diffeomorphism which has vanishing entropy and is infinitely renormalizable (see also figure~\ref{f.odometer}). \begin{theorem}\label{t.theoremA} For any mildly dissipative diffeomorphism $f$ of the disc whose topological entropy vanishes, \begin{itemize} \item[--] either $f$ is renormalizable, \item[--] or any forward orbit of $f$ converges to a fixed point. \end{itemize} \end{theorem} Morse-Smale diffeomorphisms (whose non-wandering set is a finite set of hyperbolic periodic points) are certainly not infinitely renormalizable. It is natural to generalize this class of diffeomorphisms in order to allow bifurcations of periodic orbits. \begin{definition}\label{d.generalizedMS} A diffeomorphism is \emph{generalized Morse-Smale} if: \begin{itemize} \item[--] the $\omega$-limit set of any forward orbit is a periodic orbit, \item[--] the $\alpha$-limit set of any backward orbit in $\mathbb{D}$ is a periodic orbit, \item[--] the period of all the periodic orbits is bounded by some $K>0$. \end{itemize} \end{definition} Clearly these diffeomorphisms have zero entropy. We will see in section~\ref{s.gms} that the set of mildly dissipative generalized Morse-Smale diffeomorphisms of the disc is $C^1$ open. A stronger version of theorem~\ref{t.theoremA}, proved in section~\ref{s.renormalize} (see theorem \ref{t.renormalize-prime}), states that in the renormalizable case there exist finitely many renormalizable domains such that the limit set in their complement consists of fixed points. That version implies: \begin{corollary}\label{c.dichotomy0} A mildly dissipative diffeomorphism of the disc with zero entropy is \begin{itemize} \item[--] either infinitely renormalizable, \item[--] or generalized Morse-Smale. 
\end{itemize} \end{corollary} \paragraph{c -- Boundary of zero entropy.} The set of $C^r$ diffeomorphisms, $r>1$, with positive entropy is $C^1$ open (see~\cite{Ka}). One may thus consider how positive entropy appears: a diffeomorphism belongs to the boundary of zero entropy if its topological entropy vanishes, but it is the $C^1$ limit of diffeomorphisms with positive entropy. The previous results immediately give: \begin{corollary}\label{c.infinitely-renormalizable} A mildly dissipative diffeomorphism of the disc in the boundary of zero entropy is infinitely renormalizable. \end{corollary} We may ask if the converse also holds: \begin{question}\label{q.approximate} In the space of mildly dissipative $C^r$ diffeomorphisms of the disc, $r>1$, can one approximate any diffeomorphism exhibiting periodic orbits of arbitrary large period by diffeomorphisms with positive entropy? \end{question} This would imply that generalized Morse-Smale diffeomorphisms are the mildly dissipative diffeomorphisms of the disc with robustly vanishing entropy. Question~\ref{q.approximate} has a positive answer if one considers $C^1$-approximations of $C^2$-diffeomorphisms (this is essentially corollary 2 in~\cite{PuSa2}). In a similar spirit, it is unknown (even in the $C^1$-topology) if diffeomorphisms with zero entropy are limit of generalized Morse-Smale diffeomorphisms. 
\begin{question}\label{q.approximate2} In the space of mildly dissipative $C^r$ diffeomorphisms of the disc, $r>1$, can one approximate any diffeomorphism with zero entropy by a generalized Morse-Smale diffeomorphism?\footnote{For issues related to the two last questions in the context of interval maps, see \emph{e.g.}~\cite{HuTr} and references therein.} \end{question} \paragraph{d -- Decomposition of the dynamics with zero entropy.} Let us recall that Conley's theorem (see~\cite[Chapter 9.1]{robinson}) decomposes the dynamics of homeomorphisms: the chain-recurrent set splits into disjoint invariant compact sets called \emph{chain-recurrence classes}. We now describe the dynamics inside the chain-recurrence classes of mildly dissipative diffeomorphisms with zero entropy. Let $h$ be a homeomorphism of the Cantor set $\mathcal{K}$. One considers partitions of the form $\mathcal{K}=K\cup h(K)\cup\dots\cup h^{p-1}(K)$ into clopen sets that are cyclically permuted by the dynamics. We say that $h$ is an \emph{odometer} if there exist such partitions into clopen sets with arbitrarily small diameters. The set of periods $p$ is a multiplicative semi-group which uniquely determines the odometer. Each odometer is minimal and preserves a unique probability measure (this allows one to talk about almost every point $x\in \mathcal{K}$ of the odometer). Figure~\ref{f.odometer} represents a diffeomorphism of the disc which induces an odometer on an invariant Cantor set. \begin{corollary}\label{c.structure} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero entropy.
Then any chain-recurrence class $\mathcal{C}$ of $f$ is: \begin{itemize} \item[--] either \emph{periodic}: there exists a compact connected set $C$ and an integer $n\geq 1$ such that $\mathcal{C}=C\cup\dots\cup f^{n-1}(C)$ and any point in $C$ is fixed under $f^n$, \item[--] or a \emph{generalized odometer}: there exists an odometer $h$ on the Cantor set $\mathcal{K}$ and a continuous surjective map $\pi\colon \mathcal{C}\to \mathcal{K}$ such that $\pi\circ f=h\circ \pi$ on $\mathcal{C}$. Moreover almost every point $z\in \mathcal{K}$ has at most one preimage under $\pi$. \end{itemize} In addition: \begin{itemize} \item[--] Each generalized odometer is a quasi-attractor, i.e. admits a basis of open neighborhoods $U$ satisfying $f(\overline U)\subset U$. \item[--] The union of the generalized odometers is an invariant compact set $\Lambda$. Outside any neighborhood of $\Lambda$, the set of periods of the periodic orbits is finite. \end{itemize} \end{corollary} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=0]{odometer.pdf} \end{center} \caption{Dynamics exhibiting saddle orbits of each period $2^n$, $n\in \mathbb{N}$, and one odometer.\label{f.odometer}} \end{figure} Corollary~\ref{c.structure} can be compared to a recent result by Le Calvez and Tal~\cite{lecalvez-tal} about transitive sets of homeomorphisms of the $2$-sphere with zero entropy. The methods there are quite different from ours. Note that the dissipation hypothesis is essential: for conservative systems with zero entropy, the dynamics is modeled on integrable systems, see~\cite{franks-handel}. \medskip We do not know if there exist examples of systems exhibiting generalized odometers which are not conjugate to odometers (i.e. such that the map $\pi$ is not injective).
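To make the notion of odometer concrete, here is a minimal sketch of the dyadic ($2$-adic) odometer: points of the Cantor set $\{0,1\}^{\mathbb N}$ are binary sequences, the map is ``add one with carry'', and the clopen sets of the partition $\mathcal{K}=K\cup h(K)\cup\dots\cup h^{p-1}(K)$ with $p=2^n$ are the cylinders fixing the first $n$ digits. The finite truncation below is our own illustrative choice, not from the text:

```python
def dyadic_odometer(x):
    """One step of the 2-adic odometer ('add one with carry') on a binary
    sequence x, least significant digit first.  Here x is a finite tuple,
    i.e. a depth-n truncation of a point of the Cantor set {0,1}^N."""
    y = list(x)
    for i in range(len(y)):
        if y[i] == 0:
            y[i] = 1
            return tuple(y)
        y[i] = 0  # carry propagates to the next digit
    return tuple(y)  # (1,...,1) wraps around to (0,...,0)
```

On the depth-$n$ truncation the map is a cyclic permutation of the $2^n$ cylinders, so the orbit of any point returns to itself after exactly $2^n$ steps, matching the defining partitions of arbitrarily small diameter.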
Another problem concerns the cardinality of these classes: \begin{question} Does there exist a mildly dissipative diffeomorphism of the disc with zero entropy and infinitely many generalized odometers?\footnote{A degenerate $C^1$ example can be extracted from the Denjoy-like example in \cite{BoGamLiTr}.} \end{question} The answer to this question is not known for general one-dimensional $C^r$-endomorphisms. However, for multimodal endomorphisms of the interval, the number of nested sequences of infinitely renormalizable domains is bounded by the number of critical points. In particular, generically the number of nested renormalizable domains is finite. This type of result is not known for surface diffeomorphisms. \paragraph{e -- Periods of renormalizable domains.} For one-dimensional multimodal maps with zero entropy, Sharkovskii's theorem~\cite{sharkovskii} implies that the periods of the renormalizable domains are powers of $2$. In the context of mildly dissipative diffeomorphisms this cannot be true, but a similar result holds when one considers renormalizable domains with ``large period'': \begin{theorem}\label{t.period} Let $f$ be a mildly dissipative diffeomorphism of $\DD$ with zero topological entropy which is infinitely renormalizable. There exist an open set $W$ and $m\geq 1$ such that: \begin{itemize} \item[--] $W$ is a finite disjoint union of topological discs that are trapped by $f^m$, \item[--] the periodic points in $\DD\setminus W$ have period bounded by $m$, \item[--] any renormalizable domain $D\subset W$ of $f^m$ has period of the form $2^k$; $D$ is associated to a sequence of renormalizable domains $D=D_k\subset\dots \subset D_1\subset W$ of $f^m$ with periods $2^{k},\dots, 2$. \end{itemize} \end{theorem} In other words, the period of a renormalizable domain is eventually a power of $2$, meaning that, after replacing $f$ by an iterate, the periods of all the renormalizable domains are powers of $2$.
As explained in the paragraph {\em summary of the proof} below, the proof of theorem \ref{t.period} uses a rigidity argument. This implies an analogue of Sharkovskii's theorem for surface diffeomorphisms: \begin{corollary}\label{c.period} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero topological entropy. There exist two finite families of integers $\{n_1,\dots,n_k\}_{k\geq 1}$ and $\{m_1,\dots,m_\ell\}_{\ell \geq 0}$ such that the set of periods of the periodic orbits of $f$ coincides with \begin{equation}\label{e.period} \operatorname{Per}(f)=\{n_1,\dots,n_k\}\cup \left\{m_i.2^j, \; 1\leq i\leq \ell \text{ and } j\in \mathbb{N}\right\}. \end{equation} \end{corollary} In particular, in the setting of mildly dissipative diffeomorphisms, we get an affirmative answer to the following conjecture, which was formulated by one of us in 1983 and mentioned verbally since then, but appeared in a text (see~\cite{GT}) only a few years later. \begin{conjecture}[Tresser] In the space of $C^k$ orientation preserving embeddings of the $2$-disk, with $k>1$, which are area contracting, generically, maps which belong to the boundary of positive topological entropy have a set of periodic orbits which, except for a finite subset, is made of an infinite number of periodic orbits with periods $m.2^k$ for a given $m$ and all $k \geq 0.$ \end{conjecture} We note that it is possible to realize any set of the form~\eqref{e.period} as the set of periods of a mildly dissipative diffeomorphism of the disc having zero entropy, whereas a diffeomorphism with positive entropy has a different set of periods (it always contains a set of the form $k.{\mathbb N}^*$). In a more general framework, theorem~\ref{t.period} is false if the dynamics is conservative (an integrable twist in the disc may admit all the periods and has vanishing entropy). 
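As a concrete (hypothetical) illustration of a set of the form~\eqref{e.period}, take for instance $\{n_1,n_2\}=\{1,7\}$ and $\{m_1,m_2\}=\{1,3\}$: the finite part is joined with the doubling cascades of $1$ and $3$,

```latex
% A hypothetical instance of a period set of the form (e.period):
% the finite part {1,7} together with the doubling cascades of m_1=1 and m_2=3.
\operatorname{Per}(f)
  = \{1,7\}\cup\left\{1\cdot 2^j,\ 3\cdot 2^j,\; j\in\mathbb{N}\right\}
  = \{1,2,3,4,6,7,8,12,16,24,\dots\}.
```

Apart from the finitely many exceptional periods $1$ and $7$, every sufficiently large period in this set belongs to one of the cascades $m_i.2^j$.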
Previous works in the direction of developing a forcing theory as it follows from Shar\-kov\-skii's theorem (see \cite{GST}) used the ideas and language of braids. For surface diffeomorphisms, a periodic orbit defines a braid type that in turn may or may not force the positivity of topological entropy (the complement of an orbit of period three or larger in the disc can be equipped with a hyperbolic structure from which Nielsen-Thurston theory can be developed). In that sense, permutations are replaced by braids, but the discussion in braid terms cannot be reduced to a discussion in terms of periods as the conjecture formulates. \paragraph{f -- H\'enon family.} Given any $C^r$ endomorphism $h$ of an interval $I\subset {\mathbb R}$ and $b_0>0$, there exists a disc $\DD=I\times (-\epsilon,\epsilon)$ such that the maps defined by \begin{equation}\label{e.extension} f_{b}(x,y)=(h(x)+y, -bx),\quad \text{for}\quad 0<|b|<b_0 \end{equation} are dissipative diffeomorphisms of $\DD$. The (real) H\'enon family is a particular case where $h$ is a quadratic polynomial.\footnote{After studying the Lorenz model for large values of the ``Rayleigh number'' $r$ on the advice of David Ruelle, Yves Pomeau presented this joint work at the observatory of Nice where Michel H\'enon was working. He showed in particular that the time-$t$ map, for $t$ varying from 0 to 1, transforms a well chosen rectangle into an incomplete horseshoe. That night, the legend tells, H\'enon extracted a model of that from his former studies of the conservative case, while Pomeau and Ibanez had preferred to focus on a full double covering for which the mathematics are much simpler. ``The most recognition for the least work,'' H\'enon told Tresser. 
Later, Coullet and Tresser realized that the H{\'e}non map appears to be in the same universality class for period doubling as the one-dimensional quadratic map: this led them to conjecture in 1976 (see~\cite{CT}) that universal period doubling should be observed in fluids, since the H{\'e}non map was built to imitate a Poincar{\'e} map of the Lorenz flow in some parameter ranges, and the quadratic map is the limit of the H{\'e}non map as the dissipation goes to infinity.} As mentioned before, the H\'enon family is mildly dissipative~\cite{CP} for $0<|b|<1/4$ in restriction to a trapped disc. Therefore, all the theorems mentioned above can be applied to these parameters of the H\'enon family and in particular one gets the following corollary. Note that the global dynamical descriptions of the H\'enon family usually assume $|b|\ll 1$ (see~\cite{BC,dCLM,LM}). \begin{corollary}\label{c.henon} Let $f_{b,c}\colon (x,y)\mapsto(x^2+c+y, -bx)$ be a H\'enon map with $b\in(0,1/4)$ and $c\in {\mathbb R}$. If the topological entropy vanishes, then: \begin{itemize} \item[--] for any forward (resp. backward) orbit one of the following cases occurs: \begin{enumerate} \item it escapes to infinity, i.e. it leaves any compact set, \item it converges to a periodic orbit, \item it accumulates on (a subset of) a generalized odometer; \end{enumerate} \item[--] the set of periods has the form described in~\eqref{e.period}. \end{itemize} \end{corollary} \paragraph{g -- Small Jacobian.} For diffeomorphisms of the disc close enough to an endomorphism of the interval and whose entropy vanishes, section \ref{ss.close-endo} proves that the periods of all renormalizable domains (and so the periods of all periodic orbits) are powers of two. More precisely, given a $C^r$ endomorphism of the interval $f_0$, there exists $b_0>0$ such that for any $0<|b|<b_0$ the diffeomorphism $f_{b}$ is mildly dissipative. In particular all the theorems mentioned before can be applied. 
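The dissipation in the family~\eqref{e.extension} can be checked directly, since the Jacobian determinant of $f_b$ is constant:

```latex
% Derivative and Jacobian determinant of f_b(x,y) = (h(x)+y, -bx):
Df_b(x,y)=\begin{pmatrix} h'(x) & 1\\[2pt] -b & 0\end{pmatrix},
\qquad
\det Df_b(x,y)= h'(x)\cdot 0 - 1\cdot(-b) = b .
```

Hence $|\det Df_b|=|b|<1$ at every point of $\DD$ whenever $0<|b|<1$: areas are uniformly contracted, which is the dissipation property used above.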
Assuming the Jacobian is sufficiently small, a stronger property holds: \begin{theorem}\label{t.small jacobian} Given a family $(f_b)$ associated to a $C^2$ endomorphism of the interval as in~\eqref{e.extension}, there exists $b_0>0$ such that, for any $b\in (0,b_0)$ and for any diffeomorphism $g$ with zero entropy in a $C^2$-neighborhood of $f_b$, there exists $n_0\in{\mathbb N}\cup\{\infty\}$ satisfying \begin{equation} \operatorname{Per}(g)=\{2^n, \; n<n_0\}. \end{equation} \end{theorem} In particular, the previous theorem can be applied to the H\'enon family and one recovers one of the results in~\cite{dCLM,LM}. \paragraph{h -- Some differences with the one-dimensional approach.} In the context of one-dimensional dynamics of the interval, and in particular for unimodal maps, the renormalization intervals are built using the dynamics around the turning point: the boundary of the interval contains the iterate of a repelling orbit (whose period is a power of two) closest to the turning point, and a preimage of that iterate. For H\'enon maps with small Jacobian, although there is no notion of turning point, renormalization domains are built in \cite{dCLM,LM} by using the local stable manifold of a saddle periodic point of index $1$ and its preimages (those points are the analytic continuations of the repelling points of the one-dimensional map). Our approach can rely neither on the notion of turning point nor on being close to well understood one-dimensional dynamics. So the construction is different and uses the structure of the set of periodic points. Following the unstable branches, one builds a skeleton of the dynamics that allows one to construct the trapping regions and the renormalization domains. \paragraph{i -- The renormalization operator.} It is proved in \cite{dCLM,LM} that infinitely renormalizable real H\'enon-like maps whose Jacobian is small enough admit an appropriately defined renormalization operator. 
After proper affine rescaling, the dynamics (at the period) on the renormalizable attracting domain converges to a smooth quadratic unimodal map which is nothing else than the hyperbolic fixed point of the renormalization operator for the one-dimensional dynamics. It is not difficult to construct mildly dissipative diffeomorphisms with zero entropy which are a priori not close to a unimodal map on the interval (for instance, when the first renormalization domain has period larger than two) and in that case the renormalization scheme developed for H\'enon-like maps with small Jacobian would need to be recast. Although the present paper does not provide a well-defined renormalization operator for mildly dissipative diffeomorphisms of the disc, it gives the existence of nested renormalization domains and deep renormalizations seem to drive the system towards the one-dimensional model. Indeed the renormalization domains eventually have (relative) period two; moreover the return dynamics on these domains recovers certain smoothness properties that are satisfied by diffeomorphisms close to one-dimensional endomorphisms (see section \ref{ss.uniform stable}); \cite{CP} associates a quotient dynamics which, on these ``deep domains'', induces an endomorphism of a real tree. That raises the following question: \begin{question} Given a sequence of nested renormalizable domains, is it true that (after proper rescalings) the sequence of return maps generically converges to a unimodal map? \rm One does not expect to replace ``generic'' by ``general'' because of the expected possible alternate convergence of the renormalizations to more than one fixed point: this happens in dimension 1, see \emph{e.g.} \cite{MiTr} and also \cite{OETr}. 
\end{question} When $f$ is mildly dissipative, the larger Lyapunov exponent of each generalized odometer $\mathcal{C}$ vanishes, hence the iterates of the derivative of $f$ on $\mathcal{C}$ do not grow exponentially; but one can ask if a stronger property holds: {\em given a nested sequence of renormalization domains $(D_n)$ and their induced maps $(f_n)$, are the derivatives $\|Df_n|_{D_n}\|$ uniformly bounded?} \paragraph{j -- New general tools.} A few new results obtained in the present paper hold for any mildly dissipative diffeomorphism of the disc. \begin{description} \item[\it Closing lemma.] One of them is a new version of the closing lemma proved in \cite{CP}, which states that for mildly dissipative diffeomorphisms of the disc, the support of any invariant measure is contained in the closure of the set of periodic points. Our improvement (theorem \ref{t.measure local}) localizes the periodic points: given an invariant cellular connected compact set $\Lambda$, the support of any invariant probability measure on that set is contained in the closure of the periodic points in $\Lambda$. In that sense, theorem \ref{t.measure local} is an extension of a well-known result by Cartwright and Littlewood about the existence of fixed points for invariant cellular sets (see proposition \ref{p.CL}). \item[\it No cycle.] Another one is a generalization of the result proved by Pixton~\cite{pixton} (improving a previous work by Robinson~\cite{robinson1}): it states that for $C^\infty$-generic diffeomorphisms of the sphere, a cyclic accumulation between stable and unstable branches of periodic points can be perturbed to produce a homoclinic connection and positive entropy. 
Theorem \ref{t.cycle} shows that the generic hypothesis is not needed for mildly dissipative diffeomorphisms of the disc: there is no finite sequence of fixed points such that the unstable manifold of each one accumulates on the next point and the unstable manifold of the last one accumulates on the first point (theorems \ref{t.cycle} and \ref{t.cycle2} in section \ref{no cycle section}). This is clear when the intersections between unstable and stable manifolds are transversal, but when they just accumulate, it is more difficult. The strategy consists in building special Jordan domains (that we call \emph{Pixton discs}) from the accumulation of unstable branches on stable manifolds. \end{description} \paragraph{k -- Summary of the proof.} In order to explain the proof strategy, we first present a class of examples of infinitely renormalizable dissipative homeomorphisms of the disc (inspired by the examples in~\cite{GvST}) and explain their main dynamical features. We use them as a prototype model for maps with zero entropy. The proofs below will show that these features (essentially) apply also to infinitely renormalizable mildly dissipative diffeomorphisms. \paragraph{\it Prototype models.} Let $f_0, f_1$ be two Morse-Smale dissipative diffeomorphisms of the disc. The limit set of $f_0$ is given by a fixed saddle whose unstable branches are interchanged and an attracting orbit of period two that revolves around the fixed point: the fixed point is then said to be {\it stabilized} and the attracting orbit is analogous to a period doubling sink for interval maps. The limit set of $f_1$ is given by a fixed attracting point, a saddle of period three (also said to be {\it stabilized}) that revolves around the fixed point which anchors one of the unstable branches of the saddle periodic points, and an attracting periodic orbit (also of period three) that attracts the other unstable branch of the saddles. 
Both diffeomorphisms are depicted in figure~\ref{MSpdf}. \begin{figure} \begin{center} \includegraphics[width=15cm,angle=0]{MS.pdf} \end{center} \caption{The diffeomorphisms $f_0$ (left) and $f_1$ (right). The attracting domains are depicted with a dashed boundary.\label{MSpdf}} \end{figure} Observe that $f_0$ has an attracting disc of period $2,$ whose iterates belong to two different regions bounded by the local stable manifold of the saddle; $f_1$ has an attracting disc of period three contained inside the disjoint regions bounded by the local stable manifolds of the saddle of period three (these regions, in both cases, are called {\it decorated regions}). Given a sequence $(k_i)\in\{0,1\}^{\mathbb N}$, one can build a sequence of dissipative diffeomorphisms $g_i= f_{k_i}\sqcupplus f_{k_{i-1}}\sqcupplus\dots\sqcupplus f_{k_0}$ with a sink of period $\tau_i:=\prod_{j=0}^i(2+k_j)$. The symbol $\sqcupplus$ means that the diffeomorphism $f_{k_j}$ is pasted in the basin of the sink of $g_{j-1}$ (by writing $f_{k_j}$ as the composition of $\tau_{j-1}$ diffeomorphisms). In that way, $g_i$ has a nested sequence of attracting discs $D_0\supset D_1\supset\dots\supset D_i$ of periods $\tau_0,\dots,\tau_i$. Each diffeomorphism $g_i$ is Morse-Smale and the sequence $(g_i)$ converges to a homeomorphism whose limit set is made of periodic points and of an odometer supported on a Cantor set (the intersection of the nested sequence of attracting domains). We make some remarks: (i) The construction shows that there exist diffeomorphisms with vanishing entropy and with periodic points whose period is not $2^n$. (ii) The sequence can converge to a mildly dissipative diffeomorphism if $k_i=0$ for $i$ large (the convergence towards a diffeomorphism is more difficult, see \cite{GvST}). (iii) The previous construction can be performed with more pasted diffeomorphisms: the period of the saddle and the non-fixed sink may be larger; one can also consider more complicated Morse-Smale systems. 
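As a concrete instance (with the convention, assumed here, that the product defining $\tau_i$ starts at $j=0$), consider $g_2=f_1\sqcupplus f_0\sqcupplus f_0$, i.e. $(k_0,k_1,k_2)=(0,0,1)$. The periods of the nested attracting discs $D_0\supset D_1\supset D_2$ are then

```latex
% Periods of the nested attracting discs of g_2 = f_1 ⊞ f_0 ⊞ f_0:
\tau_0 = 2+k_0 = 2,\qquad
\tau_1 = \tau_0\,(2+k_1) = 4,\qquad
\tau_2 = \tau_1\,(2+k_2) = 12,
```

in agreement with the sink of period four and the saddle and sink of period twelve appearing in figure~\ref{treepdf}.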
\paragraph{\it Pixton discs.} The unstable branches connect the periodic points of $g_i$ and form a {\it chain} with a tree structure, see figure \ref{treepdf}. The tree branches land at points that are: \begin{itemize} \item[--] either attracting, and may anchor unstable manifolds of points of larger period, \item[--] or saddles whose unstable branches are exchanged at the period. \end{itemize} \begin{figure}[h] \begin{center} \includegraphics[width=15cm, height=4cm, angle=0]{tree.pdf} \end{center} \caption{Chain of periodic points associated to $f_1\sqcupplus f_0\sqcupplus f_0$: there are one saddle fixed point, a saddle of period two (at the period its unstable branches are exchanged), a sink of period four, a saddle of period twelve, and a sink of period twelve. The arrows indicate whether the periodic points are saddles or sinks (on the one-dimensional structure a saddle appears as a sink). \label{treepdf}} \end{figure} That observation will allow us to reconstruct the attracting discs, see figure~\ref{Pixtpdf}. In the first case (left of the figure), the unstable manifold of a fixed point $p$ accumulates on a fixed sink which anchors a stabilized revolving saddle with larger period: the unstable branch of $p$ has to cross the stable manifolds of the iterates of the saddle; this defines an attracting disc which contains all the periodic points attached to the sink. In the second case (right of the figure), the unstable manifold of the fixed point $p$ accumulates on a fixed saddle whose unstable branches are exchanged by the dynamics and accumulate on a sink of period $2$: the unstable branch of $p$ has to cross the stable manifold of the fixed saddle; this also defines an attracting disc which contains all the attached periodic points. We call the domains built in this way {\em Pixton discs}. 
\begin{figure}[h] \begin{center} \includegraphics[width=15cm,angle=0]{Pixt.pdf} \put(-63,44){\small $p$} \put(-323,44){\small $p$} \end{center} \caption{Attracting discs obtained from an unstable branch and stable manifolds. \label{Pixtpdf}} \end{figure} \medskip \paragraph{\it When all the periodic points are fixed.} We now explain how to handle a general mildly dissipative diffeomorphism with zero entropy. In order to prove theorem \ref{t.theoremA}, one first has to show that if all the periodic points are fixed, then the limit set of the dynamics consists only of fixed points. The ``no-cycle property'' is crucial. Another ingredient is to prove that the $\omega-$limit set of any orbit contains a fixed point: this follows from our closing lemma (theorem \ref{t.measure local}). With these tools, one builds a filtration associated to the fixed points and concludes that the limit set of the dynamics is reduced to the set of fixed points. \paragraph{\it Periodic structure.} When there are periodic points which are not fixed, we prove that the unstable branches induce a structure as in the previous examples: they form {\it chains} (see definition \ref{d.chain}) that branch at points of low period to which are attached saddles of larger or equal period. A special role is played by \emph{stabilized points}: these are saddles that either are fixed and whose unstable branches are exchanged, or are not fixed but whose unstable manifold is anchored by a fixed point (see definition \ref{d.stabilization} and propositions \ref{p.chain} and \ref{p.stab-decorate}). The local stable manifolds of the stabilized points bound domains called {\it decorated regions} (see definition \ref{d.decorated region}) which are pairwise disjoint: indeed if two such regions intersect, the unstable manifold of a stabilized point has to cross the stable manifold of another iterate in order to accumulate on the anchoring fixed point, contradicting the fact that the entropy vanishes. 
The decorated regions contain all the periodic points of larger period (see definition \ref{d.decreasing-chain}, propositions \ref{p.chain-decreasing} and \ref{p.decreasing-chain}). \paragraph{\it Construction of trapping discs.} To each unstable branch $\Gamma$, fixed by an iterate $f^n$, we build a disc that is trapped by $f^n$ and contains the accumulation set of the branch $\Gamma$ (theorems \ref{t.renormalize} and \ref{t.renormalize2}). To each saddle accumulated by $\Gamma$ one associates a {\it Pixton disc} which is a candidate to be trapped. These discs are bounded by arcs in $\Gamma$ and stable manifolds of saddles in the accumulation set, as in the previous examples (see lemma \ref{l.highperiod}). A finite number of these Pixton discs is enough to cover the accumulation set, implying the trapping property. The closing lemma mentioned above (theorem \ref{t.measure local}) is a key point for proving the finiteness. \paragraph{\it Finiteness of the renormalization domains.} A stronger version of the renormalization (theorem \ref{t.renormalize-prime}) implies corollaries \ref{c.infinitely-renormalizable} and \ref{c.structure}. It asserts that the number of renormalization domains required to cover the dynamics is finite. Since the renormalization discs are related to decorated regions, we have to show that the periods of the stabilized saddles are bounded (see theorem \ref{t.finite}). \paragraph{\it Bound on the renormalization period.} To show that after several renormalization steps the renormalization periods eventually equal two (theorem \ref{t.period}), we develop a {\em rigidity argument}: the limit attractors (the generalized odometers obtained as intersections of nested renormalizable domains of an infinitely renormalizable diffeomorphism) induce a stable lamination whose leaves vary continuously for the $C^1$-topology over sets of measure arbitrarily close to one. 
This property follows from a \emph{$\gamma-$dissipation property} (see section \ref{ss.gamma-dissipation}). In particular, for a large proportion of points, the leaves of the lamination by local stable manifolds are ``parallel''. Since the renormalization domains (inside a renormalization disc obtained previously) are contained in (relative) decorated regions, and since the measure is equidistributed between the different renormalization components, a relative renormalization period larger than two would contradict the fact that a large proportion of local stable manifolds are parallel. A simple heuristic of that argument is the following: at small scale, the quotient by the local stable manifolds provides an interval that contains a large proportion of the points of the odometer, and that is enough to recover the period doubling mantra that permeates the renormalization scheme for zero entropy maps of the interval. \paragraph{l -- Other attracting domains.} One can wonder about the transition to chaos for dissipative diffeomorphisms on other attracting domains, such as the annulus. The transition to chaos is already much more complicated on the circle than on the interval (see \emph{e.g.,} \cite{FrTr} and references therein), as a result in particular of the non-triviality of the circle at the homotopy level. A prototype family that plays the role of the H\'enon maps for the circle is an annulus version of the Arnold family. Results related to the transition to chaos in that context can be found in \cite{CKKP} and \cite{GY}. 
\paragraph{m -- Organization of the paper.} The next three sections present preliminary results: section~\ref{preliminaries} describes how fixed points may be rearranged inside finitely many fixed curves, and recalls the Lefschetz formula and a fixed point criterion due to Cartwright and Littlewood; in section \ref{s.quantitative} we revisit the notion of $\gamma-$dissipation introduced in \cite{CP} and we present a few results that allow us to improve the lower bound on $\gamma$; in section \ref{ss.closing} we state a new closing lemma. Section~\ref{no cycle section} proves that (under the hypothesis of zero entropy and mild dissipation) there is no cycle between periodic points. This is essential to show in section \ref{decoration section} that periodic points are organized in chains; also in that section we introduce the notions of decoration and stabilization that provide a hierarchical organization of the chains. Section~\ref{s.gms} discusses the notion of generalized Morse-Smale diffeomorphisms. In section \ref{ss.trapping} we prove that the accumulation set of an unstable branch of a fixed point is contained in an arbitrarily small attracting domain and in section \ref{ss.local renormalization} we conclude the proof of the local renormalization (theorem \ref{t.theoremA}). A global version of that theorem (theorem \ref{t.renormalize-prime}) is obtained in section \ref{s.renormalize}; this requires first showing that the periods of the stabilized points are bounded (this is proved in section \ref{ss.finitness}). The proof of theorem \ref{t.period} is provided in section \ref{s.period}: it uses the description of the chain-recurrent set (corollary \ref{c.structure}) which is proved in section \ref{s.odometers}. In the last two sections, we prove the results about dynamics close to interval maps and about the H\'enon maps (corollary~\ref{c.henon} and theorem~\ref{t.small jacobian}). 
\paragraph{Acknowledgements.} We are indebted to Mikhail Lyubich for the discussions we exchanged during the preparation of this work and to Damien Thomine for his remarks on a first version of the text. \section{Periodic orbits} \label{preliminaries} In subsection \ref{ss.periodic}, we analyze the different types of periodic points that may exist for a dissipative diffeomorphism of the disc. When there exist infinitely many periodic points of a given period, we rearrange them inside finitely many periodic arcs. In subsection \ref{ss.index arc} we recall the Lefschetz formula. In subsection \ref{ss.accumulation of unstable} we present a kind of topological $\lambda-$lemma that is useful to describe the accumulation set of unstable manifolds of periodic points. In subsection \ref{ss.fixedcriterion} we recall a classical result by Cartwright and Littlewood about the existence of fixed points. \subsection{Dynamics near a periodic point}\label{ss.periodic} We describe the dynamics in the neighborhood of a periodic orbit. Note that, up to replacing $f$ by an iterate, one may reduce to the case of fixed points. When $p$ is fixed, the eigenvalues $\lambda^-_p, \lambda^+_p$ of $D_pf$ satisfy $|\lambda^-_p|\leq |\lambda^+_p|$ and $|\lambda^-_p\lambda_p^+|<1$. \paragraph{\it Hyperbolic sink.} When $|\lambda^+_p|<1$, the point $p$ is a hyperbolic sink. This covers in particular all the cases where $|\lambda^-_p|= |\lambda^+_p|$. We now describe the other cases. \paragraph{\it Stable curve.} When $|\lambda^-_p|<|\lambda^+_p|$, there exists a well defined (strong) stable manifold which is a $C^1$ curve. The connected component containing $p$ is denoted by $W^s_\mathbb{D}(p)$. For an orbit $\mathcal{O}$ with higher period, we denote by $W^s_\mathbb{D}(\mathcal{O})$ the union of the curves $W^s_\mathbb{D}(p)$, $p\in \mathcal{O}$. \paragraph{\it Local stable set.} The local stable set of $p$, i.e. 
the set of points whose forward orbit converges to $p$ and remains in a small neighborhood of $p$, is either a neighborhood of $p$ (a sink), a subset of $W^s_\mathbb{D}(p)$, or a half neighborhood of $p$ bounded by $W^s_\mathbb{D}(p)$. \paragraph{\it Center manifold.} When $|\lambda^-_p|<|\lambda^+_p|$, the center manifold theorem asserts that there exists a $C^1$ curve $\gamma$ which contains $p$, is tangent to the eigendirection of $D_pf$ associated to $\lambda^+_p$ and is locally invariant: there exists $\varepsilon>0$ such that $f(\gamma\cap B(p,\varepsilon))\subset \gamma$. The two components of $\gamma\setminus \{p\}$ are either preserved or exchanged (depending on whether the eigenvalue $\lambda^+_p$ is positive or negative). Along each component $\Gamma$ of $\gamma\setminus \{p\}$, the dynamics (under $f$ or $f^2$) is either attracting, repelling, or neutral (in which case $p$ is accumulated by periodic points inside $\Gamma$). The type of dynamics does not depend on the choice of $\gamma$. \paragraph{\it Unstable branches.} The \emph{unstable set} $W^u(p)$ of $p$ is the set of points $x$ such that the distance $d(f^{-n}(x),f^{-n}(p))$ decreases to $0$ as $n\to +\infty$. When it is not reduced to $p$, it is a $C^1$ curve which contains $p$. The local unstable set is defined as the set of points whose backward orbit converges to $p$ and remains in a small neighborhood of $p$; observe that it is contained in the center manifold $\gamma$. Each connected component $\Gamma$ of $W^u(p)\setminus \{p\}$ is called an unstable branch of $p$. \paragraph{\it Hyperbolic saddle.} When $|\lambda^-_p|<1< |\lambda^+_p|$, the point $p$ is a hyperbolic saddle. It admits two unstable branches. \paragraph{\it Indifferent fixed point.} When $|\lambda^-_p|<1= |\lambda^+_p|$, the point $p$ is indifferent. We then consider the dynamics (under $f$ or $f^2$) on each side of a center manifold. 
When $p$ is isolated among points of period $1$ and $2$, it is either a sink (both components are attracting), a saddle (both components are repelling) or a saddle-node (the components are fixed, one is attracting, one is repelling): the type does not depend on the choice of the center curve $\gamma$. \paragraph{\it Saddle with reflexion.} When the unstable branches of a fixed saddle $p$ are exchanged by the dynamics, we say that $p$ is a \emph{(fixed) saddle with reflexion}. \paragraph{\it Index.} For an isolated fixed point, one can define the index of that fixed point as the winding number of the vector field $f(x)-x$ around the fixed point. For dissipative diffeomorphisms the index of an isolated fixed point is: \begin{itemize} \item $1$ for a sink or a saddle with reflexion, \item $0$ for a saddle-node, \item $-1$ for a saddle with no reflexion. \end{itemize} By the classical Lefschetz formula, when the number of fixed points is finite, the sum of the indices of the fixed points in the disc is equal to $1.$ \begin{remark}\label{r.degenerated} A saddle-node can be considered as the degenerate case of a sink and a saddle of index $-1$ that have collided. Similarly, a fixed point with an eigenvalue less than or equal to $-1$ can be considered as the collision of a fixed sink with the points of a $2$-periodic orbit with positive eigenvalues. In particular, fixed saddles of index $1$ may be considered as the union of a fixed sink with a saddle of period $2$. \end{remark} \subsection{Normally hyperbolic periodic arcs}\label{ss.arc} When the number of fixed points is infinite, the fixed points appear inside normally hyperbolic arcs. 
\begin{definition}\label{d.fixed-arc} A \emph{fixed arc} is a compact $f$-invariant $C^1$ curve $I$ whose endpoints are fixed and which admits an invariant splitting $T_x\mathbb{D}|_{x\in I}=E^s\oplus F$ satisfying: \begin{itemize} \item[--] $T_xI\subset F_x$ for each $x\in I$, \item[--] there is $k\geq 1$ such that ${|Df^k_{E^s_x}|} < |Df^k_{F_x}|$ and $|Df^k_{E^s_x}|<1$ for each $x\in I$. \end{itemize} It is \emph{isolated} if all the fixed points in a neighborhood are contained in $I$. \end{definition} A fixed point is a fixed arc: for a hyperbolic sink, the splitting is trivial, $F=\{0\}$. When $I$ has two distinct endpoints $p_1,p_2$, the forward orbit of any point in the strip $W^s_{\mathbb D}(I)$ bounded by $W^s_{\mathbb D}(p_1)$ and $W^s_{\mathbb D}(p_2)$ converges to a fixed point in $I$. When $I$ is not reduced to a sink, $\mathbb{D}\setminus W^s_{\mathbb D}(I)$ has two connected components. \medskip The unstable set of $I$ is contained in the unstable branches of the endpoints of $I$. \medskip \begin{definition}\label{d.type-arc} Four cases may occur for an isolated fixed arc $I$. It has the \emph{type of}: \begin{itemize} \item \emph{a sink}, if the orbit of any point in a neighborhood converges to a fixed point in $I$, \item \emph{a saddle with reflexion}, if $I$ is a single fixed point $p$ with an eigenvalue $\lambda^+_p\leq -1$, \item \emph{a saddle-node}, if the arc has one $f$-invariant unstable branch, \item \emph{a saddle with no reflexion}, if the arc has two $f$-invariant unstable branches. \end{itemize} \end{definition} \begin{remark} Note that if an isolated fixed arc $I$ contains a fixed point $p$ with an eigenvalue less than or equal to $-1$, then $I=\{p\}$ (since the endpoints of $I$ are fixed points). This is the only case where there may exist periodic orbits in arbitrarily small neighborhoods of $I$. The arc $I$ is isolated since $p$ may be accumulated only by points of period $2$. 
\end{remark} \medskip \begin{proposition}\label{p.group} If $f$ is a dissipative diffeomorphism of the disc, there exists a finite collection $\mathcal{I}$ of disjoint isolated fixed arcs whose union contains all the fixed points of $f$. \end{proposition} \begin{proof} By the implicit function theorem, the set of fixed points of $f$ is the union of a finite set of isolated fixed points and of a compact set $K$ of fixed points $p$ having one eigenvalue $\lambda^+_p\geq 1/2$. Each isolated fixed point is an isolated fixed arc and it remains to cover $K$ by finitely many pairwise disjoint isolated fixed arcs. To each fixed point $p$ having an eigenvalue $\lambda^+_p\geq 1/2$, the center manifold theorem (see~\cite{BoCr}) associates a $C^1$ curve $\gamma$ which contains $p$, is tangent to the eigenspace $F_p$ associated to the eigenvalue $\lambda^+_p$, and is locally invariant: $f(\gamma)\cap \gamma$ contains a neighborhood of $p$ in $\gamma$; moreover, any periodic point in a neighborhood of $p$ is contained in $\gamma$. One can thus build an arc $I\subset \gamma$ bounded by two fixed points, which is invariant by $f$, normally contracted, and which contains all the fixed points in a neighborhood of $p$: it is a fixed arc, as in definition~\ref{d.fixed-arc}. By compactness, there exists a finite family of such fixed arcs. Let us choose $\varepsilon>0$ small. By decomposing the arcs, one can assume that each such arc $I$ has diameter smaller than $\varepsilon$, is contained in a $C^1$ curve $J$ such that $J\setminus I$ has two connected components, both of diameter larger than $2\varepsilon$, and such that any fixed point in the $2\varepsilon$-neighborhood of $I$ is contained in $J$. If two arcs $I,I'$ intersect, one considers the larger curve $J$ associated to $I$. We note that all the fixed points of $I'$ are contained in $J$. 
One can thus reduce $I'$ to an arc $\widetilde I'$ such that all the fixed points of $K\cap (I\cup I')$ are contained in $I\cup \widetilde I'$ and $I\cup \widetilde I'$ is a $C^1$ curve. One repeats this argument for all pairs of fixed arcs. This ensures that the union of all the fixed arcs $I$ contains $K$ and is a union of disjoint $C^1$ curves. By construction, each of these curves is an isolated fixed arc. \end{proof} The choice of the collection $\mathcal{I}$ is in general not unique. Note that for any distinct $I,I'\in \mathcal{I}$ which are not sinks, the strips $W^s_{\mathbb D}(I)$, $W^s_{\mathbb D}(I')$ are disjoint. \paragraph {\it Partial order:} The finite collection of fixed arcs $\mathcal{I}$ can be partially ordered in such a way that at least one of the unstable branches of the extremal points of $I_j$ accumulates on $I_{j+1}$. \subsection{Lefschetz formula and arcs} \label{ss.index arc} In the present subsection we recall the definition of the index for isolated invariant arcs and ``half'' arcs. \paragraph{\it Index of an arc.} To any simple closed curve $\sigma\subset \mathbb{D}\setminus \operatorname{Fix}(f)$, one associates an index $i(\sigma,f)$, which is the winding number of the vector field $f(x)-x$ along the curve. This defines for any isolated fixed arc $I$ an index $index(I,f)$: this is the index $i(\sigma,f)$ associated to any simple closed curve contained in a small neighborhood of $I$ and surrounding $I$. For arcs as described in definition~\ref{d.type-arc}, the index takes a value in $\{-1,0,1\}$: it is equal to $1$ for a sink and a saddle with reflexion, $0$ for a saddle-node, and $-1$ for a saddle with no reflexion. In particular, the index is $1$ exactly when the arc has no $f$-invariant unstable branch. When $I$ is reduced to an isolated fixed point, $index(I,f)$ coincides with the usual index.
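\begin{remark} These values can be checked directly on the elementary linear models (this computation is only an illustration and is not used later). For a sink $f(v)=\frac 1 2 v$, the vector field $f(v)-v=-\frac 1 2 v$ makes one positive turn along a small circle around the origin, so the index is $1$. For the saddle with no reflexion $f(x,y)=(2x,\frac 1 2 y)$, the vector field $f(v)-v=(x,-\frac 1 2 y)$ makes one negative turn, so the index is $-1$. For the saddle with reflexion $f(x,y)=(-2x,\frac 1 2 y)$, the vector field $f(v)-v=(-3x,-\frac 1 2 y)$ makes one positive turn, so the index is $1$.
\end{remark}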
\paragraph{\it Index of a half arc.} Let us consider a fixed arc $I$ which contains a fixed point $p$ having an eigenvalue $\lambda^+_p\geq 1$ and a connected component $V$ of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$. Let us assume that $I$ is \emph{isolated in $V$}, i.e. that any fixed point in a neighborhood of $I$ that belongs to $V$ also belongs to $I$. Then, one can associate an index $index(I,V,f)$: it has the value $-1/2$ if $I$ has a (local) unstable branch in $V$ that is fixed by $f$ and the value $1/2$ otherwise (in which case $I$ is semi-attracting in $V$). When $I$ is isolated, the index of $I$ is equal to the sum of the two indices associated to the two connected components of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$. \medskip The next proposition restates the Lefschetz formula in our setting. \begin{proposition}\label{p.lefschetz} Let $f$ be a dissipative diffeomorphism of the disc and $\mathcal{I}$ a set of isolated fixed arcs as in proposition~\ref{p.group}. Then the sum of the indices $index(I,f)$ of the arcs $I\in \mathcal{I}$ is $1$. \end{proposition} \begin{proof} One can modify the dynamics inside each isolated fixed arc $I\in \mathcal{I}$ and obtain in this way a diffeomorphism $g$ satisfying: \begin{itemize} \item[--] each $I\in \mathcal{I}$ is still a fixed arc for $g$, \item[--] the fixed points of $g$ are all hyperbolic and contained in the union of the arcs $I$. \end{itemize} In particular $g$ has only finitely many fixed points. In an arc $I$, two consecutive saddles are separated by a sink. This proves that the index of any arc $I\in \mathcal{I}$ for $f$ coincides with the sum of the indices of the fixed points of $g$ that are contained in $I$. Consequently, the sum of the indices of the arcs $I\in \mathcal{I}$ for $f$ is equal to the sum of the indices of the fixed points of $g$. From~\cite[proposition VII.6.6]{dold}, this sum equals $1$ since $g$ is homotopic to a constant map.
\end{proof} \subsection{The accumulation set of unstable branches} \label{ss.accumulation of unstable} Let $\Gamma$ be a $f$-invariant unstable branch of a fixed point $p$ and $\gamma\subset \Gamma$ be a curve which is a fundamental domain. The \emph{accumulation set} of $\Gamma$ is the limit set of the iterates $f^n(\gamma)$ as $n\to +\infty$. We say that $\Gamma$ accumulates on a set $X$ if $X$ intersects the accumulation set of $\Gamma$. These definitions naturally extend to unstable branches of periodic points. The next proposition is a kind of topological version of the classical $\lambda$-lemma for mildly dissipative diffeomorphisms, without assuming homoclinic intersections. \begin{proposition} \label{p.transitive} Let $p$, $q$ be two fixed points of a mildly dissipative diffeomorphism and let $\Gamma_p$, $\Gamma_q$ be two $f$-invariant unstable branches such that $\Gamma_p$ accumulates on a point of $\Gamma_q$. Then the accumulation set of $\Gamma_q$ is included in the accumulation set of $\Gamma_p$. \end{proposition} \begin{proof} Let $U$ be a small simply connected neighborhood of $q$. There are points $y_k\in\Gamma_p$ arbitrarily close to a point $y\in \Gamma_q\cap U$ having iterates $f^{-1}(y_k)$, $f^{-2}(y_k)$,\dots , $f^{-m_k}(y_k)$ in $U$ such that $f^{-m_k}(y_k)$ converge to a point $x\in U\cap W^s_\mathbb{D}(q)$. Let $\gamma^s\subset U\cap W^s_\mathbb{D}(q)\setminus \{q\}$ be a compact curve containing $x$ and such that both connected components of $\gamma^s\setminus\{x\}$ properly contain a fundamental domain and its iterate. Let also $\gamma^u\subset \Gamma_q$ be a compact curve which properly contains a fundamental domain and its iterate. Now we take two curves $l^u_1, l^u_2$ transversal to $W^s_{\mathbb{D}}(q)$ through the extremal points of $\gamma^s$ and two curves $l^s_1, l^s_2$ transversal to $\Gamma_q$ through the extremal points of $\gamma^u$.
We construct the rectangles $R_n$ bounded by $l^u_1, l^u_2$ and the connected components of $f^{-n}(l^s_1), f^{-n}(l^s_2)$ inside $U\cap f^{-1}(U)\cap\dots\cap f^{-n}(U)$ that intersect $l^u_1$ and $l^u_2$. Observe that those rectangles converge to $\gamma^s$. Let $y_n\in W^u(p)\cap R_n$ be points converging to $x$. Let $l_n$ be a connected arc inside $W^u(p)$ that joins $y_n$ and $y_{n+2}$. It follows that either \begin{enumerate} \item there is a connected subsegment $l_n'$ inside $l_n\cap (R_{n}\cup R_{n+1}\cup R_{n+2}\cup\dots )$ that contains $y_{n+2}$ and intersects either $l^u_1$ or $l^u_2$, or \item there is a subsegment $l_n'$ inside $l_n$ that crosses $R_{n+1}$ and is disjoint from $l^u_1\cup l^u_2$. \end{enumerate} In the first case, the accumulation set of $\Gamma_p$ contains a fundamental domain of a stable branch of $q$: this is a contradiction since each stable branch of $q$ contains a point in $\mathbb{D}\setminus f(\mathbb{D})$. In the second case, the curves $f^n(l_n')$ converge to $\gamma^u$ in the Hausdorff topology, and the proposition follows. \end{proof} \subsection{Decoration}\label{ss.decoration} The geometry described in the next definition is essential in this work. \begin{definition}\label{d.decoration} Let $f$ be a mildly dissipative diffeomorphism of the disc. A periodic orbit $\mathcal{O}$ which is not a sink is \emph{decorated} if for each $p\in \mathcal{O}$, one connected component of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$ does not intersect $\mathcal{O}$ (see figure~\ref{f.decoration}). \end{definition} \begin{figure} \begin{center} \includegraphics[width=5cm,angle=0]{decoration.pdf} \put(-62,30){$f^2(p)$} \put(-53,92){$f(p)$} \put(-90,95){$p$} \put(-124,43){$f^4(p)$} \end{center} \caption{A decorated periodic orbit.\label{f.decoration}} \end{figure} \subsection{A fixed point criterion} \label{ss.fixedcriterion} The following result refines the Brouwer fixed point theorem inside the disc.
\begin{proposition}[Cartwright-Littlewood~\cite{CL}]\label{p.CL} Let $f$ be an orientation-preserving homeomorphism of the plane $\mathbb{R}^2$ and let $C$ be an invariant compact set whose complement $\mathbb{R}^2\setminus C$ is connected. Then $f$ has a fixed point in $C$. \end{proposition} \section{Quantitative dissipation} \label{s.quantitative} We recall a quantitative version of the dissipation that was introduced in \cite{CP}: \begin{definition} Let $f$ be a dissipative diffeomorphism of the disc and $K$ be a $f$-invariant compact set. For $\gamma\in (0,1)$, we say that the diffeomorphism $f$ is \emph{$\gamma$-dissipative on $K$} if there is $n\geq 1$ such that for any $x,y\in K$ and any unit vector $u\in T_x\mathbb{D}$, $$|\det Df^n(y)|< \|Df^n(x).u\|^\gamma.$$ \end{definition} With this definition in mind, it is possible to get a uniform geometry of stable manifolds for all ergodic measures, as presented in subsection~\ref{ss.uniform stable}. In subsection~\ref{ss.gamma-dissipation} it is shown that $\gamma$-dissipation holds for uniquely ergodic aperiodic compact invariant sets. \subsection{Criterion for $\gamma$-dissipation} \label{ss.gamma-dissipation} The next proposition provides sufficient conditions for $\gamma$-dissipation. Observe that the hypotheses are satisfied by odometers. \begin{proposition}\label{p.gamma-strong} Let $f$ be a dissipative $C^r$ diffeomorphism, $r>1$. Let $K$ be an invariant compact set which does not contain any periodic point, is uniquely ergodic, and does not intersect any transitive compact set with positive entropy. Then $f$ is $\gamma$-dissipative on $K$ for all $\gamma\in (0,1)$. \end{proposition} \begin{proof} We first claim that if a $C^r$ diffeomorphism $f$ of a surface (with $r>1$) admits a hyperbolic ergodic measure $\mu$ with no atom, then ${\rm {supp}}(\mu)$ is contained in a transitive set with positive topological entropy.
A theorem of Katok~\cite{Ka} asserts the existence of periodic points $p_n$ whose orbits equidistribute towards $\mu$. Note that the points are distinct and their period goes to $+\infty$ since $\mu$ has no atom. Moreover, from the proof one can check that all the points $p_n$ belong to a compact set of points $p$ having uniform local stable and local unstable manifolds which vary continuously with $p$ (and this is not necessarily true for all the iterates of the points $p_n$). One may thus find two distinct points $p_n$ and $p_m$ close to each other, so that their stable and unstable manifolds have transverse intersections. This implies that $f$ has a horseshoe $\Lambda$. Taking $n$ larger, one gets an increasing sequence of horseshoes whose union accumulates on ${\rm {supp}}(\mu)$. \medskip We now turn to the proof itself. Since $f$ is dissipative, there exists $b\in (0,1)$ such that $|\det Df(x)|<b$ for any $x\in K$. Let us fix $\gamma\in (0,1)$ and any $\varepsilon\in (0,-\frac 1 4(1-\gamma).\log b)$. Let $\mu$ be the unique ergodic probability on $K$. Note that its upper Lyapunov exponent is non-positive: otherwise, $\mu$ would be hyperbolic with no atom and the claim above would imply that $K$ intersects a transitive set with positive entropy, contradicting the assumptions of the proposition. Consequently, there exists $\ell\geq 1$ such that $\frac 1 \ell \int \log \|Df^\ell\| d\mu<\varepsilon/4$.
For $n\geq 1$ large enough and any $x\in K$, the distribution of the iterates $x,\dots, f^n(x)$ is close to $\mu$, implying that for any $x,y\in K$, $$\frac 1 n \log \|Df^n(x)\|\leq \varepsilon/4+\frac 1 n \sum_{j=0}^{n-1} \frac 1 \ell\log\| Df^\ell(f^j(x))\| \leq \frac 1 2 \varepsilon + \frac 1 \ell \int \log \|Df^\ell\| d\mu \leq\frac 3 4 \varepsilon,$$ $$\text{and }\quad e^{-\frac{n\varepsilon}{4}}|\det Df^n(y)|\leq |\det Df^n(x)|.$$ For any $x,y\in K$ and any unit vector $u\in T_x\mathbb{D}$, we thus have: $$e^{-\frac{n\varepsilon}{4}}|\det Df^n(y)|\leq |\det Df^n(x)|\leq \|Df^n(x)\| . \|Df^n(x).u\|\leq e^{n.\frac{3\varepsilon}4}\|Df^n(x).u\|.$$ If $u_0$ is the unit vector in $T_x\mathbb{D}$ which is the most contracted under $Df^n(x)$, we also have $$\|Df^n(x).u_0\|^2\leq |\det Df^n(x)|\leq b^n.$$ Hence $$|\det Df^n(y)|\leq e^{n.\varepsilon}\|Df^n(x).u_0\|\leq e^{n.\varepsilon}b^{n.(1-\gamma)/2}.\|Df^n(x).u_0\|^\gamma< \|Df^n(x).u_0\|^\gamma,$$ where the last inequality holds since $e^{\varepsilon}b^{(1-\gamma)/2}<1$ by the choice $\varepsilon<-\frac 1 4(1-\gamma)\log b$. This gives as required $$|\det Df^n(y)|< \|Df^n(x).u_0\|^\gamma\leq \|Df^n(x).u\|^\gamma.$$ So $f$ is $\gamma$-dissipative on $K$, provided $n$ is chosen large enough. \end{proof} \subsection{Uniform geometry of the strong stable leaves} \label{ss.uniform stable} The next theorem has been essentially proved in~\cite{CP}. However, it has to be recast to get a precise quantitative estimate. \begin{theorem}\label{t.stable} For any $\alpha\in(0,1]$ and $\varepsilon>0$, there exists $\gamma\in (0,1)$ with the following property. If $f$ is a $C^{1+\alpha}$ diffeomorphism which is $\gamma$-dissipative on an invariant compact set $K$ which does not contain any sink, then there exists a compact set $A\subset K$ such that: \begin{itemize} \item[--] For any ergodic measure $\mu$ supported on $K$, we have $\mu(A)>1-\varepsilon$. \item[--] Each point $x\in A$ has a stable manifold $W^s_\mathbb{D}(x)$ which varies continuously with $x$ for the $C^1$ topology.
\end{itemize} \end{theorem} For the proof, we refer to \cite{CP}, together with the following slight changes to the results therein. Let $\tilde \sigma, \sigma, \tilde \rho, \rho \in (0,1)$ satisfying \begin{equation}\label{e.pesin} {\textstyle \frac{\tilde \rho\tilde \sigma}{\rho \sigma}} >\sigma^\alpha, \end{equation} and $A_{\tilde \sigma\sigma\tilde \rho\rho }(f)$ be the set of points $x$ having a direction $E\subset T_xS$ such that for each $n\geq 0$ \begin{equation}\label{e.stable} \tilde \sigma^n\leq \|Df^n(x)_{|E}\|\leq \sigma^{n},\; \text{ and }\; \tilde \rho^n\leq \frac{\|Df^n(x)_{|E}\|^2}{|\det Df^n(x)|}\leq \rho^{n}. \end{equation} We recall theorem 5 and remark 2.1 in \cite{CP}: \begin{theorem}[Stable manifold at non-uniformly hyperbolic points]\label{t.stablecp} Consider a $C^{1+\alpha}$ diffeomorphism $f$ with $\alpha\in (0,1]$. Provided~\eqref{e.pesin} holds, the points $x\in A_{\tilde \sigma\sigma\tilde \rho\rho }(f)$ have a one-dimensional stable manifold which varies continuously with $x$ for the $C^1$ topology. \end{theorem} To conclude theorem \ref{t.stable}, observe that it is enough to prove the following proposition: \begin{proposition}\label{p.uniform-gamma} Given $\varepsilon>0$ and $\alpha\in (0,1]$, there is $\gamma\in (0,1)$ with the following property. Let us consider a diffeomorphism $f$ and an invariant compact set $K$ which does not contain any sink and where $f$ is $\gamma$-dissipative. Then there exist $\tilde \sigma, \sigma, \tilde \rho, \rho \in (0,1)$ satisfying~\eqref{e.pesin} such that for any ergodic measure $\mu$ supported on $K$, $\mu(A_{\tilde \sigma\sigma\tilde \rho\rho }(f))>1-\varepsilon.$ \end{proposition} \begin{proof} This is proved in \cite[proposition 3.2]{CP} in the case $\varepsilon= 5/6$, $r=2$ and $\gamma=9/10$. We explain how to adapt the proof by modifying the constants.
Let us take $$D = \sup_{x\in K} |\det Df(x)|,\,\,\, m = \inf_{x\in K} \|Df^{-1}(x)\|^{-1},$$ $$\tilde \sigma = m,\,\,\, \tilde \rho = m^2/D,\,\,\, \sigma = D^{1-\alpha/3},\,\,\, \rho = D^{1-\alpha/3}.$$ Since $f$ is $\gamma$-dissipative, $D< m^\gamma$ and the condition~\eqref{e.pesin} is satisfied provided $(1+\alpha/9)\gamma>1$. Using Pliss lemma (as stated in \cite[lemma 3.1]{CP}), the first condition in~\eqref{e.stable} holds on a set with $\mu$-measure larger than $\frac{(1-\alpha/3)\log(D)-\log(D)}{(1-\alpha/3)\log(D)-\log(m)}>\frac{-\alpha/3}{(1-\alpha/3-\frac{1}{\gamma})}$. Similarly, the second condition in~\eqref{e.stable} holds on a set with $\mu$-measure larger than $\frac{(1-\alpha/3)\log(D)-\log(D)}{(1-\alpha/3)\log(D)-2\log(m)+\log(D)}>\frac{-\alpha/3}{(2-\alpha/3-\frac 2 \gamma)}$. Hence~\eqref{e.stable} holds on a set with measure larger than $1-\varepsilon$ provided $\gamma$ is chosen close to $1$ so that $\frac{-\alpha/3}{(2-\alpha/3-\frac 2 \gamma)}>1-\varepsilon/2$. \end{proof} \section{Closing lemmas} \label{ss.closing} The following theorem is proved in~\cite{CP}. \begin{theorem}\label{t.measure revisited} For any mildly dissipative diffeomorphism of the disc, the support of any $f$-invariant probability is contained in the closure of the set of periodic points. \end{theorem} We now state a local version of that result. Let us recall that a compact connected set of the plane is \emph{cellular} if its complement is connected. Equivalently it is the decreasing intersection of sets homeomorphic to the unit disc. \addtocounter{theorem}{-1} \renewcommand{\thetheorem}{\Alph{theorem}'} \begin{theorem}[Local version]\label{t.measure local} Let $f$ be a mildly dissipative diffeomorphism of the disc, and $\Lambda$ be an invariant cellular connected compact set. Then the support of any $f$-invariant probability on $\Lambda$ is contained in the closure of the periodic points in $\Lambda$.
\end{theorem} \renewcommand{\thetheorem}{\Alph{theorem}} This section is devoted to the proof of theorem~\ref{t.measure local}. We may assume that $\mu$ is ergodic and that $\mu$ is not supported on a finite set since otherwise the conclusion of the theorem holds trivially. We have to find periodic points in $\Lambda$ arbitrarily close to a $\mu$-generic point $x$. Note that one can replace $f$ by $f^2$ and reduce to the case where $f$ preserves the orientation. Also, by a slight modification of the boundary of the disc, it can be assumed that for almost every point the complement of the local stable manifold in the disc has two connected components. \begin{definition}\label{d.crossing} For $\mu$-almost every point $x$, the connected components of $W^s_\mathbb{D}(x)\setminus \{x\}$ are called \emph{stable branches of $x$}. We say that the connected compact set $\Lambda$ \emph{crosses} a stable branch $\sigma$ of $x$ if there exists a connected compact set $C\subset \Lambda$ which intersects both connected components of $\mathbb{D}\setminus W^s_\mathbb{D}(x)$ and is disjoint from $W^s_\mathbb{D}(x)\setminus \sigma$. \end{definition} \begin{remark}\label{r.small} One can build connected compact sets $C'\subset C$ satisfying the definition and contained in arbitrarily small neighborhoods of $W^s_\mathbb{D}(x)$. {\rm If this were not the case there would exist a small neighborhood $U$ of $W^s_\mathbb{D}(x)$ such that each connected component of $C\cap U$ is disjoint from one of the connected components of $\mathbb{D}\setminus W^s_\DD(x)$. Hence the points in $W^s_\DD(x)\cap C$ would not be accumulated by points of $C$ from both components of $\mathbb{D}\setminus W^s_\DD(x)$: there would be a continuous partition of $C$ into points to the ``left'' or to the ``right'' of $W^s_\DD(x)$, contradicting the connectedness. } \end{remark} \begin{lemma}\label{l.trichotomy} Three cases occur.
\begin{itemize} \item[--] for $\mu$-almost every point $x$, the set $\Lambda$ crosses both stable branches of $x$, \item[--] for $\mu$-almost every point $x$, the set $\Lambda$ crosses one stable branch of $x$ and is disjoint from the other one, \item[--] for $\mu$-almost every point $x$, the set $\Lambda$ is disjoint from both stable branches of $x$. \end{itemize} \end{lemma} \begin{proof} We first note that the set of points $x$ such that both stable branches of $x$ are crossed by $\Lambda$ is forward invariant, hence is $f$-invariant up to a set with zero $\mu$-measure. Similarly for the set of points having only one stable branch crossed by $\Lambda$. By ergodicity, three cases occur on a set $X$ with full measure: $\Lambda$ crosses both branches of each point, or exactly one branch, or none of them. Pesin theory gives the continuity of $W^s_\mathbb{D}(x)$ for the $C^1$ topology on a set with positive $\mu$-measure. Up to removing from $X$ a set with zero measure, one can thus assume that each point $x\in X$ is accumulated by points $y$ of $X$ in each component of $\mathbb{D}\setminus W^s_\DD(x)$ such that $W^s_\mathbb{D}(x)$ and $W^s_\mathbb{D}(y)$ are arbitrarily close for the $C^1$ topology. Let us consider a stable branch $\sigma$ of $x\in X$ and assume that there exists $z\in \sigma\cap \Lambda$. Since $\Lambda$ is compact and invariant, there exist $z_1,z_2\in \sigma\setminus \Lambda$ such that $z$ belongs to the subarc $[z_1,z_2]$ of $\sigma$ connecting $z_1$ to $z_2$. Since $\Lambda$ is connected and is not contained in $W^s_\mathbb{D}(x)$, there exists a compact connected set $C\subset \Lambda$ that intersects $[z_1,z_2]$, that is contained in a small neighborhood of $[z_1,z_2]$, and that contains a point $\zeta\in \Lambda\setminus W^s_\mathbb{D}(x)$.
Considering a point $y$ as above in the same component of $\mathbb{D}\setminus W^s_\DD(x)$ as $\zeta$, one deduces that the stable branch $\sigma_y$ of $y$ close to $\sigma$ is crossed by $\Lambda$. Note also that if the other stable branch $\sigma'$ of $x$ is crossed by $\Lambda$, then the stable branch $\sigma'_y$ of $y$ that is close to $\sigma'$ is also crossed by $\Lambda$. Since $x$ and $y$ have the same number of stable branches crossed by $\Lambda$, one deduces that $\sigma$ is crossed by $\Lambda$. \end{proof} For proving theorem~\ref{t.measure local}, the three cases of lemma~\ref{l.trichotomy} have to be addressed. In the first case, using that both branches of the local stable manifolds intersect $\Lambda$, for a point $x$ in a hyperbolic block of the measure $\mu$, we build a rectangle that contains $x$ in its interior and whose boundary is given by two local stable manifolds of generic points of the measure and two connected arcs contained in $\Lambda$. By theorem \ref{t.measure revisited} there is a periodic point close to $x$ and so in the interior of the rectangle; on the other hand, by the construction of the rectangle, forward iterates of it converge to $\Lambda$; therefore, the periodic point in the interior of the rectangle has to be in the intersection of $\Lambda$ with the rectangle. In the second case, a similar rectangle (with boundaries given by two local stable manifolds of generic points of the measure and two connected arcs contained in $\Lambda$) can be built. However, that rectangle does not contain points of $\Lambda$ in its interior and so theorem \ref{t.measure revisited} does not guarantee the existence of a periodic point in $R$ (the periodic points provided by that theorem accumulate on the boundary of the rectangle). So a different strategy has to be formulated, which is described after the preparatory claim \ref{c.reduction}.
For the third case, we use a slight variation of the strategy developed for the second case. \paragraph{\it First case: $\Lambda$ crosses both stable branches of $x$.} We select a neighborhood of $x$ verifying: \begin{claim} There is a neighborhood $R$ of $x$ whose boundary is contained in $\Lambda\cup W^s_\mathbb{D}(x')\cup W^s_\mathbb{D}(x'')$ where $x',x''$ are iterates of $x$. \end{claim} \begin{proof} From Pesin theory, there exists a set $X$ with positive measure for $\mu$ such that $W^s_\mathbb{D}(z)$ exists and varies continuously with $z\in X$ for the $C^1$ topology. Since $\mu$ has no atom, one can furthermore require that any point $z\in X$ is accumulated in both components of $\mathbb{D}\setminus W^s_\mathbb{D}(z)$ by forward iterates of $z$ in $X$. Without loss of generality, one can assume that $x$ belongs to $X$ and consider two forward iterates $x',x''\in X$ of $x$, arbitrarily close to $x$ and separated by $W^s_\mathbb{D}(x)$. See figure~\ref{f.localization}. Since $\Lambda$ crosses both stable branches of $x$, there exist two connected compact sets $C_1,C_2\subset \Lambda$ which intersect both curves $W^s_\mathbb{D}(x')$, $W^s_\mathbb{D}(x'')$ and which do not contain $x$. The connected component $R$ of $\mathbb{D}\setminus (W^s_\mathbb{D}(x')\cup W^s_\mathbb{D}(x'')\cup \Lambda)$ containing $x$ has its closure contained in the interior of $\mathbb{D}$: otherwise, there would exist an arc connecting $x$ to the boundary of $\mathbb{D}$, contained in the strip bounded by $W^s_\mathbb{D}(x')\cup W^s_\mathbb{D}(x'')$, and disjoint from $\Lambda$, contradicting the connectedness of $C_1$ and $C_2$.
\end{proof} \begin{figure} \begin{center} \includegraphics[width=5cm,angle=0]{closing-rectangle1.pdf} \put(-33,34){\small $\Lambda$} \put(-76,60){\small $x$} \put(-95,60){\small $x'$} \put(-60,58){\small $x''$} \put(-85,-10){\small $W^s_{\DD}(x)$} \put(-83,77){\small $R$} \end{center} \caption{$\Lambda$ crosses both stable branches of $x$.\label{f.localization}} \end{figure} The volume of the iterates $f^k(R)$ and the length of the iterates $f^k(W^s_\mathbb{D}(x'))$ and $f^k(W^s_\mathbb{D}(x''))$ decrease to zero as $k\to +\infty$. Hence the distance between $f^k(R)$ and $\Lambda$ goes to zero when $k$ goes to $+\infty$. By applying theorem~\ref{t.measure revisited}, there exists a periodic point $q$ in $R$. Let $\ell$ denote its period. This periodic point also belongs to $f^{k\ell}(R)$ for $k$ arbitrarily large, hence it also belongs to $\Lambda$ by our construction. The theorem follows in that case. \paragraph{\it Second case: $\Lambda$ crosses only one stable branch of almost every point $x$.} As in the proof of the previous claim, we introduce a compact Pesin block $X\subset \Lambda$ for $\mu$ with no isolated point, containing $x$ and with positive $\mu$-measure. One can replace $x$ by another nearby point of $X$ and require that $x$ is accumulated by $X$ in both components of $\mathbb{D}\setminus W^s_\mathbb{D}(x)$. \begin{claim}\label{c.reduction} There exists $N\geq 0$ (arbitrarily large) and two points $x',x''\in X$ such that \begin{itemize} \item[--] $W^s_\mathbb{D}(f^{-N}(x))$ separates $f^{-N}(x')$ and $f^{-N}(x'')$ in $\mathbb{D}$, \item[--] the image by $f^N$ of the strip bounded by $W^s_\mathbb{D}(f^{-N}(x'))$ and $W^s_\mathbb{D}(f^{-N}(x''))$ in $\mathbb{D}$ is an arbitrarily small neighborhood $R$ of $x$, \item[--] $x''$ is a forward iterate $f^j(x')$ of $x'$, \item[--] for any $n\geq 1$, the point $x''$ is accumulated by its forward iterates under $f^n$ in both components of $\mathbb{D}\setminus W^s_\mathbb{D}(x'')$.
\end{itemize} \end{claim} \begin{proof} Since the length of the iterates $f^n(W^s_\mathbb{D}(z))$ decreases uniformly to $0$ as $n$ goes to $+\infty$, the curve $f^N(W^s_\mathbb{D}(f^{-N}(x)))$ is arbitrarily small for $N$ large enough. Note that $f^N(\partial \mathbb{D})$ crosses both stable branches of $x$. Considering points $x',x''$ close to $x$ in $X$, one defines a rectangle $R$ bounded by $f^N(\partial \mathbb{D})\cup W^s_\mathbb{D}(x')\cup W^s_\mathbb{D}(x'')$. Since $x'$ and $x''$ can be chosen in different components of $\mathbb{D}\setminus W^s_\mathbb{D}(x)$, the point $x$ belongs to the interior of $R$. Since $X$ has positive measure, one can choose $x',x''$ in the same orbit. Moreover, up to removing a set with zero measure, one can choose $x'$ (and $x''$) to be accumulated by its forward iterates under $f^n$ (for any $n\geq 1$) inside both components of $\mathbb{D}\setminus W^s_\mathbb{D}(x'')$. \end{proof} In the following, one replaces $\mathbb{D}$ by $f^N(\mathbb{D})$ and $f$ by $f^j$. Hence without any loss of generality one reduces to the case where: \begin{itemize} \item[--] $W^s_\mathbb{D}(x)$ separates $x'$ and $x''$, \item[--] $R$ is the strip in $\mathbb{D}$ bounded by $W^s_\mathbb{D}(x')$ and $W^s_\mathbb{D}(x'')$, \item[--] $f(x')=x''$. \end{itemize} We now have to find a periodic point $q$ in $R\cap \Lambda$. The ergodicity of the measure will not be used anymore. We denote by $D'$ (resp. $D''$) the (open) component of $\mathbb{D}\setminus W^s_{\mathbb{D}}(x')$ (resp. of $\mathbb{D}\setminus W^s_{\mathbb{D}}(x'')$) which does not contain $W^s_{\mathbb{D}}(x'')$ (resp. $W^s_{\mathbb{D}}(x')$). See figure~\ref{f.single-cross}.
\medskip \begin{figure} \begin{center} \includegraphics[width=7cm,angle=0]{single-cross.pdf} \put(-33,34){\small $D''$} \put(-33,85){\small $A$} \put(-170,45){\small $D'$} \put(-91,40){\small $x$} \put(-117,45){\small $x'$} \put(-80,50){\small $x''$} \put(-112,-10){\small $W^s_{\DD}(x)$} \put(-105,60){\small $R$} \put(-150,115){\small $f^N(\mathbb{D})$} \end{center} \caption{Localization when $\Lambda$ crosses one or no stable branch of $x$.\label{f.single-cross}} \end{figure} The strategy now consists in using the stable manifolds of generic points of the measure to build a forward invariant cellular set $\Delta$ that contains $\Lambda$ and whose forward iterates converge to $\Lambda$ (see lemma \ref{l.delta}). Then, considering the three sets $\Delta'=\Delta\cap D'$, $\Delta''=\Delta\cap D''$ and $\Delta\cap R$, we show that it is possible to build a continuous map $g$ that sends $\Delta$ into itself, coincides with an iterate of $f$ in $R$ and satisfies $g(\Delta')\cap \Delta'=\emptyset$ and $g(\Delta'')\cap \Delta''=\emptyset$ (see lemma \ref{l.g}). From proposition \ref{p.CL} it follows that $g$ has a fixed point in $\Delta$; since that fixed point can be neither in $\Delta'$ nor in $\Delta''$, it has to be in $R\cap \Delta$ and so it is a periodic point for $f$; since the forward iterates of $\Delta$ converge to $\Lambda$, it follows that it has to be in $\Lambda$. The last item of claim~\ref{c.reduction} implies that there exists a compact set $A\subset D''$, contained in $f^m(X)$ for some $m\geq 1$, which contains arbitrarily large forward iterates of $x'$ and $x''$, and such that $\Lambda$ crosses a stable branch of each point $z\in A$ (and is disjoint from the other one). The stable curves $W^s_\mathbb{D}(z)$ vary continuously with $z\in A$ for the $C^1$ topology. \begin{lemma}\label{l.delta} There exists a connected compact set $\Delta$ which has the following properties: \begin{itemize} \item[i.] $\Delta$ is cellular (i.e.
its complement is connected). \item[ii.] $\Delta$ is forward invariant: $f(\Delta)\subset \Delta$. \item[iii.] The forward orbit of any point in $\Delta$ accumulates on $\Lambda$. \item[iv.] One stable branch of $x'$ is disjoint from $\Delta$, the other one intersects $\Delta$ along an arc; moreover there exists a (non-empty) arc in $W^s_\mathbb{D}(x')$ which contains $x'$ in its closure and is included in the interior of $\Delta$. The same holds for the stable branches of $x''$. \item[v.] There is $\varepsilon>0$ such that for any forward iterate $z\in A$ of $x'$, there exists a curve of size $\varepsilon$ in $W^s_\mathbb{D}(z)$ containing $z$ in its closure and included in the interior of $\Delta$. \end{itemize} \end{lemma} Let us denote $\Delta':=\Delta\cap D'$ and $\Delta'':=\Delta\cap D''$. Note that it is enough now to obtain a periodic point $q\in R\cap \Delta$. Indeed, since the accumulation set of the forward orbit of $q$ coincides with the orbit of $q$, the item (iii) ensures that $q\in\Lambda$ as required. \begin{proof} We consider for each $z\in X\subset \Lambda$ the maximal curve $I_z$ in $W^s_{\mathbb{D}}(z)$ bounded by points of $\Lambda$ (possibly reduced to a point). The union $\Delta_0$ of $\Lambda$ with all the forward iterates of the curves $I_z$, $z\in X$, is a forward invariant set which is compact (since the set $X$ is compact, since the curves $W^s_{\mathbb{D}}(z)$ vary continuously with $z\in X$ for the $C^1$ topology and since the length of their iterates decreases uniformly) and is connected (since $\Lambda$ is connected). The set $\Delta$ is obtained by filling the union $\Delta_0$, i.e. it coincides with the complement of the connected component of $\mathbb{D}\setminus \Delta_0$ which contains the boundary of $\mathbb{D}$. Properties (i) and (ii) are satisfied. In order to prove the property (iii), we consider a point $y\in \Delta$.
Note that if $y$ belongs to $\Lambda$ or to some $W^s_{\mathbb{D}}(z)$ with $z\in X$, the conclusion of (iii) holds trivially. We thus reduce to the case where $y$ belongs to a connected component $C$ of $\Delta\setminus \Delta_0$. Note that the boundary of this component decomposes as the union of a subset of $\Lambda$ and a set contained in the union of the $f^n(W^s_\mathbb{D}(z))$ with $z\in X$ and $n\geq 0$. Since the volume decreases under forward iterations, for $n$ large enough the point $f^n(y)$ gets arbitrarily close to the boundary of $f^n(C)$. Since the length of stable manifolds $f^n(W^s_\mathbb{D}(z))$ gets uniformly arbitrarily small as $n\to +\infty$, any point in $f^n(C)$ is arbitrarily close to $\Lambda$ provided $n$ is large enough, proving (iii). By construction of $\Delta$, for any point $z\in X$, the intersection $\Delta\cap W^s_\mathbb{D}(z)$ is an arc bounded by two points of $\Lambda$ (and not reduced to $z$). This is the case in particular for the intersections $\Delta\cap W^s_\mathbb{D}(x')$ and $\Delta\cap W^s_\mathbb{D}(x'')$. Since one stable branch of $x'$ (resp. $x''$) does not meet $\Lambda$, the first half of (iv) follows. Let $\sigma$ be the stable branch of $x'$ that is crossed by $\Lambda$ and let us choose a connected compact set $C_1\subset \Lambda$ as in definition~\ref{d.crossing}. One can choose another set $C_2$ which is contained in an arbitrarily small neighborhood of $x'$. Indeed, let $f^{-k}(x')$ be a backward iterate of $x'$ in $X$. By choosing $k$ large, the image $f^k(W^s_\mathbb{D}(f^{-k}(x')))$ gets arbitrarily small. Let $C'_2$ be a connected set crossing a stable branch of $f^{-k}(x')$ as in definition~\ref{d.crossing}. One can choose it in a small neighborhood of $W^s_\mathbb{D}(f^{-k}(x'))$ (by remark~\ref{r.small}), hence the image $C_2:=f^k(C'_2)$ is contained in a small neighborhood of $x'$.
One deduces that the smallest arc $\gamma$ connecting $C_1$ to $C_2$ inside $W^s_\mathbb{D}(x')$ is contained (after removing its endpoints) in the interior of $\Delta$. Indeed, one can choose two points $y_l,y_r\in X$ close to $x'$, separated by $W^s_\mathbb{D}(x')$ and with stable curves close to $W^s_\mathbb{D}(x')$ for the $C^1$ topology. These curves are crossed by $C_1$ and $C_2$, hence the connected component of $\mathbb{D}\setminus (C_1\cup C_2 \cup W^s_\mathbb{D}(y_l) \cup W^s_\mathbb{D}(y_r))$ containing $\gamma$ is bounded away from $\partial \mathbb{D}$ and contained in $\Delta$, as claimed. See figure~\ref{f.single-cross2}. \begin{figure} \begin{center} \includegraphics[width=2cm,angle=0]{single-cross2.pdf} \put(-68,100){\small $C_1$} \put(-52,71){\small $C_2$} \put(-35,50){\small $y$} \put(-48,50){\small $y_l$} \put(-20,50){\small $y_r$} \put(-50,-10){\small $W^s_{\DD}(x)$} \end{center} \caption{The interior of $\Delta$ contains stable arcs. \label{f.single-cross2}} \end{figure} Since $C_2$ can be chosen in an arbitrarily small neighborhood of $x'$, one deduces that the interior of $\Delta$ contains a (non-empty) arc in $W^s_\mathbb{D}(x')$ which contains $x'$ in its closure. The same holds for the point $x''$ and (iv) is satisfied. As a consequence, for any $z$ in a forward iterate of $X$, the intersection $\Delta\cap W^s_\mathbb{D}(z)$ is a finite union of arcs bounded by points of $\Lambda$. In order to check (v), one notices that by the same argument as in the previous paragraph, for any point $z_0\in A$, there exists a non-trivial curve $\alpha_{z_0}\subset W^s_\mathbb{D}(z_0)$ contained in the interior of $\Delta$ and containing $z_0$ in its closure. By construction, the length of $\alpha_z$ is bounded from below for any $z\in A$ close to $z_0$. By compactness of $A$, there exists a uniform bound $\varepsilon>0$ for all $\alpha_z$ with $z\in A$, proving (v).
\end{proof} Let $I':=\Delta\cap W^s_{\mathbb{D}}(x')$ and $I'':=\Delta\cap W^s_{\mathbb{D}}(x'')$. By (iv), these are arcs. \begin{claim} For $\ell\geq 1$ large, $f^\ell(I'\setminus \{x'\})$ and $f^\ell(I''\setminus \{x''\})$ are in the interior of $\Delta$. \end{claim} \begin{proof} Let us consider a large forward iterate $f^k(x')\in A$. The image $f^k(I'\setminus \{x'\})$ is arbitrarily small, hence by (iv) and (v) is contained in the interior of $\Delta$. By (ii) the interior of $\Delta$ is forward invariant. This shows that for any integer $\ell$ large, the image $f^\ell(I'\setminus \{x'\})$ is contained in the interior of $\Delta$. The same holds for $f^\ell(I''\setminus \{x''\})$. \end{proof} We choose $\ell\geq 1$ large such that $f^\ell(x')\in D''$. Since the stable manifolds are disjoint or coincide, this gives $f^\ell(I')\subset D''$. Note that since $\mu$ is not a periodic measure, the large forward iterates of $x'$ do not intersect $W^s_\mathbb{D}(x')$. One can thus choose $\ell$ such that we also have $f^{\ell+1}(x')\not\in \overline{D''}$: since $f^{\ell}(I'')\subset W^s_\mathbb{D}(f^{\ell+1}(x'))$, this gives $f^{\ell}(I'')\subset \mathbb{D}\setminus \overline{D''}$. We fix such an iterate $f^\ell$. \begin{lemma}\label{l.g} There exists a continuous map $g$ which: \begin{itemize} \item[(a)] maps $\Delta$ inside itself, \item[(b)] is the restriction of an orientation-preserving homeomorphism of the plane, \item[(c)] satisfies $g(\Delta')\cap\Delta'=\emptyset$ and $g(\Delta'')\cap\Delta''=\emptyset$, \item[(d)] coincides with $f^\ell$ on $R$. \end{itemize} \end{lemma} \begin{proof} One chooses two small neighborhoods $U',U''$ of $x'$ and $x''$. One builds a homeomorphism $\varphi$ which coincides with the identity on $R$ and near the boundary of $\mathbb{D}$ and which sends $\Delta'$ into a small neighborhood of $I'$ and $\Delta''$ into a small neighborhood of $I''$.
More precisely, from the property (iv) of lemma~\ref{l.delta}, the curve $(U'\cap I')\setminus \{x'\}$ is contained in the interior of $\Delta$ and the stable branch of $x'$ which does not meet $I'$ is disjoint from $\Delta$. One can thus require that $\varphi(\Delta'\cap U')\subset \Delta'$ so that $f^\ell\circ \varphi(\Delta'\cap U')\subset \Delta$. One can furthermore require that $\Delta'$ is sent in a small neighborhood of $I'$. By our choice of $\ell$, the compact arc $f^\ell(I' \setminus U')$ is contained in the interior of $\Delta$, hence one gets $f^\ell\circ \varphi(\Delta'\setminus U')\subset \Delta$. This shows that $g:=f^\ell\circ \varphi$ satisfies $g(\Delta')\subset \Delta$. A similar construction in $D''$ implies that $g(\Delta'')\subset \Delta$. Since $f^\ell(\Delta)\subset \Delta$ by (ii), this implies $g(\Delta)\subset \Delta$, hence (a). The properties (b) and (d) follow from the definition of $\varphi$ and $g$. Since $\varphi(\Delta')$ is contained in a small neighborhood of $I'$ and since $f^\ell(I')\subset D''$, one gets $g(\Delta')\subset D''$. Similarly one gets $g(\Delta'')\subset \mathbb{D}\setminus \overline{D''}$. Hence property (c) holds. \end{proof} From (a), the sequence $g^n(\Delta)$ is decreasing and the intersection $\widetilde \Delta:=\bigcap_{n\geq 0} g^n(\Delta)$ is $g$-invariant. From property (i), and as the intersection of a decreasing sequence of cellular sets, it is cellular. Together with (b), one can apply Cartwright-Littlewood's theorem (proposition~\ref{p.CL}): the orientation-preserving homeomorphism $g$ of the plane has a fixed point $q\in \widetilde \Delta\subset \Delta$. From (c), the fixed point does not belong to $\Delta'\cup \Delta''$, hence it belongs to $R\cap \Delta$. From (d), it is a periodic point of $f$ whose period divides $\ell$, as we wanted. The proof of the theorem follows in the second case. \paragraph{\it Third case: $\Lambda$ is disjoint from the two stable branches of almost every $x$.} We adapt the proof done in the second case.
We can first reduce to the setting of the figure~\ref{f.single-cross}: $W^s_\mathbb{D}(x)$ separates two points $x'$ and $x''=f(x')$; $R$ is the strip bounded by $W^s_\mathbb{D}(x')$ and $W^s_\mathbb{D}(x'')$; we have to find a periodic point $q$ in $R\cap \Lambda$. In this case, for any iterate $f^k(x')$, the set $\Lambda$ intersects $W^s_\mathbb{D}(f^k(x'))$ only at $f^k(x')$. We choose $\ell\geq 1$ large such that $f^\ell(x')\in D''$ and $f^{\ell+1}(x')\in \mathbb{D}\setminus \overline{D''}$. Note that the sets $(\Lambda\cap D')\cup \{x'\}$, $(\Lambda\cap R)\cup \{x',x''\}$, $(\Lambda\cap D'')\cup \{x''\}$ are compact, connected, and only intersect at $x'$ or $x''$. The image by $f^\ell$ of the second intersects both $D''$ and $\mathbb{D}\setminus \overline{D''}$: consequently it contains $x''$. One deduces that the image $f^\ell(\Lambda\cap \overline{D'})$ does not intersect $W^s_\mathbb{D}(x'')$, hence is contained in $D''$. For the same reason the image $f^\ell(\Lambda\cap \overline{D''})$ does not intersect $W^s_\mathbb{D}(x'')$, hence is contained in $\mathbb{D}\setminus \overline{D''}$. This proves that $f^\ell$ has no fixed point in $(D'\cup D'')\cap \Lambda$. By Cartwright-Littlewood's theorem (proposition~\ref{p.CL}) it has a fixed point in the cellular set $\Lambda$, hence in $\Lambda\cap R$ as wanted. The proof of theorem~\ref{t.measure local} is now complete. \qed \section{No cycle} \label{no cycle section} One says that a diffeomorphism $f$ admits a \emph{cycle of periodic orbits} if there exists a sequence of periodic orbits $\mathcal{O}_0$, $\mathcal{O}_1$, $\dots,$ $\mathcal{O}_n=\mathcal{O}_0$ such that for each $i=0,\dots,n-1$, the unstable set of $\mathcal{O}_i$ accumulates on $\mathcal{O}_{i+1}$. The goal of this section is to prove the following: \begin{theorem}\label{t.cycle} A mildly dissipative diffeomorphism of the disc has zero topological entropy if and only if it does not admit any cycle of periodic orbits.
\end{theorem} This result can be localized. A set $U$ is \emph{filtrating} for $f$ if it may be written as the intersection of two open sets $U=V\cap W$ such that $f(\overline V)\subset V$ and $f^{-1}(\overline W)\subset W$. \addtocounter{theorem}{-1} \renewcommand{\thetheorem}{\Alph{theorem}'} \begin{theorem}[Local version]\label{t.cycle2} Let $f$ be a mildly dissipative diffeomorphism of the disc and $U$ be a filtrating set. The restriction of $f$ to $U$ has zero topological entropy if and only if it does not admit any cycle of periodic orbits contained in $U$. \end{theorem} \renewcommand{\thetheorem}{\Alph{theorem}} The non-existence of cycles of periodic orbits extends to fixed arcs. One says that a diffeomorphism $f$ admits a \emph{cycle of fixed arcs} if there is a sequence of disjoint fixed arcs $I_0,I_1,\dots, I_{n}=I_0$ such that each arc $I_i$ admits a $f$-invariant unstable branch which accumulates on $I_{i+1}$. \begin{corollary}\label{c.cycle} Consider a mildly dissipative diffeomorphism $f$ of the disc. If $f$ has zero topological entropy, then it does not admit any cycle of fixed arcs. The same property holds inside any filtrating set $U$. \end{corollary} \begin{proof} Let us assume that $f$ admits such a sequence of fixed arcs: for each $i$, there exists a fixed point $p_i\in I_i$ with a $f$-invariant unstable branch $\Gamma_i$ that accumulates on $I_{i+1}$. We may assume that the length $n$ is minimal. For each interval $I_i$, we claim that one component of $\mathbb{D}\setminus W^s_\mathbb{D}(I_i)$ contains all the other arcs $I_j$. Indeed, let $\Gamma_i$ be the $f$-invariant unstable branch of $I_i$ that accumulates on $I_{i+1}$. It is the unstable branch of an endpoint $p_i$ of $I_i$. Since $f$ has no cycle of fixed points, $\Gamma_i$ is disjoint from $W^s_{\mathbb{D}}(p_i)$. As a consequence, $\Gamma_i$ is contained in a component $C_i$ of $\mathbb{D}\setminus W^s_\mathbb{D}(I_i)$, which also contains $I_{i+1}$.
If there exist arcs $I_j$, $i+1<j$, that are not contained in $C_i$, one considers the first one; consequently the unstable branch of $I_{j-1}$ crosses $W^s_\mathbb{D}(I_i)$ and the length $n$ of the cycle would not be minimal. From the previous paragraph, the point $p_{i-1}$ is contained in the component $V_i$ of $\mathbb{D}\setminus W^s_\mathbb{D}(I_{i})$ which contains $\Gamma_i$, and which is also the component bounded by $W^s_\mathbb{D}(p_{i})$. Otherwise, if $p_{i-1}$ is not contained in $V_i$, since $p_0$ is in $V_i$ (because the unstable branch of $p_{n-1}$ that accumulates on $I_0$ is in $V_0$), it follows that $I_{i'}$ is in $V_i$ for some $i'<i$; but in that case, its unstable branch involved in the cycle has to cross the local stable manifold of $p_i$, and so the cycle would not be minimal; a contradiction. One deduces that $\Gamma_{i-1}$ accumulates on $p_{i}$ for each $i$. The fixed points $p_0,p_1,\dots,p_n$ define a cycle and by theorem~\ref{t.cycle} $f$ has positive topological entropy. Using theorem~\ref{t.cycle2}, one gets the same property inside filtrating sets. \end{proof} A simple example of a cycle is a $1$-cycle associated to a fixed point: the unstable manifold of the fixed point accumulates on the stable one. In that context, in \cite{pixton}, it was proved that either there exists a transversal homoclinic point (and so the topological entropy is positive) or such an intersection can be created by a smooth perturbation. Under the hypothesis of mild dissipation, we prove that if such a $1$-cycle exists, the topological entropy is positive even if there is no transverse intersection between the invariant manifolds of the fixed point. The end of this section is devoted to the proof of theorems~\ref{t.cycle} and~\ref{t.cycle2}. \subsection{Homoclinic orbit of a fixed point} We first consider the case of a cycle of a unique fixed point with a homoclinic orbit.
\begin{lemma}\label{l.heteroclinic-cycle} Let $f$ be a mildly dissipative diffeomorphism of the disc. If there exists a point $p$ with a fixed unstable branch $\Gamma$ which intersects $W^s_\mathbb{D}(p)$, then the topological entropy of $f$ is positive. \end{lemma} \begin{proof} Let us consider $x\in W^s_\mathbb{D}(p)\cap \Gamma$. We denote by $\gamma$ the arc of $\Gamma$ which connects $p$ to $x$. Up to replacing $f$ by $f^2$, one can assume that the eigenvalues of $Df(p)$ are positive, with $0<\lambda<1\leq \mu$. Since $f$ contracts the volume, $\lambda\mu<1$. Let us choose some point $z^s\in W^s_\mathbb{D}(p)$ such that $x$ belongs to the interior of the segment $[z^s,f(z^s)]$ in $W^s_\mathbb{D}(p)$ and such that $z^s$ is not accumulated by $\Gamma$ (the point $z^s$ can be chosen as one that has a backward orbit outside the disc). One chooses two small $C^1$ arcs $\alpha$, $\alpha'$ transverse to $W^s_\mathbb{D}(p)$ at $z^s$ and $f(z^s)$. Similarly, one chooses some point $z^u\in \Gamma$ such that $x$ belongs to the interior of the segment $[z^u,f(z^u)]$ in $\Gamma$ and such that the orbit of $z^u$ does not intersect the strong stable arc $W^s_\mathbb{D}(p)$. One fixes two small arcs $\beta$, $\beta'$ transverse to $\gamma$ at $z^u$ and $f(z^u)$. For $n$ large, there exist four arcs $B\subset f^{-n}(\beta)$, $B'\subset f^{-n}(\beta')$ and $A\subset\alpha$, $A'\subset \alpha'$ which bound a rectangle $R$ whose $n$ first iterates remain close to the forward orbit of $x$ and the backward orbit of $x$. See Figure~\ref{f.cross}.
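Let us record why the choice of the integer $n$ in the estimates below is possible; this is an elementary computation (spelled out here for clarity) which uses only the dissipation $\lambda\mu<1$. The required inequality $C^{-1}(1+\varepsilon)^{-n}\mu^{-n}>C(1+\varepsilon)^{n}\lambda^{n}$ is equivalent to
$$\big((1+\varepsilon)^{2}\,\lambda\mu\big)^{n}\;<\;C^{-2}.$$
Since $\lambda\mu<1$, one may first fix $\varepsilon>0$ so small that $(1+\varepsilon)^{2}\lambda\mu<1$; the left-hand side then tends to $0$ as $n\to+\infty$, so the inequality holds for every $n$ large enough.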
\begin{figure}[ht] \begin{center} \includegraphics[scale=0.37]{cross.pdf} \begin{picture}(0,0) \put(-240,114){$W^s_\mathbb{D}(p)$} \put(-198,115){$\alpha$} \put(-47,115){$\alpha'$} \put(-15,115){$p$} \put(-55,0){$\Gamma$} \put(-80,8){$\beta'$} \put(-170,8){$\beta$} \put(-190,85){$R$} \put(-162,40){$f^{n}(R)$} \put(-95,120){$x$} \end{picture} \end{center} \vspace{-0.5cm} \caption{ Proof of lemma~\ref{l.heteroclinic-cycle}.\label{f.cross}} \end{figure} For any $\varepsilon>0$ there is $C>0$ such that if $n$ has been chosen large enough, $$\min\big(d(B, W^s_\mathbb{D}(p)),d(B', W^s_\mathbb{D}(p))\big)\geq C^{-1}(1+\varepsilon)^{-n}\mu^{-n},$$ $$\max \big(d(f^{n}(A),\gamma), d(f^{n}(A'),\gamma)\big) \leq C(1+\varepsilon)^{n}\lambda^{n}.$$ One chooses the integer $n$ such that $$C^{-1}(1+\varepsilon)^{-n}\mu^{-n}>C(1+\varepsilon)^{n}\lambda^{n}.$$ This is possible since the dissipation gives $\lambda\mu<1$. In particular $f^{n}(R)$ ``crosses'' $R$. One deduces that for any curve $\delta$ in $R$ which connects the arcs $B,B'$, the image $f^n(\delta)$ contains two curves $\delta'_1,\delta'_2\subset R$ which also connect the arcs $B,B'$, and which are $\varepsilon$-separated for some $\varepsilon>0$ independent of $\delta$. One can thus iterate $\delta$ and apply the property inductively: the image $f^{kn}(\delta)$ contains $2^k$ pairwise $\varepsilon$-separated curves connecting $B,B'$, which yields at least $2^k$ $(kn,\varepsilon)$-separated points. This implies that the topological entropy is positive (indeed $h_{top}(f)\geq \frac{\log 2}{n}$). \end{proof} \subsection{Periods and heteroclinic orbits.} The following proposition allows one to obtain (topological) transverse heteroclinic intersections between periodic orbits with different periods and will be used again in other sections. \begin{proposition}\label{p.heteroclinic} Let $f$ be a mildly dissipative diffeomorphism of the disc which preserves the orientation and has zero topological entropy. Let $p$ be a fixed point having a real eigenvalue larger than or equal to $1$ and let $q$ be a periodic point with an unstable branch $\Gamma_q$ which is not fixed by $f$.
If $\Gamma_q$ accumulates on $p$, then it intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$. \end{proposition} \begin{proof} First observe that the period of $q$ is larger than one: if $q$ is fixed, since the branch $\Gamma_q$ that accumulates on the fixed point is not invariant, then both unstable branches, one in each component of $\DD\setminus W^s_\DD(q)$, accumulate on the same fixed point; a contradiction. Let us assume now by contradiction that $\Gamma_q$ intersects only one component of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$. Since the largest eigenvalue at $p$ is positive, the components are locally preserved by $f$. Hence each unstable branch $f^k(\Gamma_q)$ intersects the same components as $\Gamma_q$. This proves that all the iterates of $q$ are contained in the same component $U$ of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$. The set $(\partial U)\setminus \{p\}$ is an arc that may be parametrized by $\mathbb{R}$, hence may be endowed with an order $<$. To each iterate $f^k(q)$, since the other iterates $f^j(q)$ with $j\neq k$ are in the connected component of $\mathbb{D}\setminus W^s_\mathbb{D}(f^k(q))$ that contains $p$ (otherwise the unstable branch of some $f^j(q)$ would have to cross the stable manifold of $f^k(q)$), one associates the component $V_k$ of $\mathbb{D}\setminus W^s_\mathbb{D}(f^k(q))$ which does not contain $p$, nor the other iterates of $q$. The components $V_k$ are thus disjoint, hence ordered by their traces on $\partial U$. This induces an ordering on the iterates of $q$.
\begin{figure} \includegraphics[width=5cm,angle=0]{period-and-cycle.pdf} \hspace{2cm} \includegraphics[width=5cm,angle=0]{period-and-cycle2.pdf} \put(-300,57){\small $p$} \put(-250,98){\small $q$} \put(-256,70){\tiny $f(q)$} \put(-286,91){\small $x$} \put(-275,84){\small $y$} \put(-62,145){\small $z$} \put(-130,60){\small $p$} \put(-45,98){\small $q$} \put(-49,70){\tiny $f(q)$} \put(-73,84){\small $y$} \caption{Cases of the proof of proposition~\ref{p.heteroclinic}: $W^u(q)$ accumulates on $W^s(x)\setminus \{x\}$ or not.\label{f.period-and-cycle}} \end{figure} \paragraph{\it First case. The branch $\Gamma_q$ accumulates on a point $x$ of $W^s_{\mathbb{D}}(p)$ which is different from $p$.} The iterates $f^k(x)$ converge to $p$ as $k\to +\infty$. Since $f$ preserves the orientation, these iterates belong to the same branch of $W^s_\mathbb{D}(p)$. Up to modifying the parametrization of the boundary of $U$, one can assume that the sequence $f^k(x)$ is increasing for the order $<$. We choose $y\in \Gamma_q$ close to $x$, a small arc $\delta$ connecting $y$ to $x$ and consider the arc $\gamma\subset \Gamma_q$ connecting $q$ to $y$. This gives an oriented arc $\sigma:=\delta\cup\gamma$ connecting $q$ to $x$ in $U$. The set $U\setminus (\overline V_0\cup \sigma)$ has two connected components that are Jordan domains. One of them (denoted by $O$) contains in its boundary all the forward iterates of $x$ and the point $p$. Up to replacing $q$ by another point in its orbit, one can assume that $O$ contains all the iterates $f^k(q)\neq q$, that is $f^k(q)<q$. See figure~\ref{f.period-and-cycle}. Since the endpoints of $\sigma$ (resp. of $f(\sigma)$) do not belong to $f(\sigma)$ (resp. to $\sigma$), the algebraic intersection number between $\sigma$ and $f(\sigma)$ is well defined. Since $\sigma$ is contained in the boundary of $O$ and since the endpoints of $f(\sigma)$ belong to $O$, the algebraic intersection number between $\sigma$ and $f(\sigma)$ is zero.
This implies that for any $k\geq 0$, the intersection number between $f^k(\sigma)$ and $f^{k+1}(\sigma)$ is zero. This proves that in $\overline U\setminus (f^k(\sigma)\cup W^s_\mathbb{D}(f^k(q)))$, the points $f^{k+1}(x)$ and $f^{k+1}(q)$ are in the same connected component. Since $f^k(x)<f^{k+1}(x)$, one deduces that $f^{k+1}(q)<f^{k}(q)$ for any $k\geq 0$. This is a contradiction since when $k+1$ coincides with the period of $q$, we have $f^{k+1}(q)=q>f^k(q)$. \paragraph{\it Second case. The accumulation set of $\Gamma_q$ is disjoint from $W^s_{\mathbb{D}}(p)\setminus \{p\}$.} We modify the previous argument. Note that in this case the stable set of $p$ contains a neighborhood of $W^s_\mathbb{D}(p)$ in $\overline U$. Moreover this neighborhood is foliated by strong stable curves, that we still denote by $W^s_\mathbb{D}(z)$. We choose $y\in \Gamma_q$ in the stable set of $p$ and consider the oriented arc $\sigma\subset \Gamma_q$ connecting $q$ to $y$. One can choose $y$ such that the arc $\sigma$ does not intersect the component of $U\setminus W^s_{\mathbb{D}}(y)$ containing $p$. Let $L$ be the half curve in $W^s_\mathbb{D}(y)$ connecting $y$ to a point $z$ in $\partial U$. We can choose the endpoint $z$ such that $V_0<z$. Since $V_1<V_0$, one deduces that $f(q)$ and $f(y)$ belong to the same connected component of $U\setminus (V_0\cup \sigma\cup L)$. See figure~\ref{f.period-and-cycle}. In particular the algebraic intersection number between $\sigma$ and $f(\sigma)$ is zero. For any $k\geq 0$, let $L_k$ be the half curve in $W^s_\mathbb{D}(f^k(y))$ connecting $f^k(y)$ to a point $z_k$ in $\partial U$ such that $V_k<z_k$. Since $f^{k+1}(y)$ belongs to the strip bounded by $W^s_{\mathbb{D}}(p)$ and $W^s_{\mathbb{D}}(f^k(y))$, we have $z_k<z_{k+1}$. Since the algebraic intersection number between $f^k(\sigma)$ and $f^{k+1}(\sigma)$ is zero, one deduces that $V_{k+1}$ and $f^{k+1}(y)$ belong to the same component of $U\setminus (V_k\cup \sigma\cup L_k)$.
In particular $V_{k+1}<V_k$ for any $k\geq 0$. As in the previous case, this is a contradiction. \end{proof} \begin{remark} In the case where $f$ does not preserve the orientation, the same statement applies if one assumes that the period of $\Gamma_q$ is larger than $2$ (one applies the previous proposition to $f^2$). \end{remark} \subsection{Cycles of fixed points} \begin{lemma}\label{l.cycle-fixed} If $f$ has a cycle of periodic orbits, there is an iterate $f^m$, $m\geq 1$, which has a cycle of fixed points. More precisely, there exists a fixed point $p$ for $f^m$ with a fixed unstable branch $\Gamma$ whose accumulation set contains $\Gamma$. \end{lemma} \begin{proof} Let $\mathcal{O}_0,\mathcal{O}_1,\dots,\mathcal{O}_n=\mathcal{O}_0$ be a cycle of periodic orbits. We extend periodically the sequence $(\mathcal{O}_k)$ to any $k\in \mathbb{N}$. By invariance of the dynamics, one deduces that for each $i$ and each $p_i\in \mathcal{O}_i$, there exists an unstable branch $\Gamma_i$ of $p_i$ which accumulates on a point of $\mathcal{O}_{i+1}$. One deduces that for any $p_0\in \mathcal{O}_0$, there exist points $p_k\in \mathcal{O}_k$, $k\geq 0$, such that an unstable branch of $p_{k-1}$ accumulates on $p_{k}$ for each $k\geq 1$. All the points $p_{\ell n}$ belong to $\mathcal{O}_0$, hence two of them $p_{\ell_1n}, p_{\ell_2n}$ must coincide. There exists $m\geq 1$ such that all the points in $\cup_k \mathcal{O}_k$ are fixed by $f^m$. The sequence $p_{\ell_1n},p_{\ell_1n+1},\dots,p_{\ell_2n}$ is a cycle of fixed points for $f^m$. This proves the first assertion. \medskip Let us consider a cycle of fixed points for $g=f^m$ with minimal length. Replacing $m$ by $2m$, one can also assume that all their unstable branches are fixed.
For each fixed point $p_i$ in the cycle, the other fixed points are all contained in the same component $U_i$ of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$: otherwise, one finds a point $p_j\neq p_{i-1}$ with an unstable branch which meets both connected components and one contradicts the minimality of the cycle. Each fixed point $p_i$ has an unstable branch $\Gamma_i$ whose accumulation set contains $p_{i+1}$. If $\Gamma_i$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$, by proposition~\ref{p.transitive} the accumulation set contains $\Gamma_i$ and the second assertion of the lemma holds. We are thus reduced to the case where $\Gamma_i$ is contained in the closure of $U_i$. Since $p_i$ and $\Gamma_{i+1}$ are contained in $U_{i+1}$, one deduces that the accumulation set of $\Gamma_i$ contains a point of $\Gamma_{i+1}$; by proposition~\ref{p.transitive}, it contains $\Gamma_{i+1}$. Hence the cycle is not minimal, unless $p_i=p_{i+1}$, i.e. the cycle has only one fixed point. The second assertion holds. \end{proof} A sequence of fixed unstable branches $\Gamma_0,\Gamma_1,\dots,\Gamma_n=\Gamma_0$ associated to fixed points $p_0,\dots,p_n$ is a \emph{cycle of unstable branches} if for each $0\leq i<n$, the accumulation set of $\Gamma_i$ contains $\Gamma_{i+1}$. By proposition~\ref{p.transitive}, this implies that for each $i,j$, the accumulation set of $\Gamma_i$ contains $\Gamma_j$. We generalize lemma~\ref{l.heteroclinic-cycle} to cycles of unstable branches. \begin{lemma}\label{l.heteroclinic-cycle2} Let $f$ be a mildly dissipative diffeomorphism of the disc. If there exists a sequence of fixed unstable branches $\Gamma_0,\Gamma_1,\dots,\Gamma_n=\Gamma_0$ associated to fixed points $p_0,\dots,p_n$ such that $\Gamma_i$ intersects $W^s_\mathbb{D}(p_{i+1})$ for each $0\leq i<n$, then the topological entropy of $f$ is positive.
\end{lemma} \begin{proof} From lemma~\ref{l.heteroclinic-cycle}, one can assume that each unstable branch $\Gamma_i$ is disjoint from $W^s_\mathbb{D}(p_i)$ (otherwise the entropy is already positive); hence, for each point $p_i$, the unstable branch $\Gamma_i$ is contained in a connected component $U_i$ of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$. Since the accumulation set of $\Gamma_i$ contains the other unstable branches, all the $p_j$ and the $\Gamma_j$ are contained in $\overline U_i$. Let $U$ be the intersection of the sets $U_i$: it is a connected component of $\mathbb{D}\setminus \cup_i W^s_\mathbb{D}(p_i)$ whose closure contains all the $\Gamma_i$. We now argue as for lemma~\ref{l.heteroclinic-cycle}. Let us assume that $\Gamma_i$ intersects $W^s_\mathbb{D}(p_{i+1})$ at some point $x_i$. We build a rectangle $R_i\subset U$ that stretches along a fundamental domain of $W^s_\mathbb{D}(p_{i+1})$ containing $x_i$. One chooses $n_i\geq 1$ large such that $f^{n_i}(R_i)$ crosses $R_{i+1}$. The same argument as before applies following the periodic sequence of rectangles $R_0,R_1,\dots,R_n=R_0$. \end{proof} \subsection{Pixton discs} Let $p$ be a fixed point with a fixed unstable branch $\Gamma$ which is contained in its accumulation set. We introduce a notion similar to a construction in~\cite{pixton}, which improved~\cite{robinson1}. A compact set $D\subset \mathbb{D}$ is a \emph{(topological) disc} if it is homeomorphic to the unit disc. \begin{definition}\label{d.pixton0} A \emph{Pixton disc associated to $\Gamma$} is a disc $D$ bounded by three $C^1$ arcs:\begin{itemize} \item[--] an arc $\gamma\subset\Gamma$ such that $p$ is one endpoint, \item[--] an arc $\sigma\subset W^s_\mathbb{D}(p)$ whose endpoints are $p$ and a point $x\neq p$ accumulated by $\Gamma$, \item[--] a closing arc $\delta$ disjoint from $f(\delta)$, joining $\sigma$ and $\gamma$ such that $\delta\cap W^s_\mathbb{D}(p)=\{x\}$. \end{itemize} \end{definition} Note that the last property implies that $f(\delta\setminus \{x\})$ is contained in the interior of $D$.
\begin{figure}[ht] \begin{center} \includegraphics[scale=0.24]{disc.pdf} \begin{picture}(0,0) \put(-85,55){$\sigma$} \put(-85,75){$p$} \put(-55,80){$\gamma$} \put(-76,12){$\delta$} \put(-90,28){$x$} \put(-40,50){$D$} \put(-110,120){$\DD$} \end{picture} \end{center} \vspace{-0.5cm} \caption{A Pixton disc. \label{f.disc}} \end{figure} \begin{lemma}\label{l.pixton-cycle} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero topological entropy and let $p$ be a fixed point having a fixed unstable branch $\Gamma$ which is contained in its accumulation set. Then, there exists an aperiodic ergodic measure $\mu$ such that \begin{itemize} \item[--] $\mu(D)=0$ for any Pixton disc $D$ associated to $\Gamma$, \item[--] the closure of $\Gamma$ contains the support of $\mu$. \end{itemize} \end{lemma} \begin{proof} Note that it is enough to prove the lemma for $f^2$, hence one can assume that $f$ preserves the orientation. Let us consider a cycle of fixed unstable branches $\Gamma_0,\Gamma_1,\dots,\Gamma_n=\Gamma_0$ associated to fixed points $p_0,p_1,\dots,p_n=p_0$ such that $p_0=p$, $\Gamma_0=\Gamma$, and whose cardinality $n$ is maximal. This exists since by proposition~\ref{p.group} all the fixed points belong to finitely many disjoint fixed arcs, so that the cardinalities of cycles are uniformly bounded. By lemma~\ref{l.heteroclinic-cycle2}, there exists $p_i$ such that $W^s_\mathbb{D}(p_i)$ is disjoint from all the unstable branches $\Gamma_j$ for $1\leq j\leq n$. By assumption there exists $x\in W^s_\mathbb{D}(p_i)$ which is accumulated by all the $\Gamma_j$. Note that the preimages of $x$ are all well-defined in $\mathbb{D}$ (since $x$ is the limit of points in $\Gamma_i$ having infinitely many preimages in $\mathbb{D}$ and since $f(\mathbb D)$ is contained in the interior of $\mathbb D$). We introduce $K:=\alpha(x)$. By construction $K$ is contained in the accumulation set of $\Gamma$.
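Here $\alpha(x)$ denotes the $\alpha$-limit set of $x$ (we recall the standard definition for the reader's convenience), which is well defined since all the preimages $f^{-k}(x)$ exist in $\mathbb{D}$:
$$\alpha(x)\;=\;\bigcap_{n\geq 0}\;\overline{\big\{f^{-k}(x)\;:\;k\geq n\big\}}.$$
It is a non-empty, compact, invariant subset of $\mathbb{D}$.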
The goal is to show that there exists an aperiodic ergodic measure supported on $K$ satisfying the conclusions of the lemma. \begin{claim}\label{c.back} Let $D$ be a Pixton disc for an iterate $g=f^m$, $m\geq 1$: \begin{itemize} \item[--] associated to an unstable branch $\Gamma_D$ contained in the accumulation set of $\Gamma_i$, \item[--] whose closing arc $\delta\subset \partial D$ does not meet both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$. \end{itemize} Then the interior of $D$ does not intersect the orbit of $x$. \end{claim} \begin{proof} Let $q$ be the periodic point associated to $D$, which is fixed by $g$, and let $\gamma\cup\sigma\cup\delta$ be the boundary of $D$. Let $\{x_q\}=\delta\cap \sigma$. If $f^{-k}(x)$ belongs to the interior of $D$, the stable manifold $W^s_\mathbb{D}(f^{-k}(x))$ intersects the interior of $D$ and its complement. Hence $\delta\cup \gamma$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(f^{-k}(x))$. Note that $\gamma\subset \Gamma_D$ does not intersect both of these components: since $\Gamma_i$ accumulates on $\Gamma_D$, it would imply that $\Gamma_i$ does also, and then by iteration that $\Gamma_i$ intersects $W^s_\mathbb{D}(x)$, contradicting our assumptions. As a consequence $\delta\setminus \{x_q\}$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(f^{-k}(x))$. Since $g(\delta\setminus \{x_q\})=f^m(\delta\setminus \{x_q\})$ is contained in the interior of $D$, one deduces that $W^s_\mathbb{D}(f^{-k+m}(x))$ intersects the interior of $D$. By induction this implies that $W^s_\mathbb{D}(f^{-k+\ell m}(x))$ intersects the interior of $D$ for any $\ell\geq 0$, and that $\delta\setminus\{x_q\}$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(f^{-k+\ell m}(x))$. But for $\ell$ large $W^s_\mathbb{D}(f^{-k+\ell m}(x))=W^s_\mathbb{D}(p_i)$.
One deduces that $\delta\setminus \{x_q\}$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$, a contradiction. \end{proof} \begin{claim}\label{c.modify-disc} Let $D$ be a Pixton disc associated to $\Gamma$, with a boundary $\partial D:=\gamma\cup\sigma\cup\delta$ and let $\varepsilon>0$. For $n$ large enough $f^n(D)$ is contained in a Pixton disc $D'$ whose closing arc $\delta'$ has diameter smaller than $\varepsilon$. \end{claim} \begin{proof} Replacing $D$ by $f(D)$ if necessary, one can suppose that $f^{-1}(x)$ belongs to $W^s_\mathbb{D}(p)$. Indeed let $\delta_1$ be the connected component of $\delta\setminus \gamma$ which intersects $\sigma$ and let $\gamma_1$ be the arc in $f(\gamma)$ connecting $\gamma$ to $\delta_1$. The disc bounded by $\sigma\cup\delta_1\cup \gamma_1$ is a Pixton disc which contains $f(D)$. One can repeat that construction and define for each $n\geq 1$ a Pixton disc $D_n$ containing $f^n(D)$ with a closing arc $\delta_n\subset \delta$. Note that $f^n(\gamma)\subset \partial f^n(D)$ has points arbitrarily close to $f^{-1}(x)$ as $n$ gets large. One can thus connect $f^n(\gamma)$ to $f^{-1}(x)$ by an arc $\delta'$ with small diameter and build a Pixton disc bounded by $\sigma'=f^{-1}(\sigma)$, $\delta'$ and an arc $\gamma'\subset f^n(\gamma)$, which by construction contains $f^n(D)$. \end{proof} Let $D$ be any Pixton disc associated to $\Gamma$ and consider a Pixton disc $D'\supset f^n(D)$ given by claim~\ref{c.modify-disc}. The assumptions of claim~\ref{c.back} are also satisfied by $D'$, hence, for any ergodic measure $\mu$ supported on $K$, it holds that $\mu(\operatorname{Interior}(D'))=0$. This gives $\mu(\operatorname{Interior}(D))=0$. Hence either $\mu(D)=0$, or $\mu$ is supported on the orbit of the periodic point associated to $D$. In particular if $K$ supports an aperiodic measure, the conclusion of the lemma holds. We are thus reduced to supposing that all the ergodic measures on $K$ are periodic and deriving a contradiction.
\begin{claim} For any periodic point $q\in K$, there exists $z\in (W^u(q)\setminus \{q\})\cap K$. \end{claim} \begin{proof} By definition of $K=\alpha(x)$, if the conclusion does not hold, then $x$ belongs to an unstable branch $\Gamma(q)$ of $q$. In the case where $\Gamma(q)$ is fixed, since $x$ is accumulated by $\Gamma_i$, proposition~\ref{p.transitive} proves that $\Gamma(q)$ accumulates on $\Gamma_i$ and is accumulated by $\Gamma_i$. Since the periodic cycle $\Gamma_0,\dots,\Gamma_n$ has maximal length, $\Gamma(q)$ coincides with one of the $\Gamma_j$. This is a contradiction since $x\in \Gamma(q)\cap W^s_\mathbb{D}(p_i)$ but $W^s_\mathbb{D}(p_i)$ is disjoint from all the $\Gamma_j$, by our choice of $p_i$. In the case where $q$ has larger period, proposition~\ref{p.heteroclinic} implies (since we have reduced to the case where $f$ preserves the orientation) that $\Gamma(q)$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$. Since the accumulation set of $\Gamma_i$ contains $\Gamma(q)$, one deduces that $\Gamma_i$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$, which contradicts the fact that it is disjoint from $W^s_\mathbb{D}(p_i)$. \end{proof} One deduces that any periodic point $q_0\in K$ admits an unstable branch $\Gamma(q_0)$ (fixed by an iterate $f^n$ which may not be $f$) which is accumulated by $\Gamma$. The accumulation set of $\Gamma(q_0)$ contains a periodic point $q_1\in K$ and then an unstable branch $\Gamma(q_1)$ accumulated by $\Gamma(q_0)$. One can build in this way infinite sequences of unstable branches $(\Gamma(q_k))_{k\in \mathbb{N}}$. \paragraph{\it First case: there exists a periodic sequence of unstable branches.} By proposition~\ref{p.transitive}, there exists a branch $\Gamma(q_k)$ which is fixed by some iterate $f^n$ and whose accumulation set contains $\Gamma(q_k)$. One can thus build a Pixton disc $D$ for $f^n$ associated to this branch.
One can choose the closing arc $\delta$ with arbitrarily small diameter so that it is disjoint from one of the components of $\mathbb{D}\setminus W^s_\mathbb{D}(p_i)$. By construction the interior of $D$ contains points of $K$, hence arbitrarily large backward iterates of $x$. This contradicts the claim~\ref{c.back}. \paragraph{\it Second case: there is no periodic sequence of unstable branches.} Since each unstable branch $\Gamma(q_k)$ intersects $K$, it cannot be contained in a normally hyperbolic arc fixed by some iterate of $f$. By proposition~\ref{p.group}, there are at most finitely many unstable branches $\Gamma(q_k)$ for each period. One can thus consider a sequence $(\Gamma(q_k))$ with the following property: for any $N\geq 1$, there exists $\ell\geq 1$ such that for any $k\geq \ell$, all the periodic points $q\in K$ in the accumulation set of $\Gamma(q_k)$ have period larger than $N$. If $K_k$ denotes the intersection of $K$ with the orbit of the accumulation set of $\Gamma(q_k)$, we get a decreasing sequence of compact sets which do not contain periodic points of period $N$ for $k$ large enough. Let $K_\infty$ be the intersection of all the $K_k$: by construction it does not contain any periodic point; hence it supports an aperiodic measure, which contradicts our assumptions. \medskip The proof of the proposition is now complete. \end{proof} \subsection{Proof of theorems~\ref{t.cycle} and~\ref{t.cycle2}} We first prove theorem~\ref{t.cycle}. It is well known that if $f$ has positive entropy, then it admits horseshoes~\cite{Ka} and in particular a cycle of periodic orbits. Conversely, let us assume that $f$ has a cycle of periodic orbits. Up to replacing $f$ by an iterate, one can suppose (by lemma~\ref{l.cycle-fixed}) that $f$ has a fixed unstable branch $\Gamma$ which is contained in its accumulation set.
Lemma~\ref{l.pixton-cycle} then gives an aperiodic ergodic measure $\mu$ supported on the closure of $\Gamma$ such that $\mu(D)=0$ for any Pixton disc $D$ associated to $\Gamma$. Let us denote by $\mathbb{D}_\Gamma$ the closure of the connected component of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$ which contains $\Gamma$. By our assumption, the support of $\mu$ is contained in $\mathbb{D}_\Gamma$. \begin{lemma}\label{l.support} There exists a neighborhood $U$ of $p$ such that $\mu(U)=0$. In particular, the support of $\mu$ is disjoint from $\Gamma$ and $W^s_\mathbb{D}(p)$. \end{lemma} \begin{proof} The measure $\mu$ is supported inside $\mathbb{D}_\Gamma$. Moreover we have $\mu(p)=0$ since $\mu$ is not atomic. Hence if one assumes that any neighborhood of $p$ has positive $\mu$-measure, there exists some point $x\neq p$ in $W^s_\mathbb{D}(p)$ which belongs to the support of $\mu$. One deduces that $x$ is accumulated by $\Gamma$. One can thus build a Pixton disc $D$ by closing near $f^{-1}(x)$: the disc contains a neighborhood of $x$ in $\mathbb{D}_\Gamma$, hence has positive measure. This contradicts lemma~\ref{l.pixton-cycle}. \end{proof} Let $W^{s,+}_\mathbb{D}(p)$ be one of the components of $W^s_\mathbb{D}(p)\setminus \{p\}$ which contains points accumulated by $\Gamma$. Let $\Gamma_{loc}$ be a local unstable manifold of $p$, i.e. a neighborhood of $p$ inside $\Gamma$ for the intrinsic topology. It separates small neighborhoods of $p$ in $\mathbb{D}_\Gamma$ into two components: we denote by $U^+$ the component which meets $W^{s,+}_\mathbb{D}(p)$. See Figure~\ref{f.quadrant}. 
\begin{figure}[ht] \begin{center} \includegraphics[width=5cm]{quadrant.pdf} \begin{picture}(0,0) \put(-135,20){$U$} \put(-92,70){$p$} \put(-125,140){$W^{s,+}(p)$} \put(-10,65){$\Gamma_{loc}$} \put(-50,80){$U^+$} \put(-40,20){$W_{n-1}$} \put(-70,5){$W_n$} \end{picture} \end{center} \caption{Quadrant separated by $\Gamma$ and $W^{s,+}(p)$.\label{f.quadrant}} \end{figure} Note that $\mu$-almost every point $x$ is accumulated by its orbit inside each component of $\mathbb{D}\setminus W^s_\mathbb{D}(x)$. In particular $\Gamma$ meets these two components and intersects $W^s_\mathbb{D}(x)$ at some point $z$. Iterating backward $W^s_\mathbb{D}(z)$, one thus gets a sequence of stable curves $W_n\subset \DD^+$ such that $f(W_n)\subset W_{n-1}$, $f^n(W_n)\subset W^s_\mathbb{D}(z)$, which converge to $W^s_\mathbb{D}(p)$ for the Hausdorff topology. We denote by $W_n^+$ a connected component of $W_n\setminus W^s_\mathbb{D}(x)$ which is close to $W^{s,+}_\mathbb{D}(p)$ for the Hausdorff topology. By choosing $n$ large enough, $W^+_n$ separates $W^+_{n-1}$ and $W^{s,+}_\mathbb{D}(p)$ in $U^+$. See Figure~\ref{f.quadrant}. \bigskip Let $x^s\in W^{s,+}_\mathbb{D}(p)$ be a point that is not accumulated by $\Gamma$ and let $\beta^s$ be a small $C^1$ arc transverse to $W^{s,+}_\mathbb{D}(p)$ at $x^s$. We also choose $x^u\in \Gamma_{loc}$ and a small $C^1$ arc $\beta^u$ transverse to $\Gamma$ at $x^u$. For $m\geq 1$ large, the arcs $f^{-m}(\beta^u)$, $\beta^s$, $f(\beta^s)$ and $W^{s,+}_\mathbb{D}(p)$ bound a rectangle $R$. Similarly, the arcs $f^{-1}(\beta^u)$, $\beta^u$, $f^m(\beta^s)$ and $\Gamma_{loc}$ bound a rectangle $R'$. We may choose $W^+_n$ and $W^+_{n-1}$ to separate $p$ from $\beta$, and $m$ large enough. One thus gets the following properties: \begin{itemize} \item[(a)] $R'$ is separated from $R$ by $W^+_{n-1}$ in $U^+$, \item[(b)] Any point in $R\setminus W^s_\mathbb{D}(p)$ has a forward iterate in $R'$.
\end{itemize} Note that the forward iterates of $W_n$ and $W_{n-1}$ accumulate on the support of $\mu$. As a consequence of lemma~\ref{l.support}, if $R$ has been chosen small enough, we get: \begin{itemize} \item[(c)] The forward iterates of $W_n$ and $W_{n-1}$ do not meet $R$. \end{itemize} Let $D$ be a Pixton disc associated to $\Gamma$, whose boundary is the union of three arcs: $\sigma\subset W^{s,+}(p)$, $\gamma\subset \Gamma$ and a closing arc $\delta$. We choose $D$ so that $\delta$ is contained in $R$ and $\gamma$ intersects $R$ in only one point. Since $f^{-1}(\gamma)\subset \gamma$, this implies that the arc $\gamma$ is disjoint from the forward iterates of $R$. In particular, \begin{itemize} \item[(d)] $R'$ is contained in $D$. \end{itemize} Let us consider the two curves $\alpha'\subset W_n^+$ and $\alpha\subset W_{n-1}^+$, contained in $U^+\cap D$, which connect $\Gamma_{loc}$ to another point of the boundary of $D$ (and intersect the boundary of $D$ only at their endpoints, which by construction belong to $\Gamma$). Note that $f(\alpha)\subset \alpha'$ by definition; both are contained in $W_{n-1}$. The curve $\alpha\cup\gamma$ bounds a disc $\Delta$ whereas the curve $\alpha'\cup\gamma$ bounds a disc $\Delta'$. Since $W^+_n$ separates $W^+_{n-1}$ and $W^{s,+}_\mathbb{D}(p)$, the discs are nested: $\Delta'\subset \Delta$. See Figure~\ref{f.disc2}. \begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{disc2.pdf} \begin{picture}(0,0) \put(-73,95){$\alpha'$} \put(-100,70){$\Delta'$} \put(-100,130){$\Delta$} \put(-100,175){$\sigma$} \put(-197,175){$\delta$} \put(-10,160){$p$} \put(-40,37){$\gamma$} \put(-70,145){$\alpha$} \end{picture} \end{center} \caption{Proof of Theorem~\ref{t.cycle}.\label{f.disc2}} \end{figure} Let $\overset{\circ}\alpha$ and $\overset{\circ}{\alpha}{}'$ denote the arcs $\alpha,\alpha'$ without their endpoints.
Using (c), that $f^{-1}(\gamma)\subset \gamma$ and that $\gamma$ is disjoint from $\overset{\circ}\alpha\cup \overset{\circ}{\alpha}{}'$, one deduces that the forward iterates of $\overset{\circ}\alpha$, $\overset{\circ}{\alpha}{}'$ do not intersect $\partial D=\sigma\cup\gamma\cup\delta$. Hence: \begin{itemize} \item[(e)] Any forward iterate of $\overset{\circ}\alpha$ or $\overset{\circ}{\alpha}{}'$ is either in the interior of $D$ or disjoint from $D$. \end{itemize} \bigskip For $k$ large, the images $f^k(\alpha)$ and $f^k(\alpha')$ are contained in a small neighborhood of the support of $\mu$, hence are outside $D$. Since $f(\alpha)\subset \alpha'$, one gets: $f^{i+1}(\overset{\circ}\alpha)$ is disjoint from $D$ if and only if $f^{i}(\overset{\circ}{\alpha}{}')$ is disjoint from $D$. Together with (e), one deduces that there exists $k_0$ such that $f^{k_0}(\overset{\circ}{\alpha}{}')$ is disjoint from $D$, $f^{k_0}(\overset{\circ}{\alpha})$ is in the interior of $D$ and all the larger iterates are disjoint from $D$. This implies the following lemma. \begin{lemma}\label{l.separated} There exists $k_0\geq 1$ such that for any curve $\beta\subset \Gamma$ in the interior of $D$ and connecting $\alpha$ to $\alpha'$: \begin{itemize} \item[--] $f^{k_0}(\beta)$ meets $\delta$, has one endpoint in the interior of $D$ and another one outside $D$, \item[--] all the forward iterates of the endpoints of $f^{k_0}(\beta)$ are outside $D$. \end{itemize} \end{lemma} \begin{proof} Consider the iterate $k_0$ such that $f^{k_0}(\alpha)$ is inside, $f^{k_0}(\alpha')$ is outside and all larger iterates of $\alpha'$ are also outside. Note that the forward iterates of $\beta\subset \Gamma\setminus \gamma$ intersect neither $\gamma$ nor $\sigma\subset W^s_\mathbb{D}(p)$. \end{proof} From a curve $\beta$, one gets two new ones $\beta_1,\beta_2$.
\begin{lemma}\label{l.separation} There exist $k_1$ and $\varepsilon>0$ such that any curve $\beta$, in the interior of $D$ and connecting $\alpha$ to $\alpha'$, contains two sub-curves $\beta_1$, $\beta_2$ such that: \begin{itemize} \item[] $f^{k_1}(\beta_1)$, $f^{k_1}(\beta_2)$ are $\varepsilon$-separated, contained in $\operatorname{Interior}(D)$ and connect $\alpha$ to $\alpha'$. \end{itemize} \end{lemma} \begin{proof} By lemma~\ref{l.separated}, the curve $\beta$ contains sub-curves $\bar \beta_1, \bar \beta_2$ such that: \begin{itemize} \item[--] $\bar \beta_1$ (resp. $\bar \beta_2$) contains an endpoint $b_1$ (resp. $b_2$) of $\beta$ and a point of $f^{-k_0}(\delta)$, \item[--] $f^{k_0}(\bar \beta_1)$ is disjoint from the interior of $D$, \item[--] $f^{k_0}(\bar \beta_2)$ is contained in $D$. \end{itemize} From lemma~\ref{l.separated}, all the forward iterates of $f^{k_0+1}(b_i)$, for $i\in\{1,2\}$, are outside $D$. From (b), there exists $k\geq k_0$ such that $f^{k}(\bar \beta_1)$ and $f^{k}(\bar \beta_2)$ have a point in the interior of $R'$, hence in the interior of $\Delta'$ (from (d)). Thus by (a) these curves contain a point of $\alpha'$ and a point of $\alpha$. Since the iterates of $\beta$ never intersect $\gamma\cup \sigma$, one deduces that for any $k'\geq k$, $f^{k'}(\bar \beta_1)$ and $f^{k'}(\bar \beta_2)$ still intersect $\alpha$ and $\alpha'$ and in particular contain two curves connecting $\alpha$ to $\alpha'$. The integer $k$ may depend on $\beta$, but since $f^{k_0}(\beta)$ intersects $\delta\setminus W^s_\mathbb{D}(p)$ in a compact set which does not depend on $\beta$, the integer $k$ is uniformly bounded. One can thus find $k'=k_1$ independent of $\beta$ such that both $f^{k_1}(\bar \beta_1)$ and $f^{k_1}(\bar \beta_2)$ meet $\alpha$ and $\alpha'$.
Let us choose the minimal curve $\hat \beta_1\subset \bar \beta_1$ which connects $b_1$ to $f^{-k_1}(\alpha')$ and the minimal curve $\hat \beta_2\subset \bar \beta_2$ which connects $b_2$ to $f^{-k_1}(\alpha')$. In particular for any $k_0\leq k<k_1$, the curves $f^k(\hat \beta_i)$ are disjoint from $\Delta'$. One then chooses $\beta_1\subset \bar \beta_1$ and $\beta_2\subset \bar \beta_2$ such that $f^{k_1}(\beta_1)$ and $f^{k_1}(\beta_2)$ meet $\alpha$ and $\alpha'$ at their endpoints and nowhere else. By construction $f^{k_0}(\beta_1)$ is disjoint from $D$ and $f^{k_0}(\beta_2)$ is contained in the interior of $D$. They are contained in two different connected components of $f^{k_1-k_0}(\Delta\setminus \Delta')\setminus \delta$. Moreover they avoid a uniform neighborhood of $\widetilde \delta:=\delta\cap f^{k_1-k_0}(\Delta\setminus \Delta')$: indeed there exists $\ell_0$ such that any point $y$ in $\widetilde \delta$ has a forward iterate $f^\ell(y)$ in $R'$ with $\ell\leq \ell_0$. By compactness the same holds for any point in a neighborhood of $\widetilde \delta$. But by construction for any point in $f^{k_0}(\beta_1)$ and $f^{k_0}(\beta_2)$, the $k_1-k_0-1$ first iterates are disjoint from $R'$. Hence by choosing $k_1>k_0+\ell_0$, one can ensure that $f^{k_0}(\beta_1)$ and $f^{k_0}(\beta_2)$ are disjoint from a uniform neighborhood of $\widetilde \delta$. After having fixed $k_1$ and chosen $\varepsilon>0$ small enough, one deduces that the curves $f^{k_1}(\beta_1)$, $f^{k_1}(\beta_2)$ are $\varepsilon$-separated, as required. \end{proof} Note that $\Gamma\setminus \gamma$ contains an arc that connects a point in $R$ with a point in $R'$: this shows that there exists a curve $\beta\subset \Gamma$ contained in the interior of $D$ which connects $\alpha$ to $\alpha'$.
One then applies lemma~\ref{l.separation} inductively: it shows that for each $\ell$, the arc $\beta$ contains $2^\ell$ orbits of length $\ell k_1$ that are $\varepsilon$-separated. One deduces that the topological entropy of $f$ is at least $\lim_{\ell\to+\infty}\tfrac{1}{\ell k_1}\log(2^\ell)=\log(2)/k_1$, hence positive. The proof of Theorem~\ref{t.cycle} is now complete. \bigskip The proof of Theorem~\ref{t.cycle2} is the same, working inside the filtrating domain. \qed \section{Generalized Morse-Smale diffeomorphisms}\label{s.gms} We extend definition~\ref{d.generalizedMS} to filtrating sets: \begin{definition} A diffeomorphism is \emph{generalized Morse-Smale} in a filtrating set $U$ if \begin{itemize} \item[--] the $\omega$-limit set of any forward orbit in $U$ is a periodic orbit, \item[--] the $\alpha$-limit set of any backward orbit in $U$ is a periodic orbit, \item[--] the period of all the periodic orbits contained in $U$ is bounded by some $K>0$. \end{itemize} \end{definition} We also say that a diffeomorphism is \emph{mildly dissipative} in a filtrating set $U$ if for any ergodic measure $\mu$ for $f|_U$ which is not supported on a hyperbolic sink, and for $\mu$-almost every $x$, $W^s_U(x)$ separates $U$. \begin{proposition}\label{p.MS} Any diffeomorphism of the disc which is mildly dissipative and generalized Morse-Smale in a filtrating set $U$ has zero topological entropy in $U$. Moreover the chain-recurrent points in $U$ are all periodic. \end{proposition} \begin{proof} Any ergodic measure of $f|_U$ is supported on a periodic orbit, hence has zero entropy. The variational principle then implies that the topological entropy of $f|_U$ is zero. Up to replacing $f$ by an iterate, one can suppose that all the periodic points and all the unstable branches in $U$ are fixed by $f$. Let us assume by contradiction that there exists a chain-recurrent point $x$ which is not periodic. One chooses as in proposition~\ref{p.group} a finite collection of disjoint fixed arcs $\mathcal{I}$ in $U$.
One can require that they do not contain $x$. By our assumption, $x$ belongs to an unstable branch of an arc $I_0$, which accumulates on an arc $I_1$. Since $x$ is chain-recurrent, there exist pseudo-orbits from $I_1$ to $I_0$, hence there exists an unstable branch of $I_1$ which accumulates on another arc $I_2$ and there exist pseudo-orbits from $I_2$ to $I_0$. Arguing inductively, one builds a sequence of fixed arcs $I_n$ in $U$ such that the unstable manifold of $I_n$ accumulates on the arc $I_{n+1}$. Since $\mathcal{I}$ is finite, this implies that there exists a cycle of arcs in $U$, contradicting corollary~\ref{c.cycle}. \end{proof} \begin{proposition} The set of diffeomorphisms of the disc which are mildly dissipative and generalized Morse-Smale in a filtrating set $U$ is open for the $C^1$ topology. \end{proposition} \begin{proof} Note that if $U$ is filtrating for $f$, it is still filtrating for nearby diffeomorphisms. From proposition~\ref{p.group}, there exists a finite collection $\mathcal{I}$ of disjoint arcs in $U$ that are fixed by an iterate $f^k$ of $f$. By normal hyperbolicity (see~\cite{BoCr}), for any $I\in \mathcal{I}$, there exists a neighborhood $V_I$ such that for any diffeomorphism $g$ that is $C^1$ close to $f$, any orbit of $g^k$ contained in $V_I$ is contained in a $g^k$-fixed arc contained in $V_I$; such an arc is still normally contracted. One deduces that any forward (resp. backward) $g^k$-orbit contained in $V_I$ accumulates on a fixed point of $g^k$. Since $\mathcal{I}$ is finite and since the neighborhoods $V_I$ of the arcs $I$ may be chosen small, one gets a neighborhood $V=\cup V_I$ of the set of periodic points of $f$ with the following property: for any diffeomorphism $g$ that is $C^1$ close to $f$, the $\omega$-limit of any forward orbit of $g$ contained in $V$ is an orbit of period at most $k$ and the same holds for the $\alpha$-limit of any backward orbit of $g$ contained in $V$.
The chain-recurrent set varies upper semi-continuously with the dynamics. Hence for any diffeomorphism $g$ that is close to $f$, the $\omega$-limit and the $\alpha$-limit sets of a $g$-orbit contained in $U$ are contained in $V$, and they are periodic orbits of period at most $k$. This proves that $g$ is generalized Morse-Smale in $U$. \end{proof} \section{Stabilization, decoration, structure of periodic points} \label{decoration section} In this section $f$ is a mildly dissipative diffeomorphism of the disc with zero entropy. First we introduce and discuss two related types of configurations of saddle periodic orbits: the decoration and the stabilization (subsection~\ref{s.stab-dec}). We then describe how the set of fixed points (or of points of a given period) is organized through chains (see section~\ref{s.connectedness-fixed}). Later, using the chains, we define a hierarchy between periodic points (section~\ref{s.connectedness-periodic}) and at the end, in proposition~\ref{p.decreasing-chain}, we show that all periodic points are related through this hierarchy. \subsection{Stabilization and decoration}\label{s.stab-dec} \begin{definition}\label{d.stabilization} A periodic point $p$ is \emph{stabilized by a fixed point $q$} if one of the two following cases occurs (see figure~\ref{f.stabilized}): \begin{itemize} \item[--] either $p=q$ is a fixed point, not a sink, and $D_pf$ has an eigenvalue $\lambda^+_p\leq -1$, \item[--] or $p$ has period larger than $1$ and admits an unstable branch $\Gamma$ which accumulates on $q$; in that case either $q$ is not stabilized, or $q$ is a stabilized saddle and $\Gamma$ is required to intersect both components of $\mathbb{D}\setminus W^s_{\DD}(q)$. \end{itemize} Sometimes we also say that the orbit of $p$ is a \emph{stabilized periodic orbit} and that $q$ is a \emph{stabilizing point}. The unstable branch that accumulates on $q$ is a \emph{stabilizing branch}.
\end{definition} \begin{figure} \begin{center} \includegraphics[width=12cm,angle=0]{stabilized.pdf} \put(-50,92){$p$} \put(-90,95){$q$} \put(-115,43){$f(p)$} \put(-260,70){$p=q$} \end{center} \caption{Stabilized periodic orbits with period $1$ and $2$. (Decorated regions in grey.)\label{f.stabilized}} \end{figure} \begin{remarks} Let us make a few observations about stabilized and stabilizing points. \begin{enumerate} \item The first case can be considered as a degenerate case of the second: as explained in remark~\ref{r.degenerated}, $p$ can be considered as a $2$-periodic point which has collided with the stabilizing fixed sink $q$. The stabilizing branches are hidden in $q$ in this case. \item In the second case, $q$ could be a fixed point of any type: a sink, an indifferent point or a saddle (in that case, it could be either stabilized or not). \item There may exist several stabilizing points $q$ associated to a stabilized point $p$. \end{enumerate} \end{remarks} We have introduced the notion of \emph{decorated periodic orbit} in section~\ref{ss.decoration}. \begin{proposition}[Stabilization implies decoration]\label{p.stab-decorate} If $f$ is a mildly dissipative diffeomorphism with zero entropy, then any periodic orbit $\mathcal{O}$ which is stabilized by a fixed point is decorated. Each point $p\in \mathcal{O}$ has at most one stabilizing unstable branch. \end{proposition} \begin{proof} In the particular case where $\mathcal{O}$ is a fixed point, the statement becomes trivial; we will thus assume that $\mathcal{O}$ has period larger than $1$. Consider $p\in \mathcal{O}$ and the connected component $C$ of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$ which does not contain the stabilizing point $q$.
If one assumes that some iterate $f^j(p)$ belongs to $C$, then the unstable branch of $f^j(p)$ which accumulates on $q$ intersects both components of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$, hence intersects $W^s(p)$: this implies that $f$ has a cycle of periodic orbits, a contradiction. We have proved that the orbit of $p$ is contained in the connected component of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$ which contains the stabilizing branch. In particular $p$ has at most one stabilizing unstable branch. \end{proof} \begin{definition} \label{d.decorated region} When $p$ is stabilized by a fixed point $q$, the connected component of $\mathbb{D}\setminus W^s_\mathbb{D}(p)$ which does not contain $q$ is called the \emph{decorated region of $p$}. (In the special case where $p=q$ is a fixed point, it admits two decorated regions.) The \emph{period} of the decorated region is either the period of $p$ (when $p$ is not fixed) or $2$ (when $p$ is fixed): this is the return time to the decorated region for points close to $p$. \end{definition} \begin{proposition} If $f$ is a mildly dissipative diffeomorphism with zero entropy which reverses the orientation, then each stabilized orbit has period $1$ or $2$. \end{proposition} \begin{proof} Let us consider a stabilized periodic point $p$ with period $k$. By definition there exists an unstable branch $\Gamma$ of $p$ which accumulates on a fixed point $q$. We denote by $K_p$ the accumulation set of $\Gamma$. This set is cellular and fixed by $f^k$. Hence the set $K:=\cup_n f^n(K_p)$ is a cellular set fixed by $f^k$. The complement $\mathbb{D}\setminus K$ is an invariant annulus. Let us denote by $B=\mathbb{R}\times (0,1)$ the universal cover with the covering automorphism $(x,t)\mapsto (x+1,t)$. The map $f$ on the annulus lifts as a map $h$ on $B$ which reverses the orientation and satisfies $h(x+1,t)=h(x,t)-(1,0)$.
Let $\gamma$ be the union of $\Gamma$ with one local stable branch of $p$: it is a proper curve in the annulus which connects one end to the other one. It lifts in $B$ as a curve $\widehat \gamma_0$ whose complement has two connected components. Repeating the construction for each iterate of $p$ and considering the translated curves, one obtains a family of curves $(\widehat \gamma_n)_{n\in {\mathbb Z}}$ in $B$ with the properties: \begin{itemize} \item[--] $\widehat \gamma_{n+k}=\widehat \gamma_n+(1,0)$, \item[--] $B\setminus \widehat \gamma_n$ has two connected components $U^-_n$ and $U^+_n$ satisfying $U^-_n\subset U^-_m$ when $n\geq m$, \item[--] there exists a bijection $\tau$ of $\mathbb{Z}$ such that $h(\widehat \gamma_n)\subset \widehat \gamma_{\tau(n)}$. \end{itemize} In particular $\tau$ is monotone. Since $h$ reverses the orientation there exists $a\in {\mathbb Z}$ such that $\tau(n)=-n+a$ for each $n\in{\mathbb Z}$. In particular $\tau\circ\tau$ is the identity, so that either $\tau$ has a fixed point or a point of period $2$. This implies that $p$ is either fixed or has period $2$. \end{proof} The previous proposition shows that when $f$ reverses the orientation, all the decorated regions have period $2$. \begin{corollary}\label{c.orientation} If $k$ is the period of a decorated region, then $f^k$ preserves the orientation. \end{corollary} \subsection{Structure of the set of fixed points}\label{s.connectedness-fixed} We introduce a notion which generalizes the fixed arcs. \begin{definition}\label{d.chain} Let $p,p'$ be two fixed points. A \emph{chain} for $f$ between $p$ and $p'$ is a (not necessarily compact) connected set $C$ which is the union of: \begin{itemize} \item[--] a set of fixed points $X$ containing $p$ and $p'$, \item[--] some $f$-invariant unstable branches of points in $X$.
\end{itemize} \end{definition} \begin{proposition}\label{p.chain} If $f$ is an orientation preserving mildly dissipative diffeomorphism with zero entropy, then between any pair of fixed points $p,p'$ there exists a chain for $f$. \end{proposition} The end of this section is devoted to the proof of this proposition. The proposition also holds for mildly dissipative diffeomorphisms without cycles. \begin{lemma}\label{p.accumulation} If $f$ is an orientation-preserving dissipative diffeomorphism of the disc, any $f$-invariant unstable branch $\Gamma$ accumulates on a fixed point. \end{lemma} \begin{proof} Let $\gamma\subset \Gamma$ be a curve which is a fundamental domain. By definition the accumulation set $\Lambda$ is an invariant compact set. Since it is arbitrarily close in the Hausdorff topology to a curve $f^i(\gamma)\cup f^{i+1}(\gamma)\cup\dots\cup f^j(\gamma)$, the set $\Lambda$ is connected. Since $f$ is dissipative, the complement $\mathbb{D}\setminus \Lambda$ is connected. One deduces from proposition~\ref{p.CL} that $\Lambda$ contains a fixed point. \end{proof} Let us consider the finite set $\mathcal{I}$ of normally hyperbolic fixed arcs as in section~\ref{ss.arc} and recall that fixed points can be treated as arcs. \begin{lemma}\label{l.acc} For any $f$-invariant unstable branch $\Gamma$ of an arc $I\in \mathcal{I}$, the accumulation set intersects an arc $I'\in \mathcal{I}$ of index $0$ or $1$. \end{lemma} \begin{proof} The branch $\Gamma_0=\Gamma$ is contained in the unstable set of a fixed point $p_0$. From lemma~\ref{p.accumulation}, the accumulation set of $\Gamma_0$ contains a fixed point $p_1$. Let $I_1\in \mathcal{I}$ be the fixed arc that contains $p_1$. If $I_1$ has index $0$ or $1$, the lemma holds. Otherwise $I_1$ has the type of a saddle with no reflexion. Since $p_0\not\in I_1$, the branch $\Gamma_0$ intersects the stable manifold of an endpoint of $I_1$.
One can thus reduce to the case where $p_1$ is an endpoint of $I_1$ and where $I_1$ and $p_1$ have a common unstable branch $\Gamma_1$ which intersects the accumulation set of $\Gamma_0$. One can repeat the previous construction with the arc $I_1$ and the unstable branch $\Gamma_1$. One builds in this way inductively a sequence of arcs $I_n$ with an unstable branch $\Gamma_n$ which accumulates on $I_{n+1}$. Since the number of arcs in $\mathcal{I}$ is finite, and there is no cycle (corollary~\ref{c.cycle}), this sequence stops at an arc $I_\ell$ of index $0$ or $1$. By construction, each unstable branch $\Gamma_n$ accumulates on the unstable branch $\Gamma_{n+1}$. The proposition~\ref{p.transitive} shows that $\Gamma_0$ accumulates on $\Gamma_{\ell-1}$, hence on the last arc $I_\ell$. \end{proof} We introduce the following equivalence relation $\sim$ between fixed arcs $I,I'\in \mathcal{I}$: \begin{description} \item[$I\sim I'$:] \emph{There exists a sequence of arcs $I=I_1,I_2,\dots, I_{\ell}=I'$ in $\mathcal{I}$ such that for each $1\leq i<\ell$ either $I_i$ admits an $f$-invariant unstable branch which accumulates on $I_{i+1}$ or $I_{i+1}$ admits an $f$-invariant unstable branch which accumulates on $I_i$.} \end{description} \begin{lemma}\label{l.class-unique} The relation $\sim$ has only one equivalence class. \end{lemma} \begin{proof} It is enough to prove that the sum of the indices of the arcs in an equivalence class is at least $1$. Then the Lefschetz formula (proposition~\ref{p.lefschetz}) will imply that there is at most one class. Let $C$ be any equivalence class for $\sim$. It always contains a fixed arc of index $1$: \begin{claim} The class $C$ contains a fixed arc with no $f$-invariant unstable branch. \end{claim} \begin{proof} From lemma~\ref{p.accumulation}, each $f$-invariant unstable branch of a fixed interval accumulates on a fixed interval.
If the conclusion of the claim does not hold, one thus obtains an infinite sequence $I_n$ in $C$ such that $I_n$ admits an $f$-invariant unstable branch which accumulates on $I_{n+1}$. Since the set $\mathcal{I}$ of fixed intervals is finite, one gets a cycle between fixed intervals, contradicting the conclusion of corollary~\ref{c.cycle}. \end{proof} We then associate to any fixed arc of index $-1$ another arc of index $1$. Let: $$\mathcal{N}:=\mathbb{D}\setminus \bigcup\{W^s_\mathbb{D}(I_i), I_i\in \mathcal{I} \text{ of index $-1$}\}.$$ \begin{claim}\label{c.index1} Let $U$ be a connected component of $\mathcal{N}$. Let $I\in \mathcal{I}$ be an arc of index $-1$ such that $W^s_\mathbb{D}(I)$ bounds $U$. Then $U$ contains an arc $I'\in \mathcal{I}$ of index $1$ such that $I\sim I'$. \end{claim} \begin{proof} We consider the sequences of arcs $I_1,\dots,I_\ell$ in $\mathcal{I}$ such that $I_1=I$, for each $k$ there exists an unstable branch of $I_k$ which accumulates on $I_{k+1}$, and each $W^s_\mathbb{D}(I_k)$ either is contained in $U$ or bounds $U$. From corollary~\ref{c.cycle} such a sequence is finite. One can assume that it has maximal length. We claim that its last element $I_\ell$ has index $1$ (hence is included in $U$). If this is not the case and $I_\ell$ has index $0$, it is contained in $U$ and admits an unstable branch. From lemma~\ref{l.acc}, either this unstable branch accumulates on a fixed arc contained in $U$ or it intersects one of the boundaries $W^s_\mathbb{D}(\widetilde I)$ of $U$. In both cases, we build a new fixed arc and the sequence $I_1,\dots,I_\ell$ is not maximal, a contradiction. By construction the last element $I':=I_\ell$ belongs to the class $C$. \end{proof} Now we proceed to finish the proof of lemma~\ref{l.class-unique}. Let us choose arbitrarily a fixed arc $I(0)\in C$ of index $1$.
For each arc $I\in \mathcal{I}$ of index $-1$, let us consider the connected component $V$ of $\mathbb{D}\setminus W^s_\mathbb{D}(I)$ which does not contain $I(0)$. Let $U$ be the connected component of $\mathcal{N}$ which is contained in $V$ and whose boundary intersects $W^s_\mathbb{D}(I)$. The previous claim associates to it an arc $I'\in C$ of index $1$ contained in $U$. It is by construction different from $I(0)$. Note that if $\widetilde I\in C$ is another arc of index $-1$, the associated arc $\widetilde I'$ of index $1$ is different: indeed, in each component $U$ of $\mathbb{D}\setminus W^s_\mathbb{D}(I)$ which does not contain $I(0)$, there exists a unique $I\in \mathcal{I}$ such that $W^s_\mathbb{D}(I)$ bounds $U$ and separates $U$ from $I(0)$. We have shown that in $C$ the number of arcs of index $-1$ is smaller than the number of arcs of index $1$. This concludes the proof of lemma~\ref{l.class-unique}. \end{proof} \begin{proof}[Proof of proposition~\ref{p.chain}] Any normally hyperbolic fixed arc is a chain for $f$. Lemma~\ref{l.class-unique} proves that the union $C$ of arcs in $\mathcal{I}$ with their $f$-invariant unstable branches is a connected set. Note that any arc is the union of a set of fixed points with $f$-invariant unstable branches. This shows that $C$ is a chain for $f$ between any pair of fixed points. \end{proof} \begin{remark}\label{r.lefschetz} The proof of the proposition (and claim~\ref{c.index1}) shows the following property: \emph{Assume that $f$ preserves the orientation. Let $\mathcal{I}$ be a finite collection of disjoint isolated arcs fixed by $f$ such that for any $I\in \mathcal{I}$ and any $f$-invariant unstable branch $\Gamma$ of $I$, any periodic point in the accumulation set of $\Gamma$ belongs to some $I'\in \mathcal{I}$.
Then the sum of the indices $\operatorname{index}(I,f)$ of all $I\in \mathcal{I}$ is at least $1$.} \end{remark} \subsection{Points decreasing chain related to a stabilized point} \label{s.connectedness-periodic} In the present section we discuss how periodic points of larger period are related to points of lower period. Since proposition~\ref{p.chain} holds for any (orientation-preserving) iterate of $f$, any periodic point can be related to the fixed points through a chain associated to a large iterate of $f$. In the next definitions and propositions we show that these chains have a particular structure that links points of larger period to points of lower period. \begin{definition} \label{d.decreasing-chain} Let $p$ be a stabilized periodic point. A periodic point $w\neq p$ is \emph{decreasing chain related} to $p$ if there exist $k\geq 2$ and a chain $C$ for $f^k$ between $w$ and $p$ which is contained in the closure of a decorated region of $p$. \end{definition} \begin{remark}\label{r.decreasing} Note that any iterate $f^i(C)$ of the chain is contained in the closure of a decorated region of $f^i(p)$, hence $f^i(w)$ is decreasing chain related to $f^i(p)$. One deduces that the period of the decorated region of $p$ divides the period of $w$. We also say that the orbit of $w$ is decreasing chain related to the orbit of $p$. \end{remark} The unstable set of a decreasing chain related point can be localized. \begin{proposition}\label{p.unstable} If $w$ is decreasing chain related to a stabilized periodic point $p$, then the unstable set of $w$ is contained in the closure of a decorated region $V$ of $p$. Moreover, if the period of $w$ is larger than the period of $V$ and $f$ is orientation preserving, then the closure of the unstable set of $w$ is contained in $V$.
\end{proposition} \begin{proof} Let us consider the two connected components of $\mathbb{D}\setminus W^s_\mathbb{D}(w)$ (one only has to consider periodic points which are not sinks, so, as described in section \ref{preliminaries}, there is a unique well-defined stable manifold). Since $w$ belongs to a decorated region $V$ of $p$, one of these components $U_1$ is contained in $V$. The other one is denoted by $U_2$. From theorem~\ref{t.cycle} (no cycle), any unstable branch $\Gamma$ of $w$ is contained in one of these components. If $\Gamma$ is included in $U_1$, then it is included in a decorated region of $p$ and the proof is concluded in that case. We may thus assume that $\Gamma$ is included in the component $U_2$ of $\mathbb{D}\setminus W^s_\mathbb{D}(w)$ which contains $p$ and let us prove first that its accumulation set is contained in the closure of $V$. See figure~\ref{f.related-localized}. Since $w$ is decreasing chain related to $p$, there exists a chain $C$ for an iterate $f^k$ which contains $w,p$ and which is included in the closure of the decorated region $V$ (recall definition \ref{d.decreasing-chain}). If $\Gamma$ is part of the chain $C$, it is by definition included in the closure of $V$. So, let us consider the case where $\Gamma$ is not part of the chain; then there exist points of $C$ in $U_2$ which accumulate on $\Gamma$. Since the period of points in $C$ is uniformly bounded and since $\Gamma$ is an unstable branch in $U_2$, the point $w$ is not accumulated by periodic points of $C\cap U_2$. Consequently, there exists an unstable branch $\Gamma_C$ in $C$ which accumulates on a point of $\Gamma$. From proposition~\ref{p.transitive}, one deduces that the accumulation set of $\Gamma$ is included in the accumulation set of $\Gamma_C$, which is contained in $\overline V$, and so the accumulation set of $\Gamma$ is also contained in $\overline V$.
\begin{figure} \begin{center} \includegraphics[width=5cm,angle=0]{related-localized.pdf} \put(-83,88){\small $\Gamma$} \put(-60,65){\small $p$} \put(-115,72){\small $w$} \put(-130,40){\small $U_1$} \put(-90,30){\small $U_2$} \end{center} \caption{Proof of proposition~\ref{p.unstable}.\label{f.related-localized}} \end{figure} \medskip In order to conclude, we distinguish three cases: \begin{itemize} \item[--] The period of $p$ is larger than $1$. Let $\Gamma_p$ be the unstable branch of $p$ which accumulates on a fixed point $q$ (not contained in $V$). If $\Gamma$ is not included in the closure of $V$, it crosses $W^s_\mathbb{D}(p)$. As a consequence the accumulation set of $\Gamma$ contains the accumulation set of $\Gamma_p$, hence $q$. This is a contradiction since we have shown before that it is contained in $\overline V$. \item[--] The point $p$ is a fixed saddle with reflexion: it admits an unstable branch $\Gamma_p$ which accumulates on a periodic point $q$ which is not in $\overline V$ (by proposition~\ref{p.accumulation}). One can conclude as in the previous case. \item[--] The point $p$ is fixed, admits an eigenvalue $\lambda^+_p=-1$, and is neither a sink nor a saddle: it is accumulated by points $z\in \mathbb{D}\setminus \overline V$ of period $2$. If $\Gamma$ is not included in the closure of $V$, it crosses $W^s_\mathbb{D}(p)$, hence it intersects the stable manifold of one of them. As a consequence the accumulation set of $\Gamma$ contains $z$. This is a contradiction since we have shown before that it is contained in $\overline V$. \end{itemize} In all the cases we have shown that any unstable branch of $w$ is contained in $\overline V$. \medskip Let $k$ be the period of the decorated region of $p$.
If the period of $w$ is larger than $k$ and if $f$ is orientation preserving, one applies proposition~\ref{p.heteroclinic} to the diffeomorphism $f^k$: the unstable set of $w$ does not accumulate on $p$; hence its closure is contained in $V$, proving the last part of the proposition. \end{proof} \begin{corollary}\label{c.dichotomy-stabilized} A point $w$ which is stabilized can not be decreasing chain-related to a stabilized point $p$. \end{corollary} \begin{proof} We argue by contradiction. Let us first assume that $p$ is not fixed. From proposition~\ref{p.unstable}, the unstable set of each iterate $f^i(w)$ is contained in the decorated region of $f^i(p)$. Since the decorated region has period larger or equal to $2$ and since $p$ is not fixed, the unstable set of $w$ does not accumulate on a fixed point, a contradiction. When $p$ is fixed, since the unstable manifold of $w$ is contained in a decorated region, the point $w$ can only be stabilized by $p$. The definition of stabilized point for $p$ gives that $p$ is not a sink; the definition for $w$ implies that the unstable manifold of $w$ crosses $W^s_\mathbb{D}(p)$, a contradiction. \end{proof} \begin{proposition}\label{p.both-related} Let us consider two stabilized fixed points $p_1,p_2$ with decorated regions $V_1,V_2$. If there exists a point $w\in V_1$ that is decreasing chain related to $p_2$, then $V_2\subset V_1$. \end{proposition} \begin{proof} By assumption $w\in V_1\cap V_2$. Since the stable manifolds of $p_1$ and $p_2$ are disjoint, if the conclusion of the proposition does not hold, then $V_1\subset V_2$. Since $w$ is decreasing chain related to $p_2$ in $V_2$, there exists an iterate $f^\ell$ and a chain $C$ for $f^\ell$ included in $\overline {V_2}$ which contains both $w$ and $p_2$. In particular it intersects $W^s_{\DD}(p_1)$. This implies that there exists an unstable branch $\Gamma\subset C$ which meets both components of $\DD\setminus W^s_\DD(p_1)$.
One deduces from proposition~\ref{p.transitive} that the closure of the unstable set of $p_1$ is contained in the closure of $\Gamma$, hence in $\overline {V_2}$. The closure of the unstable set of $p_1$ contains a fixed point (since $p_1$ is stabilized): the only possible fixed point is $p_2$. By definition of stabilization, either $p_2$ is not stabilized or the unstable set of $p_1$ meets both components of $\DD\setminus W^s_\DD(p_2)$. In both cases we get a contradiction. \end{proof} \begin{corollary}\label{c.unique-stabilized} A point $w$ can not be decreasing chain-related to two different stabilized points $p_1,p_2$. \end{corollary} \begin{proof} Otherwise $w$ would belong to decorated regions $V_1$ and $V_2$ of $p_1$ and $p_2$ respectively. The proposition~\ref{p.both-related} would imply that $V_1\subset V_2$ and $V_2\subset V_1$ simultaneously. A contradiction since $p_1\neq p_2$. \end{proof} One also describes the accumulation sets of $f$-invariant unstable branches. \begin{proposition}\label{p.chain-decreasing} Let $z$ be a fixed point and $\Gamma$ be a $f$-invariant unstable branch of $z$. Let $C$ be a chain for some iterate $f^k$ between two periodic points $w,p$ such that: \begin{itemize} \item[--] $p$ is stabilized by a fixed point $q$, \item[--] $w$ is decreasing chain-related to $p$, \item[--] $C$ is contained in the closure of a decorated region of $p$. \end{itemize} If the accumulation set of $\Gamma$ contains $w$, then it also contains $p$ (see figure~\ref{f.unstable}). \end{proposition} \begin{proof} By invariance, the points $f(w)$ and $f(p)$ and the chain $f(C)$ satisfy the same properties.
Note that the decorated region of $p$ containing $w$ and the decorated region of $f(p)$ containing $f(w)$ are disjoint: this is a consequence of proposition~\ref{p.stab-decorate} when $p$ has period larger than $1$; when $p$ is fixed, this is a consequence of the fact that its two decorated regions are locally exchanged (since, in the case that $p$ is fixed, by definition \ref{d.stabilization} the non-stable eigenvalue with modulus larger or equal to one is negative). The $f$-invariant unstable branch $\Gamma$ accumulates on $w$ and $f(w)$ and so it has to intersect two different decorated regions, hence intersects the stable manifold of the orbit of $p$. This gives the conclusion. \end{proof} \begin{figure} \begin{center} \includegraphics[width=5cm,angle=0]{unstable.pdf} \put(-70,78){\small $q$} \put(-50,70){\small $p$} \put(-120,95){\small $z$} \put(-75,45){\small $f^2(p)$} \end{center} \caption{A $f$-invariant unstable branch intersecting a decorated region.\label{f.unstable}} \end{figure} \subsection{Lefschetz formula associated to a stabilized point} Using the notion of decreasing chain related periodic point, we define the notion of index of a decorated region. \paragraph{\it Index of a decorated region.} Given a decorated region $V$ of a stabilized periodic point $p$, and a multiple $n$ of the period $k$ of $V$, one can compute the total index of the set $C(p,V,n)$ of points $w\in V$ that are fixed by $f^n$ and decreasing chain-related to $p$. Note that by corollary~\ref{c.orientation}, the map $f^n$ preserves the orientation. From section \ref{ss.arc}, there exists a finite family $\mathcal{I}$ of disjoint arcs that are fixed by $f^n$ and contained in $\overline V$ such that the set of periodic points in $\cup_{I\in\mathcal{I}} I$ is exactly $\{p\}\cup C(p,V,n)$. We denote by $I_0$ the arc of $\mathcal{I}$ which contains $p$. Note that the other arcs $I\in \mathcal{I}$ are isolated, hence have an index $index(I, f^n)$.
The arc $I_0$ may not be isolated (in the case where $p$ is a fixed stabilized point), but one can consider the index $index(I_0, V,f^n)$ of the half arc $I_0$ in the region $V$ for $f^n$ as defined in section~\ref{ss.index arc}. Then, one defines the index of the decorated region $V$ for $f^n$ as $$L(V,f^n):=index(I_0,V,f^n)+\sum_{I\in \mathcal{I}\setminus \{I_0\}} index(I, f^n). $$ Observe that the number $L(V,f^n)$ does not depend on the choice of the family $\mathcal{I}$: \begin{proposition}\label{p.local-lefschetz} For any decorated region $V$ of a stabilized periodic point $p$, and for any multiple $n$ of the period $k$ of $V$, the index $L(V,f^n)$ equals $1/2$. \end{proposition} The proof is postponed to section~\ref{ss.structure}. Beforehand, we prove a weaker statement. \begin{lemma}\label{l.local-lefschetz} For any decorated region $V$ of a stabilized periodic point $p$, and for any multiple $n$ of the period $k$ of $V$, the index $L(V,f^n)$ is larger or equal to $1/2$. \end{lemma} \begin{proof} Since $f^n$ preserves the orientation, we follow the proof of proposition~\ref{p.chain} and prove a version of remark~\ref{r.lefschetz} inside the decorated region $V$, after the following observations: \begin{itemize} \item[--] for any $I\in \mathcal{I}$, any $f^n$-invariant unstable branch $\Gamma$ of $I$ is contained in $\overline V$ (except when $I=I_0$ and $\Gamma$ is the unstable branch that stabilizes $p$), \item[--] any point fixed by $f^n$ in the accumulation set of $\Gamma$ is contained in $\overline V$ (since such a point coincides with $p$ or is decreasing chain related to $p$). \end{itemize} Let: $$\mathcal{N}:=V\setminus \bigcup\{W^s_\mathbb{D}(I_i), I_i\in \mathcal{I} \text{ of index $-1$}\}.$$ The proof of claim~\ref{c.index1} shows that for any component $U$ of $\mathcal{N}$, either it contains an arc $I'\in \mathcal{I}$ of index $1$ or it is the component bounded by $W^s_\mathbb{D}(I_0)$ and $I_0$ is semi-attracting in $V$.
To each arc $I\in \mathcal{I}\setminus \{I_0\}$ of index $-1$, one lets $V_I$ be the component of $\mathbb{D}\setminus W^s_\mathbb{D}(I)$ which does not contain the stabilizing unstable branch of $I_0$. One associates by claim~\ref{c.index1} an arc $I'$ of index $1$ in the component of $\mathcal{N}$ bounded by $W^s_\mathbb{D}(I)$ which is contained in $V_I$. When the arc $I_0$ has a $f^n$-invariant unstable branch in $V$ (and has half index $index(I_0,V,f^n)=-1/2$), one can also associate by the claim~\ref{c.index1} an arc of index $1$ which is contained in the component of $\mathcal{N}$ bounded by $W^s_\mathbb{D}(I_0)$. The number of arcs of index $1$ in $\mathcal{I}$ is thus larger or equal to the number of arcs of index $-1$, and it is larger or equal to the number of arcs of index $-1$ plus $1$ in the case $index(I_0,V,f^n)=-1/2$. This proves that the sum of the indices $L(V,f^n)$ is always larger or equal to $1/2$. \end{proof} \subsection{Structure of the set of periodic points}\label{ss.structure} The next proposition classifies the periodic points. \begin{proposition}\label{p.decreasing-chain} For any periodic point $w$, one and only one of the following possibilities occurs: \begin{itemize} \item[(1)] $w$ is fixed and either is a sink or $Df(w)$ has an eigenvalue $\geq 1$, \item[(2)] $w$ is stabilized, \item[(3)] $w$ is decreasing chain related to a stabilized periodic point. \end{itemize} \end{proposition} \begin{proof} The options (1) and (2) are incompatible by definition of the stabilization. Options (2) and (3) are incompatible by corollary~\ref{c.dichotomy-stabilized}. Also (1) and (3) are incompatible by remark~\ref{r.decreasing}. It remains to prove that any periodic point $w$ satisfies one of the cases. Let $f^n$ be an orientation-preserving iterate that fixes $w$ and let $\mathcal{I}$ be a finite collection of isolated arcs fixed by $f^n$ which contains all the points fixed by $f^n$.
Let $\mathcal{I}_0$ be the set of intervals $I\in \mathcal{I}$ containing a periodic point satisfying one of the cases (1), (2) or (3). \begin{claim}\label{c.decompose-arc} For $I\in \mathcal{I}_0$, any periodic point in $I$ satisfies the proposition. More precisely one and only one of the following cases occurs: \begin{itemize} \item[--] the periodic points in $I$ are all fixed and not stabilized, \item[--] $I$ contains either a stabilized point $p$ or a point decreasing chain related to a stabilized point $p$: all the other periodic points in $I$ are decreasing chain related to $p$. \end{itemize} \end{claim} \begin{proof} We can assume that $I$ is not reduced to a single periodic point (in that case the statement holds immediately). We consider three cases: If $I$ contains a fixed point $q$ with an eigenvalue $\lambda^+_q\geq 1$, then any periodic point in $I$ is fixed and can not be stabilized. The first case occurs. If $I$ contains a fixed point $q$ with eigenvalue $\lambda^+_q\leq -1$, the other periodic points in $I$ have period $2$: if $q$ is not a sink, it is stabilized and the other periodic points in $I$ are decreasing chain-related to $q$; if $q$ is a sink, its basin in $I$ is bounded by a $2$-periodic orbit $\{p,f(p)\}$ and the other periodic points in $I$ are decreasing chain related to $p$ or $f(p)$. If $I$ does not contain any fixed point, but contains a stabilized point $p$, then it is contained in the closure of the decorated region $V$ of $p$. Otherwise, by $f^n$-invariance, $I$ would contain the stabilized unstable branch of $p$ and its accumulation set: a contradiction since $I$ does not contain any fixed point. One deduces that any periodic point in $I$ different from $p$ is decreasing chain related to $p$. If $I$ does not contain any fixed point, nor any stabilized periodic point, but contains a point decreasing chain related to a stabilized point $p$, one deduces that $I$ is contained in a decorated region $V$ of $p$.
Otherwise $I$ would intersect $W^s_\DD(p)$, and hence by $f^n$-invariance would contain $p$. Therefore any periodic point in $I$ is also decreasing chain related to $p$. \end{proof} \begin{claim}\label{c.control-unstable} For any $I\in \mathcal{I}\setminus \mathcal{I}_0$ and any $f^n$-invariant unstable branch $\Gamma$, any periodic point in the accumulation set of $\Gamma$ belongs to some $I'\in \mathcal{I}\setminus \mathcal{I}_0$. \end{claim} \begin{proof} Let us consider an endpoint $z\in I$ with a $f^n$-invariant unstable branch $\Gamma$ whose accumulation set contains a $f^n$-invariant point $q$. Let us assume by contradiction that the interval $I'\in \mathcal{I}$ containing $q$ belongs to $\mathcal{I}_0$. We distinguish two cases. \smallskip {\em i--} The point $q$ is fixed: if $q$ satisfies case $(1)$, then $z$ is stabilized, a contradiction; if $q$ satisfies case $(2)$, since $z$ is not stabilized, the definition~\ref{d.stabilization} implies that $\Gamma$ does not intersect one of the components of $\DD\setminus W^s_\DD(q)$. Therefore, by definition~\ref{d.decreasing-chain} one deduces that $z$ is decreasing chain related to $q$; this is a contradiction since $I\not\in \mathcal{I}_0$. \smallskip {\em ii--} The point $q$ is not fixed: since $I'\in \mathcal{I}_0$, from the previous claim there exists a stabilized point $p$ such that all the periodic points in $I'$ are decreasing chain related to $p$ or coincide with $p$. Let $V$ be the decorated region associated to $p$ which contains $q$. By definition~\ref{d.decreasing-chain}, there exists a chain $C\subset \overline V$ for $f^n$ containing $q$ and $p$.
Note that $\Gamma$ cannot intersect the region $\DD\setminus \overline V$: when $p$ is fixed, this would immediately imply that $z$ is stabilized, a contradiction; when $p$ is not fixed, this would imply (by proposition~\ref{p.transitive}) that the accumulation set of $\Gamma$ would contain the accumulation set of the stabilized branch of $p$, and then that $z$ is stabilized, a contradiction. One deduces that $I\cup I'\cup \Gamma\cup C$ is a chain for $f^n$ containing $z$ and $p$. It is contained in $\overline V$ hence $z$ is decreasing chain related to $p$. A contradiction. \end{proof} We can now conclude the proof of the proposition~\ref{p.decreasing-chain}. From claim~\ref{c.decompose-arc}, one can consider, for each stabilized point $p$, the family $\mathcal{I}_p$ of arcs $I\in \mathcal{I}_0$ such that all the periodic points in $I$ are decreasing chain related to $p$ or equal to $p$. One can also consider the family $\mathcal{I}_{fix}$ of arcs whose periodic points are fixed and not stabilized. The family $\mathcal{I}_0$ decomposes as the disjoint union of $\mathcal{I}_{fix}$ with the families $\mathcal{I}_p$, for $p$ stabilized. Let $p$ be a stabilized fixed point, with decorated regions $V_1,V_2$. Lemma~\ref{l.local-lefschetz} implies \begin{equation}\label{e.fixed-stab} \sum_{I\in\mathcal{I}_p}index(I,f^n)=L(V_1,f^n)+L(V_2,f^n)\geq 1. \end{equation} Let $p$ be a stabilized point fixed by $f^n$ but not by $f$. It has one decorated region $V$. Let $I_p$ be the arc in $\mathcal{I}_p$ which contains $p$. Since $p$ has an unstable branch in the region $\mathbb{D}\setminus \overline V$, we get $index(I_p,\mathbb{D}\setminus \overline V,f^n)=-1/2$. Consequently lemma~\ref{l.local-lefschetz} implies \begin{equation}\label{e.notfixed-stab} \sum_{I\in\mathcal{I}_p}index(I,f^n)=L(V,f^n)+index(I_p,\mathbb{D}\setminus \overline V,f^n)\geq 0.
\end{equation} Note that if $I\in \mathcal{I}$ contains a stabilized fixed point, then $index(I,f)=1$, whereas for $I\in \mathcal{I}_{fix}$ one has $index(I,f)=index(I,f^n)$. Therefore the Lefschetz formula (proposition~\ref{p.lefschetz}) for $f$ gives $$\sum_{I\in\mathcal{I}_{fix}}index(I,f^n)=\sum_{I\in\mathcal{I}_{fix}}index(I,f)=1-\#\{p \text{ fixed and stabilized}\}. $$ Combining the three previous inequalities gives $$\sum_{I\in\mathcal{I}_{0}}index(I,f^n)\geq 1.$$ If one assumes that $\mathcal{I}\setminus \mathcal{I}_0$ is non-empty, the claim~\ref{c.control-unstable} and the remark~\ref{r.lefschetz} give $$\sum_{I\in\mathcal{I}\setminus \mathcal{I}_{0}}index(I,f^n)\geq 1.$$ This gives $\sum_{I\in\mathcal{I}}index(I,f^n)\geq 2$ which contradicts the Lefschetz formula (proposition~\ref{p.lefschetz}). Consequently $\mathcal{I}=\mathcal{I}_0$ and any point fixed by $f^n$ satisfies one of the cases of the proposition~\ref{p.decreasing-chain}. The proof is complete. \end{proof} We can now complete the proof of the Lefschetz formula inside a decorated region. \begin{proof}[Proof of proposition~\ref{p.local-lefschetz}] We argue as in the proof of the proposition~\ref{p.decreasing-chain} for the orientation preserving iterate $f^n$. We consider a collection $\mathcal{I}$ of disjoint isolated arcs fixed by $f^n$. For each stabilized point $p$, we consider the collection of arcs $\mathcal{I}_p$ containing points decreasing chain related to $p$ and the point $p$ itself. We also consider the family $\mathcal{I}_{fix}$ of arcs whose periodic points are fixed and not stabilized. The family $\mathcal{I}$ is partitioned as the disjoint union of $\mathcal{I}_{fix}$ with the families $\mathcal{I}_p$, for $p$ stabilized. Arguing as before, the conclusion of lemma~\ref{l.local-lefschetz} gives the inequality~\eqref{e.fixed-stab} for any $p$ stabilized and fixed and it gives the inequality~\eqref{e.notfixed-stab} for $p$ stabilized and not fixed.
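Spelling out the count (this display merely combines the Lefschetz formula for $f$ with the inequalities~\eqref{e.fixed-stab} and~\eqref{e.notfixed-stab}, in the notation above): $$\sum_{I\in\mathcal{I}}index(I,f^n)\;\geq\;\big(1-\#\{p \text{ fixed and stabilized}\}\big)\;+\;\#\{p \text{ fixed and stabilized}\}\cdot 1\;+\;0\;=\;1,$$ where the three terms bound the contributions of the arcs in $\mathcal{I}_{fix}$, of the families $\mathcal{I}_p$ for $p$ stabilized and fixed, and of the families $\mathcal{I}_p$ for $p$ stabilized and not fixed.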
If one of these inequalities is strict, one deduces $\sum_{I\in\mathcal{I}}index(I,f^n)> 1$, which contradicts the Lefschetz formula (proposition~\ref{p.lefschetz}). Consequently the inequalities~\eqref{e.fixed-stab} and~\eqref{e.notfixed-stab} are equalities. This means that the inequality in lemma~\ref{l.local-lefschetz} is an equality and proposition~\ref{p.local-lefschetz} holds. \end{proof} \section{Trapping discs} \label{ss.trapping} A compact set $\Delta\subset \mathbb{D}$ is a \emph{(topological) disc} if it is homeomorphic to the unit disc. It is \emph{trapping} for $f$ if $f(\Delta)\subset \operatorname{Interior}(\Delta)$. In this section we prove the following result. \begin{theorem}\label{t.renormalize} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero topological entropy and $\Gamma$ be a $f$-invariant unstable branch of a fixed point $p$. Then there exists a trapping disc $\Delta$ containing the accumulation set of $\Gamma$ and disjoint from $W^s(p)$. \end{theorem} It is enough to prove the theorem in the case where $f$ is orientation preserving. Let us consider the finite set $\mathcal{I}$ of isolated fixed arcs as introduced in section~\ref{ss.arc}. Since there is no cycle of fixed arcs (corollary~\ref{c.cycle}), the elements of $\mathcal{I}$ can be ordered as a sequence $I_1,\dots,I_n$ such that there is no $f$-invariant unstable branch of $I_i$ which accumulates on $I_j$ when $j\geq i$. The proof first deals with the $f$-invariant unstable branches of the arcs $I_i$, by induction on $i$. In this case we have a more precise version. \addtocounter{theorem}{-1} \renewcommand{\thetheorem}{\Alph{theorem}'} \begin{theorem}\label{t.renormalize2} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero topological entropy and $\mathcal{I}$ a set of isolated fixed arcs as introduced in section~\ref{ss.arc}.
For any $I_i\in \mathcal{I}$ and any $f$-invariant unstable branch $\Gamma$ of $I_i$, let $Z$ be the closure of the union of: \begin{itemize} \item[--] the accumulation set $\Lambda$ of $\Gamma$, \item[--] the arcs $I_j\in \mathcal{I}$ for $j<i$, \item[--] the $f$-invariant unstable branches of the arcs $I_j$ for $j<i$. \end{itemize} Then, $\Lambda$ is included in a trapping disc $\Delta$ which is contained in an arbitrarily small neighborhood of $Z$. \end{theorem} \renewcommand{\thetheorem}{\Alph{theorem}} In the following we will first prove the second theorem and then deduce the first. As an immediate consequence one gets: \begin{corollary}\label{c.trapping} Let us consider an isolated fixed arc $I=I_i$ which is not reduced to a fixed point with eigenvalue $-1$. Let $U$ be an open set which contains the arcs $I_j\in \mathcal{I}$ for $j\leq i$ and the closure of their $f$-invariant unstable branches. Then there exists a trapping disc $\Delta\subset U$ which contains $I$. \end{corollary} \medskip One also deduces that periodic points are almost isolated in the recurrent set of $f$. \begin{corollary}\label{c.isolated} Let us consider an isolated fixed arc $I$ which is not reduced to a fixed point with eigenvalue $-1$. Then, there exists a neighborhood $W$ of $I$ such that \begin{itemize} \item[--] the $\alpha$-limit set of any point $z\in W$ is either disjoint from $W$ or a fixed point of $I$, \item[--] the $\omega$-limit set of any point $z\in W$ is either disjoint from $W$ or a fixed point of $I$. \end{itemize} \end{corollary} Note that a fixed point with eigenvalue $-1$ is contained in an isolated fixed arc $I'$ for $f^2$ to which the corollary may be applied. This gives: \begin{corollary}\label{c.isolated2} Any periodic orbit $\mathcal{O}$, with period $N$, admits a neighborhood $W$ such that any ergodic measure $\mu$ satisfying $\mu(W)>0$ is supported on a periodic orbit with period less or equal to $2N$.
\end{corollary} The construction of the trapping discs in a small neighborhood containing the accumulation set of $\Gamma$ in the proofs of theorems \ref{t.renormalize} and \ref{t.renormalize2} goes along the following lines: \begin{enumerate} \item Using a slight variation of definition \ref{d.pixton0}, we build \emph{Pixton discs} given by either i) arcs of $\Gamma$ and local stable manifolds of stabilized periodic orbits (with period one or larger, lemma \ref{l.highperiod}), or ii) basins of attraction of (semi-)attracting fixed points (lemma \ref{l.periodone}). \item The union of these Pixton discs can be refined into a larger Pixton disc that contains all its iterates, the periodic points accumulated by $\Gamma$ and any point decreasing chain related to them (corollary \ref{c.pixton}). \item Using the closing lemma (theorem \ref{t.measure local}), we prove that any point in the accumulation set of $\Gamma$ has its backward orbit in the interior of the Pixton disc described in the previous item (showing that the forward iterate of the disc is contained in the disc); this allows to perform the last step, which consists in a slight modification of the Pixton disc guaranteeing that its forward iterate is contained in its interior. \end{enumerate} \subsection{Pixton discs revisited}\label{ss.pixton} We prepare here the proof of theorem~\ref{t.renormalize2}. We assume in this section that $f$ preserves the orientation. We consider an arc $I_i\in \mathcal{I}$ and a $f$-invariant unstable branch $\Gamma$ of an endpoint $p$ of $I_i$. Arguing by induction, we may assume that theorem~\ref{t.renormalize2} holds for the $f$-invariant unstable branches of any arc $I_k\in \mathcal{I}$ with $k<i$. Let $Z$ be the invariant compact set introduced in the statement of the theorem. By assumption on the order inside the family $\mathcal{I}$, the set $Z$ is disjoint from $W^s_\mathbb{D}(p)$. We choose a neighborhood $U$ of $Z$ disjoint from $W^s_{\mathbb{D}}(p)$.
\medskip We introduce the following notion, which is slightly different from definition \ref{d.pixton0} given before. \begin{definition}\label{d.pixton} Given a $f$-invariant unstable branch $\Gamma$, a \emph{Pixton disc} associated to $\Gamma$ is a closed topological disc $D$ whose boundary is a Jordan curve which decomposes into \begin{itemize} \item[--] a closed set $\gamma^s$ satisfying $f^n(\gamma^s) \subset \operatorname{Interior}(D)$ for all $n$ larger than some $n_D\geq 1$, \item[--] and its complement $\gamma^u$ (which could be empty) which is contained in $\Gamma$. \end{itemize} \end{definition} \begin{remarks}\label{r.pixton} The following easy statements about Pixton discs hold: \begin{enumerate} \item A trapping disc is a Pixton disc. Conversely a Pixton disc such that $\gamma^u=\emptyset$ is a trapping disc. In particular, an attracting fixed point admits an associated Pixton disc. \item The forward iterates of a Pixton disc are Pixton discs. \item If $D_1,D_2$ are two Pixton discs whose intersection is non-empty, then one obtains a new Pixton disc $D$ by considering their ``filled union": this is the union of $D_1\cup D_2$ with all the connected components of its complement which do not contain the boundary of $\mathbb{D}$. By~\cite{kerekjarto} (see also~\cite{LY}), the filled union is a disc. The new set $\gamma^s$ is contained in the union of the sets $\gamma^s_1,\gamma_2^s$ associated to $D_1,D_2$. The same holds for $\gamma^u$. \end{enumerate} \end{remarks} Observe that the previous remark provides the proof of the first step in the induction argument: the first arc $I_1$ in $\mathcal{I}$ is an attracting arc. In what follows until the end of the subsection, $p, \Gamma, U$ are the fixed point, unstable branch and neighborhood defined at the beginning of the subsection. In order to prove theorem~\ref{t.renormalize}, we need to cover periodic points in the accumulation set of $\Gamma$ by Pixton discs.
This is done first for periodic points with period larger than $1$, and later for fixed points. \begin{lemma}\label{l.highperiod} Consider a periodic orbit $\mathcal{O}$ accumulated by $\Gamma$ and stabilized by a fixed point $q$. Then there exists a Pixton disc $D\subset U$ which contains $\mathcal{O}$ in its interior and whose stable boundary $\gamma^s$ is contained in the stable manifold $W^s_\mathbb{D}(\mathcal{O})$ of $\mathcal{O}$. \end{lemma} \begin{proof} We first assume that $\mathcal{O}$ has period $\tau\geq 2$. See figure~\ref{f.pixton}. Let us consider the universal cover $\widetilde{ \mathbb{D}}$ of $\mathbb{D}\setminus \{q\}$: it is homeomorphic to the strip $\mathbb{R}\times [0,1)$ and the translation $(x,y)\mapsto (x+1,y)$ can be chosen to be a covering automorphism which generates the fundamental group. Let $\widetilde p$ and $\widetilde \Gamma$ be lifts of $p$ and of the unstable branch $\Gamma$. We choose the lift $\widetilde f$ of $f$ which preserves $\widetilde p$ and $\widetilde \Gamma$.
\begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{Pixton.pdf} \put(-178,168){\small $\Gamma$} \put(-88,110){\small $q$} \put(-220,150){\small $p$} \put(-83,133){\small $w$} \put(-92,83){\small $f(w)$} \put(-35,80){\small $D$} \put(-170,80){\small $\gamma^u$} \put(-135,180){\small $\gamma^s$} \put(-125,200){\small $W^s$} \put(-107,120){\small $W^u$} \includegraphics[width=9cm,angle=0]{Pixton2.pdf} \put(-210,135){\small $p$} \put(-163,153){\small $\Gamma$} \put(-160,80){\small $\gamma^u$} \put(-125,155){\small $\gamma^s$} \put(-35,80){\small $D$} \put(-120,190){\small $W^s$} \put(-78,88){\small $w$} \end{center} \caption{Construction of a Pixton disc: $w$ has period $2$ (left) or $1$ (right).\label{f.pixton}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=12cm,angle=0]{Pixton-lift.pdf} \put(-300,115){\small $\widetilde \Gamma$} \put(-328,98){\small $p$} \put(-238,95){\small $a$} \put(-75,122){\small $b$} \put(-238,140){\small $\widetilde W^s_n$} \put(-230,40){\small $\widetilde W^u_n$} \put(-153,140){\small $\widetilde W^s_{n+1}$} \put(-73,140){\small $\widetilde W^s_{n+\tau}$} \end{center} \caption{Proof of lemma~\ref{l.highperiod}.\label{f.pixton-lift}} \end{figure} Consider $w\in \mathcal{O}$, one of its stable branches $W^s\subset W^s_\mathbb{D}(w)$ connecting $w$ to a point $z$ in the boundary of $\mathbb{D}$, and $W^u$ the unstable branch that accumulates on $q$. The points $w,z$ and the curve $W:= W^s\cup W^u$ lift as $\widetilde w, \widetilde z\in \widetilde W=\widetilde W^s\cup\widetilde W^u$. One may assume that $\widetilde z=(0,0)$. Note that $\widetilde W$ separates the strip: its complement contains two components bounded by $(-\infty,0)\times \{0\}$ and $(0,+\infty)\times \{0\}$ respectively. 
To any lift $\widetilde w'=\widetilde f^k(\widetilde w)+(\ell,0)$ of any iterate $f^k(w)$ of $w$, one associates in a same way a curve $\widetilde W'$, disjoint from $\widetilde W$: it either lands on $(-\infty,0)\times \{0\}$ (in which case one denotes $\widetilde W'<\widetilde W$) or on $(0,+\infty)\times \{0\}$. One defines in this way a totally ordered collection of separating sets $\dots<\widetilde W_{n-1}<\widetilde W_n<\widetilde W_{n+1}<\dots$ such that $\widetilde W_n+(1,0)=\widetilde W_{n+\tau}$. Since the point $w$ is not fixed, the sets $\widetilde W_n=\widetilde W^s_n\cup\widetilde W^u_n$ are not fixed by $\widetilde f$: there exists $j\neq 0$ such that $\widetilde f(\widetilde W_n)\subset \widetilde W_{n+j}$ for any $n\in \mathbb Z$. We may assume without loss of generality that $j\geq 1$. See figure~\ref{f.pixton-lift}. The $\widetilde f$-invariant curve $\widetilde \Gamma$ accumulates on each set $\widetilde f^k(\widetilde W)\subset \widetilde W_{kj}$, $k\geq 0$. Since the sets are separating, it intersects all the sets $\widetilde W_n$, $n\geq 0$. Note that the unstable branch $\widetilde \Gamma$ does not intersect the curves $\widetilde W^u_n$. It follows that it intersects all the $\widetilde W^s_n$, $n\geq 0$. For $n\geq 1$ large, let $\widetilde \gamma^u$ be an (open) curve in $\widetilde \Gamma$ which connects $\widetilde W^s_n$ to $\widetilde W^s_{n+\tau}=\widetilde W^s_n+(1,0)$ at two points $a\in \widetilde W^s_n$ and $b\in \widetilde W^s_n+(1,0)$. Let $\widetilde \gamma^s\subset \widetilde W^s_n$ be the (closed) curve which connects $a$ to $b-(1,0)$. The curve $\widetilde \gamma^s\cup \widetilde \gamma^u$ projects on a simple closed curve $\gamma=\gamma^s\cup \gamma^u$ of $\mathbb{D}$ which bounds a disc $D$. By construction, the large forward iterates of $\gamma^s$ converge to the orbit of $w$, hence are contained in $D$. One deduces that $D$ is a Pixton disc.
Note that the lift $\widetilde \gamma=\cup_{k\in \mathbb Z} (\widetilde \gamma^s\cup \widetilde \gamma^u+(k,0))$ separates the boundary $\mathbb{R}\times \{0\}$ from the sets $\widetilde W^u_n$. This implies that the disc $D$ contains all the unstable branches $f^k(W^u)$ of the iterates of $w$ and in particular the orbit $\mathcal O$. Up to replacing $D$ by a large iterate, one finds a Pixton disc whose unstable boundary $\gamma^u$ is arbitrarily close to the limit set $\Lambda$, whose stable boundary $\gamma^s\subset W^s_\mathbb{D}(w)$ has arbitrarily small diameter, and whose area is arbitrarily small. One deduces that the disc is contained in an arbitrarily small neighborhood of its unstable boundary, hence of $\Lambda$. Consequently it is included in $U$ as required. \medskip In the case where $w=q$ has period $1$ but a negative eigenvalue, we argue in a similar way. We denote by $W^s_0$ and $W^s_1$ the two stable branches of $q$ and we lift them as an ordered collection of separating curves $\dots<\widetilde W^s_n<\widetilde W^s_{n+1}<\dots$ such that the curves $\widetilde W^s_{2n}$ lift $W^s_0$ and the curves $\widetilde W^s_{2n+1}$ lift $W^s_1$. Moreover $\widetilde W^s_{n+2}=\widetilde W^s_n+(1,0)$. Since $f(W^s_0)\subset W^s_1$ and $f(W^s_1)\subset W^s_0$, the curves $\widetilde W^s_n$ are not fixed by $\widetilde f$. The end of the proof is similar: we get a curve $\widetilde \gamma$ which separates the boundary $\mathbb{R}\times \{0\}$ from a line $\mathbb{R}\times \{1-\delta\}$, $\delta>0$ small. It projects as a simple closed curve which bounds a Pixton disc containing $q$ as required. \end{proof} \begin{lemma}\label{l.periodone} Each fixed point $p'$ accumulated by $\Gamma$ which does not have an eigenvalue less than or equal to $-1$ is contained in a trapping disc $D\subset U$. \end{lemma} \begin{proof} We use the inductive assumption stated before the section~\ref{ss.pixton}. The fixed point $p'$ belongs to an arc $I'=I_j$ in $\mathcal{I}$.
From our choice of the order on $\mathcal{I}$, we have $j<i$. If $I'$ has the type of a sink, it admits arbitrarily small neighborhoods that are trapping discs. Note that $I'$ cannot have the type of a point with reflexion (since $p'$ does not have an eigenvalue less than or equal to $-1$). \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{tube.pdf} \put(-80,55){\small $V$} \put(-130,38){\small $I'$} \put(-30,20){\small $D'_2$} \put(-200,20){\small $D'_1$} \end{center} \caption{A Pixton disc $D=D'_1\cup V\cup D'_2$ obtained by gluing two Pixton discs.\label{f.tube}} \end{figure} Consequently, we are reduced to considering the case where $I'$ has a non-trivial bundle $F$ and each endpoint is either attracting in the direction $F$ or attached to an $f$-invariant unstable branch $\Gamma'$ (it has the type of a saddle-node or of a saddle with no reflexion). The proposition holds for the branches $\Gamma'$ (this is our inductive assumption). One deduces that there exist one or two trapping discs $D'$ containing the accumulation sets of these branches and included in $U$. Taking the union with a tubular neighborhood $V$ of $I'$ and of the branches $\Gamma'$, one obtains a trapping disc $D\subset U$ which contains the fixed point $p'$.
See figure~\ref{f.tube}.\end{proof} \begin{corollary}\label{c.pixton} Under the setting of theorem~\ref{t.renormalize}, there exists a collection $\mathcal{D}$ of Pixton discs $D$ (disjoint from $W^s_\DD(p)$) such that: \begin{itemize} \item[(a)] all the forward iterates $f^k(D)$ of discs $D\in \mathcal{D}$ are included in $U$, \item[(b)] any periodic orbit $\mathcal{O}$ in the accumulation set of $\Gamma$ is contained in one $D\in\mathcal{D}$, \item[(c)] for any periodic orbit $\widetilde{\mathcal{O}}$ in the accumulation set of $\Gamma$ which is stabilized by a fixed point, there exists a Pixton disc $D\in \mathcal{D}$ which contains the unstable set of $\widetilde{\mathcal{O}}$, any periodic orbit $\mathcal{O}$ decreasing chain-related to $\widetilde{\mathcal{O}}$ and the unstable set of $\mathcal{O}$. \end{itemize} \end{corollary} \begin{proof} For fixed points $p'$ accumulated by $\Gamma$, we either apply the remark~\ref{r.pixton} (when $p'$ is a sink), lemma~\ref{l.highperiod} (when $p'$ is not a sink and has an eigenvalue smaller than or equal to $-1$: it is then stabilized by the fixed point $q=p'$), or lemma~\ref{l.periodone} (in the other case). For any periodic orbit $\widetilde{\mathcal{O}}$ which is stabilized by a fixed point, the lemma~\ref{l.highperiod} provides a Pixton disc $D\subset U$ which contains $\widetilde{\mathcal{O}}$ in its interior and whose stable boundary $\gamma^s$ is contained in $W^s_\mathbb{D}(\widetilde{\mathcal{O}})$. By the no-cycle theorem (theorem~\ref{t.cycle}), the unstable manifold of $\widetilde{\mathcal{O}}$ does not intersect $W^s_\mathbb{D}(\widetilde{\mathcal{O}})\setminus \widetilde{\mathcal{O}}$. This proves that the unstable set of $\widetilde{\mathcal{O}}$ is included in $D$. Let $\mathcal{O}$ be a periodic orbit decreasing chain-related to $\widetilde{\mathcal{O}}$.
For any point $w\in \mathcal{O}$, there exists $\widetilde w\in \widetilde{\mathcal{O}}$ and a chain $C\subset D$ for an iterate of $f$ which contains $w$ and $\widetilde w$. The closure of $C$ is fixed by an iterate of $f$ and is disjoint from $W^s_\mathbb{D}(p)$ (since it is contained in $U$). As a consequence, it is disjoint from $W^u(p)$. Since $C$ is connected and contained in the closure of a decorated region of $\widetilde{\mathcal{O}}$, it intersects at most one connected component of $\mathbb{D}\setminus W^s_\mathbb{D}(\widetilde{\mathcal{O}})$. Since the boundary of $D$ is contained in $W^s_\mathbb{D}(\widetilde{\mathcal{O}})\cup W^u(p)$, the chain $C$ is contained in $D$. This proves that $\mathcal{O}$ is included in $D$. By proposition~\ref{p.unstable} the unstable set of $w$ intersects at most one component of $\mathbb{D}\setminus W^s_\mathbb{D}(\widetilde{\mathcal{O}})$. It does not intersect $W^u(p)$ either. Consequently, $W^u(\mathcal{O})$ is also included in $D$. This gives the item (c). From proposition~\ref{p.decreasing-chain}, any periodic orbit $\mathcal{O}$ in the accumulation set of $\Gamma$ which has period larger than $1$ and is not stabilized by a fixed point is decreasing chain-related to a periodic orbit $\widetilde{\mathcal{O}}$ stabilized by a fixed point. Moreover from proposition~\ref{p.chain-decreasing}, the stabilized orbit $\widetilde{\mathcal{O}}$ is also accumulated by $\Gamma$. The item (c) provides a Pixton disc $D\subset U$ which contains $\widetilde{\mathcal{O}}$ and $\mathcal{O}$. This completes (b). It remains to prove the item (a). By construction, the discs $D$ are contained in $U$. In the case where $D$ is trapping, all its forward iterates are contained in $U$ as well.
In the other cases, $D$ is obtained with lemma~\ref{l.highperiod}: since $f$ is dissipative, the volume of $f^k(D)$ is arbitrarily small for $k$ large; since the stable boundary $\gamma^s$ is contained in the stable curve of a periodic orbit, the length of $f^k(\gamma^s)$ gets arbitrarily small; since the unstable boundary $\gamma^u$ is contained in $\Gamma$, one concludes that all the iterates $f^k(D)$, for $k$ large, are contained in $U$. Up to replacing $D$ by a larger iterate, the property (a) is satisfied. \end{proof} \subsection{Proof of theorem~\ref{t.renormalize2}} We first assume that $f$ preserves the orientation and we consider the setting of the section~\ref{ss.pixton}. The accumulation set $\Lambda$ of $\Gamma$ may be covered by Pixton discs. \begin{lemma}\label{l.covering} Let us consider the family of Pixton discs $\mathcal{D}$ obtained by corollary~\ref{c.pixton}. Then, any point $x$ in the accumulation set $\Lambda$ of $\Gamma$ has a backward iterate in the interior of one of the Pixton discs $D\in \mathcal{D}$. \end{lemma} \begin{proof} The proof of this lemma is done by contradiction. If the conclusion does not hold, the backward orbit of a point $x\in \Lambda$ accumulates on an invariant set $K\subset \Lambda$ that is disjoint from the interior of all the discs $D\in \mathcal{D}$. Then $K$ supports an ergodic measure $\mu$. From the item (b) of corollary~\ref{c.pixton}, this measure $\mu$ is non-atomic. The Pesin theory associates a compact set $B\subset \operatorname{Support}(\mu)$ with $\mu(B)>0$ such that all the points $z$ in $B$ have a stable manifold $W^s_\DD(z)$ which separates $\DD$ and varies continuously with $z\in B$ for the $C^1$ topology. We can thus choose $z\in B$ whose forward orbit is dense in the support of $\mu$ and two forward iterates $z',z''\in B$ close to $z$ and separated by $W^s_\DD(z)$. In particular the region $R\subset \mathbb{D}$ bounded by $W^s_\DD(z')$ and $W^s_\DD(z'')$ does not contain any fixed point.
From the closing lemma (theorem~\ref{t.measure local}), there exists a sequence $(w_k)$ of periodic points in $\Lambda$ converging to $z$. See figure~\ref{f.finiteness-cross}. \begin{figure} \begin{center} \includegraphics[width=4cm,angle=0]{finiteness-cross.pdf} \put(-80,120){\small $R$} \put(-60,72){\small $z''$} \put(-100,70){\small $z'$} \put(-80,68){\small $z$} \put(-10,50){\small $q$} \put(-83,45){\small $w_k$} \end{center} \caption{Proof of lemma~\ref{l.covering}.\label{f.finiteness-cross}} \end{figure} \medskip We first assume that $w_k$ is stabilized by a fixed point $q$; since the $w_k$ are in $\Lambda$, they are accumulated by $\Gamma$ and so by item (c) in corollary~\ref{c.pixton}, there is a Pixton disc $D\in \mathcal{D}$ which contains the orbit of $w_k$ and its unstable set. Since $w_k\in R$ and $q\not\in R$, the unstable set of $w_k$ intersects the boundary of $R$, hence the stable manifold of $z'$ or $z''$. Since the forward orbits of $z'$ and $z''$ equidistribute towards the measure $\mu$, this implies that $\mu$ is supported on $D$, hence on the boundary of $D$. This is a contradiction since the orbit of any point in the boundary of $D$ converges in the future or in the past towards $p$ or the orbit of $w_k$. \medskip When $w_k$ is not stabilized by a fixed point, proposition~\ref{p.decreasing-chain} implies that $w_k$ is decreasing chain-related to a periodic point $\widetilde w_k$ which is stabilized by a fixed point $q_k$ and proposition~\ref{p.chain-decreasing} implies that $\widetilde w_k$ also belongs to $\Lambda$. In the case where $\widetilde w_k$ belongs to $R$, the previous argument applies and gives a contradiction. We are thus reduced to the case where $\widetilde w_k$ does not belong to $R$. Let us consider a chain $C$ for an iterate of $f$ which contains $w_k$ and $\widetilde w_k$. Let $D\in \mathcal{D}$ be a Pixton disc associated to $\widetilde w_k$ as in corollary~\ref{c.pixton} item (c): in particular it contains the chain $C$.
Since $C$ is connected and intersects both $R$ and its complement, there exists an unstable branch $\Gamma'$ of a periodic point $w'_k\in C$ which intersects the stable curve of $z'$ or $z''$. Since $\Gamma'\subset D$, this implies as before that $\mu$ is supported on $D$ and gives a contradiction. \end{proof} We can now complete the proof of the theorem. \begin{proof}[End of the proof of theorem~\ref{t.renormalize2}] Let $\Lambda$ be the accumulation set of $\Gamma$: this is an invariant compact set. Let us consider the collection $\mathcal{D}$ of Pixton discs given by corollary~\ref{c.pixton}. Let $V$ be the union of all the open sets $\operatorname{Interior}(f^k(D))$ over $D\in \mathcal{D}$ and over all $k\geq 0$. This is an open set satisfying $f(V)\subset V\subset U$. By lemma~\ref{l.covering}, any point in the accumulation set $\Lambda$ of $\Gamma$ has a backward iterate in $V$. Since $V$ is forward invariant and by compactness of $\Lambda$, there exists a finite number of Pixton discs $f^k(D_n)$ such that the union of their interiors covers $\Lambda$. The remark~\ref{r.pixton}.(3) allows us to replace any two of these discs which intersect by a single Pixton disc. We repeat this inductively. Since $\Lambda$ is connected, one gets a Pixton disc $\widetilde D$ whose interior contains $\Lambda$. We denote by $\widetilde \gamma^s, \widetilde \gamma^u$ its stable and unstable boundaries. We modify $\widetilde D$ in order to obtain a Pixton disc satisfying some forward invariance. One chooses $k$ large such that $f^k(\widetilde \gamma^u)$ is contained in a small neighborhood of $\Lambda$, hence in $\operatorname{Interior}(\widetilde D)$. From the definition of the Pixton disc we also have $f^k(\widetilde \gamma^s)\subset \operatorname{Interior}(\widetilde D)$. One applies remark~\ref{r.pixton}.(3) again in order to build a Pixton disc $D$ which contains $\widetilde D\cup f(\widetilde D)\cup\dots\cup f^{k-1}(\widetilde D)$.
Since $f(\widetilde \gamma^s)\subset \widetilde D$, the stable boundary $\gamma^s$ of $D$ is included in $\widetilde \gamma^s$: in particular, the first $k$ iterates of $\gamma^s$ are contained in $\operatorname{Interior}(D)$. By construction, the first $k-1$ iterates of the boundary of $D$ are contained in $D$ and all the larger iterates are contained in $\operatorname{Interior}(\widetilde D)$. This proves that the first $k-1$ iterates of $\gamma^u$ are contained in $D$ and the $k$-th iterate is included in $\operatorname{Interior}(D)$. One deduces that $D$ is a Pixton disc whose interior contains $\Lambda$ and which furthermore satisfies $f(D)\subset D$ and $f^k(D)\subset \operatorname{Interior}(D)$. One finally modifies $D$ in order to build a disc $\Delta$ trapped by $f$: for each $x$ in the boundary of $D$, one considers the smallest integer $i\geq 1$ such that $f^i(x)\in \operatorname{Interior}(D)$; one chooses small closed discs $D_{x,0}$, $D_{x,1}$, \dots, $D_{x,i-1}$ centered at $x$, $f(x)$, \dots, $f^{i-1}(x)$ respectively such that $f(D_{x,j})\subset \operatorname{Interior}(D_{x,j+1})$ when $j<i-1$ and $f(D_{x,i-1})\subset \operatorname{Interior}(D)$. By compactness, one selects finitely many points $x_1$, \dots, $x_m$ in the boundary of $D$, such that the union of the interiors of the discs $D_{x_i,0}$ covers the boundary of $D$. By construction, the union of $D$ with all the discs $D_{x_k,j}$ is a compact set $\widetilde \Delta$ whose image is contained in $\operatorname{Interior}(\widetilde \Delta)$. By~\cite{kerekjarto}, $\widetilde \Delta$ is contained in a disc $\Delta$ whose boundary is contained in the boundary of $\widetilde \Delta$. In particular, $f(\Delta)\subset \operatorname{Interior}(\Delta)$. We have thus obtained a trapping disc which contains $\Lambda$. From the item (b) of corollary~\ref{c.pixton}, the trapping disc is disjoint from $W^s_\DD(p)$ as required. The conclusion of the theorem~\ref{t.renormalize} thus holds for $\Gamma$.
Since all the forward iterates of the Pixton discs $D\in \mathcal{D}$ are included in $U$, the disc $\Delta$ may be chosen in a small neighborhood of $\overline U$. This argument applied inductively concludes the proof of theorem~\ref{t.renormalize2} in the case where $f$ is orientation preserving. When $f$ is orientation reversing, one first considers $f^2$ and gets a disc $\Delta_0$ disjoint from $W^s(p)$, which contains the accumulation set of $\Gamma$ and is trapping for $f^2$. The filled union $\Delta_1$ of $\Delta_0$ and $f(\Delta_0)$ has the same property (see remark~\ref{r.pixton}), but also satisfies $f(\Delta_1)\subset \Delta_1$. Arguing as above, one can then modify $\Delta_1$ and get a disc $\Delta\supset \Delta_1$ contained in an arbitrarily small neighborhood of $\Delta_1$ which is trapping for $f$. \end{proof} \subsection{Proof of theorem~\ref{t.renormalize} and its consequences} \begin{proof}[Proof of theorem~\ref{t.renormalize}] From theorem~\ref{t.renormalize2}, the conclusion of theorem~\ref{t.renormalize} holds for the $f$-invariant unstable branches of the arcs $I_i\in \mathcal{I}$. One can easily conclude for the other $f$-invariant branches, i.e. for the unstable branches $\Gamma$ contained in $I_i$. Indeed the accumulation set of such a branch belongs to an isolated fixed arc $I\subset I_i$, which is disjoint from the unstable branch $\Gamma$ and bounded by an endpoint $p_i$ of $I_i$. If $I_i$ has the type of a sink, it admits arbitrarily small neighborhoods that are trapping discs and the proposition follows. Otherwise $p_i$ has an $f$-invariant unstable branch and we know from theorem~\ref{t.renormalize2} that its accumulation set is contained in a trapping disc $\Delta_0$ disjoint from $I_i$. One can then extend the disc $\Delta_0$ with a tubular neighborhood of $I$ and of the unstable branch of $p_i$: one then gets a trapping disc $\Delta$ which contains $I$ (hence the accumulation set of $\Gamma$) as required.
\end{proof} \begin{proof}[Proof of corollary~\ref{c.isolated}] Since $I$ is not reduced to a fixed point with eigenvalue $-1$, three types are possible (see definition~\ref{d.type-arc}). If $I$ has the type of a sink, the $\omega$-limit set of any point in an open neighborhood $W$ is a fixed point of $I$. Moreover by compactness, there exists $k\geq 1$ such that $f^k(\overline W)\subset W$ and $\cap_{n\geq 0} f^n(W)=I$. Hence the $\alpha$-limit set of any point in $W\setminus I$ is disjoint from $W$. If $I$ has the type of a saddle with no reflexion, one applies theorem~\ref{t.renormalize} and considers two trapping discs $V_1,V_2$ disjoint from $I$ which contain the accumulation sets of the unstable branches of $I$. The forward orbit of any point in a neighborhood $W$ of $I$ either intersects $V_1\cup V_2$ (in this case the $\omega$-limit set is contained in $V_1\cup V_2$ and is disjoint from $I$) or is contained in $I$ (it is a fixed point). Let us define $W'=V_1\cup V_2\cup (\cup_n f^n(W))$. By compactness, there exists $k\geq 1$ such that $f^k(\overline W')\subset W'$. Hence the $\alpha$-limit set of any point in $W$ is either disjoint from $W$ or contained in $W$. Since $I$ is normally hyperbolic, any $\alpha$-limit set contained in $W$ is a fixed point of $I$. If $I$ has the type of a saddle-node, one considers a trapping disc $V$ disjoint from $I$ which contains the accumulation set of the (unique) unstable branch of $I$. The forward orbit of any point in a neighborhood $W$ of $I$ either intersects $V$ or is contained in $I$. One introduces $W'=V\cup (\cup_n f^n(W))$ and one argues as in the previous case. \end{proof} \subsection{Trapping discs and periodic measures} As a byproduct of the previous arguments we obtain the following property which will be useful later. \begin{proposition}\label{p.trapping-aperiodic} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero entropy, and let $\mu$ be an aperiodic invariant measure.
Then for $\mu$-almost every point $z$ there exists $\varepsilon>0$ with the following property: if $\Delta$ is a disc trapped by $f$ which contains a point $\varepsilon$-close to $z$, then $\mu$ is supported on $\Delta$. \end{proposition} \begin{proof} The argument appeared in the proof of lemma~\ref{l.covering}. The point $z$ is contained in a strip $R$ bounded by two stable manifolds $W^s_{loc}(z'), W^s_{loc}(z'')$ such that the forward orbits of $z'$ and $z''$ equidistribute towards $\mu$, and such that $R$ does not contain any fixed point. If the disc $\Delta$ contains a point close to $z$, it intersects $R$. Since $\Delta$ is trapped, it also contains a fixed point. Consequently $\Delta$ meets the stable manifold of $z'$ or $z''$, and therefore contains the forward orbit of a large iterate of this point. This forward orbit equidistributes towards $\mu$. This shows that $\mu$ is supported on $\Delta$. \end{proof} \section{Local renormalization} \label{ss.local renormalization} In this section we prove the theorem~\ref{t.theoremA} about the existence of a renormalizable disc. We also explain in proposition~\ref{p.semi} how to renormalize inside a decorated region; this proposition is the main step to prove the global renormalization stated by theorem \ref{t.renormalize-prime} in section \ref{s.renormalize}. \subsection{Renormalizable diffeomorphisms, proof of theorem~\ref{t.theoremA}} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero entropy. We distinguish two cases: either all periodic points are fixed or not. In the first case, one has to prove that $f$ is generalized Morse-Smale; in the second, that there is a renormalizable domain. \paragraph{\it First case: any periodic point of $f$ is fixed.} For any $x\in \DD$, let $\mu$ be an ergodic measure supported on $\omega(x)$. By the closing lemma (theorem~\ref{t.measure local}), $\mu$ is supported on a fixed point $p$ and in particular, the forward orbit of $x$ accumulates on $p$.
If $x$ does not belong to the stable set of $p$ then $p$ has an unstable branch and the forward orbit of $x$ accumulates on that unstable branch. By theorem \ref{t.renormalize} there exists a disc $\Delta$ which is trapped (by $f$ or by $f^2$), contains the accumulation set of the unstable branch and is disjoint from a neighborhood of $p$. In particular, $\omega(x)\subset \Delta\cup f(\Delta)$ and so $\omega(x)$ does not contain $p$; a contradiction. We have shown that any forward orbit converges to a fixed point, thus $f$ is generalized Morse-Smale. \paragraph{\it Second case: there are periodic points with period larger than $1$.} By proposition~\ref{p.decreasing-chain}, there exists a stabilized periodic point $p$. If $p$ has period $k>1$, one considers the decorated region $V_p$ associated to $p$ and observes that one of the following cases holds. \begin{itemize} \item[\;2.a.] there exists an unstable branch of $p$ contained in $V_p$, \item[\;2.b.] $p$ belongs to an arc which is fixed for $f^k$, contained in $\overline{V_p}$ and not reduced to $p$, \item[\;2.c.] $p$ is a saddle-node of $f^k$. \end{itemize} In case 2.a, we can apply again theorem~\ref{t.renormalize} for $f^k$ and the unstable branch of $p$ that is contained in $V_p$: this gives a disc $D\subset V_p$ which is trapped by $f^k$; since the decorated regions of the iterates of $p$ are disjoint, the disc $D$ is disjoint from its first $k-1$ iterates. In cases 2.b and 2.c, it follows immediately that there is a compact disc disjoint from its first $k-1$ iterates and mapped into itself by $f^k$. If $p$ is a stabilized fixed point, it is not a sink. Let $V_p$ be one of its decorated regions. Only the cases 2.a and 2.b can occur. In case 2.a, $p$ has two unstable branches that are exchanged by $f$; hence there exists a disc $D\subset V_p$ which is trapped by $f^2$.
In case 2.b, $p$ is accumulated by points of period $2$: one can then find an arc $I\subset V_p$ which is fixed by $f^2$ and disjoint from $f(I)$, and then a disc $D\subset V_p$ which is mapped into itself by $f^2$. To summarize, in the second case we have found a disc $D$, disjoint from its first $k-1$ iterates and mapped into itself by $f^k$: the diffeomorphism is renormalizable. The theorem~\ref{t.theoremA} is now proved.\qed \subsection{Renormalization inside decorated regions} The following proposition provides the renormalization inside each decorated region, refining the trapped domain inside a decorated region of a periodic point $p$ into finitely many disjoint periodic trapping domains that capture only the periodic points of larger period that are decreasing chain-related to $p$. \begin{proposition}\label{p.semi} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero entropy, $p$ be a stabilized periodic point with a decorated region $V$ and $k$ be the period of $V$. Then, there exists a finite number of disjoint topological discs $D_1,\dots, D_m$ such that \begin{enumerate} \item[a.] $D_1\cup\dots\cup D_m\subset V,$ \item[b.] each $D_i$ is trapped by $f^k,$ \item[c.] $D_1\cup\dots\cup D_m$ contains all the periodic points $q\in V$ that are decreasing chain-related to $p$ in $V$ with period larger than $k$, \item[d.] conversely any periodic point in $D_1\cup\dots\cup D_m$ is decreasing chain-related to $p$. \end{enumerate} \end{proposition} \begin{proof} From corollary~\ref{c.orientation}, the map $f^k$ preserves the orientation. Let $\mathcal{P}$ be the set of $q\in V$ which are decreasing chain-related to $p$ such that \begin{itemize} \item[--] either the period of $q$ is larger than $k$, \item[--] or the period of $q$ equals $k$ and $Df^k(q)$ has an eigenvalue less than or equal to $-1$. \end{itemize} For each $\tau>k$, $\mathcal{P}(\tau)$ will denote the set of $q\in \mathcal{P}$ with period less than or equal to $\tau$.
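In symbols, writing $\operatorname{per}(q)$ for the period of $q$ (a shorthand used only here), these sets form an increasing filtration of $\mathcal{P}$:
\[
\mathcal{P}(\tau)=\{q\in\mathcal{P}\;:\;\operatorname{per}(q)\leq \tau\},\qquad
\mathcal{P}(\tau)\subset\mathcal{P}(\tau')\ \text{when }\tau\leq\tau',\qquad
\mathcal{P}=\bigcup_{\tau>k}\mathcal{P}(\tau),
\]
since every point of $\mathcal{P}$ is periodic with some finite period.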
\begin{lemma}\label{l.first-disc} Any $q\in \mathcal{P}$ belongs to a disc $\Delta_q\subset V$ trapped by some iterate of $f^k$. \end{lemma} \begin{proof} The case where $q$ is a sink is clear. One can thus assume that there exists $\tau > k$ such that $f^\tau(q)=q$ and $Df^\tau(q)$ has an eigenvalue larger than or equal to $1$. We consider a finite collection $\mathcal{J}$ of disjoint isolated arcs fixed by $f^\tau$ which contains all the points that are fixed by $f^\tau$. Since $p$ has an unstable branch in $\DD\setminus \overline V$, we may also assume that each arc is either contained in $\overline V$ or disjoint from it. As in the statement of theorem \ref{t.renormalize2}, we write $J>J'$ if $J$ has an unstable manifold fixed by $f^\tau$ which accumulates on $J'$ and we consider all the sequences $J^0>J^1>\dots>J^n$ in $\mathcal{J}$ such that $q\in J^0$. \begin{claim} The periodic points (different from $p$) in all the arcs $J^i$ are decreasing chain-related to $p$. \end{claim} \begin{proof} The property holds for $J^0$ since $J^0$ contains $q\in \mathcal{P}$ and is included in $\overline V$. If the property holds for $J^0,\dots,J^i$, then by proposition~\ref{p.unstable} the unstable branches of $J^i$ are contained in $\overline V$. By definition~\ref{d.decreasing-chain}, their unstable sets only accumulate on periodic points that are decreasing chain-related to $p$ or coincide with $p$. In particular $J^{i+1}$ is included in $\overline V$ and then satisfies the inductive property. \end{proof} \begin{claim}\label{c.local} The unstable branches of $J^n$ do not accumulate on $p$. \end{claim} \begin{proof} The claim is proved inductively for all arcs $J^0,J^1,\dots,J^n$. Hence one may assume that the unstable branches of the arcs $J^0,\dots,J^{n-1}$ do not accumulate on $p$, and, by contradiction, that one of the unstable branches of $J^n$ accumulates on $p$.
Up to removing some intervals $J^i$, $0< i<n$, from the sequence $J^0,\dots, J^n$, one can also assume that $J^i>J^{i'}$ does not hold when $i+1<i'$. We first prove that for each $i\in\{0,\dots,n-1\}$, the unstable branches of $J^i$ avoid one of the components of $\DD\setminus W^s_{\DD}(J^{i+1})$: otherwise one unstable branch $\Gamma$ of $J^i$ would accumulate on the unstable branches of $J^{i+1}$ and from proposition~\ref{p.transitive}, the accumulation set of $\Gamma$ would contain the accumulation set of the unstable branches of $J^{i+1}$. When $i<n-1$, this implies $J^i>J^{i+2}$, contradicting our choice of the sequence $J^0,\dots,J^n$. When $i=n-1$, our assumption on $J^n$ implies that $\Gamma$ accumulates on $p$, contradicting our assumption that the unstable branches of $J^{n-1}$ do not accumulate on $p$. \medskip Recall that from corollary~\ref{c.orientation}, the map $f^k$ preserves the orientation. The property obtained in the last paragraph together with proposition~\ref{p.heteroclinic} implies that for each $i\in\{0,\dots,n-1\}$, the period of the unstable branches of $J^i$ equals the period of the unstable branches of $J^{i+1}$. By definition of $\mathcal{P}$, either $q\in J^0$ has period larger than $k$, or has period $k$ and the unstable branches of $J^0$ are exchanged by $f^k$. In any case the unstable branches of $J^0$ have period larger than $k$. Consequently the same property holds for each arc $J^i$. But by assumption an unstable branch of $J^n$ accumulates on $p$ and is contained in $\overline V$. Since $f^k$ preserves the orientation, proposition~\ref{p.heteroclinic} implies that the unstable branches of $J^n$ have period $k$. This is a contradiction and the claim is proved. \end{proof} Theorem \ref{t.renormalize2} applied to $f^\tau$ provides discs that are trapped by $f^\tau$, that are contained in $V$ (thanks to claim~\ref{c.local}), and that contain the accumulation sets of the unstable branches of $J^0$. Consider a neighborhood of $J^0$.
Iterating forward, it may be glued to the trapped discs. This defines a disc contained in $V$ that is trapped by $f^\tau$. The lemma~\ref{l.first-disc} is proved. \end{proof} \begin{lemma}\label{l.trapping-tau} For any $\tau>k$ there exists a finite number of disjoint $f^k$-trapped discs whose union $U_\tau$ is included in $V$ and contains $\mathcal{P}(\tau)$. Moreover $U_{\tau}\subset U_{\tau'}$ when $\tau\leq \tau'$. \end{lemma} \begin{proof} Observe that $\mathcal{P}(\tau)$ is compact. From lemma~\ref{l.first-disc}, there exist finitely many discs $\Delta_q\subset V$ that are trapped by some iterates of $f^k$ and contain all the points of $\mathcal{P}(\tau)$. Up to slightly modifying their boundaries if necessary, one can assume that they are transverse. As a consequence the union $U_\tau$ of the discs is a finite disjoint union of submanifolds with boundary which are trapped by an iterate of $f^k$. Since $f$ is dissipative, the components of $U_\tau$ are topological discs. The construction is performed inductively on $\tau$, so that $U_{\tau}\subset U_{\tau'}$ when $\tau\leq \tau'$. It remains to prove that each component of $U_\tau$ is trapped by $f^k$ (instead of an iterate of $f^k$). Let us assume by contradiction that this is not the case: there exist $k<l\leq \tau$ and a disc $\Delta_q\subset U_\tau$ that only contains points decreasing chain-related to $p$ with period larger than or equal to $l$. As in the proof of lemma~\ref{l.first-disc}, one considers a finite collection of disjoint isolated arcs fixed by $f^\tau$ and computes their contributions to the indices of $f^k$ and $f^l$ in the decorated region $V$. The arc $I_0$ which contains $p$ is fixed by $f^k$ and $index(I_0,V,f^k)=index(I_0,V,f^l)$. Similarly, the total contribution of the arcs contained in a disc trapped by $f^k$ equals $1$ (both for the maps $f^k$ and $f^l$).
But the total contribution of the arcs contained in a disc trapped by $f^l$ and not by $f^k$ equals $1$ for the map $f^l$ and $0$ for the map $f^k$. Consequently the index of $f^l$ in $V$ is larger than the index of $f^k$ in $V$. This is a contradiction, since from proposition~\ref{p.local-lefschetz}, the indices of $f^k$ and $f^l$ in the decorated region $V$ coincide (and equal $1/2$). \end{proof} \begin{lemma}\label{c.U} The set $\mathcal P$ is contained in one of the regions $U_\tau$. \end{lemma} \begin{proof} If the conclusion of the lemma does not hold, one can find a sequence $\tau_k\to +\infty$ such that $U_{\tau_k+1}\setminus U_{\tau_k}$ contains a periodic $f^k$-orbit $\mathcal{O}_k$ contained in $\mathcal{P}$. Up to taking a subsequence, $(\mathcal{O}_k)$ converges towards an $f^k$-invariant compact set $K\subset \overline V$. Note that $K$ is aperiodic. Indeed any $x\in K$ is accumulated by a sequence of points $(q_n)$ of $\mathcal{P}$. If $x$ were periodic, then corollary~\ref{c.isolated2} would imply that the periods of the points $q_n$ are bounded. Since the $q_n$ are decreasing chain-related to $p$, this would imply that $x$ has the same property and belongs to $\mathcal P$. This is a contradiction since $K$ is disjoint from the increasing union $\cup_\tau U_\tau$ which contains $\mathcal{P}$. Let $\mu$ be an ergodic $f^k$-invariant measure supported on $K$. By construction, for $\mu$-almost every point $x$ there exists a component $D$ of $U_\tau$ which contains a point arbitrarily close to $x$ as $\tau\to +\infty$. Since $D$ is trapped by $f^k$, the proposition~\ref{p.trapping-aperiodic} implies that $\mu$ is supported on $D$. A contradiction since $K$ is disjoint from $U_\tau$. \end{proof} We have shown that $\mathcal{P}$ is contained in the union $U_\tau$ of finitely many disjoint discs (denoted by $D_1,\dots, D_m$) in $V$ that are trapped by $f^k$.
To conclude, we need to prove that for each disc $D_i$, any periodic point in $D_i$ is decreasing chain related to $p$. Note that the iterates of $D_i$ do not meet the stable manifold of the orbit of $p$ (otherwise the trapping property would imply that $D_i$ contains $p$, a contradiction). In particular $D_i$ cannot contain any fixed point. Observe also that $D_i$ does not contain a stabilized periodic point (since one of its unstable branches would accumulate on a fixed point and would have to be contained in $D_i$). Hence by proposition~\ref{p.decreasing-chain}, each periodic point in $D_i$ is decreasing chain related to some stabilized periodic point. The next lemma asserts that they are necessarily decreasing chain related to $p$. \begin{lemma}\label{c.disc2} Any periodic point $q\in D_i$ is decreasing chain related to $p$. \end{lemma} \begin{proof} The proof is done by contradiction: we assume that $D_i$ contains $q$ which is decreasing chain related to a stabilized periodic point $p'$ which is different from $p$. Since $D_i$ does not contain the stabilized point $p'$ and is trapped by $f^k$, it is disjoint from $W^s_{\mathbb{D}}(p')$. In particular, it is contained in a decorated region $V'$ of $p'$. Note that either $V\subset V'$ or $V'\subset V$. We will assume that the first case occurs (the proof in the second case is similar). By definition~\ref{d.decreasing-chain}, there exists a chain $C$ for an iterate of $f$ that is contained in $\overline {V'}$, intersects $V$ and contains $p'$. Hence $C$ meets $W^s_\mathbb{D}(p)$: there is an unstable branch in $C$ which intersects the stable manifold of $p$, implying that $p$ is decreasing chain-related to $p'$. The corollary~\ref{c.unique-stabilized} then gives the contradiction. \end{proof} The proof of proposition~\ref{p.semi} is now complete.
\end{proof} \section{Finiteness of the set of stabilized periodic points} \label{ss.finitness} In order to prove the global renormalization (theorem \ref{t.renormalize-prime} in the next section), one needs to show that the number of stabilized orbits is finite. This section is devoted to proving the following: \begin{theorem}\label{t.finite} Let $f$ be a mildly dissipative diffeomorphism of the disc with zero topological entropy. Then the set of its stabilized periodic orbits is finite. \end{theorem} Before proving the theorem we associate to any stabilized orbit a filtrating region. \begin{proposition}\label{p.separation} For any stabilized periodic orbit $\mathcal{O}$ there exist two topological discs $U_\mathcal{O}\subset \widehat U_\mathcal{O}$ that are trapped by $f$ such that \begin{enumerate} \item[--] $\widehat U_\mathcal{O}\setminus U_\mathcal{O}$ contains $\mathcal{O}$ and any periodic orbit decreasing chain related to $\mathcal{O}$, \item[--] any periodic orbit in $\widehat U_\mathcal{O}\setminus U_\mathcal{O}$ is either $\mathcal{O}$ or is decreasing chain related to $\mathcal{O}$. \end{enumerate} Moreover if $\mathcal{O}$ is contained in a trapping disc $\Delta$, then $U_\mathcal{O}$ can be chosen in $\Delta$. \end{proposition} \begin{proof} We first assume that the period $k$ of $\mathcal{O}$ is larger than $1$. Let $p\in \mathcal{O}$ and let $V$ be the decorated region associated to $p$. Theorem \ref{t.renormalize2} applied to the stabilized unstable branch $\Gamma$ of $p$ associates a trapping disc $U_\mathcal{O}$ which contains the accumulation set of $\Gamma$. Note that if $\mathcal{O}$ is contained in a trapping disc $\Delta$, then $U_\mathcal{O}$ can be chosen to be included in $\Delta$. Now, we deal with the decorated region. Let $D_1,\dots,D_n$ be the $f^k$-trapping discs given by proposition \ref{p.semi}.
Let $I_1,\dots,I_\ell$ be isolated $f^k$-fixed arcs containing all the periodic points in $V$ that are decreasing chain-related to $p$ and not in $\cup D_i$. Since any periodic point close to $p$ is contained in $V$ (by corollary~\ref{c.isolated2}), we can assume that the arcs $I_j$ are contained in $\overline V$. By definition~\ref{d.chain} of the chains and the invariance of the arcs, each periodic point in $I_j$ either coincides with $p$ or is decreasing chain related to $p$. Each $I_j$ admits a neighborhood $O_j$ which is a topological disc such that if the $\omega$-limit set of a point $x$ by $f^k$ intersects $\overline O_j$, then it is contained in $I_j$ (by corollary~\ref{c.isolated}). Let $\widetilde U_\mathcal{O}$ be the forward invariant set defined by the union of the forward iterates of $O_j$, of $D_i$ and of $U_\mathcal{O}$. It can be written as the union of finitely many connected sets: \begin{itemize} \item[--] the disc $U_\mathcal{O}$, \item[--] the trapping discs $f^m(D_i)$ for $0\leq m<k$ and $1\leq i\leq n$, \item[--] the connected unions $f^m(T_j):=f^m(O_j)\cup f^{m+k}(O_j)\cup f^{m+2k}(O_j)\cup\dots$ for $1\leq j\leq \ell$ and $0\leq m<k$. \end{itemize} By definition of the decreasing chain relation, for each $0\leq m<k$, the union of the interiors of the sets $f^m(D_i)$ and $f^m(T_j)$ for $1\leq i\leq n$ and $1\leq j\leq \ell$ is connected. One of the sets, $O_{j_0}$, contains the point $p$, so that each $f^m(T_{j_0})$ for $0\leq m<k$ intersects $U_\mathcal{O}$. This proves that the interior of $\widetilde U_\mathcal{O}$ is connected. By construction, the set $\widetilde U_\mathcal{O}$ is forward invariant. Since $f$ contracts the volume, the interior of $\widetilde U_\mathcal{O}$ is simply connected, hence homeomorphic to the open disc.
\begin{lemma}\label{l.trapped-finite} If $O_1$,\dots, $O_\ell$ are sufficiently small neighborhoods of $I_1$,\dots, $I_\ell$, the $\omega$-limit set of any point in $\operatorname{Closure}(\widetilde U_\mathcal{O})$ is contained in $\operatorname{Interior}(\widetilde U_\mathcal{O})$. \end{lemma} \begin{proof} Since there is no cycle, one can enumerate the arcs $I_j$ in a way that the unstable branches of $I_j$ do not accumulate on $I_{j'}$ when $j'\geq j$. \begin{claim}\label{c.unstable} Let us consider an unstable branch $\Gamma$ of $I_j$. Then the $\omega$-limit set by $f^k$ of any point of $\Gamma$ belongs to $D_1\cup \dots\cup D_n\cup I_1\cup\dots\cup I_{j-1}$. \end{claim} \begin{proof} Let us consider an $f^k$-invariant ergodic measure $\mu$ supported on the accumulation set $L$ of $\Gamma$. If $\mu$ is supported on an $f^k$-periodic orbit, this orbit is decreasing chain-related to $\mathcal{O}$, hence is contained in $D_1\cup \dots\cup D_n\cup I_1\cup\dots\cup I_\ell$. By our choice of the enumeration of the arcs $I_1,\dots,I_\ell$, the periodic orbit is contained in $D_1\cup \dots\cup D_n\cup I_1\cup\dots\cup I_{j-1}$ and the claim follows in this case. If $\mu$ is aperiodic, the closing lemma (theorem~\ref{t.measure local}) implies that there exist periodic points in $L$ which accumulate on the support of $\mu$. These periodic points are also decreasing chain-related to $\mathcal{O}$ and are contained in $D_1\cup \dots\cup D_n\cup I_1\cup\dots\cup I_{j-1}$. Passing to the limit, one deduces that the support of $\mu$ is also contained in $D_1\cup \dots\cup D_n\cup I_1\cup\dots\cup I_{j-1}$. \end{proof} Since the discs $U_\mathcal{O}$ and $D_i$ are trapped by some iterates of $f$, it is enough to prove that the $\omega$-limit set under $f^k$ of any point in $\operatorname{Closure}(O_j)$ is contained in the union $$U_\mathcal{O} \cup (D_1\cup \dots \cup D_n) \cup (I_1\cup \dots \cup I_{j}).$$ This is proved inductively.
We assume that the property holds for any $j'<j$ and consequently we can suppose that the closure of $$\Delta_{j-1}:=U_\mathcal{O} \cup (D_1\cup \dots \cup D_n) \cup (T_1\cup \dots \cup T_{j-1})$$ is mapped into its interior by some iterate of $f^k$. Let us consider any point $x$ in $\operatorname{Closure}(O_j)$. If its $\omega$-limit set belongs to $I_j$, the inductive property holds trivially. Otherwise, $x$ has a forward iterate in a neighborhood $W$ of a fundamental domain of the unstable branches of $I_j$. By choosing the neighborhood $O_j$ of $I_j$ small enough, the neighborhood $W$ can be chosen arbitrarily small and by the claim~\ref{c.unstable}, any point in $W$ has a forward iterate by $f^k$ in the interior of $\Delta_{j-1}$. We have thus proved that if the $\omega$-limit set of $x$ is not contained in $I_j$, then a forward iterate of $x$ by $f^k$ belongs to $\Delta_{j-1}$ as required. The inductive property is proved, which concludes the proof of lemma~\ref{l.trapped-finite}. \end{proof} From the previous lemma, one can thus modify $\widetilde U_\mathcal{O}$ near its boundary (as explained in the proof of theorem~\ref{t.renormalize2}) and define a topological disc $\widehat U_\mathcal{O}$ which is trapped by $f$. The limit orbits in $\widehat U_\mathcal{O}$ and $\widetilde U_\mathcal{O}$ are the same: for any point in $\widehat U_\mathcal{O}$, the $\omega$-limit set is contained in one of the trapping discs $D_i$, or in $U_\mathcal{O}$, or in one of the arcs $I_j$. With proposition \ref{p.semi}, this shows that any periodic orbit in $\widehat U_\mathcal{O}\setminus U_\mathcal{O}$ either coincides with $\mathcal{O}$ or is decreasing chain related to $\mathcal{O}$. \medskip When the period is $1$, the proof is almost the same. The point $p$ is fixed and has two decorated regions $V,V'$, each one of period $2$.
Working with $V$, one builds $f^2$-trapping discs $D_1,\dots,D_n$, isolated $f^2$-fixed arcs $I_1,\dots,I_\ell$, and neighborhoods $O_1,\dots, O_\ell$ as before. \end{proof} \begin{proof}[Proof of theorem~\ref{t.finite}] We distinguish two types of stabilized periodic orbits $\mathcal{O}$. \begin{itemize} \item[--] First type: $\mathcal{O}$ admits trapping discs $U_\mathcal{O}\subset \widehat U_\mathcal{O}$ as in proposition~\ref{p.separation} such that the set of stabilized periodic orbits in $U_\mathcal{O}$ is finite. \item[--] Second type: for any trapping discs $U_\mathcal{O}\subset \widehat U_\mathcal{O}$ associated to $\mathcal{O}$ as in proposition~\ref{p.separation} there exist infinitely many stabilized periodic orbits in $U_\mathcal{O}$. \end{itemize} The following shows that the set of stabilized periodic orbits of the first type is finite. \begin{claim}\label{c.subsequence} Let $(\mathcal{O}_n)$ be a sequence of distinct stabilized periodic orbits and let $U_{\mathcal{O}_n}\subset \widehat U_{\mathcal{O}_n}$ be trapping discs associated to $\mathcal{O}_n$ as in proposition~\ref{p.separation}. Up to considering a subsequence, the following property holds: $\mathcal{O}_{m}\subset U_{\mathcal{O}_n}$ for each $m>n$. \end{claim} \begin{proof} Up to taking a subsequence, one can assume that the sequence $(\mathcal{O}_{n})$ converges for the Hausdorff topology towards an invariant compact set $K$. From the fact that periodic points are isolated from periodic points of large period (corollary~\ref{c.isolated2}) it follows that $K$ does not contain any periodic point. Let $\mu$ be an ergodic measure supported on $K$. It is aperiodic and, by proposition~\ref{p.trapping-aperiodic}, for any large $n$ the disc $\widehat U_{\mathcal{O}_n}$ contains the support of $\mu$. In particular the stabilized orbits $\mathcal{O}_m$ for $m>n$ large intersect $\widehat U_{\mathcal{O}_n}$.
All the stabilized periodic points (different from the points of $\mathcal{O}_n$) that are contained in $\widehat U_{\mathcal{O}_n}$ are contained in $U_{\mathcal{O}_n}$, hence the orbits $\mathcal{O}_m$ meet and (by the trapping property) are contained in $U_{\mathcal{O}_n}$. Up to extracting a subsequence of $(\mathcal{O}_n)$, one can assume that for any $n$, the set $K$ is contained in $U_{\mathcal{O}_n}$. Let us fix the integer $n$. We have obtained that for $m$ large $\mathcal{O}_m$ is contained in $U_{\mathcal{O}_n}$. Up to extracting a subsequence of $(\mathcal{O}_m)_{m>n}$, one can assume that all the orbits $\mathcal{O}_m$ for $m>n$ are contained in $U_{\mathcal{O}_n}$. By induction, one builds an extracted sequence which satisfies the required property for all integers $m>n$. \end{proof} To conclude, it is enough to show that there is no stabilized point of the second type. Let us assume now by contradiction that this is not the case. One builds inductively a sequence of stabilized periodic orbits of the second type $(\mathcal{O}_n)$ and trapping discs $(U_n)$ satisfying for each $n\geq 1$: \begin{itemize} \item[--] $U_{n}\subset U_{n-1}$, \item[--] $\mathcal{O}_{n}\subset U_{n-1}\setminus U_{n}$, \item[--] $U_n$ contains infinitely many stabilized periodic orbits, \item[--] the period of $\mathcal{O}_{n}$ is minimal among the periods of the stabilized periodic orbits of the second type contained in $U_{n-1}$. \end{itemize} When $\mathcal{O}_n$ and $U_n$ have been built, we choose $\mathcal{O}_{n+1}$ as a stabilized periodic orbit of the second type contained in $U_{n}$ which minimizes the period. By proposition~\ref{p.separation}, there exist trapping discs $U_{n+1}\subset \widehat U_{n+1}$ associated to $\mathcal{O}_{n+1}$, and one can require that $U_{n+1}$ is contained in the trapping disc $U_n$.
In particular $\mathcal{O}_{n+1}\subset U_{n}\setminus U_{n+1}$. Since $\mathcal{O}_{n+1}$ is of the second type, $U_{n+1}$ contains infinitely many stabilized periodic orbits as required. \medskip Once the sequences $(\mathcal{O}_n)$ and $(U_n)$ have been built, one considers (up to extracting a subsequence) the Hausdorff limit $K$ of $(\mathcal{O}_n)$. As in the proof of claim~\ref{c.subsequence}, it supports an ergodic measure $\mu$ which is aperiodic. The intersection of the discs $U_n$ defines an invariant cellular set $\Lambda$ that contains $K$. The closing lemma~\ref{t.measure local} implies that $\Lambda$ contains periodic points $(p_k)$ with arbitrarily large periods which accumulate on a point $x$ of $K$. We consider different cases: \begin{itemize} \item[--] \emph{Some $p_k$ belongs to a stabilized periodic orbit of the second type.} Since the minimal period of stabilized periodic orbits of the second type contained in $U_n$ goes to $+\infty$ as $n\to +\infty$, this is a contradiction. \item[--] \emph{Some $p_k$ is decreasing chain related to a stabilized periodic orbit $\mathcal{O}$ of the second type.} Each trapping disc $U_n$ contains a fixed point $q$ and there exists a decorated region $V$ of $\mathcal{O}$ which does not contain $q$. Up to replacing $p_k$ by one of its iterates, one can assume $p_k\in V$. This shows that $U_n$ meets $V$ and its complement, hence intersects the stable set of $\mathcal{O}$. Since $U_n$ is a trapping disc, it contains $\mathcal{O}$. But the minimal period of stabilized periodic orbits of the second type contained in $U_n$ goes to $+\infty$ as $n\to +\infty$, and this is a contradiction. \item[--] \emph{All the points $p_k$ are decreasing chain related to stabilized periodic orbits of the first type.} Since the number of stabilized periodic orbits of the first type is finite, one can assume that the $p_k$ are all decreasing chain related to the same stabilized periodic orbit $\mathcal{O}$.
Let us consider trapping discs $U_\mathcal{O}\subset \widehat U_\mathcal{O}$ as in proposition~\ref{p.separation}. All the $p_k$ are contained in the filtrating region $\widehat U_\mathcal{O}\setminus U_\mathcal{O}$. Taking the limit, $K$ meets that region. In particular, the orbits $\mathcal{O}_n$ for $n$ large also meet that region. This is a contradiction since the orbits $\mathcal{O}_n$ are stabilized and the region $\widehat U_\mathcal{O}\setminus U_\mathcal{O}$ contains only one stabilized periodic orbit (the orbit $\mathcal{O}$). \end{itemize} In all cases we have found a contradiction. This ends the proof of theorem~\ref{t.finite}. \end{proof} \section{Global renormalization} \label{s.renormalize} We now prove a strong version of theorem~\ref{t.theoremA} and its corollary~\ref{c.dichotomy0}. The proof of corollary~\ref{c.infinitely-renormalizable} is then immediate and left to the reader. \newcounter{theorembis} \setcounter{theorembis}{\value{theorem}} \setcounter{theorem}{0} \renewcommand{\thetheorem}{\Alph{theorem}'} \begin{theorem}\label{t.renormalize-prime} For any mildly dissipative diffeomorphism $f$ of the disc with zero topological entropy, there exist $\ell\geq 0$, some disjoint topological discs $D_1, \dots, D_\ell$ and some integers $k_1,\dots,k_\ell\geq 2$ such that: \begin{enumerate} \item[--] each $D_i$ is trapped by $f^{k_i}$, \item[--] the discs $f^m(D_i)$ for $1\leq i\leq \ell$ and $0\leq m<k_i$ are pairwise disjoint, \item[--] for each $D_i$ there is a stabilized orbit such that each iterate of $D_i$ is contained in a decorated region of an iterate of the stabilized orbit, \item[--] $f$ is generalized Morse-Smale in the complement of the union of the iterates of the discs $(D_i)$, with periodic points of period smaller than or equal to $\max\{1, k_1,\dots, k_\ell\}$. \end{enumerate} In particular the interior $W$ of the union $\bigcup _i \bigcup _{m\geq 0} f^m(D_i)$ is a filtrating open set.
\end{theorem} \setcounter{theorem}{\value{theorembis}} \renewcommand{\thetheorem}{\Alph{theorem}} \subsection{Global renormalization: proof of theorem~\ref{t.renormalize-prime}} We apply proposition \ref{p.semi} and associate to each stabilized periodic orbit $\mathcal{O}_i$ some discs $D_{i,1},\dots,D_{i,\ell_i}$ that are trapped by $f^{k_i}$, where $k_i$ is the period of the decorated regions associated to $\mathcal{O}_i$. By construction each $D_{i,j}$ is contained in a decorated region of $\mathcal{O}_i$ and all the periodic points decreasing chain-related to $\mathcal{O}_i$ and with period larger than $k_i$ belong to the orbits of the discs $D_{i,j}$. The discs $D_{i,j}$ and $D_{i',j'}$ associated to different orbits $\mathcal{O}_i,\mathcal{O}_{i'}$ are disjoint by lemma \ref{c.disc2}, proposition~\ref{p.decreasing-chain} and corollary~\ref{c.unique-stabilized}. Since the number of stabilized orbits is finite (theorem~\ref{t.finite}), we get the first two items. Note that any periodic point which does not belong to the $D_{i,j}$ is either fixed, or stabilized, or decreasing chain-related to a stabilized point with the same period. Hence its period is bounded by $\max\{1, k_1,\dots, k_\ell\}$. Let $x$ be any point whose $\omega$-limit set is not contained in a trapped disc. The limit set supports an ergodic measure $\mu$. This measure cannot be aperiodic, since the closing lemma would produce a periodic orbit with large period outside the discs $D_{i,j}$. Hence the limit set contains a periodic orbit and, by corollary \ref{c.isolated}, coincides with the periodic orbit. The theorem~\ref{t.renormalize-prime} is now proved. \qed \subsection{Infinite renormalization: proof of corollary~\ref{c.dichotomy0}} By theorem~\ref{t.renormalize-prime}, the dynamics of $f$ reduces to a generalized Morse-Smale dynamics in the filtrating set $\mathbb{D}\setminus \overline W$. If $W=\emptyset$, the diffeomorphism $f$ is generalized Morse-Smale and corollary~\ref{c.dichotomy0} holds.
Each connected component of $W$ is a topological disc $\Delta$ which is trapped by an iterate $f^k$ of $f$; moreover the restriction of $f^k$ to $\Delta$ is a mildly dissipative diffeomorphism. One may thus apply theorem~\ref{t.renormalize-prime} inside each of these discs. Arguing inductively, one gets a new decomposition of the dynamics into a generalized Morse-Smale part and discs that are eventually trapped after a return time which increases at each step of the induction. If $f$ is not generalized Morse-Smale, the induction does not stop and $f$ is infinitely renormalizable. Corollary~\ref{c.dichotomy0} follows. \qed \section{Chain-recurrent dynamics} \label{s.odometers} We now describe in detail the dynamics of a mildly dissipative diffeomorphism with zero entropy and prove corollary~\ref{c.structure}. \subsection{Generalized odometers} \begin{proposition}\label{p.odometer} Let $f$ be a mildly dissipative diffeomorphism of the disc, $(D_i)$ be a sequence of topological discs and $(k_i)$ be a sequence of integers such that \begin{itemize} \item[--] $D_i$ is trapped by $f^{k_i}$ and disjoint from its first $k_i-1$ iterates, \item[--] $D_{i+1}\subset D_{i}$ and $k_i<k_{i+1}$ for each $i$. \end{itemize} Then the intersection of the sets $f^{k_i}(D_i)\cup f^{k_i+1}(D_i)\cup\dots \cup f^{2k_i-1}(D_i)$ is a chain-recurrence class $\mathcal{C}$ which is a generalized odometer. In particular it supports a unique invariant measure $\mu$ and for $\mu$-almost every point $x$, the connected component of $x$ in $\mathcal{C}$ is a singleton. \end{proposition} \begin{proof} Let us denote by $\mathcal{C}$ the intersection of the sets $f^{k_i}(D_i)\cup f^{k_i+1}(D_i)\cup\dots \cup f^{2k_i-1}(D_i)$. It is a compact invariant set. For each $i$, let $(\mathcal{O}_i,h_i)$ be the cyclic permutation on the set with $k_i$ elements. The inverse limit of the systems $(\mathcal{O}_i,h_i)$ defines an odometer $(\mathcal{K},h)$ on the Cantor set.
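Observe that in this setting each $k_i$ divides $k_{i+1}$ (the nesting of the discs together with the disjointness of the first $k_i-1$ iterates of $D_i$ forces the return time $k_{i+1}$ to be a multiple of $k_i$), so the cyclic systems $(\mathcal{O}_i,h_i)$ indeed fit into an inverse system. As an illustration, in the model case of successive period doublings, $k_i=2^i$, the inverse limit is the dyadic odometer $$\mathcal{K}=\varprojlim \mathbb{Z}/2^i\mathbb{Z}\simeq \mathbb{Z}_2,\qquad h\colon x\mapsto x+1,$$ which is uniquely ergodic: its unique invariant probability measure is the Haar measure of $\mathbb{Z}_2$.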
The sets $D_i, f(D_i),\dots, f^{k_i-1}(D_i)$ define a partition of $\mathcal{C}$ and induce a factor map onto $(\mathcal{O}_i,h_i)$, hence a semi-conjugacy $p\colon (\mathcal{C},f)\to (\mathcal{K},h)$. Since the connected components of $\mathcal{C}$ coincide with the decreasing intersections of sequences of the form $f^{m_i+k_i}(D_i)$, the preimages $p^{-1}(x)$ coincide with the connected components of $\mathcal{C}$. Let $\nu$ be the unique invariant probability measure on $(\mathcal{K},h)$ and let $\mu$ be an ergodic probability measure on $(\mathcal{C},f)$ such that $p_*(\mu)=\nu$. The following claim shows that $\mathcal{C}$ is a generalized odometer. \begin{claim} For $\nu$-almost every point $z\in\mathcal{K}$, the preimage $p^{-1}(z)$ is a singleton. \end{claim} \begin{proof} Let us consider a set $X\subset \mathcal{C}$ with positive $\mu$-measure which is a hyperbolic block, such that $W^s_\mathbb{D}(x)$ varies continuously with $x\in X$. One can also find a disc $D\subset \mathbb{D}$ which contains $\mathcal{C}$ such that $f(D)\subset \operatorname{Interior}(D)$ and whose boundary is transverse to the manifolds $W^s_\mathbb{D}(x)$, for $x\in X$. Let $B\subset X$ be a subset with positive measure of points having arbitrarily large backward iterates $f^{-n}(x)\in X$ that are accumulated by points of $X$ on both components of $\mathbb{D}\setminus W^s_\mathbb{D}(f^{-n}(x))$. Let us choose $\varepsilon>0$. For each $x\in B$, there exist backward iterates $f^{-n}(x)\in X$ such that $f^n(W^s_\mathbb{D}(f^{-n}(x)))$ has diameter smaller than $\varepsilon/2$. As in the proof of theorem~\ref{t.measure local}, one can thus find a rectangle $R$ with diameter smaller than $\varepsilon$, which contains $x$ and whose boundary is contained in $\partial f^n(D) \cup W^s_\mathbb{D}(x')\cup W^s_\mathbb{D}(x'')$ for two forward iterates $x',x''$ of $x$.
For $i$ large enough, the disc $f^{m_i+k_i}(D_i)$ which contains $x$ is contained in $f^n(D)$, and does not meet the iterates $x',x''$, nor their stable manifolds. Consequently, $f^{m_i+k_i}(D_i)$ has diameter smaller than $\varepsilon$. Since $\varepsilon>0$ has been chosen arbitrarily, the connected component of $\mathcal{C}$ containing $x\in B$ is reduced to $x$. Since $x$ is arbitrary in $B$ which has positive measure and since $\mu$ is ergodic, one deduces that for $\mu$-almost every $x$, the connected component of $x$ in $\mathcal{C}$ (which coincides with $p^{-1}(p(x))$) is a singleton. Since $p_*(\mu)=\nu$, the claim follows. \end{proof} The claim and the characterization of the connected components of $\mathcal{C}$ prove that for $\mu$-almost every point $x$, the connected component of $x$ in $\mathcal{C}$ is a singleton. Since the discs $D_i$ are trapped by $f^{k_i}$, any chain-recurrence class which meets $\mathcal{C}$ is contained in $\mathcal{C}$. For any $\varepsilon$, let us consider $i$ and an iterate $f^{m_i+k_i}(D_i)$ with diameter smaller than $\varepsilon$. Any forward and backward orbit in $\mathcal{C}$ intersects $f^{m_i+k_i}(D_i)$, showing that $\mathcal{C}$ is chain-transitive. This implies that $\mathcal{C}$ is a chain-recurrence class. \end{proof} \subsection{Dynamics on chain-recurrence classes: proof of corollary~\ref{c.structure}} Let us apply inductively theorem~\ref{t.renormalize-prime}, as in the proof of corollary~\ref{c.dichotomy0}. We obtain a decreasing sequence $(W_n)$ of trapped open sets such that the dynamics in each $\DD\setminus \overline {W_n}$ is generalized Morse-Smale. By proposition~\ref{p.MS}, the chain-recurrent set in that region is the set of periodic points. 
Since their periods are bounded, the chain-recurrence classes $\mathcal{C}$ of $f$ in the complement of any $W_k$ can be written as a disjoint union $\mathcal{C}=C\cup\dots\cup f^{m-1}(C)$, where $C$ is a connected component of the set of periodic points and $f^m(C)=C$. Corollary~\ref{c.structure} is proved when $f$ is generalized Morse-Smale. It remains to describe the dynamics when $f$ is infinitely renormalizable, that is when the sequence $(W_n)$ is infinite. By construction the infimum of the periods of the periodic points in $W_n$ gets arbitrarily large as $n$ goes to infinity. Up to replacing $(W_n)$ by the sequence $(f^n(W_n))$, one can assume that the intersection $\cap W_n$ is an invariant compact set $\Lambda$. By construction, each connected component of $W_n$ is a topological disc $D$ which is trapped by an iterate $f^m$ and disjoint from its first $m-1$ iterates. Moreover $m$ goes uniformly to infinity as $n\to +\infty$ (since the periods in $D$ get large when $n$ increases). One deduces from proposition~\ref{p.odometer} that $\Lambda$ is a union of generalized odometers that are chain-recurrence classes of $f$. This ends the proof of corollary~\ref{c.structure}. \qed \section{Set of periods} \label{s.period} In the present section we provide the proof of theorem~\ref{t.period} and corollary~\ref{c.period}. \subsection{Proof of theorem~\ref{t.period}} We consider an infinitely renormalizable $C^r$ diffeomorphism, since for generalized Morse-Smale systems the conclusion of theorem~\ref{t.period} holds immediately by taking $W=\emptyset$.
From theorem~\ref{t.renormalize-prime}, if one assumes by contradiction that theorem~\ref{t.period} is not satisfied, there exist a sequence of topological discs $(D_i)$, integers $k_i,\tau_i\to +\infty$ and periodic orbits $(\mathcal{O}_i)$ such that \begin{itemize} \item[--] $D_i$ is trapped by $f^{k_i}$ and disjoint from its first $k_i-1$ iterates, \item[--] $\mathcal{O}_{i}$ is contained in $D_i\cup f(D_i)\cup\dots\cup f^{k_i-1}(D_i)$ and has period $\tau_i$, \item[--] $\mathcal{O}_i\cap f^m(D_i)$ is a stabilized orbit of $f^{k_i}$ for $0\leq m<k_i$ and has period $\tau_i/k_i\geq 3$. \end{itemize} \begin{claim} For each $i,m$, $\mathcal{O}_{i}\cap f^{m}(D_i)$ is a decorated orbit of $f^{k_i}$ in $\DD$. \end{claim} \begin{proof} The intersection $\mathcal{O}_{i}\cap f^{m}(D_i)$ is a decorated orbit in $D=f^{m}(D_i)$: this means that for any points $x,y$ in the orbit, there exists a path in $D$ which connects them and is disjoint from the local stable manifolds $W^s_D(z)$ in $D$ of the other points of the orbit. In particular there exists a path in $\DD$ which connects them and is disjoint from the local manifolds $W^s_\DD(z)$ in $\DD$, proving that $\mathcal{O}_{i}\cap f^{m}(D_i)$ is decorated in $\DD$. \end{proof} Let us choose $\alpha\in (0,\min(1, r-1))$ and $\varepsilon \in (0, 1/4)$. Theorem~\ref{t.stable} associates a constant $\gamma\in (0,1)$. By theorem~\ref{t.renormalize-prime}, there exists a nested sequence of topological discs $\widehat D_i$ that are periodic and trapped with periods $\widehat k_i\to+\infty$ such that $D_i\subset \widehat D_i$. By proposition \ref{p.odometer} the intersection of the sets $\widehat D_i\cup f(\widehat D_i)\cup\dots \cup f^{\widehat k_i-1}(\widehat D_i)$ is a chain-recurrence class $\mathcal{C}$ which is a generalized odometer. In particular it does not contain any periodic point and it supports a unique invariant probability $\mu$.
Proposition~\ref{p.gamma-strong} implies that $f$ is $\gamma$-dissipative on $\mathcal{C}$, hence on the domains $\widehat D_i\cup f(\widehat D_i)\cup\dots \cup f^{\widehat k_i-1}(\widehat D_i)$ for $i$ large enough. Theorem~\ref{t.stable} provides a compact set $A$ such that $W^s_\mathbb{D}(x)$ exists and varies continuously with $x\in A$ in the $C^1$ topology, and such that $\nu(A)>3/4$ for any invariant probability measure $\nu$ supported on a neighborhood of $\mathcal{C}$. In particular the orbits $\mathcal{O}_i$ have at least $3\tau_i/4$ iterates in $A$. By proposition~\ref{p.odometer}, for $\mu$-almost every point $x$, the connected component of $x$ in $\mathcal{C}$ is reduced to $\{x\}$. This implies that for any $\delta>0$ and for $i$ large enough, at least $3\widehat k_i/4$ discs in the family $\widehat D_i\cup f(\widehat D_i)\cup\dots \cup f^{\widehat k_i-1}(\widehat D_i)$ have diameter smaller than $\delta$. The number of discs $f^{m}(\widehat D_i)$ ($0\leq m<\widehat k_i$) which contain at most $2$ points of $\mathcal{O}_{i}\cap A$ is smaller than $(\tau_{i}/\widehat k_i-2)^{-1}\operatorname{Card}(\mathcal{O}_{i}\setminus A)$, hence than $\tau_i/4$. Consequently there exists a disc $f^{m+k_i}(\widehat D_i)$ with diameter smaller than $\delta$ which contains at least $3$ points $x,y,z$ of $A\cap \mathcal{O}_i$. Since the three points are close, the local stable manifolds $W^s_\DD(x)$, $W^s_\DD(y)$, $W^s_\DD(z)$ are close for the $C^1$-topology. In particular there are coordinates in the disc such that the three curves are graphs over one of the coordinate axes. This implies that one of the stable manifolds separates the two other ones in $\DD$. This is a contradiction since the orbit $\mathcal{O}_{i}\cap f^{m}(D_i)$ of $f^{k_i}$ is decorated. Theorem~\ref{t.period} is proved.
\qed \subsection{Proof of corollary~\ref{c.period}} Theorem~\ref{t.period} implies that there exist a finite number of topological discs $D_1,\dots,D_\ell$ and integers $m_1,\dots,m_\ell$ such that \begin{itemize} \item[--] the discs $f^k(D_i)$ with $1\leq i\leq \ell$ and $0\leq k<m_i$ are pairwise disjoint, \item[--] each disc $D_i$ is trapped by $f^{m_i}$, \item[--] the set $F$ of periods of the periodic points in the complement of $\cup_{i,k} f^k(D_i)$ is finite, \item[--] each $f^{m_i}|_{D_i}$ is infinitely renormalizable and each renormalization disc $\Delta\subset D_i$ is contained in a sequence of renormalization discs $\Delta_0=\Delta\subset \Delta_1\subset\dots\subset \Delta_s=D_i$ such that the period of $\Delta_{j}$ is twice the period of $\Delta_{j+1}$. \end{itemize} Theorem~\ref{t.renormalize-prime} shows that the set of periods of each diffeomorphism $f^{m_i}|_{D_i}$ coincides with $\{2^n,n\geq 0\}$. This shows that the set of periods of $f$ coincides with $$F\cup \{m_i\cdot 2^n,\: 1\leq i\leq \ell \text{ and } n\in{\mathbb N}\}.$$ Corollary~\ref{c.period} follows.\qed \section{Dynamics close to one-dimensional endomorphisms}\label{ss.close-endo} In this section, we prove theorem \ref{t.small jacobian}.
\subsection{Extension of one-dimensional endomorphisms} From now on, and to keep the approach described in \cite{CP}, we consider extensions of one-dimensional endomorphisms which slightly differ from~\eqref{e.extension}, but which work both for the interval and the circle: given a one-dimensional manifold $I$ (the circle $S^1$ or the interval $(0, 1)$), a $C^2$ map $h : I \to I$ isotopic to the identity (such that $h(\partial I) \subset \operatorname{Interior}(I)$ in the case of the interval), $\epsilon>0$ small and $b \in (-1, 1)$ even smaller, we get a map $f_b$ on $\DD := I \times (-\epsilon, \epsilon)$ defined by $$f_b : (x, y) \mapsto (h(x) + y, b(h(x)-x + y)).$$ Indeed for any $y\in {\mathbb R}$ close to $0$ and any $x \in h(I)$, the sum $x + y$ is well defined and, since $h$ is isotopic to the identity, the difference $h(x) - x$ belongs to ${\mathbb R}$. Note that the Jacobian is constant and equal to $b$. When $|b|> 0$, the map $f_b$ is a diffeomorphism onto its image. When $b = 0$ the image $f_0 (\DD)$ is contained in $I \times \{0\}$ and the restriction of $f_0$ coincides with $h \times \{0\}$. Theorems 1 and 2 in~\cite{CP} assert that for $|b|>0$ small enough, the map $f_b$ is mildly dissipative and that the same property holds for any diffeomorphism that is $C^2$-close to $f_b$. The diffeomorphism~\eqref{e.extension} that is presented in the introduction can be handled in the same way. Indeed for $b = 0$, the map $f_0$ is an endomorphism which contracts the curves $h(x) + y = \operatorname{const}$ to a point: these curves are analogous to strong stable manifolds. One can check moreover that, for any ergodic measure which is not supported on a sink, the points in a set with uniform measure are far from the critical set, implying that these curves cross the domain $I \times (-\epsilon, \epsilon)$.
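The constancy of the Jacobian of $f_b$ can be checked by a direct computation from the formula above: $$Df_b(x,y)=\begin{pmatrix} h'(x) & 1\\ b\,(h'(x)-1) & b\end{pmatrix},\qquad \det Df_b(x,y)= b\,h'(x)-b\,(h'(x)-1)= b.$$ In particular $Df_b(x,y)$ is everywhere invertible as soon as $b\neq 0$.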
For $|b| > 0$, the control of the uniformity of the stable manifolds ensures that points in a set of uniformly large measure have local stable manifolds close to the curves $h(x) + y = \operatorname{const}$. \subsection{Parallel laminations} The proof of theorem \ref{t.small jacobian} follows from the property that, for points in a set of large measure, the stable manifolds are ``parallel'', i.e. do not contain decorated configurations. \begin{definition}\label{d.paralell} A family of $C^1$-curves $\gamma\colon [0,1]\to \DD$ is \emph{parallel} if: \begin{enumerate} \item[--] every curve separates the disc: $\gamma(\{0,1\})\subset \partial \DD$ and $\gamma((0,1))\subset \operatorname{Interior}(\DD)$; \item[--] given three of them, there is one that separates the other two. \end{enumerate} \end{definition} \begin{proposition}\label{p.improving-onedim} Given $\delta>0$ and a $C^2$-endomorphism $h$ of the interval, there exists $b_0>0$ such that for any $b$ with $0<|b|<b_0$, and for any diffeomorphism $g$ in a $C^2$-neighborhood of $f_b$, there exists a compact set $S$ such that: \begin{enumerate} \item[--] each $x\in S$ has a stable manifold and the family $\{W^s_\DD(x),x\in S\}$ is parallel; \item[--] for any ergodic measure $\mu$ which is not supported on a sink, $\mu(S)>1-\delta$. \end{enumerate} \end{proposition} \begin{proof} We follow and adapt the proof of theorem 2 in~\cite{CP}. Let $K>\|Dh\|$ and fix $L\gg K$.
We introduce four numbers, depending on $b$: $$\sigma(b):=L\,|b|,\quad \tilde \sigma(b):=|b|/(5K),\quad \tilde \rho(b):=|b|/(25K^2),\quad \rho(b):=L^2\,|b|.$$ Consider the set $\mathcal{A}(f_b)$ of points $x$ having a direction $E\subset T_x\mathbb{D}$ satisfying \begin{equation*} \forall n\geq 0,\;\;\; {\tilde \sigma}^{n}\leq \|Df^n(x)_{|E}\|\leq \sigma^{n},\; \text{ and }\; {\tilde \rho}^n\leq \frac{\|Df^n(x)_{|E}\|^2}{|\det Df^n(x)|}\leq \rho^{n}. \end{equation*} The proof of lemma 4.4 in~\cite{CP} shows that, by taking $L$ large enough, $\mu(\mathcal{A}(f_b))>1-\delta/2$ for any invariant ergodic probability $\mu$ which is not supported on a sink. Let us choose a small neighborhood $U$ of the critical set $\{x, Dh(x)=0\}$. Having chosen $U$ small enough, the measure $\mu(U\times (-\epsilon,\epsilon))$ is smaller than $\delta/2$ and, on its complement, the angle between the stable manifolds $W^s(x)$, $x\in \mathcal{A}(f_b)\setminus U\times (-\epsilon,\epsilon)$, and the horizontal is bounded away from zero. Having chosen $|b|$ small enough, the leaves $W^s_\DD(x)$ for $f_b$ are $C^1$-close to affine segments (theorem 1 in~\cite{CP}) and are uniformly transverse to the horizontal for points $x$ in the set $S:=\mathcal{A}(f_b)\setminus U\times (-\epsilon,\epsilon)$, defining a parallel lamination for $f_b$. When $|b|$ is small enough, the mild dissipation is robust (see theorem 1 of~\cite{CP}) and the property extends to diffeomorphisms $g$ that are $C^2$-close to $f_b$. \end{proof} \subsection{Proof of theorem \ref{t.small jacobian}} Let us choose a diffeomorphism $g$ as in the statement of theorem \ref{t.small jacobian}. Choosing $g$ in a small neighborhood of a diffeomorphism $f_b$, with $|b|$ small, ensures that proposition \ref{p.improving-onedim} holds for some $\delta\in(0,1/3).$ In particular at least $2/3$ of the iterates of any stabilized periodic orbit meet the set $S$: the parallel property then implies that the period of any stabilized periodic orbit is $1$ or $2$.
Theorem~\ref{t.renormalize-prime} gives disjoint renormalization discs with period $2$, such that any periodic orbit in the complement has period $1$ or $2$. Let us consider any one of the obtained renormalization domains $D$ and the induced diffeomorphism $g^2|_D$. Note that if $O\subset D$ is a stabilized orbit of $g^2$, then both $O$ and $g(O)$ are stabilized periodic orbits of $g^2$ in $\DD$. By proposition \ref{p.improving-onedim}, at least $2/3$ of the iterates of $O$ or $g(O)$ belong to $S$. The parallel property then implies that the period of $O$ under $g^2$ is $1$ or $2$. Theorem~\ref{t.renormalize-prime} gives smaller disjoint renormalization discs with period $4$, such that any periodic orbit in the complement is fixed by $g^4$. Arguing inductively, one deduces that there exist renormalization discs of period $2^n$ such that the periodic orbits in the complement are fixed by $g^{2^n}$. Consequently, any periodic orbit is fixed by some iterate $g^{2^n}$, hence has a period which is a power of $2$. \qed \section{Dynamics of the H\'enon map} In this section we prove corollary~\ref{c.henon}. \subsection{Reduction to a dissipative diffeomorphism of the disc} The dynamics of a dissipative H\'enon map is the same as the dynamics of a dissipative diffeomorphism of the disc.
\begin{proposition}\label{p.reduction} For any H\'enon map $f_{b,c}$ with $|b|\in(0,1)$, there exist: \begin{itemize} \item[--] a smooth dissipative diffeomorphism of the disc $g\colon \DD\to g(\DD)$, \item[--] a topological disc $\Delta\subset {\mathbb R}^2$, \item[--] a homeomorphism $h\colon \Delta\to \DD$, \item[--] a decomposition $\DD=\DD_1\cup\DD_2$ into half discs separated by a diameter, \item[--] a decomposition $\Delta=\Delta_1\cup \Delta_2$ with $\Delta_i=h(\DD_i)$, \end{itemize} such that: \begin{enumerate} \item $g(\DD_2)\subset \operatorname{interior}(\DD_2)$ and any forward orbit of $g|_{\DD_2}$ converges to a fixed point $p_0$; \item $f_{b,c}(\Delta_1)\subset \operatorname{interior}(\Delta)$ and $f_{b,c}=h\circ g\circ h^{-1}$ on $\Delta_1$; \item the forward orbit of any $x\in \Delta_2$ escapes to infinity: $\|f_{b,c}^n(x)\|\underset{\tiny n\to+\infty}\longrightarrow \infty$; \item the backward orbit of any $x\in \Delta\setminus f_{b,c}(\Delta)$ escapes to infinity: $\|f_{b,c}^n(x)\|\underset{\tiny n\to-\infty}\longrightarrow \infty$; \item any $f_{b,c}$-orbit which does not meet $\Delta$ escapes to infinity in the past and future. \end{enumerate} \end{proposition} \begin{remark}\label{r.reduction} When $|b|<1/4$, the diffeomorphism $g$ is mildly dissipative. Indeed let us consider an ergodic measure $\mu$ of $g$ which is not supported on a sink. From item (1), it is supported on $\DD_1$. From item (2), $\nu:=h^{-1}_*(\mu)$ is an ergodic measure for $f_{b,c}$ which is not supported on a sink. From Wiman theorem (see~\cite{CP} theorem 2 and section 4.2), for $\nu$-almost every point $x$, each stable curve of $x$ is unbounded in ${\mathbb R}^2$, hence intersects the boundary of $\Delta_1$. From item (2) again, one deduces that for $\mu$-almost every point $y$, each stable curve intersects the boundary of $\DD_1$ and cannot meet $\DD_2$ since its forward orbit is not attracted by $p_0$.
One deduces that each stable curve of $y$ meets the boundary of $\DD$, proving that $g$ is mildly dissipative. \end{remark} \begin{proof}[Proof of proposition~\ref{p.reduction}] Let us fix $R>0$ and define $D=D_1\cup D_2$ where $$D_1=[-R,R]\times\left[-\sqrt{|b|}R,\sqrt{|b|}R\right],\quad D_2=[R, 2R^2]\times\left[-\sqrt{|b|}R,\sqrt{|b|}R\right].$$ One checks easily that if $R$ is large enough, $$f_{b,c}(D_1)\subset \operatorname{interior}(D)\quad \text{ and } \quad f_{b,c}(D_1\cap D_2)\subset \operatorname{interior}(D).$$ One then defines an embedding $\tilde f\colon D\to \operatorname{interior}(D)$ such that \begin{itemize} \item[--] $\tilde f|_{D_1}=f_{b,c}|_{D_1}$, \item[--] $\tilde f(D_2)\subset \operatorname{interior}(D_2)$, \item[--] any forward orbit of $\tilde f|_{D_2}$ converges to a fixed sink. \end{itemize} One approximates $D$ by a disc $\Delta\subset D$ with a smooth boundary, such that $D\setminus \Delta$ is contained in a small neighborhood of $\partial\Delta$ and $\{R\}\times {\mathbb R}$ decomposes $\Delta$ into two half discs $\Delta_1$, $\Delta_2$. One then chooses a diffeomorphism $h\colon \Delta\to \DD$ and sets $g=h\circ \tilde f\circ h^{-1}$. Items (1) and (2) of the proposition are then satisfied. Note that the domain $U^+=\{(x,y),\; |x|\geq R \text{ and } |y|\leq \sqrt{|b|}|x|\}$ is mapped into itself and that, if $R$ has been chosen large enough, the image $(x',y')$ of $(x,y)\in U^+$ satisfies $x'>2|x|$. Consequently the forward orbit of any point in $U^+$ escapes to infinity. The inverse map is $f_{b,c}^{-1}\colon (x,y)\mapsto (-y/b, x-y^2/b^2-c).$ As before, the domain $U^-=\{(x,y),\; |x|\geq R \text{ and } |y|\geq \sqrt{|b|}|x|\}$ is mapped into itself by $f_{b,c}^{-1}$ and, if $R$ has been chosen large enough, the preimage $(x',y')$ of $(x,y)\in U^-$ satisfies $|y'|>2|y|$. Consequently the backward orbit by $f_{b,c}$ of any point in $U^-$ escapes to infinity. This concludes the proof of items (3), (4), (5).
\end{proof} \subsection{Proof of corollary~\ref{c.henon}} Let $g$ be the diffeomorphism given by proposition~\ref{p.reduction}. Since the topological entropy of $f_{b,c}$ vanishes, the same holds for $g$. Moreover by remark~\ref{r.reduction}, $g$ is mildly dissipative. From items (3) and (5) of proposition~\ref{p.reduction}, any forward orbit by $f_{b,c}$ which does not escape to infinity accumulates in a subset $K$ of $\Delta_1$. The image $h(K)$ by the conjugacy is the limit set of a forward orbit of $g$. It is contained in a chain-recurrence class of $g$. With corollary~\ref{c.structure}, one deduces that the forward orbit of $f_{b,c}$ converges to a periodic orbit or to a subset of an odometer. From items (4) and (5), a similar conclusion holds for backward orbits. The periodic set of $f_{b,c}$ is included in $\Delta_1$ and is conjugated by $h$ to the periodic set of $g$, once the fixed point $p_0$ has been excluded. Hence the set of periods of $f_{b,c}$ can be described from the set of periods of $g$. By corollary~\ref{c.period}, it has the structure~\eqref{e.period}. \qed \subsection{Final remark: trapping discs for the H\'enon map} We propose an alternative proof of corollary~\ref{c.henon} in the case where the H\'enon map $f_{b,c}$ is orientation-preserving (i.e. $b\in (0,1)$). Indeed the following proposition holds. One can then find a trapping disc and directly apply corollaries~\ref{c.structure} and~\ref{c.period} to $f_{b,c}$. \begin{proposition}\label{p.quadritomie} Any H\'enon map $f=f_{b,c}\colon (x,y)\mapsto (x^2+c+y,-bx)$ with Jacobian $b\in (0,1)$ satisfies one of the following properties: \begin{itemize} \item[a)] There is no fixed point. All the orbits escape to infinity in the past and the future. \item[b)] The fixed points belong to a simple curve $\gamma\colon [0,+\infty)\to {\mathbb R}^2$ whose image is invariant and which satisfies $\gamma(t)\underset{t\to+\infty}\longrightarrow \infty$. Any forward (resp.
backward) orbit either converges to a fixed point or escapes to infinity. \item[c)] There exist a topological disc $D\subset {\mathbb R}^2$ trapped by $f$ and a simple curve $\gamma\colon {\mathbb R}\to {\mathbb R}^2$ whose image is invariant and which satisfies $\gamma(t)\underset{t\to+\infty}\longrightarrow \infty$ and $\gamma((-\infty,0])\subset D$ such that all the fixed points are contained in $D\cup \gamma({\mathbb R})$. Any backward orbit either converges to a fixed point or escapes to infinity. Any forward orbit either converges to a fixed point, escapes to infinity, or eventually enters $D$. \item[d)] There is a fixed point with a homoclinic orbit. The topological entropy is positive. \end{itemize} \end{proposition} \begin{proof}[Sketch of the proof] If there is no fixed point, by Brouwer theorem, any orbit $(f^n(x))_{n\in{\mathbb Z}}$ converges to infinity when $n\to \pm \infty$ and the proposition holds. In the following we will thus assume that there exists at least one fixed point and that the topological entropy vanishes. If $(x,y)$ is fixed, then $x^2+c-(1+b)x=0$. Hence there exist at most two fixed points. We denote by $q=(x_q,y_q)$ the point whose first coordinate satisfies $x_q=\frac{1+b}{2}+ \sqrt{\frac{(1+b)^2}{4}-c}.$ Note that it is a saddle-node fixed point when $4c=(1+b)^2$, and a hyperbolic saddle with positive eigenvalues otherwise. \begin{claim}\label{c.unbounded-graph} The right unstable branch of $q$ is a graph over $(x_q,+\infty)$. It is contained in the wandering set and the forward orbit of any point in a neighborhood escapes to infinity. \end{claim} \begin{proof} We first consider the case $b>0$. At a point $z=(x,y)$, one considers the direction $(1,v_z):=(1,-x+\sqrt{x^2-b})$. Let us assume $x\geq 0$ and that the image $z'=(x',y')$ satisfies $x'\geq x$. Note that $v_{z}\leq v_{z'}\leq 0$.
If the direction $(1,v)$ at $z$ satisfies $-x\leq v\leq v_z$, then the image $(1,v')$ at $z'$ satisfies $$-x'\leq -x\leq v\leq v'\leq v_z\leq v_{z'}.$$ One can thus obtain the unstable manifold of $q$ by iterating forward a local half graph at $q$ whose tangent directions $(1,v)$ satisfy $-x< v<v_z$ at any of its points $z$ with $x\geq x_q$. The iterates still satisfy these inequalities. The sequence of iterated graphs converges towards the right unstable branch of $q$, which is a graph. Since there is no fixed point satisfying $x>x_q$, it is a graph over $(x_q,+\infty)$. Any point $x$ in a neighborhood of the unstable branch of $q$ has a forward iterate in the domain $U^+$ introduced in the proof of proposition~\ref{p.reduction}, hence escapes to infinity in the future. In particular the first coordinate is eventually strictly increasing and $x$ admits a wandering neighborhood. \end{proof} \begin{claim}\label{l.concavity} On the domain where it is a graph, the local stable manifold of $q$ is convex. On the domain where it is a graph, the local unstable manifold of $q$ is concave. \end{claim} In the following, we denote by $(x_s,x_q)$ and $(x_u,x_q)$ the maximal open domains where the left stable and left unstable branches of $q$ are graphs. \begin{proof} Let us consider a graph $\Gamma=\{(t,\varphi(t))\}_{t\in I}$ whose image is a graph $\Gamma'=\{(s,\varphi'(s))\}$ such that $f|_\Gamma$ preserves the orientation on the first projection. The slope $v(t)=D\varphi(t)$ has an image with slope $v'(t)=-b/(2t+v(t))$. The derivative of the slope of the image is $Dv'(t)=\frac{b(2+Dv(t))}{(2t+v(t))^2}$. If $Dv(t)$ is positive, the same holds for $Dv'(t)$. If $Dv'(t)$ is negative, the same holds for $2+Dv(t)$, hence for $Dv(t)$. One deduces that: if $\Gamma$ is concave, then $\Gamma'$ is also concave; if $\Gamma'$ is convex, then $\Gamma$ is also convex.
\end{proof} \paragraph{\it First case: the left unstable or the left stable branch of $q$ is a graph.} If the left unstable branch of $q$ is a graph, it is bounded by the second fixed point $p$. Hence $W^u(q)\cup\{p\}$ is an invariant closed half line containing the two fixed points. The domain $U={\mathbb R}^2\setminus (W^u(q)\cup\{p\})$ is homeomorphic to a plane. By Brouwer theorem, any orbit which does not belong to that line escapes to infinity in the domain $U$ when $n\to \pm \infty$. Together with the claim~\ref{c.unbounded-graph}, this implies that the forward (resp. backward) orbit either belongs to the stable (resp. unstable) manifold of a fixed point, or converges to infinity in ${\mathbb R}^2$. If the left stable branch of $q$ is a graph, the union of the left stable branch and of the right stable branch is an invariant closed half line and a similar argument holds. \paragraph{\it Second case: the unstable manifold of $q$ is not a graph and $x_s\leq x_u$.} The left unstable branch is not a graph: there exists a point $z_u\in W^u(q)$ with a vertical tangent space and (by claim~\ref{l.concavity}) the unstable arc connecting the points $z_u$ and $q$ is a concave graph $\gamma^u$ over an interval $(x_u,x_q)$. The (local) left stable branch of $q$ is a graph over a maximal interval $(x_s,x_q)$. The tangent map at $q$ is $Df(q)=\begin{pmatrix} 2x_q & 1 \\ -b & 0 \end{pmatrix}$, hence the stable graph is above the unstable graph. Moreover the two graphs are disjoint: if they intersect, by concavity (claim~\ref{l.concavity}) they have a transverse intersection point, and the entropy is positive, contradicting our assumption. One can build a Jordan domain $\Delta$ by considering the union of the unstable arc $\gamma^u$, a vertical segment $\gamma^v$ and a stable arc $\gamma^s$ above $(x_u,x_q)$, see figure~\ref{f.construction-trapped}. We claim that $f(\Delta)\subset \Delta$. Indeed $f(\gamma^u)$ does not cross $\gamma^s$, as explained above.
It does not cross $\gamma^v$ either, since $f^{-1}(\gamma^v)$ is a subset of a convex graph $\{(t, \operatorname{const}-t^2),t\in {\mathbb R}\}$ which is tangent to the concave graph $\gamma^u$ at the point $f^{-1}(z_u)$. Similarly the horizontal segment $f(\gamma^v)$ does not cross the convex graph $\gamma^s$ since one of its endpoints is below the convex graph $\gamma^s$ and the other one is on the graph. One considers a domain $D$ which is bounded by curves close to (but disjoint from) $\gamma^u, \gamma^v,\gamma^s$. The inclination lemma implies that $D$ is a trapped disc, see figure~\ref{f.construction-trapped}. By construction it contains a fundamental domain of the left unstable branch of $q$. One deduces that ${\mathbb R}^{2}\setminus (W^u(q) \cup (\cap_n f^n(D)))$ is homeomorphic to the plane and does not contain any fixed point. Brouwer theorem implies that any forward (resp. backward) orbit either belongs to the stable (resp. unstable) manifold of $q$, or intersects $D$ (resp. belongs to $\cap_n f^n(D)$), or escapes to infinity. \begin{figure} \includegraphics[width=5cm,angle=0]{positive.pdf} \hspace{2cm} \includegraphics[width=5cm,angle=0]{positive-trapped.pdf} \put(-212,31){\small $q$} \put(-266,48){\small $f(\Delta)$} \put(-325,43){\small $z_u$} \put(-315,70){\small $\gamma^v$} \put(-280,30){\small $\gamma^u$} \put(-235,90){\small $\gamma^s$} \put(-275,84){\small $\Delta$} \put(-6,31){\small $q$} \put(-49,48){\small $f(D)$} \put(-73,84){\small $D$} \caption{Construction of a trapped domain (when $b\in (0,1)$).\label{f.construction-trapped}} \end{figure} \paragraph{\it Third case: the left stable branch is not a graph and $x_s> x_u$.} We perform a similar construction. The local stable graph is bounded by a point $z_s$ with a vertical tangent space. As before, the two local graphs are disjoint and one builds a Jordan domain $\Delta$ by considering the union of a stable arc $\gamma^s$, a vertical segment $\gamma^v$ and an unstable arc $\gamma^u$ above $(x_s,x_q)$.
For the same reasons as before, the boundary of $\Delta$ does not cross its image. In this case $f(\gamma^v)$ is a horizontal graph tangent to the convex graph $\gamma^s$ and hence above it. This implies $f(\Delta)\supset \Delta$, which contradicts the volume contraction of $f$. \end{proof}
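As a concrete check of the fixed-point computations used in the sketch above, one can evaluate them numerically; the parameter values $b=0.3$, $c=0.1$ below are illustrative (any $b\in(0,1)$ with $4c<(1+b)^2$ works).

```python
# Fixed points of the Henon map f(x, y) = (x^2 + c + y, -b x):
# x solves x^2 - (1 + b) x + c = 0; x_q is the largest root, and
# Df(q) = [[2 x_q, 1], [-b, 0]] has eigenvalues x_q +/- sqrt(x_q^2 - b),
# a hyperbolic saddle with positive eigenvalues when 4c < (1 + b)^2.
import math

b, c = 0.3, 0.1                       # illustrative values, 4c < (1 + b)^2

def f(x, y):
    return x * x + c + y, -b * x

x_q = (1 + b) / 2 + math.sqrt((1 + b) ** 2 / 4 - c)
q = (x_q, -b * x_q)

# q is indeed a fixed point
fx, fy = f(*q)
assert abs(fx - q[0]) < 1e-12 and abs(fy - q[1]) < 1e-12

# eigenvalues of Df(q): one larger than 1, one in (0, 1), product b
lam_p = x_q + math.sqrt(x_q ** 2 - b)
lam_m = x_q - math.sqrt(x_q ** 2 - b)
assert lam_p > 1 > lam_m > 0
assert abs(lam_p * lam_m - b) < 1e-12  # det Df = b
```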
\section{Introduction} \label{sec:introduction} Collective excitations are simple and informative probes of various physical properties in solids. Among them, excitations related to the interaction of light and matter are perhaps the most numerous. In particular, polaritons are quasiparticles originating from the coupling of electromagnetic waves with any resonance in a material. The paradigmatic example of polaritons is realized by coupled states of electromagnetic waves with phonons in ionic crystals~\cite{Tolpygo,Huang} whose charged particles are not mobile. The latter property makes these materials insulating and allows for an unobstructed propagation of electromagnetic collective modes. The situation is different in metals, which are characterized by a large number of conducting electrons, where electromagnetic waves can propagate only with frequencies higher than the plasma one. Still, surface plasmons~\cite{Ritchie:1957} can propagate with frequencies below the plasma edge. A strong interaction of light with surface plasmons produces surface plasmon polaritons (SPPs), which are, therefore, a particular case of polaritons confined to a metal-dielectric or metal-air interface. Surface plasmon polaritons are particularly important for practical applications~\cite{Maier:book}. Indeed, they can be guided along surfaces and have a significantly smaller wavelength than that of the incident photons, enabling subwavelength optics and lithography beyond the diffraction limit. Further, SPPs are very sensitive to external fields, non-linear effects, and material parameters. This can be used to create nanoscale devices for optical switching and biosensing. Furthermore, the strong sensitivity allows one to investigate various properties of novel materials. Recently, materials characterized by nontrivial topological properties have attracted significant attention. A paradigmatic example of topological matter with gapless energy spectrum is given by Weyl semimetals~\cite{Yan-Felser:2017-Rev,Hasan-Huang:rev-2017,Armitage-Vishwanath:2017-Rev}.
Their low-energy excitations are described by the relativistic-like Weyl equation in the vicinity of the band-touching points called Weyl nodes. Each of these nodes is a monopole of the Berry curvature, whose flux defines a topological charge of the nodes. As proved by Nielsen and Ninomiya~\cite{Nielsen-Ninomiya-1,Nielsen-Ninomiya-2}, the Weyl nodes in lattice systems always come in pairs of opposite chirality, or, equivalently, topological charges. In each pair, the Weyl nodes are separated by $2\mathbf{b}$ in momentum [this breaks the time-reversal (TR) symmetry] and/or $2b_0$ in energy (this breaks the parity-inversion symmetry). The former parameter is known as the chiral shift~\cite{Gorbar:2009bm}. It results in the anomalous Hall effect (AHE)~\cite{Yang-Lu:2011,Burkov-Balents:2011,Burkov-Hook:2011,Grushin:2012,Zyuzin:2012,Goswami:2013,Burkov:2014} in Weyl semimetals, which plays an important role in transport and optical properties of Weyl semimetals. Moreover, the AHE strongly affects collective excitations, including the SPPs. Surface plasmon polaritons in Weyl semimetals were studied in Refs.~\cite{Zyuzin-Zyuzin:2014,Hofmann-DasSarma:2016,Kotov-Lozovik:2016,Kotov-Lozovik:2018,Tamaya-Kawabata:2019,Chen-Belyanin:2019a,Chen-Belyanin:2019b,Abdol-Abdollahipour:2019,Jalali-Mola-Jafari:2019,Abdol-Vala:2020}. The principal finding is that the SPP dispersion in Weyl semimetals with broken TR symmetry is similar to magnetoplasmons in ordinary metals~\cite{Chiu-Quinn:1972,Wallis-Hartstein:1974,Kushwaha-Halevi:1987,Boardman:book} with strong gyrotropic and nonreciprocity effects. It is important to emphasize that a giant nonreciprocity can be attained in the absence of magnetic fields, which is very advantageous for technological applications. In thin films of Weyl semimetals, a hybridization between plasmons localized at the opposite surfaces of the semimetal results in mixed plasmon modes with different localization lengths~\cite{Tamaya-Kawabata:2019}. 
The nontrivial bulk topology of Weyl semimetals is also reflected in unusual surface states known as the Fermi arcs~\cite{Savrasov:2011}. Unlike surface states in ordinary materials, the Fermi arcs form open segments in momentum space that connect Weyl nodes of opposite chirality~\cite{Savrasov:2011,Haldane:2014}. The interplay of the Fermi arcs and the SPPs was studied in Refs.~\cite{Song:2017,Andolina:2018,Losic:2018,Gorbar-Sukhachov:2019-FAH,Adinehvand-Jafari:2019}. By using semiclassical~\cite{Song:2017} and quantum-mechanical nonlocal~\cite{Andolina:2018} approaches, it was found that the constant frequency contours of the surface plasmons become strongly anisotropic. In addition, as was shown in Refs.~\cite{Gorbar-Sukhachov:2019-FAH,Adinehvand-Jafari:2019}, a gapless Fermi arc collective mode could emerge. The dispersion relations of surface plasmons can be measured by the scattering-type near-field optical spectroscopy (for a recent review, see Ref.~\cite{Basov-rev:2016}) as well as the momentum-resolved electron energy loss spectroscopy (see, e.g., Ref.~\cite{Wang-Zhang:1995} and references therein). Experimentally, the electron energy loss in Weyl semimetals was recently studied in Ref.~\cite{Chiarello:2018}. The SPPs were experimentally investigated in the type-II Weyl semimetal WTe$_2$ in Ref.~\cite{Tan-Wang:2018}. The nonreciprocity of the SPPs can be used to develop unidirectional optical devices~\cite{Dotsch-Popkov:2005} such as nonreciprocal circulators, nonreciprocal Mach--Zehnder interferometers, one-way optical waveguides~\cite{Takeda:2008}, etc. Tuning the thickness of a Weyl semimetal, dielectric constants of surrounding media, and the direction of the chiral shift provides efficient means to control the strength of the nonreciprocity. However, such a tuning cannot be performed {\it in situ}, which is crucial for creating easily controllable devices. 
In this study, we propose a different way to control the nonreciprocity of the SPPs, based on the effect of strains in Weyl semimetals. A remarkable property of mechanical strains in Weyl semimetals is their ability to induce pseudoelectromagnetic fields~\cite{Zhou-Shi:2013,Zubkov:2015,Cortijo-Vozmediano:2015,Cortijo:2016wnf,Grushin-Vishwanath:2016,Pikulin:2016,Liu-Pikulin:2016,Ilan-Pikulin:rev-2019}. Unlike the ordinary electromagnetic fields $\mathbf{E}$ and $\mathbf{B}$, their pseudoelectromagnetic counterparts $\mathbf{E}_5$ and $\mathbf{B}_5$ couple to the left-handed and right-handed particles with opposite signs. A pseudoelectric field $\mathbf{E}_{5}$, for instance, can be created by dynamically stretching or compressing the sample. A nonzero pseudomagnetic field $\mathbf{B}_{5}$ is generated, e.g., by applying a static torsion~\cite{Pikulin:2016,Arjona-Vozmediano:2018} or bending the sample~\cite{Liu-Pikulin:2016}. A typical magnitude of the pseudomagnetic field $B_5$ is estimated to be about $0.3~\mbox{T}$ in the former case and about $15~\mbox{T}$ in the latter case. While dynamical pseudoelectromagnetic fields allow for interesting effects such as the acoustogalvanic effect~\cite{Sukhachov:2019}, for the purposes of this study, it will be sufficient to consider only static deformations in Weyl semimetals with broken TR symmetry. Within this model, we find that strains affect the spectrum of the SPPs by reducing their frequencies and can even induce nonreciprocity. Moreover, deformations can be used to tune the localization of the SPPs. The paper is organized as follows. The model, key notions, and numerical estimates of model parameters are provided in Sec.~\ref{sec:model}. The SPPs for the perpendicular, Faraday, and Voigt configurations of the chiral shift and wave vector are investigated in Sec.~\ref{sec:results}. The obtained results are summarized in Sec.~\ref{sec:Summary}.
The effects of a nonuniform chiral shift profile at the surface of Weyl semimetals are discussed in Appendix~\ref{sec:chiral-shift}. Throughout this study, we set $k_{B}=1$. \section{Model} \label{sec:model} Let us begin with defining the model of a Weyl semimetal and presenting general equations for the SPPs. We assume that the Weyl semimetal has the form of a slab of finite thickness $2d$ along the $z$ direction. The corresponding setup together with three configurations of the chiral shift $\mathbf{b}$ and the wave vector $\mathbf{q}$ of the SPPs is shown in Fig.~\ref{fig:setup}. For a slab of sufficiently large thickness, the SPPs on its surfaces overlap weakly and can be considered as independent. In this simplified case, one assumes that the Weyl semimetal is situated at $z > 0$ and vacuum is at $z < 0$. In view of the translational invariance along the interface, the electric field $\mathbf{E}$ is sought as a plane wave with frequency $\omega$ and wave vector along the surface $\mathbf{q}=(q_x,q_y)$, i.e., \begin{equation} \label{model-electric-field-ansatz} \mathbf{E} \propto e^{-i\omega t + iq_xx+iq_yy}\,e^{-\kappa |z|}, \end{equation} which decays exponentially away from the boundary for $\kappa > 0$. The field in vacuum is sought in the same form, however, with a different decay constant $\kappa_0$. The electric field is determined by the following equation: \begin{equation} \bm{\nabla}\times\left[\bm{\nabla}\times \mathbf{E}\right] =-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\,\mathbf{D}, \label{model-wave-equation} \end{equation} where $\mathbf{D}$ is the displacement electric field and $c$ is the speed of light. The same equation where $\mathbf{D}$ is replaced with $\mathbf{E}$ should be used in vacuum.
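To illustrate the procedure in the simplest setting, one may drop the AHE and conductivity terms, take an isotropic $\mathbf{D}=\varepsilon(\omega)\mathbf{E}$, and consider a single interface at $z=0$; substituting the ansatz~(\ref{model-electric-field-ansatz}) into Eq.~(\ref{model-wave-equation}) and matching the fields across the interface then reproduces the textbook SPP relations (a sketch, not the full anisotropic calculation performed below):

```latex
% decay constants on the vacuum (z<0) and semimetal (z>0) sides
\kappa_0^2 = q^2 - \frac{\omega^2}{c^2},
\qquad
\kappa^2 = q^2 - \varepsilon(\omega)\,\frac{\omega^2}{c^2},
% matching the tangential E and normal D at z=0 yields
\qquad
q^2 = \frac{\omega^2}{c^2}\,\frac{\varepsilon(\omega)}{1+\varepsilon(\omega)}.
```

A surface-localized mode ($\kappa,\kappa_0>0$) thus requires $\varepsilon(\omega)<-1$; the AHE terms in $\mathbf{D}$ modify these relations and make them direction-dependent.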
\begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.2\textwidth]{Fig1a.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.2\textwidth]{Fig1b.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.2\textwidth]{Fig1c.eps}} \caption{Schematic setup for the perpendicular configuration $\mathbf{b}\perp\mathbf{q}$ and $\mathbf{b}\parallel\hat{\mathbf{z}}$ (panel (a)), the Voigt configuration $\mathbf{b}\perp\mathbf{q}$ and $\mathbf{b}\perp\hat{\mathbf{z}}$ (panel (b)), and the Faraday configuration $\mathbf{b}\parallel\mathbf{q}$ and $\mathbf{b}\perp\hat{\mathbf{z}}$ (panel (c)). Here $\mathbf{b}$ is the chiral shift vector, $\mathbf{q}$ is the wave vector of surface plasmons, and $\hat{\mathbf{z}}$ is the unit vector in the $z$ direction. The slab is infinite along the $x$ and $y$ directions and has the width $2d$ in the $z$ direction. } \label{fig:setup} \end{figure} \subsection{Hamiltonian and main equations} \label{sec:model-H} To demonstrate the effect of strain-induced axial gauge fields on the SPPs in Weyl semimetals, it suffices to consider the minimal model of a Weyl semimetal with a single pair of Weyl nodes separated by $2\mathbf{b}$ in momentum. The corresponding Hamiltonian has the following form: \begin{equation} \label{model-H-chi} H_{\lambda}=-\mu +\lambda \hbar v_F \bm{\sigma} \cdot\left(-i\bm{\nabla} + \lambda \frac{e}{c\hbar} \mathbf{A}_5(\mathbf{r}) -\lambda \mathbf{b}\right). \end{equation} Here $\lambda=\pm$ is the chirality of Weyl nodes, $\mu$ is the electric chemical potential, $v_F$ is the Fermi velocity, $\bm{\sigma}$ is the vector of Pauli matrices, and $\mathbf{A}_5(\mathbf{r})$ is the axial gauge field. The latter can be induced by strains~\cite{Zhou-Shi:2013,Zubkov:2015,Cortijo-Vozmediano:2015}. 
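For orientation, the momentum-space spectrum of Eq.~(\ref{model-H-chi}) at $\mathbf{A}_5=0$ is $E_{\pm}=-\mu\pm\hbar v_F|\mathbf{k}-\lambda\mathbf{b}|$, so the node of chirality $\lambda$ sits at $\mathbf{k}=\lambda\mathbf{b}$ and the two nodes are separated by $2\mathbf{b}$. A minimal numerical check (illustrative units $\hbar v_F=1$, $\mu=0$):

```python
# Band touching of H_lambda(k) = lam * sigma . (k - lam * b), hbar v_F = 1, mu = 0:
# the gap 2 |k - lam b| closes exactly at k = lam * b, so the pair of nodes
# is separated by 2 b in momentum space.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(k, lam, b):
    kx, ky, kz = k[0] - lam * b[0], k[1] - lam * b[1], k[2] - lam * b[2]
    return lam * (kx * sx + ky * sy + kz * sz)

def gap(k, lam, b):
    e = np.linalg.eigvalsh(H(k, lam, b))
    return e[1] - e[0]

b = np.array([0.0, 0.0, 1.0])   # chiral shift along z (illustrative)
for lam in (+1, -1):
    assert gap(lam * b, lam, b) < 1e-12                       # node at k = lam b
    assert abs(gap(lam * b + np.array([0.1, 0.0, 0.0]), lam, b) - 0.2) < 1e-12
    # linear (relativistic-like) dispersion around each node: gap = 2 |k - lam b|
```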
Moreover, a coordinate-dependent axial gauge field necessarily appears at the surface of a Weyl semimetal, where the chiral shift terminates~\cite{Chernodub-Vozmediano:2014,Grushin-Vishwanath:2016,Grushin:2018,Benito-Matias-Gonzalez:2020}. The dependence of $\mathbf{A}_5$ on coordinates and the direction of the chiral shift $\mathbf{b}$ will be specified later in Secs.~\ref{sec:Estimates} and \ref{sec:results}. The effects of a nonuniform chiral shift profile are considered in Appendix~\ref{sec:chiral-shift}. In particular, we found that surface collective modes become delocalized when the profile of the chiral shift is sufficiently nonuniform. In order to determine the displacement electric field $\mathbf{D}$, the dependence of the electric current density $\mathbf{j}$ on the electric field $\mathbf{E}$ should be specified. In addition to the usual Ohm's current, it is well known \cite{Yang-Lu:2011,Burkov-Balents:2011,Burkov-Hook:2011,Grushin:2012,Zyuzin:2012,Goswami:2013,Burkov:2014} that a Weyl semimetal with broken TR symmetry has the AHE current, which is perpendicular to the electric field. This is the origin of the gyrotropic effects observed in Weyl semimetals even in the absence of a magnetic field. In the model (\ref{model-H-chi}), the AHE current has the form \begin{equation} \label{model-current-AHE} \mathbf{j}_{\text{{\tiny AHE}}} =-\frac{e^2}{2\pi^2 \hbar} \left[\mathbf{b} \times\mathbf{E}\right] +\frac{e^3}{2\pi^2 \hbar^2 c} \left[\mathbf{A}_5 \times\mathbf{E}\right].
\end{equation} Thus, the explicit expression for the displacement vector $\mathbf{D}$ is \begin{equation} \label{model-displacement-field} \mathbf{D}=\left[\varepsilon(\omega)+\frac{4\pi i}{\omega}\sigma\right] \mathbf{E} - \frac{2ie^2}{\pi \hbar \omega} \left[\mathbf{b} \times \mathbf{E}\right] + \frac{2ie^3}{\pi \hbar^2 c \omega} \left[\mathbf{A}_5 \times \mathbf{E}\right], \end{equation} where $\sigma$ describes the real part of the electric conductivity related to disorder and $\varepsilon(\omega)$ is the frequency-dependent dielectric constant of the Weyl semimetal. For simplicity, we assumed that $\varepsilon(\omega)$ does not depend on the wave vector $\mathbf{q}$. This approximation is justified if the inverse wave vector of the SPPs is larger than the inverse Fermi wave vector. Then, the frequency dependence has the standard form $\varepsilon(\omega)=\varepsilon_{\infty}(1-\Omega^2_{\rm e}/\omega^2)$, where $\varepsilon_{\infty}$ is the high-frequency dielectric constant and \begin{equation} \label{model-CMP-k=0-Langmuir} \Omega_{\rm e}^2 = \frac{4e^2}{3\pi\hbar^3 v_F \varepsilon_{\infty}}\left(\mu^2 +\frac{\pi^2 T^2}{3}\right) \end{equation} is the plasma or Langmuir frequency. Here $T$ is temperature. The profiles of electromagnetic fields and the frequencies of the corresponding collective modes are determined by solving Eq.~(\ref{model-wave-equation}) with the appropriate boundary conditions. For these conditions, we demand, as usual, the continuity of the parallel components of the electric field and of the normal component of the magnetic field. These magnetic fields are generated dynamically by oscillating electric currents and fields. Further, since no external charges and currents are present, the perpendicular component of the displacement field and the parallel components of the magnetic field are also continuous.
For example, by using ansatz (\ref{model-electric-field-ansatz}), a homogeneous system of linear algebraic equations can be derived in the case of a semi-infinite slab. The zeros of the determinant of this system define the dispersion relation of SPPs. As we will show below, the case of a strained Weyl semimetal is more complicated and one can no longer look for a solution in the form (\ref{model-electric-field-ansatz}). \subsection{Model parameters} \label{sec:Estimates} In order to provide a direct relation to experiments, we quantify the values of the model parameters in realistic materials. For definiteness, we use in our analysis the numerical constants valid for the Dirac semimetal Cd$_3$As$_2$~\cite{Freyland-Madelung:book,Wang-Yamazaki:2007,Neupane-Hasan-Cd3As2:2014,Liu-Chen-Cd3As2:2014,Li-Yu-Cd3As2:2015}: \begin{equation} \label{estimates-parameters-num} v_{\rm F}\approx 1.5\times 10^8~{\rm cm/s}, \quad \mu\approx 200~{\rm meV}, \quad b\approx 1.6~{\rm nm}^{-1}, \end{equation} where the chiral shift is estimated as the distance between two Dirac points in Cd$_3$As$_2$. In addition, the dielectric constant of the Weyl semimetal candidate Eu$_2$Ir$_2$O$_7$, $\varepsilon_{\infty}=13$~\cite{Sushkov-Drew:2015}, is used. Then, according to Eq.~(\ref{model-CMP-k=0-Langmuir}), the plasma frequency at $T\to0$ can be estimated as \begin{equation} \label{estimates-Omegae} \Omega_{\rm e} \approx 6.6\times10^{13}~\mbox{s}^{-1}. \end{equation} This frequency corresponds to the following characteristic length scale: \begin{equation} \label{estimates-char-length} \frac{c}{\Omega_e} \approx 4.5~\mu\mbox{m}. \end{equation} Note that the thickness of films of Weyl and Dirac semimetals can be even smaller than the characteristic length scale.
For example, films of the Dirac semimetal Cd$_3$As$_2$ with the thickness $2d\approx 35-100~\mbox{nm}$~\cite{Schumann-Stemmer-Cd3As2:2019,Nishihaya-Kawasaki-Cd3As2:2019} and the Weyl semimetals NbP and TaP with the thickness $2d\approx 9-70~\mbox{nm}$~\cite{Bedoya-Pinto-Parkin-NbP:2020} can be grown. The characteristic frequency corresponding to the Weyl node separation is given by \begin{equation} \label{estimates-omegab} \omega_b=\frac{2 e^2 b}{\pi \hbar \varepsilon_{\infty}}\approx 1.7\times10^{14}~\mbox{s}^{-1}\approx2.6\,\Omega_e. \end{equation} It is interesting to note that this frequency is comparable to $\Omega_e$. This suggests that the effects related to the Weyl node separation could be indeed significant in real materials. Further, let us provide estimates of the strain magnitude. We start with the case of bending about the $y$ axis. The corresponding components of the displacement field $\mathbf{u}$ are~\cite{Landau:t7} \begin{eqnarray} \label{estimates-u-bend} u_x=\frac{u_0}{d} xz, \quad u_z=-\frac{u_0}{2d}\left(x^2+D_{\rm L}z^2\right). \end{eqnarray} Here $u_0$ is the maximum relative displacement and $D_{\rm L}$ is a certain function of the Lam\'{e} coefficients. The corresponding strain-induced axial gauge field for $\mathbf{b}\parallel\hat{\mathbf{x}}$ can be estimated as~\cite{Cortijo-Vozmediano:2015} \begin{equation} \label{estimates-A5x-bend} A_{5,x} \simeq -\frac{c\hbar}{e} \beta_G b_x u_{xx} = -\frac{c\hbar \beta_G b_x u_0}{ed} z, \end{equation} where $\hat{\mathbf{x}}$ is the unit vector in the $x$ direction, $\beta_{G}\simeq1$ is the Gr\"{u}neisen parameter, and the standard definition of the strain tensor was used, $u_{ij}=\left(\partial_iu_j+\partial_j u_i\right)/2$. Then, the effective axial field strength, which is defined as \begin{equation} \label{estimates-tA5-def} \tilde{A}_{5} \simeq |A_5| \frac{d}{z}, \end{equation} reads as \begin{equation} \label{estimates-tA5x-bend} \tilde{A}_{5,x} \simeq \frac{c\hbar \beta_G b_x u_0}{e}.
\end{equation} We find it convenient to quantify the magnitude of strain by the following dimensionless parameter: \begin{eqnarray} \label{estimates-beta-def} \beta = \sqrt{\frac{c\omega_b}{\Omega_e^2 b l^2}}, \end{eqnarray} where $l^2 = \hbar c d/(e\tilde{A}_5)$. In the case of bending, it is estimated as \begin{eqnarray} \label{estimates-beta-bend} \beta = \sqrt{\frac{c\omega_b}{\Omega_e^2 b l^2}} = \sqrt{\frac{2e^3 \tilde{A}_{5,x}}{\pi \hbar^2 \varepsilon_{\infty} \Omega_e^2 d}} \simeq \sqrt{\frac{2u_0e^2 c \beta_G b_x}{\pi \hbar \varepsilon_{\infty} \Omega_e^2 d}} \approx 1.6 \sqrt{\frac{c}{\Omega_e d} u_0}. \end{eqnarray} As expected, the strain effects are well manifested in sufficiently thin films. For example, even for $u_0=1\%$ and $d=0.1c/\Omega_e$, the dimensionless parameter $\beta\approx0.5$. In such a case, however, the SPPs on the opposite surfaces hybridize notably. In the case of an inhomogeneous stretching along the $z$ direction, the $z$ component of the displacement vector is \begin{eqnarray} \label{estimates-u-stretch} u_z =z \frac{f(z)}{2d} =z^2 \frac{f(d)-f(-d)}{(2d)^2}, \end{eqnarray} where we assumed a linear dependence of the function $f(z)$ on coordinates. Then \begin{equation} \label{estimates-A5z-stretch} A_{5,z} \simeq -\frac{c\hbar}{e} \beta_G b_z u_{zz} = -\frac{c\hbar}{e} \beta_G b_z z \frac{f(d)-f(-d)}{2d^2}. \end{equation} The corresponding effective axial field strength and the dimensionless parameter $\beta$ are \begin{equation} \label{estimates-tA5z-stretch} \tilde{A}_{5,z} \simeq \frac{c\hbar}{e} \beta_G b_z \frac{\left|f(d)-f(-d)\right|}{2d} \end{equation} and \begin{eqnarray} \label{estimates-beta-stretch} \beta \simeq \sqrt{\frac{2ce^2}{\pi \hbar \varepsilon_{\infty} \Omega_e^2 d} \beta_G b_z \frac{\left|f(d)-f(-d)\right|}{2d}} \approx 1.6 \sqrt{\frac{c}{\Omega_e d} \frac{\left|f(d)-f(-d)\right|}{2d}}, \end{eqnarray} respectively. 
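The numerical estimates above follow directly from Eqs.~(\ref{model-CMP-k=0-Langmuir}), (\ref{estimates-omegab}), and (\ref{estimates-beta-bend}). A short sketch of the arithmetic in Gaussian units (ours; $\beta_G=1$ is assumed, as in the text):

```python
import math

# Fundamental constants in Gaussian (CGS) units
e_ch = 4.803e-10       # electron charge [esu]
hbar = 1.0546e-27      # [erg s]
c = 2.998e10           # [cm/s]

# Cd3As2 parameters, Eq. (estimates-parameters-num); eps_inf of Eu2Ir2O7
vF = 1.5e8                     # [cm/s]
mu = 200.0 * 1.602e-15         # 200 meV in erg
b = 1.6e7                      # 1.6 nm^-1 in cm^-1
eps_inf = 13.0

# Plasma frequency at T -> 0, Eq. (model-CMP-k=0-Langmuir);
# about 6.6e13 s^-1, cf. Eq. (estimates-Omegae)
Omega_e = math.sqrt(4 * e_ch**2 * mu**2
                    / (3 * math.pi * hbar**3 * vF * eps_inf))

# Weyl-node-separation frequency, Eq. (estimates-omegab);
# about 1.7e14 s^-1, i.e., about 2.6 Omega_e
omega_b = 2 * e_ch**2 * b / (math.pi * hbar * eps_inf)

# Bending-induced strain parameter, Eq. (estimates-beta-bend), with
# beta_G = 1, u0 = 1%, and slab half-width d = 0.1 c/Omega_e; about 0.5
u0 = 0.01
d = 0.1 * c / Omega_e
beta = math.sqrt(2 * u0 * e_ch**2 * c * b
                 / (math.pi * hbar * eps_inf * Omega_e**2 * d))

print(Omega_e, c / Omega_e * 1e4, omega_b / Omega_e, beta)
```

The characteristic length $c/\Omega_e$ printed above (in micrometers) reproduces the value quoted in Eq.~(\ref{estimates-char-length}).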
As in the case of bending, the relative deformation $\left|f(d)-f(-d)\right|/(2d)$ could reach a few percent. \section{Results for surface plasmon polaritons} \label{sec:results} In this section, we discuss the results for the dispersion relations of SPPs in Weyl and Dirac semimetals and show how strains affect them. Let us consider first the case of a Dirac semimetal with $\mathbf{b}=\mathbf{A}_5=\mathbf{0}$. Then it is easy to obtain that the dispersion of the SPPs coincides with that in ordinary metals~\cite{Ritchie-Wilems:1969,Barton:rev-1979,Boardman:book,Pitarke-Echenique:rev-2006} and is determined by the following relation: \begin{equation} \varepsilon_1\kappa_0+ \kappa=0, \label{results-Dirac-semimetal} \end{equation} where $\kappa=\sqrt{q^2-\varepsilon_1\omega^2/c^2}$ and $\kappa_0=\sqrt{q^2-\omega^2/c^2}$. The AHE currents and the corrections due to the axial fields generated by strains cancel out for Dirac semimetals. Let us present now the results for Weyl semimetals with broken $\mathcal{T}$ symmetry ($\mathbf{b} \neq \mathbf{0}$). As we will see below and as was noted in, e.g., Ref.~\cite{Hofmann-DasSarma:2016}, the SPPs in Weyl semimetals resemble the magnetoplasmons in conventional metals~\cite{Chiu-Quinn:1972,Wallis-Hartstein:1974,Kushwaha-Halevi:1987,Boardman:book}. It is convenient to rewrite Eq.~(\ref{model-wave-equation}) as \begin{equation} \label{results-wave-eq} \nabla(\nabla \cdot \mathbf{E}) - \Delta \mathbf{E} = \frac{\omega^2}{c^2}\left(\varepsilon_1 \mathbf{E} - i \varepsilon_2 [\hat{\mathbf{b}}\times \mathbf{E}] + i \varepsilon_2 \frac{z}{b l^2}[\hat{\mathbf{A}}_5\times \mathbf{E}]\right), \end{equation} where $\varepsilon_{1} = \varepsilon(\omega) +4\pi i \sigma/\omega$, $\varepsilon_2 = \varepsilon_{\infty}\omega_b/\omega$, and $\hat{\mathbf{A}}_5$ is the unit vector in the direction of $\mathbf{A}_5$. A nonzero conductivity $\sigma$ leads to a dissipation of the SPPs.
For the sake of simplicity, we will ignore it in the rest of the study. The explicit form of Eq.~(\ref{results-wave-eq}) is \begin{equation} \label{results-wave-eq-expl} \begin{pmatrix} q_y^2 - \partial_z^2 & -q_xq_y & i q_x \partial_z \\ -q_x q_y & q_x^2 - \partial_z^2 & i q_y \partial_z \\ i q_x \partial_z & i q_y \partial_z & q_x^2 + q_y^2 \end{pmatrix} \begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix} = \frac{\omega^2}{c^2} \begin{pmatrix} \varepsilon_1& i \hat{b}_z \varepsilon_2 -\frac{i z}{b l^2} \hat{A}_{5,z} \varepsilon_2& -i \hat{b}_y \varepsilon_2 + \frac{i z}{b l^2} \hat{A}_{5,y} \varepsilon_2 \\ -i \hat{b}_z \varepsilon_2 +\frac{i z}{b l^2} \hat{A}_{5,z} \varepsilon_2& \varepsilon_1& i \hat{b}_x \varepsilon_2 - \frac{i z}{b l^2} \hat{A}_{5,x} \varepsilon_2\\ i\hat{b}_y \varepsilon_2 - \frac{i z}{b l^2} \hat{A}_{5,y} \varepsilon_2& -i \hat{b}_x \varepsilon_2 + \frac{i z}{b l^2} \hat{A}_{5,x} \varepsilon_2 & \varepsilon_1\\ \end{pmatrix} \begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix}. \end{equation} It is easy to check that the electric field $\mathbf{E}$ takes the following form in vacuum: \begin{equation} \mathbf{E}_0= \left(E_x(\pm d), E_y(\pm d), \pm i\frac{E_x(\pm d) q_x + E_y(\pm d) q_y}{\kappa_0}\right) e^{i q_x x + i q_y y - \kappa_0 |z\mp d| - i \omega t}. \end{equation} Here $\pm$ corresponds to the upper ($+$) and lower ($-$) vacuum half-spaces. \subsection{Perpendicular configuration} \label{sec:results-sol-perpendicular} Let us start our analysis of the SPPs in strained Weyl semimetals with the perpendicular configuration $\mathbf{b}\perp\mathbf{q}$ and $\mathbf{b}\parallel\hat{\mathbf{z}}$ (see Fig.~\ref{fig:setup}(a)). Without loss of generality, we set the wave vector of the SPPs pointing in the $x$ direction, i.e., $\mathbf{q}\parallel\hat{\mathbf{x}}$. Further, we assume that $\hat{\mathbf{A}}_5\parallel \hat{\mathbf{z}}$.
As was discussed in Sec.~\ref{sec:Estimates}, this axial gauge field could be generated by stretching the sample inhomogeneously along the $z$ axis with $u_z\propto z f(z)/d$ and $f(z)=z$. It is straightforward to show that the matrix equation (\ref{results-wave-eq-expl}) can be rewritten as a fourth-order ordinary differential equation \begin{eqnarray} \label{results-sol-Perp-eq} &&\frac{\varepsilon_1}{\varepsilon_2 \kappa^2\left[1-z/(bl^2)\right]}E_y^{(4)} +\frac{2\varepsilon_1}{\varepsilon_2 bl^2 \kappa^2 \left[1-z/(bl^2)\right]^2} E_y^{(3)} +\frac{2\varepsilon_1}{\varepsilon_2 (bl^2)^2\left[1-z/(bl^2)\right]^3}\left[\frac{1}{\kappa^2} -(bl^2)^2\left(1-\frac{z}{bl^2}\right)^{2}\right] E_y^{\prime \prime} \nonumber\\ &&-\frac{2\varepsilon_1}{\varepsilon_2 bl^2 \left[1-z/(bl^2)\right]^2} E_y^{\prime}-\frac{1}{\varepsilon_2 \left[1-z/(bl^2)\right]^3} \Bigg[\frac{2\varepsilon_1}{(bl^2)^2} -q^2 \varepsilon_1 \left(1-\frac{z}{bl^2}\right)^2 +\frac{\omega^2 \varepsilon_1^2}{c^2} \left(1-\frac{z}{bl^2}\right)^2 \nonumber\\ &&-\frac{\omega^2 \varepsilon_2^2}{c^2} \left(1-\frac{z}{bl^2}\right)^4\Bigg] E_y=0, \end{eqnarray} where we used \begin{eqnarray} E_x &=& -ic^2\frac{E_y^{\prime \prime} -\kappa^2 E_y}{\varepsilon_2\omega^2 \left[1-z/(bl^2)\right]},\\ E_z &=& -\frac{iq E_x^{\prime}}{\kappa^2}, \end{eqnarray} and $q=q_x$. Note that since both $E_x$ and $E_z$ are generically nonzero, SPPs are not purely longitudinal or transverse waves. One can check that Eq.~(\ref{results-sol-Perp-eq}) reproduces the results obtained in Ref.~\cite{Hofmann-DasSarma:2016} in the limit of semi-infinite slab $d\to\infty$ and vanishing pseudomagnetic field $l\to\infty$. In particular, the decay constant in ansatz (\ref{model-electric-field-ansatz}) equals \begin{equation} \label{results-sol-Perp-semiinf} \kappa_{\rm P}^2 = \kappa^2 \pm \frac{|\omega \kappa \varepsilon_2|}{c \sqrt{-\varepsilon_1}}. 
\end{equation} The dispersion relation of the SPPs in the finite slab is obtained by solving Eq.~(\ref{results-sol-Perp-eq}) and requiring the continuity of the tangential components of the dynamical magnetic field. The latter condition is equivalent to the continuity of $\partial_z E_y$ and $\partial_z E_x - i q E_z$ at the boundaries. For fields outside the slab, we have $\partial_z E_x - i q E_z = \sign{z} \omega^2E_x/(c^2\kappa_0)$. Therefore, since the tangential components of the electric field are continuous, we obtain \begin{eqnarray} \label{results-sol-Perp-char-eq} \left.\kappa_0 \varepsilon_1 E_x^{\prime} + \frac{z q \kappa_0 \varepsilon_2}{b l^2}E_x +\sign{z}\kappa^2E_x\right|_{z = \pm d} = 0,\\ \label{results-sol-Perp-char-eq-Ey} \left.\partial_z E_y +\kappa_0\sign{z} E_y\right|_{z = \pm d}=0. \end{eqnarray} The case of a finite slab with nonzero $\mathbf{A}_5$ is more complicated. Therefore, we focus on numerical solutions. It is worth noting, however, that an analytical analysis can still be performed in the case of short and long wavelengths or, equivalently, $q\to\pm\infty$ and $q\to0$, respectively. In the latter case, since SPPs are gapless collective modes, $\varepsilon_1\to\infty$ at $\omega\to0$, leading to the divergence of the first term in Eq.~(\ref{results-sol-Perp-char-eq}). Therefore, in order to satisfy the characteristic equation, one should set $\kappa_0 = 0$. This leads to the following dispersion relation at small momenta: \begin{equation} \label{results-sol-Perp-qto0} \omega(q\to0) =c q, \end{equation} which is nothing else than the dispersion relation of light. Thus, neither the chiral shift nor strains affect the SPPs at small $q$. Further, let us consider the short wavelength limit $q\to\pm\infty$.
In this case, Eq.~(\ref{results-sol-Perp-eq}) simplifies and can be rewritten as \begin{equation} \label{results-sol-Perp-eq-q-inf} \varepsilon_1 E_y^{(4)} +\frac{2\varepsilon_1}{bl^2\left[1-z/(bl^2)\right]} E_y^{(3)} -2\varepsilon_1 q^2 E_y^{\prime \prime} -\frac{2q^2 \varepsilon_1}{bl^2\left[1-z/(bl^2)\right]} E_y^{\prime}+\varepsilon_1 q^4E_y=0. \end{equation} Its general solution is \begin{equation} \label{results-sol-Perp-sol-q-inf} E_y=C_1e^{qz} +C_2e^{-qz} +C_3 z\left[3+ qbl^2\left(2-\frac{z}{bl^2}\right)\right] e^{qz}+C_4 z\left[3- qbl^2\left(2-\frac{z}{bl^2}\right)\right]e^{-qz}, \end{equation} where $C_i$ with $i=\overline{1,4}$ are constants determined from the boundary conditions (\ref{results-sol-Perp-char-eq}) and (\ref{results-sol-Perp-char-eq-Ey}). By substituting solution (\ref{results-sol-Perp-sol-q-inf}) into Eqs.~(\ref{results-sol-Perp-char-eq}) and (\ref{results-sol-Perp-char-eq-Ey}), we find \begin{equation} \label{results-sol-Perp-omega-q-inf} \omega\left(q \to \pm\infty\right) = \Omega_e \sqrt{\frac{\varepsilon_{\infty}}{1+\varepsilon_{\infty}}}. \end{equation} This result agrees with that for conventional surface plasmons~\cite{Ritchie:1957}. It is clear that strain does not induce nonreciprocity in this case. \begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig2a.eps}} \hspace{0.05\textwidth} \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig2b.eps}} \hspace{0.05\textwidth} \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig2c.eps}} \caption{The dispersion relation of the surface plasmon polaritons in a slab of Weyl semimetal for the perpendicular configuration at $\beta=0$ (red solid lines), $\beta=0.5$ (blue dashed lines), and $\beta=1$ (green dotted lines). Panels (a), (b), and (c) correspond to two SPP branches $\omega_{-}$ and $\omega_{+}$, as well as the lowest bulk mode $\omega_{\rm B,1}$, respectively. We set $d=2c/\Omega_e$ and $\omega_b=\Omega_e$. 
} \label{fig:results-perp-omega-few-beta} \end{figure} The numerical solutions for dispersion relations obtained from Eq.~(\ref{results-sol-Perp-eq}) with the boundary conditions (\ref{results-sol-Perp-char-eq}) and (\ref{results-sol-Perp-char-eq-Ey}) are shown in Fig.~\ref{fig:results-perp-omega-few-beta}, where the case $\beta=0$ corresponds to the absence of strain. The frequencies $\omega_{+}$ and $\omega_{-}$ correspond to two branches of the SPP spectrum. If the width of the slab is sufficiently large, then these modes can be understood as a combination of the SPPs localized at the opposite surfaces. They are hybridized, however, in a thin slab. Nevertheless, we can still distinguish them by using the symmetry properties of the field component $E_x$ in the unstrained limit. In this case, $\omega_{+}$ and $\omega_{-}$ correspond to the modes with antisymmetric and symmetric distributions of the field, respectively. Clearly, strain decreases the frequencies of the SPPs for intermediate values of $q$. However, in agreement with the analytical result (\ref{results-sol-Perp-omega-q-inf}), there is no dependence on strain at $q\to\pm\infty$. In addition to the SPPs, we also present one of the bulk modes in Fig.~\ref{fig:results-perp-omega-few-beta}(c), which is determined as the lowest delocalized solution. The field profiles of the SPPs are shown in Fig.~\ref{fig:results-perp-fields}. Unlike the case of a semi-infinite slab, where the electric field of the surface modes is localized at the boundary, the field can be relatively large inside a slab of small thickness. The localization becomes more pronounced as the slab width increases. Furthermore, we found that strain enhances the localization of the SPPs. Depending on its direction, the modes become localized on either the top or the bottom surface. Therefore, deformations can be used to effectively tune the localization of the SPPs in Weyl semimetals.
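Returning to the short-wavelength limit, the general solution (\ref{results-sol-Perp-sol-q-inf}) can be verified independently. The third basis function has the polynomial prefactor $-qz^2+(3+2qB)z$ with $B\equiv bl^2$, and, noting that $bl^2\left[1-z/(bl^2)\right]=B-z$, it must solve Eq.~(\ref{results-sol-Perp-eq-q-inf}) identically after the common factor $\varepsilon_1$ is divided out. A sketch of this check (ours) with exact rational arithmetic:

```python
from fractions import Fraction

# Verify that E_y = [-q z^2 + (3 + 2 q B) z] e^{q z}, with B = b l^2, solves
#   E'''' + 2 E'''/(B - z) - 2 q^2 E'' - 2 q^2 E'/(B - z) + q^4 E = 0
# identically. A derivative acts on p(z) e^{qz} as p -> p' + q p.

q, B = Fraction(3), Fraction(5)          # arbitrary nonzero test values

def d_exp(p):
    """Polynomial prefactor of d/dz [p(z) e^{qz}]."""
    dp = [Fraction(i) * p[i] for i in range(1, len(p))] + [Fraction(0)]
    return [dp[i] + q * p[i] for i in range(len(p))]

def add(p, r):
    n = max(len(p), len(r))
    p = p + [Fraction(0)] * (n - len(p))
    r = r + [Fraction(0)] * (n - len(r))
    return [a + b for a, b in zip(p, r)]

def scale(c, p):
    return [c * a for a in p]

def mul_B_minus_z(p):                     # multiply p(z) by (B - z)
    return add(scale(B, p), [Fraction(0)] + scale(Fraction(-1), p))

P0 = [Fraction(0), 3 + 2 * q * B, -q]     # -q z^2 + (3 + 2 q B) z
P1 = d_exp(P0); P2 = d_exp(P1); P3 = d_exp(P2); P4 = d_exp(P3)

# Multiply the equation by (B - z); the residual must vanish identically.
res = add(mul_B_minus_z(add(add(P4, scale(-2 * q * q, P2)), scale(q**4, P0))),
          add(scale(Fraction(2), P3), scale(-2 * q * q, P1)))
assert all(coeff == 0 for coeff in res)
```

The same check applies, with $q\to-q$, to the fourth basis function in solution (\ref{results-sol-Perp-sol-q-inf}).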
\begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig3a.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig3b.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig3c.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig3d.eps}} \caption{Profiles of the $x$ component of the electric field $E_x$ in the perpendicular configuration. Top and bottom panels correspond to two SPP branches $\omega_{-}$ and $\omega_{+}$, respectively. Strain strength is $\beta=0$ in panels (a) and (c) as well as $\beta=1$ in panels (b) and (d). We set $d = 2c/\Omega_{e}$ and $\omega_b = \Omega_e$.} \label{fig:results-perp-fields} \end{figure} \subsection{Voigt configuration} \label{sec:results-sol-Voigt} Let us proceed to the Voigt configuration, which is schematically shown in Fig.~\ref{fig:setup}(b). For the sake of definiteness, we set $\mathbf{q}\parallel \hat{\mathbf{x}}$ and $\mathbf{b}\parallel\hat{\mathbf{y}}$. Further, we assume that $\hat{\mathbf{A}}_5\parallel\hat{\mathbf{y}}$. As we discussed in Sec.~\ref{sec:Estimates}, this axial gauge field can be generated by bending about the $x$ axis producing $\mathbf{A}_5 \propto u_0b_yz \hat{\mathbf{y}}/d$. Equation~(\ref{results-wave-eq-expl}) takes the following form in the case of the Voigt configuration: \begin{eqnarray} \label{results-sol-Voigt-eq} \varepsilon_1 E_x^{\prime \prime} -E_x \left[\varepsilon_1 q^2 -\frac{q\varepsilon_2}{bl^2} - \frac{\varepsilon_1^2\omega^2}{c^2} + \left(1-\frac{z}{bl^2}\right)^2\frac{\omega^2 \varepsilon_2^2}{c^2} \right]=0. \end{eqnarray} Note that the $y$ component of the field is decoupled and does not correspond to plasmon modes. Furthermore, it can be shown that it vanishes after matching with solutions in vacuum. 
The $z$ component of the electric field is related to $E_x$ according to \begin{equation} \label{results-sol-Voigt-Ez-Ex} E_z = -i\frac{q}{\kappa^2} E_x^{\prime} + i\frac{\omega^2}{c^2 \kappa^2}\varepsilon_2\left(1 - \frac{z}{b l^2}\right) E_x. \end{equation} Let us check that we reproduce the results obtained in the literature if strains are ignored. By using ansatz (\ref{model-electric-field-ansatz}) and taking the limit $l \rightarrow \infty$, the following decay constant is obtained: \begin{equation} \label{results-sol-Voigt-kappaV} \kappa_{V}^2 = q^2 + \frac{\omega^2}{c^2}\left(\frac{\varepsilon_2^2}{\varepsilon_1} - \varepsilon_1\right), \end{equation} which agrees with the result in Ref.~\cite{Hofmann-DasSarma:2016}. In general, Eq.~(\ref{results-sol-Voigt-eq}) should be solved numerically. By requiring the continuity of the tangential component of the magnetic field, which is equivalent to the continuity of $\partial_z E_x - i q E_z$, the following characteristic equation is derived: \begin{equation} \label{results-sol-Voigt-char-eq} \left.\kappa_0 \varepsilon_1 E_x^{\prime} - \kappa_0 \varepsilon_2 q\left(1 - \frac{z}{b l^2}\right)E_x + \sign{z}\kappa^2 E_x \right|_{z = \pm d} = 0. \end{equation} Here, the last term stems from the vacuum solution. Before presenting numerical results, let us investigate the limit of long and short wavelengths, i.e., $q\to0$ and $q\to \pm \infty$, respectively. In the case $q\to0$, the same simple result as in the perpendicular configuration can be obtained [see Eq.~(\ref{results-sol-Perp-qto0})]. For short wavelengths ($q\to \pm\infty$), a solution to Eq.~(\ref{results-sol-Voigt-eq}) can be sought as $E_x(z) = C_1 e^{q z} + C_2 e^{-q z}$. 
Then, by using Eq.~(\ref{results-sol-Voigt-char-eq}) and retaining only the leading terms in $1/q$, we obtain \begin{eqnarray} \label{results-sol-Voigt-omega-q-inft-plus} \omega_{\pm}(q\to\infty) = -\omega_b\varepsilon_{\infty}\frac{d\mp bl^2}{2bl^2 (1+\varepsilon_{\infty})} + \frac{\sqrt{(\omega_b\varepsilon_{\infty})^2\left(d\mp bl^2\right)^2 +4b^2l^4 \Omega_e^2 \varepsilon_{\infty}(1+\varepsilon_{\infty})}}{2bl^2(1+\varepsilon_{\infty})},\\ \label{results-sol-Voigt-omega-q-inft-minus} \omega_{\pm}(q\to-\infty) = \omega_b\varepsilon_{\infty}\frac{d\pm bl^2}{2bl^2 (1+\varepsilon_{\infty})} + \frac{\sqrt{(\omega_b\varepsilon_{\infty})^2\left(d\pm bl^2\right)^2 +4b^2l^4 \Omega_e^2 \varepsilon_{\infty}(1+\varepsilon_{\infty})}}{2bl^2(1+\varepsilon_{\infty})}, \end{eqnarray} where the subscript $\pm$ corresponds to the second ($+$) and first ($-$) branches of the SPP spectrum. As one can see, the spectrum is nonreciprocal. The magnitude of the nonreciprocity for a weak strain and a small chiral shift reads as \begin{eqnarray} \label{results-sol-Voigt-omega-q-inft-nonrecipr} \left|\omega_{\pm}(q\to\infty)-\omega_{\pm}(q\to-\infty)\right|&\approx& \frac{d \omega_b \varepsilon_{\infty}\left[\varepsilon_{\infty} \omega_b^2 +4\Omega_e^2(1+\varepsilon_{\infty}) +\omega_b\sqrt{\varepsilon_{\infty}} \sqrt{\varepsilon_{\infty} \omega_b^2 +4\Omega_e^2(1+\varepsilon_{\infty})}\right]}{bl^2 (1+\varepsilon_{\infty}) \left[\varepsilon_{\infty} \omega_b^2 +4\Omega_e^2 (1+\varepsilon_{\infty})\right]} \nonumber\\ &\approx& \frac{d \omega_b \varepsilon_{\infty}}{bl^2 (1+\varepsilon_{\infty})} = \frac{2e^3 \tilde{A}_{5,y}}{\pi \hbar^2 c(1+\varepsilon_{\infty})}. \end{eqnarray} It grows with the magnitude of strain. Numerical results for the SPP dispersion at a few values of the strain strength $\beta$ are shown in Fig.~\ref{fig:results-Voigt-omega-few-beta}.
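The asymptotes (\ref{results-sol-Voigt-omega-q-inft-plus}) and (\ref{results-sol-Voigt-omega-q-inft-minus}) can be compared numerically with the weak-strain estimate $d\omega_b\varepsilon_{\infty}/[bl^2(1+\varepsilon_{\infty})]$. The sketch below (ours; units $\Omega_e=1$, illustrative parameter values) confirms the estimate to better than one percent for $\omega_b\ll\Omega_e$ and $d\ll bl^2$:

```python
import math

# Units Omega_e = 1; bl2 stands for b l^2. Illustrative values.
eps_inf = 13.0
omega_b = 0.01          # small chiral-shift frequency scale
bl2 = 1.0
d = 0.01 * bl2          # weak strain / thin slab

den = 2 * bl2 * (1 + eps_inf)

def omega_plus_pos_q():   # omega_+(q -> +infinity), upper sign
    s = d - bl2
    return (-omega_b * eps_inf * s
            + math.sqrt((omega_b * eps_inf * s)**2
                        + 4 * bl2**2 * eps_inf * (1 + eps_inf))) / den

def omega_plus_neg_q():   # omega_+(q -> -infinity), upper sign
    s = d + bl2
    return (omega_b * eps_inf * s
            + math.sqrt((omega_b * eps_inf * s)**2
                        + 4 * bl2**2 * eps_inf * (1 + eps_inf))) / den

delta = abs(omega_plus_pos_q() - omega_plus_neg_q())
estimate = d * omega_b * eps_inf / (bl2 * (1 + eps_inf))
assert abs(delta / estimate - 1) < 0.02   # agreement to better than 2%
```

The nonreciprocity indeed grows linearly with $d/(bl^2)$, i.e., with the strain magnitude.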
The nonreciprocity of the surface collective modes is clearly evident at large values of strain quantified by $\beta$ and agrees well with the results in Eqs.~(\ref{results-sol-Voigt-omega-q-inft-plus}) and (\ref{results-sol-Voigt-omega-q-inft-minus}). The nonreciprocity of the SPPs originates from the broken parity-inversion symmetry $z \to -z$ and the Weyl node separation. In the case under consideration, a nonuniform strain breaks this symmetry, leading to the dependence of the frequencies on the sign of the SPP wave vector $q$. It is worth noting that the parity-inversion symmetry could be broken also when the slab of an unstrained Weyl semimetal is surrounded by dielectrics with different dielectric constants (see, e.g., Ref.~\cite{Kotov-Lozovik:2018}). Therefore, while the strain is not equivalent to a nonuniform dielectric constant of the sample, its effect on the SPPs appears to be qualitatively similar. The same analogy might be used to explain the decrease of the frequencies at intermediate $q$. \begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig4a.eps}} \hspace{0.05\textwidth} \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig4b.eps}} \hspace{0.05\textwidth} \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig4c.eps}} \caption{The dispersion relation of the surface plasmon polaritons in a slab of Weyl semimetal for the Voigt configuration at $\beta=0$ (red solid lines), $\beta=0.5$ (blue dashed lines), and $\beta=1$ (green dotted lines). Panels (a), (b), and (c) correspond to two SPP branches $\omega_{-}$ and $\omega_{+}$, as well as the lowest bulk mode $\omega_{\rm B,1}$, respectively. We set $d=2c/\Omega_e$ and $\omega_b=\Omega_e$. } \label{fig:results-Voigt-omega-few-beta} \end{figure} The spatial distribution of the electric field inside the slab is shown in Fig.~\ref{fig:results-Voigt-fields}. The surface localization of the lowest mode is clearly evident from the figure.
On the other hand, the field of the second mode $\omega_{+}$ could become noticeable inside the slab. We checked that the localization becomes much more pronounced in larger samples. It is worth noting also that the change of the spatial dependence of the field distributions from an exponentially localized to an oscillating one can be easily inferred by using Eqs.~(\ref{model-electric-field-ansatz}) and (\ref{results-sol-Voigt-kappaV}) in the case $A_5=0$. Indeed, the parameter $\kappa_V$ is real and positive in the case of surface modes. On the other hand, the mixing with bulk modes leads to an imaginary part of $\kappa_V$. In the strained case, however, one can rely on the spatial profiles of the fields. As one can see from Fig.~\ref{fig:results-Voigt-fields}, there are no purely surface collective modes in the slab because there is always a finite overlap between the surfaces. The localization length, however, depends on the wave vector. Indeed, it is smallest at small wave vectors and tends to increase with $q$. \begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig5a.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig5b.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig5c.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig5d.eps}} \caption{Profiles of the $x$ component of the electric field $E_x$ in the Voigt configuration. Top and bottom panels correspond to two SPP branches $\omega_{-}$ and $\omega_{+}$, respectively. Strain strength is $\beta=0$ in panels (a) and (c) as well as $\beta=1$ in panels (b) and (d). We set $d = 2c/\Omega_{e}$ and $\omega_b = \Omega_e$.} \label{fig:results-Voigt-fields} \end{figure} \subsection{Faraday configuration} \label{sec:results-sol-Faraday} Finally, we consider the Faraday configuration.
It is schematically shown in Fig.~\ref{fig:setup}(c), where we set $\mathbf{q}\parallel\mathbf{b}\parallel\mathbf{A}_5\parallel \hat{\mathbf{x}}$. This axial gauge field corresponds to bending about the $y$ axis, producing $\mathbf{A}_5 \propto u_0b_xz \hat{\mathbf{x}}/d$. By using Eq.~(\ref{results-wave-eq-expl}) and Gauss's law $\bm{\nabla}\cdot \mathbf{D}=0$, we derive the following equation for $E_y$: \begin{eqnarray} \label{results-sol-Faraday-eq} &&\frac{1}{\varepsilon_2\left[1-z/(bl^2)\right]} E_y^{(4)} + \frac{2}{\varepsilon_2 bl^2\left[1-z/(bl^2)\right]^2} E_y^{(3)} +\frac{1}{\varepsilon_1 \varepsilon_2 \left[1-z/(bl^2)\right]^3} \Bigg\{\varepsilon_1 \left[\frac{2}{(bl^2)^2} -(q^2 +\kappa^2) \left(1-\frac{z}{bl^2}\right)^2\right] \nonumber\\ &&+\frac{\omega^2}{c^2} \left(1-\frac{z}{bl^2}\right)^2 \left[(\varepsilon_1-\varepsilon_2) +\frac{z}{bl^2} \varepsilon_2\right] \left(\varepsilon_1 +\varepsilon_2 -\varepsilon_2 \frac{z}{bl^2}\right)\Bigg\}E_y^{\prime \prime} +\frac{\omega^2}{c^2 bl^2}\left[\frac{2\varepsilon_2}{\varepsilon_1} -\frac{c^2\kappa^2}{\varepsilon_2\omega^2 \left[1-z/(bl^2)\right]^2}\right] E_y^{\prime} \nonumber\\ &&-\frac{1}{\varepsilon_2 \left[1-z/(bl^2)\right]^3} \left\{\kappa^2\left[\frac{2}{(bl^2)^2} -q^2\left(1-\frac{z}{bl^2}\right)^2\right] +\frac{\omega^2}{c^2} \kappa^2 \varepsilon_1 \left(1-\frac{z}{bl^2}\right)^2 +\varepsilon_2^2 \frac{\omega^4}{c^4} \left(1-\frac{z}{bl^2}\right)^4 \right\}E_y=0. \end{eqnarray} The $z$ and $x$ components of the electric field are determined by \begin{eqnarray} \label{results-sol-Faraday-Ez} E_z &=& c^2\frac{\kappa^2 E_y - E_y^{\prime \prime}}{i\varepsilon_2\omega^2\left[1-z/(bl^2)\right]},\\ \label{results-sol-Faraday-Ex} E_x &=& -\frac{\varepsilon_1 E_z^{\prime}+ i\varepsilon_2 E_y/(bl^2) -i\varepsilon_2 \left[1-z/(bl^2)\right] E_y^{\prime}}{iq \varepsilon_1}, \end{eqnarray} respectively. The decay constant $\kappa_{F}$ can be obtained analytically at $l \rightarrow \infty$ and $d\to\infty$.
It reads as \begin{equation} \label{results-sol-Faraday-kappa} \kappa_{F}^2 = q^2 + \frac{\omega^2}{c^2}\left(\frac{\varepsilon_2^2}{2\varepsilon_1} - \varepsilon_1\right) \pm \frac{\varepsilon_2 \omega^2}{2c^2 |\varepsilon_1|} \sqrt{\varepsilon_2^2 +\frac{4c^2 q^2 \varepsilon_1}{\omega^2}}. \end{equation} This result agrees with that in Ref.~\cite{Hofmann-DasSarma:2016}. Let us analyze the analytical solutions at small and large wave vectors. The dispersion relation is the same as in the other two configurations (see Secs.~\ref{sec:results-sol-perpendicular} and \ref{sec:results-sol-Voigt}), i.e., $\omega=cq$ at small wave vectors. In the case $q \to \pm\infty$, Eq.~(\ref{results-sol-Faraday-eq}) simplifies to \begin{equation} \label{results-sol-Faraday-eq-large-q} E_y^{(4)} + \frac{2 E_y^{(3)}}{b l^2 - z} -2 q^2 E_y^{\prime \prime} - \frac{2 q^2 E_y^{\prime}}{b l^2 - z} + q^4 E_y = 0. \end{equation} Its general solution is \begin{equation} E_y = C_1 e^{qz} +C_2 e^{-qz} +C_3 e^{qz}z \left[3+qbl^2\left(2-\frac{z}{bl^2}\right)\right] +C_4e^{-qz} z \left[3-qbl^2\left(2-\frac{z}{bl^2}\right)\right]. \end{equation} By using this solution and employing the continuity relations for $\partial_z E_y$ and $\partial_z E_x - i q E_z$ at the surface, we found that $\omega(q\to\pm \infty)$ is given by the same expression as in Eq.~(\ref{results-sol-Perp-omega-q-inf}). Therefore, the corresponding modes are reciprocal even in the presence of deformations and the chiral shift. We present the dispersion relations of the SPPs in Fig.~\ref{fig:results-Faraday-omega-few-beta} at a few values of strain strength quantified by $\beta$. The effects of strains are similar to those in the perpendicular configuration (see Sec.~\ref{sec:results-sol-perpendicular}).
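As a cross-check (ours, not part of the original derivation), the decay constants (\ref{results-sol-Faraday-kappa}) must satisfy the biquadratic characteristic equation obtained by substituting $E_y\propto e^{-\kappa_F z}$ into Eq.~(\ref{results-sol-Faraday-eq}) and dropping all $1/(bl^2)$ terms, i.e., the unstrained semi-infinite limit. Numerically, in units $c=\Omega_e=1$ and with illustrative parameters:

```python
import math

# Illustrative parameters in units c = Omega_e = 1
eps_inf, omega_b = 13.0, 1.0
omega, q = 0.8, 2.0

eps1 = eps_inf * (1.0 - 1.0 / omega**2)   # epsilon(omega) with sigma = 0
eps2 = eps_inf * omega_b / omega
kappa2 = q**2 - eps1 * omega**2           # kappa^2

# Both signs of Eq. (results-sol-Faraday-kappa); kF2 = kappa_F^2
root = math.sqrt(eps2**2 + 4 * q**2 * eps1 / omega**2)
kF2 = [q**2 + omega**2 * (eps2**2 / (2 * eps1) - eps1)
       + s * eps2 * omega**2 / (2 * abs(eps1)) * root for s in (+1, -1)]

# Characteristic equation from Eq. (results-sol-Faraday-eq) at
# 1/(b l^2) -> 0: only even derivatives survive, so with x = kappa_F^2
# one has E'''' -> x^2 E and E'' -> x E.
def characteristic(x):
    c4 = 1.0 / eps2
    c2 = (-eps1 * (q**2 + kappa2)
          + omega**2 * (eps1**2 - eps2**2)) / (eps1 * eps2)
    c0 = -(-kappa2 * q**2 + omega**2 * kappa2 * eps1
           + eps2**2 * omega**4) / eps2
    return c4 * x**2 + c2 * x + c0

for x in kF2:
    assert abs(characteristic(x)) < 1e-8
```

The negative root of $\kappa_F^2$ corresponds to an oscillating (bulk-like) solution, consistent with the discussion of the mode profiles below.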
\begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig6a.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig6b.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.275\textwidth]{Fig6c.eps}} \caption{Dispersion relation of the collective modes in a slab of Weyl semimetal for the Faraday configuration at $\beta=0$ (red solid lines), $\beta=0.5$ (blue dashed lines), and $\beta=1$ (green dotted lines). Panels (a), (b), and (c) correspond to two SPP branches $\omega_{-}$ and $\omega_{+}$, as well as the lowest bulk mode $\omega_{\rm B,1}$, respectively. We set $d=2c/\Omega_e$, and $\omega_b=\Omega_e$. } \label{fig:results-Faraday-omega-few-beta} \end{figure} Finally, let us discuss the profiles of electric field. We present the corresponding results in Fig.~\ref{fig:results-Faraday-fields}. As one can see, the lowest mode is well localized for small wave vectors. Strain, however, changes the surface where the mode is localized. Therefore, the lowest mode could be identified with a surface mode or a short-range surface plasmon~\cite{Tamaya-Kawabata:2019}. A similar effect of strain is also present for the second mode $\omega_{+}$. The field magnitude in the bulk is more pronounced in this case, however. \begin{figure}[t] \centering \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig7a.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig7b.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig7c.eps}} \hspace{0.02\textwidth} \subfigure[]{\includegraphics[height=0.35\textwidth]{Fig7d.eps}} \caption{Profiles of the $x$ component of the electric field $E_x$ in the Faraday configuration. Top and bottom panels correspond to two SPP branches $\omega_{-}$ and $\omega_{+}$, respectively. Strain strength is $\beta=0$ in panels (a) and (c) as well as $\beta=1$ in panels (b) and (d). 
We set $d = 2c/\Omega_{e}$ and $\omega_b = \Omega_e$.} \label{fig:results-Faraday-fields} \end{figure} \section{Summary} \label{sec:Summary} In this study, we investigated the effects of strains on the surface plasmon polaritons in a Weyl semimetal slab. By using a low-energy model of a time-reversal symmetry broken Weyl semimetal, we found that strain provides an effective means to control the nonreciprocity and localization of the SPPs. As in the previous studies, the collective modes strongly depend on the relative orientation of the chiral shift $\mathbf{b}$, the wave vector $\mathbf{q}$ of collective modes, and the surface normal $\hat{\mathbf{n}}$, for which the three main configurations can be identified. They are the perpendicular ($\mathbf{b}\parallel\hat{\mathbf{n}}$), Voigt ($\mathbf{b}\perp\hat{\mathbf{n}}$ and $\mathbf{b}\perp \mathbf{q}$), and Faraday ($\mathbf{b}\parallel\mathbf{q}$) configurations. Applying bending and inhomogeneous stretching generates a coordinate-dependent axial gauge field that does not break the translation invariance along the surface of the slab. For the perpendicular and Faraday configurations, this strain-induced field reduces the frequencies of the collective modes for intermediate values of the wave vector $q$ (there is no dependence on strain at $q\to\pm\infty$) and enhances their localization at the surfaces. Moreover, strain can even change the localization of the SPPs, introducing an asymmetry in their field profiles. The results for the Voigt configuration demonstrate that the strain-induced axial gauge field generated by bending not only reduces the frequency of the modes but also makes the SPPs nonreciprocal even in thin films. The nonreciprocity of the SPPs originates from the separation between the Weyl nodes in momentum space and broken parity-inversion symmetry $z\to -z$ due to a nonuniform strain.
This finding is quite interesting since the nonreciprocity is usually absent in slabs of finite thickness due to the hybridization of the collective modes at different surfaces. The proposed effect could have a direct practical application. Indeed, strain-induced axial gauge fields provide an efficient way to create tunable unidirectional optical devices. Among them, we mention nonreciprocal circulators, nonreciprocal Mach-Zehnder interferometers, and one-way waveguides. Unlike previous proposals, where the thickness of a Weyl semimetal, dielectric constants of surrounding media, and the direction of the chiral shift were used, the nonreciprocity in the proposed setup can be manipulated {\it in situ}. Numerical estimates suggest that the strain-induced effects could be potentially measured for sufficiently high strain magnitude and thin films. Experimentally, strain-induced modifications of SPPs could be realized, for example, in the recently discovered Weyl semimetal EuCd$_2$As$_2$, where only two Weyl nodes separated in momentum space exist in the vicinity of the Fermi level~\cite{Soh-Boothroyd:2019,Ma-Shi:2019}. Finally, let us comment on the nonuniform profile of the chiral shift that is realized at the surface of Weyl semimetals (see Appendix~\ref{sec:chiral-shift}). Contrary to external strains, where the chiral shift profile is asymmetric inside the slab, a symmetric profile reduces the localization of the surface collective modes. While the Weyl node separation is always nonuniform in finite samples of Weyl semimetals, the corresponding modification of the anomalous Hall conductivity is estimated to be weak. \begin{acknowledgments} The work of E.V.G. was supported partially by the National Academy of Sciences of Ukraine grants No.~0116U003191 and No.~0120U100858. P.O.S. 
was supported partially by the VILLUM FONDEN via the Centre of Excellence for Dirac Materials (Grant No.~11744), the European Research Council under the European Union's Seventh Framework Program Synergy HERO, and the Knut and Alice Wallenberg Foundation KAW 2018.0104. \end{acknowledgments}
\section{Introduction} \label{S:intro} In this paper we consider the problem of estimating the number of isomorphism classes of principally polarized abelian varieties $(A,\lambda)$ such that $A$ lies in a given isogeny class of simple ordinary abelian varieties over a finite field. We approach this problem by subdividing isogeny classes into their \emph{strata}, which are the subsets of an isogeny class consisting of abelian varieties sharing the same endomorphism ring. (To avoid awkward locutions, we will say that a principally polarized variety $(A,\lambda)$ lies in an isogeny class ${\mathcal C}$ or a stratum ${\mathcal S}$ when $A$ lies in ${\mathcal C}$ or ${\mathcal S}$.) Our main result concerns strata corresponding to endomorphism rings $R$ that are \emph{convenient}. A convenient ring is an order in a CM~field with the properties that, first, $R$ is stable under complex conjugation; second, the real subring $R^{+}$ of $R$ is Gorenstein; and third, the trace dual of $R$ is generated by its pure imaginary elements. (We explain these terms and present results on convenient rings in Section~\ref{S:convenient}.) If ${\mathcal S}$ is a stratum of an isogeny class corresponding to a convenient order~$R$, we can express the number of principally polarized varieties in ${\mathcal S}$ in terms of the sizes of the Picard group of $R$ and the narrow Picard group of the maximal real sub-order $R^{+}$ of~$R$; the definitions of these groups are also reviewed in Section~\ref{S:convenient}. \begin{theorem} \label{T:classgroup} Let ${\mathcal S}$ be a stratum of an isogeny class of simple ordinary abelian varieties over a finite field, corresponding to an endomorphism ring $R$. Suppose that $R$ is convenient and that the norm map $N_{\Pic}$ from the Picard group of $R$ to the narrow Picard group of $R^{+}$ is surjective. Let $U$ be the unit group of $R$ and let $U^{+}_{>0}$ be the group of totally positive units of~$R^{+}$. Then the number of varieties $A\in{\mathcal S}$ that have principal polarizations is equal to $\#\ker N_{\Pic}$, and each such $A$ has $[U^{+}_{> 0}: N(U)]$ principal polarizations up to isomorphism, where $N$ is the norm map from $R$ to~$R^{+}$.
\end{theorem} \begin{corollary} \label{C:classgroup} Under the hypotheses of Theorem~\textup{\ref{T:classgroup}}, the total number of principally polarized varieties $(A,\lambda)$ in the stratum ${\mathcal S}$, counted up to isomorphism, is equal to \[\frac{1}{[N(U) : (U^{+})^2]} \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic R^{+}},\] where $U$ is the unit group of $R$ and $U^{+}$ is the unit group of~$R^{+}$. Furthermore, the index $[N(U) : (U^{+})^2]$ is equal to either $1$ or $2$, and is equal to $1$ if $K/K^{+}$ is ramified at an odd prime. \end{corollary} In Section~\ref{S:PPAVs} we prove these two results and give some reasonably weak sufficient condition for $N_{\Pic}$ to be surjective. Special cases of these results are known already; in the most fundamental case, when $R$ is a maximal order, these results can be obtained from the work of Shimura and Taniyama~\cite[\S~14]{ShimuraTaniyama1961}, combined with the theory of canonical lifts. Other examples occur, for instance, in~\cite[\S~8]{LenstraPilaEtAl2002}, \cite[Proposition~2, p.~583]{Howe2004}, and~\cite[Lemma~19]{IonicaThome2020}. But none of the previous results we are aware of apply as generally as Theorem~\ref{T:classgroup} and Corollary~\ref{C:classgroup}. To every $n$-dimensional abelian variety $A$ over ${\mathbf F}_q$ one associates its characteristic polynomial of Frobenius~$f_A$, sometimes called the \emph{Weil polynomial} of $A$. This is a polynomial of degree~$2n$, whose multiset of complex roots can be written in the form \[ \left\{ \sqrt{q} e^{\pm i \theta_j}\right\}_{j=1}^n \] for an $n$-tuple $s_A = (\theta_1,\ldots,\theta_n)$ of real numbers, the \emph{Frobenius angles} of $A$, normalized so that \begin{equation} \label{EQ:normal} 0 \le \theta_1 \le \theta_2 \le \cdots \le \theta_n \le \pi. \end{equation} The theorem of Honda and Tate~\cite[Th\'eor\`eme 1, p.~96]{Tate1971} gives a complete description of the set of Weil polynomials. 
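The normalization of the Frobenius angles can be made concrete in the simplest case. The following Python sketch (our addition; the trace $t=3$ and field size $q=7$ are arbitrary illustrative choices) recovers the single Frobenius angle of an ordinary elliptic-curve isogeny class from its Weil polynomial $x^2 - tx + q$.

```python
import cmath
import math

# Illustration (our addition; t = 3, q = 7 are arbitrary choices): the
# Weil polynomial of an ordinary elliptic-curve isogeny class over F_q
# with trace t is f(x) = x^2 - t x + q.  Both roots have absolute value
# sqrt(q), and the Frobenius angle theta_1 is the argument of a root.
q, t = 7, 3
root = (t + cmath.sqrt(t * t - 4 * q)) / 2   # one root; the other is its conjugate
assert abs(abs(root) - math.sqrt(q)) < 1e-12  # |pi| = sqrt(q)
theta1 = cmath.phase(root)
assert 0 < theta1 < math.pi                   # strict inequalities: ordinary class
# The multiset of roots is { sqrt(q) e^{+i theta_1}, sqrt(q) e^{-i theta_1} }:
assert abs(root - math.sqrt(q) * cmath.exp(1j * theta1)) < 1e-12
```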
In particular, Tate showed that two abelian varieties over ${\mathbf F}_q$ are isogenous if and only if they share the same Weil polynomial~\cite{Tate1966}, so it makes sense to speak of the Weil polynomial of an isogeny class. We see that an isogeny class of abelian varieties over a finite field is determined by its Weil polynomial, by the multiset of roots of its Weil polynomial, and by the multiset of its Frobenius angles. For simple ordinary isogeny classes, all of the inequalities in Equation~\eqref{EQ:normal} are strict. We will see (Corollary~\ref{C:minimalR}) that the ring $R$ generated over ${\mathbf Z}$ by the Frobenius and Verschiebung of a simple ordinary abelian variety $A$ is convenient. We call this ring the \emph{minimal ring} of the isogeny class of~$A$, because every endomorphism ring of a variety in ${\mathcal C}$ contains~$R$, and there are varieties in ${\mathcal C}$ with endomorphism ring equal to~$R$~\cite[Theorem 6.1, pp.~550--551]{Waterhouse1969}. We call the corresponding stratum the \emph{minimal stratum} of the isogeny class. Using results of Louboutin, we can (somewhat crudely) estimate the number of principally polarized varieties in the minimal stratum in terms of the Frobenius angles of the isogeny class. Our theorem uses the following notation: If $\{a_m\}$ and $\{b_m\}$ are two infinite sequences of positive real numbers indexed by integers~$m$, we write $a_m\triplesim b_m$ to mean that for every $\varepsilon >0$ there are positive constants $r$ and $s$ such that $b_m \le r a_m^{1+\varepsilon}$ and $a_m \le s b_m^{1+\varepsilon}$ for all~$m$. \begin{theorem} \label{T:sequence} Fix an integer $n>0$. For each positive integer $m$, let ${\mathcal C}_m$ be an isogeny class of simple $n$-dimensional ordinary abelian varieties over a finite field ${\mathbf F}_{q_m}$, and let $R_m$ and ${\mathcal S}_m$ be the minimal ring and minimal stratum for ${\mathcal C}_m$. 
For each $m$, let $\{\theta_{m,i}\}_{i=1}^n$ be the Frobenius angles for~${\mathcal C}_m$. Let $P_m$ be the number of principally polarized varieties in~${\mathcal S}_m$. If $q_m\to\infty$ and if each norm map $\Pic R_m \to \Picplus R^{+}_m$ is surjective, then \[ P_m \triplesim q_m^{n(n+1)/4} \prod_{i<j} (\cos \theta_{m,i} - \cos \theta_{m,j}) \prod_{i} \sin \theta_{m,i}. \] \end{theorem} The relation indicated by the $\triplesim$ symbol is a \emph{very} rough comparison of magnitudes, and indeed, if there is an $\varepsilon$ such that $\lvert \theta_{m,i} - \theta_{m,j}\rvert > \varepsilon$ and $\lvert\sin \theta_{m,i}\rvert>\varepsilon$ for all $m$, $i$, and $j$, then the conclusion of the theorem is equivalent to saying simply that $P_m \triplesim q_m^{n(n+1)/4}$. However, if the Frobenius angles of the sequence of isogeny classes do \emph{not} stay a bounded distance from one another and from $0$ and $\pi$, then the trigonometric factors on the right hand side of the relation do make a difference. We will see examples of this in Section~\ref{S:examples}. The trigonometric factors in Theorem~\ref{T:sequence} may have only a tenuous influence on the asymptotic predictions of the theorem, but they provided a key motivation for this work. To explain this, let us consider another approach toward estimating the number of principally polarized abelian varieties in an isogeny class, an approach that considers the question in terms of limiting distributions. It is well-known that for a fixed positive integer~$n$, the number of principally polarized $n$-dimensional abelian varieties over a finite field ${\mathbf F}_q$ grows like \[2q^{n(n+1)/2}\] as $q\to\infty$, in the sense that the ratio between the two quantities tends to~$1$; this follows simply from the existence of an irreducible coarse moduli space for these abelian varieties, together with the fact that generically a principally polarized abelian variety over a finite field has two twists. 
On the other hand, the number of isogeny classes of $n$-dimensional abelian varieties over~${\mathbf F}_q$ grows like \begin{equation} \label{EQ:isogenyclasses} v_n \frac{\varphi(q)}{q} q^{n(n+1)/4} \end{equation} as $q\to\infty$, where $\varphi$ is Euler's totient function and where \begin{equation} \label{EQ:vn} v_n = \frac{2^n}{n!}\, \prod_{j=1}^n \left(\frac{2j}{2j-1}\right)^{n + 1 - j} \end{equation} (see~\cite[Theorem~1.1, p.~427]{DiPippoHowe1998}). It follows that the average number of principally polarized varieties per isogeny class is \[ \frac{2q}{v_n\varphi(q)} q^{n(n+1)/4}. \] But there is finer information available. To explain this, we require some notation. Let $S_n$ be the space of all $n$-tuples $(\theta_j)$ of real numbers satisfying~\eqref{EQ:normal}. There is a map from $\USp_{2n}$ to $S_n$ that sends a symplectic matrix $M$ to the multiset of the arguments of the eigenvalues of~$M$. Haar measure on $\USp_{2n}$ gives rise to a measure $\mu_n$ on~$S_n$; this measure is determined by \begin{equation} \label{EQ:AV_measure} d\/\mu_n = c_n \prod_{i<j} (\cos \theta_i - \cos \theta_j)^2 \prod_{i} \sin^2 \theta_i \, d\theta_1\,\cdots\, d\theta_n, \end{equation} where $c_n = 2^{n^2}/\pi^n$, so that $\mu_n(S_n) = 1$. We also get a measure on $S_n$ from the principally polarized $n$-dimensional abelian varieties over ${\mathbf F}_q$: For every open set $U$ of $S_n$, we set \[ \mu_{n,q}(U) = c_{n,q} \cdot \#\{\text{principally polarized $(A,\lambda)$ such that $s_A\in U$}\}, \] where $1/c_{n,q}$ is the total number of principally polarized $n$-dimensional abelian varieties $(A,\lambda)$ over~${\mathbf F}_q$, so that $\mu_{n,q}(S_n) = 1$. Katz and Sarnak \cite[Theorem~11.3.10, p.~330]{KatzSarnak1999} proved the following: \begin{theorem}[Katz--Sarnak] \label{T:KatzSarnak} Fix a positive integer $n$. As $q\to\infty$ over the prime powers, the measures $\mu_{n,q}$ converge in measure to $\mu_n$.
\end{theorem} By considering isogeny classes ${\mathcal C}$ of $n$-dimensional abelian varieties, we get another family of measures. Given any isogeny class ${\mathcal C}$, we let $s_{\mathcal C}$ be the $n$-tuple $s_A$ for any $A$ in ${\mathcal C}$. Given a prime power~$q$, we define a measure $\nu_{n,q}$ on $S_n$ by setting \[ \nu_{n,q}(U) = d_{n,q} \cdot \#\{\text{isogeny classes ${\mathcal C}$ such that $s_{\mathcal C}\in U$}\}, \] where $1/d_{n,q}$ is the total number of isogeny classes of $n$-dimensional abelian varieties over~${\mathbf F}_q$, so that $\nu_{n,q}(S_n) = 1$. Vl\u adu\c t~\cite[Theorem~A, p.~128]{Vladut2001} proved that the $\nu_{n,q}$ have a limiting distribution as well: \begin{theorem}[Vl\u adu\c t] \label{T:Vladut} Fix a positive integer $n$. As $q\to\infty$ over the prime powers, the measures $\nu_{n,q}$ converge in measure to the measure $\nu_n$ defined by \begin{equation} \label{EQ:isogeny_measure} d\/\nu_n = d_n \prod_{i<j} (\cos \theta_i - \cos \theta_j) \prod_{i} \sin \theta_i \, d\theta_1\,\cdots\, d\theta_n, \end{equation} where \[ d_n = \frac{1}{v_n\pi^n} = \frac{n!}{(2\pi)^n}\, \prod_{j=1}^n \left(\frac{2j-1}{2j}\right)^{n + 1 - j}. \] \end{theorem} Consider what this means for a region $U\subset S_n$ contained within a small disk around an $n$-tuple $(\alpha_i)$, where we assume that the $\alpha_i$ are distinct and that none of them is equal to $0$ or~$\pi$. Suppose $U$ has volume~$u$, with respect to the measure $d\theta_1\,\cdots\, d\theta_n$. For large $q$, the number of isogeny classes with Frobenius angles in $U$ is $\nu_{n,q}(U) / d_{n,q} $, and using Equations~\eqref{EQ:isogenyclasses} and~\eqref{EQ:vn} we see that this is roughly equal to \begin{align*} \frac{1}{d_{n,q}} \nu_n(U) &\approx \frac{d_n}{d_{n,q}} u \prod_{i<j} (\cos \alpha_i - \cos \alpha_j) \prod_{i} \sin \alpha_i\\ &\approx \pi^n u \frac{\varphi(q)}{q} q^{n(n+1)/4} \prod_{i<j} (\cos \alpha_i - \cos \alpha_j) \prod_{i} \sin \alpha_i. 
\end{align*} On the other hand, the number of principally polarized abelian varieties with Frobenius angles in $U$ is $\mu_{n,q}(U) / c_{n,q} $, which is roughly \[ 2q^{n(n+1)/2} \mu_n(U) \approx 2q^{n(n+1)/2} \frac{2^{n^2}}{\pi^n} u \prod_{i<j} (\cos \alpha_i - \cos \alpha_j)^2 \prod_{i} \sin^2 \alpha_i. \] Therefore, for the isogeny classes with Frobenius angles in $U$, the average number of principally polarized varieties per isogeny class is roughly \begin{equation} \label{EQ:average} \frac{2^{n^2 + 1}}{\pi^{2n}} \frac{q}{\varphi(q)} q^{n(n+1)/4} \prod_{i<j} (\cos \alpha_i - \cos \alpha_j) \prod_{i} \sin \alpha_i. \end{equation} Conversely, estimates for the number of principally polarized varieties in a given isogeny class --- estimates like our Theorem~\ref{T:sequence} --- can be combined with Vl\u adu\c t's result to give a heuristic explanation of the Katz--Sarnak theorem. This line of reasoning was the initial motivation that led to the present work. It is especially suggestive that the trigonometric factors in the expression~\eqref{EQ:average} match those that appear in Theorem~\ref{T:sequence}. A special case of this type of heuristic argument, which is perhaps familiar to some readers, concerns elliptic curves. The case $n=1$ of Theorem~\ref{T:KatzSarnak} was proven by Birch~\cite{Birch1968}. To every elliptic curve $E/{\mathbf F}_q$ we can associate its trace of Frobenius~$t$, which lies in the interval $[-2\sqrt{q},2\sqrt{q}]$. Dividing the trace by $2\sqrt{q}$, we get a \emph{normalized trace} that lies in the interval $[-1,1]$. For each $q$ we can consider the counting measure on $[-1,1]$ that tells us what fraction of the elliptic curves over ${\mathbf F}_q$ have their normalized traces lying in a given set.
Birch proved that these counting measures converge in measure to the `semicircular' measure, that is, the measure associated to the differential $(2/\pi) \sqrt{1-x^2} \, dx.$ (This is equivalent to the measure $\mu_1 = (2/\pi) \sin^2\theta \, d\theta$ on $S_1$, since $x = \cos \theta$.) Now, if $t$ is an integer in the interval $[-2\sqrt{q},2\sqrt{q}]$ and if $(t,q) = 1$, then the number of elliptic curves over ${\mathbf F}_q$ with trace $t$ is $H(t^2 - 4q)$, where $H$ denotes the Kronecker class number. But $H(-n)$ grows roughly as~$\sqrt{n}$; more precisely, for every $\varepsilon>0$ there are positive constants $c$ and $d$ such that \[ c n^{1/2-\varepsilon} < H(-n) < d n^{1/2+\varepsilon} \] for all positive $n\equiv 0,3\bmod 4$, so that $H(-n)\triplesim n^{1/2}$ for these $n$ (compare~\cite[Proposition~1.8, p.~656]{Lenstra1987}), and the average value of $H(-n)/\sqrt{n}$ for discriminants $n$ in quite small intervals is $\pi/6$ (see~\cite[Theorem~2, p.~722]{Bykovskii1997}). Thus it seems reasonable to expect that the number of elliptic curves over ${\mathbf F}_q$ with trace $t$ will be about $c\sqrt{4q - t^2}$ on average, for some constant $c$. Scaling this down, we find that for any $x$ in $[-1,1]$, we expect there to be about $c'\sqrt{1-x^2}\,\Delta x$ elliptic curves over ${\mathbf F}_q$ having scaled traces in a small interval of size $\Delta x$ near $x$. As $q$ increases the constant $c'$ will have to tend to $2/\pi$, and we find that we are led to believe that the counting measures should converge to the semi-circular measure. (Gekeler~\cite{Gekeler2003} shows how the crude approximation that ``$H(n)$ grows like $\sqrt{n}$'' can be modified with local factors in order to make this interpretation of Birch's result more rigorous, at least in the case of finite prime fields. Achter and Gordon~\cite{AchterGordon2017} provide an alternate explanation for Gekeler's work, and extend it to arbitrary finite fields.) 
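This class-number heuristic can be watched in action with a brute-force count (our addition; the prime $p=13$ is an arbitrary small choice). The script tallies Weierstrass equations $y^2 = x^3 + ax + b$ over ${\mathbf F}_p$ by trace of Frobenius; each isomorphism class contributes roughly $(p-1)/2$ equations under the scaling $(a,b)\mapsto(u^4a, u^6b)$, so these tallies are proportional to the weighted class numbers discussed above.

```python
import math
from collections import Counter

# Brute-force illustration (our addition; p = 13 chosen small for speed):
# tally Weierstrass equations y^2 = x^3 + a x + b over F_p by the trace
# t = p + 1 - #E(F_p).  The tallies are proportional to the weighted
# Kronecker class numbers H(t^2 - 4p) when gcd(t, p) = 1.
p = 13

def legendre(v):
    """Legendre symbol (v/p) via Euler's criterion."""
    if v % p == 0:
        return 0
    return 1 if pow(v, (p - 1) // 2, p) == 1 else -1

counts = Counter()
for a in range(p):
    for b in range(p):
        if (4 * a**3 + 27 * b**2) % p == 0:
            continue  # zero discriminant: singular, not an elliptic curve
        # #E(F_p) = p + 1 + sum_x chi(x^3 + a x + b), so t = -sum_x chi(...)
        t = -sum(legendre(x**3 + a * x + b) for x in range(p))
        counts[t] += 1

assert sum(counts.values()) == p * p - p                # exactly p singular pairs
assert all(abs(t) <= 2 * math.sqrt(p) for t in counts)  # Hasse bound
assert all(counts[t] == counts[-t] for t in counts)     # quadratic twisting
```

The last assertion reflects the fact that twisting by a nonsquare negates the trace and permutes the equations, so the distribution is symmetric about $t=0$.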
Unfortunately, there seems to be little hope of turning this heuristic argument into an actual proof of Theorem~\ref{T:KatzSarnak}. We find it interesting, nevertheless, that the trigonometric factors in the measure given by Equation~\eqref{EQ:AV_measure} get split evenly between the measure defined by Equation~\eqref{EQ:isogeny_measure} and the approximation in Theorem~\ref{T:sequence}. The structure of this paper is as follows: In Section~\ref{S:convenient} we explore the properties of convenient orders, give some examples, and define a norm map from the invertible ideals of a convenient order to the invertible ideals of its real suborder. In Section~\ref{S:isogenyclasses} we look at convenient orders related to isogeny classes of abelian varieties. In Section~\ref{S:PPAVs} we use Deligne's equivalence~\cite{Deligne1969} between the category of ordinary abelian varieties over a finite field and the category of Deligne modules~\cite{Howe1995} to prove Theorem~\ref{T:classgroup} and Corollary~\ref{C:classgroup}. In Section~\ref{S:discriminants} we review a theorem of Louboutin~\cite{Louboutin2006} on minus class numbers of CM~fields and extend it to apply to convenient orders. We apply the theorem to the minimal orders of isogeny classes and obtain Theorem~\ref{T:sequence}. In Section~\ref{S:warnings} we give some examples that show that while the \emph{average} number of principally polarized varieties in a given isogeny class is given by Equation~\eqref{EQ:average}, there are isogeny classes for which this number is significantly larger than the average value. Finally, in Section~\ref{S:examples} we give examples showing that the trigonometric terms in Theorem~\ref{T:sequence} are necessary. 
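As a numerical sanity check on the normalization constant $c_n = 2^{n^2}/\pi^n$ in Equation~\eqref{EQ:AV_measure} (our addition, not part of the argument), the following Python sketch integrates $d\mu_n$ over the ordered region $0\le\theta_1\le\cdots\le\theta_n\le\pi$ by the midpoint rule for $n=1$ and $n=2$ and confirms that the total mass is $1$.

```python
import math

# Midpoint-rule check (our addition) that c_n = 2^{n^2}/pi^n normalizes
# d mu_n = c_n prod_{i<j}(cos t_i - cos t_j)^2 prod_i sin^2 t_i dt_1...dt_n
# to total mass 1 on the ordered region, for n = 1 and n = 2.

def mu1_total(steps=20000):
    c1 = 2 / math.pi                      # c_1 = 2^1 / pi
    h = math.pi / steps
    return sum(c1 * math.sin((k + 0.5) * h) ** 2 * h for k in range(steps))

def mu2_total(steps=600):
    c2 = 16 / math.pi**2                  # c_2 = 2^4 / pi^2
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t1 = (i + 0.5) * h
        for j in range(i, steps):         # ordered region: theta_1 <= theta_2
            t2 = (j + 0.5) * h
            total += (c2 * (math.cos(t1) - math.cos(t2)) ** 2
                      * math.sin(t1) ** 2 * math.sin(t2) ** 2) * h * h
    return total

assert abs(mu1_total() - 1.0) < 1e-6
assert abs(mu2_total() - 1.0) < 1e-2
```

For $n=2$ the exact ordered integral of the trigonometric factors is $\pi^2/16$, so $c_2 = 16/\pi^2$ indeed gives mass $1$; the quadrature merely confirms this to the stated tolerance.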
\section*{Acknowledgments} The basic ideas in this paper first appeared in an email~\cite{Howe2000} written by the author to Nick Katz in the year 2000, the contents of which have been shared with a number of researchers over the years and presented in several conference and seminar talks. (This letter is reproduced as an appendix to this paper.) The author is grateful to Jeff Achter for his continued interest in this work and for his encouragement to write it down more formally. Jeff Achter and Stefano Marseglia had helpful comments on an early draft of this paper, and Marseglia suggested Proposition~\ref{P:Gorenstein}, which allowed for a simplification in the definition of convenient orders; the author thanks them both for their help and kindness. \section{Convenient orders} \label{S:convenient} In this section we define convenient orders and prove some results about them. The definition involves the concept of \emph{Gorenstein rings}. Most of what we will need to know about Gorenstein rings can be found in the paper of Picavet-L'Hermitte~\cite{PicavetLHermitte1987}. In particular, we will use the following facts: \begin{enumerate} \item An order $R$ in a number field $K$ is Gorenstein if and only if its trace dual is invertible as a fractional $R$-ideal~\cite[Proposition~4, p.~20]{PicavetLHermitte1987}; here the \emph{trace dual} $R^\dagger$ of $R$ is the set of elements $x\in K$ such that ${\Tr_{K/{\mathbf Q}}(xR)\subseteq{\mathbf Z}}$. \item An order $R$ in a number field $K$ is Gorenstein if and only if every fractional $R$-ideal ${\mathfrak A}$ with $\End{\mathfrak A} = R$ is invertible~\cite[Proposition~4, p.~20]{PicavetLHermitte1987}. \item A ring that is a complete intersection over ${\mathbf Z}$ is Gorenstein~\cite[Theorem~21.3, p.~171]{Matsumura1986}, and in particular every monogenic order ${\mathbf Z}[\alpha]$ is Gorenstein. \end{enumerate} Let $K$ be a CM~field, that is, a totally imaginary quadratic extension of a totally real number field $K^{+}$. 
We refer to the nontrivial involution $x\mapsto \bar{x}$ of $K/K^{+}$ as \emph{complex conjugation}, and we say that an element of $K$ is \emph{pure imaginary} if it is negated by complex conjugation. \begin{definition} \label{D:convenient} We call an order $R$ in $K$ \emph{convenient} if it satisfies the following properties\textup{:} \begin{enumerate} \item \label{conv1} $R$ is stable under complex conjugation\textup{;} \item \label{conv2} the order $R^{+} := R\cap K^{+}$ of $K^{+}$ is Gorenstein\textup{;} \item \label{conv3} the trace dual $R^\dagger$ of $R$ is generated \textup(as a fractional $R$-ideal\textup) by its pure imaginary elements. \end{enumerate} We call the ring $R^{+}$ from property~\eqref{conv2} the \emph{real subring} of~$R$. \end{definition} \begin{proposition} \label{P:Gorenstein} Every convenient order is Gorenstein. \end{proposition} \begin{proof} Let $\iota$ be a nonzero pure imaginary element of $R$. By assumption $R^\dagger$ is generated by its pure imaginary elements, so $\iota^{-1} R^\dagger = (\iota R)^\dagger$ is generated by its totally real elements. Since clearly $(\iota R : \iota R) = R$ we have $((\iota R)^\dagger : (\iota R)^\dagger) = R$. Let $I$ be the fractional $R^{+}$-ideal $(\iota R)^\dagger\cap K^{+}$. Then $IR = (\iota R)^\dagger$ because $(\iota R)^\dagger$ is generated by its totally real elements, and since $((\iota R)^\dagger : (\iota R)^\dagger) = R$ we must have $(I: I) = R^{+}$. By assumption, $R^{+}$ is Gorenstein, and therefore $I$ is invertible, so there is a fractional $R^{+}$-ideal $J$ with $IJ = R^{+}$. Then we have $(IR)(JR) = R$, so $IR = (\iota R)^\dagger$ is an invertible fractional $R$-ideal. It follows that $R^\dagger$ is invertible, so $R$ is Gorenstein. \end{proof} \begin{proposition} \label{P:maximal} The maximal order ${\mathcal O}$ of $K$ is convenient.
\end{proposition} \begin{proof} The maximal order ${\mathcal O}$ clearly is stable under complex conjugation, and the real suborder $\calO^+$ is the maximal order of $K^{+}$. Maximal orders are Gorenstein (because their trace duals are invertible), so ${\mathcal O}$ satisfies the first two conditions of Definition~\ref{D:convenient}. For the third, we note that the trace dual of ${\mathcal O}$ is generated by pure imaginary elements if and only if its inverse --- the different of ${\mathcal O}$ --- is generated by pure imaginary elements. Now, the different of ${\mathcal O}$ is the product of the different of $\calO^+$ and the relative different of ${\mathcal O}$ over $\calO^+$, so it suffices to show that the relative different can be generated by pure imaginary elements. We know (\cite[Theorem~4.16, p.~151]{Narkiewicz2004}, \cite[Theorem~2.5, p.~198]{Neukirch1999}) that the relative different is generated by the elements $f_\alpha'(\alpha)$ for all $\alpha\in{\mathcal O}\setminus\calO^+$, where $f_\alpha$ is the minimal polynomial of $\alpha$ over $K^{+}$ and $f_\alpha'$ is the derivative of~$f_\alpha$. But $f_\alpha'(\alpha)$ is equal to $\alpha-\bar{\alpha}$, which is clearly pure imaginary. \end{proof} \begin{proposition} \label{P:convenient} Let $R$ be an order in $K$ that is stable under complex conjugation and whose real subring $R^{+}$ is Gorenstein. If there are elements $\alpha$ and $\beta$ of $K$ and invertible fractional ideals ${\mathfrak A}$ and ${\mathfrak B}$ of $R^{+}$ such that $R = {\mathfrak A}\alpha \oplus {\mathfrak B}\beta$, then $R$ is convenient. \end{proposition} \begin{proof} By hypothesis, $R$ satisfies the first two conditions in Definition~\ref{D:convenient}, so we need only check the third. The trace dual of $R$ is the product of the trace dual of $R^{+}$ with the relative trace dual ${\mathfrak D}$ of $R$ over $R^{+}$.
The trace dual of $R^{+}$ is obviously generated by totally real elements, so we just need to check that ${\mathfrak D}$ is generated by pure imaginary elements. Set \[ \alpha^* = \frac{\bar{\beta} }{\alpha\bar{\beta} - \bar{\alpha}\beta} \text{\quad and \quad} \beta^* = \frac{\bar{\alpha}}{\beta\bar{\alpha} - \bar{\beta}\alpha}. \] We note that \[ \Tr_{K/K^{+}} \alpha\alpha^* = \Tr_{K/K^{+}} \beta\beta^* = 1 \text{\quad and \quad} \Tr_{K/K^{+}} \alpha\beta^* = \Tr_{K/K^{+}} \beta\alpha^* = 0, \] so the relative trace dual ${\mathfrak D}$ of $R$ is \begin{align*} {\mathfrak D} &= {\mathfrak A}^{-1}\alpha^* + {\mathfrak B}^{-1}\beta^*\\ \intertext{and we have} (\alpha\bar{\beta} - \bar{\alpha}\beta){\mathfrak A}{\mathfrak B}{\mathfrak D} &= {\mathfrak A}\alpha+ {\mathfrak B}\beta = R. \end{align*} Since ${\mathfrak A}$ and ${\mathfrak B}$ are fractional $R^{+}$-ideals and $\alpha\bar{\beta} - \bar{\alpha}\beta$ is pure imaginary, we see that ${\mathfrak D}$ is generated as a fractional $R$-ideal by pure imaginary elements. \end{proof} We close this section by showing that there is a natural norm map from the invertible ideals of a convenient ring to the invertible ideals of its real subring. We prove the statement in a more general context. \begin{lemma} Let $L/K$ be a quadratic extension of number fields, with nontrivial involution $x\mapsto \bar{x}$. Let $S$ be an order of $L$ that is stable under the involution, and let $R = S\cap K$. For every invertible fractional ideal ${\mathfrak B}$ of $S$, there is a unique invertible fractional ideal ${\mathfrak A}$ of $R$ such that ${\mathfrak A}\otimes_{R}S = {\mathfrak B}\bar{\Bid}$. \end{lemma} We call the ideal ${\mathfrak A}$ the \emph{norm} of ${\mathfrak B}$. \begin{proof} Let $\Inv S$ denote the group of invertible fractional ideals of $S$, and for each prime ${\mathfrak q}$ of $S$ let $\Prin S_{\mathfrak q}$ denote the group of principal fractional ideals of the localization $S_{\mathfrak q}$. 
Then the map that sends an ideal to its localizations gives an isomorphism \begin{equation} \label{EQ:invertible} \Inv S \to \bigoplus_{\substack{\text{primes}\\ {\mathfrak q}\, \text{of}\, S}} \Prin S_{\mathfrak q} \end{equation} (see \cite[Proposition~12.6, p.~75]{Neukirch1999}). The inverse isomorphism is given by sending a collection $({\mathfrak B}_{\mathfrak q})_{\mathfrak q}$ of principal fractional ideals (viewed as subsets of~$L$) to their intersection $\bigcap_{\mathfrak q} {\mathfrak B}_{\mathfrak q}$. For each prime ${\mathfrak q}$, we will define the norm map on the image of $\Prin S_{\mathfrak q}$ in $\Inv S$; this will suffice to define the norm on all of~$\Inv S$. Suppose ${\mathfrak q}$ is a prime of $S$, let ${\mathfrak p}$ be the prime of $R$ lying under ${\mathfrak q}$, and suppose ${\mathfrak B}_{\mathfrak q}$ is a principal fractional ideal of $S_{\mathfrak q}$. Let ${\mathfrak B}$ be the invertible ideal of $S$ whose image in the right-hand side of~\eqref{EQ:invertible} is trivial in every component except the ${\mathfrak q}$-th, where it is ${\mathfrak B}_{\mathfrak q}$. Let $b\in L^*$ be an element that generates ${\mathfrak B}_{\mathfrak q}$ as an $S_{\mathfrak q}$-module. Suppose ${\mathfrak q}$ is stable under complex conjugation; then $\bbar$ generates $\bar{\Bid}_{\mathfrak q}$. Let $a = b\bbar\in K^*$. We define $N({\mathfrak B})$ to be the invertible ideal ${\mathfrak A}$ of $R$ whose component at every prime other than ${\mathfrak p}$ is trivial, and whose ${\mathfrak p}$-th component is the principal fractional ideal $aR_{\mathfrak p}$. Clearly ${\mathfrak A} \otimes_{R}S = {\mathfrak B}\bar{\Bid}$. Now suppose ${\mathfrak q}$ is \emph{not} stable under complex conjugation. We claim that there is a $b'\in L$ that generates ${\mathfrak B}_{\mathfrak q}$ and that has the additional property that $b'\in S_{\bar{\qid}}^*$. It will suffice to prove this in the case where $b$ lies in~$S$.
If $b\in S\setminus{\mathfrak q}$ then we may take $b' = 1$, so let us assume that $b\in{\mathfrak q}$. If $b\in S\setminus\bar{\qid}$ then we may take $b' = b$, so let us assume that $b\in\bar{\qid}$ as well. By the Chinese Remainder Theorem we may pick an element $z\in S$ such that $z\in\bar{\qid}$ and $z\equiv 1\bmod {\mathfrak q}$. The radical of $b S_{\bar{\qid}}$ is $\bar{\qid} S_{\bar{\qid}}$, so some power of $z$ lies in $b S_{\bar{\qid}}$; say $z^n \in b S_{\bar{\qid}}$ for some $n\ge 1$. Let $v = b + z^{n+1}$, and let $b' = b/v$. Note that $v/b = 1 + z (z^n/b) \in 1 + \bar{\qid} S_{\bar{\qid}}$ so that $v/b$, and hence $b'$, is an element of $S_{\bar{\qid}}^*$. Note also that $v\equiv 1\bmod {\mathfrak q}$, so $b$ and $b'$ generate the same $S_{\mathfrak q}$ ideal. Thus, this $b'$ meets our requirements. Replace $b$ with $b'$, and once again let $a = b\bbar\in K^*$. We define $N({\mathfrak B})$ to be the invertible ideal ${\mathfrak A}$ of $R$ whose component at every prime other than ${\mathfrak p}$ is trivial, and whose ${\mathfrak p}$-th component is the principal fractional ideal $aR_{\mathfrak p}$. We will show that ${\mathfrak A} \otimes_{R}S = {\mathfrak B}\bar{\Bid}$. Clearly ${\mathfrak A} \otimes_{R}S$ is trivial at every prime of $S$ that does not lie over~${\mathfrak p}$. On the other hand, ${\mathfrak A} \otimes_{R}S_{\mathfrak q}$ is equal to $b\bbar S_{\mathfrak q}$. Since $b$ is a unit in $S_{\bar{\qid}}$, we see that $\bbar$ is a unit in $S_{\mathfrak q}$. Thus, ${\mathfrak A} \otimes_{R} S_{\mathfrak q} = b S_{\mathfrak q}$, so that ${\mathfrak A} \otimes_{R} S$ has the same localization at ${\mathfrak p}$ as does the product ${\mathfrak B}\bar{\Bid}$. Likewise, ${\mathfrak A} \otimes_{R}S$ and ${\mathfrak B}\bar{\Bid}$ have the same localization at~$\bar{\qid}$. Since the two ideals also have the same (trivial) localizations at all primes other than ${\mathfrak q}$ and $\bar{\qid}$, they must be equal. 
We see that we can define the ideal norm on a set of ideals that generate $\Inv S$, so we can define the norm on all of $\Inv S$. We also see that the norm is unique: If there were two distinct invertible ideals of $R$ that lifted to ${\mathfrak B}\bar{\Bid}$, then their quotient would be a nontrivial ideal ${\mathfrak E}$ such that ${\mathfrak E}\otimes_{R}S = S$. We would then have ${\mathfrak E}\subseteq R$, and also that ${\mathfrak E}$ contains a unit of~$S$. This unit would then also be a unit of $R$, so ${\mathfrak E} = R$, contradicting the nontriviality of ${\mathfrak E}$. \end{proof} \begin{remark} Let $R$ be a convenient order. Recall that the \emph{Picard group} $\Pic R$ of $R$ is the group of isomorphism classes of invertible $R$-ideals. The \emph{narrow Picard group} $\Picplus R^{+}$ of the real order $R^{+}$ is the group of strict isomorphism classes of invertible $R^{+}$-ideals, where two invertible $R^{+}$-ideals ${\mathfrak A}$ and ${\mathfrak B}$ are said to be \emph{strictly isomorphic} if there is a totally positive element $x$ of $K^{+}$ such that $x{\mathfrak A} = {\mathfrak B}$. The norm map on invertible ideals gives us a homomorphism $N_{\Pic}$ from $\Pic R$ to $\Picplus R^{+}$, which we continue to call the norm. \end{remark} \section{Isogeny classes and convenient orders} \label{S:isogenyclasses} In this section we show that some rings associated to a simple ordinary isogeny class of abelian varieties over a finite field are convenient. Suppose that ${\mathcal C}$ is an isogeny class of simple $n$-dimensional ordinary abelian varieties over a finite field $k$ with $q$ elements, and let $f$ be its Weil polynomial. Honda--Tate theory shows that $f$ has degree $2n$ and is irreducible, and that the number field $K$ defined by $f$ is a CM~field. Let $K^{+}$ be the maximal real subfield of $K$, and let $\pi$ be a root of $f$ in $K$. \begin{proposition} \label{P:examples} Let $B$ be a Gorenstein order in $K^{+}$ that contains $\pi+\bar{\pi}$. 
Then the ring $R = B[\pi]$ is convenient. \end{proposition} \begin{proof} The minimal polynomial of $\pi$ over $K^{+}$ is $x^2 - (\pi+\bar{\pi})x + q$, so $1$ and $\pi$ form a basis for $R$ as a $B$-module. It follows easily that $R$ is stable under complex conjugation and that $R^{+} = B$. The result follows from Proposition~\ref{P:convenient}. \end{proof} \begin{corollary} \label{C:minimalR} The order $R = {\mathbf Z}[\pi,\bar{\pi}]$ of $K$ is convenient. \end{corollary} \begin{proof} It is not hard to show that one basis for $R$ as a ${\mathbf Z}$-module is \[ \{1, \pi, \bar{\pi}, \pi^2, \bar{\pi}^2, \ldots, \pi^{n-1}, \bar{\pi}^{n-1}, \pi^n \}, \] and from this one sees that $R^{+} = {\mathbf Z}[\pi+\bar{\pi}]$. The ring $R^{+}$ is Gorenstein because it is integral over ${\mathbf Z}$ and monogenic. The corollary follows from Proposition~\ref{P:examples}. \end{proof} \begin{example} \label{EX:inconvenient} Here we give an example that shows that an order can satisfy conditions~\eqref{conv1} and~\eqref{conv2} of Definition~\ref{D:convenient}, without also satisfying condition~\eqref{conv3}. Let $p = 19$ and let $f$ be the ordinary irreducible Weil polynomial $x^4 - 4x^3 + 10x^2 - 4px + p^2$, corresponding to an isogeny class of abelian surfaces over~${\mathbf F}_p$. One checks that $\pi+\bar{\pi} = 2 + 4\sqrt{2}$ for a choice of $\sqrt{2}$ in $K$. Let $R$ be the ${\mathbf Z}$-module generated by \[ 1,\quad 2\sqrt{2},\quad \frac{\pi-\bar{\pi}}{2},\text{\quad and \quad} \frac{(\pi-\bar{\pi})\sqrt{2}}{2}. \] Using the fact that $(\pi-\bar{\pi})^2/4 = -10 + 4\sqrt{2}$, we see that $R$ is closed under multiplication and is therefore a ring. We will show that $R$ satisfies the first two conditions for being convenient, but not the third, showing that the third is not a consequence of the first two. It is clear that $R$ is stable under complex conjugation, and that $R^{+} = {\mathbf Z}[2\sqrt{2}]$. The ring $R^{+}$ is Gorenstein because it is monogenic. 
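The arithmetic underlying this example is easy to double-check numerically. The sketch below is our own verification: for $p = 19$ the real Weil polynomial of $f$ is $g(y) = y^2 - 4y + (10 - 2p)$, whose roots are the values of $\pi+\bar{\pi}$, and $(\pi-\bar{\pi})^2 = (\pi+\bar{\pi})^2 - 4p$ since $\pi\bar{\pi} = p$.

```python
# Numerical sanity check (ours) for this example: with p = 19 and
# f = x^4 - 4x^3 + 10x^2 - 4px + p^2, the real Weil polynomial is
# g(y) = y^2 - 4y + (10 - 2p), whose roots are the values pi + pibar.

import math

p = 19
# Roots of g(y) = y^2 - 4y + (10 - 2p): y = 2 +/- sqrt(2p - 6).
traces = [2 + math.sqrt(2*p - 6), 2 - math.sqrt(2*p - 6)]
# Since pi * pibar = p, we have (pi - pibar)^2 = (pi + pibar)^2 - 4p.
quarter_squares = [(t*t - 4*p) / 4 for t in traces]
```

For $p = 19$ this recovers $\pi+\bar{\pi} = 2 \pm 4\sqrt{2}$ and $(\pi-\bar{\pi})^2/4 = -10 \pm 4\sqrt{2}$, matching the identities used above.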
Thus $R$ satisfies conditions~\eqref{conv1} and~\eqref{conv2}. We compute that $R^\dagger$ is the ${\mathbf Z}$-module generated by \[ \frac{1}{4},\quad \frac{\sqrt{2}}{16},\quad \frac{1}{2(\pi-\bar{\pi})},\text{\quad and \quad} \frac{\sqrt{2}}{4(\pi-\bar{\pi})}. \] The pure imaginary elements of $R^\dagger$ are the elements of the ${\mathbf Z}$-module generated by \[\frac{1}{2(\pi-\bar{\pi})}\text{\quad and \quad} \frac{\sqrt{2}}{4(\pi-\bar{\pi})}. \] The $R$-module generated by these elements is spanned as a ${\mathbf Z}$-module by \[ \frac{1}{4},\quad \frac{\sqrt{2}}{8},\quad \frac{1}{2(\pi-\bar{\pi})},\text{\quad and \quad} \frac{\sqrt{2}}{4(\pi-\bar{\pi})}, \] and this has index $2$ in $R^\dagger$. Thus, $R$ does not satisfy condition~\eqref{conv3} of Definition~\ref{D:convenient}, even though it satisfies conditions~\eqref{conv1} and~\eqref{conv2}. (Note however that $R$ is Gorenstein: If we let $\Lambda$ be the ${\mathbf Z}$-module generated by $32-40\sqrt{2}$, $136\sqrt{2}$, $8(\pi-\bar{\pi})$, and $4\sqrt{2}(\pi-\bar{\pi})$, then $R^\dagger \Lambda = R$, so $R^\dagger$ is an invertible fractional $R$-ideal.) \end{example} \section{Principally polarized varieties and minus class numbers of orders} \label{S:PPAVs} In this section we will prove Theorem~\ref{T:classgroup} and Corollary~\ref{C:classgroup}, and we give some conditions under which the norm map from $\Pic R$ to $\Picplus R^{+}$ is surjective. Throughout the section we continue to use the notation set at the beginning of Section~\ref{S:isogenyclasses}: $k$ is a finite field with $q$ elements, ${\mathcal C}$ is an isogeny class of simple $n$-dimensional ordinary abelian varieties over~$k$, $f$ is the Weil polynomial for ${\mathcal C}$ (and is irreducible and of degree~$2n$), $K$ is the CM~field defined by~$f$, $K^{+}$ is its maximal real subfield, and $\pi$ is a root of $f$ in~$K$. 
Before we begin the proof of Theorem~\ref{T:classgroup}, let us make one comment on the restriction to strata corresponding to convenient orders. If $A$ is an abelian variety in ${\mathcal C}$ and if $A$ has a principal polarization, then $\End A$ is stable under complex conjugation because the Rosati involution on $(\End A)\otimes{\mathbf Q} = K$ associated to a principal polarization takes $\End A$ to itself, and the only positive involution on $K$ is complex conjugation. Thus, every stratum of ${\mathcal C}$ that contains a principally polarized variety must correspond to an endomorphism ring $R$ which satisfies condition~\eqref{conv1} of Definition~\ref{D:convenient}. In general, however, $\End A$ need not be convenient. \begin{proof}[Proof of Theorem~\textup{\ref{T:classgroup}}] To understand the category of abelian varieties in the ordinary isogeny class~${\mathcal C}$, we turn to the theory of Deligne modules and their polarizations as set forth in~\cite{Howe1995}, based on Deligne's equivalence of categories~\cite{Deligne1969} between ordinary abelian varieties over a finite field and a certain category of modules. Deligne's equivalence of categories involves picking an embedding of the Witt vectors over $\bar{k}$ into the complex numbers~${\mathbf C}$, and this embedding determines a $p$-adic valuation $v$ on the algebraic numbers in ${\mathbf C}$. We let \[ \Phi = \{ \varphi\colon K\to{\mathbf C} \mid v(\varphi(\pi)) > 0 \} \] so that $\Phi$ is a \emph{CM-type}, that is, a choice of half of all the embeddings of $K$ into ${\mathbf C}$, one from each complex-conjugate pair. Following~\cite{Howe1995}, we see that the abelian varieties $A$ in~${\mathcal S}$ correspond via Deligne's equivalence to the classes of fractional ideals ${\mathfrak A}$ of $R$ with $\End{\mathfrak A} = R$, and since $R$ is Gorenstein these are precisely the classes of invertible fractional $R$-ideals. 
According to~\cite{Howe1995}, if $A$ corresponds to (the class of) an ideal ${\mathfrak A}$, then the dual $\hat{A}$ of $A$ corresponds to the complex conjugate of the trace dual of ${\mathfrak A}$. Let ${\mathfrak d}$ be the different of $R$; since $R$ is stable under complex conjugation, so is ${\mathfrak d}$. The trace dual of an invertible $R$-ideal ${\mathfrak A}$ is ${\mathfrak d}^{-1}{\mathfrak A}^{-1}$, and we see that $\hat{A}$ corresponds to the class of ${\mathfrak d}^{-1}\bar{\Aid}^{-1}$ in $\Pic R$. An isogeny from one Deligne module ${\mathfrak A}$ to another ${\mathfrak B}$ is an element $x\in K$ such that $x{\mathfrak A}\subseteq{\mathfrak B}$. The degree of this isogeny is the index of $x{\mathfrak A}$ in ${\mathfrak B}$. A polarization of $A$ is an isogeny $A\to\hat{A}$ that satisfies certain symmetry and positivity conditions. In the category of Deligne modules, a polarization is an isogeny $x$ from ${\mathfrak A}$ to ${\mathfrak d}^{-1}\bar{\Aid}^{-1}$ such that $x$ is pure imaginary and such that $\varphi(x)$ is positive imaginary (that is, a positive real times the element $i$ of~${\mathbf C}$) for every $\varphi\colon K\to {\mathbf C}$ in the CM-type $\Phi$. Fix an arbitrary pure imaginary $\iota\in K$ such that $\varphi(\iota)$ is positive imaginary for every $\varphi\colon K\to {\mathbf C}$ in the CM-type $\Phi$. Then a polarization of a Deligne module ${\mathfrak A}$ is an isogeny $\iota x$ from ${\mathfrak A}$ to ${\mathfrak d}^{-1}\bar{\Aid}^{-1}$ such that $x$ is a totally positive element of~$K^{+}$. It is now easy to characterize the Deligne modules ${\mathfrak A}$, with $\End {\mathfrak A} = R$, that have principal polarizations. We see that such an ${\mathfrak A}$ has a principal polarization if and only if there is a totally positive $x\in K^{+}$ such that $\iota x {\mathfrak A} = {\mathfrak d}^{-1}\bar{\Aid}^{-1}$. This condition is equivalent to $x{\mathfrak A}\bar{\Aid} = (\iota{\mathfrak d})^{-1}$.
Since $R$ is convenient, the different ${\mathfrak d}$ can be generated by pure imaginary elements, so the ideal $\iota{\mathfrak d}$ can be generated by real elements. Let ${\mathfrak d}' = (\iota{\mathfrak d})\cap K^{+}$. Then ${\mathfrak d}'$ is a fractional $R^{+}$-ideal with ${\mathfrak d}' R = \iota{\mathfrak d}$. In fact, we have $\End {\mathfrak d}' = (\End \iota{\mathfrak d})\cap K^{+} = R\cap K^{+} = R^{+}$, and since $R^{+}$ is Gorenstein, this means that ${\mathfrak d}'$ is an invertible fractional $R^{+}$-ideal. Let ${\mathfrak D}$ be the norm of ${\mathfrak A}$. Then the equality $x{\mathfrak A}\bar{\Aid} = (\iota{\mathfrak d})^{-1}$ of invertible fractional $R$-ideals is equivalent to the equality $x{\mathfrak D} = ({\mathfrak d}')^{-1}$ of invertible fractional $R^{+}$-ideals. In other words, we see that the abelian variety corresponding to the class of ${\mathfrak A}$ in $\Pic R$ has a principal polarization if and only if this class maps, via the norm map $\Pic R \to \Picplus R^{+}$, to the class of $({\mathfrak d}')^{-1}$. Since this norm map is surjective by assumption, the number of principally polarizable classes $[{\mathfrak A}]$ is simply the quotient $(\#\Pic R)/(\#\Picplus R^{+})$. Finally, we count the number of distinct principal polarizations (up to isomorphism) on a Deligne module, given that it has one. Suppose $\lambda$ and $\mu$ are two principal polarizations on a Deligne module ${\mathfrak A}$ with $\End {\mathfrak A} = R$. Then $\mu^{-1}\lambda$ is an automorphism of ${\mathfrak A}$, and it is a totally positive element of $R^{+}$. Conversely, if $u$ is a totally positive unit of $R^{+}$, then $u\lambda$ is a principal polarization of ${\mathfrak A}$. Two principal polarizations $\lambda$ and $\mu$ are isomorphic if and only if there is an isomorphism $\alpha\colon{\mathfrak A}\to{\mathfrak A}$ such that $\mu = \hat{\alpha} \lambda\alpha$, where $\hat{\alpha}$ is the dual isogeny of $\alpha$.
The Rosati involution on $\End A$ --- which is complex conjugation --- is given by $x\mapsto \lambda^{-1} \hat{x} \lambda$, so we find that $\lambda$ and $\mu$ are isomorphic if and only if $\mu = \lambda \bar{\alpha}\alpha$. Thus, the isomorphism classes of principal polarizations on ${\mathfrak A}$ correspond to elements of $U^{+}_{>0}$ modulo $N(U)$. The theorem follows. \end{proof} \begin{proof}[Proof of Corollary~\textup{\ref{C:classgroup}}] By Theorem~\ref{T:classgroup}, the total number of principally polarized varieties $(A,\lambda)$ in the stratum ${\mathcal S}$, counted up to isomorphism, is equal to \[ [U^{+}_{>0} : N(U)] \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic^+ R^{+}}, \] where $U^{+}_{>0}$ is the group of totally positive units of~$R^{+}$. Since $U\supseteq U^{+}$, we can rewrite this expression as \begin{align*} [U^{+}_{>0} : N(U)] \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic^+ R^{+}} &= \frac{[U^{+}_{>0} : (U^{+})^2]}{[N(U) : (U^{+})^2]} \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic^+ R^{+}} \\ &= \frac{1}{[N(U) : (U^{+})^2]} \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic R^{+}}. \end{align*} We are left to prove the statement about the unit index. Hasse~\cite[Satz~14, p.~54]{Hasse1952} shows that the index of $(U^{+})^2$ in $N(U)$ is either $1$ or~$2$. We will show that if the index is~$2$, then $K/K^{+}$ is unramified at all odd primes. Suppose $v$ is an element of $N(U)$ not in $(U^{+})^2$, say with $v = N(u)$ for some $u\in U\setminus U^{+}$. Then $\bar{u}/u$ is an algebraic integer, and for every embedding $\varphi$ of $K$ into ${\mathbf C}$ we find that $\varphi(\bar{u}/u)$ lies on the unit circle; therefore $\bar{u}/u = \zeta$ for some root of unity $\zeta$. Let $n$ be the smallest positive integer with $\zeta^n = 1$ and write $n = 2^e m$ with $m$ odd. Let $a = (1-m)/2$. If we let $z = \zeta^a u$, then $N(z) = v$, and $\bar{z}/z = \zeta^m$ is a primitive $2^e$-th root of unity. Replacing $u$ with $z$, we find we may assume that $n = 2^e$ is a power of~$2$.
Furthermore, if $n=1$ then $\bar{u}=u$ and $N(u)\in(U^{+})^2$, contrary to assumption, so $n > 1$ and $e > 0$. We see that $v = u\bar{u} = \zeta u^2$, so $u^{2^e} = - v^{2^{e-1}}$. If $e = 1$ then $u^2 = -v$, so $K = K^{+}(\sqrt{-v})$. If $e > 1$ then $(u^{2^{e-1}} / v^{2^{e-2}})^2 = -1$, so $K = K^{+}(\sqrt{-1})$. In both cases $K$ is obtained from $K^{+}$ by adjoining the square root of a unit, so $K/K^{+}$ is unramified at all odd primes. \end{proof} \begin{remark} \label{R:cokernel} Suppose ${\mathcal S}$ is a stratum corresponding to a convenient order~$R$, and suppose the norm map $N_{\Pic}$ is \emph{not} surjective; say that the cokernel has order $n>1$. If the class of $({\mathfrak d}')^{-1}$ in $\Picplus R^{+}$ is not in the image of the norm map, then there will be \emph{no} principally polarized varieties $A\in{\mathcal C}$ with $\End A = R$. On the other hand, if the class of $({\mathfrak d}')^{-1}$ is in the image of the norm, there will be $n(\#\Pic R)/(\#\Picplus R^{+})$ principally polarizable varieties $A\in{\mathcal C}$ with $\End A = R$, and each such variety will have ${[U^{+}_{>0} : N(U)]}$ isomorphism classes of principal polarizations. Likewise, the total number of principally polarized varieties $(A,\lambda)$ in ${\mathcal S}$, counted up to isomorphism, will be \[\frac{n}{[N(U) : (U^{+})^2]} \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic R^{+}}\] or $0$, depending on whether or not the class of $({\mathfrak d}')^{-1}$ is in the image of the norm. \end{remark} \begin{remark} \label{R:lowerstar} Let $R$ be the endomorphism ring for a stratum ${\mathcal S}$ of an isogeny class. 
For an alternative viewpoint on Corollary~\ref{C:classgroup}, we can consider the group $\Pic_* R$ defined by Lenstra, Pila, and Pomerance~\cite[\S6]{LenstraPilaEtAl2002} as the set of equivalence classes of pairs $({\mathfrak B},\beta)$, where ${\mathfrak B}$ is an invertible $R$-ideal and $\beta$ is a totally positive element of $K^{+}$ such that $N({\mathfrak B})=\beta R^{+}$, and where two such pairs $({\mathfrak B},\beta)$ and $({\mathfrak C},\gamma)$ are taken to be equivalent if there is an element $\alpha\in K^*$ such that $\alpha {\mathfrak B} = {\mathfrak C}$ and $\alpha\bar{\alpha}\beta = \gamma$. This group is sometimes called the \emph{Shimura class group} of~$R$. The group $\Pic_* R$ acts on the set $X$ of principally polarized abelian varieties $(A,\lambda)$ in the stratum~${\mathcal S}$, as follows: If $(A,\lambda)$ corresponds to a Deligne module ${\mathfrak A}$ together with a totally positive $x\in K^{+}$ such that $x{\mathfrak A}\bar{\Aid} = (\iota{\mathfrak d})^{-1}$ (with notation as in the proof of Theorem~\ref{T:classgroup}), then for every $({\mathfrak B},\beta)$ in $\Pic_* R$ we define $({\mathfrak B},\beta)\cdot (A,\lambda)$ to be the principally polarized variety corresponding to the Deligne module ${\mathfrak B}{\mathfrak A}$ and the element $x/\beta\in K^{+}$. It is easy to see that if the set $X$ is nonempty, then it is a principal homogeneous space for the group $\Pic_* R$. Thus, Corollary~\ref{C:classgroup} says that when $\Pic R \to \Picplus R^{+}$ is surjective, the group $\Pic_* R$ has order \[\frac{1}{[N(U) : (U^{+})^2]} \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic R^{+}},\] and there are this many principally polarized varieties in~${\mathcal S}$. More generally, we find (as in Remark~\ref{R:cokernel}) that \[\#\Pic_* R = \frac{n}{[N(U) : (U^{+})^2]} \, \frac{\#\Pic R^{\phantom{+}}}{\#\Pic R^{+}},\] where $n$ is the order of the cokernel of the norm map $\Pic R \to \Picplus R^{+}$.
\end{remark} Next we give some conditions under which the norm map $\Pic R \to \Picplus R^{+}$ is guaranteed to be surjective. Suppose $R$ is a convenient order in a CM~field~$K$. Let ${\mathfrak f}$ be the conductor of $R^{+}$ and let $L/K^{+}$ be the ray class field for the modulus of $K^{+}$ determined by ${\mathfrak f}$ together with all of the infinite primes. \begin{proposition} \label{P:norm} If $K/K^{+}$ is not isomorphic to a subextension of $L/K^{+}$, then the norm map $\Pic R \to \Picplus R^{+}$ is surjective. \end{proposition} \begin{corollary} \label{C:norm} If $K/K^{+}$ is ramified at a finite prime that does not divide the conductor of $R^{+}$, then the norm map $\Pic R \to \Picplus R^{+}$ is surjective. \qed \end{corollary} \begin{proof}[Proof of Proposition~\textup{\ref{P:norm}}] To better understand the norm map from $\Pic R$ to $\Picplus R^{+}$, we take~\cite[\S 6]{LenstraPilaEtAl2002} as a model and identify the Picard groups with quotients of certain profinite groups, as follows. Let $\hat{\BZ}$ be the profinite completion of the integers ${\mathbf Z}$; that is, $\hat{\BZ} = \varprojlim {\mathbf Z}/n{\mathbf Z} \cong \prod_{p} {\mathbf Z}_p$. For any ring $R$ we let $\hat{R}$ denote $R\otimes_{{\mathbf Z}}\hat{\BZ}$. Then we have \[ \Pic R \cong \hat{K}^*/(\hat{R}^* K^*) \text{\quad and\quad} \Picplus R^{+} \cong (\hat{K^{+}})^*/((\hat{R^{+}})^* (K^{+})^*_{>0}), \] where $(K^{+})^*_{>0}$ denotes the multiplicative group of totally positive elements of $K^{+}$. The norm map on Picard groups gives us an exact sequence \[ \xymatrix{ \displaystyle\frac{\hat{K}^*}{\hat{R}^* K^*}\ \ar[r]^(0.4){N} & \ \displaystyle\frac{(\hat{K^{+}})^*}{(\hat{R^{+}})^* (K^{+})^*_{>0}}\ \ar[r] & \ \displaystyle\frac{(\hat{K^{+}})^*}{(\hat{R^{+}})^* (K^{+})^*_{>0} N(\hat{K}^*)}\ \ar[r] & 1. 
} \] Combining this with the analogous sequence for the maximal orders, we obtain the following diagram with exact rows and columns: \[ \xymatrix{ & & \displaystyle\frac{(\hat{\calO^+})^* (K^{+})^*_{>0} N(\hat{K}^*)}{(\hat{R^{+}})^* (K^{+})^*_{>0} N(\hat{K}^*)}\ar[d]\\ \displaystyle\frac{\hat{K}^*}{\hat{R}^* K^*}\ \ar[r]^(0.4){N}\ar[d] & \ \displaystyle\frac{(\hat{K^{+}})^*}{(\hat{R^{+}})^* (K^{+})^*_{>0}}\ \ar @{->}[r] \ar[d] & \ \displaystyle\frac{(\hat{K^{+}})^*}{(\hat{R^{+}})^* (K^{+})^*_{>0} N(\hat{K}^*)}\ \ar[r] \ar[d] & 1\\ \displaystyle\frac{\hat{K}^*}{\hat{{\mathcal O}}^* K^*}\ \ar[r]^(0.4){N} & \ \displaystyle\frac{(\hat{K^{+}})^*}{(\hat{\calO^+})^* (K^{+})^*_{>0}}\ \ar[r] & \ \displaystyle\frac{(\hat{K^{+}})^*}{(\hat{\calO^+})^* (K^{+})^*_{>0} N(\hat{K}^*)}\ \ar[r] & 1 } \] Let ${\mathfrak m}$ be the modulus consisting of the infinite primes of $K^{+}$ and the ideal~${\mathfrak f}$. We claim that the cokernel of the map $\Pic R \to \Picplus R^{+}$ is trivial, under the assumption that $K/K^{+}$ is not isomorphic to a subextension of $L/K^{+}$, the ray class field of $K^{+}$ modulo ${\mathfrak m}$. To prove this, it will suffice to show that the cokernel of the map $\Pic {\mathcal O} \to \Picplus \calO^+$ is trivial and that the group \begin{equation} \label{EQ:kernel} \frac{(\hat{\calO^+})^* (K^{+})^*_{>0} N(\hat{K}^*)}{(\hat{R^{+}})^* (K^{+})^*_{>0} N(\hat{K}^*)} \end{equation} is trivial. The extension $K/K^{+}$ must be ramified at a finite prime, because otherwise it would be contained in the ray class field of $K^{+}$ modulo the infinite primes. By \cite[Proposition~10.1, p.~2385]{Howe1995}, it follows that $\Pic {\mathcal O} \to \Picplus \calO^+$ is surjective. We are left to show that the group~\eqref{EQ:kernel} is trivial. To do this, it will suffice to show that for every $a\in(\hat{\calO^+})^*$ we can express $a$ as the product of an element of $(K^{+})^*_{>0}$ and an element of $(\hat{R^{+}})^*$ and the norm of an element of $\hat{K}^*$. 
First let us describe the structure of the profinite groups in question. The group $\hat{K}^*$ consists of all vectors $(a_{\mathfrak q})_{\textup{${\mathfrak q}$ of ${\mathcal O}$}}$ where each $a_{\mathfrak q}$ is a nonzero element of the completion $K_{\mathfrak q}$, and where all but finitely many $a_{\mathfrak q}$ lie in ${\mathcal O}_{\mathfrak q}^*$. The group $(\hat{\calO^+})^*$ consists of all vectors $(a_{\mathfrak p})_{\textup{${\mathfrak p}$ of $\calO^+$}}$ where each $a_{\mathfrak p}$ lies in $(\calO^+)_{\mathfrak p}^*$. The group $(\hat{R^{+}})^*$ has an analogous structure, and can also be viewed as a subgroup of $(\hat{\calO^+})^*$. For us, it will suffice to observe that \[ (\hat{R^{+}})^* \supseteq \{ (a_{\mathfrak p})_{\textup{${\mathfrak p}$ of $\calO^+$}} \mid a_{\mathfrak p} \equiv 1\bmod {\mathfrak p}^e \textup{\ when\ } {\mathfrak p}^e\parallel {\mathfrak f}\}. \] Suppose we are given an element $a = (a_{\mathfrak p})$ of $(\hat{\calO^+})^*$. First we choose a totally positive $x\in\calO^+$ such that if ${\mathfrak p}^e\parallel{\mathfrak f}$ then $x\equiv a_{\mathfrak p}\bmod {\mathfrak p}^e$. The ideal $x\calO^+$ gives us a class $\chi$ in the ray class group modulo ${\mathfrak m}$. Because $K/K^{+}$ is not isomorphic to a subextension of $L/K^{+}$, the Chebotar\"ev density theorem (applied to the extension $L\cdot K$ of $K^{+}$) shows that there is a prime ${\mathfrak P}$ of $\calO^+$ that splits in $K$, that does not divide~${\mathfrak f}$, and whose image in the ray class group is $\chi^{-1}$. This means that there is an element $y$ of $(K^{+})^*_{>0}$ with $y \equiv 1\modstar {\mathfrak f}$ such that $\calO^+ = xy{\mathfrak P}$. Let ${\mathfrak Q}$ be a prime of $K$ lying over ${\mathfrak P}$, and let $b = (b_{\mathfrak q})$ be the element of $\hat{K}^*$ such that $b_{\mathfrak q} = 1$ if ${\mathfrak q}\ne{\mathfrak Q}$ and $b_{\mathfrak Q} = (xy)^{-1}$.
Then $N(b)$ is the element of $(\hat{K^{+}})^*$ that is equal to $1$ at every prime except ${\mathfrak P}$, where it is equal to $(xy)^{-1}$. We check that for all ${\mathfrak p}^e \parallel {\mathfrak f}$, the ${\mathfrak p}$-components of $a$ and of $xyN(b)$ are congruent modulo ${\mathfrak p}^e$. For all primes ${\mathfrak p}\neq{\mathfrak P}$ that do not divide ${\mathfrak f}$, the ${\mathfrak p}$-component of $xyN(b)$ is a unit of $\calO^+_{\mathfrak p}$. Finally, the ${\mathfrak P}$-component of $xyN(b)$ is equal to~$1$. Therefore, $a/(xyN(b))$ is an element of \[ \{ (a_{\mathfrak p})_{\textup{${\mathfrak p}$ of $\calO^+$}} \mid a_{\mathfrak p} \equiv 1\bmod {\mathfrak p}^e \textup{\ when\ } {\mathfrak p}^e\parallel {\mathfrak f}\}, \] which is contained in $(\hat{R^{+}})^*$. Thus, our element $a$ of $(\hat{\calO^+})^*$ is the product of the element $xy$ of $(K^{+})^*_{>0}$ and the element $a/(xyN(b))$ of $(\hat{R^{+}})^*$ and the norm of the element $b$ of $\hat{K}^*$. This shows that the cokernel of the map $\Pic R \to \Picplus R^{+}$ is trivial. \end{proof} \section{Minus class numbers and discriminants} \label{S:discriminants} We continue to use the notation set forth at the beginning of Section~\ref{S:isogenyclasses}. Corollary~\ref{C:classgroup} shows that for the strata ${\mathcal S}$ corresponding to certain convenient orders~$R$, the number of principally polarized abelian varieties in ${\mathcal S}$ is equal either to $h_R/h_{R^{+}}$ or to $(1/2)(h_R/h_{R^{+}})$, where $h_R$ is the order of the Picard group of $R$ and $h_{R^{+}}$ is the order of the Picard group of the real subring $R^{+}$ of~$R$. We denote the ratio $h_R/h_{R^{+}}$ by $h^{-}_R$, as is commonly done in the case when $R$ is a maximal order, and we call this ratio the \emph{minus class number} of~$R$.
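In the one-dimensional case the minus class number is easy to experiment with: there $K$ is imaginary quadratic, $K^{+} = {\mathbf Q}$, $h_{R^{+}} = 1$, and $h^{-}_R = h_R$. The sketch below is our own illustration (the discriminants tested are hypothetical sample values, not drawn from the paper); it computes class numbers of imaginary quadratic maximal orders by counting reduced binary quadratic forms.

```python
import math

def class_number(D):
    """Class number of an imaginary quadratic field of fundamental
    discriminant D < 0, computed by counting reduced forms
    a*x^2 + b*x*y + c*y^2 with b^2 - 4ac = D, -a < b <= a <= c,
    and b >= 0 whenever a == c."""
    h = 0
    for a in range(1, math.isqrt(-D // 3) + 2):
        for b in range(-a + 1, a + 1):
            if (b*b - D) % (4*a):
                continue
            c = (b*b - D) // (4*a)
            if c < a or (b < 0 and a == c):
                continue
            h += 1
    return h
```

For instance, the field of discriminant $-23$ has class number $3$; tabulating many discriminants, one can watch $\log h$ track $\tfrac12\log|\Delta|$ only very loosely, in the spirit of the Brauer--Siegel estimates discussed in the rest of this section.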
In the case where $R$ is a maximal order ${\mathcal O}$, a Brauer--Siegel result for relative class numbers~\cite{Louboutin2006} gives us an estimate --- a rough estimate, to be sure --- for the minus class number $h^{-}_{\mathcal O}$ in terms of the ratio $\Delta_{\mathcal O}/\Delta_{\calO^+}$, where $\Delta_{\mathcal O}$ and $\Delta_{\calO^+}$ are the discriminants of ${\mathcal O}$ and $\calO^+$. In this section we review this result on relative class numbers and consider the case of minus class numbers of convenient orders that are not maximal. In the case where $R$ is the convenient order ${\mathbf Z}[\pi,\bar{\pi}]$, we also compute an exact formula for the ratio $\Delta_R/\Delta_{R^{+}}$ in terms of the Frobenius angles of the isogeny class~${\mathcal C}$. (This argument was sketched in~\cite{Howe2000} and given in detail in~\cite{GerhardWilliams2019}; we present a derivation here for the reader's convenience.) For CM~fields that do not contain imaginary quadratic fields, Louboutin gives effective lower bounds on $h^{-}_{\mathcal O}$ that are better than the crude Brauer--Siegel approximations that we discuss here, but for our purposes the added value of these effective results does not justify the complexity they would add to the discussion. In some sense, we will be satisfied simply to justify the rough heuristic that ``minus class numbers grow like the square root of the ratio of discriminants,'' and we will not try to quantify the known bounds on $h^{-}_R$ more precisely. Let us make some remarks on the $\triplesim$ notation set in the introduction. Recall that if $\{a_i\}$ and $\{b_i\}$ are two sequences of positive real numbers indexed by positive integers~$i$, the expression $a_i \triplesim b_i$ means that for every $\varepsilon >0$ there are positive constants $r$ and $s$ such that $b_i \le r a_i^{1+\varepsilon}$ and $a_i \le s b_i^{1+\varepsilon}$ for all~$i$. 
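The definition can be exercised numerically. In the sketch below (our own illustration, with hypothetical test sequences), $a_i = i^2$ and $b_i = i^2\log i$ satisfy $a_i \triplesim b_i$, and the ratio $(\log a_i)/(\log b_i)$ drifts toward $1$, though only logarithmically slowly.

```python
import math

def log_ratio(i):
    """(log a_i)/(log b_i) for the test sequences a_i = i^2, b_i = i^2*log(i)."""
    a, b = i**2, i**2 * math.log(i)
    return math.log(a) / math.log(b)

# Sample the ratio at i = 10, 10^3, 10^6, 10^9.
ratios = [log_ratio(10**k) for k in (1, 3, 6, 9)]
```

The slow convergence is consistent with the "very roughly" qualifier: $\triplesim$ allows the two sequences to differ by factors as large as a fixed power of either one, up to $\varepsilon$ in the exponent.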
The notation is intended to capture the notion that the elements of the two sequences grow \emph{very roughly} at the same rate. The relation $\triplesim$ is clearly symmetric and transitive. Furthermore, if we have sequences $\{a_i\}$, $\{b_i\}$, $\{c_i\}$, and $\{d_i\}$ with $a_i \triplesim b_i$ and $c_i\triplesim d_i$, and if $f$ and $g$ are two functions from ${\mathbf Z}_{>0}$ to itself, then \[ a_{f(i)} c_{g(i)} \triplesim b_{f(i)} d_{g(i)}. \] Note also that for sequences $\{a_i\}$ and $\{b_i\}$ that tend to infinity, $a_i \triplesim b_i$ if and only if $(\log a_i)/(\log b_i)\to 1$. For a convenient order $R$, we let $\Delta_R$ and $\Delta_{R^{+}}$ denote the discriminants of $R$ and $R^{+}$, respectively. \begin{theorem}[Louboutin] \label{T:B-S} As ${\mathcal O}$ ranges over the rings of integers of CM~fields of a given degree over ${\mathbf Q}$, we have $h_{\mathcal O}^- \triplesim \sqrt{\left|\Delta_{\mathcal O} / \Delta_{\calO^+}\right|}$. \end{theorem} \begin{proof} This is a combination of~\cite[Corollary~29, p.~216]{Louboutin2006}, which discusses normal CM~fields of arbitrary degree with root-discriminants tending to infinity, and~\cite[Corollary~32, p.~217]{Louboutin2006}, which discusses non-normal CM~fields of fixed degree. \end{proof} \begin{theorem} \label{T:B-S-orders} As $R$ ranges over all convenient orders of a given degree $2n$ over ${\mathbf Q}$ for which the norm map $\Pic R \to\Picplus R^{+}$ is surjective, we have $h_R^- \triplesim \sqrt{\left|\Delta_R /\Delta_{R^{+}}\right|}$. \end{theorem} \begin{proof} Let $R$ be a convenient order in a field $K$ for which $\Pic R \to\Picplus R^{+}$ is surjective, let ${\mathcal O}$ be the maximal order of~$K$, and let $\calO^+$ be the maximal order of the real subfield $K^{+}$. 
By Remark~\ref{R:lowerstar}, we see that for this order $R$ the relative class number $h_R^-$ is equal to either $\#\Pic_* R$ or $2\#\Pic_* R$, so it will suffice to show that $\#\Pic_* R \triplesim \sqrt{\left|\Delta_R /\Delta_{R^{+}}\right|}$. Lenstra, Pila, and Pomerance show~\cite[Lemma~6.3, p.~125]{LenstraPilaEtAl2002} that the order of $\Pic_* R$ is equal to \[ \#\Pic_* R = \#C_0 \cdot \frac{w(R)}{2^n} \cdot \frac{h_{\mathcal O} \reg{\mathcal O} \cdot w(\calO^+)}{h_{\calO^+} \reg\calO^+ \cdot w({\mathcal O})} \cdot \frac{[\hat{{\mathcal O}}^* \, \colon\, \hat{R}^*]} {[(\hat{\calO^+})^* \,\colon\, (\hat{R^{+}})^*]}, \] where $C_0$ is the cokernel of the norm map $\Pic R \to\Picplus R^{+}$ (which is trivial in our case), where $w(R)$, $w({\mathcal O})$, and $w(\calO^+)$ denote the number of roots of unity in these orders, where $\reg$ denotes the regulator, and where the ``hat'' notation is as in the proof of Proposition~\ref{P:norm}. We note that the expression \[ \#C_0 \cdot \frac{w(R)}{2^n} \cdot \frac{\reg{\mathcal O} \cdot w(\calO^+)}{\reg\calO^+ \cdot w({\mathcal O})} \] is bounded above and below in terms depending only on the degree $n$, so it will suffice for us to show that \[ \frac{h_{\mathcal O}}{h_{\calO^+}} \cdot \frac{[\hat{{\mathcal O}}^* \, \colon\, \hat{R}^*]} {[(\hat{\calO^+})^* \,\colon\, (\hat{R^{+}})^*]} \triplesim \frac{\sqrt{\left|\Delta_R\right|}}{\sqrt{\left|\Delta_{R^{+}}\right|}}. \] Following~\cite{LenstraPilaEtAl2002}, we let ${\mathfrak F}$ be the conductor of $R$, we set ${\mathfrak f} = {\mathfrak F}\cap R^{+} = {\mathfrak F}\cap\calO^+$, and we define finite rings \[ A = {\mathcal O}/{\mathfrak F}, \quad B = \calO^+/{\mathfrak f}\subseteq A, \quad C = R/{\mathfrak F}\subseteq A, \text{\ and \ } D = R^{+}/{\mathfrak f} = B\cap C.
\] Then \[ \frac{[\hat{{\mathcal O}}^* \, \colon\, \hat{R}^*]} {[(\hat{\calO^+})^* \,\colon\, (\hat{R^{+}})^*]} = \frac{[A^*\, \colon\, C^*]}{[B^*\, \colon\, D^*]} = \frac{\#A^* / \#C^*}{\#B^*/\#D^*}, \] and by Corollary~5.8~\cite[p.~123]{LenstraPilaEtAl2002} and the remark following its proof, we have \[ \frac{\#A^* / \#C^*}{\#B^*/\#D^*} \triplesim \frac{\#A / \#C}{\#B/\#D} = \frac{[{\mathcal O}\,\colon\,R]}{[\calO^+\,\colon\,R^{+}]} = \frac{\sqrt{\Delta_{\mathcal O}/\Delta_R}}{\sqrt{\Delta_{\calO^+}/\Delta_{R^{+}}}} \] as $R$ ranges over the convenient orders of a given degree over ${\mathbf Q}$. This gives us \[ \frac{[\hat{{\mathcal O}}^* \, \colon\, \hat{R}^*]} {[(\hat{\calO^+})^* \,\colon\, (\hat{R^{+}})^*]} \triplesim \frac{\sqrt{\Delta_R/\Delta_{\mathcal O}}}{\sqrt{\Delta_{R^{+}}/\Delta_{\calO^+}}}, \] and combining this with Theorem~\ref{T:B-S} we find that \[ \frac{h_{\mathcal O}}{h_{\calO^+}} \cdot \frac{[\hat{{\mathcal O}}^* \, \colon\, \hat{R}^*]} {[(\hat{\calO^+})^* \,\colon\, (\hat{R^{+}})^*]} \triplesim \frac{\sqrt{\left|\Delta_{\mathcal O}\right|}}{\sqrt{\left|\Delta_{\calO^+}\right|}} \cdot \frac{\sqrt{\Delta_R/\Delta_{\mathcal O}}}{\sqrt{\Delta_{R^{+}}/\Delta_{\calO^+}}} = \frac{\sqrt{\left|\Delta_R\right|}}{\sqrt{\left|\Delta_{R^{+}}\right|}}, \] which, as we noted above, is enough to prove the theorem. \end{proof} The ring $R = {\mathbf Z}[\pi,\bar{\pi}]$ from Corollary~\ref{C:minimalR} is contained in the endomorphism ring of every abelian variety $A$ in ${\mathcal C}$, and for this $R$ there is a very nice expression of $\sqrt{\left|\Delta_R/\Delta_{R^{+}}\right|}$ in terms of Frobenius angles. \begin{theorem}[{See~\cite[\S2]{GerhardWilliams2019}}] \label{T:disc} Let $0\le\theta_1\le\cdots\le\theta_n\le\pi$ be the Frobenius angles for the isogeny class~${\mathcal C}$, and let $R$ be the ring ${\mathbf Z}[\pi,\bar{\pi}]$.
Then we have \[ \sqrt{\left|\Delta_R/\Delta_{R^{+}}\right|} = 2^{n(n+1)/2} q^{n(n+1)/4} \prod_{i<j} (\cos \theta_i - \cos \theta_j) \prod_{i} \sin \theta_i. \] \end{theorem} \begin{proof} Clearly $R = R^{+}\cdot 1 + R^{+}\cdot\pi$. Arguing as in the proof of Proposition~\ref{P:convenient}, we find that the different of $R$ is $\pi-\bar{\pi}$ times the different of $R^{+}$, and since $R^{+}$ is generated by $\pi+\bar{\pi}$ we see that the different of $R^{+}$ is $g'(\pi+\bar{\pi})$, where $g$ is the minimal polynomial of $\pi+\bar{\pi}$. The discriminant ideal is the norm of the different, so the integers $\left|\Delta_{R^{+}}\right|$ and $\left|\Delta_R\right|$ are given by \[ \left|\Delta_{R^{+}}\right| = \left|N_{K^{+}/{\mathbf Q}}( g'(\pi+\bar{\pi}))\right| \text{\quad and\quad } \left|\Delta_R\right| = \left|N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| \Delta^2_{R^{+}}, \] and we see that \begin{equation} \label{EQ:quotient} \left|\Delta_R/\Delta_{R^{+}}\right| = \left|N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| \left|N_{K^{+}/{\mathbf Q}}( g'(\pi+\bar{\pi}))\right|. \end{equation} The images of $\pi+\bar{\pi}$ under the various real embeddings of $K^{+}$ into ${\mathbf R}$ are \[\sqrt{q} e^{i\theta_i} + \sqrt{q} e^{-i\theta_i} = 2\sqrt{q} \cos\theta_i,\] so \begin{equation} \label{EQ:cosines} \left|N_{K^{+}/{\mathbf Q}}( g'(\pi+\bar{\pi}))\right| = 2^{n(n-1)} q^{n(n-1)/2} \prod_{i<j} (\cos\theta_i-\cos\theta_j)^2. \end{equation} Similarly, the images of $\pi-\bar{\pi}$ in ${\mathbf C}$ are the values \[ \sqrt{q} e^{i\theta_i} - \sqrt{q} e^{-i\theta_i} = 2\sqrt{q} \sqrt{-1} \sin\theta_i \] and their complex conjugates, so \begin{equation} \label{EQ:sines} \left|N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| = 2^{2n} q^{n} \prod_{i} \sin^2\theta_i.
\end{equation} Combining Equations~\eqref{EQ:quotient}, \eqref{EQ:cosines}, and~\eqref{EQ:sines}, we find that \[ \left|\Delta_R/\Delta_{R^{+}}\right| = 2^{n(n+1)} q^{n(n+1)/2} \prod_{i<j} (\cos\theta_i-\cos\theta_j)^2 \prod_{i} \sin^2\theta_i, \] and the theorem follows. \end{proof} We note that Theorem~\ref{T:sequence} follows from Corollary~\ref{C:classgroup}, Theorem~\ref{T:B-S-orders}, and Theorem~\ref{T:disc}. \section{Isogeny classes containing many principally polarized varieties} \label{S:warnings} Suppose ${\mathcal C}$ is an isogeny class of simple ordinary abelian varieties over~${\mathbf F}_q$, corresponding to a Weil number~$\pi$, and let $R = {\mathbf Z}[\pi,\bar{\pi}]$ be the minimal ring of~${\mathcal C}$. We say that an abelian variety in ${\mathcal C}$ \emph{has minimal endomorphism ring} if its endomorphism ring is~$R$. We saw in Corollary~\ref{C:minimalR} that $R$ is a convenient order, so Corollary~\ref{C:norm} and Corollary~\ref{C:classgroup} show that under a mild hypothesis, the number of principally polarized varieties in ${\mathcal C}$ with minimal endomorphism ring is either $h^{-}_R$ or $h^{-}_R/2$, where $h^{-}_R$ is the minus class number of~$R$. Then Theorems~\ref{T:B-S-orders} and~\ref{T:disc} say that this number is \emph{very} roughly on the order of \[ q^{n(n+1)/4} \prod_{i<j} (\cos \theta_i - \cos \theta_j) \prod_{i} \sin \theta_i, \] where the $\theta_i$ are the Frobenius angles for the isogeny class. Since this is of the same order as the average number of principally polarized varieties with Frobenius angles near $(\theta_1,\ldots,\theta_n)$ given by Equation~\eqref{EQ:average}, one might be tempted to think that the principally polarized varieties with minimal endomorphism ring account for a nontrivial fraction of the principally polarized varieties in~${\mathcal C}$. The goal of this short section is simply to demonstrate that one should not succumb to this temptation.
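As a sanity check on Theorem~\ref{T:disc}, the identity can be tested numerically in the surface case $n = 2$; the prime power $q$ and the Frobenius angles below are arbitrary sample values, not data from the text.

```python
import numpy as np

q = 7.0                        # sample prime power
t1, t2 = 0.9, 2.1              # sample Frobenius angles, 0 < t1 < t2 < pi
n = 2
# the four Weil numbers sqrt(q) e^{+-i theta_j}
roots = [np.sqrt(q)*np.exp(s*1j*t) for t in (t1, t2) for s in (1, -1)]

# |Delta_R/Delta_{R+}| = |N_{K/Q}(pi - pibar)| * |N_{K+/Q}(g'(pi + pibar))|
norm_diff = abs(np.prod([z - q/z for z in roots]))     # N_{K/Q}(pi - pibar)
a1, a2 = (2*np.sqrt(q)*np.cos(t) for t in (t1, t2))    # conjugates of pi + pibar
norm_gprime = (a1 - a2)**2                             # |g'(a1) g'(a2)|, g quadratic
lhs = np.sqrt(norm_diff*norm_gprime)

# right side of the theorem for n = 2
rhs = 2**(n*(n + 1)//2) * q**(n*(n + 1)/4) \
      * (np.cos(t1) - np.cos(t2)) * np.sin(t1)*np.sin(t2)
print(lhs, rhs)
```

The two printed values agree to machine precision, matching the computation of Equations~\eqref{EQ:cosines} and~\eqref{EQ:sines}.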
Indeed, even for isogeny classes of elliptic curves, the number of curves with minimal endomorphism ring can be a vanishingly small fraction of the curves in the isogeny class. \begin{theorem} For every $\varepsilon>0$, there is an isogeny class ${\mathcal C}$ of ordinary elliptic curves over a finite field such that the fraction of curves in ${\mathcal C}$ with minimal endomorphism ring is less than~$\varepsilon$. \end{theorem} \begin{proof} Let ${\mathcal C}$ be an isogeny class of ordinary elliptic curves over~${\mathbf F}_q$, say with trace~$t$, and let $\Delta = t^2 - 4q$, so that $\Delta$ is the discriminant of the ring $R = {\mathbf Z}[\pi,\bar{\pi}]$, where $\pi$ is a root of $x^2 - tx + q$. Write $\Delta = F^2 \Delta_0$ for a fundamental discriminant $\Delta_0$, and for ease of exposition let us suppose that $\Delta_0$ is neither $-3$ nor~$-4$. The number of elliptic curves in ${\mathcal C}$ is equal to the Kronecker class number $H(\Delta)$ of $\Delta$ (see~\cite[Theorem~4.6, pp.~194--195]{Schoof1987}), which is the sum of the class numbers of all orders that contain~$R$: \[ H(\Delta) = \sum_{f \mid F} h(f^2 \Delta_0 ). \] Let $\chi$ be the quadratic character modulo $\Delta_0$. 
Since the only roots of unity in the order of discriminant $\Delta_0$ are~$\pm1$, we have \[ h(f^2 \Delta_0) = h(\Delta_0) f \prod_{p\mid f}\left(1 - {\textstyle\frac{\chi(p)}{p}}\right), \] so that \begin{align} \notag H(\Delta) &= h(\Delta_0) \sum_{f\mid F} f \prod_{p\mid f}\left(1 - {\textstyle\frac{\chi(p)}{p}}\right)\\ \notag &= h(\Delta_0) \prod_{p^e \parallel F} \left(1 + \left(1 - {\textstyle\frac{\chi(p)}{p}}\right)(p + \cdots + p^e)\right)\\ \intertext{and} \notag \frac{H(\Delta)}{h(\Delta)} & = \prod_{p^e \parallel F}\left(p^{-e} \left(1 - {\textstyle\frac{\chi(p)}{p}}\right)^{-1} + 1 + \frac{1}{p} + \cdots + \frac{1}{p^{e-1}}\right)\\ \notag &\ge \prod_{p^e \parallel F}\left( \frac{1}{p^{e-1}(p+1)} + \frac{p^e-1}{p^{e-1}(p-1)}\right)\\ \notag &\ge \prod_{p\mid F}\left( \frac{p+2}{p+1}\right)\\ \intertext{so} \label{hoverH} \frac{h(\Delta)}{H(\Delta)} &\le \prod_{p\mid F}\left( \frac{p+1}{p+2}\right). \end{align} Now, the product $\prod_p \big( \frac{p+1}{p+2}\big)$ diverges to $0$, so to prove the theorem we need only show that for every integer~$m>0$, there are isogeny classes ${\mathcal C}$ for which the conductor of the minimal endomorphism ring is divisible by~$m$. Suppose we are given an $m>0$. Let $\Delta_0<-4$ be a fundamental discriminant and let $n = m^2|\Delta_0|$. Let $p$ be a prime of the form $x^2 + ny^2$ (see~\cite[Theorem 9.2, p.~163]{Cox2013}), and let $t = 2x$. Then $t^2 < 4p$ and $p\nmid t$, so by a result of Deuring (see~\cite[Theorem~4.2, p.~193]{Schoof1987}) there is an isogeny class of elliptic curves over ${\mathbf F}_p$ with trace~$t$. We see that the discriminant $\Delta$ of this isogeny class is \[ \Delta = t^2 - 4p = 4x^2 - 4(x^2 + m^2|\Delta_0| y^2) = m^2 y^2 \Delta_0,\] so the conductor for the minimal endomorphism ring is~$my$, and is divisible by~$m$, as we wished to show. 
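The class-number identities above lend themselves to a quick numerical test. The sketch below (illustrative, not part of the argument) computes $h(D)$ by counting reduced primitive binary quadratic forms, a standard algorithm, and checks the formula for $h(f^2 \Delta_0)$, the Kronecker class number $H(\Delta)$, and the bound \eqref{hoverH} for the sample choices $\Delta_0 = -23$ and $F = 6$.

```python
from fractions import Fraction
from math import gcd

def class_number(D):
    # h(D): classes of primitive binary quadratic forms of discriminant D < 0,
    # counted via reduced forms (a, b, c) with -a < b <= a <= c
    assert D < 0 and D % 4 in (0, 1)
    h, b = 0, D % 2
    while 3*b*b <= -D:
        q = (b*b - D) // 4              # q = a*c
        a = max(b, 1)
        while a*a <= q:
            if q % a == 0 and gcd(gcd(a, b), q // a) == 1:
                h += 1 if (b == 0 or b == a or a*a == q) else 2
            a += 1
        b += 2
    return h

def chi(D0, p):
    # Kronecker symbol (D0/p) for a prime p
    if p == 2:
        return 0 if D0 % 2 == 0 else (1 if D0 % 8 in (1, 7) else -1)
    r = pow(D0 % p, (p - 1)//2, p)
    return 0 if r == 0 else (1 if r == 1 else -1)

D0, F = -23, 6                          # sample fundamental discriminant, conductor
lhs = class_number(F*F*D0)              # h(Delta) with Delta = F^2 * D0
rhs = Fraction(class_number(D0) * F)    # h(D0) * F * prod_{p|F} (1 - chi(p)/p)
for p in (2, 3):                        # the primes dividing F
    rhs *= 1 - Fraction(chi(D0, p), p)

H = sum(class_number(f*f*D0) for f in (1, 2, 3, 6))   # Kronecker class number
bound = Fraction(3, 4) * Fraction(4, 5)               # prod_{p|F} (p+1)/(p+2)
print(lhs, rhs, H)
```

For these values $h(\Delta)/H(\Delta)$ indeed stays below the product bound \eqref{hoverH}.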
\end{proof} \section{Examples} \label{S:examples} In this section, we give three families of strata of abelian surfaces such that, in the notation of Theorem~\ref{T:sequence}, we do \emph{not} have $P_m \triplesim q_m^{3/2}$, but instead have $P_m \triplesim q_m^{5/4}$ (for the first family), $P_m \triplesim q_m$ (for the second), and $P_m \triplesim q_m^{1/2}$ (for the third). This shows that the trigonometric factors in Theorem~\ref{T:sequence} are essential. We repeatedly use the fact that if a polynomial of the shape $f = x^4 + ax^3 + bx^2 + aqx + q^2$ is irreducible and defines a CM~field, where $q$ is a power of a prime and where the middle coefficient $b$ is coprime to $q$, then $f$ is the Weil polynomial of an isogeny class of ordinary abelian surfaces over ${\mathbf F}_q$; see~\cite[\S~3]{Howe1995}. \begin{example} \label{EX:small} For every prime $p$ that is congruent to $7$ modulo~$8$, let $a_p$ be the largest integer less than $\sqrt{p} - 1$, and let $f_p$ be the polynomial \[f_p = x^4 - 2 a_p x^3 + (a_p^2 + p)x^2 - 2 a_p p x + p^2.\] We claim that $f_p$ is the Weil polynomial of a simple ordinary isogeny class over~${\mathbf F}_p$. Since the middle coefficient of $f_p$ is clearly coprime to~$p$, it will be enough for us to show that the algebra $K = {\mathbf Q}[x]/(f_p)$ is a CM~field. Let $\pi$ be the image of the polynomial variable $x$ in~$K$ and let $\bar{\pi} = p/\pi$. We check that $\alpha := \pi+\bar{\pi}$ satisfies $\alpha^2 - 2a_p\alpha + a_p^2 - p = 0$, so that $\alpha = a_p + s$ where $s^2 = p$. Therefore the algebra $K$ contains the quadratic field $K^{+} = {\mathbf Q}(\sqrt{p})$. In fact, $K$ is the extension of $K^{+}$ obtained by adjoining a root of $y^2 - \alpha y + p$, so to show that $K$ is a CM~field we just need to show that $\alpha^2 - 4p$ is totally negative. But this is clear, because under the two embeddings of $K^{+}$ into ${\mathbf R}$ the element $\alpha^2$ gets sent to real numbers smaller than~$4p$. 
Thus, $f_p$ is the Weil polynomial of a simple ordinary isogeny class ${\mathcal C}_p$ of abelian surfaces over~${\mathbf F}_p$. Let $R_p = {\mathbf Z}[\pi,\bar{\pi}]$ be the minimal ring for ${\mathcal C}_p$. As we have just seen, $R^{+}_p$ contains a square root of $p$ and is therefore the maximal order of~$K^{+}$. We claim that the extension $K/K^{+}$ is ramified at an odd prime. We prove this by contradiction: Since $K$ is obtained from $K^{+}$ by adjoining a square root of $\alpha^2-4p$, if $K/K^{+}$ were unramified at all odd primes then the norm of $\alpha^2 - 4p$ would be either a square or twice a square. We compute that \[ N_{K^{+}/{\mathbf Q}}(\alpha^2 - 4p) = (p - a_p^2)(9p - a_p^2), \] and since $a_p$ is coprime to $p$, the greatest common divisor of the two factors is a divisor of~$8$. Thus, if this norm were a square or twice a square, each factor would also be. But there is no integer $b$ such that $p = a_p^2 + b^2$ or $p = a_p^2 + 2b^2$, because $p\equiv 7 \bmod 8$. Therefore, $K/K^{+}$ is ramified at an odd prime. Let ${\mathcal S}_p$ be the minimal stratum of ${\mathcal C}_p$ and let $P_p$ be the number of isomorphism classes of principally polarized varieties in~${\mathcal S}_p$. Since $R_p$ is the minimal order of ${\mathcal C}_p$, it is convenient by Corollary~\ref{C:minimalR}. Since $R^{+}_p$ is the maximal order of $K^{+}$ it has trivial conductor, and since $K/K^{+}$ is ramified at an odd prime, Corollary~\ref{C:norm} tells us that the norm map $\Pic R_p\to\Picplus R^{+}_p$ is surjective. Then from Corollary~\ref{C:classgroup} we find that $P_p = h^{-}_{R_p}$, the minus class number of~$R_p$. 
As we noted at the beginning of the proof of Theorem~\ref{T:disc}, we have \[\big|\Delta_{R_p}\big| = \left|N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| \Delta^2_{R^{+}_p}.\] Since $\left|N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| = N_{K^{+}/{\mathbf Q}}(\alpha^2 - 4p),$ we find that $\big|\Delta_{R_p}\big| = (p - a_p^2)(9p - a_p^2) (4p)^2$ and \[\big|\Delta_{R_p}/\Delta_{R^{+}_p}\big| = 4 p (p - a_p^2)(9p - a_p^2).\] If we write $a_p = \sqrt{p} - \varepsilon$ for a real number $\varepsilon$ in the interval~$(1,2)$, then \[\big|\Delta_{R_p}/\Delta_{R^{+}_p}\big| = 4 p (2 \varepsilon \sqrt{p} - \varepsilon^2) (8p + 2 \varepsilon \sqrt{p} - \varepsilon^2),\] so we have \[ 32 p^{5/2} < \big|\Delta_{R_p}/\Delta_{R^{+}_p}\big| < 144 p^{5/2}. \] Thus, Theorem~\ref{T:B-S-orders} says that as $p\to\infty$ we have $P_p\triplesim p^{5/4}.$ \end{example} \begin{example} \label{EX:smaller} For every prime $p$ that is congruent to $7$ modulo~$8$, we let $f_p$ be the polynomial \[f_p = x^4 + x^3 + (2p-1)x^2 + px + p^2.\] Again we claim that the polynomial $f_p$ is the Weil polynomial of a simple ordinary isogeny class ${\mathcal C}_p$ over~${\mathbf F}_p$, and since its middle coefficient is visibly coprime to~$p$, all we must show is that the algebra $K = {\mathbf Q}[x]/(f_p)$ is a CM~field. Let $\pi$ be the image of the polynomial variable $x$ in~$K$, let $\bar{\pi} = p/\pi$, and let $\alpha = \pi+\bar{\pi}$. We calculate that $\alpha^2 + \alpha - 1 = 0$, so $K$ contains the quadratic field $K^{+} = {\mathbf Q}(\sqrt{5})$. We obtain $K$ from $K^{+}$ by adjoining a root of $y^2 - \alpha y + p$, and since the discriminant $\alpha^2 - 4p$ is totally negative (because the images of $\alpha^2$ in the real numbers are both smaller than~$3$), $K$ is a CM~field and our claim is verified. There are only two totally imaginary quadratic extensions of ${\mathbf Q}(\sqrt{5})$ that are not ramified at an odd prime, namely ${\mathbf Q}(\sqrt{5},\sqrt{-1})$ and~${\mathbf Q}(\sqrt{5},\sqrt{-2})$.
One checks that every prime of ${\mathbf Q}(\sqrt{5})$ lying over $p$ is inert in both of these extensions, because $p \equiv 7 \bmod 8$, so neither of these CM~fields contains an element $\beta$ with $\beta\bar{\beta} = p$. In particular, since $\pi\bar{\pi} = p$, we see that $K$ is neither of these fields, so $K/K^{+}$ is ramified at an odd prime. Let $R_p$ and ${\mathcal S}_p$ be the minimal ring and minimal stratum of ${\mathcal C}_p$, and let $P_p$ be the number of isomorphism classes of principally polarized varieties in~${\mathcal S}_p$. The ring $R_p$ is convenient by Corollary~\ref{C:minimalR}. The real order $R^{+}_p$ is maximal and so has trivial conductor, and since $K/K^{+}$ is ramified at an odd prime, we again find from Corollary~\ref{C:norm} that the norm map $\Pic R_p\to\Picplus R^{+}_p$ is surjective. The ramification of $K/K^{+}$ at an odd prime, together with Corollary~\ref{C:classgroup}, tells us that $P_p = h^{-}_{R_p}$. We have \[\big|\Delta_{R_p}\big| = \left|N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| \Delta^2_{R^{+}_p} = (16p^2 - 12p + 1) \cdot 25\] so that \[\big|\Delta_{R_p}/\Delta_{R^{+}_p}\big| = 5(16p^2 - 12p + 1).\] Theorem~\ref{T:B-S-orders} then tells us that as $p\to\infty$ we have $P_p\triplesim p$. \end{example} \begin{example} \label{EX:smallest} For every prime $p$ that is congruent to $7$ modulo~$8$, let $c_p$ be the largest integer less than $2\sqrt{p} - 1$, and let $f_p$ be the polynomial \[f_p = x^4 + (1 - 2c_p) x^3 + (2p + c_p^2 - c_p -1)x^2 + p(1 - 2c_p)x + p^2.\] Once more we claim that $f_p$ is the Weil polynomial of a simple ordinary isogeny class ${\mathcal C}_p$ over~${\mathbf F}_p$, and again we prove this by showing that the algebra $K = {\mathbf Q}[x]/(f_p)$ is a CM~field and that the middle coefficient of $f_p$ is coprime to~$p$. Let $\pi$ be the image of the polynomial variable $x$ in~$K$, let $\bar{\pi} = p/\pi$, and let $\alpha = \pi+\bar{\pi}$. 
We check that \[\alpha^2 - (2c_p-1)\alpha + (c_p^2 - c_p - 1) = 0,\] so once again we find that $K$ contains the quadratic field $K^{+} = {\mathbf Q}(\sqrt{5})$. In fact, $\alpha = c_p - \varphi$, where $\varphi \in K^{+}$ satisfies $\varphi^2 - \varphi - 1 = 0$. The algebra $K$ is obtained from $K^{+}$ by adjoining a square root of $\alpha^2 - 4p$. This quantity is totally negative, so $K$ is a CM~field. For $p<100$, explicit computation shows that the middle coefficient of $f_p$ is coprime to~$p$. For $p>100$, we write $c_p = 2\sqrt{p} - \varepsilon$ for a real number $\varepsilon$ in the interval $(1,2)$, and we compute that the middle coefficient of $f_p$ is equal to \[6p - (4\varepsilon+2)\sqrt{p} + \varepsilon^2 + \varepsilon - 1.\] This lies strictly between $5p$ and~$6p$, so the middle coefficient is not a multiple of~$p$ for these primes as well. Thus $f_p$ is the Weil polynomial of a simple ordinary isogeny class ${\mathcal C}_p$ of abelian surfaces over~${\mathbf F}_p$. Let $R_p$ be the minimal order of ${\mathcal C}_p$. Arguing as in Example~\ref{EX:smaller} we find that $R^{+}_p$ is the maximal order of~${\mathbf Q}(\sqrt{5})$ and that $K/K^{+}$ is ramified at an odd prime. If we let $P_p$ denote the number of isomorphism classes of principally polarized varieties in the minimal stratum of ${\mathcal C}_p$, then once again we have that $P_p = h^{-}_{R_p}$. Writing $c_p = 2\sqrt{p} - \varepsilon$ for some $\varepsilon\in(1,2)$, we compute that \begin{align*} \left| N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| &= N_{K^{+}/{\mathbf Q}}(\alpha^2 - 4p)\\ &= c_p^4 - 2c_p^3 - (8 p + 1) c_p^2 + (8 p + 2) c_p + (16 p^2 - 12 p + 1)\\ &= 16 (\varepsilon^2 + \varepsilon - 1) p - 4 (2\varepsilon^3 +3\varepsilon^2 - \varepsilon - 1)\sqrt{p} + (\varepsilon^4 + 2\varepsilon^3 - \varepsilon^2 - 2\varepsilon + 1). 
\end{align*} If we view this expression as a function of $\varepsilon$, we find that the extreme values of the function on the interval $[1,2]$ are attained at the endpoints, and it follows that \[ 16p - 12\sqrt{p} + 1 < \left| N_{K/{\mathbf Q}}(\pi-\bar{\pi})\right| < 80p - 100\sqrt{p} + 25. \] From this we see that for $p > 144$ we have \[75 p < \big|\Delta_{R_p}/\Delta_{R^{+}_p}\big| < 400 p,\] so Theorem~\ref{T:B-S-orders} says that as $p\to\infty$ we have $P_p\triplesim p^{1/2}$. \end{example} \begin{remark} In Example~\ref{EX:small}, one of the two Frobenius angles of the isogeny classes approaches $0$, while the other remains near $\pi/2$. In Example~\ref{EX:smaller}, the two Frobenius angles both approach $\pi/2$. And in Example~\ref{EX:smallest}, both Frobenius angles approach $0$. \end{remark} \nocite{DiPippoHowe2000} \bibliographystyle{hplaindoi}
\section{Introduction} The uncertainty principle states that a non-zero function and its Fourier transform cannot be simultaneously sharply localized. There are various ways of measuring localization of a function, and depending on it one can formulate different forms of the uncertainty principle. Uncertainty principles can be subdivided into quantitative and qualitative uncertainty principles. Quantitative uncertainty principles are special inequalities which give us information about how a function and its Fourier transform relate. For example, Benedicks \cite{ben85}, Donoho and Stark \cite{don89}, and Slepian and Pollak \cite{sle61} gave quantitative uncertainty principles for the Fourier transform. Qualitative uncertainty principles imply the vanishing of a function under some strong conditions on the function. For example, the Hardy \cite{har33}, Morgan \cite{mor34}, Cowling and Price \cite{cow83}, and Beurling \cite{hor91} theorems are qualitative uncertainty principles. More precisely, Hardy \cite{har33} obtained the following uncertainty principle concerning the decay of a measurable function and its Fourier transform at infinity. \begin{theorem}[Hardy]\label{Hardy} Let $f$ be a measurable function on $\mathbb R$ such that \[ |f(x)| \leq C e^{-a x^2} \qquad \mathrm{and} \qquad |\hat{f}(\xi)| \leq C e^{-b \xi^2} \] for some constants $a, b, C > 0$. Then three cases can occur. \begin{enumerate} \item[(i)] If $ab> \frac{1}{4}$, then $f = 0$ almost everywhere. \item[(ii)] If $ab=\frac{1}{4}$, then the function $f$ is of the form $f(x)=C e^{-a x^2}$, for some constant $C$. \item[(iii)] If $ab<\frac{1}{4}$, then any finite linear combination of Hermite functions satisfies these decay conditions. \end{enumerate} \end{theorem} Cowling and Price \cite{cow83} generalized this theorem by replacing the pointwise Gaussian bound on $f$ by a Gaussian bound in the $L^p$ sense, and that on $\hat{f}$ by a Gaussian bound in the $L^q$ sense. More precisely, they proved the following theorem.
\begin{theorem}[Cowling--Price]\label{Cow-Pri} Let $f$ be a measurable function on $\mathbb R$ such that \begin{enumerate} \item[(i)] $\Vert e^{a x^2} f \Vert_p < \infty,$ \item[(ii)] $\Vert e^{b \xi^2} \hat{f} \Vert_q < \infty,$ \end{enumerate} where $a, b>0$ and $1 \leq p, q \leq \infty$ such that $\min(p, q)$ is finite. If $ab \geq \frac{1}{4}$, then $f=0$ almost everywhere. If $ab < \frac{1}{4}$, then there exist infinitely many linearly independent functions satisfying $\mathrm{(i)}$ and $\mathrm{(ii)}$. \end{theorem} Over the years, Hardy's theorem has been extended to many different settings (see \cite{tha04}). By replacing the Gaussian function $e^{a x^2}$ in Hardy's theorem by the function $e^{a |x|^\alpha}$ where $\alpha > 2$, Morgan \cite{mor34} obtained the following uncertainty principle. \begin{theorem}[Morgan]\label{mor} Let $a>0$, $b>0$, and let $\alpha, \beta$ be positive real numbers satisfying $\alpha >2$ and $1/\alpha+1/\beta=1$. Suppose that $f$ is a measurable function on $\mathbb R$ such that \[ e^{a|x|^\alpha} f \in L^\infty(\mathbb R) \quad \text{and} \quad e^{b|\lambda|^\beta} \hat{f} \in L^\infty(\mathbb R). \] If $ (a \alpha)^{1/\alpha} (b \beta)^{1/\beta} > \left( \sin \left( \frac{\pi}{2}(\beta -1 ) \right) \right)^{1/\beta}, $ then $f = 0$ almost everywhere. \end{theorem} A generalization of this theorem was obtained by Ben Farah and Mokni \cite{far03}, who proved an $L^p$ -- $L^q$-version of Morgan's theorem. For a more detailed study of uncertainty principles, we refer to the book of Havin and J\"oricke \cite{hav94}. Considerable attention has been devoted to generalizing the Cowling--Price, Hardy, and Morgan uncertainty principles to new contexts. For instance, these theorems were obtained in \cite{mej16} for the generalized Fourier transform and in \cite{ray04} for symmetric spaces.
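The borderline case $ab = \frac{1}{4}$ in Hardy's theorem can be illustrated numerically. The sketch below assumes the convention $\hat{f}(\xi)=\int_{\mathbb R} f(x)e^{-ix\xi}\,dx$ (the statements above leave the normalization implicit), under which the transform of $f(x)=e^{-ax^2}$ is $\sqrt{\pi/a}\,e^{-\xi^2/(4a)}$, so the choice $b = \frac{1}{4a}$ gives exactly $ab = \frac{1}{4}$, matching case (ii).

```python
import numpy as np

a = 1.0
xi = 2.0
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
f = np.exp(-a*x**2)                       # |f(x)| <= e^{-a x^2}
fhat = np.sum(f*np.exp(-1j*x*xi)) * dx    # Riemann sum for fhat(xi)
expected = np.sqrt(np.pi/a)*np.exp(-xi**2/(4*a))
b = 1/(4*a)                               # then a*b = 1/4
print(abs(fhat - expected))
```

The printed difference is at the level of floating-point rounding, confirming the Gaussian pair sits precisely on the Hardy threshold.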
Also, an $L^p$ version of Hardy's theorem was obtained for the Dunkl transform in \cite{gal04} and for motion groups in \cite{egu00}. As a generalization of Euclidean uncertainty principles for the Fourier transform, Daher et al. \cite{dah12} have obtained some uncertainty principles for the Cherednik transform. These theorems were further extended to the Opdam--Cherednik transform in \cite{mej014} using classical uncertainty principles for the Fourier transform and composition properties of the Opdam--Cherednik transform. However, to the best of our knowledge, these types of uncertainty principles have not been studied in the setting of modulation spaces. In this paper, we prove the Cowling--Price, Hardy, and Morgan uncertainty principles for the Opdam--Cherednik transform on modulation spaces associated with this transform. The motivation to prove these uncertainty principles for the Opdam--Cherednik transform on modulation spaces arises from the classical uncertainty principles for the Fourier transform on the Lebesgue spaces. Over the last decade, modulation spaces have proved very fruitful in various current trends of investigation (e.g., pseudo-differential operators, partial differential equations, etc.) and have been widely used in several fields in analysis, physics and engineering. Uncertainty principles have implications in two main areas, quantum mechanics and signal analysis, and modulation spaces are widely used in both. We hope that the study of uncertainty principles for modulation spaces will make a significant impact in these areas. Another important motivation to study the Jacobi--Cherednik operators arises from their relevance in the algebraic description of exactly solvable quantum many-body systems of Calogero--Moser--Sutherland type (see \cite{die00, hik96}), and they provide a useful tool in the study of special functions with root systems (see \cite{dun92, hec91}).
These describe algebraically integrable systems in one dimension and have gained considerable interest in mathematical physics. A further motivation for investigating the Jacobi--Cherednik operator and the Opdam--Cherednik transform is to generalize the previous subjects, which are closely bound up with physics. For a more detailed discussion, we refer to \cite{mej16}. Since modulation spaces are much larger than the Lebesgue spaces, can we determine the functions $f$ satisfying the conditions of Theorem \ref{Hardy}, \ref{Cow-Pri}, or \ref{mor} in the setting of modulation spaces? In this paper, we answer these questions. The common key to obtaining extensions of uncertainty principles for the Opdam--Cherednik transform is a slice formula, that is, this transform is decomposed as a composition of the classical Fourier transform and the Jacobi--Cherednik intertwining operator (see \cite{mej014}). However, without using a slice formula for the Opdam--Cherednik transform, we give analogues of the uncertainty principles within the framework of the Opdam--Cherednik transform by using an estimate of the heat kernel, which was obtained in \cite{fit89}. Here, we consider the modulation spaces associated with the Opdam--Cherednik transform, as the standard modulation spaces are not suited to this transform. In this paper, we prove the uncertainty principles by using the properties of the heat kernel associated with the Jacobi--Cherednik operator and versions of the Phragm{\'e}n--Lindel{\"o}f type result for the modulation spaces. The two lemmas, Lemma \ref{lem4} for Cowling--Price's theorem and Lemma \ref{lem2} for Morgan's theorem, are essential. Since these lemmas hold for the modulation spaces associated with the Opdam--Cherednik transform, which also satisfy H\"older's inequality, we can apply the classical process to obtain uncertainty principles for the Opdam--Cherednik transform on modulation spaces. The paper is organized as follows.
In Section \ref{sec2}, we recall some basic facts about the Jacobi--Cherednik operator and we discuss the main results for the Opdam--Cherednik transform. We also give some properties of the heat kernel associated with the Jacobi--Cherednik operator. In Section \ref{sec3}, we discuss the modulation spaces associated with the Opdam--Cherednik transform. In Section \ref{sec4}, we give the Phragm{\'e}n--Lindel{\"o}f type result for the modulation spaces and using it we prove an $M^p$ -- $M^q$-version of Cowling--Price's theorem for the Opdam--Cherednik transform. In Section \ref{sec5}, an analogue of the classical Hardy's theorem is obtained for the Opdam--Cherednik transform on modulation spaces associated with this transform. Finally, in Section \ref{sec6}, we obtain another version of the Phragm{\'e}n--Lindel{\"o}f type result for the modulation spaces and we prove an $M^p$ -- $M^q$-version of Morgan's theorem for the Opdam--Cherednik transform. \section{Harmonic analysis and the Opdam--Cherednik transform}\label{sec2} In this section, we collect the necessary definitions and results from the harmonic analysis related to the Opdam--Cherednik transform. The main references for this section are \cite{and15, mej14, opd95, opd00, sch08}. However, we will use the same notation as in \cite{joh15}. Let $T_{\alpha, \beta}$ denote the Jacobi--Cherednik differential--difference operator (also called the Dunkl--Cherednik operator) \[T_{\alpha, \beta} f(x)=\frac{d}{dx} f(x)+ \Big[ (2\alpha + 1) \coth x + (2\beta + 1) \tanh x \Big] \frac{f(x)-f(-x)}{2} - \rho f(-x), \] where $\alpha, \beta$ are two parameters satisfying $\alpha \geq \beta \geq -\frac{1}{2}$ and $\alpha > -\frac{1}{2}$, and $\rho= \alpha + \beta + 1$. Let $\lambda \in {\mathbb C}$.
The Opdam hypergeometric functions $G^{\alpha, \beta}_\lambda$ on $\mathbb R$ are eigenfunctions $T_{\alpha, \beta} G^{\alpha, \beta}_\lambda(x)=i \lambda G^{\alpha, \beta}_\lambda(x)$ of $T_{\alpha, \beta}$ that are normalized such that $G^{\alpha, \beta}_\lambda(0)=1$. The eigenfunction $G^{\alpha, \beta}_\lambda$ is given by \[G^{\alpha, \beta}_\lambda (x)= \varphi^{\alpha, \beta}_\lambda (x) - \frac{1}{\rho - i \lambda} \frac{d}{dx}\varphi^{\alpha, \beta}_\lambda (x)=\varphi^{\alpha, \beta}_\lambda (x)+ \frac{\rho+i \lambda}{4(\alpha+1)} \sinh 2x \; \varphi^{\alpha+1, \beta+1}_\lambda (x), \] where $\varphi^{\alpha, \beta}_\lambda (x)={}_2F_1 \left(\frac{\rho+i \lambda}{2}, \frac{\rho-i \lambda}{2} ; \alpha+1; -\sinh^2 x \right) $ is the classical Jacobi function. For every $ \lambda \in {\mathbb C}$ and $x \in \mathbb R$, the eigenfunction $G^{\alpha, \beta}_\lambda$ satisfies \[ |G^{\alpha, \beta}_\lambda(x)| \leq C \; e^{-\rho |x|} e^{|\text{Im} (\lambda)| |x|},\] where $C$ is a positive constant. Since $\rho > 0$, we have \begin{equation}\label{eq1} |G^{\alpha, \beta}_\lambda(x)| \leq C \; e^{|\text{Im} (\lambda)| |x|}. \end{equation} Let us denote by $C_c (\mathbb R)$ the space of continuous functions on $\mathbb R$ with compact support. \begin{Def} Let $\alpha \geq \beta \geq -\frac{1}{2}$ with $\alpha > -\frac{1}{2}$. The Opdam--Cherednik transform $\mathcal{H} f$ of a function $f \in C_c(\mathbb R)$ is defined by \[ {\mathcal H} f (\lambda)=\int_{\mathbb R} f(x)\; G^{\alpha, \beta}_\lambda(-x)\; A_{\alpha, \beta} (x) dx \quad \text{for all } \lambda \in {\mathbb C}, \] where $A_{\alpha, \beta} (x)= (\sinh |x| )^{2 \alpha+1} (\cosh |x| )^{2 \beta+1}$.
The inverse Opdam--Cherednik transform for a suitable function $g$ on $\mathbb R$ is given by \[ {\mathcal H}^{-1} g(x)= \int_{\mathbb R} g(\lambda)\; G^{\alpha, \beta}_\lambda(x)\; d\sigma_{\alpha, \beta}(\lambda) \quad \text{for all } x \in \mathbb R, \] where $$d\sigma_{\alpha, \beta}(\lambda)= \left(1- \dfrac{\rho}{i \lambda} \right) \dfrac{d \lambda}{8 \pi |C_{\alpha, \beta}(\lambda)|^2}$$ and $$C_{\alpha, \beta}(\lambda)= \dfrac{2^{\rho - i \lambda} \Gamma(\alpha+1) \Gamma(i \lambda)}{\Gamma \left(\frac{\rho + i \lambda}{2}\right)\; \Gamma\left(\frac{\alpha - \beta+1+i \lambda}{2}\right)}, \quad \lambda \in {\mathbb C} \setminus i \mathbb{N}.$$ \end{Def} The Plancherel formula is given by \begin{equation}\label{eq03} \int_{\mathbb R} |f(x)|^2 A_{\alpha, \beta}(x) dx=\int_\mathbb R {\mathcal H} f(\lambda) \overline{{\mathcal H} \check{f}(-\lambda)} \; d \sigma_{\alpha, \beta} (\lambda), \end{equation} where $\check{f}(x):=f(-x)$. Let $L^p(\mathbb R,A_{\alpha, \beta} )$ (resp. $L^p(\mathbb R, \sigma_{\alpha, \beta} )$), $p \in [1, \infty] $, denote the $L^p$-spaces corresponding to the measure $A_{\alpha, \beta}(x) dx$ (resp. $d | \sigma_{\alpha, \beta} |(x)$). The Schwartz space ${\mathcal S}_{\alpha, \beta}(\mathbb R )=(\cosh x )^{-\rho} {\mathcal S}(\mathbb R)$ is defined as the space of all differentiable functions $f$ such that $$ \sup_{x \in \mathbb R} \; (1+|x|)^m e^{\rho |x|} \left|\frac{d^n}{dx^n} f(x) \right|<\infty,$$ for all $m, n \in {\mathbb N}_0 = {\mathbb N} \cup \{0\}$, equipped with the obvious seminorms. The Opdam--Cherednik transform ${\mathcal H}$ and its inverse ${\mathcal H}^{-1}$ are topological isomorphisms between the space ${\mathcal S}_{\alpha, \beta}(\mathbb R )$ and the space ${\mathcal S}(\mathbb R)$ (see \cite{sch08}, Theorem 4.1). Let $t > 0$. 
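The eigenfunction relation $T_{\alpha, \beta} G^{\alpha, \beta}_\lambda = i\lambda G^{\alpha, \beta}_\lambda$ stated above can be spot-checked numerically. The sketch below uses illustrative parameter values and sums the ${}_2F_1$ series directly, which is valid here because $|{-}\sinh^2 x| < 1$ at the chosen point; the derivative in $T_{\alpha,\beta}$ is approximated by a central difference.

```python
import math

alpha, beta, lam = 1.0, 0.5, 1.3     # sample parameters, alpha >= beta >= -1/2
rho = alpha + beta + 1

def hyp2f1(a, b, c, z, terms=400):
    # Gauss hypergeometric series; converges here since |z| < 1
    s = term = 1.0 + 0j
    for k in range(terms):
        term *= (a + k)*(b + k)/((c + k)*(k + 1))*z
        s += term
    return s

def phi(x, a, b):
    # classical Jacobi function with parameters (a, b)
    r = a + b + 1
    return hyp2f1((r + 1j*lam)/2, (r - 1j*lam)/2, a + 1, -math.sinh(x)**2)

def G(x):
    # Opdam hypergeometric function G_lambda^{(alpha, beta)}(x)
    return (phi(x, alpha, beta)
            + (rho + 1j*lam)/(4*(alpha + 1))*math.sinh(2*x)*phi(x, alpha + 1, beta + 1))

def T(f, x, h=1e-5):
    # Jacobi--Cherednik operator, with f'(x) by central difference
    fp = (f(x + h) - f(x - h))/(2*h)
    return (fp
            + ((2*alpha + 1)/math.tanh(x) + (2*beta + 1)*math.tanh(x))
              * (f(x) - f(-x))/2
            - rho*f(-x))

x0 = 0.7
err = abs(T(G, x0) - 1j*lam*G(x0))
print(err)
```

The printed residual is limited only by the finite-difference step, consistent with $G^{\alpha,\beta}_\lambda$ being an eigenfunction.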
The heat kernel $E^{\alpha, \beta}_t$ associated with the Jacobi--Cherednik operator is defined by \begin{equation}\label{eq04} E^{\alpha, \beta}_t(x)={\mathcal H}^{-1}(e^{-t \lambda^2})(x) \quad \text{for all } x \in \mathbb R. \end{equation} For all $t > 0$, $E^{\alpha, \beta}_t$ is a $C^\infty$-function on $\mathbb R$. Moreover, for all $t > 0$ and all $\lambda \in \mathbb R$, we have \begin{equation}\label{eq05} {\mathcal H} (E^{\alpha, \beta}_t) (\lambda)=e^{-t \lambda^2}. \end{equation} We refer to \cite{cho03} for further properties of the heat kernel $E^{\alpha, \beta}_t$. From (\cite{fit89}, Theorem 3.1), there exist two real numbers $\mu_1$ and $\mu_2$ such that \begin{equation}\label{eq06} \frac{e^{\mu_1 t}}{2^{2\alpha+1} \Gamma(\alpha+1) t^{\alpha+1}} \frac{e^{-\frac{x^2}{4t}}}{\sqrt{B_{\alpha, \beta}(x)}} \leq E^{\alpha, \beta}_t(x) \leq \frac{e^{\mu_2 t}}{2^{2\alpha+1} \Gamma(\alpha+1) t^{\alpha+1}} \frac{e^{-\frac{x^2}{4t}}}{\sqrt{B_{\alpha, \beta}(x)}}, \quad \forall x \in \mathbb R, \end{equation} where $B_{\alpha, \beta} (x)= (\sinh |x|/|x| )^{2 \alpha+1} (\cosh |x| )^{2 \beta+1}$ for all $x \in \mathbb R \setminus \{0\}$ and $B_{\alpha, \beta}(0)=1$. Also, we have $A_{\alpha, \beta}(x)= |x|^{2\alpha+1} B_{\alpha, \beta}(x)$ and $B_{\alpha, \beta} (x) \geq 1$ for all $x \in \mathbb R$. \section{Modulation spaces associated with the Opdam--Cherednik transform}\label{sec3} The modulation spaces were introduced by Feichtinger \cite{fei03, fei97} by imposing integrability conditions on the short-time Fourier transform (STFT) of tempered distributions. More specifically, for $x, w \in \mathbb R$, let $T_x$ and $M_w$ denote the translation and modulation operators, given by $T_x g(t)=g(t-x)$ and $M_w g(t)=e^{2 \pi i w t} g(t)$. Then the STFT of a function $f$ with respect to a window function $g \in {\mathcal S}(\mathbb R)$ is defined by \[ V_g f (x,w)=\langle f, M_w T_x g \rangle=\int_{\mathbb R} f(t) \overline{g(t-x)} e^{-2 \pi i w t} dt, \quad (x,w) \in \mathbb R^2.
\] Here we are interested in modulation spaces with respect to the measure $A_{\alpha, \beta}(x) dx$. \begin{Def} Fix a non-zero window $g \in \mathcal{S}(\mathbb R)$ and $1 \leq p,q \leq \infty$. Then the modulation space $M^{p,q}(\mathbb R, A_{\alpha, \beta})$ consists of all tempered distributions $f \in \mathcal{S'}(\mathbb R)$ such that $V_g f \in L^{p,q}(\mathbb R^2, A_{\alpha, \beta})$. The norm on $M^{p,q}(\mathbb R, A_{\alpha, \beta})$ is \begin{eqnarray*} \Vert f \Vert_{M^{p,q}(\mathbb R, A_{\alpha, \beta})} &=& \Vert V_g f \Vert_{L^{p,q}(\mathbb R^2, A_{\alpha, \beta})} \\ &=& \bigg( \int_{\mathbb R} \bigg( \int_{\mathbb R} |V_gf(x,w)|^p A_{\alpha, \beta}(x) dx \bigg)^{q/p} A_{\alpha, \beta}(w) dw \bigg)^{1/q} < \infty, \end{eqnarray*} with the usual adjustments if $p$ or $q$ is infinite. If $p=q$, then we write $M^p(\mathbb R, A_{\alpha, \beta})$ instead of $M^{p,p}(\mathbb R, A_{\alpha, \beta})$. Also, we denote by $M^p(\mathbb R, \sigma_{\alpha, \beta})$ the modulation space corresponding to the measure $d |\sigma_{\alpha, \beta}|(x)$ and by $M^p(\mathbb R)$ the modulation space corresponding to the Lebesgue measure $dx$. \end{Def} The definition of $M^{p, q}(\mathbb R, A_{\alpha, \beta})$ is independent of the choice of $g$, in the sense that each different choice of $g$ defines an equivalent norm on $M^{p, q}(\mathbb R, A_{\alpha, \beta})$. Each modulation space is a Banach space. For $p=q=2$, we have $M^2(\mathbb R, A_{\alpha, \beta}) =L^2(\mathbb R, A_{\alpha, \beta})$. For other values of $p=q$, the space $M^p(\mathbb R, A_{\alpha, \beta})$ does not coincide with $L^p(\mathbb R, A_{\alpha, \beta})$; in fact, for $p=q>2$, the space $M^p(\mathbb R, A_{\alpha, \beta})$ is a superset of $L^2(\mathbb R, A_{\alpha, \beta})$. We have the following inclusions: \[ \mathcal{S}(\mathbb R) \subset M^1(\mathbb R, A_{\alpha, \beta}) \subset M^2(\mathbb R, A_{\alpha, \beta})=L^2(\mathbb R, A_{\alpha, \beta}) \subset M^\infty(\mathbb R, A_{\alpha, \beta}) \subset \mathcal{S'}(\mathbb R).
\] In particular, we have $M^p(\mathbb R, A_{\alpha, \beta}) \hookrightarrow L^p(\mathbb R, A_{\alpha, \beta})$ for $1 \leq p \leq 2$, and $L^p(\mathbb R, A_{\alpha, \beta}) \hookrightarrow M^p(\mathbb R, A_{\alpha, \beta})$ for $2 \leq p \leq \infty$. Furthermore, the dual of a modulation space is again a modulation space: if $p < \infty$ and $q < \infty$, then $(M^{p, q}(\mathbb R, A_{\alpha, \beta}))^{'} =M^{p', q'}(\mathbb R, A_{\alpha, \beta})$, where $p'$ and $q'$ denote the dual exponents of $p$ and $q$, respectively. We refer to Gr\"ochenig's book \cite{gro01} for further properties and uses of modulation spaces. \section{Cowling--Price's theorem for the Opdam--Cherednik transform}\label{sec4} We begin this section with the following lemma, which we need for the proof of the next result. \begin{lemma}\label{lem3} If $f(t)=1$ and $g(t)=e^{-\pi t^2}$, then \[ V_g f (x, w) =e^{- 2 \pi i w x} \; e^{-\pi w^2}. \] Also, for $p\in [1, \infty)$ and $\rho, \sigma >0$, we have \[ \|f\|_{M^p([\rho \sigma, \; \rho (\sigma+1)])} \leq \rho^{\frac{2}{p}}. \] \end{lemma} \begin{proof} We have \begin{eqnarray*} V_g f (x, w) &=& \int_{\mathbb R} e^{-\pi (t-x)^2} e^{- 2 \pi i w t} dt \\ &=& \int_{\mathbb R} e^{-\pi s^2} e^{- 2 \pi i w (s+x)} ds \\ &=& e^{- 2 \pi i w x} \int_{\mathbb R} e^{-\pi s^2} e^{- 2 \pi i w s} ds \\ &=& e^{- 2 \pi i w x} \; e^{-\pi w^2}. \end{eqnarray*} Also, we have \begin{eqnarray*} \|f\|_{M^p([\rho \sigma, \; \rho (\sigma+1)])} &=& \| V_g f \|_{L^p([\rho \sigma, \; \rho (\sigma+1)]\times [\rho \sigma, \; \rho (\sigma+1)])} \\ &=& \left( \int_{\rho \sigma}^{\rho (\sigma+1)} \int_{\rho \sigma}^{\rho (\sigma+1)} e^{- \pi p w^2} dx dw \right)^{\frac{1}{p}}\\ & \leq & \left( \int_{\rho \sigma}^{\rho (\sigma+1)} \int_{\rho \sigma}^{\rho (\sigma+1)} dx dw \right)^{\frac{1}{p}}\\ &=& \rho^{\frac{2}{p}}. \end{eqnarray*} \end{proof} We obtain the following lemma of Phragm{\'e}n--Lindl{\"o}f type using the same technique as in \cite{cow83}.
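The closed form in Lemma \ref{lem3} can be checked numerically; the sketch below (not part of the paper's argument, with an arbitrary test point and grid) evaluates the defining integral by a Riemann sum and compares it with $e^{-2\pi i w x}\,e^{-\pi w^2}$:

```python
import numpy as np

# V_g f(x, w) for f = 1 and the Gaussian window g(t) = exp(-pi t^2):
#   V_g f(x, w) = \int exp(-pi (t - x)^2) exp(-2 pi i w t) dt
x, w = 0.7, 0.3                        # arbitrary test point
t = np.linspace(-12.0, 12.0, 240001)   # fine grid; the integrand is ~0 outside
integrand = np.exp(-np.pi * (t - x) ** 2) * np.exp(-2j * np.pi * w * t)
numeric = integrand.sum() * (t[1] - t[0])   # Riemann sum; very accurate here

closed_form = np.exp(-2j * np.pi * w * x) * np.exp(-np.pi * w ** 2)
assert abs(numeric - closed_form) < 1e-8
```

The modulus of the result is $e^{-\pi w^2}$, independent of $x$, which is what makes the second estimate in the lemma work.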
This lemma plays a crucial role in the proof of the next theorem, which is an $M^p$ -- $M^q$-version of Cowling--Price's theorem for the Opdam--Cherednik transform. An $L^p$-version of the following lemma was proved in \cite{cow83}; here we prove the lemma for the modulation space $M^p(\mathbb R, \sigma_{\alpha, \beta})$ and obtain a different estimate. \begin{lemma}\label{lem4} Suppose that $g$ is analytic in the region $ Q = \{re^{i \theta} : r>0, \;0 < \theta < \frac{\pi}{2}\}$ and continuous on the closure $\bar{Q}$ of $Q$. Suppose also that for $p\in [1, \infty)$ and constants $A, a > 0$, \[|g(x + iy)| \leq A \; e^{ax^2} \quad \text{for} \;\; x + iy \in \bar{Q} , \] and \[ \| g_{|\mathbb R} \|_{M^p(\mathbb R, \sigma_{\alpha, \beta})} \leq A. \] Then \[ \int_\sigma^{\sigma+1} |g(\rho e^{i \psi} )| \; d\rho \leq A \; \max \left\{e^a , (\sigma + 1)^{\frac{2}{p}-1} \right\} \] for $\psi \in [0, \frac{\pi}{2}]$ and $\sigma \in \mathbb R^+$. \end{lemma} \begin{proof} Using the definition of $M^p(\mathbb R, \sigma_{\alpha, \beta})$ and the fact that there is a constant $k_1 > 0$ such that $|C_{\alpha, \beta} (\lambda)|^{-2} \geq k_1 |\lambda|^{2 \alpha +1}$ for all $\lambda \in \mathbb R$ with $|\lambda| \geq 1$ (see \cite{tri97}, page 157), we get \begin{eqnarray*} && \| g_{|\mathbb R} \|^p_{M^p(\mathbb R, \sigma_{\alpha, \beta})} = \| V_h g \|^p_{L^p(\mathbb R^2, \sigma_{\alpha, \beta})} \\ && \geq \int_{|\mu| \geq 1} \int_{|\lambda| \geq 1} |V_h g(\lambda, \mu)|^p \; d |\sigma_{\alpha, \beta}|(\lambda) \; d |\sigma_{\alpha, \beta}|(\mu) \\ && = \int_{|\mu| \geq 1} \int_{|\lambda| \geq 1} |V_h g(\lambda, \mu)|^p \;\left|1-\frac{\rho}{i\lambda}\right| \frac{d\lambda}{8 \pi |C_{\alpha, \beta} (\lambda)|^2} \; \left|1-\frac{\rho}{i\mu}\right| \frac{d\mu}{8 \pi |C_{\alpha, \beta} (\mu)|^2} \\ && \geq \frac{1}{64\pi^2} \int_{|\mu| \geq 1} \int_{|\lambda| \geq 1} |V_h g(\lambda, \mu)|^p \; \frac{d\lambda}{|C_{\alpha, \beta} (\lambda)|^2} \; \frac{d\mu}{|C_{\alpha, \beta}
(\mu)|^2}\\ && \geq \frac{k_1^2}{64\pi^2} \int_{|\mu| \geq 1} \int_{|\lambda| \geq 1} |V_h g(\lambda, \mu)|^p |\lambda|^{2 \alpha +1} d\lambda \; |\mu|^{2 \alpha +1} d\mu \\ && \geq \frac{k_1^2}{64\pi^2} \int_{|\mu| \geq 1} \int_{|\lambda| \geq 1} |V_h g(\lambda, \mu)|^p \; d\lambda d\mu, \end{eqnarray*} where $h \in \mathcal{S}(\mathbb R)$. This shows that $\| g_{|\mathbb R} \|_{M^p(\mathbb R)} \leq A$. Now, we define a function $f$ on $\bar{Q}$ by $ f(z)=g(z) \exp (i \varepsilon e^{i \varepsilon} z^{\frac{(\pi - 2 \varepsilon)}{\theta}}+i a \cot (\theta) z^2/2 )$ for $\theta \in (0, \pi/2)$ and $\varepsilon \in (0, \pi/2-\theta)$. We apply the same method as in \cite{cow83} to the function $f$ to get the estimate. Using H{\"o}lder's inequality for $M^p$ and Lemma \ref{lem3}, for $\rho > (\sigma+1)^{-1}$ we obtain \begin{eqnarray*} \int_\sigma^{\sigma+1} |f(\rho \tau)| d \tau = \frac{1}{\rho} \int_{\rho \sigma}^{\rho (\sigma+1)} |f( \tau)| d\tau &\leq & \frac{1}{\rho}\; \|f\|_{M^p([\rho \sigma, \; \rho (\sigma+1)])} \|1 \|_{M^q([\rho \sigma, \; \rho (\sigma+1)])} \\ & \leq & \rho^{\frac{2}{q}-1} \|g\|_{M^p([\rho \sigma, \; \rho (\sigma+1)])} \\ & \leq & A \; (\sigma+1)^{\frac{2}{p}-1}. \end{eqnarray*} The rest of the proof now follows as in \cite{cow83}. \end{proof} \begin{theorem}\label{th2} Let $1 \leq p, q \leq \infty$ with at least one of them finite. Suppose that $f$ is a measurable function on $\mathbb R$ such that \begin{eqnarray}\label{eq5} e^{a x^2} f \in M^p(\mathbb R, A_{\alpha, \beta}) \quad \text{and} \quad e^{b \lambda^2} {\mathcal H} f \in M^q(\mathbb R, \sigma_{\alpha, \beta}), \end{eqnarray} for some constants $a, b > 0$. Then the following conclusions hold: \begin{enumerate} \item[(i)] If $ab \geq \frac{1}{4}$, then $f = 0$ almost everywhere. \item[(ii)] If $ab < \frac{1}{4}$, then for all $t \in (b, \frac{1}{4a})$, the functions $f = E^{\alpha, \beta}_t$ satisfy the relations $(\ref{eq5})$.
\end{enumerate} \end{theorem} \begin{proof} We divide the proof into several steps. Step 1: Assume that $ab > \frac{1}{4}$. The function \[ {\mathcal H} f (\lambda)=\int_{\mathbb R} f(x)\; G^{\alpha, \beta}_\lambda(-x)\; A_{\alpha, \beta} (x) dx, \quad \text{for any } \lambda \in {\mathbb C}, \] is well defined, entire on ${\mathbb C}$, and satisfies the condition \begin{eqnarray}\label{eq6} |{\mathcal H} f (\lambda)| & \leq & \int_{\mathbb R} |f(x)|\; |G^{\alpha, \beta}_\lambda(-x)|\; A_{\alpha, \beta} (x) dx \nonumber \\ & \leq & C \int_{\mathbb R} |f(x)|\; e^{|\mathrm{Im}(\lambda)| |x|}\; A_{\alpha, \beta} (x) dx, \quad \; \text{by } (\ref{eq1}), \nonumber \\ & = & C \; e^{\frac{|\mathrm{Im}(\lambda)|^2}{4a}} \int_{\mathbb R} e^{a x^2} |f(x)|\; e^{- a \left(x - \frac{|\mathrm{Im}(\lambda)|}{2a}\right)^2 }\; A_{\alpha, \beta} (x) dx, \;\; \text{so by H\"older's inequality}, \nonumber \\ & \leq & C \; e^{ \frac{(\mathrm{Im}(\lambda))^2}{4a}} \; \Big\Vert e^{a x^2} f \Big\Vert_{M^p(\mathbb R, A_{\alpha, \beta})} \; \Big\Vert e^{- a \left(x - \frac{|\mathrm{Im}(\lambda)|}{2a} \right)^2} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})}, \end{eqnarray} where $p'$ is the conjugate exponent of $p$. We consider the function $g$ defined on ${\mathbb C}$ by \begin{equation}\label{eq7} g(\lambda)=e^{\frac{\lambda^2}{4a}} \; {\mathcal H} f (\lambda) . \end{equation} Then $g$ is an entire function on ${\mathbb C}$ and, using (\ref{eq6}), we obtain that there exists a constant $A$ such that \begin{equation}\label{eq8} |g(\lambda)| \leq A \; e^{\frac{(\mathrm{Re}(\lambda))^2}{4a}}, \quad \text{for all } \lambda \in {\mathbb C}. \end{equation} In the following we consider two cases. (i) Let $q<\infty$.
Using the inequality $ab > \frac{1}{4}$ and the hypothesis (\ref{eq5}), we obtain \begin{equation}\label{eq9} \| g_{|\mathbb R} \|_{M^q(\mathbb R, \sigma_{\alpha, \beta})}=\left\| e^{b \lambda^2} \; {\mathcal H} f \; e^{\left(\frac{1}{4a} - b\right)\lambda^2} \right\|_{M^q(\mathbb R, \sigma_{\alpha, \beta})} \leq \left\| e^{b \lambda^2} \; {\mathcal H} f \right\|_{M^q(\mathbb R, \sigma_{\alpha, \beta})} \leq A. \end{equation} By applying Lemma \ref{lem4} to the functions $g(\lambda), \; g(-\lambda), \; \overline{g(\overline{\lambda})}$ and $\overline{g(-\overline{\lambda})}$, we get that for $\psi \in [0, 2\pi]$ and large $\sigma$, \[ \int_\sigma^{\sigma+1} |g(\rho e^{i \psi} )| \; d\rho \leq B (\sigma + 1)^{\frac{2}{q}-1}, \] for some constant $B$. Now, by Cauchy's integral formula, \[ |g^{(n)} (0)| \leq n! (2 \pi)^{-1} \int_0^{2 \pi} |g(\rho e^{i \psi} )| \; \rho^{-n} d\psi. \] Consequently, for large $\sigma$, \begin{eqnarray}\label{eq10} |g^{(n)} (0)| & \leq & n! (2 \pi)^{-1} \int_0^{2 \pi} \left( \int_\sigma^{\sigma+1} |g(\rho e^{i \psi} )| \; \rho^{-n} d\rho \right) d\psi \\ & \leq & B n! \; \sigma^{-n} (\sigma + 1)^{\frac{2}{q}-1}. \nonumber \end{eqnarray} Let $\sigma \to \infty$. If $q \geq 2$, then $(\sigma + 1)^{\frac{2}{q}-1} \leq B_1$ for some constant $B_1$, and $g^{(n)} (0)=0$ for $n \geq 1$. So $g(\lambda)= D$, for some constant $D$. From (\ref{eq9}), $g(\lambda) = 0$ for all $\lambda \in {\mathbb C}$. Also, if $q < 2$, then $g^{(n)} (0)=0$ for $n \geq 2$. So $g(\lambda)= C \lambda+ D$, for some constants $C$ and $D$. From (\ref{eq8}) and (\ref{eq9}), $g(\lambda) = 0$ for all $\lambda \in {\mathbb C}$. Thus ${\mathcal H} f (\lambda)=0$ for all $\lambda \in \mathbb R$, and then by (\ref{eq03}), we have $f=0$ almost everywhere on $\mathbb R$. (ii) Let $q = \infty$.
Since $ab > \frac{1}{4}$, from (\ref{eq5}) we obtain \begin{equation}\label{eq11} \| g_{|\mathbb R} \|_{M^\infty(\mathbb R, \sigma_{\alpha, \beta})} \leq \left\| e^{b \lambda^2} \; {\mathcal H} f \right\|_{M^\infty(\mathbb R, \sigma_{\alpha, \beta})} < \infty. \end{equation} If we consider $q=\infty$, then the estimate obtained in Lemma \ref{lem4} can be refined so that $\max \{e^a , (\sigma + 1)^{\frac{2}{q}-1} \}$ is replaced by $1$ (see \cite{cow83}). From (\ref{eq10}), we have \[ |g^{(n)} (0)| \leq A \; n! \; \sigma^{-n}. \] Then $g^{(n)} (0)=0$ for $n \geq 1$. So $g(\lambda) = C$ for all $\lambda \in {\mathbb C}$, for some constant $C$. Therefore $e^{b \lambda^2} {\mathcal H} f (\lambda)=C \; e^{(b-\frac{1}{4a})\lambda^2}$ for all $\lambda \in \mathbb R$. Since $ab > \frac{1}{4}$, this function satisfies the relation (\ref{eq11}) only if $C=0$; thus $f = 0$ almost everywhere on $\mathbb R$. Step 2: Assume that $ab = \frac{1}{4}$. (a) If $q < \infty$, with the same proof as for point (i) of the first step, we obtain $f = 0$ almost everywhere on $\mathbb R$. (b) Let $q = \infty$ and $1 \leq p < \infty$. We have $\| g_{|\mathbb R} \|_{M^\infty(\mathbb R, \sigma_{\alpha, \beta})} < \infty$. Then, by point (ii) of the first step, the relation (\ref{eq7}), and the property (\ref{eq05}) of the heat kernel $E^{\alpha, \beta}_{\frac{1}{4a}}$, we deduce that \begin{equation}\label{eq12} {\mathcal H} f (\lambda)=C \; e^{-\frac{\lambda^2}{4a}}=C \; {\mathcal H} ( E^{\alpha, \beta}_{\frac{1}{4a}} ) (\lambda), \quad \text{for all } \lambda \in \mathbb R, \end{equation} for some constant $C$. Thus, from the injectivity of the transform ${\mathcal H}$, we obtain \begin{equation}\label{eq14} f(x)= C \; E^{\alpha, \beta}_{\frac{1}{4a}} (x), \quad \text{a.e. } x \in \mathbb R.
\end{equation} By using the relations (\ref{eq06}) and (\ref{eq14}), we get \[ \frac{2 C e^{\frac{\mu_1}{4a}} a^{\alpha+1}}{\Gamma(\alpha+1) \sqrt{B_{\alpha, \beta}(x)}} \leq e^{a x^2} f(x), \quad \text{for all } x \in \mathbb R. \] From the properties of the functions $A_{\alpha, \beta}$ and $B_{\alpha, \beta}$, we obtain that for finite $p$ \[ \left\| \frac{1}{\sqrt{B_{\alpha, \beta} (x)}} \right\|_{M^p(\mathbb R, \;A_{\alpha, \beta})} =\infty. \] On the other hand, from (\ref{eq5}) we have $\| e^{a x^2} f \|_{M^p(\mathbb R, \;A_{\alpha, \beta})} < \infty$, which is impossible unless $C = 0$. Then we obtain from (\ref{eq14}) that $f = 0$ almost everywhere on $\mathbb R$. Step 3: Assume that $ab < \frac{1}{4}$. Let $t \in (b, \frac{1}{4a})$ and $f = E^{\alpha, \beta}_t$. From the relation (\ref{eq06}), we get \[ K_1 e^{-\left(\frac{1}{4t}-a\right)x^2} \leq e^{ax^2} f(x) \leq K_2 e^{-\left(\frac{1}{4t}-a\right)x^2}, \quad \text{for all } x \in \mathbb R, \] for some constants $K_1, \;K_2 > 0$. As $t < \frac{1}{4a}$, we deduce that $e^{ax^2} f \in {M^p(\mathbb R, A_{\alpha, \beta})} $. Using the relation (\ref{eq04}), we get \[ e^{b \lambda^2} {\mathcal H} f (\lambda)=e^{-(t-b)\lambda^2}, \quad \text{for all } \lambda \in \mathbb R. \] The condition $t > b$ and the inequality $|C_{\alpha, \beta} (\lambda)|^{-2} \leq k_2 |\lambda|^{2 \alpha +1}$ at infinity (see \cite{tri97}, page 157) imply that $e^{b \lambda^2} {\mathcal H} f \in M^q(\mathbb R, \sigma_{\alpha, \beta})$. This completes the proof of the theorem. \end{proof} \section{Hardy's theorem for the Opdam--Cherednik transform}\label{sec5} In this section, we determine the functions $f$ satisfying the relations (\ref{eq5}) in the special case $p = q = \infty$. The result we obtain is an analogue of the classical Hardy's theorem for the Opdam--Cherednik transform.
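For orientation, we recall the classical Euclidean prototype that the theorem below transfers to the Opdam--Cherednik setting; this standard statement (with the Fourier transform normalized as $\widehat f(\lambda)=\int_{\mathbb R} f(x)\,e^{-i\lambda x}\,dx$) is included here only for comparison:

```latex
% Classical Hardy theorem on \mathbb{R} (stated for comparison; not part of the proofs).
% If, for some constants a, b > 0,
%   |f(x)| \le C e^{-a x^2}  and  |\widehat{f}(\lambda)| \le C e^{-b \lambda^2},
% then:  ab > 1/4 implies f = 0;  ab = 1/4 implies f(x) = C e^{-a x^2};
% and ab < 1/4 admits infinitely many nonzero solutions.
% The Gaussian is the extremal function: for f(x) = e^{-a x^2},
\[
  \widehat{f}(\lambda) = \sqrt{\tfrac{\pi}{a}}\; e^{-\lambda^2/(4a)},
  \qquad \text{so } b = \tfrac{1}{4a} \text{ and } ab = \tfrac{1}{4}.
\]
```

In the theorem below, the heat kernel $E^{\alpha,\beta}_{\frac{1}{4a}}$ plays the role of this extremal Gaussian.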
\begin{theorem} Let $f$ be a measurable function on $\mathbb R$ such that \begin{equation}\label{eq15} e^{a x^2} f \in M^\infty(\mathbb R, A_{\alpha, \beta}) \quad \text{and} \quad e^{b \lambda^2} {\mathcal H} f \in M^\infty(\mathbb R, \sigma_{\alpha, \beta}), \end{equation} for some constants $a, b > 0$. Then \begin{enumerate} \item[(i)] If $ab > \frac{1}{4}$, we have $f = 0$ almost everywhere. \item[(ii)] If $ab = \frac{1}{4}$, the function $f$ is of the form $f = C E^{\alpha, \beta}_{\frac{1}{4a}} $, for some real constant $C$. \item[(iii)] If $ab < \frac{1}{4}$, there are infinitely many nonzero functions $f$ satisfying the conditions $(\ref{eq15})$. \end{enumerate} \end{theorem} \begin{proof} (i) If $ab > \frac{1}{4}$, point (ii) of the first step of the proof of Theorem \ref{th2} gives the result. (ii) If $ab = \frac{1}{4}$ and $\| e^{b \lambda^2} {\mathcal H} f \|_{M^\infty(\mathbb R, \; \sigma_{\alpha, \beta})} < \infty$, then from Step 2 (b) of the proof of Theorem \ref{th2} and the relation (\ref{eq14}), we have $f = C E^{\alpha, \beta}_{\frac{1}{4a}} $, for some real constant $C$. As $B_{\alpha, \beta} (x) \geq 1$, from relations (\ref{eq06}) and (\ref{eq14}), we get that \[ e^{a x^2} f(x) \leq \frac{2 C e^{\frac{\mu_2}{4a}} a^{\alpha+1}}{\Gamma(\alpha+1) \sqrt{B_{\alpha, \beta}(x)}}, \quad \text{for all } x \in \mathbb R. \] On the other hand, from (\ref{eq15}) we have $\| e^{a x^2} f \|_{M^\infty(\mathbb R, \;A_{\alpha, \beta})} < \infty$, which is consistent with $f = C E^{\alpha, \beta}_{\frac{1}{4a}}$ satisfying the conditions $(\ref{eq15})$. Thus, the result of point (ii) is proved. (iii) If $ab < \frac{1}{4}$, the functions $f = E^{\alpha, \beta}_t$, $t \in (b, \frac{1}{4a})$, satisfy the conditions $(\ref{eq15})$. This completes the proof of the theorem. \end{proof} \section{Morgan's theorem for the Opdam--Cherednik transform}\label{sec6} The aim of this section is to prove an $M^p$ -- $M^q$-version of Morgan's theorem for the Opdam--Cherednik transform.
Before we prove the main result of this section, we first need the following lemma. \begin{lemma}[\cite{far03}, Lemma 2.3]\label{lem1} Suppose that $\rho \in (1, 2)$, $q \in [1, \infty]$, $\eta > 0$, $M > 0$ and $ B > \eta \sin \frac{\pi}{2} (\rho-1)$. If $g$ is an entire function on ${\mathbb C}$ satisfying the conditions \begin{enumerate} \item[(i)] $|g(x + iy)| \leq M e^{\eta |y|^\rho},\;$ for any $x,y \in \mathbb R$, \item[(ii)] $ e^{B |x|^\rho} g_{|\mathbb R} \in L^q(\mathbb R) $, \end{enumerate} then $g = 0$. \end{lemma} Using the above lemma, we obtain the following Phragm{\'e}n--Lindl{\"o}f-type result for modulation spaces. \begin{lemma}\label{lem2} Suppose that $\rho \in (1, 2)$, $q \in [1, \infty)$, $\eta > 0$, $M > 0$ and $B > \eta \sin \frac{\pi}{2} (\rho-1)$. If $g$ is an entire function on ${\mathbb C}$ satisfying the conditions \begin{enumerate} \item[(i)] $|g(x + iy)| \leq M e^{\eta |y|^\rho}$, for any $x, \;y \in \mathbb R$, \item[(ii)] $ e^{B |x|^\rho} g_{|\mathbb R} \in M^q(\mathbb R) $, \end{enumerate} then $g = 0$. \end{lemma} \begin{proof} Let $R > 0$ be such that \[B > \eta ((R+1)/R)^\rho \sin \frac{\pi}{2} (\rho-1).\] Consider the entire function on ${\mathbb C}$ defined by \[ F(z)=\int_R^{R+1} g(tz) dt.\] Then for any $n \in {\mathbb N}$, the derivatives of $F$ satisfy the condition \[ F^{(n)}(0) = \left[\left( (R + 1)^{n+1} - R^{n+1}\right) /(n+1)\right] g^{(n)}(0). \] Therefore, $g =0$ if and only if $F =0$. By assumption (i), we have \begin{equation}\label{eq01} |F(x+iy)| \leq M \;e^{{(R+1)}^{\rho} \eta |y|^{\rho}}, \quad \text{for any } x, y \in \mathbb R. \end{equation} Let $x \in \mathbb R \setminus \{0\}$; the change of variable $u = xt$ gives \[ F(x)= \frac{1}{x} \int_{Rx}^{(R+1)x} g(u) du,\] so \begin{eqnarray*} |F(x)| \leq \frac{1}{|x|} \int_{Rx}^{(R+1)x} |g(u)| du \leq \frac{1}{|x|} \int_{Rx}^{(R+1)x} |g(u)| \;e^{B |u|^\rho} e^{- R^\rho B |x|^\rho} du.
\end{eqnarray*} By H\"older's inequality, we get \[|F(x)| \leq\frac{1}{|x|} \;\Vert e_B g \Vert_{M^q(\mathbb R)} \; \Vert 1 \Vert_{M^{q'}([Rx, (R+1)x])} \; e^{- R^\rho B |x|^\rho}, \] where $e_B(u)=e^{B |u|^\rho}$ and $q'$ is the conjugate exponent of $q$. Since $$\Vert 1 \Vert_{M^{q'}([Rx, (R+1)x])} \leq C |x|^{1/{q'}}$$ for some constant $C>0$, we have \[|F(x)| \leq \frac{C}{|x|^{1/q}} \;\Vert e_B g \Vert_{M^q(\mathbb R)} \; e^{- R^\rho B |x|^\rho}. \] Since $F$ is continuous on $\mathbb R$, using assumption (ii), we obtain \begin{equation}\label{eq02} e^{R^\rho B |x|^\rho} F_{|\mathbb R} \in L^\infty(\mathbb R). \end{equation} Using the inequalities (\ref{eq01}) and (\ref{eq02}), and applying Lemma \ref{lem1} with $q=\infty$ to $F$, we see that $F=0$ and thus $g=0$. This completes the proof. \end{proof} \begin{theorem} Let $ p \in [1, \infty]$, $ q \in [1, \infty)$, $a>0$, $b>0$, and let $\alpha, \beta$ be positive real numbers satisfying $\alpha >2$ and $1/\alpha+1/\beta=1$. Suppose that $f$ is a measurable function on $\mathbb R$ such that \[ e^{a|x|^\alpha} f \in M^p(\mathbb R, A_{\alpha, \beta}) \quad \text{and} \quad e^{b|\lambda|^\beta} {\mathcal H} f \in M^q(\mathbb R, \sigma_{\alpha, \beta}). \] If \[ (a \alpha)^{1/\alpha} (b \beta)^{1/\beta} > \left( \sin \left( \frac{\pi}{2}(\beta -1 ) \right) \right)^{1/\beta}, \] then $f = 0$. \end{theorem} \begin{proof} Let $f$ be a measurable function on $\mathbb R$ such that \begin{equation}\label{eq2} e^{a|x|^\alpha} f \in M^p(\mathbb R, A_{\alpha, \beta}) \end{equation} and \begin{equation}\label{eq3} e^{b|\lambda|^\beta} {\mathcal H} f \in M^q(\mathbb R, \sigma_{\alpha, \beta}). \end{equation} We use conditions (\ref{eq2}) and (\ref{eq3}) to prove that the Opdam--Cherednik transform of $f$ satisfies the conditions (i) and (ii) of Lemma \ref{lem2}, and hence we deduce that $f=0$ almost everywhere.
The function \[ {\mathcal H} f (\lambda)=\int_{\mathbb R} f(x)\; G^{\alpha, \beta}_\lambda(-x)\; A_{\alpha, \beta} (x) dx \] is well defined, entire on ${\mathbb C}$, and satisfies the condition \begin{eqnarray*} |{\mathcal H} f (\lambda)| & \leq & \int_{\mathbb R} |f(x)|\; |G^{\alpha, \beta}_\lambda(-x)|\; A_{\alpha, \beta} (x) dx \\ & \leq & C \int_{\mathbb R} |f(x)|\; e^{|\mathrm{Im}(\lambda)| |x|}\; A_{\alpha, \beta} (x) dx, \quad \text{for any } \lambda \in {\mathbb C}, \; \text{by } (\ref{eq1}), \\ & \leq & C \; \Big\Vert e^{a|x|^\alpha} f \Big\Vert_{M^p(\mathbb R, A_{\alpha, \beta})} \; \Big\Vert e^{-a|x|^\alpha} e^{|\mathrm{Im}(\lambda)| |x|} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})}, \quad \text{by H\"older's inequality}, \\ & \leq & C_1 \; \Big\Vert e^{|\mathrm{Im}(\lambda)| |x|-a|x|^\alpha} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})}, \quad \text{by } (\ref{eq2}), \end{eqnarray*} where $C_1$ is a constant and $p'$ is the conjugate exponent of $p$. Let \[C \in I= \left( (b \beta)^{-1/\beta} \left( \sin \left( \frac{\pi}{2}(\beta -1 ) \right) \right)^{1/\beta}, \; (a \alpha)^{1/\alpha} \right) . \] Applying the convex inequality \[ |ty| \leq \left(\frac{1}{\alpha}\right)|t|^\alpha + \left(\frac{1}{\beta}\right)|y|^\beta \] to the positive numbers $C|t|$ and $|y|/C$, we obtain \[ |ty| \leq \frac{C^\alpha}{\alpha} \;|t|^\alpha + \frac{1}{\beta C^\beta} \;|y|^\beta, \] and thus \[ \Big\Vert e^{|\mathrm{Im}(\lambda)| |x|-a|x|^\alpha} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})} \leq e^{{|\mathrm{Im}(\lambda)|^\beta}/{\beta C^\beta}} \; \Big\Vert e^{-\left(a- {C^\alpha}/{\alpha} \right)|x|^\alpha} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})}. \] Since $C \in I$ , it follows that $a > C^\alpha / \alpha$, and thus \[\Big\Vert e^{-\left(a- {C^\alpha}/{\alpha} \right)|x|^\alpha} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})} < \infty . 
\] Therefore, \[\Big\Vert e^{|\mathrm{Im}(\lambda)| |x|-a|x|^\alpha} \Big\Vert_{M^{p'}(\mathbb R, A_{\alpha, \beta})} < \infty. \] Moreover, \begin{equation}\label{eq4} |{\mathcal H} f (\lambda)| \leq \text{Const.}\; e^{{|\mathrm{Im}(\lambda)|^\beta}/{\beta C^\beta}} \quad \text{for any } \lambda \in {\mathbb C}. \end{equation} Condition (\ref{eq3}) and inequality (\ref{eq4}) imply that the function $g(z)={\mathcal H} f (z)$ satisfies the assumptions (i) and (ii) of Lemma \ref{lem2} with $\rho = \beta$, $\eta=1/(\beta C^\beta)$, and $B=b$. The condition $C \in I$ implies the inequality \[ b > \frac{1}{\beta C^\beta} \sin \left( \frac{\pi}{2}(\beta -1 ) \right), \] which gives ${\mathcal H} f=0$ by Lemma \ref{lem2}, then $f=0$ by (\ref{eq03}). \end{proof} \section*{Acknowledgments} The author is deeply indebted to Prof. S. Thangavelu for several fruitful discussions and generous comments. The author is grateful to the University Grants Commission, India for providing the Dr. D. S. Kothari Post Doctoral Fellowship (Award No.- F.4-2/2006 (BSR)/MA/18-19/0032). The author also wishes to thank the anonymous referee for valuable comments and suggestions which helped to improve the quality of the paper. \section*{Disclosure statement} No potential conflict of interest was reported by the author.
\section{Introduction} Most existing captioning models learn an autoregressive model, either LSTM or transformer, in which explicit control of the generation process is difficult. In particular, the length of the caption is determined only after the End Of Sentence ($<$eos$>$) token is generated. It is hard to know and control the length beforehand. However, length can be an important property of a caption. A model that allows control of output length would provide more options for the end users of the captioning model. By controlling the length, we can influence the style and descriptiveness of the caption: short, simple captions vs. longer, more complex and detailed descriptions for the same image. Previous work includes captioning models that allow control over other aspects. \cite{cornia2019show} controls the caption by inputting a different set of image regions. \cite{deshpande2019fast} can generate a caption controlled by assigning POS tags. Length control has been studied in abstractive summarization~\cite{kikuchi2016controlling,fan2017controllable,takase2019positional}, but to our knowledge not in the context of image captioning. To control the length of the generated caption, we build the model borrowing existing ideas from summarization work, by injecting length information into the model. To generate captions without an explicit length specification, we add a \emph{length prediction module} that can predict the optimal length for the input image at hand. We show that the length models can successfully generate captions ranging from 7 up to 28 words long. We also show that the length models perform better than a non-controlled model (even when its decoding is manipulated to reach the desired length) when asked to generate long captions. \section{Models} We consider repurposing existing methods from summarization for captioning. In general, the length is treated as an intermediate variable: $P(c|I) = P(c|I,l)\,P(l|I)$, where $c$, $I$ and $l$ denote the caption, image and length, respectively. We introduce how we build $P(c|I,l)$ and $P(l|I)$ as follows.
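As a toy illustration of this two-stage factorization, the sketch below uses stand-ins for the learned modules (`predict_length` and `decode_with_length` are hypothetical placeholders, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["a", "dog", "sitting", "on", "the", "grass"]

def predict_length(image_feat, max_len=28):
    """Stand-in for P(l | I): pick the most probable length class."""
    logits = rng.normal(size=max_len)      # placeholder for a trained classifier
    return int(np.argmax(logits)) + 1      # lengths 1..max_len

def decode_with_length(image_feat, length):
    """Stand-in for P(c | I, l): emit exactly `length` words, then <eos>."""
    words = [VOCAB[rng.integers(len(VOCAB))] for _ in range(length)]
    return words + ["<eos>"]

feat = rng.normal(size=2048)               # e.g. mean-pooled region features
l = predict_length(feat)
caption = decode_with_length(feat, l)
assert caption[-1] == "<eos>" and len(caption) == l + 1
```

The point of the factorization is that the second stage always sees the target length, so the realized caption length matches it exactly.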
Note that the following methods can be used in conjunction with any standard captioning model. \subsection{LenEmb\cite{kikuchi2016controlling}} We take LenEmb from \cite{kikuchi2016controlling} and make some small changes following \cite{takase2019positional}. Given the desired length $T$ and the current time step $t$, we embed the remaining length $T-t$ into a vector of the same size as the word embedding. Then the word embedding of the previous word $w_{t-1}$, added to the length embedding (rather than concatenated, as in \cite{kikuchi2016controlling}), is fed as the input to the rest of the LSTM model. \begin{align} x_t = E_w[w_{t-1}] + E_l[T-t];\ \ P(w_t) = LSTM(x_t, h_{t-1}) \nonumber \end{align} \noindent\textbf{Learning to predict length} We add a length prediction module to predict the length given the image features (in this case the averaged region features) when no desired length is provided. We treat it as a classification task and train it with the reference caption length. \subsection{Marker\cite{fan2017controllable}} We also implement the Marker model from \cite{fan2017controllable}. The desired length is fed as a special token at the beginning of generation, as the ``first word''. At training time, the model needs to learn to predict the length at the first step in the same way as other words (no extra length predictor is needed). At test time, the length token is sampled in the same way as other words if no desired length is specified. \section{Experiments} We use COCO~\cite{lin2014microsoft} to evaluate our models. For the train/val/test split, we follow \cite{karpathy2015deep}. The base captioning model is Att2in~\cite{rennie2017self}. The image features are bottom-up features\cite{anderson2018bottom}. For evaluation, we use BLEU~\cite{papineni2002bleu}, METEOR~\cite{denkowski2014meteor}, ROUGE~\cite{lin2004rouge}, CIDEr~\cite{vedantam2015cider}, SPICE~\cite{anderson2016spice} and bad ending rate~\cite{guo2018improving,self-critical-code}.
We train the models with cross-entropy loss. Unless specified otherwise, decoding is beam search with beam size 5, and evaluation is on the Karpathy test set. \subsection{Generation with predicted lengths} For a fair comparison on the general image captioning task, the length models predict the length and generate the caption conditioned on the predicted length. Results in Table \ref{tab:pred_perf} show that the length models are comparable to the base model. \begin{table}[htbp] \footnotesize \centering \begin{tabular}{lccccccc} & B4 & R & M & C & S & BER & LenMSE\\ \hline Att2in & 35.9 & 56.1 & 27.1 & 110.6 & 20.0 & 0.1 & N/A\\ \hline LenEmb & 34.9 & 56.2 & 27.0 & 110.0 & 20.0 & 0.0 & 0\\ Marker & 35.2 & 56.2 & 26.9 & 109.8 & 19.9 & 0.0 & 0\\ \hline \end{tabular}% \caption{\small Performance on the COCO Karpathy test set. B4=BLEU4, R=ROUGE, M=METEOR, C=CIDEr, S=SPICE, BER=bad ending rate, LenMSE=length mean square error. All numbers are in percentage (\%) except LenMSE.} \label{tab:pred_perf}% \end{table}% \noindent\textbf{Length distribution (Fig.~\ref{fig:length_dist})} While the scores are close, the length distributions are quite different. Length models tend to generate longer captions than normal auto-regressive models. However, neither is close to the real caption length distribution (``test'' in the figure). \begin{figure}[t] \centering \begin{minipage}[b]{.6\linewidth} \includegraphics[width=\linewidth]{figures/length_dist.pdf} \end{minipage} \begin{minipage}[b]{.38\linewidth} \captionof{figure}{\small Length distribution of captions generated by different models. For length models, the length is obtained within the beam search process as a special token.} \label{fig:length_dist} \end{minipage} \end{figure} \subsection{Generation with controlled lengths} For the baseline model, we use the method fixLen from \cite{kikuchi2016controlling}, where probability manipulation is used to avoid generating the eos token until the desired length is reached.
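The fixLen baseline can be sketched as logit masking during decoding: forbid $<$eos$>$ before the desired length and force it afterwards. Below is a minimal hypothetical sketch (greedy decoding; `step_logits` is a toy stand-in for the captioner's per-step output, not the paper's code):

```python
import numpy as np

EOS = 0  # assumed index of the <eos> token in the vocabulary

def fixlen_greedy(step_logits, desired_len):
    """Suppress <eos> until `desired_len` tokens are emitted, then force it."""
    tokens = []
    for t in range(desired_len + 1):
        logits = step_logits(tokens).astype(float)
        if t < desired_len:
            logits[EOS] = -np.inf          # forbid ending early
        else:
            logits = np.full_like(logits, -np.inf)
            logits[EOS] = 0.0              # force <eos> at the desired length
        tokens.append(int(np.argmax(logits)))
    return tokens

rng = np.random.default_rng(1)
step_logits = lambda prefix: rng.normal(size=10)   # toy stand-in "model"
out = fixlen_greedy(step_logits, desired_len=7)
assert len(out) == 8 and out[-1] == EOS and EOS not in out[:-1]
```

This manipulation guarantees the target length but, as the fluency results below suggest, an unconditioned model forced to keep generating can produce disfluent endings.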
The original CIDEr-D promotes short and generic captions because it computes the average similarity between the generated caption and the references. We report a modified CIDEr (mCIDEr): 1) removing the length penalty term in CIDEr-D; 2) combining the n-gram counts from all the reference captions to compute similarity~\cite{coco-caption-code}. \noindent\textbf{Fluency} The high bad ending rate for Att2in indicates it cannot generate fluent sentences; when increasing the beam size, the bad ending rate becomes lower. For the length models, Marker performs well when the length is less than 20 but collapses beyond that, while LenEmb performs consistently well. \noindent\textbf{Accuracy} The length models perform better than the base model when length$<10$. The base model performs better between 10 and 16, which are the most common lengths in the dataset. For larger lengths, LenEmb performs the best on both mCIDEr and SPICE, indicating that it covers more information in the reference captions. \noindent\textbf{Controllability} We use the mean square error between the desired length and the actual length (LenMSE) to evaluate controllability. When using the predicted length, the length models perfectly achieve the predicted length (in Table \ref{tab:pred_perf}). When the desired length is fed, Fig.~\ref{fig:scores} shows that LenEmb can perfectly obey the length while Marker fails for long captions, probably due to poor long-term dependency. \noindent\textbf{Qualitative results (Fig.~\ref{fig:qualitative})} show that the LenEmb model, when generating longer captions, changes the caption structure and covers more detail, while the base model tends to produce the same prefix for different lengths and suffers from repetition. More results can be browsed online~\cite{colab}.
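The LenMSE metric mentioned above is simply the mean squared error between the desired and realized caption lengths; a small sketch (with made-up example captions):

```python
import numpy as np

def len_mse(desired, generated_captions):
    """Mean squared error between desired and realized caption lengths."""
    actual = np.array([len(c.split()) for c in generated_captions], dtype=float)
    return float(np.mean((np.asarray(desired, dtype=float) - actual) ** 2))

desired = [7, 10, 16]
caps = [
    "a motorcycle parked on a dirt road",             # 7 words  -> on target
    "a motorcycle is parked on the side of a road",   # 10 words -> on target
    "a motorcycle parked on the side of a dirt road", # 10 words -> off by 6
]
assert len_mse(desired, caps) == 12.0  # (0 + 0 + 36) / 3
```

A LenMSE of 0, as reported for LenEmb and Marker in Table \ref{tab:pred_perf}, means every generated caption hit its target length exactly.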
\begin{figure}[t] \centering \includegraphics[width=0.45\linewidth]{figures/length_CIDEr.pdf} \includegraphics[width=0.45\linewidth]{figures/length_SPICE.pdf} \includegraphics[width=0.45\linewidth]{figures/length_bad_count_rate.pdf} \includegraphics[width=0.45\linewidth]{figures/length_length_var.pdf} \caption{\small The performance of models with different desired length. Att2in+BSx is Att2in+fixLen with beam size x.} \label{fig:scores} \end{figure} \begin{figure}[!th] {\footnotesize \centering \begin{tabular}{p{0.01\linewidth}p{0.95\linewidth}} \multicolumn{2}{c}{ \includegraphics[width=\linewidth]{figures/000000354533.jpg}}\\ 7 & a motorcycle parked on a dirt road\\ 10 & a motorcycle is parked on the side of a road\\ 16 & a motorcycle parked on the side of a dirt road with a fence in the background\\ 22 & a motorcycle parked on the side of a dirt road in front of a fence with a group of sheep behind it\\ 28 & a motorcycle is parked in a dirt field with a lot of sheep on the side of the road in front of a fence on a sunny day\\ \hline 7 & a motorcycle parked on a dirt road\\ 10 & a motorcycle parked on a dirt road near a fence\\ 16 & a motorcycle parked on a dirt road in front of a group of people behind it\\ 22 & a motorcycle parked on a dirt road in front of a group of people on a dirt road next to a fence\\ 28 & a motorcycle parked on a dirt road in front of a group of people on a dirt road in front of a group of people in the background\\ \end{tabular} \begin{tabular}{p{0.01\linewidth}p{0.95\linewidth}} \multicolumn{2}{c}{ \includegraphics[width=\linewidth]{figures/000000348881.jpg}}\\ 7 & an airplane is parked at an airport\\ 10 & an airplane is parked on the tarmac at an airport\\ 16 & an airplane is parked on a runway with a man standing on the side of it\\ 22 & an airplane is parked on a runway with a man standing on the side of it and a person in the background\\ 28 & an airplane is parked on the tarmac at an airport with a man standing on the side of 
the stairs and a man standing next to the plane\\ \hline 7 & a plane is sitting on the tarmac\\ 10 & a plane is sitting on the tarmac at an airport\\ 16 & a plane that is sitting on the tarmac at an airport with people in the background\\ 22 & a plane is sitting on the tarmac at an airport with people in the background and a man standing in the background\\ 28 & a plane is sitting on the tarmac at an airport with people in the background and a man standing on the side of the road in the background\\ \end{tabular} } \caption{\small Generated captions of different lengths. Top: LenEmb; Bottom: Att2in+BS10} \label{fig:qualitative} \end{figure} \subsection{Failure on CIDEr optimization} We apply SCST\cite{rennie2017self} training for length models. However, SCST doesn't work well. While the CIDEr scores can be improved, the generated captions tend to be less fluent, including bad endings (ending with 'with a') or repeating words (like 'a a'). \section{Conclusions} We present two captioning models that can control the length and shows their effectiveness to generate good captions of different lengths. The code will be released at link\footnote{\url{https://github.com/ruotianluo/self-critical.pytorch/tree/length_goal}}. {\small \bibliographystyle{ieee_fullname} \section{Introduction} Most existing captioning models learn an autoregressive model, either LSTM or transformer, in which explicit control of generation process is difficult. In particular, the length of the caption is determined only after the End Of Sentence ($<$eos$>$) is generated. It is hard to know and control the length beforehand. However, length can be an important property of a caption. A model that allows control of output length would provide more options for the end users of the captioning model. By controling the length, we can influence the style and descriptiveness of the caption: short, simple captions vs. longer, more complex and detailed descriptions for the same image. 
Previous work includes captioning models that allow control over other aspects. \cite{cornia2019show} controls the caption by inputting a different set of image regions, and \cite{deshpande2019fast} generates a caption controlled by assigning POS tags. Length control has been studied in abstractive summarization~\cite{kikuchi2016controlling,fan2017controllable,takase2019positional}, but, to our knowledge, not in the context of image captioning. To control the length of the generated caption, we build our model by borrowing existing ideas from summarization work and injecting length information into the model. To generate captions without an explicit length specification, we add a \emph{length prediction module} that can predict the optimal length for the input image at hand. We show that the length models can successfully generate captions ranging from 7 up to 28 words long. We also show that length models perform better than a non-controlled model (even with special decoding methods) when asked to generate long captions. \section{Models} We consider repurposing existing methods from summarization for captioning. In general, the length is treated as an intermediate variable: $P(c|I) = P(c|I,l)*P(l|I)$, where $c$, $I$ and $l$ are the caption, image and length, respectively. We describe how we build $P(c|I,l)$ and $P(l|I)$ below. Note that the following methods can be used in conjunction with any standard captioning model. \subsection{LenEmb\cite{kikuchi2016controlling}} We take LenEmb from \cite{kikuchi2016controlling} and make a small change following \cite{takase2019positional}. Given the desired length $T$ and the current time step $t$, we embed the remaining length $T-t$ into a vector of the same size as the word embedding.
Then the word embedding of the previous word $w_{t-1}$, summed with the length embedding (rather than concatenated, as in \cite{kikuchi2016controlling}), is fed as the input to the rest of the LSTM model: \begin{align} x_t = E_w[w_{t-1}] + E_l[T-t];\ \ P(w_t) = LSTM(x_t, h_{t-1}) \nonumber \end{align} \noindent\textbf{Learning to predict length} We add a length prediction module to predict the length given the image features (in this case, the averaged region features) when no desired length is provided. We treat this as a classification task and train it with the reference caption lengths. \subsection{Marker\cite{fan2017controllable}} We also implement the Marker model from \cite{fan2017controllable}. The desired length is fed as a special token at the beginning of generation, as the ``first word''. At training time, the model learns to predict the length at the first step in the same way as other words (no extra length predictor is needed). At test time, the length token is sampled in the same way as other words if no desired length is specified. \section{Experiments} We use COCO~\cite{lin2014microsoft} to evaluate our models. For the train/val/test split we follow \cite{karpathy2015deep}. The base captioning model is Att2in~\cite{rennie2017self}, and the image features are bottom-up features~\cite{anderson2018bottom}. For evaluation, we use BLEU~\cite{papineni2002bleu}, METEOR~\cite{denkowski2014meteor}, ROUGE~\cite{lin2004rouge}, CIDEr~\cite{vedantam2015cider}, SPICE~\cite{anderson2016spice} and bad ending rate~\cite{guo2018improving,self-critical-code}. We train the models with cross-entropy loss. Unless specified otherwise, decoding is beam search with beam size 5, and evaluation is on the Karpathy test set. \subsection{Generation with predicted lengths} For a fair comparison on the general image captioning task, the length models predict the length and generate the caption conditioned on the predicted length.
Results in Table \ref{tab:pred_perf} show that the length models are comparable to the base model. \begin{table}[htbp] \footnotesize \centering \begin{tabular}{lccccccc} & B4 & R & M & C & S & BER & LenMSE\\ \hline Att2in & 35.9 & 56.1 & 27.1 & 110.6 & 20.0 & 0.1 & N/A\\ \hline LenEmb & 34.9 & 56.2 & 27.0 & 110.0 & 20.0 & 0.0 & 0\\ Marker & 35.2 & 56.2 & 26.9 & 109.8 & 19.9 & 0.0 & 0\\ \hline \end{tabular}% \caption{\small Performance on the COCO Karpathy test set. B4=BLEU4, R=ROUGE, M=METEOR, C=CIDEr, S=SPICE, BER=bad ending rate, LenMSE=length mean square error. All numbers are percentages (\%) except LenMSE.} \label{tab:pred_perf}% \end{table}% \noindent\textbf{Length distribution (Fig.~\ref{fig:length_dist})} While the scores are close, the length distributions are quite different. Length models tend to generate longer captions than normal auto-regressive models. However, neither is close to the real caption length distribution (``test'' in the figure). \begin{figure}[t] \centering \begin{minipage}[b]{.6\linewidth} \includegraphics[width=\linewidth]{figures/length_dist.pdf} \end{minipage} \begin{minipage}[b]{.38\linewidth} \captionof{figure}{\small Length distribution of captions generated by different models. For length models, the length is obtained within the beam search process as a special token.} \label{fig:length_dist} \end{minipage} \end{figure} \subsection{Generation with controlled lengths} As the baseline, we use the method fixLen from \cite{kikuchi2016controlling}, where probability manipulation is used to avoid generating the eos token until the desired length is reached. The original CIDEr-D promotes short and generic captions because it computes the average similarity between the generated caption and the references. We report a modified CIDEr (mCIDEr): 1) removing the length penalty term in CIDEr-D; 2) combining the ngram counts from all the reference captions to compute similarity~\cite{coco-caption-code}.
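As a concrete sketch of this fixLen-style probability manipulation, the following minimal greedy-decoding illustration suppresses the $<$eos$>$ logit until the desired length is reached and then forces it; the function name, the token id and the toy scoring function here are our own illustrative assumptions, not details from \cite{kikuchi2016controlling}:

```python
import numpy as np

EOS = 0  # assume token id 0 is <eos> in this toy vocabulary

def fixlen_greedy_decode(step_logits, desired_len):
    """Greedy decoding with fixLen-style manipulation: the <eos> logit is
    set to -inf until the caption reaches `desired_len` tokens, after which
    <eos> is forced. `step_logits(prefix)` stands in for a captioning model
    and returns a logit vector over the vocabulary."""
    prefix = []
    while True:
        logits = np.array(step_logits(prefix), dtype=float)
        if len(prefix) < desired_len:
            logits[EOS] = -np.inf   # forbid ending early
        else:
            logits[:] = -np.inf     # force <eos> at the target length
            logits[EOS] = 0.0
        token = int(np.argmax(logits))
        if token == EOS:
            return prefix
        prefix.append(token)

# toy "model": always prefers <eos>, then token 3
toy = lambda prefix: [5.0, 1.0, 0.5, 2.0]
caption = fixlen_greedy_decode(toy, 4)  # four non-<eos> tokens
```

The same masking applies unchanged inside beam search: every partial hypothesis has its $<$eos$>$ score suppressed before the desired length and forced afterwards.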
\noindent\textbf{Fluency} The high bad ending rate for Att2in indicates that it cannot reliably generate fluent sentences; increasing the beam size lowers the bad ending rate. Among the length models, Marker performs well when the length is less than 20 but collapses beyond that, while LenEmb performs consistently well. \noindent\textbf{Accuracy} The length models perform better than the base model when length$<10$. The base model performs better for lengths between 10 and 16, which are the most common lengths in the dataset. For larger lengths, LenEmb performs best on both mCIDEr and SPICE, indicating that it covers more of the information in the reference captions. \noindent\textbf{Controllability} We use the mean square error between the desired length and the actual length (LenMSE) to evaluate controllability. When using the predicted length, the length models achieve the predicted length perfectly (Table \ref{tab:pred_perf}). When the desired length is fed in, Fig.~\ref{fig:scores} shows that LenEmb can obey the length perfectly, while Marker fails for long captions, probably due to poor long-term dependency. \noindent\textbf{Qualitative results (Fig.~\ref{fig:qualitative})} show that the LenEmb model, when generating longer captions, changes the caption structure and covers more detail, while the base model tends to reuse the same prefix for different lengths and to repeat itself. More results can be browsed online~\cite{colab}. \begin{figure}[t] \centering \includegraphics[width=0.45\linewidth]{figures/length_CIDEr.pdf} \includegraphics[width=0.45\linewidth]{figures/length_SPICE.pdf} \includegraphics[width=0.45\linewidth]{figures/length_bad_count_rate.pdf} \includegraphics[width=0.45\linewidth]{figures/length_length_var.pdf} \caption{\small The performance of models with different desired lengths.
Att2in+BSx is Att2in+fixLen with beam size x.} \label{fig:scores} \end{figure} \begin{figure}[!th] {\footnotesize \centering \begin{tabular}{p{0.01\linewidth}p{0.95\linewidth}} \multicolumn{2}{c}{ \includegraphics[width=\linewidth]{figures/000000354533.jpg}}\\ 7 & a motorcycle parked on a dirt road\\ 10 & a motorcycle is parked on the side of a road\\ 16 & a motorcycle parked on the side of a dirt road with a fence in the background\\ 22 & a motorcycle parked on the side of a dirt road in front of a fence with a group of sheep behind it\\ 28 & a motorcycle is parked in a dirt field with a lot of sheep on the side of the road in front of a fence on a sunny day\\ \hline 7 & a motorcycle parked on a dirt road\\ 10 & a motorcycle parked on a dirt road near a fence\\ 16 & a motorcycle parked on a dirt road in front of a group of people behind it\\ 22 & a motorcycle parked on a dirt road in front of a group of people on a dirt road next to a fence\\ 28 & a motorcycle parked on a dirt road in front of a group of people on a dirt road in front of a group of people in the background\\ \end{tabular} \begin{tabular}{p{0.01\linewidth}p{0.95\linewidth}} \multicolumn{2}{c}{ \includegraphics[width=\linewidth]{figures/000000348881.jpg}}\\ 7 & an airplane is parked at an airport\\ 10 & an airplane is parked on the tarmac at an airport\\ 16 & an airplane is parked on a runway with a man standing on the side of it\\ 22 & an airplane is parked on a runway with a man standing on the side of it and a person in the background\\ 28 & an airplane is parked on the tarmac at an airport with a man standing on the side of the stairs and a man standing next to the plane\\ \hline 7 & a plane is sitting on the tarmac\\ 10 & a plane is sitting on the tarmac at an airport\\ 16 & a plane that is sitting on the tarmac at an airport with people in the background\\ 22 & a plane is sitting on the tarmac at an airport with people in the background and a man standing in the background\\ 28 & a plane is 
sitting on the tarmac at an airport with people in the background and a man standing on the side of the road in the background\\ \end{tabular} } \caption{\small Generated captions of different lengths. Top: LenEmb; Bottom: Att2in+BS10} \label{fig:qualitative} \end{figure} \subsection{Failure on CIDEr optimization} We apply SCST~\cite{rennie2017self} training to the length models. However, SCST does not work well here: while the CIDEr scores can be improved, the generated captions tend to be less fluent, with bad endings (ending with ``with a'') or repeated words (like ``a a''). \section{Conclusions} We present two captioning models that can control the caption length and show their effectiveness in generating good captions of different lengths. The code will be released\footnote{\url{https://github.com/ruotianluo/self-critical.pytorch/tree/length_goal}}. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{intro} A \textit{degenerate parabolic equation} \cite[Chapter III]{showalter} (also called \textit{parabolic-elliptic equation} \cite{pluschke}) is an abstract evolution
equation of the form \begin{equation}\label{deg-1} \dfrac{d}{dt}(Ru(t))+ A(t)u(t)=f(t), \end{equation} where $R$ is a linear, bounded and monotone operator and $(A(t))_{t\in[0,T]}$ is a family of linear and bounded operators. Such equations arise in several applications, for instance in the study of eddy currents in electromagnetic field theory (see \cite{zlamal,maccamy,reales}). Existence and uniqueness of solutions for degenerate parabolic equations have been widely studied. In \cite{kuttler}, Kuttler shows results concerning existence, uniqueness and regularity for equations of the form \eqref{deg-1}, but with $R$ non-invertible and $A$ a linear operator independent of time. Sufficient conditions ensuring the existence and uniqueness of solutions of \eqref{deg-1}, even when $R$ depends on time, are given by Showalter \cite{showalter} (see also \cite{showalter-2}). Moreover, existence and uniqueness of solutions for the case in which the family of operators $A$ is non-linear has been analyzed in \cite{kuttler-1982, kuttler-1986, paronetto}. Among the numerical methods found in the literature to compute approximate solutions of classical parabolic partial differential equations, the finite element method (combined with some time-stepping scheme) is one of the most widely used. We can cite the book by V. Thom\'{e}e \cite{thomee} as a classical reference on this topic. Moreover, books dedicated to the finite element approximation of partial differential equations devote at least one chapter to the analysis of the numerical approximation of parabolic equations (see, for instance, \cite{guermond} and \cite{quarteronivalli}).
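As a minimal numerical illustration of the degenerate structure in \eqref{deg-1} (our own toy sketch, not taken from any of the cited works), consider a two-dimensional system with a singular $R$, so that one component evolves parabolically while the other satisfies a purely elliptic, algebraic constraint; a backward-Euler time discretization in Python, with all matrices and data chosen purely for illustration, reads:

```python
import numpy as np

# Toy instance of d/dt(R u) + A u = f with a singular (degenerate) R:
# the second equation carries no time derivative and is purely elliptic.
R = np.diag([1.0, 0.0])   # monotone, self-adjoint, NOT invertible
A = np.eye(2)             # coercive; time-independent for simplicity
f = np.array([0.0, 3.0])  # constant right-hand side

def backward_euler(u0, T=1.0, N=100):
    """Implicit Euler for the degenerate system:
    (R + dt*A) u^n = dt*f + R u^{n-1}."""
    dt = T / N
    u = np.array(u0, dtype=float)
    M = R + dt * A  # invertible even though R is singular
    for _ in range(N):
        u = np.linalg.solve(M, dt * f + R @ u)
    return u

u = backward_euler([1.0, 0.0])
```

With $R=\mathrm{diag}(1,0)$ and $A=I$, the first component follows the scalar ODE $u_1'+u_1=0$, while the singular second row of $R$ forces the algebraic constraint $u_2=f_2$ at every time step, which is precisely the parabolic-elliptic character described above.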
In fact, the theory developed for the approximation of parabolic equations by the finite element method is mainly presented for a general heat-like equation, i.e., to approximate the solution of a general parabolic problem of the form \begin{equation}\nonumber \dfrac{du}{dt} + \mathcal{L}u= f, \end{equation} where $\mathcal{L}$ is a coercive second-order differential operator. The mathematical analysis of the numerical approximation by finite element methods, including existence and uniqueness of the discrete solutions and quasi-optimal error estimates, has only been performed for particular degenerate parabolic equations. For instance, Zlamal \cite{zlamal} studied the approximation of the solution of a two-dimensional eddy current problem in a bounded domain, MacCamy \& Zuri \cite{maccamy} proposed a FEM-BEM coupling for the formulation analyzed in \cite{zlamal}, and a formulation for an axisymmetric eddy current problem was studied by Bermudez \textit{et al.} \cite{reales}. The formulations studied in all these references can be expressed as particular cases of problem \eqref{deg-1}. Nevertheless, to the best knowledge of the authors, there is no abstract general theory that allows one to deduce the mathematical analysis of these approximations as particular applications of it. The main goal of this article is precisely to provide a general theory for the mathematical analysis of a fully-discrete finite element approximation of an abstract degenerate parabolic equation. To this aim, we consider a fully-discrete approximation of a Cauchy problem associated with equation \eqref{deg-1}, by using a finite element method in space and a backward-Euler scheme in time. We show sufficient conditions on the spaces and the family of operators to guarantee existence and uniqueness of the fully-discrete solutions, assuming that the time step is sufficiently small.
Furthermore, we prove quasi-optimal error estimates for this fully-discrete scheme by adapting the approximation theory for classical parabolic equations to the abstract degenerate case. Moreover, since a good discrete approximation of the time-derivative of the solution is relevant for applications, we prove that this time derivative can also be approximated with quasi-optimal error estimates. The outline of the paper is as follows: Section~\ref{sec:1} is devoted to reviewing some concepts about functional spaces for evolutive problems, and the abstract framework for degenerate parabolic equations and its well-posedness are recalled in Section~\ref{degenerate}. The analysis of the fully-discrete approximation of the problem, obtained by using a finite element method in space and a backward-Euler scheme in time, is presented in Section~\ref{discreto-d}, and the results ensuring the quasi-optimal convergence of the approximation method are shown in Section~\ref{error-d}. Furthermore, the application of the theory to an eddy current model is studied in Section~\ref{aplicaciones-d}, where we deduce its well-posedness and theoretical convergence by using the developed abstract theory. Finally, we show some numerical results that confirm the convergence of the method expected from the theory. \section{Hilbert functional spaces for evolutive problems} \label{sec:1} Let us first review some basic concepts of functional analysis which are useful when dealing with time-dependent functions. A complete and detailed presentation of the concepts that we indicate in this section can be found, for instance, in \cite[Sections 23.2--23.6]{zeidlerl}. More precisely, we need to introduce spaces of functions defined on a bounded time interval $(0,T)$ (where $T>0$ is a fixed time) and with values in a separable Hilbert space $X$. We will denote by $\|\cdot\|_{X}$, $(\cdot,\cdot)_{X}$ and $\langle \cdot,\cdot\rangle_{X}$ the norm, the inner product and the duality pairing in $X$, respectively.
We use the notation $\mathcal{C}^0([0,T];X)$ for the space consisting of all continuous functions $f:[0,T]\to X$. More generally, for any $k\in\mathbb{N}$, $\mathcal{C}^k([0,T];X)$ denotes the subspace of $\mathcal{C}^0([0,T];X)$ of all functions $f$ with (strong) derivatives of order at most $k$ in $\mathcal{C}^0([0,T];X)$, i.e., \[ \mathcal{C}^k([0,T];X) :=\set{f\in\mathcal{C}^0([0,T];X):\quad \frac{d^jf}{dt^j} \in\mathcal{C}^0([0,T];X),\quad 1\leq j\leq k}. \] A classical result of functional analysis states that $\mathcal{C}^k([0,T];X)$ is a Banach space with the norm \[ \norm{f}_{\mathcal{C}^k([0,T];X)}:=\sup_{t\in[0,T]} \sum_{j=0}^k\norm{\frac{d^j f}{dt^j}(t)}_{X}. \] We also consider the space $\L^2(0,T; X)$ of classes of Bochner-measurable functions $f:(0,T)\to X$ whose norm in $X$ belongs to $\L^2(0,T)$, i.e., \[ \norm{f}^2_{\L^2(0,T; X)}:= \int_0^T\norm{f(t)}_{X}^2 \,dt < +\infty. \] The space $\L^2(0,T;X)$ is a Hilbert space with the norm $\|\cdot\|_{\L^2(0,T;X)}$. Furthermore, the dual space of $\L^2(0,T;X)$ can be identified with the space $\L^2(0,T;X')$, as shown in the following result. \begin{proposition}[Dual space of $\L^2(0,T;X)$] Let $X$ be a separable Hilbert space. For any $f\in \L^2(0,T;X)'$ there exists a unique $v_f\in\L^2(0,T; X')$ satisfying \[ \dual{f}{w}=\int_0^T\dual{v_f(t)}{w(t)}_{X}dt\qquad\forall w\in \L^2(0,T;X). \] Moreover, the map $f\mapsto v_f$ is a linear bijection which preserves the norm, i.e., \[ \norm{f}_{\paren{L^2(0,T;X)}'}=\norm{v_f}_{L^2(0,T;X')}\qquad\forall f\in\paren{L^2(0,T;X)}'. \] \end{proposition} \begin{proof} See, for instance, \cite[Proposition~23.7]{zeidlerl}. \end{proof} The analysis of evolutive differential problems requires functional spaces involving time-derivatives. Let $X$ and $Y$ be two separable Hilbert spaces such that $X\subset Y$ with continuous and dense embedding. Let $X'$ be the dual space of $X$ with respect to the pivot space $Y$.
More precisely, $Y$ can be identified with a subset of $X'$ and \[ \langle w, v\rangle_{X}=(w,v)_{Y}\quad \forall w\in Y \quad \forall v\in X. \] We will denote by $\mathrm{W}^{1,2}(0,T;X,X')$ the functional space given by \[ \mathrm{W}^{1,2}(0,T;X,X'):= \left\{ v\in \L^2(0,T;X):\: \dfrac{dv}{dt}\in \L^2(0,T;X') \right\}, \] where $\dfrac{dv}{dt}$ is the \textit{generalized time-derivative} of $v$, characterized by \[ \int_{0}^T\dual{\dfrac{dv}{dt}(t)}{w}_{X}\varphi(t)dt= - \int_{0}^T\paren{v(t),w}_{Y}\varphi'(t)dt \qquad \forall w\in X \quad\forall\varphi\in C_0^{\infty}(0,T). \] It is well known that $\mathrm{W}^{1,2}(0,T;X,X')$ endowed with the norm \[ \norm{v}_{\mathrm{W}^{1,2}(0,T;X,X')}:=\norm{v}_{L^2(0,T;X)} + \norm{\dfrac{dv}{dt}}_{L^2(0,T;X')} \] is a Banach space and that $\mathrm{W}^{1,2}(0,T;X,X')\subset \mathcal{C}^0([0,T];Y)$ with continuous embedding (see, for instance, \cite[Proposition~23.23]{zeidlerl}). Let $k\in\mathbb{Z}^+$. The generalized time-derivative of order $k$ of $v\in \L^2(0,T;X)$, denoted by $\dfrac{d^kv}{dt^k}$, can be defined inductively. Hence, we can consider the space \[ \mathrm{H}^k(0,T;X):=\left\{ v\in \L^2(0,T;X):\: \dfrac{d^jv}{dt^j}\in \L^2(0,T;X),\: j=1,\ldots,k \right\}, \] which is a Banach space with the norm \[ \norm{v}_{\mathrm{H}^k(0,T;X)}:=\sum_{j=0}^k\norm{\dfrac{d^jv}{dt^j}}_{L^2(0,T;X)}. \] Furthermore, the embedding $ \mathrm{H}^k(0,T;X) \subset \mathcal{C}^{k-1}([0,T];X) $ is continuous for any $k\in\mathbb{Z}^+$. \section{The degenerate parabolic problem}\label{degenerate} Let $X$ and $Y$ be two real separable Hilbert spaces such that $X\subset Y$ with continuous and dense embedding. We denote by $(\cdot,\cdot)_{X}$ and $(\cdot,\cdot)_{Y}$ the inner products on $X$ and $Y$, respectively, and by $\|\cdot\|_{X}$, $\|\cdot\|_{Y}$ the corresponding norms. Furthermore, $\langle\cdot,\cdot\rangle_{X}$ and $\langle\cdot,\cdot\rangle_{Y}$ denote the duality pairings between $X$ and its dual $X'$ and between $Y$ and $Y'$, respectively.
Let $R:Y \to Y'$ be a linear and bounded operator. Let $T>0$ and, for any $t\in[0,T]$, let $A(t):X\to X'$ be a linear and bounded operator. Then, given $f\in \L^2(0,T;X')$ and $u_{0}\in Y$, the degenerate parabolic problem reads as follows. \begin{problem} \label{ppdc} Find $u\in\L^2(0,T;X)$ such that: \begin{align*} \frac{d}{dt}\left\langle Ru(t),v\right\rangle_{Y} + \produ{A(t)u(t),v}_{X}=&\produ{f(t),v}_{X}\quad \forall v\in X ,\\ \left<Ru(0),v\right>_{Y}=&\left<Ru_0,v\right>_{Y}\quad\forall v \in Y. \end{align*} \end{problem} The first identity in Problem~\ref{ppdc} holds in the space of distributions $\mathcal{D}'(0,T)$, i.e., this equation is equivalent to \begin{equation}\nonumber - \int_0^T \left\langle Ru(t),v\right\rangle_{Y} \varphi'(t)dt + \int_0^T \produ{A(t)u(t),v}_{X} \varphi(t)dt = \int_0^T \produ{f(t),v}_{X} \varphi(t)dt \end{equation} for all $v\in X$ and $\varphi\in C_0^\infty(0,T)$. Moreover, Problem~\ref{ppdc} can be formulated as either of the following two equivalent problems. \begin{problem} \label{ppdint} Find $u\in \L^2(0,T;X)$ such that \[ -\int_0^T{\left\langle Ru(t), v'(t)\right\rangle}_{Y}dt+\int_0^T{\left\langle A(t)u(t),v(t)\right\rangle}_{X}dt =\int_0^T{\left\langle f(t),v(t)\right\rangle}_{X}dt+\left\langle Ru_0,v(0)\right\rangle_{Y}, \] for all $v\in \L^2(0,T;X)\cap H^1(0,T;Y)$ with $v(T)=0$. \end{problem} \begin{problem} \label{ppddual} Find $u\in \L^2(0,T;X)$ satisfying \begin{align*} \frac {d}{dt}Ru(\cdot) +A(\cdot)u(\cdot)&=f(\cdot)\quad\text{in} \ \ \L^2(0,T;X'),\\ Ru(0)&=Ru_0\quad \text{in}\quad Y'. \end{align*} \end{problem} Let us remark that the first equation in Problem~\ref{ppddual} implies that $Ru(\cdot)\in H^1(0,T;X')$; consequently, the function $t\mapsto Ru(t)$ is absolutely continuous in $X'$ and, in particular, $Ru(0)\in X'$.
On the other hand, since the inclusion $X\subset Y$ is dense and continuous, the inclusion $Y'\subset X'$ is also dense and continuous. Therefore, recalling that $Ru_0\in Y'$, the initial condition given by the second equation of Problem~\ref{ppddual} makes sense and is equivalent to the second equation of Problem~\ref{ppdc}. In order to obtain the well-posedness result for Problem~\ref{ppdc} (and, equivalently, for Problems~\ref{ppdint} and \ref{ppddual}), we need to recall the following definition; see \cite[Section~III.3]{showalter}. \begin{definition} Let $Z$ be a real separable Hilbert space and let $\mathcal{G}:=\{G(t):Z\to Z':t\in[0,T]\}$ be a family of linear and bounded operators. $\mathcal{G}$ is called \textit{monotone} if $\left<G(t)v,v\right>_{Z}\geq 0$ for any $v\in Z$ and for any $t\in[0,T]$. $\mathcal{G}$ is called \textit{self-adjoint} if $\left<G(t)u,v\right>_{Z}=\left<G(t)v,u\right>_{Z}$ for any $u,v\in Z$ and for any $t\in[0,T]$. Similarly, $\mathcal{G}$ is called \textit{regular} if for each $u,v\in Z$ the map $t\mapsto \left<G(t)u,v\right>_{Z}$ is absolutely continuous on $[0,T]$ and there exists a function $k\in L^1(0,T)$ which satisfies \[ \abs{\frac{d}{dt}\left<G(t)u,v\right>_{Z}}\leq k(t)\|u\|_Z \|v\|_Z\qquad \forall u,v\in Z\quad\textrm{a.e. } t\in [0,T]. \] \end{definition} The following result gives sufficient conditions for the existence and uniqueness of a solution of Problem~\ref{ppdc}; its proof can be found in \cite[Propositions~III.3.2 and III.3.3]{showalter}. \begin{theorem}\label{welld} Assume that the operator $R$ is monotone and self-adjoint, and that there exist constants $\lambda>0$ and $\alpha>0$ such that \begin{equation}\label{existencia} \lambda\left<Rv,v\right>_{Y} + \left<A(t)v,v\right>_{X} \geq \alpha \|v\|_X^2\quad \forall v\in X\quad\forall t\in [0,T].
\end{equation} Then, there exists a solution of Problem~\ref{ppdc} and it satisfies \begin{equation}\label{dependencia-d} \|u\|_{\L^2(0,T;X)}\leq C \left(\|f\|_{\L^2(0,T;X')}^2+ \left<Ru_0,u_0 \right>_{Y}\right)^{\frac{1}{2}}, \end{equation} for some constant $C>0$. Furthermore, if $(A(t))_{t\in[0,T]}$ is a regular family of self-adjoint operators, then the solution of Problem~\ref{ppdc} is unique. \end{theorem} \section{Fully-discrete approximation for the degenerate parabolic problem}\label{discreto-d} In this section we present the fully-discrete approximation of the degenerate parabolic problem introduced in the previous section. To this aim, we assume that the family of operators $A(t)$ and the operator $R$ satisfy the sufficient conditions given in Theorem~\ref{welld}, which guarantee the existence and uniqueness of the solution of Problem~\ref{ppdc}. The fully-discrete approximation is obtained by using the finite element method in space and a backward-Euler scheme in time. Let $\{X_h\}_{h>0}$ be a sequence of finite-dimensional subspaces of $X$ and let $t_n: =n\Delta t$, $n=0,\ldots,N,$ be a uniform partition of $[0,T]$ with time-step $\Delta t := T/N$. For any finite sequence $\{\theta^n: \ n=0,\ldots,N\}$ we denote \[ \overline{\partial}\theta^n:=\frac{\theta^n-\theta^{n-1}}{\Delta t},\qquad n=1,\dots,N. \] Let $u_{0,h}\in X_h$ be a given approximation of $u_0$. The fully-discrete approximation of Problem~\ref{ppdc} reads as follows.
\begin{problem}\label{ppdd} Find $u_h^n\in X_h$, $n=1,\ldots,N$, such that \begin{align*} \langle R\overline{\partial} u_h^n ,v\rangle_{Y} +\langle A (t_n)u_h^n,v\rangle_{X} &=\langle f(t_n),v\rangle_{X}\qquad \forall v\in X_h,\\ u_h^0&=u_{0,h}. \end{align*} \end{problem} We can easily check that at each step $n=1,\ldots,N$, $u_h^n$ is computed as the solution of the following problem: find $u_h^n\in X_h$ such that \begin{align*} \mathcal{A}_{n}(u_h^n,v)=F_n(v)\qquad \forall v\in X_h, \end{align*} where $\mathcal{A}_{n}$ and $F_n$ are defined by \begin{align*} \mathcal{A}_{n}(w,v)&:=\langle R w,v\rangle_{Y}+\Delta t \ \langle A(t_n)w,v \rangle_{X}\qquad\forall w,\ v\in X_h,\\ F_n(v)&:=\Delta t\ \langle f(t_n),v\rangle_{X}+\langle R u_h^{n-1},v\rangle_{Y}\qquad\forall v\in X_h. \end{align*} We will use the Lax-Milgram Lemma to deduce the existence and uniqueness of the solution of Problem~\ref{ppdd} for each $n=1,\ldots,N$. Since $F_n$ is linear and bounded and $\mathcal{A}_{n}$ is bilinear and bounded, we only need to prove that $\mathcal{A}_{n}$ is elliptic in $X_h$. In fact, if we assume that $0<\Delta t\leq 1/\lambda$, for any $v\in X_h$ we have \[ \mathcal{A}_{n}(v,v)=\langle R v,v\rangle_{Y}+\Delta t \langle A(t_n)v,v \rangle_{X} \geq \Delta t\left[\lambda\langle R v,v\rangle_{Y}+\produ{A(t_n)v,v}_{X}\right], \] then, from \eqref{existencia} it follows that \[ \mathcal{A}_{n}(v,v)\geq \alpha\Delta t \|v\|^2_{X}\qquad \forall v\in X_h. \] Consequently, we have the following result about the existence and uniqueness of the solution of the fully-discrete Problem~\ref{ppdd}. \begin{theorem}\label{welldh} Assume that the family of operators $A(t)$ and the operator $R$ satisfy the sufficient conditions given in Theorem~\ref{welld}, which guarantee the existence and uniqueness of the solution of Problem~\ref{ppdc}.
If the time-step $\Delta t$ is small enough (\textit{e.g.}, $0<\Delta t\leq 1/\lambda$), then the fully-discrete Problem~\ref{ppdd} has a unique solution $u_h^n\in X_h$ for each $n=1,\ldots,N$. \end{theorem} \section{Error estimates for the fully-discrete approximation}\label{error-d} In this section we deduce some error estimates for the fully-discrete approximation. To this aim, from now on we work under the assumptions of Theorems~\ref{welld} and \ref{welldh}. Moreover, we assume that the solution to Problem~\ref{ppdc} satisfies $u\in\H^1(0,T;X)$. Furthermore, we consider the orthogonal projection operator $\Pi_h:X \to X_h$, defined by \[ \Pi_h w\in X_h:\quad (\Pi_h w,v)_{X}=(w,v)_{X}\quad \forall v\in X_h. \] Clearly, $\Pi_h$ is well-defined and satisfies \begin{align}\label{infprodg} \|w-\Pi_h w\|_{X}\leq \inf_{v\in X_h} \|w-v\|_{X}\quad \forall w\in X. \end{align} From now on, $u$ and $u_h^n$, $n=1,\ldots,N$, denote the solutions to Problem~\ref{ppdc} and Problem~\ref{ppdd}, respectively. We define the error and consider its splitting \begin{align} \label{splitE-d} e^n_h:=u(t_n)-u_h^n=\rho_h^n+\sigma_h^n, \qquad n=1,\ldots,N, \end{align} where \begin{align}\label{rhoysigmad} \rho_h(t):=u(t)-\Pi_h u(t),\quad \rho_h^n:=\rho_h(t_n),\quad \sigma_h^n:=\Pi_h u(t_n)-u_h^n. \end{align} Furthermore, we denote \begin{equation}\nonumber \tau^n:=\frac{u(t_n)-u(t_{n-1})}{\Delta t}-\partial_t u(t_n). \end{equation} \begin{lemma}\label{lemdg} If $u\in\H^1(0,T;X)$, then there exists a constant $C>0$, independent of $h$ and $\Delta t$, such that \begin{equation}\label{desi-sigma} \langle R\sigma_h^n,\sigma_h^n\rangle_{Y} +\Delta t\,\sum_{k=1}^n\|\sigma_h^k\|^2_{X} \leq C\left[\langle R\sigma_h^0,\sigma_h^0\rangle_{Y} +\Delta t\,\sum_{k=1}^N \left\{\|\tau^k\|^2_{Y}+\|\overline{\partial}\rho_h^k\|^2_{X}+\|\rho_h^k\|^2_{X} \right\}\right].
\end{equation} Furthermore, if $u_0\in X$ and for each $t\in[0,T]$ the operator $A(t)$ is monotone and there exists a constant $C>0$ such that \begin{equation}\label{Ap-bounded} \langle A'(t)u,v\rangle\leq C\|u\|_{X}\|v\|_{X}\quad \forall u,v\in X\quad \forall t\in [0,T], \end{equation} then there exists a constant $C>0$, independent of $h$ and $\Delta t$, such that \begin{equation}\label{estimasumaderivada-R} \begin{split} \Delta t\sum_{k=1}^n\langle R\overline{\partial} \sigma_h^k,\overline{\partial} \sigma_h^k\rangle_{Y} &+\langle A(t_n)\sigma_h^n,\sigma_h^n\rangle_{X}\\ &\leq C\left[ \Vert \sigma_h^{0}\Vert_X^2 + \Vert \rho_h^{0}\Vert_X^2 + \Vert \rho_h^{n}\Vert_X^2 +\Delta t\sum_{k=1}^{N} \left\{\Vert{\tau^k}\Vert_{Y}^2 + \Vert\overline{\partial}\rho_h^k\Vert_{X}^2+ \Vert\rho_h^k\Vert_{X}^2\right\} \right]. \end{split} \end{equation} \end{lemma} \begin{proof} Let $n\in\{1,\dots,N\}$, $k\in\{1,\dots,n\}$ and $v\in X_h$. Then, from Problem~\ref{ppdc} and Problem~\ref{ppdd}, it follows that \begin{equation}\label{indentityv} \langle R\overline{\partial} \sigma_h^k,v\rangle_{Y} +\langle A(t_k)\sigma_h^k,v\rangle_{X} =\langle R\tau^k,v\rangle_{Y} -\langle R\overline{\partial}\rho_h^k,v\rangle_{Y} -\langle A(t_k)\rho_h^k,v\rangle_{X}\qquad\forall v\in X_h. \end{equation} Testing this identity with $v=\sigma_h^k\in X_h$, we have \begin{equation}\label{identitysigma} \langle R\overline{\partial} \sigma_h^k,\sigma_h^k\rangle_{Y} +\langle A(t_k)\sigma_h^k,\sigma_h^k\rangle_{X} =\langle R\tau^k,\sigma_h^k\rangle_{Y} -\langle R\overline{\partial}\rho_h^k,\sigma_h^k\rangle_{Y} -\langle A(t_k)\rho_h^k,\sigma_h^k\rangle_{X}.
\end{equation} Since $R$ is monotone and self-adjoint, the first term on the left-hand side of the previous identity satisfies \begin{equation*} \langle R\overline{\partial} \sigma_h^k,\sigma_h^k\rangle_{Y} \geq \dfrac{1}{2\Delta t} \left\{\langle R\sigma_h^k,\sigma_h^k\rangle_{Y} - \langle R\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{Y} \right\}. \end{equation*} Moreover, by recalling \eqref{existencia}, there exist $\lambda,\alpha>0$ such that \[ \langle A(t_k)\sigma_h^k,\sigma_h^k\rangle_{X} \geq \alpha \Vert{\sigma_h^k}\Vert_{X}^{2} -\lambda \langle R\sigma_h^k,\sigma_h^k\rangle_{Y}. \] Thus, replacing the two previous inequalities in \eqref{identitysigma}, it follows that \begin{equation}\label{pre1B} \begin{split} \dfrac{1}{2\Delta t} \left[\langle R\sigma_h^k,\sigma_h^k\rangle_{Y} - \langle R\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{Y}\right] &+ \alpha \Vert{\sigma_h^k}\Vert_{X}^{2} -\lambda \langle R\sigma_h^k,\sigma_h^k\rangle_{Y}\\ &\leq\langle R\tau^k,\sigma_h^k\rangle_{Y} -\langle R\overline{\partial}\rho_h^k,\sigma_h^k\rangle_{Y} -\langle A(t_k)\rho_h^k,\sigma_h^k\rangle_{X}. \end{split} \end{equation} Now, since the operator $R$ is monotone and self-adjoint, it satisfies the following Cauchy-Schwarz type inequality: \begin{equation}\label{cauchy-R2} \vert\langle Rv,w\rangle_{Y}\vert\leq \langle Rv,v\rangle_{Y}^{1/2} \langle Rw,w\rangle_{Y}^{1/2}. \end{equation} Hence, we have \begin{align*} &\vert\langle R\tau^k,\sigma_h^k\rangle_{Y}\vert \leq \dfrac{1}{4} \langle R\sigma_h^k,\sigma_h^k\rangle_{Y} + \langle R\tau^k,\tau^k\rangle_{Y},\quad &\vert\langle R\overline{\partial}\rho_h^k,\sigma_h^k\rangle_{Y}\vert\leq \dfrac{1}{4} \langle R\sigma_h^k,\sigma_h^k\rangle_{Y} + \langle R\overline{\partial}\rho_h^k,\overline{\partial}\rho_h^k\rangle_{Y}.
\end{align*} On the other hand, by using the uniform continuity of the family of operators $A$, we can notice that \[ \vert\langle A(t_k)\rho_h^k,\sigma_h^k\rangle_{X}\vert \leq \dfrac{\alpha}{2} \Vert{\sigma_h^k}\Vert_{X}^{2} + \dfrac{1}{2\alpha}\Vert A\Vert ^2 \Vert{\rho_h^k}\Vert_{X}^{2}. \] Therefore, by replacing the previous inequalities in \eqref{pre1B} and using the fact that $R$ is a bounded operator and $X\subset Y$ is a continuous embedding, we deduce \begin{align*} \langle R\sigma_h^k,\sigma_h^k\rangle_{Y} -\langle R\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{Y} &+\alpha\Delta t \|\sigma_h^k\|^2_{X}\\ &\leq (1+2\lambda)\Delta t \langle R\sigma_h^k,\sigma_h^k\rangle_{Y} + C\Delta t\left[ \Vert\tau^k\Vert_{Y}^2 + \Vert\overline{\partial}\rho_h^k\Vert_{X}^2 + \Vert{\rho_h^k}\Vert_{X}^{2} \right]. \end{align*} Hence, by summing over $k$, we obtain \begin{align*} &\langle R\sigma_h^n,\sigma_h^n\rangle_{Y} -\langle R\sigma_h^0,\sigma_h^0\rangle_{Y} +\alpha\Delta t\sum_{k=1}^n\|\sigma_h^k\|^2_{X}\\ &\qquad\qquad\qquad\qquad \leq(1+2\lambda)\Delta t\sum_{k=1}^n\langle R\sigma_h^k,\sigma_h^k\rangle_{Y} +C\Delta t\,\sum_{k=1}^n \left[ \Vert\tau^k\Vert_{Y}^2 + \Vert\overline{\partial}\rho_h^k\Vert_{X}^2 + \Vert{\rho_h^k}\Vert_{X}^{2} \right]. 
\end{align*} Then, if $\Delta t$ is small enough so that $(1+2\lambda)\Delta t \leq \frac12$, we have \begin{equation}\label{Cota1dg} \begin{split} &\dfrac12\langle R\sigma_h^n,\sigma_h^n\rangle_{Y}+\alpha\Delta t\sum_{k=1}^n\|\sigma_h^k\|^2_{X}\\ &\qquad\qquad \leq \langle R\sigma_h^0,\sigma_h^0\rangle_{Y} + (1+2\lambda)\Delta t\sum_{k=1}^{n-1}\langle R\sigma_h^k,\sigma_h^k\rangle_{Y}+C\Delta t\,\sum_{k=1}^n \left[ \Vert\tau^k\Vert_{Y}^2 + \Vert\overline{\partial}\rho_h^k\Vert_{X}^2 + \Vert{\rho_h^k}\Vert_{X}^{2} \right], \end{split} \end{equation} which implies \[ \langle R\sigma_h^n,\sigma_h^n\rangle_{Y} \leq 2\langle R\sigma_h^0,\sigma_h^0\rangle_{Y} + 2(1+2\lambda)\Delta t\sum_{k=1}^{n-1}\langle R\sigma_h^k,\sigma_h^k\rangle_{Y}+C\Delta t\,\sum_{k=1}^n \left[ \Vert\tau^k\Vert_{Y}^2 + \Vert\overline{\partial}\rho_h^k\Vert_{X}^2 + \Vert{\rho_h^k}\Vert_{X}^{2} \right]. \] Therefore, by using the discrete Gronwall lemma (see, for instance, \cite[Lemma 1.4.2]{quarteronivalli}), we obtain \begin{align*} &\langle R\sigma_h^n,\sigma_h^n\rangle_{Y} \leq C\left\{ \langle R\sigma_h^0,\sigma_h^0\rangle_{Y} +\Delta t\,\sum_{k=1}^n \left[ \Vert\tau^k\Vert_{Y}^2 + \Vert\overline{\partial}\rho_h^k\Vert_{X}^2 + \Vert{\rho_h^k}\Vert_{X}^{2} \right]\right\}. \end{align*} Hence, by using this inequality to estimate the second term on the right-hand side of \eqref{Cota1dg}, we deduce \eqref{desi-sigma}. Next, we prove \eqref{estimasumaderivada-R}, assuming that each $A(t)$ is monotone and that \eqref{Ap-bounded} holds true.
In fact, by taking $v=\overline{\partial}\sigma_h^k\in X_h$ in \eqref{indentityv}, we obtain \begin{align}\label{testingdifsigma-R} \langle R \overline{\partial}\sigma_h^k,\overline{\partial}\sigma_h^k\rangle_{Y}+\langle A(t_k)\sigma_h^k,\overline{\partial}\sigma_h^k\rangle_{X} = \langle R \tau^k,\overline{\partial}\sigma_h^k\rangle_{Y}-\langle R\overline{\partial} \rho_h^k,\overline{\partial} \sigma_h^k\rangle_{Y}-\langle A(t_k)\rho_h^k,\overline{\partial}\sigma_h^k\rangle_{X }. \end{align} Now, since each operator $A(t)$ is monotone and self-adjoint, it follows \begin{equation*} \langle A(t_k)\overline{\partial} \sigma_h^k,\sigma_h^k\rangle_{X} \geq \dfrac{1}{2\Delta t} \left\{\langle A(t_k)\sigma_h^k,\sigma_h^k\rangle_{X} - \langle A(t_k)\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{X} \right\}, \end{equation*} and therefore \begin{equation}\label{cotaoperatorAsigma-R} \begin{split} &\langle A(t_k)\sigma_h^k,\overline{\partial}\sigma_h^k\rangle_{X}\\ &\quad \geq \frac{1}{2\Delta t}\left[ \langle A(t_k)\sigma_h^k,\sigma_h^k\rangle_{X}-\langle A(t_{k-1})\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{X} \right] -\frac{1}{2\Delta t} \left\langle\left(\int_{t_{k-1}}^{t_k} A'(t)dt\right) \sigma_h^{k-1},\sigma_h^{k-1}\right\rangle_{X}. \end{split} \end{equation} On the other hand, a straightforward computation shows that \begin{equation}\label{cotaoperatorArho-R} \begin{split} \langle A(t_k)\rho_h^k,\overline{\partial}\sigma_h^k\rangle_{X}&= \frac{1}{\Delta t}\left[ \langle A(t_k)\rho_h^k,\sigma_h^k\rangle_{X}-\langle A(t_{k-1})\rho_h^{k-1},\sigma_h^{k-1}\rangle_{X} \right]-\langle A(t_k)\overline{\partial}\rho_h^k,\sigma_h^{k-1}\rangle_{X}\\ &\quad-\frac{1}{\Delta t} \left\langle\left(\int_{t_{k-1}}^{t_k} A'(t)dt\right) \rho_h^{k-1},\sigma_h^{k-1}\right\rangle_{X}. 
\end{split} \end{equation} Hence, by using \eqref{cotaoperatorAsigma-R} and \eqref{cotaoperatorArho-R} in \eqref{testingdifsigma-R}, we have \begin{equation}\nonumber \begin{split} \langle R \overline{\partial}\sigma_h^k,\overline{\partial}\sigma_h^k\rangle_{Y} &+\frac{1}{2\Delta t}\left[ \langle A(t_k)\sigma_h^k,\sigma_h^k\rangle_{X}-\langle A(t_{k-1})\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{X} \right] \\ &\leq \langle R \tau^k,\overline{\partial}\sigma_h^k\rangle_{Y}-\langle R\overline{\partial} \rho_h^k,\overline{\partial} \sigma_h^k\rangle_{Y} -\frac{1}{\Delta t}\left[ \langle A(t_k)\rho_h^k,\sigma_h^k\rangle_{X}-\langle A(t_{k-1})\rho_h^{k-1},\sigma_h^{k-1}\rangle_{X} \right]\\ &\hspace*{2cm} +\langle A(t_k)\overline{\partial}\rho_h^k,\sigma_h^{k-1}\rangle_{X} + \frac{1}{\Delta t} \left\langle\left(\int_{t_{k-1}}^{t_k} A'(t)dt\right) (\rho_h^{k-1}+ \sigma_h^{k-1}),\sigma_h^{k-1}\right\rangle_{X}, \end{split} \end{equation} then, recalling that the family of operators $A(t)$ is uniformly bounded and that the operator $R$ is also bounded, using \eqref{cauchy-R2} and \eqref{Ap-bounded}, it follows that \begin{align*} &\frac12\langle R\overline{\partial} \sigma_h^k,\overline{\partial} \sigma_h^k\rangle_{Y}+ \frac{1}{2\Delta t}\left\{\langle A(t_k)\sigma_h^k,\sigma_h^k\rangle_{X}-\langle A(t_{k-1})\sigma_h^{k-1},\sigma_h^{k-1}\rangle_{X}\right\}\\ &\quad\leq -\frac{1}{\Delta t}\left\{\langle A(t_k)\rho_h^k,\sigma_h^k\rangle_{X}-\langle A(t_{k-1})\rho_h^{k-1},\sigma_h^{k-1}\rangle_{X}\right\} +C \left\{\|\sigma_h^{k-1}\|_{X}^2+ \|\tau^k\|_{Y}^2+\|\overline{\partial} \rho_h^k\|_{X}^2+\|\rho_h^{k-1}\|_{X}^2\right\}, \end{align*} then, multiplying by $2\Delta t$, summing over $k$ and using the fact that $\langle A(t_n)\rho_h^n,\sigma_h^n\rangle_{X}\leq \langle A(t_n)\rho_h^n,\rho_h^n\rangle_{X}+\frac{1}{4}\langle A(t_n)\sigma_h^n,\sigma_h^n\rangle_{X}$, we obtain \begin{multline*} \Delta t\sum_{k=1}^n\langle R 
\overline{\partial}\sigma_h^k,\overline{\partial}\sigma_h^k\rangle_{Y} +\frac{1}{2}\langle A(t_n)\sigma_h^n,\sigma_h^n\rangle_{X} \\ \qquad \leq \langle A(0)(2\rho_h^{0}+\sigma_h^{0}),\sigma_h^{0}\rangle_{X} +2\langle A(t_n)\rho_h^n,\rho_h^n\rangle_{X} +C \Delta t \sum_{k=1}^n\left\{\|\sigma_h^{k-1}\|_{X}^2+ \|\tau^k\|_{Y}^2+\|\overline{\partial} \rho_h^k\|_{X}^2+\|\rho_h^{k-1}\|_{X}^2\right\}. \end{multline*} Finally, using \eqref{desi-sigma} to estimate the sum involving $\|\sigma_h^{k-1}\|_{X}$ and recalling that $A(t)$ is uniformly bounded and monotone, we deduce \eqref{estimasumaderivada-R}. \end{proof} Now, we are in a position to prove the following error estimate. \begin{theorem}\label{teo-deg-u} If $u\in\H^1(0,T;X)\cap \H^2(0,T;Y)$, then there exists a constant $C>0$, independent of $h$ and $\Delta t$, such that \begin{equation}\label{esti-eh} \begin{split} &\max_{1 \leq n \leq N}\langle R(u(t_n)-u_h^n),u(t_n)-u_h^n\rangle_{Y} +\Delta t\,\sum_{n=1}^N\|u(t_n)-u_h^n\|^2_{X} \\ &\leq C\left\{ {\| u_0 -u_{0,h} \|_{Y}^2} +\hspace*{-1mm}\max_{0 \leq n \leq N}\hspace*{-1mm}\left[ \inf_{v\in X_h}{\|u(t_{n}) - v\|_{X}^2}\right] \hspace*{-1mm}+\hspace*{-1mm}\int_0^T\hspace*{-2mm} \inf_{v\in X_h} \| \partial_{t} u(t)-v \|_{X}^2\, dt + (\Delta t)^2\hspace*{-1mm} \int_0^T \hspace*{-2mm}\| \partial_{tt} u(t)\|^2_{Y}\,dt \right\}.
\end{split} \end{equation} Furthermore, if $u_0\in X$ and for each $t\in[0,T]$ the operator $A(t)$ is monotone and \eqref{Ap-bounded} holds true, then there exists a constant $C>0$, independent of $h$ and $\Delta t$, satisfying \begin{equation}\label{esti-deh} \begin{split} &\Delta t \sum_{k=1}^n\produ{R(\partial_{t}u(t_k)-\overline{\partial} u_h^k),(\partial_{t}u(t_k)-\overline{\partial} u_h^k)}_{Y} +\max_{1 \leq n \leq N}\langle A(t_n)(u(t_n)-u_h^n),u(t_n)-u_h^n\rangle_{X} \\ & \leq C\left\{ {\| u_0 -u_{0,h} \|_{X}^2} +\hspace*{-1mm}\max_{0 \leq n \leq N}\hspace*{-1mm}\left[ \inf_{v\in X_h}{\|u(t_{n}) - v\|_{X}^2}\right] \hspace*{-1mm}+\hspace*{-1mm}\int_0^T\hspace*{-2mm} \inf_{v\in X_h} \| \partial_{t} u(t)-v \|_{X}^2\, dt + (\Delta t)^2\hspace*{-1mm} \int_0^T \hspace*{-2mm}\| \partial_{tt} u(t)\|^2_{Y}\,dt \right\}. \end{split} \end{equation} \end{theorem} \begin{proof} First of all, we notice that \eqref{infprodg} and \eqref{rhoysigmad} imply \begin{equation}\label{ineq1-cea} \|\rho_h^n\|_{X}= \|\rho_h(t_n)\|_{X}\leq C \inf_{z\in X_h}\|u(t_n)-z\|_{X}. \end{equation} Moreover, the regularity assumption on $u$ implies $\partial_{t}\Pi_h u(t)=\Pi_h(\partial_{t}u(t))$, and consequently \[ \|\partial_{t}\rho_h(t)\|_{X}\leq C \inf_{z\in X_h}\|\partial_{t} u(t)-z\|_{X}. \] Hence, it is easy to check that \begin{align*} \Delta t\sum_{k=1}^N \|\overline{\partial} \rho_h^k \|^2_{X} =\Delta t\sum_{k=1}^N\left\|\frac{1}{\Delta t}\int_{t_{k-1}}^{t_k} \hspace*{-2mm}\partial_t\rho_h(t)\,dt\right\|_X^2 \leq\sum_{k=1}^N \int_{t_{k-1}}^{t_k} \hspace*{-2mm}\|\partial_t\rho_h(t)\|_X^2\,dt \leq C \int_0^T \hspace*{-2mm} \inf_{v\in X_h} \| \partial_{t} u(t)-v \|_{X}^2\, dt.
\end{align*} On the other hand, by combining a Taylor expansion with the Cauchy-Schwarz inequality, we obtain \begin{equation}\nonumber \sum_{k=1}^N \| \tau^k \|^2_{Y} =\sum_{k=1}^N \left\|\frac{1}{\Delta t} \int_{t_{k-1}}^{t_{k}} (t_{k-1}-t)\partial_{tt}u(t)\,dt\right\|_{Y}^2 \leq \Delta t \int_0^T\|\partial_{tt}u(t)\|_{Y}^2 \,dt. \end{equation} Now, by writing $\sigma_h^0=e_h^0-\rho_h^0$ and using the fact that $R$ is self-adjoint and monotone\footnote{Notice that if $R$ is self-adjoint and monotone, we have $\langle R(v+w),(v+w)\rangle_{Y}\leq 2\left[ \langle R(v),v\rangle_{Y} + \langle R(w),w\rangle_{Y}\right]$ for any $v,w\in Y$.}, from the second equation of Problem~\ref{ppdc}, it follows that \begin{equation} \label{ineq4-cea} \langle R\sigma_h^0,\sigma_h^0\rangle_{Y}\leq 2\langle R(u_0-u_{0,h}),u_0-u_{0,h}\rangle_{Y}+2\langle R\rho_h^0,\rho_h^0\rangle_{Y}. \end{equation} By using inequalities \eqref{ineq1-cea}--\eqref{ineq4-cea} and Lemma~\ref{lemdg}, \eqref{esti-eh} follows from the fact that $u(t_n)-u_h^n=\rho_h^n+\sigma_h^n$ (see \eqref{splitE-d}) and the triangle inequality. Next, we need to deduce \eqref{esti-deh}. To this aim, we first recall that $\partial_{t}u(t_k)-\overline{\partial} u_h^k = \left[\overline{\partial} u(t_k)-\overline{\partial} u_h^k\right]-\tau^k$; then, by using \eqref{splitE-d}, it follows that $\partial_{t}u(t_k)-\overline{\partial} u_h^k = \left(\overline{\partial} \rho_h^k + \overline{\partial} \sigma_h^k\right) - \tau^k$. Therefore, it is easy to obtain \begin{equation*} \produ{R(\partial_{t}u(t_k)-\overline{\partial} u_h^k),\partial_{t}u(t_k)-\overline{\partial} u_h^k}_{Y} \leq C \left[ \langle R\overline{\partial} \sigma_h^k,\overline{\partial} \sigma_h^k\rangle_{Y} + \| {\overline{\partial} \rho_h^k}\|_{Y}^2 + \| {\tau^k}\|_{Y}^2 \right].
\end{equation*} Consequently, \eqref{esti-deh} follows by using \eqref{estimasumaderivada-R}, by proceeding as in the proof of \eqref{esti-eh} and noticing that \begin{equation}\nonumber \Delta t\,\sum_{n=1}^N \inf_{v\in X_h}{\|u(t_n)-v\|^2_{X}} \leq T \max_{1 \leq n \leq N}\left[ \inf_{v\in X_h}{\|u(t_{n}) - v\|_{X}^2} \right]. \end{equation} \end{proof} \section{Application to the eddy current problem}\label{aplicaciones-d} The eddy current model is obtained by dropping the displacement currents from the Maxwell equations (see \cite[Chapter 8]{bossavit}) and it provides a reasonable approximation to the solution of the full Maxwell system in the low frequency range (see \cite{AB}). This model is commonly used in many problems in science and industry: induction heating, electromagnetic braking, electric generation, etc.\ (see \cite[Chapter 9]{alonsovallilibro}). The purpose of the eddy current problem is to determine the eddy currents induced in a three-dimensional conducting domain $\hat{\Omega}_{\mathrm{c}}$ by a given time-dependent, compactly-supported current density $\mathbf{J}$. The eddy current problem reads as follows. \begin{problem} \label{eddy-d} Find the magnetic field $\Hn:\mathbb{R}^3\times[0,T]\to\mathbb{R}^3$ and the electric field $\mathbf{E}:\mathbb{R}^3\times[0,T]\to\mathbb{R}^3$ satisfying \begin{align*} \partial_t \left(\mu \Hn\right)+ \mathop{\mathbf{curl}}\nolimits \mathbf{E} &= \boldsymbol{0}, \\ \mathop{\mathbf{curl}}\nolimits \Hn &= \Jn + \sigma \mathbf{E}, \\ \mathop{\mathrm{div}}\nolimits (\varepsilon \mathbf{E}) &= 0, \\ \mathop{\mathrm{div}}\nolimits (\mu \Hn) &= 0, \end{align*} where $\mu$, $\sigma$ and $\varepsilon$ represent the physical (scalar) parameters called magnetic permeability, electric conductivity and electric permittivity, respectively.
\end{problem} We assume that these parameters are piecewise smooth, real-valued functions satisfying: \begin{align*} &\varepsilon_{\max}\geq \varepsilon(\mathbf{x})\geq \varepsilon_{\min} > 0\ \quad\mathrm{a.e.} \text{ in } \hat{\Omega}_{\mathrm{c}} \quad\text{and}\quad\varepsilon(\mathbf{x})= \varepsilon_{\min} \quad \mathrm{a.e.}\text{ in }\mathbb{R}^3\setminus\overline{\hat{\Omega}}_{\mathrm{c}}, \\ &\sigma_{\max}\geq \sigma(\mathbf{x})\geq \sigma_{\min} > 0 \quad\mathrm{a.e.} \text{ in } \hat{\Omega}_{\mathrm{c}} \quad\text{and}\quad\sigma(\mathbf{x})= 0 \ \quad \quad \mathrm{a.e.}\text{ in }\mathbb{R}^3\setminus\overline{\hat{\Omega}}_{\mathrm{c}}, \\ &\mu_{\max}\geq \mu(\mathbf{x})\geq \mu_{\min} > 0 \quad\mathrm{a.e.} \text{ in } \hat{\Omega}_{\mathrm{c}} \quad\text{and}\quad\mu(\mathbf{x})= \mu_{\min} \quad \mathrm{a.e.}\text{ in }\mathbb{R}^3\setminus\overline{\hat{\Omega}}_{\mathrm{c}}. \end{align*} Different formulations of the eddy current model (\cite{zlamal, maccamy, reales}) can be analyzed as degenerate parabolic problems of the type studied in Section~\ref{degenerate}, and the mathematical analysis of their numerical approximation by finite element methods can be carried out with the theory developed in Sections~\ref{discreto-d} and \ref{error-d}; however, we focus only on the formulation studied in the first of these references. Zlamal \cite{zlamal} (see also \cite{zlamal-ad}) proposed to solve a particular case of the eddy current Problem~\ref{eddy-d} by means of the following two-dimensional degenerate parabolic problem, for a given source $J_{\mathrm{d}}:\mathbb{R}^2\times [0,T]\to \mathbb{R}$. \begin{problem} \label{two-d} Find $u:\mathbb{R}^2\times[0,T]\to \mathbb{R}$ such that \begin{equation}\label{vari-two-d} \sigma\deriparc{u}{t}=\mathop{\mathrm{div}}\nolimits\left(\dfrac{1}{\mu}\nabla u\right)+ J_{\mathrm{d}}, \end{equation} where the physical parameters $\sigma$ and $\mu$ are independent of $x_3$.
\end{problem} The following result shows the relationship between the eddy current Problem~\ref{eddy-d} and the degenerate parabolic Problem~\ref{two-d}. \begin{proposition}\label{rela-two-three} If $u:\mathbb{R}^2\times[0,T]\to\mathbb{R}$ is a sufficiently regular solution of Problem~\ref{two-d} and the electric permittivity $\varepsilon$ is independent of $x_3$, then \begin{align}\label{def-EH} \boldsymbol{E}:=\paren{0,0,-\deriparc{u}{t}}\quad\text{and}\quad\boldsymbol{H}:=\dfrac{1}{\mu}\paren{\deriparc{u}{x_2},-\deriparc{u}{x_1},0} \end{align} are solutions of Problem~\ref{eddy-d} with $\mathbf{J}:=\paren{0,0,J_{\mathrm{d}}}$. \end{proposition} \begin{proof} Let $u$ be a regular solution of Problem~\ref{two-d} and assume that $\mathbf{J}:=\paren{0,0,J_{\mathrm{d}}}$. Let us define $\boldsymbol{E}$ and $\boldsymbol{H}$ as in \eqref{def-EH}. Therefore, \begin{equation*} \mathop{\mathbf{curl}}\nolimits\boldsymbol{E} =\left( -\dfrac{\partial}{\partial x_2}\left(\dfrac{\partial u}{\partial t} \right),\dfrac{\partial}{\partial x_1}\left(\dfrac{\partial u}{\partial t} \right),0\right)\\ =-\dfrac{\partial }{\partial t}(\mu\boldsymbol{H}), \end{equation*} and the first equation of Problem~\ref{eddy-d} follows. Furthermore, the second equation of Problem~\ref{eddy-d} is obtained by noticing that \begin{equation*} \mathop{\mathbf{curl}}\nolimits\boldsymbol{H}=\left(0,0,-\dfrac{\partial}{\partial x_1}\left(\frac{1}{\mu}\dfrac{\partial u}{\partial x_1}\right) - \dfrac{\partial}{\partial x_2}\left(\frac{1}{\mu}\dfrac{\partial u}{\partial x_2}\right)\right) =\left(0,0,-\mathop{\mathrm{div}}\nolimits\left(\frac{1}{\mu}\nabla u\right)\right) =\mathbf{J} + \sigma\boldsymbol{E}. \end{equation*} Next, by recalling that $u$ and $\varepsilon$ are independent of $x_3$, the third equation of Problem~\ref{eddy-d} follows. Finally, the last equation of Problem~\ref{eddy-d} follows by using the regularity of $u$.
\end{proof} \subsection{Well-posedness for the eddy current formulation} Let $\hat{\Omega}\subset\mathbb{R}^3$ be a simply connected and bounded set containing $\hat{\Omega}_{\mathrm{c}}$ and $\mathop{\mathrm{Supp}}\nolimits\mathbf{J}$, with $\mathbf{J}$ as in Proposition~\ref{rela-two-three}. In order to obtain a weak formulation for Problem~\ref{two-d}, we consider the projections of both $\hat{\Omega}$ and the conducting domain $\hat{\Omega}_{\mathrm{c}}$ onto the plane $x_1x_2$, which will be denoted by $\Omega$ and $\Omega_{\mathrm{c}}$, respectively. Then, given $u_0\in\L^2(\oc)$ and $J_{\mathrm{d}}\in\L^2(0,T;\ldoso)$, by multiplying equation \eqref{vari-two-d} by $v\in\H_0^1(\Omega)$ and integrating by parts over $\Omega$, we obtain the following weak formulation of Problem~\ref{two-d}. \begin{problem} \label{two-dw1} Find $u\in\L^2(0,T;\houno)$ such that \begin{align*} \dfrac{d}{dt}\int_{\Omega_{\mathrm{c}}} \sigma u v + \int_{\Omega} \dfrac{1}{\mu}\nabla u\cdot \nabla v &= \int_{\Omega} J_{\mathrm{d}} v \quad\forall v\in\H_0^1(\Omega),\\ u(0)&=u_0 \quad\qquad \textrm{in }\Omega_{\mathrm{c}}. \end{align*} \end{problem} The existence and uniqueness of a solution for the previous problem is obtained by using Theorem~\ref{welld}. To this aim, in order to fit Problem~\ref{two-dw1} into the abstract framework of Problem~\ref{ppdc}, we define $X:=\H_0^1(\Omega)$ and $Y:=\L^2(\Omega)$, with their usual inner products. Then, we can easily deduce that these spaces satisfy the corresponding properties of Section~\ref{degenerate}.
Furthermore, we define the operators $R:Y\to Y'$ and $A:X\to X'$ given by \begin{align} \left\langle Av,w\right\rangle_{X}&:=\int_\Omega\dfrac1\mu\nabla v\cdot\nabla w\qquad\forall v,w\in X,\label{defA-d}\\ \left\langle Rv,w\right\rangle_{Y}&:=\int_{\Omega_{\mathrm{c}}}\sigma v w\ \qquad\qquad\forall v,w\in Y.\label{defR-d} \end{align} We notice that in this case the family of operators $A(t)$ in Problem~\ref{ppdc} is constant with respect to $t$. Additionally, we need to define the function $f\in\L^2(0,T;X')$ given by \begin{equation}\label{deff-d} \left\langle f(t),v\right\rangle_{X}:=\int_\Omega J_{\mathrm{d}}(t)v\qquad\forall v\in X. \end{equation} Finally, we should notice that the initial condition of Problem~\ref{two-dw1} is equivalent to $Ru(0)=Ru_0$ in $Y'$. \begin{theorem}\label{well-eddy-two} There exists a unique solution $u$ of Problem~\ref{two-dw1} satisfying \begin{equation*} \norm{u}_{\L^2(0,T;\houno)}\leq C\left\{ \norm{u_0}_{\L^2(\oc)} + \norm{J_{\mathrm{d}}}_{\L^2(0,T;\ldoso)} \right\}. \end{equation*} \end{theorem} \begin{proof} The operator $R$ is clearly monotone and self-adjoint. Furthermore, the following G\r{a}rding-type inequality holds true for all $v\in X$: \begin{equation}\label{garding-d} \left\langle Rv,v\right\rangle_{Y} + \left\langle Av,v\right\rangle_{X} =\int_{\Omega_{\mathrm{c}}}\sigma\abs{v}^2 + \int_{\Omega}\dfrac{1}{\mu}\abs{\nabla v}^2 \geq \dfrac{1}{\mu_{\max}}\int_{\Omega}\abs{\nabla v}^2 \geq \dfrac{C_{\mathrm{P}}}{\mu_{\max}}\norm{v}_{\H^1(\Omega)}^2, \end{equation} where $C_{\mathrm{P}}$ is the positive constant given by the Poincar\'e inequality in $\H_0^1(\Omega)$. Consequently, Theorem~\ref{welld} shows that Problem~\ref{two-dw1} has at least one solution. Moreover, since the family of operators $A$ is independent of time, it is trivially a regular family and consequently the solution $u$ of Problem~\ref{two-dw1} is unique.
Finally, by using \eqref{dependencia-d} and noticing that \[ \left\langle Ru_0,u_0\right\rangle_{Y}=\int_{\Omega_{\mathrm{c}}}\sigma\abs{u_0}^2\leq \sigma_{\max}\norm{u_0}_{\L^2(\oc)}^2, \] we conclude the proof. \end{proof} \begin{remark} It is easy to see that \begin{equation*} \sigma\partial_t u - \mathop{\mathrm{div}}\nolimits\left(\frac1\mu\nabla u\right)= J_{\mathrm{d}} \qquad\textrm{in }\L^2(0,T;\houno'), \end{equation*} and consequently $u\vert_{\Omega_{\mathrm{c}}}$ belongs to the space $W^{1,2}(0,T;\H^1(\oc),\H^1(\oc)')$. \end{remark} \subsection{Error estimates for the fully-discrete degenerate formulation} The fully-discrete approximation of the degenerate Problem~\ref{two-dw1} is obtained by using finite element subspaces to define $X_h$, the corresponding family of finite-dimensional subspaces of $X$ (see Section~\ref{discreto-d}). To this aim, in what follows we assume that $\Omega$ and $\Omega_{\mathrm{c}}$ are Lipschitz polygonal domains. Let $\set{\mathcal{T}_h}_h$ be a regular family of triangular meshes of $\Omega$ such that each element $K\in\mathcal{T}_h$ is contained either in $\overline{\Omega}_{\mathrm{c}}$ or in $\overline{\Omega}_{\mathrm{d}}:=\overline{\Omega\setminus\overline{\Omega}_{\mathrm{c}}}$. As usual, $h$ stands for the largest diameter of the triangles $K$ in $\mathcal{T}_h$. We define $X_h$ as the standard Lagrange finite element subspace of $\H_0^1(\Omega)$, \textit{i.e.}, \[ X_h:=\left\{ v_h\in C^0(\overline\Omega): v_h\vert_{K}\in \mathbb{P}_1(K)\right\}\cap \H_0^1(\Omega), \] where $C^0(\overline\Omega)$ is the space of scalar continuous functions defined on $\overline\Omega$ and $\mathbb{P}_1$ is the set of polynomials of degree not greater than $1$. Then, the fully-discrete approximation for the degenerate parabolic formulation is given by Problem~\ref{ppdd}, by using the notation \eqref{defA-d}--\eqref{deff-d}.
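In implementation terms, each step of Problem~\ref{ppdd} applied to this setting reduces to one sparse linear system. Writing $u_h^n=\sum_j U_j^n\varphi_j$ in the nodal basis $\{\varphi_j\}$ of $X_h$, a sketch of the algebraic form (the matrix notation $\mathbf{M}^{\sigma}$, $\mathbf{K}^{\mu}$, $\mathbf{F}^{n}$ is ours, introduced only for illustration):

```latex
\[
\left(\mathbf{M}^{\sigma}+\Delta t\,\mathbf{K}^{\mu}\right)\mathbf{U}^{n}
  =\Delta t\,\mathbf{F}^{n}+\mathbf{M}^{\sigma}\mathbf{U}^{n-1},
\qquad
\mathbf{M}^{\sigma}_{ij}:=\int_{\Omega_{\mathrm{c}}}\sigma\,\varphi_j\varphi_i,\quad
\mathbf{K}^{\mu}_{ij}:=\int_{\Omega}\frac{1}{\mu}\nabla\varphi_j\cdot\nabla\varphi_i,\quad
\mathbf{F}^{n}_{i}:=\int_{\Omega}J_{\mathrm{d}}(t_n)\,\varphi_i.
\]
```

Note that $\mathbf{M}^{\sigma}$ is singular whenever some basis functions are supported outside $\overline{\Omega}_{\mathrm{c}}$, but the sum $\mathbf{M}^{\sigma}+\Delta t\,\mathbf{K}^{\mu}$ is symmetric positive definite by \eqref{garding-d}, which is the algebraic counterpart of the ellipticity of $\mathcal{A}_n$.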
More precisely, given $u_{0,h}\in X_h$ an approximation of $u_0$, the fully-discrete approximation of Problem~\ref{two-dw1} reads as follows. \begin{problem} \label{fd-eddy-d} Find $u_h^n\in X_h$, $n=1,\dots, N$, such that \begin{align*} \int_{\Omega_{\mathrm{c}}} \sigma \left(\dfrac{u_h^n-u_h^{n-1}}{\Delta t}\right) v + \int_{\Omega} \dfrac{1}{\mu}\nabla u_h^n\cdot \nabla v &= \int_{\Omega} J_{\mathrm{d}}(t_n) v \qquad \forall v\in X_h,\\ u_h^0&=u_{0,h}. \end{align*} \end{problem} Thus, by using \eqref{garding-d}, the existence and uniqueness of a solution $u_h^n\in X_h$, $n=1,\dots, N$, of the fully-discrete problem is guaranteed by Theorem~\ref{welldh} for a small enough time-step. Moreover, by noticing that in this case we have \[ \produ{R(\partial_{t}u(t_k)-\overline{\partial} u_h^k),\partial_{t}u(t_k)-\overline{\partial} u_h^k}_{Y} = \int_{\Omega_{\mathrm{c}}}\sigma\vert \partial_{t}u(t_k)-\overline{\partial} u_h^k\vert^2, \] we obtain the following result about the error estimates for the fully-discrete approximation Problem~\ref{fd-eddy-d} of the degenerate parabolic Problem~\ref{two-dw1}, which is a direct consequence of Theorem~\ref{teo-deg-u}. \begin{theorem}\label{cea-eddy-d} Let $u\in\L^2(0,T;\houno)$ be the solution of the eddy current Problem~\ref{two-dw1} and $u_h^n\in X_h$, $n=1,\dots, N$, the fully-discrete solution of Problem~\ref{fd-eddy-d}.
If $u_0\in\H_0^1(\Omega)$ and $u\in \H^1(0,T;\H_0^1(\Omega))\cap\H^2(0,T;\L^2(\Omega))$, then there exists a constant $C>0$, independent of $h$ and $\Delta t$, such that \begin{align*} &\max_{1\leq n\leq N}\|u(t_n) - u_h^n\|_{\sigma,\Omega_{\mathrm{c}}}^2 + \Delta t\sum_{n=1}^{N}\|u(t_n) - u_h^n\|_{\H_0^1(\Omega)}^2 + \Delta t\sum_{n=1}^{N}\norm{\partial_{t}u(t_n)-\overline{\partial} u_h^n}_{\sigma,\Omega_{\mathrm{c}}}^2 \\ &\quad\quad \leq C \left\{ \|u_0 -u_{0,h} \|_{\H_0^1(\Omega)}^2 +\max_{0 \leq n \leq N}\left[ \inf_{v\in X_h}{\|u(t_{n}) - v\|_{\H_0^1(\Omega)}^2}\right] +\int_0^T \inf_{v\in X_h} \| \partial_{t} u(t)-v \|_{\H_0^1(\Omega)}^2\,dt \right.\\[1ex] \qquad\qquad\quad \left. + (\Delta t)^2 \int_0^T \| \partial_{tt} u(t)\|^2_{\L^2(\Omega)}\,dt \right\}, \end{align*} where $\norm{w}_{\sigma,\Omega_{\mathrm{c}}}^2:=\displaystyle\int_{\Omega_{\mathrm{c}}}\sigma\vert w\vert^2$. \end{theorem} Finally, to obtain the asymptotic error estimate, we need to consider the Sobolev space $\mathrm{H}^{1+s}(\Omega)$ for $0<s\leq1$. It is well known that the Lagrange interpolant $\mathcal{L}_h v\in X_h$ is well defined for all $v\in\mathrm{H}^{1+s}(\Omega)\cap\H_0^1(\Omega)$ and satisfies the following estimate (see, for instance, \cite{ciarlet}): \begin{equation}\label{bimbo-d} \norm{v-\mathcal{L}_h v}_{\H_0^1(\Omega)} \leq C h^{s} \norm{v}_{\mathrm{H}^{1+s}(\Omega)} \qquad \forall v\in\mathrm{H}^{1+s}(\Omega)\cap\H_0^1(\Omega). \end{equation} Consequently, we have the following result which shows the asymptotic convergence of the fully-discrete approximation.
\begin{corollary}\label{coroconvucd-de} If $u_0\in\H_0^1(\Omega)$ and $u\in \H^1(0,T;\H_0^1(\Omega)\cap\mathrm{H}^{1+s}(\Omega))\cap \H^2(0,T;\L^2(\Omega))$ for $0<s\leq1$, then there exists a constant $C>0$, independent of $h$ and $\Delta t$, such that \begin{align*} &\max_{1\leq n\leq N}\| u(t_n)-u_h^n\|_{\sigma,\Omega_{\mathrm{c}}}^2 + \Delta t\sum_{n=1}^{N}\|u(t_n)-u_h^n\|_{\H_0^1(\Omega)}^2 + \Delta t\sum_{n=1}^{N}\norm{\partial_{t}u(t_n)-\overline{\partial} u_h^n}_{\sigma,\Omega_{\mathrm{c}}}^2 \\ &\qquad\qquad \leq C \left\{ \|u_0 -u_{0,h} \|_{\H_0^1(\Omega)}^2 +h^{2s}\left[ \max_{1\leq n\leq N} \|u(t_n)\|_{\mathrm{H}^{1+s}(\Omega)}^2 +\|\partial_t u\|_{\L^2(0,T;\mathrm{H}^{1+s}(\Omega))}^2 \right]\right.\\ &\quad \left. \qquad\quad \phantom{\max_{1\leq n\leq N}} +(\Delta t)^2\|\partial_{tt}u\|_{\L^2(0,T;\L^2(\Omega))}^2 \right\}. \end{align*} Moreover, if $u_0\in\H_0^1(\Omega)\cap\mathrm{H}^{1+s}(\Omega)$ for $0<s\leq1$ and $u_{0,h}=\mathcal{L}_h u_0$, then \[ \max_{1\leq n\leq N}\| u(t_n)-u_h^n\|_{\sigma,\Omega_{\mathrm{c}}}^2 + \Delta t\sum_{n=1}^{N}\|u(t_n)-u_h^n\|_{\H_0^1(\Omega)}^2 + \Delta t\sum_{n=1}^{N}\norm{\partial_{t}u(t_n)-\overline{\partial} u_h^n}_{\sigma,\Omega_{\mathrm{c}}}^2 =\mathcal{O}(h^{2s}+(\Delta t)^2). \] \end{corollary} \begin{proof} It is a direct consequence of Theorem~\ref{cea-eddy-d} and the interpolation error estimate \eqref{bimbo-d}. \end{proof} \begin{remark}\label{ErrorFisicas} The previous result shows that the fully-discrete approximation Problem~\ref{fd-eddy-d} provides a suitable approximation for the physical variables of the eddy current problem at each time $t_n$, namely the electric field $\mathbf{E}(t_n)$ in the three-dimensional conducting domain $\hat{\Omega}_{\mathrm{c}}$ and the magnetic field $\mathbf{H}(t_n)$ in the three-dimensional computational domain $\hat{\Omega}$.
More precisely, we can use the relationship \eqref{def-EH} to define \[ \mathbf{E}(t_n):= (0,0,-\partial_t u(t_n)) \quad\textrm{in }\hat{\Omega}_{\mathrm{c}},\qquad \mathbf{H}(t_n):=\frac1\mu\left(\dfrac{\partial u}{\partial x_2}(t_n),-\dfrac{\partial u}{\partial x_1}(t_n),0\right) \quad\textrm{in }\hat{\Omega}, \] for any $n=1,\ldots,N$, and propose the following approximations \begin{equation}\nonumber \mathbf{E}(t_n)\approx\mathbf{E}_h^n:=(0,0,-\overline{\partial} u_h^n) \qquad\textrm{in }\hat{\Omega}_{\mathrm{c}}, \end{equation} and \begin{equation}\nonumber \mathbf{H}(t_n)\approx\mathbf{H}_h^n:=\frac1\mu\left(\dfrac{\partial u_h^n}{\partial x_2},-\dfrac{\partial u_h^n}{\partial x_1},0\right) \qquad\textrm{in }\hat{\Omega}. \end{equation} Consequently, by using Corollary~\ref{coroconvucd-de}, we deduce the following quasi-optimal error estimates \begin{equation}\nonumber \Delta t\sum_{n=1}^{N}\|\mathbf{E}(t_n)-\mathbf{E}_h^n\|_{\sigma,\hat{\Omega}_{\mathrm{c}}}^2 + \Delta t\sum_{n=1}^{N}\norm{\mathbf{H}(t_n)- \mathbf{H}_h^n}_{\mu,\hat{\Omega}}^2 \leq \|u_0 -u_{0,h} \|_{\H_0^1(\Omega)}^2 + C\left[ h^{2s} +(\Delta t)^2 \right], \end{equation} where $\norm{\mathbf{w}}_{\mu,\hat{\Omega}}^2:=\displaystyle\int_{\hat{\Omega}}\frac1\mu\vert \mathbf{w}\vert^2$. \end{remark} \subsection{Numerical results} In this subsection we present some numerical results obtained with a MATLAB code which implements the numerical method described in Problem~\ref{fd-eddy-d}, to illustrate the convergence with respect to the discretization parameters. To this end, we describe the results obtained for a test problem with a known analytical solution.
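The reported experiments use a two-dimensional MATLAB code; the structure of the time-stepping loop can nevertheless be illustrated on a much simpler one-dimensional analogue. The following minimal Python sketch (our own illustrative construction, not the authors' code) assembles $\mathbb{P}_1$ matrices for a degenerate problem with $\sigma$ vanishing on part of the domain and performs the backward Euler iteration of Problem~\ref{fd-eddy-d}:

```python
import numpy as np

def p1_matrices(nodes, sigma, mu):
    """Assemble the P1 stiffness matrix K (weighted by 1/mu) and the
    degenerate mass matrix M (weighted by sigma, which may vanish on
    part of the domain) on the 1D mesh given by `nodes`."""
    n = len(nodes)
    K = np.zeros((n, n))
    M = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        xm = 0.5 * (nodes[e] + nodes[e + 1])  # midpoint rule for coefficients
        Ke = (1.0 / mu(xm)) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        Me = sigma(xm) * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += Ke
        M[np.ix_(idx, idx)] += Me
    return K, M

def backward_euler(nodes, sigma, mu, u0, f, T, N):
    """Backward Euler scheme (M + dt*K) u^n = dt*F^n + M u^{n-1} with
    homogeneous Dirichlet conditions at both endpoints."""
    K, M = p1_matrices(nodes, sigma, mu)
    dt = T / N
    interior = np.arange(1, len(nodes) - 1)
    # M is singular where sigma = 0, but M + dt*K restricted to the
    # interior nodes is symmetric positive definite (Garding-type bound).
    A = (M + dt * K)[np.ix_(interior, interior)]
    h = np.diff(nodes)
    u = u0.copy()
    for n in range(1, N + 1):
        F = f(nodes, n * dt)
        b = np.zeros(len(nodes))          # trapezoidal-rule load vector
        b[:-1] += 0.5 * h * F[:-1]
        b[1:] += 0.5 * h * F[1:]
        rhs = dt * b + M @ u
        u[:] = 0.0                        # enforce u = 0 on the boundary
        u[interior] = np.linalg.solve(A, rhs[interior])
    return u, M
```

With a vanishing source, the $\sigma$-weighted energy $\langle Ru_h^n,u_h^n\rangle_{Y}=(\mathbf{U}^n)^{\mathsf T}\mathbf{M}\mathbf{U}^n$ is non-increasing along the iteration, mirroring the stability estimate behind Lemma~\ref{lemdg}.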
\begin{figure}[!htb] \begin{center} \includegraphics*[scale=0.3]{3D.eps}\hspace*{2cm}\includegraphics*[scale=0.45]{2D.eps} \caption{Sketch of the domain 3D (left) and 2D (right).} \label{domain} \end{center} \end{figure} We consider $\hat{\Omega}$ with $\hat{\Omega}_{\mathrm{c}}$ and their respective projections onto the plane $x_1x_2$, $\Omega$ and $\Omega_{\mathrm{c}}$ (see Figure~\ref{domain}), and $T=1$. The right-hand side $J_d$ is chosen so that \[ u(x_1,x_2,t)=e^{-5\pi t} \sin (\pi x_1) \sin(\pi x_2), \] is the solution to Problem~\ref{two-d} in $\Omega$ with boundary condition $u=0$ on $\partial\Omega$. Notice that $u$ is also a solution of Problem~\ref{two-dw1} with $u_0(x_1,x_2)=\sin (\pi x_1) \sin(\pi x_2)$ where, in particular, $u_0\in\H^1_0(\Omega)\cap\H^2(\Omega)$. We have taken $\unit[\mu=\mu_0=4\pi\times10^{-7}]{{Hm}^{-1}}$, the magnetic permeability of vacuum, and $\unit[\sigma=10^{6}]{(\Omega m)^{-1}}$ in $\Omega_{\mathrm{c}}$, a representative electric conductivity for the conducting domain. The numerical method has been applied with several successively refined meshes and time-steps. The computed approximate solutions have been compared with the analytical one, by calculating the relative percentage error in the time-discrete norms from Corollary \ref{coroconvucd-de}.
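The physical fields of Remark~\ref{ErrorFisicas} can be reconstructed from nodal values of $u$. The following sketch evaluates the analytical test solution above on a uniform grid and applies the backward time difference and finite-difference spatial derivatives; the grid, time step, and use of finite differences in place of the finite element derivatives are illustrative assumptions, not the paper's MATLAB code.

```python
import numpy as np

# Illustrative sketch: reconstruct E_h^n and H_h^n from nodal values of u.
mu = 4e-7 * np.pi                    # magnetic permeability of vacuum
dt = 0.025                           # time step (assumed value)
x1, x2 = np.meshgrid(np.linspace(0.0, 1.0, 33),
                     np.linspace(0.0, 1.0, 33), indexing="ij")

def u_exact(t):
    # The analytical test solution u(x1, x2, t) = e^{-5 pi t} sin(pi x1) sin(pi x2)
    return np.exp(-5.0 * np.pi * t) * np.sin(np.pi * x1) * np.sin(np.pi * x2)

t_n = 0.5
u_prev, u_curr = u_exact(t_n - dt), u_exact(t_n)

# E_h^n = (0, 0, -\bar{\partial} u_h^n), with the backward difference in time
E3 = -(u_curr - u_prev) / dt

# H_h^n = (1/mu) (du/dx2, -du/dx1, 0); finite differences stand in for FE derivatives
du_dx1, du_dx2 = np.gradient(u_curr, x1[:, 0], x2[0, :])
H1, H2 = du_dx2 / mu, -du_dx1 / mu
```

Against the exact derivatives, the interior finite-difference values of $\mathbf{H}$ agree to second order in the grid spacing.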
More accurately, thanks to Proposition~\ref{rela-two-three} and Remark~\ref{ErrorFisicas}, we have computed the relative percentage error for the physical variables of interest, the magnetic field and the electric field in the conducting domain, namely \begin{equation*} 100\,\frac{\Delta t\sum_{n=1}^{N}\norm{\mathbf{H}(t_n)- \mathbf{H}_h^n}_{\mu,\hat{\Omega}}^2}{\Delta t\sum_{n=1}^{N}\norm{\mathbf{H}(t_n)}_{\mu,\hat{\Omega}}^2} \quad\text{and}\quad 100\,\dfrac{\Delta t\sum_{n=1}^{N}\|\mathbf{E}(t_n)-\mathbf{E}_h^n\|_{\sigma,\hat{\Omega}_{\mathrm{c}}}^2}{\Delta t\sum_{n=1}^{N}\|\mathbf{E}(t_n)\|_{\sigma,\hat{\Omega}_{\mathrm{c}}}^2}, \end{equation*} which are time-discrete forms of the errors in the $\L^{2}(0,T;\L^2(\hat{\Omega}))$ and $\L^{2}(0,T;\L^2(\hat{\Omega}_{\mathrm{c}}))$ norms, respectively. Table~\ref{TablaH} shows the relative errors for $\mathbf{H}$ in the $\L^{2}(0,T;\L^2(\hat{\Omega}))$-norm, namely the relative errors for $u$ in the $\L^2(0,T;\H^1_0(\Omega))$-norm. We notice that by taking a small enough time-step $\Delta t$, we can observe the behavior of the error with respect to the space discretization (see the row corresponding to $\Delta t/64$). On the other hand, by considering a small enough mesh-size $h$, we can check the order of convergence with respect to $\Delta t$ (see the first entries of the column corresponding to $h/64$). Hence, we conclude an order of convergence $\mathcal{O}(h+\Delta t)$ for $\mathbf{H}$, which confirms the theoretical results given in Remark~\ref{ErrorFisicas} and proved in Corollary~\ref{coroconvucd-de}.
\begin{table}[!htb] \begin{center} \begin{tabular}{lccccccccc} \hline & $h$ &$h/2$ & $h/4$ & $h/8$ &$h/16$& $h/32$&$h/64$\\ \hline $\Delta t$ & \fbox{41.3685}& 22.1296& 12.8925 & 9.1603 & 7.9516& 7.6190 & 7.5335\\ $\Delta t/2$ & 41.3088& \fbox{21.4624}& 11.4341 & 6.8342 & 5.0574& 4.5040& 4.3546 \\ $\Delta t/4$ &41.4454& 21.3041& \fbox{10.9212} & 5.8293 & 3.5396& 2.6751 &2.4108 \\ $\Delta t/8$ &41.5820& 21.3044& 10.7883 & \fbox{5.5072} & 2.9460&1.845&1.3784 \\ $\Delta t/16$ & 41.6723& 21.3307& 10.7652 & 5.4225 &\fbox{2.7648} &1.4813&0.9115 \\ $\Delta t/32$ & 41.7237& 21.3514& 10.7663 & 5.4038 & 2.7172& \fbox{1.3851}&0.7428 \\ $\Delta t/64$ & 41.7511& 21.3637& 10.7702& 5.4008& 2.7059&1.3599& \fbox{0.6932} \\ \hline \end{tabular} \caption{Percentage errors for $\mathbf{H}$ in the $\L^{2}(0,T;\L^2(\hat{\Omega}))$-norm, with $h=0.3687$ and $\Delta t=0.025$.} \label{TablaH} \end{center} \end{table} Table~\ref{TablaDerU} shows the relative errors for $\mathbf{E}$ in $\L^{2}(0,T;\L^2(\hat{\Omega}_{\mathrm{c}}))$, namely the relative errors for $\partial_t u$ in the $\L^2(0,T;\L^2(\Omega_{\mathrm{c}}))$-norm. Proceeding as above, we now observe an order of convergence $\mathcal{O}( h^2+\Delta t)$ (see the row corresponding to $\Delta t/512$ and the column corresponding to $h/16$), in spite of the fact that only a linear order of convergence in $h$ has been proved above. Hence, the theoretical results proved in Corollary \ref{coroconvucd-de} are again confirmed.
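The reported orders can be checked directly from the boxed entries of Table~\ref{TablaH} ($h$ and $\Delta t$ halved together) and Table~\ref{TablaDerU} ($\Delta t$ proportional to $h^2$); a minimal numpy check using those tabulated values:

```python
import numpy as np

# Boxed diagonal of Table "TablaH": h and Delta t halved simultaneously.
errs_H = np.array([41.3685, 21.4624, 10.9212, 5.5072, 2.7648, 1.3851, 0.6932])
rates_H = np.log2(errs_H[:-1] / errs_H[1:])   # successive rates approach 1

# Boxed entries of Table "TablaDerU": Delta t taken proportional to h^2.
errs_E = np.array([26.3489, 7.5263, 1.9873, 0.5144, 0.1323])
rates_E = np.log2(errs_E[:-1] / errs_E[1:])   # successive rates approach 2 per halving of h
```

Rates near $1$ for $\mathbf{H}$ and near $2$ for $\mathbf{E}$ are consistent with the observed orders $\mathcal{O}(h+\Delta t)$ and $\mathcal{O}(h^2+\Delta t)$, respectively.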
\begin{table}[!htb] \begin{center} \begin{tabular}{lcccccccc} \hline & $h$ &$h/2$ & $h/4$ & $h/8$ &$h/16$\\ \hline $\Delta t$ & \fbox{26.3489}& 23.9703 & 23.6728& 23.6232 & 23.6127\\ $\Delta t/2$ & 17.2551&13.4472& 13.1275 & 13.1028 & 13.1006 \\ $\Delta t/4$ & 13.7947&\fbox{7.5263} & 6.9433 & 6.9188 & 6.9213\\ $\Delta t/8$ & 13.2102& 4.8159&3.6233& 3.5566 & 3.5592 \\ $\Delta t/16$ &13.3954& 3.9628&\fbox{1.9873}&1.8078&1.8042 \\ $\Delta t/32$ & 13.6309& 3.8427& 1.3093 & 0.9290 & 0.9082\\ $\Delta t/64$ & 13.7873& 3.8923& 1.1142 &\fbox{0.5144} &0.4574\\ $\Delta t/128$ & 13.8756& 3.9494& 1.0886 &0.3501 &0.2352\\ $\Delta t/256$ & 13.9223& 3.9870& 1.0992 &0.3049 &\fbox{0.1323}\\ $\Delta t/512$ & 13.9463& 4.0081& 1.1111 &0.2992 &0.0927\\ \hline \end{tabular} \caption{Percentage errors for $\mathbf{E}$ in the $\L^2(0,T;\L^2(\Omega_{\mathrm{c}}))$-norm, with $h=0.3687$ and $\Delta t=0.025$.} \label{TablaDerU} \end{center} \end{table} Figure~\ref{ErroresRel_HE} shows log-log plots of the error of $\mathbf{H}$ (left) and $\mathbf{E}$ (right) versus number of degrees of freedom (d.o.f.). To report this, we have taken values of $\Delta t$ proportional to $h$ (see the values within boxes in Table \ref{TablaH}) and $\Delta t$ proportional to $h^2$ (see the values within boxes in Table~\ref{TablaDerU}), respectively. The slopes of the curves clearly show orders of convergence $\mathcal{O}(h+\Delta t)$ and $\mathcal{O}(h^2+\Delta t)$, respectively. \begin{figure}[ht!] \begin{center} \includegraphics*[width=7cm]{H_L2time.eps}\includegraphics*[width=7cm]{E_L2time.eps} \caption{Percentage discretization error curves for $\mathbf{H}$ (left) and $\mathbf{E}$ (right) versus number of d.o.f. (log-log scale).} \label{ErroresRel_HE} \end{center} \end{figure} \textbf{Acknowledgments}\\ Thanks to Colciencias.
\textbf{Funding}\\ This work was partially supported by Colciencias through the 727 call, University of Cauca through project VRI ID 5243 and by Universidad Nacional de Colombia through Hermes project $46332$. \textbf{Availability of data and materials}\\ Not applicable. \textbf{Competing interests}\\ The authors declare that they have no competing interests. \textbf{Authors contributions}\\ The authors declare that the work was realized in collaboration with the same responsibility. All authors read and approved the final manuscript. \input{refs-AGL} \end{document}
\section{Introduction \label{sec:intro}} Centrality measures are prescriptions for assigning importance values to nodes in complex networks, and the power of the concept stems from the flexibility
of characterizing importance in different ways. As such, centralities can be applied everywhere from Internet search results (Google's PageRank \cite{page1999pagerank}) to identifying important structures in neuron networks \cite{joyce2010new}. Centrality is one of the most basic and widely studied concepts in network theory. Recently, we summarized how many prominent centrality measures arise from the aggregation of ``influences'' flowing between pairs of nodes \cite{gurfinkel2020absorbing}. These influences are encoded in the entries of a centrality matrix $\mathbf{M}$, whose specification is equivalent to that of the overall measure. As we demonstrate here, these pair influences can be revealing measurements in their own right (see Sec.~\ref{sec:sstar}). Centrality results are also useful beyond identifying influential nodes and influence flows between node pairs. Often, researchers possess quantitative information about individual nodes---information which is {\it external} to the specification of the network structure. A centrality that approximately reproduces these data can reveal principles inherent in the structure of the network. In \cite{xu2014architecture}, we investigated the architecture of the Florida electric power grid along these lines. A strong correlation was revealed between the known generating capacities of power plants and the values of a centrality based on Estrada's communicability \cite{estrada2008communicability,estrada2009communicability}, here referred to as the {\it communicability centrality}. Quantification of such correlations between node attributes and network structure requires centrality measures with a built-in tuning parameter. The communicability centrality has a parameter that controls the (graph) distance over which nodes can influence each other. Such parameters can reveal the length scale over which the network is optimized.
Since there are many ways for a centrality to limit the distance that influence can spread, we introduce the {\it reach-parametrized} category to describe centralities with parameters that have this effect. We will discuss how the reach-parametrized category includes the well-known PageRank \cite{page1999pagerank}, Katz \cite{katz1953new}, and $\alpha$ \cite{ghosh2011parameterized} centralities. The reach-parametrized category is not exhaustive. In \cite{gurfinkel2020absorbing}, we introduced the {\it conditional walker-flow centralities}, which include parameters that interpolate these centralities between older, well-known measures. The conditional walker-flow measures belong to a distinct category: {\it grasp-parametrized } centralities. These centralities' parameters also attenuate influence, but in a way different from reach parameters. While reach parameters control how far centrality influence can spread, grasp parameters control how many alternative paths influence can follow. In addition to reach and grasp parametrization, here we further classify parametrized centralities according to the conceptual dimensions introduced by Borgatti \cite{borgatti2005centrality,borgatti2006graph}. Referencing this classification system, we show that there is a notable absence of centrality measures that are radial, reach parametrized, and based on acyclic, conserved flows of influence. To fill this void, we introduce a new centrality, the {\it ground-current centrality}. There, influence is modeled by the flow of electrical current from the source node to all possible end nodes, from which the current flows to ground. (The method is fully described in Sec.~\ref{sec:gcc}.) The physics of current flow naturally satisfies the conservation and acyclicity criteria, while variable resistances to ground naturally limit the spread of currents (and hence influences), thus representing a reach parameter. 
Conservation and acyclicity enable the ground-current centrality to perform differently from other reach-parametrized centralities in several ways. Most importantly, we demonstrate that, compared to other reach-parametrized centralities, the ground-current centrality robustly preserves an intuitive rank ordering across a range of simple network topologies. Here we take the closeness centrality (specifically, its harmonic variant \cite{Dekker2005,Rochat2009,NEWM10}) to provide the paradigmatic intuitive centrality ranking for simple networks, since it places greater importance on nodes that are close to many others. However, the ground-current centrality can also reproduce aspects of betweenness: when the reach is high, it is highly sensitive to network bottlenecks, assigning them high centrality rank, whereas other reach-parametrized centrality measures almost completely ignore bottlenecks in certain situations. We further show that, on hub networks, the ground-current centrality does not lead to excessive localization. This is a phenomenon \cite{martin2014localization} whereby the majority of the net centrality is assigned to a small fraction of nodes. On the other hand, in regular networks the ground-current centrality does not lead to excessive {\it de}localization: the assignment of nearly the same centrality value to every node. Other measures, such as the Katz and communicability centralities, exhibit both these behaviors. Recently, it has also been proposed to construct centrality measures from diffusion dynamics \cite{arnaudon2020scale}. The remainder of this paper is organized as follows. In Sec.~\ref{sec:centralities}, we present a classification system for {\it parametrized} centrality measures, discussing in detail the distinction between reach and grasp parameters. In Sec.~\ref{sec:gcc} we define the ground-current centrality. 
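Localization and delocalization of a centrality vector can be made quantitative. One standard diagnostic, chosen here purely for illustration (it is not taken from the cited works), is the inverse participation ratio, which counts the number of nodes that effectively share the net centrality:

```python
import numpy as np

def effective_size(c):
    """Inverse-participation-ratio count of effectively participating nodes.
    c must be normalized so that c.sum() == 1.  Returns N for a perfectly
    delocalized vector and values near 1 for a fully localized one."""
    return 1.0 / np.sum(c ** 2)

delocalized = np.full(10, 0.1)               # every node equally central
localized = np.array([0.91] + [0.01] * 9)    # one node holds most of the centrality
```

For the two toy vectors above, `effective_size` returns $10$ and roughly $1.2$, respectively, matching the intuitive notion of how many nodes matter.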
In Sec.~\ref{sec:results}, we discuss the properties of the ground-current centrality relative to other similarly classified measures. To that end, we perform a numerical study of the centralities' detailed performance on a variety of networks. These include four artificial networks designed to highlight a particular network property, as well as seven real-world networks. In Sec.~\ref{sec:conc}, we conclude that the special properties of the ground-current centrality stem from its unique position as a radial reach-parametrized centrality based on acyclic, conserved flows. \section{Reach and grasp parameters for network centralities \label{sec:centralities}} In this section, we present a wide-ranging classification of parametrized centrality measures, which includes the most prominent measures in the literature. We find that a simple and reasonable combination of centrality characteristics has not yet been studied, which motivates us to introduce a new measure, the ground-current centrality, to which we devote Secs. \ref{sec:gcc}-\ref{sec:conc}. \subsection{Notation and conventions} The $N\times N$ adjacency matrix is denoted $\mathbf{A}$. Here we consider both weighted and unweighted adjacency matrices. The graph distance between nodes $i$ and $j$ is denoted $d_{i j}$. In the case of weighted networks, we may instead use $D_{i j}$, which is the length of the shortest edge path from node $i$ to $j$, where the length of a given edge $(a,b)$ is $(\mathbf{A}_{ab})^{-1}$ \cite{gurfinkel2020absorbing}. The most commonly studied centrality measures can be found in, \em e.g.\/\em, Ch.7 of \cite{NEWM10}, and many can be written \cite{borgatti2006graph} in the matrix form: \begin{flalign} \phantom{\text{}}&& c_i= \alpha \sum_{j} \mathbf{M}_{i j},&&\text{}\label{eq:centrality} \end{flalign} where $c_i$ is the centrality of node $i$, and the sum is over the $N$ nodes in the network. 
We focus on centralities with a single parameter $\mbox{$\mathsmaller{\Pi}$}$, so $\mathbf{M}=\mathbf{M}(\mbox{$\mathsmaller{\Pi}$})$. The matrix elements $\mathbf{M}_{i j}$ of the $N\times N$ centrality matrix $\mathbf{M}$ encode the level of influence that node $j$ exerts on node $i$, and the final centrality is the sum of such influences. In this paper we denote column (row) vectors as kets (bras). The normalization factor $\alpha$ ensures that $\braket{c\,|1}=1$, where $\ket{1}$ is the column vector with all elements equal to one \footnote{In \cite{gurfinkel2020absorbing}, we chose to present {\it unnormalized } centrality results to better track centrality behavior across a range of parameter values.}. The normalization factor is different for every centrality measure, and for each choice of parameter value, so $\alpha=\alpha(\mbox{$\mathsmaller{\Pi}$})$. To maintain readability, we will omit the $\mbox{$\mathsmaller{\Pi}$}$ dependence of $\alpha$ and $\mathbf{M}$, and we will not specify which centrality $\alpha$ normalizes when it is clear from the context. The {\it degree centrality} (DEG) is one of the simplest and most commonly studied network measures. It can be put into the above form by setting $\mathbf{M}^\mathrm{DEG}$ equal to $\mathbf{A}$ so that $c^\mathrm{DEG}_i=\alpha \sum_j A_{ij}=\alpha k_i$. In this paper, we consider (potentially) weighted, symmetric adjacency matrices. The $k_i$ are, thus, (potentially) weighted degrees, and there is no distinction between indegree and outdegree. A very important centrality that { cannot} be expressed in the above form is the {\it closeness centrality} (CLO): $c^\mathrm{CLO}_i = (\sum_j{D_{i j}})^{-1}$. In \cite{gurfinkel2020absorbing}, we therefore used the {\it harmonic closeness centrality} (HCC) \cite{Dekker2005,Rochat2009,NEWM10}, which {\it can} be written in matrix form as: \begin{equation} \mathbf{M}^\mathrm{HCC}_{i j} = {D_{i j}}^{-1}. 
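The row-sum form of Eq.~(\ref{eq:centrality}) is straightforward to implement; the toy undirected graph below is an assumption made only for illustration:

```python
import numpy as np

def centrality(M):
    """Row-sum centrality c_i = alpha * sum_j M_ij, with alpha chosen so <c|1> = 1."""
    c = M.sum(axis=1)
    return c / c.sum()

# Toy undirected graph (illustrative): edges (0,1), (1,2), (1,3), (2,3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Degree centrality: M^DEG = A, so c_i = alpha * k_i.
c_deg = centrality(A)
```

With degrees $(1,3,2,2)$, the normalized degree centrality is $(1,3,2,2)/8$.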
\end{equation} A useful modification of Eq.~(\ref{eq:centrality}) involves subtracting the diagonal of the centrality matrix $\mathbf{M}$: \begin{flalign} \phantom{\text{}}&&\widetilde{c}_i^{ \mathrm{\hspace{.15em}}}= \widetilde{\alpha} \sum_{j} \widetilde{\mathbf{M}}_{i j}^{ \mathrm{\hspace{.15em}}} = \widetilde{\alpha}\sum_{j} (\mathbf{M}-\pmb{\mathrm{Diag}}(\mathbf{M}))_{i j}, &&\text{} \label{eq:exocentcentrality} \end{flalign} where $\widetilde{\alpha}$ is the new normalization factor. This modified form $\widetilde{\mathbf{M}}$ simply prevents self influence, and we thus refer to $\widetilde{c}$ as the {\it exogenous} centrality. Above, we have used $\pmb{\mathrm{Diag}}(\mathbf{M})$ to indicate the modified form of matrix $\mathbf{M}$ that has all nondiagonal entries set to zero. In the following, we will also use the symbol $\pmb{\mathrm{Diag}}(\ket{v})$ to indicate the diagonal matrix with the elements of the vector $\ket{v}$ appearing on the diagonal. \subsection{Reach-parametrized centralities} A centrality parameter $\mbox{$\mathsmaller{\Pi}$}$ is a {\it reach parameter } if changing it tends to attenuate the influence flow $\mathbf{M}_{ij}$ between pairs of nodes $i$ and $j$ separated by large graph distances $d_{ij}$. For weighted networks, it is possible to instead use the weighted graph distance $D_{ij}$. Three prominent reach-parametrized centralities with similar definitions are the \em PageRank \em (PRC), \em Katz \em (KC), and \em $\alpha$ \em centralities. 
The first two of these can be defined \cite{noteoninverseparam,katz1953new,page1999pagerank}, respectively, by \begin{equation}\label{eq:prc} \mathbf{M}^\mathrm{PRC}= [\pmb{\mathbb{I}} - \mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}^{-1} \mathbf{A} \hspace{.2em} \pmb{\mathrm{Diag}}(\ket{k^{-1}}) ]^{-1}, \end{equation} and \begin{equation}\label{eq:katz} \mathbf{M}^\mathrm{KC}= (\pmb{\mathbb{I}}- \mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}^{-1}\mathbf{A})^{-1}, \end{equation} where $\ket{k^{-1}}_i = k_i^{-1}$, the identity matrix is $\pmb{\mathbb{I}}$, and where we have employed the matrix inverse. (For the PRC, we have used a simplified definition that works for the {\it symmetric} adjacency matrices considered in this paper.) The $\alpha$ centrality is a variation of the Katz centrality, involving another parameter \cite{ghosh2011parameterized}. Here, we focus on the KC. The fact that the parameters $\mbox{$\mathsmaller{\Pi}$}$ control the network distance over which influence can spread is seen from the series expansion for the Katz centrality: \begin{equation}\label{eq:katzexpand} \mathbf{M}^\mathrm{KC} = \pmb{\mathbb{I}} + \mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}^{-1} \mathbf{A} +\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}^{-2} \mathbf{A}^2 +\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}^{-3} \mathbf{A}^3+\cdots. \end{equation} Since, in general, $(\mathbf{A}^l)_{ij}$ is equal to the number of walks of length $l$ from node $i$ to node $j$, one can see that larger values of $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}$ tend to suppress the influence of longer walks. The case of the PageRank centrality is similar, except that each term in the series expansion describes a single random walk, rather than counting the total number of walks. This is because the value of $[\mathbf{A} \hspace{.2em} \pmb{\mathrm{Diag}}(\ket{k^{-1}})]^l_{ij}$ is the probability of a walker starting on node $j$ being on $i$ after $l$ steps \cite{random_walk_note}. 
Thus $\mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}$ controls the length of walks in the same way as $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}$. Incidentally, the series expansion in Eq.~(\ref{eq:katzexpand}) makes clear that the Katz centrality will diverge at some small value $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}$---the same is true for the PageRank centrality at $\mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}=1$. In what follows, we restrict $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}$ and $\mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}$ to the range where Eq.~(\ref{eq:katzexpand}) and the corresponding series expansion for the PageRank are convergent. The series form of Katz centrality above suggests a class of reach-parametrized centralities based on power series in the adjacency matrix (with the PageRank case similar). These take the form $\mathbf{M}(\mbox{$\mathsmaller{\Pi}$})=\sum_{l=0}^\infty f(l) \mathbf{A}^l \mbox{$\mathsmaller{\Pi}$}^{-l}$, where the Katz centrality sets all factors $f(l)$ to one. This choice, however, is not ideal because the series does not converge when $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC}$ is smaller than the largest eigenvalue of $\mathbf{A}$ ($\lambda_1$, with corresponding eigenvector $\ket{\psi_1}$). In the general case, for small $\mbox{$\mathsmaller{\Pi}$}$, the higher-order terms are dominated by \begin{equation}\label{eq:matlimit} f(l) (\lambda_1/\mbox{$\mathsmaller{\Pi}$})^l \ket{\psi_1}\bra{\psi_1}. \end{equation} \noindent For $\mathbf{M}$ to converge for all $\mbox{$\mathsmaller{\Pi}$}$, $1/f(l)$ must grow super-exponentially in $l$. A reasonable choice, inspired both by the Estrada communicability metric \cite{estrada2008communicability} and by the desire to make contact with statistical physics, is to choose the factors $f(l)=(l!)^{-1}$. 
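The matrix definitions of the Katz and PageRank centralities in Eqs.~(\ref{eq:prc}) and (\ref{eq:katz}), and the geometric-series picture of Eq.~(\ref{eq:katzexpand}), can be sketched numerically; the toy graph and the parameter values are illustrative assumptions:

```python
import numpy as np

# Toy undirected graph (illustrative): edges (0,1), (1,2), (1,3), (2,3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
N = len(A)
k = A.sum(axis=1)
lam1 = np.max(np.linalg.eigvalsh(A))        # largest adjacency eigenvalue

# Katz: M = (I - A/Pi)^(-1), convergent for Pi_KC > lambda_1.
Pi_kc = 1.5 * lam1
M_kc = np.linalg.inv(np.eye(N) - A / Pi_kc)
c_kc = M_kc.sum(axis=1) / M_kc.sum()

# PageRank (symmetric-A form): M = (I - Pi^(-1) A Diag(1/k))^(-1), Pi_PRC > 1.
Pi_pr = 1.25
M_pr = np.linalg.inv(np.eye(N) - (A @ np.diag(1.0 / k)) / Pi_pr)
c_pr = M_pr.sum(axis=1) / M_pr.sum()

# The series picture: truncated powers Pi^{-l} A^l reproduce the Katz inverse.
S = sum(np.linalg.matrix_power(A / Pi_kc, l) for l in range(60))
```

Both measures place the highest-degree node (node 1) at the top of the ranking on this graph, and the truncated series agrees with the matrix inverse to machine-level accuracy since $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC} > \lambda_1$.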
This formula, which defines the \em communicability centrality \em (COM) in terms of the matrix exponential function, means that \begin{equation} \label{eq:communicability} \mathbf{M}^\mathrm{COM}(\mbox{$\mathsmaller{\Pi}$}_T) = \exp (\mathbf{A} / \mbox{$\mathsmaller{\Pi}$}_T) = \pmb{\mathbb{I}} + \frac{\mbox{$\mathsmaller{\Pi}$}_\mathrm{T}^{-1} \mathbf{A}}{1!} +\frac{\mbox{$\mathsmaller{\Pi}$}_\mathrm{T}^{-2} \mathbf{A}^2}{2!} +\frac{\mbox{$\mathsmaller{\Pi}$}_\mathrm{T}^{-3} \mathbf{A}^3}{3!}+\cdots, \end{equation} where we have introduced the ``temperature" parameter $\mbox{$\mathsmaller{\Pi}$}_T$. (This is very similar to the \em total communicability \em studied in \cite{benzi2013total}.) In past work \cite{gurfinkel2015centrality}, we compared the communicability centrality to several other centrality measures prominent in the literature, finding that it gives the best match to the generating capacities in the Florida power grid. The communicability and Katz centralities have several satisfying properties, especially in their exogenous forms $\widetilde{\mathbf{M}}^{ \mathrm{COM}}$ and $\widetilde{\mathbf{M}}^\mathrm{KC}$. From the series expansions, it is easy to see that the degree centrality is recovered in the low reach limits ($\mbox{$\mathsmaller{\Pi}$}_T \to \infty$ and $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC} \to \infty$). In fact, in these limits we obtain $\widetilde{\mathbf{M}}^\mathrm{COM}=\widetilde{\mathbf{M}}^\mathrm{KC}=\mathbf{A}$. In the high reach limits ($\mbox{$\mathsmaller{\Pi}$}_T \to 0 $ and $\mbox{$\mathsmaller{\Pi}$}_\mathrm{KC} \to \lambda_1$), the largest eigenvalue dominates as in Eq.~(\ref{eq:matlimit}), so the centralities reduce to the well-known eigenvector centrality \cite{NEWM10}. For large, fully connected networks, the exogenous forms $\widetilde{\mathbf{M}}$ give very similar results. 
These centralities also satisfy two very reasonable conditions on assigning influence between nodes $i$ and $j$: (1) the existence of many walks leads to more influence due to the presence of the term $(\mathbf{A}^l)_{ij}$, but (2) long walks are suppressed due to the weights $\mbox{$\mathsmaller{\Pi}$}^{-l}$. (PageRank satisfies very similar conditions.) Though several individual examples of reach-parametrized centralities are well-known in the field of network science, we believe that we are the first to identify reach-parametrized as a distinct category of centrality measures. We emphasize that, for every reach-parametrized centrality in this paper, we have defined the parameters $\mbox{$\mathsmaller{\Pi}$}$ such that small $\mbox{$\mathsmaller{\Pi}$}$ results in high reach, while large $\mbox{$\mathsmaller{\Pi}$}$ results in low reach. \subsection{Grasp-parameterized centralities} \begin{figure*} \includegraphics[scale=1.2, trim={0 0cm 0cm 0cm},clip]{figures/grasp_demo_a.pdf} \includegraphics[scale=1.2, trim={0 0cm 0cm 0cm},clip]{figures/grasp_demo_b.pdf} \caption{\label{fig:grasp_demo} {High and low grasp centralities.} The figures depict the current of random walkers used to calculate the conditional current betweenness and the conditional resistance closeness from \cite{gurfinkel2020absorbing}. This is demonstrated on the (weighted) kangaroo interaction network from \cite{kangadata,grant1973dominance}. Line thickness is proportional to current magnitude. A unit current flows from the source node (large, green) to the target node (large, red). Dashed lines indicate negligible current ($<.01$ units). (a) At high grasp (low $\mbox{$\mathsmaller{\Pi}$}_D$), the current takes advantage of many parallel paths. (b) At low grasp (high $\mbox{$\mathsmaller{\Pi}$}_D$), the current follows only the shortest weighted path from the source to the target. 
} \end{figure*} A centrality parameter $\mbox{$\mathsmaller{\Pi}$}$ is a {\it grasp parameter } if it tends to attenuate the influence of indirect paths between two nodes in a weighted graph. As illustrated in Fig.~\ref{fig:grasp_demo}, when the centrality parameter is set to high grasp, the measure takes into account many parallel paths between the nodes, while when the centrality parameter is set to low grasp, the measure only considers the shortest path between the two nodes. This is distinct from the behavior of reach parameters because the two nodes can be an arbitrary (weighted) distance apart. Thus, reach-parametrized and grasp-parametrized are distinct centrality categories. In \cite{gurfinkel2020absorbing}, we introduced the grasp-parametrized centrality category, as well as two grasp-parametrized measures, based on absorbing random walks: the {\it conditional current betweenness } [$\mathbf{M}^\mathrm{CBT}(\mbox{$\mathsmaller{\Pi}$}_D)$] and the {\it conditional resistance closeness} [$\mathbf{M}^\mathrm{RCC}(\mbox{$\mathsmaller{\Pi}$}_D)$]. Collectively, these are the {\it conditional walker-flow} centralities, parametrized by the ``walker death parameter'' $\mbox{$\mathsmaller{\Pi}$}_D$. The conditional current betweenness interpolates from betweenness, at low grasp, to Newman's {\it random-walk betweenness } \cite{newman2005measure}, at high grasp. Similarly, the conditional resistance closeness interpolates from the harmonic closeness, at low grasp, to the harmonic form of the Stephenson--Zelen {\it information centrality} \cite{stephenson1989rethinking} (also known as the current-flow closeness \cite{brandes2005centrality} and the resistance closeness \cite{gurfinkel2020absorbing}), at high grasp. \subsection{Classification of parametrized centralities} \label{sec:difficulties} \begin{comment} {\color{red} The Borgatti DF is from \cite{borgatti2006identifying}. Borgatti serial duplication/transfer is from \cite{borgatti2005centrality}. 
We focus on this characteristic rather than the length/volume distinction from \cite{borgatti2006graph} because the latter puts centralities with the same parameter in different categories: conditional current betweenness is put in the volume category, while the conditional resistance closeness is put in the length category. Estrada comm BTW is from \cite{estrada2009communicability}. Borgatti radial/medial is from \cite{borgatti2006graph}. } \end{comment} There is a proliferation of centrality measures in the network-science literature. Even in the case of parametrized centrality measures, which have not yet been studied extensively, there are sufficiently many measures to require an organizing principle. Here, we build on the typologies introduced by Borgatti in \cite{borgatti2005centrality,borgatti2006graph}. There, centralities are situated along the conceptual dimensions of Summary Type, Walk Position, and Walk Type. Each of these is described below. In Table~\ref{tab:classification}, all of the parametrized centralities discussed in this paper are classified according to Walk Position (columns) and Walk Type (rows). \subsubsection{Summary Type: how influences are aggregated} The difference between the standard (row-sum) centrality $\mathbf{M}$ and exogenous centrality $\widetilde{\mathbf{M}}$ lies in what Borgatti calls Summary Type, which dictates the way influences are aggregated, not the fundamental nature of the centrality. Another possible variation is the diagonal centrality $\overline{\mathbf{M}}=\pmb{\mathrm{Diag}}(\mathbf{M})$. Estrada's {\it subgraph centrality } \cite{estrada2005subgraph} is equivalent to $\overline{\mathbf{M}}^\mathrm{COM}$ at $\mbox{$\mathsmaller{\Pi}$}_T=1$. \subsubsection{Walk Position: radial and medial centralities} Though the conditional current betweenness and conditional resistance closeness are parametrized by the same ``walker death" process, they are very different measures. 
In Borgatti's typology, the first of these is a {\it medial } centrality while the latter is {\it radial}. This means that the former assigns importance to a node based on the walks passing through it, while the latter assigns importance based only on the walks that start on the node. The classic examples of medial and radial centrality are {\it betweenness } and {\it closeness}, respectively, and we have seen that the conditional walker-flow centralities reduce to these at low grasp. The columns in Table~\ref{tab:classification} group the parametrized centralities discussed in this paper into radial and medial categories. \subsubsection{Walk Type: reach, grasp, conserved flows, duplicating flows, cyclic flows, and acyclic flows} The Walk Type conceptual dimension describes the characteristics of the walks through which influence is spread. For example, influence might be restricted to geodesic paths, or to walks of a certain length. It is clear, then, that the categories of reach-parametrized and grasp-parametrized centrality represent differences in Walk Type. A further distinction within the Walk Type is described in \cite{borgatti2005centrality}, which compares conserved flow processes ({\it e.g.}, the movement of physical objects) to duplicating flow processes ({\it e.g.}, the spread of gossip). The conditional current betweenness and conditional resistance closeness are both calculated using the conserved current created by a single random walk, so they are conserved-flow centralities. On the other hand, the Katz, PageRank, and Communicability centralities rely on infinite summations, as in Eqs.~(\ref{eq:katzexpand}) and (\ref{eq:communicability}), aggregating influence from an infinite number of walks. These are thus duplicating-flow centralities. Another important subcategory within the Walk Type dimension is cyclicity. (Borgatti addresses cyclicity within his ``trajectory dimension''.) 
The Katz, PageRank, and communicability centralities are cyclic: the spread of influence within these centralities is free to form cycles, potentially even recrossing the same edge over and over. Thus, for all the measures considered here, cyclic centralities are based on duplicating flows, while acyclic centralities are based on conserved flows. However, in general, cyclicity and duplication are independent of each other. The rows in Table~\ref{tab:classification} group the parametrized centralities discussed in this paper into Walk Type categories. \subsubsection{Disfavored centrality combinations} \label{sec:reachgrasp} Generally, reach parametrization is not compatible with medial measures like betweenness, since every pair of source and target nodes is considered equally, no matter how far apart they may be. This is why there are no well-known measures in the light-font areas of the right column of Table~\ref{tab:classification}. However, any reach-parametrized relationship (such as the entries in the matrix $\mathbf{M}^\mathrm{COM}$) may be used to weight pairs of nodes, allowing betweenness-like measures to use reach parameters. (This modification would also allow the simultaneous use of reach and grasp parameters.) These areas of the table are marked with stars to indicate that these centrality combinations are achievable, though they have not been studied extensively to our knowledge. Centralities that are both duplicating and grasp parametrized are also disfavored. It is difficult to control the grasp of duplicating-flow centralities since, by the nature of duplicating influence, they generally cannot restrict influence to geodesic paths. However, an exception to this rule is found---for the {\it medial} parameter type---in the form of the communicability betweenness \cite{estrada2009communicability}, and similarly constructed centralities. They rely on a mathematical technique for converting radial reach-parametrized centralities into medial grasp centralities.
This is described in Appendix \ref{app:cmb}. We are not aware of any similar techniques for arriving at {\it radial}, duplicating, grasp centralities, which is why this area of Table~\ref{tab:classification} remains empty. \subsubsection{A new radial reach-parametrized centrality based on acyclic, conserved flows} \label{sec:newcent} Aside from the disfavored centrality combinations described above, there is one location in Table~\ref{tab:classification} (indicated with bold stars) that has, to our knowledge, not yet been studied. The Katz centrality, which is radial and reach-parametrized, is one of the oldest measures in the network science literature, and the PageRank, of the same type, is one of the most prominent. It is striking, therefore, that there is no well-known {\it conserved-flow} centrality of this type, given the importance of conserved flows in both theoretical and practical domains. Therefore, in Sec.~\ref{sec:gcc}, we introduce the {\it ground-current centrality}, which is of the radial, reach-parametrized, and conserved-flow type. It is also acyclic, whereas the duplicating radial reach measures are cyclic. In Sec.~\ref{sec:results}, we show that the use of acyclic, conserved flows leads the ground-current centrality to some notable differences from the other measures in the radial, reach-parametrized category. \begin{table}[h] \caption{\label{tab:classification}{\it Classification of parametrized centralities.} Centrality measures are classified according to Borgatti's \cite{borgatti2005centrality,borgatti2006graph} Walk Position (columns) and Walk Type (rows). Conditional current betweenness subsumes betweenness and random walk betweenness, while conditional resistance closeness subsumes closeness and information centrality \cite{gurfinkel2020absorbing}. Reference \cite{avrachenkov2013alpha} describes the beta current-flow centrality, whose derivation is similar to that in Sec. \ref{sec:gccf}. 
The positions in the table depicted with a light font represent disfavored centrality types, discussed in Sec.~\ref{sec:reachgrasp}. The starred entries represent centralities introduced in this paper, filling in ``blanks'' within the table. The ground-current centrality is the main result of this paper.} \begin{ruledtabular} \begin{tabular}{cllll} & \multicolumn{2}{l}{Radial} & \multicolumn{2}{l}{Medial} \\ \hline \multirow{3}{5.5em}{\vspace{-3em} \begin{minipage}{5.5em} \vspace{-1em} Acyclic \\ \vspace{-1em}conserved \\ \vspace{-1em} flow\end{minipage}} & Grasp: & cond. resistance closeness \hspace{1em} & Grasp: & cond. current betweenness, and \cite{avrachenkov2013alpha}\\ & Reach: & $\textbf{*}$\textbf{ground current (Sec.~\ref{sec:gcc}--\ref{sec:results})}$\textbf{*}$ & \light{Reach:}& \light{$*${see Sec.~\ref{sec:conc}}$*$} \\ \hline \multirow{2}{5.5em}{\vspace{-1em} \begin{minipage}{5.5em} \vspace{-1em} Cyclic \\ \vspace{-1em}duplicating \\ \vspace{-1em} flow\end{minipage}} &\light{Grasp:}&\light{none (see Sec.~\ref{sec:reachgrasp})} & Grasp: & communicability betweenness \\ & Reach: & Katz, PageRank, communicability \hspace{2em} & \light{Reach:}& \light{$*${see Sec.~\ref{sec:reachgrasp}}$*$} \\ \end{tabular} \end{ruledtabular} \end{table} \section{The ground-current centrality \label{sec:gcc}} \subsection{Generalizing the resistance-closeness centrality} \label{sec:gcc_first} This paper is concerned with developing a conserved-flow centrality measure that features a reach parameter, tuning the distance that influence can spread across the network. To estimate the node centralities in network $\mathcal{N}$, we focus our model on the electrical current flows in the resistor network derived from $\mathcal{N}$ (or equivalently, random walkers \cite{Doyle06randomwalks} on $\mathcal{N}$).
In this interpretation, an element of $\mathcal{N}$'s adjacency matrix $\mathbf{A}_{ij}$ is taken to be the conductance (inverse resistance) of the direct electrical connection between nodes $i$ and $j$ \footnote{The interpretation of non-zero adjacency matrix elements as conductances in a resistor network makes sense for {\it affinity-weighted} networks \cite{newman2005measure}, as well as multigraphs where $\mathbf{A}_{ij}$ stands for the number of parallel edges between $i$ and $j$. When an adjacency matrix element stands for, {\it e.g.}, a distance rather than an affinity, it is more appropriate for it to be interpreted as a resistance. In either case, when $\mathbf{A}_{ij}=0$, there is no edge between $i$ and $j$. }. By using current flow to spread influence, we guarantee that the resulting centrality will be both conserved and acyclic. It is not possible to explicitly limit the reach of current (and hence influence) by increasing the resistance along all edges, or changing the strength of voltage sources. Since network current flow is a linear theory, any introduction of a multiplicative constant $m$ on either (1) all voltage sources, or (2) all resistances, will only scale the resulting currents by a constant factor ($m$ in the first case, $1/m$ in the second). And since centrality vectors are normalized by the factor $\alpha$ in Eq.~(\ref{eq:centrality}), any multiplicative constants do not affect the final centrality assignments. This provides motivation to build a parametrization around resistors {\it external} to the equivalent resistor network. \begin{figure*} \includegraphics[scale=0.85, trim={0 1cm 0cm 0cm},clip]{figures/ground_current_fixed.pdf} \caption{\label{fig:gcc} The ground-current centrality (right) as a multinode generalization of the resistance-closeness centrality (left).
The ground-current centrality of a node $i$ is given as a function of the finite ground conductances (shown in light gray), by the currents flowing from each node to the ground node $g$ when a unit voltage is introduced between node $i$ and $g$. The exogenous ground-current centrality $\widetilde{\mathbf{M}}^\mathrm{GCC}_{ij}$ is equivalent to the removal of the (dotted) connection between $i$ and $g$. Ignoring the voltage sources, the left side of the figure illustrates the resistor-network interpretation of the network $\mathcal{N}$, while the right side illustrates the modified network $\mathcal{N}_g$.} \end{figure*} We now present a new centrality, which is a generalization of (but not a parametrization of) the resistance-closeness centrality (RCC) studied in \cite{gurfinkel2020absorbing}. There, $\mathbf{M}^\mathrm{RCC}_{ij}$ is equal to the inverse of the effective resistance $R^{\mathrm{eff}}_{ij}$, which is the current resulting from connecting a 1-Volt battery between $i$ and $j$ in $\mathcal{N}$, as seen in Fig. \ref{fig:gcc}(left). Without affecting the results, we may set the absolute potential scale by connecting $j$ to the ground node $g$ with a resistance-less wire; the current then returns to the battery through the ground node. Extrapolating the measure to multiple nodes is achieved simply by connecting {\it all} nodes directly to ground. The currents $ I^i_{j \rightarrow g}$ from each $j$ to ground (when the voltage source is on $i$) are straightforwardly interpreted as the contribution of $j$ to the centrality value of $i$; that is, the ground currents are just the $\mathbf{M}_{ij}$. Thus, we name the new measure the {\it ground-current centrality} (GCC).
In summary: \begin{equation} \label{eq:gccbasic} \mathbf{M}^\mathrm{GCC}_{ij} = I^i_{j \rightarrow g}\qquad \mbox{(unit voltage source between $i$ and ground)} \end{equation} (In what follows, we will often omit the superscripts in $\mathbf{M}^\mathrm{GCC}$ and $c^\mathrm{GCC}$ when it is clear from the context that we are referring to the ground-current centrality.) This centrality measure represents a transition from the resistance distance, a node-node relation, to a node-network relation; this process is illustrated in Fig. \ref{fig:gcc}. The ground-current centrality also represents a complementary approach to our previous work in \cite{gurfinkel2020absorbing}. There, the conditional walker-flow centralities employ the portion of the current that does not eventually reach ground. Here, the entirety of the current eventually reaches ground, and the centrality is based on the magnitudes of the ground currents. If all the nodes were directly connected to ground with zero resistance, then they would all be at the same potential. This would mean that no current could flow between them, leading to a centrality insensitive to the details of the network structure. To prevent this behavior, we introduce the ground-conductance vector $\ket{C}$, where $C_j$ is the finite conductance of the edge connecting $j$ to ground. The node potentials are now $V_j = I^i_{j \rightarrow g} /C_j = \mathbf{M}_{i j}/ C_j$---in general they are all different. Since the network $\mathcal{N}$ has $N$ nodes, adding $g$ and its adjacent edges creates an $(N+1)$-node network. This new network, called $\mathcal{N}_{g}(\ket{C})$, is illustrated on the right side of Fig. \ref{fig:gcc}. Note that one of the edges between $g$ and $i$ (indicated by the battery symbol in the circuit diagram) represents voltage boundary conditions, and is therefore not included in $\mathcal{N}_{g}(\ket{C})$.
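As a concrete sketch of this construction (the function name and the toy conductance matrices below are ours, not from the paper), the grounded network $\mathcal{N}_g(\ket{C})$ can be assembled numerically from a weighted adjacency matrix $\mathbf{A}$ and a ground-conductance vector $\ket{C}$:

```python
import numpy as np

def grounded_laplacian(A, C):
    """Laplacian of the (N+1)-node network N_g(|C>): the original weighted
    network A plus a ground node (index N) attached to each node j through
    the finite conductance C[j]."""
    N = A.shape[0]
    Ag = np.zeros((N + 1, N + 1))
    Ag[:N, :N] = A        # conductances of the original network
    Ag[:N, N] = C         # edges to the ground node
    Ag[N, :N] = C
    return np.diag(Ag.sum(axis=1)) - Ag

# Toy weighted network (entries are conductances) and ground conductances
A = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
C = np.array([0.5, 1.0, 0.25])
Lg = grounded_laplacian(A, C)
```

Deleting the ground row and column of `Lg` leaves exactly $\mathbf{L}+\pmb{\mathrm{Diag}}(\ket{C})$, the reduced Laplacian that appears in the derivation of the centrality formula.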
\subsection{The ground-current centrality formula \label{sec:gccf}} We now derive a compact formula for the ground-current centrality. The foundational relation for resistor networks \cite{NEWM10}---as applied to $\mathcal{N}_g (\ket{C})$---is \begin{equation}\label{eq:networkvoltage} \ket{I^\mathrm{in}} = \mathbf{L}^g\ket{V}.\end{equation} Here, $\ket{V}$ is the vector of node voltages, and $\mathbf{L}^g$ is the $(N+1)\times(N+1)$ Laplacian matrix of $\mathcal{N}_{g}(\ket{C})$. The $j$th element of the vector $\ket{I^\mathrm{in}}$ is equal to the current entering ($I^\mathrm{in}_j>0$) or leaving ($I^\mathrm{in}_j<0$) the network at node $j$. In the present case, illustrated in Fig.~\ref{fig:gcc}(right), $I^\mathrm{in}_j=0$ when $j$ is not $i$ or $g$. Because $\mathbf{L}^g \ket{1}=0$, Eq.~(\ref{eq:networkvoltage}) cannot be inverted as is. A standard solution \cite{newman2005measure} is to remove one node from the network, leading to the invertible $N\times N$ {\it reduced} Laplacian $\mathbf{L}^\mathrm{red}$. This specifies the gauge in which the removed node is at zero potential (see Appendix \ref{app:reduced}). We choose to remove node $g$, appropriately setting its potential to zero. Proceeding similarly to the derivation in \cite{avrachenkov2013alpha}, removing $g$ leads to the reduced Laplacian $\mathbf{L}^\mathrm{red}= \mathbf{L} +\pmb{\mathrm{Diag}}(\ket{C})$. Here $\mathbf{L}$ is the standard Laplacian of the $N$-node network $\mathcal{N}$: $\mathbf{L}=\pmb{\mathrm{Diag}}(\ket{k})-\mathbf{A}$, where $\ket{k}$ is the {\it weighted} degree vector. Therefore, inverting the reduced version of Eq.~(\ref{eq:networkvoltage}) leads to \begin{equation} V_j= \left[\mathbf{L} +\pmb{\mathrm{Diag}}(\ket{C})\,\right]^{-1}_{i j} I^\mathrm{in}_i, \end{equation} where we used the fact that $\mathbf{L}$ is symmetric. Recall that $i$ is the index of the node connected to the battery [see Fig.~\ref{fig:gcc}(right)], while $j$ can stand for any node, including $i$.
From the requirement that $V_i=1$, we have $I^\mathrm{in}_i= 1/\left[\mathbf{L} +\pmb{\mathrm{Diag}}(\ket{C})\,\right]^{-1}_{i i} $. The current $\mathbf{M}_{ij}$ from $j$ to $g$ is just $V_j C_j$. And because all the current entering the network at $i$ must also leave the network at $g$, $\sum_j \mathbf{M}_{ij} =\sum_j I^i_{j \rightarrow g}= I^\mathrm{in}_i$, so $I^\mathrm{in}_i$ is equal to the centrality $c_i$ of node $i$. Assembling these results, we arrive at a generalized formula for the ground-current centrality: \begin{eqnarray}\label{eq:gccf} c_i =&\;1/& \left[\mathbf{L}+\pmb{\mathrm{Diag}}(\ket{C}) \,\right]_{i i}^{-1}\nonumber \\ \mathbf{M}_{i j}= & c_i & [\mathbf{L}+\pmb{\mathrm{Diag}}(\ket{C}) \,]_{i j}^{-1} C_j . \end{eqnarray} Every row of $\mathbf{M}$ corresponds to a different experimental situation, where the voltage boundary conditions are changed by connecting a different node $i$ to the 1-Volt battery. In matrix form, this can be expressed as $\mathbf{M}= \{\pmb{\mathrm{Diag}}([\mathbf{L}^\mathrm{red}]^{-1})\}^{-1} [\mathbf{L}^\mathrm{red}]^{-1} \pmb{\mathrm{Diag}}(\ket{C}) .$ For notational convenience, in this section we use the {\it unnormalized} form of the centrality. It can be easily verified that $\sum_j [\mathbf{L}+\pmb{\mathrm{Diag}}(\ket{C}) \,]_{i j}^{-1} C_j=1$. This leads to $\sum_j \mathbf{M}_{ij}=c_i$, which is the unnormalized form of Eq.~(\ref{eq:centrality}). We note that, unlike for other centralities, the elements of the ground-current centrality matrix $\mathbf{M}^\mathrm{GCC}$ do not need to be calculated to find the $c_i$---in fact, the reverse is true. Nonetheless, the $\mathbf{M}^\mathrm{GCC}_{ij}$ are informative in their own right, since they encode the influence of node $j$ on $i$'s centrality. Here, they will be useful for analyzing test cases that show how the ground-current centrality differs from similar measures; see Sec.~\ref{sec:results}. 
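These expressions are straightforward to check numerically. The following minimal sketch (the function name and the toy network are ours, not from the paper) implements Eq.~(\ref{eq:gccf}) and exposes both identities noted above: $\sum_j [\mathbf{L}+\pmb{\mathrm{Diag}}(\ket{C})]^{-1}_{ij} C_j=1$, and hence $\sum_j \mathbf{M}_{ij}=c_i$.

```python
import numpy as np

def ground_current_centrality(A, C):
    """Generalized ground-current centrality in unnormalized form:
    c_i  = 1 / [L + Diag(C)]^{-1}_{ii}
    M_ij = c_i [L + Diag(C)]^{-1}_{ij} C_j"""
    L = np.diag(A.sum(axis=1)) - A          # standard weighted Laplacian
    G = np.linalg.inv(L + np.diag(C))       # inverse reduced Laplacian
    c = 1.0 / np.diag(G)
    M = c[:, None] * G * C[None, :]
    return c, M

# Toy weighted network (entries are conductances) and ground conductances
A = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 3.0, 0.0]])
C = np.array([0.5, 1.0, 0.25, 2.0])
c, M = ground_current_centrality(A, C)
```

The unit-sum identity follows from $(\mathbf{L}+\pmb{\mathrm{Diag}}(\ket{C}))\ket{1}=\ket{C}$, so it holds exactly, not just approximately.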
The vector $\ket{C}$ in Eq.~(\ref{eq:gccf}) can be used to tune the relative importance of nodes in the network. For example, in a power-grid network, we may set $C_i=0$ when $i$ is a generator, thereby ensuring that the centrality only rewards connections to loads. However, the simplest case, as in \cite{avrachenkov2013alpha}, is to set all ground conductances to the same value $\mbox{$\mathsmaller{\Pi}$}_C$, meaning that $\pmb{\mathrm{Diag}}(\ket{C}) = \mbox{$\mathsmaller{\Pi}$}_C \pmb{\mathbb{I}}$, for identity matrix $\pmb{\mathbb{I}}$. This leads us to the final parametrized form of our centrality: \begin{equation}\label{gccfinal} \left.\begin{aligned} c_i(\mbox{$\mathsmaller{\Pi}$}_C) &=& 1/&(\mathbf{L} +\mbox{$\mathsmaller{\Pi}$}_C \pmb{\mathbb{I}}\,)_{i i}^{-1} \; \qquad \\ \mathbf{M}_{i j}(\mbox{$\mathsmaller{\Pi}$}_C) &=&I^i_{j \rightarrow g}=\hspace{1.2em} c_i& (\mathbf{L} +\mbox{$\mathsmaller{\Pi}$}_C \pmb{\mathbb{I}} \,)_{i j}^{-1} \; \mbox{$\mathsmaller{\Pi}$}_C \, \ \qquad \end{aligned} \right\} \qquad \text{ground-current centrality} \end{equation} We emphasize that, unlike other network measures based on current flows, it is not necessary to perform a summation to obtain the centrality $c_i$ of node $i$. In what follows, we use the normalized form of the ground-current centrality matrix $\mathbf{M}$, as per Eq.~(\ref{eq:centrality}). \subsection{Properties and limits of the ground-current centrality } \label{ssc:gccp} \begin{table} \caption{\label{tab:gcc}Ground-Current Centrality High/Low $\mbox{$\mathsmaller{\Pi}$}_C$ Limits. The ground-current centrality is formulated in Eq.~(\ref{gccfinal}). The limits for the generalized ground-current centrality [Eq.~(\ref{eq:gccf})] are in square brackets. In the generalized version, $\mbox{$\mathsmaller{\Pi}$}_C$ is not defined, and the limits should be interpreted as high and low values of $\braket{C|1}$. 
} \begin{ruledtabular} \begin{tabular}{c | c | c | c} Measure& Symbol &High $\mbox{$\mathsmaller{\Pi}$}_C$ & Low $\mbox{$\mathsmaller{\Pi}$}_C$ \\ \hline Ground-Current Centrality &$\mathbf{M}^\mathrm{GCC}_{ij}$&$\delta_{ij}\mbox{$\mathsmaller{\Pi}$}_C$& $\mbox{$\mathsmaller{\Pi}$}_C$\\ &&$\big[\;\delta_{ij}C_i\;\big]$ &$\big[\;C_j\;\big]$ \\ Exogenous Ground-Current Centrality &$\widetilde{\mathbf{M}}^\mathrm{GCC}_{ij}$&$\mathbf{A}_{ij}$& $(1-\delta_{ij})\mbox{$\mathsmaller{\Pi}$}_C$\\ &&$\big[\;\mathbf{A}_{ij}\;\big]$ &$\big[\;(1-\delta_{ij})C_j\;\big] $ \end{tabular} \end{ruledtabular} \end{table} We have argued that the ground-current centrality has a naturally arising parameter $\mbox{$\mathsmaller{\Pi}$}_C$. Though $\mbox{$\mathsmaller{\Pi}$}_C$ was necessary to force the centrality to interact with the network structure, it is easy to see that this parameter also has the effect of tuning the centrality's reach. Consider the $\mbox{$\mathsmaller{\Pi}$}_C\rightarrow\infty$ limit. When $\mbox{$\mathsmaller{\Pi}$}_C$ is large, the vast majority of the current leaving the battery at node $i$ follows the very high-conductance edge directly to ground, rather than following the relatively low-conductance edges leading to other locations in the network. A node can thus only influence itself, and $\mathbf{M}$ becomes diagonal. This can also be seen from setting $j=i$ in the second line of Eq.~(\ref{gccfinal}), whereby $c_i \approx \mathbf{M}_{i i} = I^i_{i\rightarrow g}$ for large $\mbox{$\mathsmaller{\Pi}$}_C$. Thus the reach is low when $\mbox{$\mathsmaller{\Pi}$}_C$ is high.
The behavior in the low-$\mbox{$\mathsmaller{\Pi}$}_C$ limit is easy to understand through physical properties of resistor networks: As $\mbox{$\mathsmaller{\Pi}$}_C\rightarrow0$ the effective resistance to ground approaches infinity, leading to very small currents in the network; all node potentials therefore approach the value 1, because the potential drop between adjacent network nodes becomes tiny. As a result, all ground currents are identical: $\mathbf{M}_{ij}=(1 -V_g) \mbox{$\mathsmaller{\Pi}$}_C=\mbox{$\mathsmaller{\Pi}$}_C$, since $V_g=0$. Nodes at large graph distances from $i$ are not penalized by the centrality. This means that $c_i=N \mbox{$\mathsmaller{\Pi}$}_C$ for all $i$. When $\mbox{$\mathsmaller{\Pi}$}_C$ is low, the reach is high and the network looks the same from every node. \begin{figure*} \includegraphics[scale=1.2, trim={0 0cm 0cm 0cm},clip]{figures/reach_demo_a.pdf} \includegraphics[scale=1.2, trim={0 0cm 0cm 0cm},clip]{figures/reach_demo_b.pdf} \caption{\label{fig:reach_demo} {High and low reach in the exogenous ground-current centrality.} This is demonstrated on the (weighted) kangaroo interaction network from \cite{kangadata,grant1973dominance}. Compare the grasp behavior of the conditional current betweenness in Fig.~\ref{fig:grasp_demo}. Line thickness for edges $(k,l)$ indicates the product of the normalization factor $\tilde{\alpha}$ from Eq.~(\ref{eq:exocentcentrality}) and the edge current magnitude $I^i_{k\rightarrow l}$, where the current flow results from a unit potential difference between the source node $i$ (large, green) and the ground node $g$ (not pictured). For readability, the line thickness is proportional to the {\it square root} of $\tilde{\alpha} I^i_{k\rightarrow l}$. Dashed lines indicate negligible current: $\tilde{\alpha} I^i_{k\rightarrow l}<0.0001$.
All connections to ground have conductance $\mbox{$\mathsmaller{\Pi}$}_C$ and, because this is the {\it exogenous } centrality variant $(\widetilde{\mathbf{M}})$, every node other than the source node is connected to ground. Node $j$'s final contribution to $i$'s centrality is $\tilde{\alpha}\hspace{.2em} \widetilde{\mathbf{M}}^\mathrm{GCC}_{ij} = \tilde{\alpha} I^i_{j\rightarrow g}$. (a) At high reach (low $\mbox{$\mathsmaller{\Pi}$}_C$), the current spreads out to every node. Though the currents $I^i_{k\rightarrow l}$ are very small at this parameter value, the normalization factor results in nonnegligible influences $\tilde{\alpha}\hspace{.2em} \widetilde{\mathbf{M}}^\mathrm{GCC}_{ij}$. In accordance with Table~\ref{tab:gcc}, the current to ground is the same at every node. (b) At low reach (high $\mbox{$\mathsmaller{\Pi}$}_C$), the current only flows along edges adjacent to the source, weighted by the edge conductance---see Table \ref{tab:gcc}. } \end{figure*} \begin{figure*} \includegraphics[scale=1.9, trim={3.0cm 2.2cm 2.8cm 2.2cm},clip]{figures/reach_intermediate.pdf} \caption{\label{fig:reach_intermediate} {Intermediate reach in the exogenous ground-current centrality.} See the caption to Fig.~\ref{fig:reach_demo} for explanatory details. The reach is demonstrated on the (weighted) Florida power-grid network from \cite{dale,xu2014architecture}. In this version of the network, the weights are readable from the figure: they are inversely proportional to the Euclidean distance between nodes. When the reach is high ($\mbox{$\mathsmaller{\Pi}$}_C$ is low), the currents spread to nodes at large weighted distance from the voltage source. In this regime (e.g., at $\mbox{$\mathsmaller{\Pi}$}_C=0.1$), the amount of current flowing to ground from each node is approximately identical. As the reach decreases ($\mbox{$\mathsmaller{\Pi}$}_C$ increases to, e.g., $5.0$), the ground-currents are no longer identical. 
The currents along edges far from the voltage source are diminished and, at very low reach (e.g., $\mbox{$\mathsmaller{\Pi}$}_C=500$), only currents to the voltage source's nearest neighbors remain. } \end{figure*} It is also useful to consider the exogenous ground-current centrality $\widetilde{\mathbf{M}}^\mathrm{GCC}$. Referencing Fig.~\ref{fig:gcc}, this amounts to the removal of the dotted connection to ground. This variant can recover the adjacency matrix for large values of $\mbox{$\mathsmaller{\Pi}$}_C$---much like the communicability centrality recovers the adjacency matrix for large values of $\mbox{$\mathsmaller{\Pi}$}_T$. Detailed calculations for the limiting forms of the two variants of ground-current centrality for {\it arbitrary} $\ket{C}$ vectors are presented in Appendix \ref{app:gcc}. We summarize the limits in Table \ref{tab:gcc}. At intermediate values of $\mbox{$\mathsmaller{\Pi}$}_C$, the behavior of the ground-current centrality interpolates between these two limits. As $\mbox{$\mathsmaller{\Pi}$}_C$ decreases from $\infty$, pairs of nodes ($i,j$) separated by larger weighted graph distances $D_{ij}$ start to receive non-negligible ground current $\mathbf{M}_{ij}$. This means that the {\it reach} of the centrality increases as $\mbox{$\mathsmaller{\Pi}$}_C$ decreases and, therefore, $\mbox{$\mathsmaller{\Pi}$}_C$ is a reach parameter. Finally, as $\mbox{$\mathsmaller{\Pi}$}_C$ approaches $0$, {\it all} pairs produce the same value of $\mathbf{M}_{ij}$, regardless of the distance between $i$ and $j$---reach is maximized. (The centrality {\it at} $\mbox{$\mathsmaller{\Pi}$}_C=0$ is undefined, however, since there is no ground-current flow in that situation.) Increasing the reach by decreasing $\mbox{$\mathsmaller{\Pi}$}_C$ allows longer network paths to be explored, which leads to more parallel paths to the same destination. Therefore, tuning reach in this case also necessarily tunes grasp, but this is a secondary effect.
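The limiting forms in Table~\ref{tab:gcc} can be confirmed numerically. The sketch below (the function name and the toy network are ours) computes the exogenous variant $\widetilde{\mathbf{M}}$ by setting $C_i=0$ before solving the current-flow problem for row $i$, and checks that it approaches $\mathbf{A}_{ij}$ at high $\mbox{$\mathsmaller{\Pi}$}_C$ and $(1-\delta_{ij})\mbox{$\mathsmaller{\Pi}$}_C$ at low $\mbox{$\mathsmaller{\Pi}$}_C$:

```python
import numpy as np

def exogenous_gcc(A, pi_C):
    """Exogenous ground-current matrix: for each source row i, the direct
    connection between i and ground is removed (C_i = 0), as in the dotted
    edge of the circuit diagram."""
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    M = np.zeros((N, N))
    for i in range(N):
        C = np.full(N, float(pi_C))
        C[i] = 0.0
        G = np.linalg.inv(L + np.diag(C))
        M[i] = (1.0 / G[i, i]) * G[i] * C   # row i of M~, unnormalized
    return M

# Toy weighted network (entries are conductances)
A = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 3.0, 0.0]])

M_low_reach  = exogenous_gcc(A, 1e6)   # high Pi_C: recovers A
M_high_reach = exogenous_gcc(A, 1e-6)  # low Pi_C: (1 - delta_ij) * Pi_C
```

At intermediate $\mbox{$\mathsmaller{\Pi}$}_C$ the same routine produces the distance-dependent ground currents discussed above.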
The reach behavior of the exogenous ground-current centrality at high reach (low $\mbox{$\mathsmaller{\Pi}$}_C$) and low reach (high $\mbox{$\mathsmaller{\Pi}$}_C$) is illustrated in Fig.~\ref{fig:reach_demo}. The intermediate reach behavior is illustrated in Fig.~\ref{fig:reach_intermediate}. The figures also clearly illustrate the ground-current centrality's status as a {\it radial} measure: influence spreads outward from the node $i$. Further, the centrality of every node is derived from a single conserved current flow in a resistor network. These key properties of the ground-current centrality are reflected in its position in Table~\ref{tab:classification}. \section{Unique features of the ground-current centrality}\label{sec:results} Because of its unique position in the taxonomy presented in Table~\ref{tab:classification}, the ground-current centrality differs significantly from similar centralities. In Sec.~\ref{sec:adv_conserved} we compare it to other conserved-flow centralities (top row in Table~\ref{tab:classification}), while in Sec.~\ref{sec:adv_reg} we compare it to other radial reach-parametrized centralities (left column in the table). \subsection{Differences from other conserved-flow centralities} \label{sec:adv_conserved} Referencing the final expressions in Eqs.~(\ref{eq:gccf}) and (\ref{gccfinal}), we consider the differences between the ground-current centrality and other current-based centrality measures previously considered (the first two rows in Table \ref{tab:classification}). Of course, the most important difference is that the ground-current centrality is the only one of these that can control reach, which is in many ways a more intuitive type of parametrization than grasp. Further, the other methods' centrality matrices do not reduce to the adjacency matrix at any parameter value---this is a consequence of these centralities not using a reach parameter, and thus being unable to restrict influence to nearest neighbors.
The ground-current centrality is also mathematically simpler than the alternatives. The closeness and betweenness centralities rely on algorithms (Dijkstra's algorithm and the method described by Brandes in \cite{brandes2001faster}, respectively), while the ground-current centrality has a closed-form solution. The resistance closeness and the current betweenness rely on the calculation of currents using the pseudoinverse or the inverse of a reduced Laplacian matrix. On the other hand, the ground-current centrality uses an ordinary matrix inverse and the ordinary Laplacian $\mathbf{L}$. This is convenient for formula manipulations such as those in Appendix \ref{app:gcc}. Further, the conditional forms \cite{gurfinkel2020absorbing} of the resistance closeness and current betweenness require the calculation of current on every edge, while the ground-current centrality only calculates currents that correspond to elements of $\mathbf{M}^\mathrm{GCC}$. In fact, even this is unnecessary: Eqs.~(\ref{eq:gccf}) and (\ref{gccfinal}) show that the final centralities can be found from the diagonal of the inverted matrix, without summing over $\mathbf{M}_{ij}$. Finally, we emphasize that the ground-current centrality is significantly simpler conceptually than the alternative measures. All of these involve solving a current (or walker) flow problem between pairs of nodes and aggregating all such pairs to calculate the final centrality. The ground-current centrality, however, requires only a single current-flow problem for every node whose centrality we wish to calculate. \subsection{Differences from other radial reach-parametrized centralities} \label{sec:adv_reg} In Table \ref{tab:classification}, the ground-current centrality is the only radial reach-parametrized centrality that is based on an acyclic, conserved flow. As a result, it differs significantly from the Katz, PageRank, and communicability centralities. 
Especially at high reach, these alternative centralities lead to unintuitive centrality rankings on simple example networks. The reason is that the cyclic flows employed by these centralities are forced to retrace their steps when the reach is high, while the ground-current centrality's conserved current flow never does so because current flow is acyclic. \begin{table}[] \caption{{\it Summary of real-world example networks}. Networks have $N$ nodes and $M$ edges. The density of a network is defined as the number of edges divided by the number of possible edges: $M/(0.5 N (N-1))$. } \label{tab:nets} \setlength\tabcolsep {.35em} \begin{tabular}{clcccc} Network & Refs. & \textit{N} & \textit{M} & Density & Weights \\ \hline \textit{C. elegans} Neuronal Network & \cite{Choe-2004-connectivity}& 277 & 1918 & 0.05 &Unweighted\\ Weighted Florida Power Grid & \cite{xu2014architecture,dale} & 84 & 137 &0.04 & Continuous \\ Unweighted Florida Power Grid & \cite{xu2014architecture,dale} & 84 & 137 &0.04 & Unweighted \\ Italian Power Grid & \cite{hama10} & 127 & 169 & 0.02 & Unweighted \\ Vole Trapping & \cite{nr-aaai15, voles} & 118 & 283 & 0.04 & Integer\\ Kangaroo Group & \cite{kangadata,grant1973dominance} & 17 & 91 & 0.67 & Integer \\ Benchmark Circuit & \cite{milo2004superfamilies} & 512 & 819 & 0.006 & Unweighted \\ \end{tabular} \end{table} We compare the behavior of the radial reach-parametrized centralities on line networks, subdivided star networks, Cayley trees modified to become regular networks, and a lattice network with a weighted bottleneck. In these simply structured networks, the nodes' centrality rankings are intuitive. Here we take the closeness centrality to provide the paradigmatic intuitive centrality ranking, since it assigns greater importance to nodes that are close to many others. In our simply structured example networks, such nodes are easy to identify by eye.
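As a quick illustration on the simplest of these test cases (the code and the parameter choice $\mbox{$\mathsmaller{\Pi}$}_C=1$ are ours), the ground-current centrality of Eq.~(\ref{gccfinal}) can be evaluated on an unweighted line network; the resulting values grow with proximity to the center of the line, matching the closeness-based intuition:

```python
import numpy as np

def gcc(A, pi_C):
    """Ground-current centrality: c_i = 1 / [(L + Pi_C * I)^{-1}]_{ii}."""
    L = np.diag(A.sum(axis=1)) - A
    return 1.0 / np.diag(np.linalg.inv(L + pi_C * np.eye(len(A))))

# Unweighted line network of N = 9 nodes
N = 9
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

c = gcc(A, pi_C=1.0)
```

Physically, moving inward from an end node adds parallel paths to ground, so the effective input current (and hence the centrality) increases toward the center.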
In addition to the simple example networks, we analyze numerical data from seven real-world example networks, summarized in Table \ref{tab:nets}. Because we compare node-node centrality flows (elements of $\mathbf{M}$) as well as final centralities (elements of $\ket{c}$), we rely specifically on the harmonic \cite{Dekker2005,Rochat2009,NEWM10} closeness centrality: $\mathbf{M}^\mathrm{HCC}_{ij}=D_{ij}^{-1}$. Of the parametrized centralities considered here, only the ground-current centrality can reproduce the intuitive ordering in all the simply structured example networks. Furthermore, in the case of networks with bottlenecks---including real-world networks---the ground-current centrality reproduces aspects of the betweenness centrality, as well as the harmonic closeness. In the case of regular networks, the ground-current centrality does not collapse to nearly identical centrality values, as several of the alternative measures do. Conversely, in the case of real-world networks with hubs, we show that the ground-current centrality assigns centrality weight more equitably than the communicability centrality, while still giving the most weight to the hub. In this section we use the exogenous form ($\widetilde{\mathbf{M}}$) of the discussed centralities, since only the exogenous forms of the communicability, Katz, and ground-current centralities reduce to degree centrality at low reach ($\mathbf{M}$ reduces to $\mathbf{A}$). Furthermore, only the exogenous communicability centrality leads to nontrivial results in the case of regular networks (see Sec.~\ref{sec:reg}). However, the results for the full ground-current centrality $\mathbf{M}^\mathrm{GCC}$ are very similar to those for $\widetilde{\mathbf{M}}^\mathrm{GCC}$. We also limit the discussion to {\it normalized} centralities, introducing the normalization factor $\widetilde{\alpha}$ into Eq.~(\ref{gccfinal}) so that $\widetilde{\alpha} \sum_{ij}\widetilde{\mathbf{M}}_{ij}=1$.
Without normalization, centrality values for the communicability ($\widetilde{\mathbf{M}}^\mathrm{COM}$) become unmanageably large at high reach, while ground-current centrality values ($\widetilde{\mathbf{M}}^\mathrm{GCC}$) go to zero in the same regime. \subsubsection{Line Networks}\label{sec:lines} Consider the unweighted network of $N$ nodes arranged in a straight line, so that the two end nodes have degree 1, while the middle $N-2$ nodes have degree 2. Here, the harmonic closeness centrality specifies a centrality ranking that grows with proximity to the center of the line. Indeed, this intuitive ordering is reproduced by almost all the centrality measures under consideration, and across all parameter values (except those extremal values where all centralities are equal). The PageRank is the only centrality that does not reproduce this ordering. It places the degree-1 nodes in the lowest centrality rank, but the remaining rankings are the {\it reverse} of those of the harmonic closeness, so that the node at the center of the line has the second-lowest rank. This unintuitive ordering occurs at all nonextremal parameter values. More generally, the PageRank has properties that make it unsuitable as a reach-parametrized centrality. As the parameter goes to zero, the reach technically increases. However, at this parameter value, the random walk behind the PageRank is allowed to take many steps, including steps that retrace its own path. Thus the walk approaches its stationary distribution, which is proportional to node degree \cite{NEWM10}. The result is the paradoxical situation where increasing the PageRank's reach tends to make it more like the degree centrality, which is inherently low-reach. We believe that this behavior leads to the unintuitive ordering on the line network. Originally, the PageRank centrality was developed to rank websites, which form {\it directed} networks of hyperlinks.
Our simple test case suggests that the PageRank is not well suited to {\it undirected} networks. \subsubsection{Subdivided Star Networks} \label{sec:sstar} We now introduce a simple class of {weighted} networks that also have intuitive centrality matrix values based on the harmonic closeness. These {\it subdivided star networks\/} $\mathcal{S}_{\{d\}}$ comprise a series of ``spokes'' emanating from the hub node $n_0$. Each spoke consists of a chain of edges. The network is specified precisely by ${\{d\}}$, the list of {\it unweighted} distances along the spokes. The edge weights are chosen to make the {\it weighted} distance ($D$) along each spoke equal to unity. See the caption to Fig.~\ref{fig:sstar} for further details and an illustration for ${\{d\}}={\{1,2,3,4,6,8\}}$. We also intend to compare the behavior of a node very distant from $n_0$. To do this, we connect a final node $n_\mathrm{long}$ directly to $n_0$, setting $D_{n_0 n_\mathrm{long}}=1000$. \begin{figure}[h] \includegraphics[scale=0.7, trim={0 0cm 0cm 0cm},clip]{figures/substar.pdf} \caption{\label{fig:sstar} The subdivided star network $\mathcal{S}_{\{1,2,3,4,6,8\}}$. We only compare the centralities of the large, labeled nodes. However, all 26 nodes are accounted for in the adjacency matrix. The node labels indicate the number of edges in the ``spoke'' terminated by that node, {\it e.g.}, one must traverse 4 edges to move from $n_0$ to $n_4$. The weights are chosen to make the total weighted graph distance along the spoke equal to unity: Here, $D_{n_0 n_1}=D_{n_0 n_2}=D_{n_0 n_3}=D_{n_0 n_4}=D_{n_0 n_6}=D_{n_0 n_8}=1$. All edges within a spoke have the same length, and thus the same weight. (The edge weights are inversely proportional to the Euclidean distances in the figure). For example, the 6 edges between $n_0$ and $n_6$ have weight 6. Because edge weights are inverse to weighted edge distances, $6 \times (1/6)=D_{n_0 n_6}=1$. 
There is one exception to the previous rules: a long edge $(n_0,n_\mathrm{long})$, where $d_{n_0 n_\mathrm{long}}=1$ and $D_{n_0 n_\mathrm{long}}=1000$. } \end{figure} We are only concerned with influence flows between the hub node and the nodes at the ends of the spokes. We choose $\mathcal{S}_{\{1,2,5,10,18,30\}}$ as a representative example network, on which we compare the influence values $\widetilde{\mathbf{M}}$ for different centralities. Specifically, we consider $\widetilde{\mathbf{M}}_{n_0 i^\mathrm{p}}$, for peripheral nodes $i^\mathrm{p} \in {\{n_\mathrm{long},n_1,n_2,n_5,n_{10}, \ldots\}}$. All the nodes $i^\mathrm{p}$ (except $n_\mathrm{long}$) are the same weighted distance from $n_0$, but their unweighted distances $d_{n_0 i^\mathrm{p}}$ are all different. As a result, the ordering of matrix elements in the {\it unweighted} harmonic closeness (HCC) is clear: $\widetilde{\mathbf{M}}^\mathrm{HCC}_{n_0 i^\mathrm{p}}$ goes down for $i^\mathrm{p}$ on ``longer'' spokes, while these matrix elements are all the same for the {\it weighted} HCC. On the other hand, the {\it unweighted} HCC matrix elements $\widetilde{\mathbf{M}}^\mathrm{HCC}_{n_0 n_\mathrm{long}}$ and $\widetilde{\mathbf{M}}^\mathrm{HCC}_{n_0 n_1}$ are the same, while in the weighted case $\widetilde{\mathbf{M}}^\mathrm{HCC}_{n_0 n_\mathrm{long}}$ is much smaller than any other $\widetilde{\mathbf{M}}^\mathrm{HCC}_{n_0 i^\mathrm{p}}$. (Note that we use the harmonic closeness, because the standard closeness does not specify matrix elements.) Of all the parametrized centralities considered here, the ground-current centrality is the only one that matches the intuitive ordering of both weighted and unweighted HCC.
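The subdivided-star construction described in the caption to Fig.~\ref{fig:sstar} can be sketched in code as follows; the node indexing and the helper name are our own conventions, not the paper's:

```python
import numpy as np

def subdivided_star(spoke_lengths, long_distance=1000.0):
    """Weighted adjacency matrix of S_{d}.  Node 0 is the hub n_0; the
    last node is n_long."""
    n_nodes = 1 + sum(spoke_lengths) + 1  # hub + spoke nodes + n_long
    A = np.zeros((n_nodes, n_nodes))
    nxt = 1
    for d in spoke_lengths:
        prev = 0  # every spoke starts at the hub
        for _ in range(d):
            # Each of the d edges has length 1/d (weight d), so the
            # weighted distance along the whole spoke is d * (1/d) = 1.
            A[prev, nxt] = A[nxt, prev] = float(d)
            prev, nxt = nxt, nxt + 1
    # The long edge (n_0, n_long): a single edge of weighted length 1000.
    A[0, n_nodes - 1] = A[n_nodes - 1, 0] = 1.0 / long_distance
    return A

A = subdivided_star([1, 2, 3, 4, 6, 8])  # the network of Fig. sstar
```

The resulting matrix has 26 nodes, matching the count given in the figure caption, with every spoke a unit weighted distance from the hub.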
Note that we are not comparing the final centrality values $c_{i^\mathrm{p}}$ of the peripheral nodes, but rather the matrix elements $\widetilde{\mathbf{M}}_{n_0 i^\mathrm{p}}$, since these values are not inflated by the presence of nodes along the spokes \footnote{In the final calculation, the many nonperipheral nodes (unlabeled in Fig.~\ref{fig:sstar}) account for the majority of the contribution to $i^\mathrm{p}$'s centrality. This means that peripheral nodes on ``long'' spokes will have larger centrality, just because they are near many nonperipheral nodes. As a result, the $c_{i^\mathrm{p}}$ will have an ordering that places greater importance on nodes at a greater unweighted distance from the hub node $n_0$. All of the centralities under discussion reproduce this expected $c_{i^\mathrm{p}}$ ordering. We focus on the matrix elements $\widetilde{\mathbf{M}}_{n_0 i^\mathrm{p}}$ rather than the final centrality $c_i$ because they are a direct measurement of the influence between two nodes, and as such are more sensitive to the chosen centrality method. }. Figure \ref{fig:exp_star} depicts the communicability centrality $\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 i^\mathrm{p}}$. Though the rank ordering for all $i^\mathrm{p}$ except $n_\mathrm{long}$ matches HCC at low reach (high $\mbox{$\mathsmaller{\Pi}$}_T$), the levels begin to cross as the reach is increased, and at high reach ($\mbox{$\mathsmaller{\Pi}$}_T\to0$) $\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 n_{30}}$ becomes the highest, though in HCC it is the lowest. This matrix element alone deviates from the ordering established at low reach (high $\mbox{$\mathsmaller{\Pi}$}_T$). The reason, to be discussed in Sec.~\ref{sec:conc}, is the duplicating nature of the communicability centrality. Another issue is that the ranking of centrality element $\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 n_\mathrm{long}}$ does not appreciably change as the parameter is decreased (reach is increased).
Furthermore, the addition of spokes to the network can affect the rank ordering of the other $i^\mathrm{p}$. For example, while the figure shows that $\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 n_5}>\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 n_{10}}$ for the network $\mathcal{S}_{\{1,2,5,10,18,30\}}$, this is not the case for the network $\mathcal{S}_{\{1,2,5,10\}}$, even though they only differ by the addition of two spokes. In the smaller network, $\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 n_{10}}$ is the largest at high reach (low $\mbox{$\mathsmaller{\Pi}$}_T$) (and in general the largest $\widetilde{\mathbf{M}}^\mathrm{COM}_{n_0 i^\mathrm{p}}$ at high reach occurs for the $i^\mathrm{p}$ with the largest value of $d_{n_0 i^\mathrm{p}}$ in the network). The ground-current centrality is not susceptible to such reshuffling upon the addition of spokes because different spokes are electrically independent when $n_0$ is the network's voltage source, as in the calculation of $\widetilde{\mathbf{M}}^\mathrm{GCC}_{n_0 i^\mathrm{p}}$ (see Sec.~\ref{sec:gcc_first}). \begin{figure}[h] \includegraphics[scale=.75, trim={0 .5cm 0cm 0cm},clip]{figures/exp_star2a.pdf} \includegraphics[scale=.9, trim={0 0cm 0cm 0cm},clip]{figures/exp_star2leg.pdf} \includegraphics[scale=.75, trim={0cm .5cm 0cm 0cm},clip]{figures/exp_star2b.pdf} \caption{ \label{fig:exp_star} {Selected values of $\widetilde{\alpha} \widetilde{\mathbf{M}}^\mathrm{COM}$ for the $\mathcal{S}_{\{1,2,5,10,18,30\}}$ network}. The same data are plotted on (a) log-linear and (b) log-log scales. Note that the normalization factor $\widetilde{\alpha}$ depends on $\mbox{$\mathsmaller{\Pi}$}_T$. Without $\widetilde{\alpha}$, the $\widetilde{\mathbf{M}}$ values become unmanageably large. The Katz centrality is qualitatively similar, but with convergence failure at high reach (low values of $\mbox{$\mathsmaller{\Pi}$}_T$).
At high reach, the COM fails to reproduce the intuitive HCC ranking of the $\widetilde{\mathbf{M}}_{n_0 i^\mathrm{p}}$. } \end{figure} The Katz centrality on the $\mathcal{S}_{\{1,2,5,10,18,30\}}$ network is qualitatively similar to the communicability centrality in Fig.~\ref{fig:exp_star}, reproducing the features discussed above. As with the communicability, $\widetilde{\mathbf{M}}^\mathrm{KC}_{n_0 n_{30}}$ begins to overtake the other values of $\widetilde{\mathbf{M}}^\mathrm{KC}_{n_0 i^\mathrm{p}}$ as $\mbox{$\mathsmaller{\Pi}$}_T$ is reduced. However, the convergence fails before it can overtake $\widetilde{\mathbf{M}}^\mathrm{KC}_{n_0 n_5}$. The PageRank centrality reproduces HCC's $\widetilde{\mathbf{M}}_{n_0 i}$ ranking for all nodes except $i^\mathrm{p}=n_\mathrm{long}$. In fact, $\widetilde{\mathbf{M}}^\mathrm{PRC}_{n_0 n_1}=\widetilde{\mathbf{M}}^\mathrm{PRC}_{n_0 n_\mathrm{long}}$ for all values of $\mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}$---therefore $\widetilde{\mathbf{M}}^\mathrm{PRC}_{n_0 n_\mathrm{long}}$ is consistently tied for the highest rank. This happens because the random walker beginning on either $n_1$ or $n_\mathrm{long}$ {\it must} traverse the edge to $n_0$, regardless of the weight of that edge. The result does not seem reasonable, because the connection from $n_0$ to $n_\mathrm{long}$ is meant to carry very little influence. Figure \ref{fig:gcc_star} shows that, for the ground-current centrality, the ordering of the $\widetilde{\mathbf{M}}^\mathrm{GCC}_{n_0 i^\mathrm{p}}$ matches the HCC ordering at all parameter values for all $i^\mathrm{p}$ except $n_\mathrm{long}$. In addition, the $\widetilde{\mathbf{M}}^\mathrm{GCC}_{n_0 n_\mathrm{long}}$ is ranked lowest at high reach (low $\mbox{$\mathsmaller{\Pi}$}_C$), matching the weighted HCC. While this matrix element is not ranked lowest at low reach (high $\mbox{$\mathsmaller{\Pi}$}_C$), Fig.
\ref{fig:gcc_star}(a) shows that it does not amount to a significant centrality contribution at those parameter values. The inset of Fig. \ref{fig:gcc_star}(b) shows that, while all $\widetilde{\mathbf{M}}_{n_0 i^\mathrm{p}}$ values eventually converge as $\mbox{$\mathsmaller{\Pi}$}_C\to 0$, those for the peripheral nodes $i^\mathrm{p}$ other than $n_\mathrm{long}$ converge at much higher $\mbox{$\mathsmaller{\Pi}$}_C$. This behavior is reasonable, given that $D_{n_0 i^\mathrm{p}}=1$ for all $i^\mathrm{p}$ other than $n_\mathrm{long}$, and that $D_{n_0 n_\mathrm{long}}=1000$. \begin{figure}[h] \includegraphics[scale=.75, trim={0 .5cm 0cm 0cm},clip]{figures/gcc_star3a.pdf} \includegraphics[scale=.9, trim={0 0cm 0cm 0cm},clip]{figures/gcc_star3leg.pdf} \includegraphics[scale=.75, trim={0cm .5cm 0cm 0cm},clip]{figures/gcc_star3b.pdf} \caption{\label{fig:gcc_star} {Selected values of $\widetilde{\alpha} \widetilde{\mathbf{M}}^\mathrm{GCC}$ for the $\mathcal{S}_{\{1,2,5,10,18,30\}}$ network}. The same data are plotted on (a) log-linear and (b) log-log scales. Note that the normalization factor $\widetilde{\alpha}$ depends on $\mbox{$\mathsmaller{\Pi}$}_C$. Without $\widetilde{\alpha}$, $\widetilde{\mathbf{M}}$ values go to zero at small $\mbox{$\mathsmaller{\Pi}$}_C$. The inset shows the detailed behavior of the curves at high reach (low $\mbox{$\mathsmaller{\Pi}$}_C$), where the values for all peripheral nodes $i^\mathrm{p}$ become indistinguishable well before $\widetilde{\mathbf{M}}_{n_0 n_\mathrm{long}}$ achieves the same value. At all parameter values, the ground-current centrality reproduces the HCC's intuitive centrality ordering on the $\widetilde{\mathbf{M}}_{n_0 i}$. } \end{figure} \subsubsection{Regular Networks} \label{sec:reg} We have seen that (the exogenous forms of) several centralities under discussion reduce to degree centrality at low reach (high $\mbox{$\mathsmaller{\Pi}$}$).
In a sense, then, lower parameter values (higher reach) are perturbations on the degree centrality. Therefore, it becomes reasonable to factor out the contribution of nearest-neighbor influence to probe each centrality method's unique characteristics. Testing on a $k$-regular network, where every node has degree $k$, accomplishes this goal. For $k$-regular networks, the communicability, Katz, and PageRank---but not ground-current---centralities are always trivial, with every node's centrality value equal to $1/N$. More generally, this result obtains for any $\mathbf{M}$ that can be written as a power series in the adjacency matrix: $\mathbf{M}(\mathbf{A})= a_0 \pmb{\mathbb{I}} + a_1 \mathbf{A} + a_2 \mathbf{A}^2 + \cdots$. This is because $\mathbf{A}\ket{1}=k\ket{1}$, and so $\mathbf{M}(\mathbf{A}) \ket{1}$ is proportional to $\ket{1}$ as well. Applying the normalization factor from Eq.~(\ref{eq:centrality}) results in $\ket{c}= \alpha\mathbf{M}(\mathbf{A})\ket{1} = (1/N)\ket{1}$. Equations (\ref{eq:communicability}) and (\ref{eq:katzexpand}), respectively, show that the communicability and Katz centralities display this degeneracy. Equation (\ref{eq:prc}) for the PageRank centrality shows the same, noting that, for regular graphs the factor of $\pmb{\mathrm{Diag}}(\ket{k^{-1}})$ becomes a scalar. Indeed, in the case of regular graphs, the PageRank becomes identical to the Katz centrality with $\mbox{$\mathsmaller{\Pi}$}^\mathrm{KC}= k \mbox{$\mathsmaller{\Pi}$}^\mathrm{PRC}$. It is still possible to achieve nontrivial results by removing the diagonal of $\mathbf{M}$, {\it i.e.}, using the {\it exogenous} forms of these centralities, given by $\widetilde{\mathbf{M}}$. (On the other hand, the diagonal forms $\overline{\mathbf{M}}$ tend to produce the inverse centrality ranking, because $\mathbf{M}\ket{1} =\widetilde{\mathbf{M}}\ket{1}+\overline{\mathbf{M}}\ket{1}$. 
) Nonetheless, the centrality values are still nearly identical, because the diagonal does not account for a large fraction of the final centrality weight. In general, the ground-current centrality results in nontrivial and more varied centrality values for both $\mathbf{M}$ and $\widetilde{\mathbf{M}}$. As a test case, we consider the modified Cayley trees depicted in Fig.~\ref{fig:cayley_closed_example}. The (unmodified) Cayley tree is an acyclic, nearly regular network, defined by two parameters: $k$ and $n$. The first of these is the degree of every interior ({\it i.e.}, nonleaf) node, while the second is the number of generations grown out from the central generation-0 node. For $m\ge1$, the $m$th generation contains $k (k-1)^{m-1}$ nodes. Cayley trees have the special property that it is intuitive which nodes are more central than others: the lower the generation, the higher the centrality, in accordance with the harmonic closeness, HCC. This is because, as can be seen in Fig. \ref{fig:cayley_closed_example}, lower-generation nodes are closer to the center, while higher-generation nodes are more peripheral. To arrive at the modified Cayley tree, we add edges to every leaf node, resulting in a $k$-regular graph. The new edges are added in such a way as to keep the leaf nodes on the network's periphery and the lower-generation nodes closer to the center. This ``tree closure'' method, described below, can be employed for all odd values of $k$. However, here we report centrality results only for $k=3$ and $n=7$, since results are qualitatively similar for other values of $k$ and $n$. To ``close'' a $k=3$ Cayley tree, every leaf node $i$ makes two additional connections. The closest leaf node to $i$, which lies graph distance $d=2$ away, is skipped. Then $i$ is connected to the next-closest two leaf nodes, a graph distance $d=4$ away. This produces a symmetric network, where every node at a given generation is equivalent.
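The power-series argument given at the start of this section is easy to verify numerically. A minimal sketch for a 2-regular cycle graph, using the matrix exponential (the communicability kernel) as one instance of a power series in $\mathbf{A}$:

```python
import numpy as np
from scipy.linalg import expm

# A 2-regular example: the cycle graph on 8 nodes.
N, k = 8, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0

# Since A @ ones = k * ones, any power series in A (here the matrix
# exponential) maps the all-ones vector to a multiple of itself.
c = expm(A) @ np.ones(N)
c = c / c.sum()  # normalized centrality: exactly 1/N on every node
```

The same degeneracy holds for any $k$-regular graph and any centrality of the form $\mathbf{M}(\mathbf{A})=a_0\pmb{\mathbb{I}}+a_1\mathbf{A}+a_2\mathbf{A}^2+\cdots$, which is why only the exogenous forms (or the ground-current centrality) can discriminate structure here.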
The HCC ordering is unaffected by the addition of these edges. All the centralities under discussion reproduce the HCC centrality hierarchy: lower-generation nodes have higher centralities. However, the centralities other than the ground-current centrality are nearly trivial. In Fig.~\ref{fig:cayley_results}, we plot the centralities for the parameters that produce the largest range between the centrality values of the 0th and $n$th generation nodes. For consistency, we have used the exogenous ($\widetilde{\mathbf{M}}$) forms of every centrality. However, the full ($\mathbf{M}$) ground-current centrality is very similar. The full form of the other centralities leads to the trivial result of centrality values of $1/N$ for every node, illustrated by the horizontal line in the figure. However, for the other centralities, even the exogenous form does not produce much deviation from $1/N$. The analysis presented here also leads to similar results when applied to square-lattice segments, made into regular networks by the addition of multiedges along the periphery. Based on these considerations, we propose the ground-current centrality as a reasonable choice for discriminating central and noncentral network structure in regular graphs. This may also be true for nearly regular graphs, such as the street networks of cities that have gridlike layouts. \begin{figure} \includegraphics[scale=1.0, trim={1cm 0cm 0cm 0cm},clip]{figures/cayley_examples.pdf} \caption{\label{fig:cayley_closed_example} Closed Cayley trees with degree $k$ and $n$ generations. } \end{figure} \begin{figure} \includegraphics[scale=.8, trim={0 0cm 0cm 0cm},clip]{figures/cayley_results2.pdf} \caption{\label{fig:cayley_results} Exogenous centrality values for the closed Cayley tree with $k=3$ and $n=7$. In this network, all nodes at a given generation are equivalent, so there are only 8 unique data points.
The parameter values for each centrality are chosen to give the largest possible spread in the centrality values of the generations (ground-current: $\mbox{$\mathsmaller{\Pi}$}_C=0.010$, communicability: $\mbox{$\mathsmaller{\Pi}$}_T=0.202$, PageRank: $\mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}=1.055$). As discussed in the text, the Katz and PageRank centralities are identical on this network. The communicability values are similar but not equal to the Katz values. The horizontal line indicates the value of $1/N$, which coincides with the normalized degree centrality values on this network. } \end{figure} \subsubsection{Networks with Bottlenecks} \label{Networks with bottlenecks}\label{sec:bnecks} The ground-current centrality is the only radial reach centrality in Table \ref{tab:classification} that is based entirely on a single acyclic, conserved flow. As a result, it is more sensitive to bottlenecks than the other centralities. {\it Lattices with a Bottleneck} We have argued that, of all the reach-parametrized centralities considered here, the ground-current centrality is the only centrality that reproduces intuitive centrality orderings on a range of networks. To this end, we have shown that it matches the centrality rankings specified by the harmonic closeness. In this section, we further show that the ground-current centrality also captures intuitive aspects of the betweenness centrality when applied to networks with bottlenecks. To show that the ground-current centrality independently captures aspects of harmonic closeness and betweenness, we construct a network to which those two centralities assign very different centrality rankings. Consider the weighted bottleneck network $\mathcal{B}(L=5)$ depicted in Fig.~\ref{fig:bottleneck_network}. It consists of two $L\times L$ square sublattices, connected by a single bottleneck node. All edges have unit length, except for the two edges incident on the bottleneck node, which have length 10.
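A sketch of the $\mathcal{B}(L)$ construction follows. The exact attachment sites of the bottleneck node are only read approximately off Fig.~\ref{fig:bottleneck_network} (here, the middle of the facing side of each sublattice), so they should be treated as an assumption:

```python
import numpy as np

def bottleneck_network(L, bottleneck_length=10.0):
    """Weighted adjacency of B(L): two L x L unit-length square sublattices
    joined through one bottleneck node by two edges of length 10."""
    n = L * L
    N = 2 * n + 1  # the bottleneck node gets the last index

    def idx(lattice, row, col):
        return lattice * n + row * L + col

    A = np.zeros((N, N))
    for lattice in (0, 1):
        for r in range(L):
            for c in range(L):
                if c + 1 < L:
                    A[idx(lattice, r, c), idx(lattice, r, c + 1)] = 1.0
                if r + 1 < L:
                    A[idx(lattice, r, c), idx(lattice, r + 1, c)] = 1.0
    A = A + A.T

    # Attach the bottleneck node to the middle of the facing side of each
    # sublattice (assumed sites); edge weight = 1 / edge length.
    w = 1.0 / bottleneck_length
    b = N - 1
    for site in (idx(0, L // 2, L - 1), idx(1, L // 2, 0)):
        A[b, site] = A[site, b] = w
    return A

A = bottleneck_network(5)
```

For $L=5$ this yields $2\cdot 25+1=51$ nodes and $2\cdot 2L(L-1)+2=82$ edges, only two of which carry the small bottleneck weight.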
The weighting of these edges helps distinguish the harmonic closeness and the (weighted \cite{brandes2001faster}) betweenness on this network, as shown in Fig.~\ref{fig:bottleneck_network_besclos}. The addition of the bottleneck node significantly changes the structure of the network by increasing the number of nodes reachable from the peripheral regions of the two sublattices. It is remarkable, then, that the communicability, Katz, and PageRank centralities are largely insensitive to the bottleneck node's inclusion. \begin{figure} \includegraphics[scale=1.0, trim={0cm 0cm 0cm 0cm},clip]{figures/bottleneck_example3.pdf} \caption{\label{fig:bottleneck_network} The weighted bottleneck network with length 5: $\mathcal{B}(L=5)$. All but two of the edges have unit length. The two edges forming the bottleneck have length 10. They are depicted as thick lines in the figure. } \end{figure} \begin{figure} \includegraphics[scale=1, trim={0cm 0cm 0cm 0cm},clip]{figures/betBottle.pdf} \includegraphics[scale=1, trim={0cm 0cm 0cm 0cm},clip]{figures/closBottle.pdf} \includegraphics[scale=1, trim={0cm 0cm .4cm 0cm},clip]{figures/legendunlabeled.pdf} \caption{\label{fig:bottleneck_network_besclos} Normalized (a) betweenness and (b) harmonic closeness results on $\mathcal{B}(L=15)$. Each non-white pixel corresponds to a node of $\mathcal{B}(L=15)$. For readability, the color scale is chosen such that the maximum centrality value (at given $\mbox{$\mathsmaller{\Pi}$}$) is black and the minimum nearly white. The (normalized) centrality values corresponding to these colors are reported for every $\mbox{$\mathsmaller{\Pi}$}$. A completely white region in the subfigures indicates a lack of network nodes in that location. } \end{figure} Consider Fig.~\ref{fig:exp_contraction}, which depicts the exogenous communicability centrality values $\widetilde{c}^\mathrm{COM}$ on $\mathcal{B}(L=15)$ over a range of $\mbox{$\mathsmaller{\Pi}$}_T$ values.
(The results in this section also hold for other values of $L$.) The full range of parameters is shown, in that increasing/decreasing the parameter values does not alter the image. The bottom-right portion of the figure confirms that the exogenous centrality is proportional to the degree centrality at low reach (high $\mbox{$\mathsmaller{\Pi}$}$): all nonperipheral nodes have the identical, highest centrality rank. As the reach is increased ($\mbox{$\mathsmaller{\Pi}$}$ decreased), the region of high centrality rank shrinks towards the middle of each sublattice, largely insensitive to the presence of the bottleneck node. The top-left portion of the figure shows the high reach (low $\mbox{$\mathsmaller{\Pi}$}$) centrality values of the isolated $L=15$ lattice---its centrality ranks are almost indistinguishable from the sublattices of $\mathcal{B}(L=15)$. The Katz centrality behaves similarly, and so is not pictured. \begin{figure} \includegraphics[scale=1.0, trim={0cm 0cm 0cm 0cm},clip]{figures/exp_contraction3.pdf} \caption{\label{fig:exp_contraction} Communicability centrality on $\mathcal{B}(L=15)$. The full range of parameters is shown, in the sense that increasing/decreasing their values does not alter the image. The parameters are equally spaced on a log scale. For comparison, the red-bordered subfigure illustrates the centrality on the isolated $L=15$ lattice, using a maximum reach $\mbox{$\mathsmaller{\Pi}$}_T$ value for {that} network, so that decreasing $\mbox{$\mathsmaller{\Pi}$}_T$ does not alter the image. See the caption to Fig.~\ref{fig:bottleneck_network_besclos} for details. } \end{figure} \begin{figure} \includegraphics[scale=1.0, trim={0cm 0cm 0cm 0cm},clip]{figures/pagerank_contraction3.pdf} \caption{\label{fig:pagerank_contraction} PageRank centrality on $\mathcal{B}(L=15)$. See the caption to Fig.~\ref{fig:bottleneck_network_besclos} for details.
Here, the red-bordered subfigure illustrates the centrality on the isolated $L=15$ lattice at very low reach. } \end{figure} \begin{figure} \includegraphics[scale=1.0, trim={0cm 0cm 0cm 0cm},clip]{figures/gcc_contraction3.pdf} \caption{\label{fig:gcc_contraction} Ground-current centrality on $\mathcal{B}(L=15)$. See the caption to Fig.~\ref{fig:bottleneck_network_besclos} for details.} \end{figure} The PageRank is also insensitive to the bottleneck, as seen in Fig.~\ref{fig:pagerank_contraction}. There, the top-left portion shows that the PageRank reduces to degree centrality at high reach (low $\mbox{$\mathsmaller{\Pi}$}$), unlike the communicability, Katz, and ground-current centralities. As the reach is decreased ($\mbox{$\mathsmaller{\Pi}$}$ increased), the centrality ranks remain largely symmetric {\it within} each sublattice, regardless of proximity to the bottleneck node. The bottom-right portion of the figure shows that the resulting pattern is very similar to that produced by PageRank on an isolated $L=15$ lattice. In contrast, the high-reach ground-current centrality is highly sensitive to the presence of the network's bottleneck, as shown in Fig.~\ref{fig:gcc_contraction}. At intermediate reach ($\mbox{$\mathsmaller{\Pi}$}_C=0.00674$), the centrality ranks within the sublattices are very similar to those of the isolated lattice at high reach ($\mbox{$\mathsmaller{\Pi}$}_C=0.00002$), shown in the figure's top-left. The rankings are also similar to the harmonic closeness centrality of Fig.~\ref{fig:bottleneck_network_besclos}(b). While increasing the reach (lowering $\mbox{$\mathsmaller{\Pi}$}_C$) does not change the centrality pattern in the isolated lattice, it has a large effect on the weighted bottleneck network. 
The figure shows that the region of high centrality contracts tightly around the bottleneck as $\mbox{$\mathsmaller{\Pi}$}_C \to 0$, creating a pattern much more similar to the betweenness centrality of Fig.~\ref{fig:bottleneck_network_besclos}(a). {\it Bottlenecks in Real Networks} The ground-current centrality's sensitivity to bottlenecks at high reach is also present in real networks. Here, we use high-betweenness nodes as a proxy for bottleneck structures. We compare the betweenness and the communicability, PageRank, and ground-current centralities as applied to seven example networks, including the previously discussed kangaroo network, Florida power grid network, and weighted bottleneck network $\mathcal{B}(L=15)$. We also analyze the Italian power grid previously studied in \cite{hama10}. The unweighted {\it C. elegans} network \cite{Choe-2004-connectivity} consists of 277 nodes corresponding to the majority of the nematode worm's neurons. The nematode is well studied in network theory \cite{watts1998collective, newman2002assortative} and neuroscience \cite{yan2017network} because it has one of the simplest neural structures of any organism. Here we analyze only the undirected version of this network. Finally, we analyze the largest connected component of the vole trapping network from \cite{nr-aaai15, voles}, depicted in Fig.~\ref{fig:voles}. The network's 118 nodes represent voles, while its 283 edges link voles that were caught in the same trap during a particular trapping session, with integer edge weights corresponding to the number of times they were trapped together. This network is different from the other real networks under consideration because it has high-betweenness nodes that do not also have high degree. \begin{figure} \includegraphics[scale=1.3, trim={0cm 0cm 0cm 0cm},clip]{figures/vole_network.pdf} \caption{\label{fig:voles} Vole trapping network \cite{nr-aaai15,voles}. The black nodes are those in the top 5\% of betweenness rank.
The gray nodes are, at high reach, those in the top 5\% of exogenous communicability centrality (equivalently, eigenvector centrality) rank. They are the two nodes with the highest weighted degree and some of their high-degree neighbors.} \end{figure} We quantify the preference of a centrality $X$ for bottlenecks by the ratio $f_X$: the number of nodes that are highly ranked in both $X$ and betweenness divided by the number of nodes that are highly ranked in betweenness. Here, ``high'' means ranked in the top 5\%. This measurement is illustrated for the exogenous communicability and the exogenous ground-current centralities in Figs.~\ref{fig:unweightedFlorida_comm} and \ref{fig:vole_comm}. The solid curves in the figures indicate the centrality values of the nodes in the corresponding networks (respectively, the unweighted version of the Florida power grid depicted in Fig.~\ref{fig:reach_intermediate}, and the trapping network of voles depicted in Fig.~\ref{fig:voles}). The thick black curves correspond to nodes that lie in the top 5\% of betweenness rank. The dotted red curve indicates the cutoff for high centrality: all the values above this curve lie in the top 5\% of communicability centrality in part (a) or ground-current centrality in part (b). The centrality's sensitivity to bottlenecks is measured as the fraction $f$ of thick black curves that lie above the dotted red curve. (In these, and the following, figures, we use a scaled form of $\mbox{$\mathsmaller{\Pi}$}$ that is constrained to lie between zero and one. See Appendix \ref{app:scaledparams} for details.) Figures \ref{fig:unweightedFlorida_comm} and \ref{fig:vole_comm} also illustrate the unique properties of the vole network. The low reach (high parameter) region in these plots displays the networks' degree centralities. Unlike the weighted Florida power grid network, the vole network has no high-betweenness nodes in the upper ranks of degree centrality.
This absence of high-betweenness nodes persists across most of the parameter range, for both the communicability and the ground-current centrality. At very high reach, the ground-current centrality assigns close to equal importance to every node. However, the high-betweenness nodes rise in relative rank. As a result, the fraction $f_\mathrm{GCC}$ quickly rises from zero at low $\mbox{$\mathsmaller{\Pi}$}_C$. \begin{figure} \hspace{0cm} \includegraphics[scale=.838, trim={0cm 0cm 0cm 0cm},clip]{figures/UnweightedFlorida_Comm5.pdf}\hspace{2cm} \includegraphics[scale=.855, trim={0cm 0cm 0cm 0cm},clip]{figures/UnweightedFlorida_GCC4.pdf} \caption[(a) Exogenous communicability centrality on the unweighted Florida power-grid network. (b) Exogenous ground-current centrality on the unweighted Florida power-grid network]{\label{fig:unweightedFlorida_comm}{(a) Exogenous communicability centrality on the unweighted Florida power-grid network. (b) Exogenous ground-current centrality on the unweighted Florida power-grid network}. The black curves correspond to nodes in the top 5\% of betweenness rank. All the curves above the dashed red line correspond to nodes in the top 5\% of (a) exogenous communicability and (b) exogenous ground-current centrality rank. See the text for details.} \end{figure} \begin{figure} \hspace{.7cm} \includegraphics[scale=.8, trim={0cm 0cm 0cm 0cm},clip]{figures/Vole_Comm4.pdf}\hspace{1cm} \includegraphics[scale=.855, trim={0cm 0cm 0cm 0cm},clip]{figures/Vole_GCC4.pdf} \caption[(a) Exogenous communicability centrality on the vole network. (b) Exogenous ground-current centrality on the vole network]{\label{fig:vole_comm} { (a) Exogenous communicability centrality on the vole network \cite{nr-aaai15,voles}. (b) Exogenous ground-current centrality on the vole network}. For details, see the text and the caption to Fig.~\ref{fig:unweightedFlorida_comm}.} \end{figure} The values of $f$ are reported in Fig. \ref{fig:comm_all_nets}. 
In part (a), the communicability centrality is not sensitive to bottlenecks: for all but one of the networks under consideration (the unweighted Florida grid), $f_\mathrm{COM}$ is maximized at large $\mbox{$\mathsmaller{\Pi}$}_T$, where $c^\mathrm{COM}$ is equivalent to the degree centrality. Note that $f_\mathrm{COM}$ is zero for the vole network at all parameter values. This is because the high-betweenness nodes do not have the highest degrees and are not in the most highly connected regions of the network. Part (b) shows that the PageRank centrality is also not sensitive to bottlenecks. In 3 out of 7 example networks, $f_\mathrm{PRC}$ is maximized at high reach (low $\mbox{$\mathsmaller{\Pi}$}_\mathrm{PRC}$), which is equivalent to the degree centrality. In the other 4 cases, the amount of variation in $f_\mathrm{PRC}$ is small. This is in sharp contrast to $f_\mathrm{GCC}$, illustrated in Fig.~\ref{fig:comm_all_nets}(c). In every example network, the highest value of $f_\mathrm{GCC}$ is achieved at the lowest $\mbox{$\mathsmaller{\Pi}$}_C$ (highest reach), and these maxima are significantly larger than the values at large $\mbox{$\mathsmaller{\Pi}$}_C$, which is equivalent to degree centrality. Notably, $f_\mathrm{GCC}$ is very high for the vole and kangaroo networks, which had $f_\mathrm{COM}=0$ for all $\mbox{$\mathsmaller{\Pi}$}_T$. The low-$\mbox{$\mathsmaller{\Pi}$}_C$ values of $f_\mathrm{GCC}$ are also greater than or equal to the values $f_\mathrm{HCC}$ for the harmonic closeness centrality, as indicated by crosses in the figure. In summary, the ground-current centrality at high reach captures features of the betweenness centrality, assigning high ranks to bottleneck nodes.
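The fraction $f_X$ can be computed in a few lines. In this sketch, ties at the 5\% cutoff are broken arbitrarily by the sort, and the input vectors are illustrative rather than the paper's data:

```python
import numpy as np

def bottleneck_fraction(c_x, c_betweenness, top=0.05):
    """f_X: among the nodes in the top 5% of betweenness rank, the fraction
    that also lie in the top 5% of centrality X."""
    n_top = max(1, int(np.ceil(top * len(c_x))))
    top_x = set(np.argsort(c_x)[-n_top:])
    top_b = set(np.argsort(c_betweenness)[-n_top:])
    return len(top_x & top_b) / len(top_b)

# Illustrative check: identical rankings give f = 1, opposite rankings f = 0.
c = np.arange(100, dtype=float)
f_same = bottleneck_fraction(c, c)
f_opposite = bottleneck_fraction(-c, c)
```

Applied across the parameter range of each centrality, this measurement produces the curves of Fig.~\ref{fig:comm_all_nets}.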
\begin{figure}[h] \hspace{2cm}\includegraphics[scale=0.8, trim={0cm 0cm 0cm 0cm},clip]{figures/allfractions.pdf} \caption{\label{fig:comm_all_nets} Fractions of high-betweenness nodes among nodes with high (a) exogenous communicability, (b) exogenous PageRank centrality, and (c) exogenous ground-current centrality. The fraction in (a) is equal to zero at all parameter values for both the vole network and the kangaroo network. The fraction in (b) is equal to zero at all parameter values for the kangaroo network. The crosses in (c) depict the fractions of high-betweenness nodes among nodes with high harmonic closeness centrality (HCC). These are always less than or equal to the highest fractions obtained by the ground-current centrality. } \end{figure} \subsubsection{Localization} \label{sec:local} Centrality localization \cite{martin2014localization,pradhan2020principal} describes the situation in which a small number of nodes account for a large fraction of the total centrality. (This can be viewed as a generalization of Freeman's centralization metric \cite{freeman1978centrality}.) As shown in Fig.~\ref{fig:cayley_results}, the communicability, Katz, and PageRank centralities exhibit virtually no localization on closed Cayley trees, since the centrality values of all nodes are nearly equal. In \cite{martin2014localization}, the amount of localization of a {\it square-normalized} centrality $c$ is measured with the {\it inverse participation ratio} (IPR): \begin{equation} \mathrm{IPR}(c) = \sum_i c_i^4. \end{equation} The minimum IPR value for a network of size $N$ is $1/N$, and occurs in the trivial case where all centrality values are identical. The largest value of $ \mathrm{IPR}(\widetilde{c}^\mathrm{COM}) N$ for the closed Cayley tree ($k=3,n=7$), across all possible parameters, is approximately $1.004$. The fact that this is close to 1 confirms that localization is absent to the extent that the centrality is nearly trivial. 
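As a sanity check on the IPR definition, the two limiting values quoted above ($1/N$ for a uniform vector, 1 for a fully localized one) can be reproduced in a few lines of plain Python; the vectors here are invented for illustration, and the function square-normalizes its input as the definition requires:

```python
def ipr(c):
    """Inverse participation ratio of a centrality vector,
    square-normalized so that sum_i c_i^2 = 1."""
    norm = sum(x * x for x in c) ** 0.5
    c = [x / norm for x in c]
    return sum(x ** 4 for x in c)

N = 8
# Trivial (uniform) case: IPR attains its minimum value 1/N
assert abs(ipr([1.0] * N) - 1.0 / N) < 1e-12
# Fully localized case: all centrality on one node gives IPR = 1
assert abs(ipr([1.0] + [0.0] * (N - 1)) - 1.0) < 1e-12
```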
The ground-current centrality is still highly unlocalized, but farther from the trivial limit: $\mathrm{IPR}(\widetilde{c}^\mathrm{GCC})N \approx 2.243$. While the communicability centrality exhibits little localization (is nearly trivial) in the case of regular networks, in many cases it exhibits so much localization that most nodes have centralities that are nearly zero. In \cite{martin2014localization}, it is shown that networks with prominent hub nodes ({\it i.e.}, nodes directly connected to a large number of other nodes) lead to highly localized eigenvector centrality, which is the high-reach limit of communicability centrality. Among the networks studied by the authors is the electrical circuit network 838 from the ISCAS 89 benchmark set \cite{milo2004superfamilies}. The maximum IPR value for any network is 1, and occurs when all nodes but one have zero centrality. The eigenvector centrality for the circuit network has relatively high localization: $ \mathrm{IPR}\approx .179$, corresponding to very little centrality assigned to nodes other than the hub node and its neighbors. Thus we see that in cases of both high and low localization, the centrality is not informative about most of the nodes in the network. Hub networks are not the only network architecture that leads to strongly-localized eigenvector centralities. For example, the vole network eigenvector centrality leads to $\mathrm{IPR}\approx .218$. Here, the localization is due to nodes with high {\it weighted} degree that do not have high {\it unweighted} degree, and so are not hubs in the usual sense. See Fig.~\ref{fig:voles} for an illustration. In that network, the top 5\% of nodes in eigenvector centrality rank account for about 87\% of the total centrality. Another metric of localization is the Gini coefficient, frequently used by economists to quantify wealth or income inequality \cite{gastwirth1972estimation}. 
The simplest definition is the following weighted average of centrality differences: \begin{equation} \label{eq:gini} \mbox{Gini coefficient }(c) =\frac{\sum_{i=1}^N\sum_{j=1}^N \left| c_i - c_j \right|}{2 (N-1) \sum_{i=1}^N c_i}. \end{equation} An advantage of the Gini coefficient over the IPR is that the former is constrained between 0 (trivially unlocalized) and 1 (maximally localized) for all networks. We report similar results with both metrics, though the Gini may be easier to interpret. For example, the Gini coefficients for the eigenvector centralities of the circuit and vole networks are approximately .780 and .939, respectively, which indicates significant localization. So far we have only considered the eigenvector centrality, which is the high-reach limit of the communicability centrality. The IPR and Gini coefficient values for all parameter values of the exogenous communicability centrality, as applied to all the considered example networks, are reported in Fig.~\ref{fig:IPR_comm_all_nets}(a) and (b), respectively. The localization almost always increases with increasing reach, and in several cases it reaches values indicating a significant degree of localization. At high reach, the vole network scores higher than the circuit network on both localization measures. The Italian power grid network scores higher than the circuit network on the Gini coefficient. This result is reasonable: the top 5\% of nodes in eigenvector centrality rank account for approximately 44\% of all centrality, indicating the presence of localization. In general, as can be seen in Fig.~\ref{fig:IPR_comm_all_nets}(b), the communicability centrality cannot produce unlocalized results, except in the case of regular networks as discussed in Sec.~\ref{sec:reg}, or in the case of nearly-regular networks such as $\mathcal{B}(L=15)$. The pattern is reversed with the ground-current centrality, which tends to produce unlocalized centrality values. 
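Eq.~(\ref{eq:gini}) is straightforward to evaluate directly; a minimal plain-Python sketch (the toy vectors are invented, chosen to hit the two extremes of the coefficient):

```python
def gini(c):
    """Gini coefficient of a nonnegative centrality vector:
    sum_{i,j} |c_i - c_j| / (2 * (N - 1) * sum_i c_i)."""
    n = len(c)
    total_diff = sum(abs(ci - cj) for ci in c for cj in c)
    return total_diff / (2 * (n - 1) * sum(c))

# Uniform centralities: trivially unlocalized, Gini = 0
assert gini([0.25] * 4) == 0.0
# All centrality on a single node: maximally localized, Gini = 1
assert abs(gini([1.0, 0.0, 0.0, 0.0]) - 1.0) < 1e-12
```

The $(N-1)$ in the denominator is what normalizes the maximally localized case to exactly 1, matching the bounds discussed in the text.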
The IPR and Gini coefficient for the exogenous ground-current centrality are shown in Fig.~\ref{fig:IPR_gcc_all_nets}(a) and (b), respectively. Almost always, the localization values decrease with increasing reach. At very high reach they invariably reach the minimum values ($N^{-1}$ for IPR, 0 for Gini), since the ground-current centrality always produces uniform centrality in the limit of high reach. However, this occurs only at very high reach, meaning that the centrality is unlocalized, but not trivial. In general, the Gini coefficients are between .15 and .50 for much of the parameter range. For comparison, the range of Gini coefficients for income across all nations is .24 to .63, according to the World Bank \cite{worldbank}. The crosses in the figure represent the IPR and Gini values for the nonbacktracking centrality (defined only for unweighted networks), which is presented in \cite{martin2014localization} as a nonlocalizing alternative to the eigenvector centrality. \begin{figure}[h] \includegraphics[scale=.52, trim={0cm 0cm 0cm 0cm},clip]{figures/IPR_Comm_All_Nets2.pdf} \includegraphics[scale=.73, trim={0cm 0cm 0cm 0cm},clip]{figures/Gini_Comm_All_Nets2.pdf} \caption{\label{fig:IPR_comm_all_nets} (a) The $\mathrm{IPR}$ of $\widetilde{c}^\mathrm{COM}$ and (b) the Gini coefficient of $\widetilde{c}^\mathrm{COM}$ for our example networks. The IPR is plotted on a log scale. The network labeled ``circuit'' is the electrical circuit network 838 from the ISCAS 89 benchmark set. The low reach (scaled $\mbox{$\mathsmaller{\Pi}$} \approx 1$) results are equivalent to those of the degree centrality. 
The high-reach (scaled $\mbox{$\mathsmaller{\Pi}$} \approx 0$) results are equivalent to those of the eigenvector centrality.} \end{figure} \begin{figure}[h] \includegraphics[scale=.546, trim={0cm 0cm 0cm 0cm},clip]{figures/IPR_GCC_All_Nets2.pdf} \includegraphics[scale=.704, trim={0cm 0cm 0cm 0cm},clip]{figures/Gini_GCC_All_Nets2.pdf} \caption{\label{fig:IPR_gcc_all_nets} (a) The $\mathrm{IPR}$ of $\widetilde{c}^\mathrm{GCC}$ and (b) the Gini coefficient of $\widetilde{c}^\mathrm{GCC}$ for our example networks. See the caption to Fig.~\ref{fig:IPR_comm_all_nets}. Crosses represent the values of the nonbacktracking centrality (NBC) \cite{martin2014localization}, based on the Hashimoto matrix \cite{hashimoto1989zeta}. } \end{figure} \section{Conclusion: acyclic, conserved-flow centralities} \label{sec:conc} Network centrality measures can be described as more or less appropriate only relative to the specific demands of a given application. Here we have shown that the ground-current centrality is particularly well suited for purposes requiring low localization. However, there may be situations in which it would be desirable to pick out only some important nodes from a network. In this case, high localization would be desirable, and both the ground-current centrality and the nonbacktracking centrality from \cite{martin2014localization} would be inappropriate choices. A key aim of centrality research is to identify the properties that may render a centrality more or less useful in different situations. To aid in this task, we have expanded Borgatti's centrality typology \cite{borgatti2005centrality,borgatti2006graph} to categorize the properties of {\it parametrized} centralities (see Table \ref{tab:classification}). The expanded typology includes our newly introduced reach-parametrized and grasp-parametrized categories. 
From this perspective, the communicability centrality is a reach-parametrized centrality that increases localization with increasing reach (Fig.~\ref{fig:IPR_comm_all_nets}), while the ground-current centrality is a reach-parametrized centrality that decreases localization with increasing reach (Fig.~\ref{fig:IPR_gcc_all_nets}). Along with the reach/grasp distinction, we categorize parametrized centralities as to their Walk Position (radial vs. medial), as well as whether they are based on acyclic and conserved flows. The utility of the ground-current centrality stems from its unique position in this classification system. The ground-current centrality is the only radial reach-parametrized centrality based on acyclic, conserved flows (see Table \ref{tab:classification} and Sec.~\ref{sec:newcent}). As a result, it closely matches intuitive aspects of the harmonic closeness and betweenness centrality orderings, unlike the PageRank, Katz, and communicability centralities. It is noteworthy that the closeness and betweenness are, respectively, the low-grasp limits of the conditional resistance-closeness and the conditional current-betweenness centralities, which are also based on acyclic, conserved flows. This behavior is demonstrated on a variety of networks, including line networks, star networks, regular networks, and networks with bottlenecks, as discussed in Sec.~\ref{sec:adv_reg}. The reason is that, with acyclic, conserved flows, influence cannot get trapped in any part of the network; as the reach is increased, the influence must always flow toward as yet unvisited nodes \footnote{In principle, an acyclic flow could reach a dead end and therefore fail to visit all nodes. This scenario cannot occur with the ground-current centrality, since it is based on electrical currents flowing to ground, which is connected to all nodes.}. We now consider how this manifests on the types of networks listed above. 
In the line network (see Sec.~\ref{sec:lines}), the random walkers of the PageRank centrality ``bounce" off the end nodes, so that walkers on nodes near the periphery are less likely to leave the periphery than walkers near the center are likely to leave the center. This leads to a higher centrality for peripheral nodes. (However, end nodes have the lowest centrality of all, because all walkers on them have no choice but to leave.) This scenario cannot occur with acyclic centralities, because ``bouncing'' off the end node always creates cycles of length 2. For cyclic centralities on the closed Cayley tree (see Sec.~\ref{sec:reg}), influence that originates on the periphery is less likely to arrive at the center node than it is to stay on the periphery. This is because all nodes have the same degree, and so the influence is not biased toward the center. The same reasoning holds for any regular network that has a central location. In acyclic centralities like the ground-current centrality, all sufficiently high-reach (and thus long) paths must pass through the center. Thus, the ground-current centrality provides a sensitive, nonlocal measure of centrality for regular networks (see Fig.~\ref{fig:cayley_results}). We propose that it may also be the appropriate choice for {\it nearly}-regular networks, such as the Manhattan street grid, though further study is needed. The weighted bottleneck network $\mathcal{B}$ (see the first part of Sec.~\ref{sec:bnecks}) behaves similarly to regular networks: cyclic-centrality influence originating in one of the sublattices is likely to stay there, since the nodes there have higher degrees than the bottleneck nodes. In acyclic centralities, all sufficiently long paths must pass through the bottleneck node. This reasoning also holds for real networks with bottlenecks (see the second part of Sec.~\ref{sec:bnecks}), where the ground-current centrality prefers high-betweenness nodes at high reach (low $\mbox{$\mathsmaller{\Pi}$}_C$). 
For example, the high-betweenness nodes in the vole network (black nodes in Fig.~\ref{fig:voles}) do not have very high weighted or unweighted degree. At high reach (low $\mbox{$\mathsmaller{\Pi}$}_T$), the highest communicability centrality (gray nodes) occurs in nodes with high weighted degree, near clusters of high unweighted degree. The influence is trapped in these parts of the network, just as it was in the sublattices of $\mathcal{B}$. In contrast, the acyclic ground-current centrality must pass influence through the high-betweenness nodes when the reach (and thus the path length) is sufficiently high; see Figs.~\ref{fig:unweightedFlorida_comm}-\ref{fig:comm_all_nets}. The cyclic nature of the communicability and eigenvector centralities also contributes to their tendency toward strong localization on some networks (see Sec.~\ref{sec:local}). In \cite{martin2014localization}, the nonbacktracking centrality is used as a less localizing alternative. It is based on the Hashimoto matrix \cite{hashimoto1989zeta}, whose definition prevents influence from traveling in cycles of length 2. The ground-current centrality does not allow influence to travel in cycles of any length, and consequently tends to have even less localization than the nonbacktracking centrality, as seen in Fig.~\ref{fig:IPR_gcc_all_nets}. In addition to being acyclic, the ground-current centrality is based on conserved, rather than duplicating, flows. (Though cyclicity and duplication are generally independent dimensions of centrality type, Table \ref{tab:classification} demonstrates that they coincide for the metrics considered here.) The reliance on duplicating flows leads the communicability (and Katz) rankings to deviate from those of the other centralities in the subdivided star network $\mathcal{S}$ (see Sec.~\ref{sec:sstar}). 
As shown in Fig.~\ref{fig:exp_star}, communicability influence originating on the central node $n_0$ of $\mathcal{S}_{\{1,2,5,10,18,30\}}$ flows primarily to $n_{30}$ at high reach (low $\mbox{$\mathsmaller{\Pi}$}_T$). This is paradoxical because $n_{30}$ is the node at the highest unweighted distance from $n_{0}$. The situation is explained by the pattern of influence duplication within the communicability centrality, defined in Eq.~(\ref{eq:communicability}). There, each factor $\mathbf{A}^l$ corresponds to influence traveling $l$ steps, duplicating at every node in proportion to its weighted degree. Because nodes on the $n_{30}$ spoke have the highest weighted degrees in the network, most of the duplication occurs there. In fact, when the reach (and therefore $l$) is high, $\approx $ 99.4\% of the influence is created along the $n_{30}$ spoke, even though its original source is $n_0$. As a result, $n_{30}$ receives the highest centrality. Thus, the high-degree regions of a network are doubly challenging for the communicability centrality and similar measures. Because of cyclicity, influence tends to get trapped in these areas and, because of duplication, even more influence is created there. These phenomena can lead to very high centrality localization \cite{martin2014localization}. However, these situations do not arise with the acyclic, conserved (nonduplicating) ground-current centrality. In summary, the unique features of the ground-current centrality arise from its position in the classification system of Table~\ref{tab:classification}, which encompasses parametrized measures of two types: reach and grasp. The ground-current centrality is the only acyclic, conserved measure with parametrized reach. Furthermore, the other acyclic, conserved centralities have more complicated descriptions and formulas, since grasp parametrization requires more involved calculations \cite{gurfinkel2020absorbing}. 
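The duplication mechanism described above, with influence multiplying through successive factors $\mathbf{A}^l$, can be made concrete with a small sketch. This is not the paper's exact Eq.~(\ref{eq:communicability}); it is a generic truncated series of communicability type, $\sum_l (\beta^l/l!)\,\mathbf{A}^l\mathbf{1}$, where the attenuation $\beta$ and the truncation length are illustrative choices:

```python
def communicability_scores(A, beta=1.0, max_len=20):
    """Truncated series sum_l (beta^l / l!) * (A^l @ 1): weighted counts
    of walks ending at each node. Influence duplicates at every step,
    since a node forwards a full copy along every incident edge."""
    n = len(A)
    scores = [0.0] * n
    v = [1.0] * n               # walks of length 0 from every node
    coeff = 1.0
    for l in range(max_len + 1):
        for i in range(n):
            scores[i] += coeff * v[i]
        v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # v <- A v
        coeff *= beta / (l + 1)
    return scores

# Path graph 0-1-2: the middle node accumulates the most walks
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
s = communicability_scores(A)
assert s[1] > s[0] and abs(s[0] - s[2]) < 1e-12
```

Each multiplication by $\mathbf{A}$ duplicates the running influence in proportion to node degree, which is exactly why high-degree regions accumulate a disproportionate share of the total, as described in the text.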
Real-world processes on networks usually have limitations on both travel distance (reach) and the number of paths that can be traveled (grasp). An appropriate choice of $\mbox{$\mathsmaller{\Pi}$}$ is required to apply parametrized centralities to study such processes. We are currently developing methods to quantify the levels of reach and grasp across different centrality measures.
\subsection{IR-Based Code Search}\label{back_ir} Traditional code search models (e.g., CodeHow \cite{lv2015codehow}) mainly depend on IR techniques. Generally, the IR-based model builds an index for
terms (i.e., each word token) in code methods, so that the keywords generated from the search query can quickly match the expected code methods by checking the index. Afterward, the model recommends the top-n relevant code methods according to a designed keyword-term matching degree. During indexing, the method name and body are commonly regarded as two different components of a method \cite{lv2015codehow}. This is because the method name (e.g., "readTextLineByLine()") is often defined in natural language, whose semantic representation is close to the query (e.g., "how to read a text line by line"), while the method body implements the goal of the method name in a programming language (e.g., "new BufferedReader()" and "br.close()"). Therefore, when matching keywords against a code method, a weighted sum of the matching degrees on the two method components is utilized \cite{lv2015codehow}. Due to the success of existing search engines (e.g., Elasticsearch \cite{gormley2015elasticsearch}), code search researchers can easily build an IR-based model using the APIs provided by a search engine, and can focus more on query understanding to address the semantic gap between query and code method. Namely, the IR-based models devote more effort to generating correct keywords and calculating the keyword-term matching degree. As illustrated in Table \ref{tab_example}, two code methods may contain many keywords related to the search query, but how to identify the relevant code and exclude the irrelevant one is still a challenging issue. CodeHow \cite{lv2015codehow} is an IR-based model. It leverages an Extended Boolean model \cite{salton1983extended} to measure the degree to which query keywords match a method name and body, respectively, and sorts code methods based on a weighted sum of the matching degrees for the method name and body. 
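The weighted-sum matching scheme just described can be sketched in a few lines. This is only a toy illustration, not CodeHow's actual Extended Boolean scoring: the camel-case tokenizer, the component weights (0.7 for the name, 0.3 for the body), and the matching degree (fraction of query keywords found among the indexed terms) are all invented for the example:

```python
import re

def tokenize(text):
    """Split camelCase identifiers and punctuation into lowercase terms."""
    parts = re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+', text)
    return [p.lower() for p in parts]

def match_degree(keywords, terms):
    """Fraction of query keywords that appear among the indexed terms."""
    terms = set(terms)
    return sum(1 for k in keywords if k in terms) / max(len(keywords), 1)

def score(query, name, body, w_name=0.7, w_body=0.3):
    """Weighted sum of the matching degrees on the two method components."""
    keywords = tokenize(query)
    return (w_name * match_degree(keywords, tokenize(name))
            + w_body * match_degree(keywords, tokenize(body)))

query = "how to read a text line by line"
relevant = score(query, "readTextLineByLine",
                 "new BufferedReader(); br.readLine(); br.close();")
irrelevant = score(query, "writeBytes", "out.write(data); out.flush();")
assert relevant > irrelevant
```

Weighting the method name more heavily than the body mirrors the observation above that names, being near-natural-language, align more closely with queries than bodies do.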
\bl{To spot representative methods among the searched candidate methods, some models cluster candidates based on code patterns \cite{keivanloo2014spotting,mishne2012typestate,xie2006mapo}. For example, Keivanloo et al. \cite{keivanloo2014spotting} represented a method by an encoded pattern in terms of a set of code entities (e.g., method blocks). After retrieving a list of candidate methods for a query, they transformed the methods into vectors within a latent space where methods with similar code patterns can be easily clustered. Afterward, they reranked representative methods in the clusters based on their features (e.g., code conciseness and completeness).} \subsection{DL-Based Code Search}\label{back_deepcs} Gu et al. \cite{gu2018deep} indicated that existing IR-based models (e.g., CodeHow) do not work well because they have difficulties in query understanding. They cannot effectively address irrelevant/noisy keywords in the queries, so the IR-based models cannot find code methods that are highly related to a query \cite{gu2018deep}. To overcome this challenge, Gu et al. \cite{gu2018deep} proposed a model, DeepCS, that leverages the DL technique to better understand query semantics and correctly search related code in a large-scale codebase. Generally, to map the semantics of a search query and code methods, DeepCS first leverages the DL technique to vectorize queries and methods, and then learns their semantic relationships directly. In this way, the methods relevant to a query can be easily identified by calculating cosine similarities. Specifically, DeepCS \cite{gu2018deep} encoded each pair of query and method into vectors over a fixed-size vocabulary, and input the vectorized query-method pairs into different long short-term memory networks (LSTMs) \cite{hochreiter1997long}. The LSTM model aims to capture the sequential relationship between the words in a query or method. 
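The retrieval step of such joint-embedding models, ranking candidate methods by the cosine similarity between the query vector and each method vector, can be sketched as follows (the embedding vectors are made-up stand-ins for LSTM outputs, not DeepCS's actual representations):

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def rank_methods(query_vec, method_vecs):
    """Sort method identifiers by cosine similarity to the query embedding."""
    return sorted(method_vecs,
                  key=lambda mid: cosine(query_vec, method_vecs[mid]),
                  reverse=True)

# Made-up 3-dimensional embeddings standing in for LSTM outputs
query = [0.9, 0.1, 0.0]
methods = {"readFile": [0.8, 0.2, 0.1], "sortList": [0.1, 0.9, 0.3]}
assert rank_methods(query, methods) == ["readFile", "sortList"]
```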
The parameters of the networks are optimized by a ranking loss function, which rewards related query-method pairs and penalizes unrelated ones. As ground-truth query-method pairs are not easy to obtain, DL-based models used the first line of a method's comment to represent the corresponding query. Similarly, an unrelated query for a method is a comment randomly selected from other commented methods. More details on DeepCS can be found in \cite{gu2018deep}. \subsection{Qualitative Analysis}\label{qualitative_analysis} This subsection performs a qualitative analysis of CodeMatcher, DeepCS, CodeHow\bl{, and UNIF}. To compare these models, we classified all their returned methods into seven categories\bl{, namely whether the query can match the semantics of the method name/body}. Table \ref{fig_classify} lists the definitions and real query-code examples for each category. \bl{To be specific, the categories identify whether the method name or body of a searched code can reflect the semantics of the query;} whether a code's method body is useless, i.e., an abstract method or a getter/setter; and whether a method is a duplication of a previously inspected one in the top-10 code list. Table \ref{tb_classify} lists the classification results for the different models. \begin{table} \centering \small \caption{Definitions of seven categories on search results and their query-code examples.} \begin{tabular}{|l|} \toprule \textbf{1. MM: Matched method name and Matched method body.}\\ \bl{Query: create arraylist from array}\\ public void \rd{createArrayListFromArray}()\{\\ \quad{}\rd{String[] dogs = \{"Puppy", "Julie", "Tommy"\};}\\ \quad{}\rd{List<String> doglist = Arrays.asList(dogs);}\\ \quad{}assertEqual(3, dogsList.size());\\ \}\\ \midrule \textbf{2. 
NM: Not-matched method name but matched method body.}\\ \bl{Query: how to declare an array}\\ public void arrayCardinality(ParameterRegistration parameterRegistration)\{\\ \quad{}\rd{Integer[] array = new Integer[]\{1, 2, 3\};}\\ \quad{}int arrayCardinality = procedures(parameterRegistration).arrayCardinality(array);\\ \quad{}assertEquals(array.length, arrayCardinality);\\ \}\\ \midrule \textbf{3. MN: Matched method name but Not-matched method body.}\\ \bl{Query: converting string to int in java}\\ static int \rd{convert}Status\rd{StringToInt}(String statusVal)\{\\ \quad{}if (statusVal.equalsIgnoreCase(STATUS\_REGRESSION) ||\\ \quad{}\quad{}statusVal.equalsIgnoreCase(STATUS\_FAILED))\{\\ \quad{}\quad{}return -1;\\ \quad{}\} else if (statusVal.equalsIgnoreCase(STATUS\_PASSED))\{\\ \quad{}\quad{}return 1;\\ \quad{}\}\\ \quad{}return 0;\\ \}\\ \midrule \textbf{4. MU: Matched method name but Useless method body.}\\ \bl{Query: how can I initialise a static map}\\ private void \rd{initialiseMap}(GoogleMap googleMap)\{\\ \quad{}mMap = googleMap;\\ \}\\ \midrule \textbf{5. NU: Not-matched method name and Useless method body.}\\ \bl{Query: converting iso 8601-compliant string to date}\\ public static String convertDate2String(Date date)\{\\ \quad{}return convertDate2String(date);\\ \}\\ \midrule \textbf{6. NN: Not-matched method name and Not-matched method body.}\\ \bl{Query: how to read a large text file line by line using java}\\ private String textLine(String name, long version, String value)\{\\ \quad{}return String.format("name: \%s version: \%d value: \%s", name, version, value);\\ \}\\ \midrule \textbf{7. 
RM: Repeated Method.}\\ \bl{Query: convert an inputstream to a string}\\ public static String \rd{convertInputStreamToString}(InputStream inputStream)\{...\}\\ private String \rd{convertInputStreamToString}(InputStream inputStream)\{...\}\\ \bottomrule \end{tabular} \label{fig_classify} \end{table} \begin{table}[] \small \caption{Classification of \bl{1740} code search results into 7 categories for different models.} \begin{tabular}{|c|l|rr|rrrrr|} \toprule \textbf{Queries} & \textbf{Model} & \textbf{MM} & \textbf{NM} & \textbf{MN} & \textbf{MU} & \textbf{NU} & \textbf{NN} & \textbf{RM} \\ \midrule \multirow{4}{*}{$Queries_{50}$} & DeepCS & 93 (18.6\%) & 12 (2.4\%) & 25 (5.0\%) & 53 (10.6\%) & 97 (19.4\%) & 218 (43.6\%) & 2 (0.4\%) \\ & CodeHow & 34 (6.8\%) & 73 (14.6\%) & 2 (0.4\%) & 1 (0.2\%) & 4 (0.8\%) & 331 (66.2\%) & 55 (11.0\%) \\ & UNIF & 60 (12.0\%) & 23 (4.6\%) & 0 (0.0\%) & 7 (1.4\%) & 18 (3.6\%) & 392 (78.4\%) & 0 (0.0\%) \\ & CodeMatcher & 285 (57.0\%) & 0 (0.0\%) & 32 (6.4\%) & 27 (5.4\%) & 28 (5.6\%) & 125 (25.0\%) & 3 (0.6\%) \\ \midrule \multirow{4}{*}{$Queries_{99}$} & DeepCS & 59 (6.0\%) & 48 (4.8\%) & 0 (0.0\%) & 3 (0.3\%) & 69 (7.0\%) & 811 (81.9\%) & 0 (0.0\%) \\ & CodeHow & 31 (3.1\%) & 155 (15.7\%) & 0 (0.0\%) & 0 (0.0\%) & 3 (0.3\%) & 781 (78.9\%) & 20 (2.0\%) \\ & UNIF & 118 (11.9\%) & 38 (3.8\%) & 5 (0.5\%) & 4 (0.4\%) & 84 (8.5\%) & 741 (74.8\%) & 0 (0.0\%) \\ & CodeMatcher & 367 (37.1\%) & 0 (0.0\%) & 52 (5.3\%) & 89 (8.9\%) & 0 (0.0\%) & 473 (47.8\%) & 9 (0.9\%) \\ \midrule \multirow{4}{*}{$Queries_{25}$} & DeepCS & 6 (2.4\%) & 8 (3.2\%) & 0 (0.0\%) & 1 (0.4\%) & 15 (6.0\%) & 220 (88.0\%) & 0 (0.0\%) \\ & CodeHow & 6 (2.4\%) & 29 (11.6\%) & 0 (0.0\%) & 0 (0.0\%) & 0 (0.0\%) & 213 (85.2\%) & 2 (0.8\%) \\ & UNIF & 28 (11.2\%) & 10 (4.0\%) & 2 (0.8\%) & 0 (0.0\%) & 15 (6.0\%) & 195 (78.0\%) & 0 (0.0\%) \\ & CodeMatcher & 47 (18.8\%) & 1 (0.4\%) & 8 (3.2\%) & 3 (1.2\%) & 0 (0.0\%) & 191 (76.4\%) & 0 (0.0\%) \\ \midrule 
\multirow{4}{*}{$Queries_{all}$} & DeepCS & 158 (9.1\%) & 68 (3.9\%) & 25 (1.4\%) & 57 (3.3\%) & 181 (10.4\%) & 1249 (71.8\%) & 2 (0.1\%) \\ & CodeHow & 71 (4.1\%) & 257 (14.8\%) & 2 (0.1\%) & 1 (0.1\%) & 7 (0.4\%) & 1325 (76.1\%) & 77 (4.4\%) \\ & UNIF & 206 (11.8\%) & 71 (4.1\%) & 7 (0.4\%) & 11 (0.6\%) & 117 (6.7\%) & 1328 (76.3\%) & 0 (0.0\%) \\ & CodeMatcher & 699 (40.2\%) & 1 (0.1\%) & 92 (5.3\%) & 119 (6.8\%) & 28 (1.6\%) & 789 (45.3\%) & 12 (0.7\%) \\ \bottomrule \end{tabular} \label{tb_classify} \end{table} \subsubsection{The Reasons Why DeepCS \bl{and UNIF} Succeeded and Failed} From Table \ref{tb_classify}, we observed that DeepCS\bl{/UNIF} obtained a \bl{13\%/15.9\%} success rate (MM and NM), where the \bl{9.1\%/11.8\%} success in MM was due to a correct semantic match between query and method, as in Table \ref{fig_classify}(1); the \bl{3.9\%/4.1\%} success in NM implies that DeepCS\bl{/UNIF} can somewhat capture the semantics in code (i.e., API sequence and tokens) even though the method name does not relate to the goal of the query, as in Table \ref{fig_classify}(2). However, there are \bl{87\%/84.1\%} failed results, where \bl{0.1\%/0.0\%} of the failures were caused by returning repeated methods (RM). The source code provided by Gu et al. \cite{gu2018deep} excluded the methods whose cosine similarity differences with related queries are larger than 0.01. But we observed that this judgment could not clear out repeated methods with some negligible difference, e.g., a modifier difference, for \bl{the DL-based models}, as shown in Table \ref{fig_classify}(7). Meanwhile, the \bl{1.4\%/0.4\%} of failures (MN) were caused by an unmatched method body, because two methods with different usages may have the same method name, as exemplified in Table \ref{fig_classify}(3). 
Moreover, for the \bl{13.7\%/7.3\%} of failures (MU and NU), we found that DeepCS\bl{/UNIF} returned some useless methods that can be a getter/setter for a class, or that contain abstract APIs with insufficient context to understand, such as the examples in Table \ref{fig_classify}(4-5). Such useless methods do not satisfy the requirement of method-level code search, since developers then need to search for and jump to related code; the manual code jumps increase developers' code inspection time, and it is also uncertain how many jumps they need. Thus, self-contained source code is advantageous for method-level code search. In addition, for the largest part \bl{(71.8\%/76.3\%)} of failures (NN), DeepCS\bl{/UNIF} completely mismatched the code to the queries, as illustrated in Table \ref{fig_classify}(6). We attribute these failures to insufficient model training, because (1) DeepCS\bl{/UNIF} was optimized with pairs of method and Javadoc comment, but 500 epochs of training with randomly selected pairs cannot guarantee sufficiency; (2) DeepCS\bl{/UNIF} assumed that the first line of the Javadoc comment could well describe the goal of the related code, but it is uncertain whether the used line is a satisfactory label or noise; (3) during the model training, the optimization is never stopped based on the convergence of its loss function values. \OB{1}{\bl{The DL-based models (DeepCS and UNIF)} can capture the semantics between queries and code methods. But their unsatisfactory performance is likely derived from the gap between the training data and our codebase.} \subsubsection{CodeMatcher vs. DeepCS\bl{/UNIF}} For the proposed model CodeMatcher, which matches keywords in a query with a method, Table \ref{tb_classify} shows that \bl{40.2\%} of code searches succeeded due to a well-matched method name and body. However, there is no success from NM because wrongly combining keywords in the query only leads to unmatched code, i.e., MN \bl{(5.3\%)} and NN \bl{(45.3\%)}. 
This is because CodeMatcher cannot handle complex queries via the embedding technique, as DeepCS\bl{/UNIF} do. The \bl{8.4\%} of failures (MU and NU) on useless methods indicate that boosting the useful methods to a higher rank in terms of the percentage of JDK APIs is not the optimal solution; directly removing those useless methods may be a better substitute. \bl{Moreover, we can observe that as CodeMatcher searched code methods based on how the tokens in methods matched the important query words sequentially, this sequential requirement would substantially exclude many duplicated methods for a query. After filtering out the redundant methods by simply comparing their MD5 hash values, CodeMatcher returned only a limited number of repeated methods (RM=12).} \bl{Compared with DeepCS/UNIF,} CodeMatcher still returned more repeated methods. Thus, simply filtering redundant methods by their MD5 hash values, as described in Section \ref{method}, is not enough. A better choice would be comparing the API usages, data structures, and working flow in the method body. Compared with DeepCS\bl{/UNIF}, the main advantage of CodeMatcher is the correct keyword matching between query and code, i.e., a high percentage of MM. However, CodeMatcher \bl{can hardly} handle partial matching only on the code body (i.e., \bl{NM=1}). But this is what DL-based models are good at (NM=\bl{68} for DeepCS \bl{and NM=71 for UNIF}) because they can capture high-level intent between query and method by joint embedding\bl{, although the NM counts are much smaller than the MM counts for DeepCS/UNIF (MM=158 for DeepCS while MM=206 for UNIF).} Therefore, \bl{the advantages of DeepCS/UNIF} can compensate for CodeMatcher's disadvantages. \OB{2}{CodeMatcher cannot address complex queries like the \bl{DL-based models (DeepCS and UNIF)} can. Meanwhile, CodeMatcher can accurately find code methods for simple queries and avoids the out-of-vocabulary issue in DeepCS \bl{and UNIF}.} \subsubsection{CodeMatcher vs. 
CodeHow} By analyzing the classification of the search results in Table \ref{tb_classify}, we can observe that CodeHow is good at matching a query with the method body (\bl{14.8\%} for NM), because it extends the query with related official APIs so that method bodies can be well filtered in terms of their internal APIs. However, the main reason for its failures is unmatched keywords (\bl{76.1\%} for NN), since CodeHow ignores the importance of programming words and their sequence. Another main reason is that it does not exclude repeated methods (\bl{4.4\%} for RM). Nevertheless, we can observe that CodeHow can compensate for the disadvantage of CodeMatcher, i.e., the difficulty in matching a query with the method body. \OB{3}{The IR-based models CodeMatcher and CodeHow complement each other, as they are good at matching a query with the method name and the method body, respectively.} \subsection{\bl{Conciseness and Completeness}}\label{concise_complete} \bl{Keivanloo et al. \cite{keivanloo2014spotting} indicated that, for a searched code snippet, although correctness (i.e., whether it is relevant to the search query) is important, developers also care about two other features. One feature is conciseness, the ratio of irrelevant lines to total lines; a lower conciseness value indicates that the code contains fewer irrelevant lines. The other feature is completeness, the number of addressed tasks divided by the total number of tasks, where the tasks include the intent of the search query and other required statements (being well-typed, variable initialization, control flow, and exception handling) \cite{keivanloo2014spotting}. Besides, Keivanloo et al. \cite{keivanloo2014spotting} indicated that code readability (whether the variable names are well-chosen) is also important.
However, readability is not easy to measure, and conciseness is the key component of readability (namely, the searched code should use ``as little code as possible'' and show ``the most basic version of the problem'') \cite{buse2012synthesizing}. This is also the reason why Keivanloo et al. \cite{keivanloo2014spotting} only measured the conciseness and completeness of code.} \bl{Figs. \ref{fig_concise}-\ref{fig_complete} show these two features for all the code searched by CodeMatcher and the baseline models (DeepCS, CodeHow, and UNIF). Fig. \ref{fig_concise} shows that the average conciseness of DeepCS, CodeHow, and UNIF is 0.88, 0.83, and 0.85, respectively. The proposed model CodeMatcher achieves a value of 0.61, outperforming the baselines by 30.7\%, 26.5\%, and 28.2\%, respectively. After performing the Wilcoxon signed-rank test \cite{wilcoxon1945individual} between the results of CodeMatcher and each baseline model, the statistical results indicate that the advantages of CodeMatcher over the baselines are significant (p-value $<$ 0.05). Moreover, Fig. \ref{fig_complete} shows that CodeMatcher also significantly outperforms the baseline models in terms of completeness, with an average value of 0.33.} \bl{To investigate why CodeMatcher shows substantially better conciseness and completeness than the baselines, we excluded the searched results irrelevant to the 174 queries for each model. As the remaining searched results differ between CodeMatcher and the baselines, we applied the two-sample Kolmogorov-Smirnov test \cite{massey1951kolmogorov} at a 5\% significance level to estimate the statistical difference. Figs. \ref{fig_concise_short}-\ref{fig_complete_short} show that CodeMatcher and the baselines have no significant difference in conciseness and completeness for the relevant code. These results indicate that the number of returned code snippets relevant to the query strongly affects the overall conciseness and completeness.
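For concreteness, the two quality metrics above amount to simple ratios per searched method; the line and task counts below are hypothetical examples, and the significance testing itself (Wilcoxon signed-rank and Kolmogorov-Smirnov) is performed with standard statistical packages as cited above.

```python
def conciseness(irrelevant_lines, total_lines):
    # ratio of irrelevant lines to total lines (lower is better)
    return irrelevant_lines / total_lines

def completeness(addressed_tasks, total_tasks):
    # addressed tasks (the query intent plus required statements such as
    # variable initialization, control flow, and exception handling)
    # divided by the total number of tasks (higher is better)
    return addressed_tasks / total_tasks

# hypothetical judgments for three searched methods:
# (irrelevant lines, total lines, addressed tasks, total tasks)
results = [(2, 10, 3, 4), (5, 8, 1, 4), (0, 6, 4, 4)]

avg_conciseness = sum(conciseness(i, t) for i, t, _, _ in results) / len(results)
avg_completeness = sum(completeness(a, n) for _, _, a, n in results) / len(results)
print(round(avg_conciseness, 3), round(avg_completeness, 3))  # -> 0.275 0.667
```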
However, these results also imply that none of the studied code search models considers conciseness and completeness in its design. Therefore, we suggest that future studies make efforts to improve the quality of the searched code.} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{concise.eps} \caption{\bl{The conciseness of CodeMatcher and the baseline models (DeepCS, CodeHow, and UNIF) for all the searched results; '*' indicates a significant difference (p-value $<$ 0.05) between a model and CodeMatcher, tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual}.}} \label{fig_concise} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{complete.eps} \caption{\bl{The completeness of CodeMatcher and the baseline models (DeepCS, CodeHow, and UNIF) for all the searched results; '*' indicates a significant difference (p-value $<$ 0.05) between a model and CodeMatcher, tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual}.}} \label{fig_complete} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{concise-short.eps} \caption{\bl{The conciseness of CodeMatcher and the baseline models (DeepCS, CodeHow, and UNIF) for all the correctly searched results; no baseline shows a significant difference (p-value $>$ 0.05) from CodeMatcher, tested by the two-sample Kolmogorov-Smirnov test \cite{massey1951kolmogorov} at a 5\% significance level.}} \label{fig_concise_short} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{complete-short.eps} \caption{\bl{The completeness of CodeMatcher and the baseline models (DeepCS, CodeHow, and UNIF) for all the correctly searched results; no baseline shows a significant difference (p-value $>$ 0.05) from CodeMatcher, tested by the two-sample Kolmogorov-Smirnov test \cite{massey1951kolmogorov} at a 5\% significance level.}} \label{fig_complete_short} \end{figure} \subsection{Why DeepCS \bl{and UNIF} Did
Not Work Well on Our New Dataset?}\label{why} One may notice that the performances of DeepCS \bl{and UNIF} are lower than the ones reported by Gu et al. \cite{gu2018deep} \bl{and Cambronero et al. \cite{cambronero2019deep}, respectively}. To verify the validity of our re-run DeepCS \bl{and re-implemented UNIF}, we tested our trained DeepCS \bl{and UNIF} not on our new testing data but on the original testing data shared by Gu et al. \cite{gu2018deep}; we refer to the results of this experiment as DeepCS$_{old}$ and UNIF$_{old}$. Table \ref{tab_why} shows that the performance of DeepCS$_{old}$ is close to the one reported by Gu et al. \cite{gu2018deep} in terms of MRR (0.59 vs. 0.6)\bl{; the MRR of UNIF$_{old}$ is also close to the one reported by Cambronero et al. \cite{cambronero2019deep} (0.54 vs. 0.58)}. Thus, these results confirm the validity of the re-run DeepCS model \bl{and our re-implemented UNIF model}. Moreover, unlike \bl{Gu et al.'s} testing data, our testing data shares no overlap with \bl{their} training data. Therefore, the reduced performance on our testing data implies that this overlap affects the generalizability of DeepCS \bl{and UNIF}. Ideally, the performance of DeepCS \bl{and UNIF} can be improved by tuning with more training data. To reach this goal, we built the models DeepCS$_{tune}$ and UNIF$_{tune}$, tuned with more training data extracted from our codebase. In total, we collected 1,283,445 commented code methods as training data instances, following the training data extraction process described in \cite{gu2018deep}. In the model optimization step, we tuned the pre-trained DeepCS \bl{and UNIF} with the new training data under default settings, i.e., 500 epochs. Table \ref{tab_why} shows that DeepCS$_{tune}$ achieves a better performance with MRR \bl{0.38 (an improvement of 15.2\% over DeepCS), while the MRR of UNIF$_{tune}$ is 0.46 (an improvement of 12.2\% over UNIF)}.
The experimental results imply that DeepCS \bl{and UNIF} did not work well due to the gap between \bl{Gu et al.'s} training data and our testing data. Tuning DeepCS \bl{and UNIF} with data generated from the testing data can mitigate this generalizability issue. However, their tuned performance (MRR = 0.38 \bl{for DeepCS and MRR = 0.46 for UNIF}) is still far from the result of CodeMatcher (MRR = \bl{0.60}). We observed that this is likely attributable to the limited training data and the fixed vocabulary size. As shown in Table \ref{codebase}, our new testing data contains about 16 million methods in total, but only 21.91\% of them have Javadoc comments. Moreover, only 7.73\% of the total methods can be used to tune DeepCS\bl{/UNIF} due to the limited vocabulary size. Therefore, DeepCS\bl{/UNIF} would have difficulty in comprehending the semantics of methods and their relationship to the related Javadoc comments. To further analyze the impact of the vocabulary, we investigate how well it covers the words in the new testing data. Table \ref{tab:vocab} shows that more than 95\% of the unique words in method components (API, name, and token) are not covered. In terms of word frequency, 42.95\%, 6.22\%, and 12.75\% of the word occurrences in method APIs, names, and tokens, respectively, are new words. Due to this out-of-vocabulary issue, more than half (52.91\%) of the methods in the testing data contain new words. Therefore, the trained DeepCS\bl{/UNIF} would have difficulty in understanding the semantics of methods in the new testing data.
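The vocabulary-coverage analysis above amounts to a counting pass over the tokenized methods; a minimal sketch follows, where the vocabulary and the method corpus are toy stand-ins rather than DeepCS'/UNIF's actual data.

```python
def oov_stats(vocabulary, methods):
    """Count out-of-vocabulary (OOV) words in a corpus of tokenized methods."""
    vocab = set(vocabulary)
    new_words = set()          # unique words missing from the vocabulary
    total = new = affected = 0
    for tokens in methods:
        has_new = False
        for tok in tokens:
            total += 1
            if tok not in vocab:
                new += 1
                new_words.add(tok)
                has_new = True
        affected += has_new
    return {
        "unique_new_words": len(new_words),
        "new_word_ratio": new / total,                      # cf. 42.95% for API words
        "methods_with_new_words": affected / len(methods),  # cf. 52.91% of methods
    }

# toy example: "md5" and "rerank" are outside the model's vocabulary
vocab = {"convert", "string", "int", "read", "file"}
methods = [["convert", "string", "int"],
           ["read", "file", "md5"],
           ["rerank", "md5"]]
print(oov_stats(vocab, methods))
```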
\begin{table}[] \caption{Performance comparison of DeepCS \bl{and UNIF} in different settings \bl{(old: testing the model on DeepCS' original testing data \cite{gu2018deep}; tune: tuning the model with data collected from our testing data and testing it on our testing data)}, where model performance is measured by SuccessRate@1/5/10 (SR@1/5/10), Precision@1/5/10 (P@1/5/10), and MRR.} \begin{tabular}{|c|l|ccc|ccc|c|} \toprule \textbf{Query} & \textbf{Model} & \textbf{SR@1} & \textbf{SR@5} & \textbf{SR@10} & \textbf{P@1} & \textbf{P@5} & \textbf{P@10} & \textbf{MRR} \\ \midrule \multirow{2}{*}{$Queries_{50}$} & $DeepCS_{old}$ & 0.44 & 0.72 & 0.82 & 0.44 & 0.42 & 0.41 & 0.59 \\ & $UNIF_{old}$ & 0.38 & 0.72 & 0.80 & 0.38 & 0.39 & 0.33 & 0.54 \\ \midrule \multirow{2}{*}{$Queries_{50}$} & $DeepCS_{tune}$ & 0.30 & 0.62 & 0.72 & 0.30 & 0.31 & 0.30 & 0.44 \\ & $UNIF_{tune}$ & 0.26 & 0.56 & 0.64 & 0.26 & 0.26 & 0.19 & 0.41 \\ \midrule \multirow{2}{*}{$Queries_{99}$} & $DeepCS_{tune}$ & 0.24 & 0.42 & 0.47 & 0.24 & 0.20 & 0.15 & 0.36 \\ & $UNIF_{tune}$ & 0.33 & 0.64 & 0.71 & 0.33 & 0.32 & 0.26 & 0.49 \\ \midrule \multirow{2}{*}{$Queries_{25}$} & $DeepCS_{tune}$ & 0.16 & 0.36 & 0.40 & 0.16 & 0.18 & 0.10 & 0.30 \\ & $UNIF_{tune}$ & 0.36 & 0.52 & 0.52 & 0.36 & 0.22 & 0.17 & 0.48 \\ \midrule \multirow{2}{*}{$Queries_{all}$} & $DeepCS_{tune}$ & 0.24 & 0.47 & 0.53 & 0.25 & 0.23 & 0.19 & 0.38 \\ & $UNIF_{tune}$ & 0.32 & 0.60 & 0.66 & 0.32 & 0.29 & 0.23 & 0.46 \\ \bottomrule \end{tabular} \label{tab_why} \end{table} \begin{table} \centering \caption{Statistics of words in our new testing data covered by DeepCS' \bl{and UNIF's} vocabulary, where \bl{DeepCS represents code by API, name, and token, while UNIF represents code by name and token}.} \begin{tabular}{|l|rrr|} \toprule \textbf{Item} & \textbf{Count} & \textbf{New} & \textbf{Percentage} \\ \midrule Unique Words in API & 2,406,547 & 2,397,969 & 99.64\%\\ Unique Words in Name & 270,084 & 261,113 & 96.68\%\\ Unique Words in Token & 1,610,147
& 1,602,435 & 99.52\%\\ Words in API & 64,754,427 & 27,811,106 & 42.95\%\\ Words in Name & 44,194,936 & 2,749,420 & 6.22\%\\ Words in Token & 177,572,258 & 22,633,682 & 12.75\%\\ Method & 16,611,025 & 8,788,967 & 52.91\%\\ \bottomrule \end{tabular} \label{tab:vocab} \end{table} \subsection{\bl{Why is the Word Sequence Important?}} \bl{To capture the sequential relationship between the important words in a query, the proposed model CodeMatcher considers it in the fuzzy search component and the reranking strategies. However, one major question is whether such word sequences occur frequently. To investigate this question, we analyzed the studied 174 queries. We found that for 79 queries, the order of the words in the query plays an important role -- if the words are reordered, the meaning changes. To analyze the root causes, we classified these 79 queries into four cases: \textit{1) the order of multiple tasks,} the query ``read from file A and write to file B'' is different from ``write from file A and read to file B'' because exchanging the phrases ``read from'' and ``write to'' changes the intent of the two tasks; \textit{2) data conversion,} ``convert int to string'' and ``convert string to int'' work differently; \textit{3) conditional job,} ``sort map by values'' contradicts the semantics of ``sort values by map''; \textit{4) the exchanged core word,} ``read property file'' and ``read file property'' would be implemented in two different ways because ``property file'' and ``file property'' are two completely different objects.
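The effect of word order can be illustrated with a minimal ordered-matching check; this is a deliberately simplified stand-in for CodeMatcher's sequential fuzzy matching, not its actual implementation.

```python
import re

def matches_in_order(query_words, method_name):
    """Return True if the query words occur in the method name in the
    same order, possibly separated by other characters."""
    pattern = ".*".join(re.escape(w) for w in query_words)
    return re.search(pattern, method_name, re.IGNORECASE) is not None

# the data-conversion case: "convert int to string" vs. "convert string to int"
print(matches_in_order(["convert", "int", "string"], "convertIntToString"))  # True
print(matches_in_order(["convert", "int", "string"], "convertStringToInt"))  # False
```

Because the words must match in sequence, a method named for the opposite conversion direction is rejected even though it contains exactly the same words.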
The statistics in Table \ref{table_word_sequence} show that, among these four cases, data conversion is the most frequent one.} \begin{table}[] \caption{\bl{Classification of cases with sequentially important query words.}} \begin{tabular}{|c|l|l|c|r|} \toprule \textbf{No.}&\textbf{Case} & \textbf{Description} & \textbf{Count} & \textbf{Percentage} \\ \midrule 1&The order of multiple tasks & do A and B & 9 & 11.4\% \\ 2&Data conversion & do (from) A to B & 38 & 48.1\% \\ 3&Conditional job & do A with/via/in/over/based/by/when/if B & 17 & 21.5\% \\ 4&The exchanged core word & do A B & 15 & 19.0\% \\ -&\textbf{Total} & - & 79 & 100.0\% \\ \bottomrule \end{tabular} \label{table_word_sequence} \end{table} \subsection{Research Questions}\label{rqs} To verify the validity of the proposed model, this study investigates the following research questions (RQs): \RQ{1}{Can CodeMatcher outperform \bl{the baseline} models?} The proposed model CodeMatcher aims to leverage IR techniques to simplify the complexity of DeepCS while retaining its advantages, namely its capability of addressing irrelevant/noisy keywords in queries and recognizing semantically related words between queries and code methods, as described by Gu et al. \cite{gu2018deep}. To verify the validity of the proposed model, this RQ investigates whether CodeMatcher outperforms the relevant models, \bl{including two DL-based models, DeepCS \cite{gu2018deep} and UNIF \cite{cambronero2019deep}, and an IR-based model, CodeHow \cite{lv2015codehow}.} \RQ{2}{Is CodeMatcher faster than \bl{the baseline} models?} We observed that training and testing the DL-based model DeepCS is time-consuming due to its high complexity. Therefore, we intend to analyze whether the simplified model CodeMatcher works substantially faster than DeepCS. Besides, we also compare the time efficiency of CodeMatcher with that of \bl{UNIF \cite{cambronero2019deep} and CodeHow \cite{lv2015codehow}}.
\RQ{3}{\bl{How do the CodeMatcher components contribute to the code search performance?}} CodeMatcher consists of three important components: \bl{the fuzzy search component retrieves an initial set of candidate relevant code from the indexed codebase, the reranking component sorts the candidate list based on the designed strategies, and the query understanding component collects metadata for the previous two components.} These components largely determine the performance of the code search. Thus, this RQ aims to analyze how much each component contributes to the model performance. The results can also help assess the necessity of these components. \RQ{4}{Can CodeMatcher outperform existing online code search engines?} GitHub search\footnote{https://github.com/search} and Google search\footnote{https://google.com} are two commonly used search engines for developers to find code in practice. To measure the performance of GitHub/Google search, both search engines are tested with the same queries as CodeMatcher and are set to search only GitHub repositories; for Google search, we use the following advanced settings: ``site:github.com'' and ``filetype:java''. \subsection{Dataset}\label{data} \vspace{5pt}\noindent\textbf{Codebase.} Originally, Gu et al. \cite{gu2018deep} collected 9,950 Java projects that have at least 20 stars on GitHub as the testing data of DeepCS. However, we cannot use their testing data to verify our IR-based model CodeMatcher, because Gu et al. \cite{gu2018deep} provided only preprocessed data that can be used by DeepCS alone. Therefore, we built a new and larger-scale testing dataset for model evaluation. We crawled 41,025 Java repositories from GitHub that were created from Jul. 2016 to Dec. 2018 and have more than five stars. The star-count filter ($>$5 stars), which differs from Gu et al.'s \cite{gu2018deep} setting (i.e., at least 20 stars), is used so that the testing data includes more Java projects. Besides, the time duration (Jul. 2016 to Dec.
2018) of our new testing data ensures no overlap with DeepCS' training data (created from Aug. 2008 to Jun. 2016). Table \ref{codebase} shows that the new codebase contains \textasciitilde{}17 million methods, and 21.91\% of them have Javadoc comments that describe the corresponding methods. \begin{table} \centering \caption{Statistics of the constructed codebase.} \begin{tabular}{|cccc|} \toprule \textbf{$\#$Project} & \textbf{$\#$Method} & \textbf{$\#$Javadoc} & \textbf{\#LOC} \\ \midrule 41,025 & 16,611,025 & 3,639,794 & 70,332,245\\ \bottomrule \end{tabular} \label{codebase} \end{table} \vspace{5pt}\noindent\textbf{Queries.} To simulate a real-world code search scenario, we validated the code search models \bl{with three query sets totaling 174 queries}, as listed in \bl{Tables \ref{queries50}-\ref{queries25}}: \begin{itemize} \item \textit{$Queries_{50}$.} \bl{The first query set was} manually collected by Gu et al. \cite{gu2018deep} from Stack Overflow in a systematic way. These queries are the top-50 voted Java programming questions\footnote{https://stackoverflow.com/questions/tagged/java?sort=votes} selected following three criteria: \textit{(1) concrete,} a question should be a specific programming task, such as ``How can I concatenate two arrays in Java?''; \textit{(2) well-answered,} the accepted answers corresponding to the question should contain at least one code snippet; \textit{(3) non-duplicated,} the question is not a duplicate of another question in the same collection.\vspace{5pt} \item \textit{$Queries_{99}$.} \bl{The second query set was collected from the CodeSearchNet challenge\footnote{https://github.com/github/CodeSearchNet} that was built by Husain et al. \cite{husain2019codesearchnet}.
The 99 contained queries\footnote{https://github.com/github/CodeSearchNet/blob/master/resources/queries.csv} are common search queries from Bing with clearly technical keywords that have high clickthrough rates to code \cite{husain2019codesearchnet}.}\vspace{5pt} \item \textit{$Queries_{25}$.} \bl{The third query set contains 25 queries based on the studies of Mishne et al. \cite{mishne2012typestate} and Keivanloo et al. \cite{keivanloo2014spotting}. Note that these two studies used API names as code search input, whereas our study focuses on queries written in natural language. Therefore, we used the natural language descriptions provided in these two studies \cite{mishne2012typestate,keivanloo2014spotting} as our model inputs.} \end{itemize} \begin{table}[b] \scriptsize \caption{\bl{The Queries$_{50}$ collected from Gu et al. \cite{gu2018deep}} and the Evaluation Results (NF: Not Found within the top 10 returned results, DCS: DeepCS \cite{gu2018deep}, CM: CodeMatcher, CH: CodeHow \cite{lv2015codehow}, UNIF \cite{cambronero2019deep}.)} \begin{tabular}{|c|l|cccc|} \toprule \textbf{No.} & \textbf{Query} & \textbf{DCS} & \textbf{CM} & \textbf{CH} & \textbf{UNIF} \\ \midrule 1 & convert an inputstream to a string & 2 & 1 & 2 & 2 \\ 2 & create arraylist from array & 8 & 1 & 1 & 4 \\ 3 & iterate through a hashmap & 1 & 1 & 1 & NF \\ 4 & generating random integers in a specific range & 1 & 1 & 5 & 4 \\ 5 & converting string to int in java & 8 & 1 & 2 & 1 \\ 6 & initialization of an array in one line & 1 & 1 & 1 & 2 \\ 7 & how can I test if an array contains a certain value & NF & 1 & 1 & 9 \\ 8 & lookup enum by string value & NF & 1 & 1 & NF \\ 9 & breaking out of nested loops in java & NF & 3 & NF & NF \\ 10 & how to declare an array & 1 & 1 & 1 & NF \\ 11 & how to generate a random alpha-numeric string & 1 & 1 & NF & 5 \\ 12 & what is the simplest way to print a java array & 1 & 1 & NF & 5 \\ 13 & sort a map by values & 5 & 1 & 1 & 1 \\ 14 & fastest way to determine
if an integer’s square root is an integer & NF & NF & NF & NF \\ 15 & how can I concatenate two arrays in java & 9 & 1 & NF & 3 \\ 16 & how do I create a java string from the contents of a file & 3 & NF & 5 & 2 \\ 17 & how can I convert a stack trace to a string & 2 & 1 & 1 & 2 \\ 18 & how do I compare strings in java & NF & 1 & 1 & NF \\ 19 & how to split a string in java & 1 & 1 & 10 & 9 \\ 20 & how to create a file and write to a file in java & NF & 3 & 4 & 8 \\ 21 & how can I initialise a static map & 2 & 2 & 1 & 4 \\ 22 & iterating through a collection, avoiding concurrent modification exception when removing in loop & 5 & NF & 10 & 2 \\ 23 & how can I generate an md5 hash & 4 & 1 & 1 & 1 \\ 24 & get current stack trace in java & 1 & 1 & 1 & 1 \\ 25 & sort arraylist of custom objects by property & 2 & NF & 7 & 8 \\ 26 & how to round a number to n decimal places in java & 1 & 1 & 3 & 2 \\ 27 & how can I pad an integers with zeros on the left & 8 & NF & 10 & 2 \\ 28 & how to create a generic array in java & 3 & 1 & NF & 2 \\ 29 & reading a plain text file in java & 4 & 3 & 3 & 1 \\ 30 & a for loop to iterate over enum in java & NF & 1 & NF & NF \\ 31 & check if at least two out of three booleans are true & NF & NF & NF & NF \\ 32 & how do I convert from int to string & 10 & 1 & 4 & 10 \\ 33 & how to convert a char to a string in java & 6 & 1 & NF & 1 \\ 34 & how do I check if a file exists in java & NF & 1 & 3 & 1 \\ 35 & java string to date conversion & NF & 1 & 1 & NF \\ 36 & convert inputstream to byte array in java & 1 & 1 & 4 & 1 \\ 37 & how to check if a string is numeric in java & 2 & 1 & NF & 1 \\ 38 & how do I copy an object in java & NF & 5 & 7 & NF \\ 39 & how do I time a method's execution in java & NF & 1 & NF & 4 \\ 40 & how to read a large text file line by line using java & 8 & NF & NF & 1 \\ 41 & how to make a new list in java & 4 & 1 & 1 & NF \\ 42 & how to append text to an existing file in java & 1 & NF & NF & NF \\ 43 & converting iso 
8601-compliant string to date & 9 & NF & 5 & 2 \\ 44 & what is the best way to filter a java collection & NF & 2 & NF & NF \\ 45 & removing whitespace from strings in java & NF & 1 & NF & 1 \\ 46 & how do I split a string with any whitespace chars as delimiters & NF & NF & 1 & 2 \\ 47 & in java, what is the best way to determine the size of an objects & NF & 1 & NF & NF \\ 48 & how do I invoke a java method when given the method name as a string & NF & NF & NF & NF \\ 49 & how do I get a platform dependent new line character & NF & NF & NF & NF \\ 50 & how to convert a map to list in java & 7 & 1 & NF & 4 \\ \bottomrule \end{tabular} \label{queries50} \end{table} \begin{table}[] \scriptsize \caption{\bl{The Queries$_{99}$ collected from Husain et al. \cite{husain2019codesearchnet}} and the Evaluation Results (NF: Not Found within the top 10 returned results, DCS: DeepCS \cite{gu2018deep}, CM: CodeMatcher, CH: CodeHow \cite{lv2015codehow}, UNIF \cite{cambronero2019deep}.)} \begin{tabular}{|c|L{80pt}|cccc||c|L{120pt}|cccc|} \toprule \textbf{No.} & \textbf{Query} & \textbf{DCS} & \textbf{CM} & \textbf{CH} & \textbf{UNIF} & \textbf{No.} & \textbf{Query} & \textbf{DCS} & \textbf{CM} & \textbf{CH} & \textbf{UNIF} \\ \midrule 1 & convert int to string & 2 & 1 & 6 & 2 & 51 & how to randomly pick a number & 6 & 5 & NF & 1 \\ 2 & priority queue & NF & NF & NF & 1 & 52 & normal distribution & NF & 1 & NF & NF \\ 3 & string to date & 9 & 1 & 1 & 6 & 53 & nelder mead optimize & NF & NF & NF & NF \\ 4 & sort string list & 1 & 1 & NF & 7 & 54 & hash set for counting distinct elements & 7 & NF & NF & NF \\ 5 & save list to file & 2 & 1 & NF & NF & 55 & how to get database table name & NF & 2 & NF & NF \\ 6 & postgresql connection & NF & 1 & 3 & NF & 56 & deserialize json & 1 & 1 & 1 & 1 \\ 7 & confusion matrix & NF & 6 & NF & 2 & 57 & find int in string & 5 & 1 & NF & 1 \\ 8 & set working directory & NF & 1 & NF & 1 & 58 & get current process id & 6 & 1 & NF & 1 \\ 9 & group by 
count & NF & 2 & 2 & NF & 59 & regex case insensitive & NF & 2 & NF & 10 \\ 10 & binomial distribution & NF & 7 & NF & 1 & 60 & custom http error response & 7 & 1 & 2 & NF \\ 11 & aes encryption & 5 & 1 & 3 & 7 & 61 & how to determine a string is a valid word & NF & NF & NF & NF \\ 12 & linear regression & NF & 2 & 4 & 1 & 62 & html entities replace & NF & NF & 1 & 1 \\ 13 & socket recv timeout & NF & 1 & 5 & 1 & 63 & set file attrib hidden & NF & NF & NF & NF \\ 14 & write csv & 1 & 2 & 6 & 1 & 64 & sorting multiple arrays based on another arrays sorted order & 5 & NF & 7 & NF \\ 15 & convert decimal to hex & 1 & 1 & NF & 1 & 65 & string similarity levenshtein & NF & 2 & 6 & 1 \\ 16 & export to excel & NF & 1 & 3 & 4 & 66 & how to get html of website & NF & 5 & 3 & NF \\ 17 & scatter plot & NF & 1 & 4 & NF & 67 & buffered file reader read text & 1 & 1 & 1 & NF \\ 18 & convert json to csv & NF & NF & 4 & NF & 68 & encrypt aes ctr mode & NF & NF & 1 & NF \\ 19 & pretty print json & 1 & 1 & NF & 1 & 69 & matrix multiply & NF & 1 & 1 & 1 \\ 20 & replace in file & 1 & 1 & NF & 2 & 70 & print model summary & NF & NF & NF & NF \\ 21 & k means clustering & NF & 3 & 1 & 3 & 71 & unique elements & 1 & 1 & NF & 1 \\ 22 & connect to sql & 1 & 1 & 1 & 1 & 72 & extract data from html content & 4 & NF & 2 & 1 \\ 23 & html encode string & 1 & 2 & 4 & 1 & 73 & heatmap from 3d coordinates & NF & NF & 3 & NF \\ 24 & finding time elapsed using a timer & NF & 1 & NF & 5 & 74 & get all parents of xml node & NF & 3 & 1 & NF \\ 25 & parse binary file to custom class & NF & NF & NF & NF & 75 & how to extract zip file recursively & NF & 1 & 7 & 4 \\ 26 & get current ip address & 2 & 1 & NF & 1 & 76 & underline text in label widget & NF & NF & NF & NF \\ 27 & convert int to bool & 2 & 1 & NF & 8 & 77 & unzipping large files & NF & 1 & 2 & NF \\ 28 & read text file line by line & 5 & NF & 1 & 1 & 78 & copying a file to a path & NF & 1 & 1 & NF \\ 29 & get executable path & 1 & 8 & 1 & 1 & 79 
& get the description of a http status code & 5 & 1 & 4 & 1 \\ 30 & httpclient post json & 2 & 3 & 1 & NF & 80 & randomly extract x items from a list & 1 & NF & NF & NF \\ 31 & get inner html & 4 & 3 & 2 & 2 & 81 & convert a date string into yyyymmdd & 1 & 1 & 4 & 2 \\ 32 & convert string to number & 8 & 1 & 1 & 2 & 82 & convert a utc time to epoch & NF & NF & NF & NF \\ 33 & format date & 1 & 1 & 1 & 1 & 83 & all permutations of a list & NF & 1 & NF & 1 \\ 34 & readonly array & NF & NF & NF & NF & 84 & extract latitude and longitude from given input & 5 & NF & NF & 2 \\ 35 & filter array & 2 & 1 & 1 & 1 & 85 & how to check if a checkbox is checked & NF & 2 & NF & 2 \\ 36 & map to json & 2 & 3 & 2 & 3 & 86 & converting uint8 array to image & 1 & NF & NF & NF \\ 37 & parse json file & NF & 1 & NF & 2 & 87 & memoize to disk - persistent memoization & NF & NF & NF & NF \\ 38 & get current observable value & NF & NF & NF & NF & 88 & parse command line argument & 1 & 1 & 3 & 1 \\ 39 & get name of enumerated value & NF & NF & NF & 1 & 89 & how to read the contents of a .gz compressed file? 
& NF & NF & 1 & NF \\ 40 & encode url & 1 & 1 & 4 & 3 & 90 & sending binary data over a serial connection & NF & NF & NF & NF \\ 41 & create cookie & 1 & 1 & NF & NF & 91 & extracting data from a text file & 6 & NF & 9 & 1 \\ 42 & how to empty array & NF & NF & NF & NF & 92 & positions of substrings in string & 1 & 1 & 1 & 6 \\ 43 & how to get current date & NF & 2 & 1 & NF & 93 & reading element from html - \textless{}td\textgreater{} & NF & 1 & NF & NF \\ 44 & how to make the checkbox checked & NF & 1 & 2 & NF & 94 & deducting the median from each column & NF & NF & 1 & NF \\ 45 & initializing array & 2 & 1 & 10 & 4 & 95 & concatenate several file remove header lines & NF & 1 & NF & NF \\ 46 & how to reverse a string & NF & 1 & 1 & 5 & 96 & parse query string in url & 2 & NF & 9 & 2 \\ 47 & read properties file & 1 & 1 & NF & 1 & 97 & fuzzy match ranking & NF & NF & NF & NF \\ 48 & copy to clipboard & 1 & 1 & 1 & 3 & 98 & output to html file & NF & NF & 2 & NF \\ 49 & convert html to pdf & NF & 1 & 2 & NF & 99 & how to read .csv file in an efficient way & NF & NF & NF & NF \\ 50 & json to xml conversion & NF & 1 & NF & NF & \multicolumn{1}{l|}{} & & \multicolumn{4}{l|}{} \\ \bottomrule \end{tabular} \label{queries99} \end{table} \begin{table} \scriptsize \caption{\bl{The Queries$_{25}$ collected from Mishne et al. \cite{mishne2012typestate} and Keivanloo et al. 
\cite{keivanloo2014spotting}}, and the Evaluation Results (NF: Not Found within the top 10 returned results, DCS: DeepCS \cite{gu2018deep}, CM: CodeMatcher, CH: CodeHow \cite{lv2015codehow}, UNIF \cite{cambronero2019deep}.)} \begin{tabular}{|c|l|cccc|} \toprule \textbf{No.} & \textbf{Query} & \textbf{DCS} & \textbf{CM} & \textbf{CH} & \textbf{UNIF} \\ \midrule 1 & upload a file & 1 & 3 & 5 & 1 \\ 2 & parse a command-line and get values from it & NF & 1 & 1 & NF \\ 3 & prepare the executable and argument of a command-line & NF & 2 & NF & NF \\ 4 & create a path element and append it to existing and boot paths & NF & NF & NF & NF \\ 5 & run a query and iterate over the results & NF & NF & NF & NF \\ 6 & commit and then rollback a transaction & NF & 2 & 1 & 1 \\ 7 & get the key of an array element type & 1 & NF & 6 & NF \\ 8 & get the description and nature IDs of a Java project & NF & NF & NF & NF \\ 9 & create a new action & 2 & 1 & 1 & 6 \\ 10 & get the input for the current editor & NF & 2 & NF & NF \\ 11 & retrieve arguments from command line & NF & 1 & 1 & 6 \\ 12 & check user selection & 6 & 1 & NF & 1 \\ 13 & set up a ScrollingGraphicalViewer & NF & NF & NF & NF \\ 14 & create a project & 1 & 2 & NF & 3 \\ 15 & successfully login and logout & NF & 1 & NF & NF \\ 16 & click an Element & 1 & 1 & 1 & 2 \\ 17 & commit and rollback a statement & NF & NF & NF & 3 \\ 18 & send a HTTP request via URLConnection & 6 & 1 & 2 & 1 \\ 19 & redirect Runtime exec() output with System & NF & NF & NF & NF \\ 20 & get OS Level information such as memory & NF & NF & NF & 1 \\ 21 & SSH Connection & NF & 5 & NF & 3 \\ 22 & download and save a file from network & NF & 1 & NF & 1 \\ 23 & generate a string-based MD5 hash value & NF & NF & 6 & 2 \\ 24 & read the content of a HttpResponse object line by line & NF & NF & NF & NF \\ 25 & search via Lucene and manipulate the hits & NF & NF & 1 & NF \\ \bottomrule \end{tabular} \label{queries25} \end{table} \subsection{Baseline Models and 
Replication Package}\label{baselines} In the experiment, \bl{three} models are selected as our baselines, including Gu et al.'s \cite{gu2018deep} DeepCS, Lv et al.'s \cite{lv2015codehow} CodeHow\bl{, and Cambronero et al.'s UNIF \cite{cambronero2019deep}}. To test the baseline models on our codebase, we first preprocessed the Java code files within all projects with our tool JAnalyzer\footnote{https://github.com/liuchaoss/janalyzer}. It parses the abstract syntax tree of each Java code file by leveraging the Javaparser\footnote{https://github.com/javaparser/javaparser} library, and extracts all necessary method components as the inputs of the baselines, such as the method name, comment, Javadoc, APIs in the method body, etc. Note that we do not test these models on DeepCS' original testing data but on our new data, because DeepCS provides no raw data (i.e., no source code) but only preprocessed data that can be used by DeepCS alone. To mitigate the replication difficulty of our study, we provide a replication package\footnote{\textbf{Replication Package: https://bitbucket.org/ChaoLiuCQ/codematcher}} that shares our codebase and the source code of CodeMatcher and the baseline models. The compared baseline models are described as follows. \vspace{5pt}\noindent\textbf{DeepCS,} the DL-based model proposed by Gu et al. \cite{gu2018deep}: We trained DeepCS by re-running the source code\footnote{https://github.com/guxd/deep-code-search} and pre-processed training data\footnote{https://pan.baidu.com/s/1U$\_$MtFXqq0C-Qh8WUFAWGvg} provided by the authors. To test DeepCS on our codebase, we first performed some natural language processing (e.g., stemming) following the description in Gu et al.'s paper \cite{gu2018deep}, then encoded the data using DeepCS' vocabulary, and saved the encoded data in the required format using DeepCS' internal APIs. \vspace{5pt}\noindent\textbf{CodeHow,} the IR-based model developed by Lv et al.
\cite{lv2015codehow}: It expands a query with words in related official APIs and matches it with code methods. As Lv et al. \cite{lv2015codehow} provide no source code and data for replication, we re-implemented CodeHow strictly following their paper. Our implementation can be found in our shared replication package. Note that as CodeHow was designed for searching C\# projects while our targets are Java repositories, we used the JDK (Java Development Kit) as the source of official APIs. \vspace{5pt}\noindent\textbf{\bl{UNIF,}} \bl{the DL-based model proposed by Cambronero et al. \cite{cambronero2019deep}: Similar to DeepCS, UNIF transforms code and queries into vectors and is trained on pairs of code and natural language descriptions. However, the main differences are that UNIF represents code by the method name and a bag of tokens, which is a subset of DeepCS' input, and that UNIF uses the pre-trained fastText \cite{bojanowski2017enriching} as its embedding layer. Details can be found in \cite{cambronero2019deep}. As the authors provided no source code, we re-implemented it ourselves strictly following the description in the original study; our implementation is also provided in our replication package.} \subsection{Evaluation Criteria}\label{criteria} To measure the effectiveness of code search models, we need to identify the relevancy of a returned code method to a query. Following Gu et al. \cite{gu2018deep}, the relevancy is manually identified by two independent developers, and the disagreements are resolved by open discussions. During the relevance identification, developers only consider the top-10 returned code methods. Based on the identified relevancy, we measure model performance using four widely used evaluation metrics following Gu et al. \cite{gu2018deep}, including FRank, SuccessRate@k, Precision@k, and Mean Reciprocal Rank (MRR). Note that FRank is defined only for one query while the other metrics are defined on all queries.
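These four metrics can be sketched in Python as follows. This is an illustrative sketch, not the evaluation scripts used in the study: the helper names and data layout are ours, and a query's FRank is represented as None when no relevant result appears in the top 10.

```python
def success_rate_at_k(franks, k):
    """Fraction of queries whose first relevant result is within the top-k."""
    return sum(1 for f in franks if f is not None and f <= k) / len(franks)

def precision_at_k(relevance, k):
    """Average fraction of relevant results in each query's top-k list."""
    return sum(sum(rel[:k]) / k for rel in relevance) / len(relevance)

def mrr(franks):
    """Mean reciprocal rank; queries with no relevant result contribute 0."""
    return sum(1 / f for f in franks if f is not None) / len(franks)

# franks[q] = FRank of query q (None if nothing relevant in the top 10);
# relevance[q][i] = 1 if the (i+1)-th result of query q is relevant.
franks = [1, 3, None]
relevance = [[1, 0, 1, 0, 0], [0, 0, 1, 1, 0], [0, 0, 0, 0, 0]]
print(success_rate_at_k(franks, 5))  # 2 of 3 queries succeed
print(precision_at_k(relevance, 5))  # (0.4 + 0.4 + 0.0) / 3
print(mrr(franks))                   # (1 + 1/3) / 3
```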
\vspace{5pt}\noindent\textbf{FRank} is the rank of the first correct result in the result list \cite{gu2018deep}. It measures users' inspection effort for finding a relevant method when scanning the candidate list from top to bottom. A smaller FRank value implies less effort and better effectiveness of a code search tool for a particular query. \vspace{5pt}\noindent\textbf{SuccessRate@k}, the percentage of queries for which at least one correct result exists in the top-k ranked results \cite{gu2018deep}. Specifically, $SuccessRate@k=Q^{-1}\sum_{q=1}^{Q}\delta(FRank_{q}\leq{k})$, where $Q$ is the total number of tested queries; $\delta(\cdot)$ is an indicator function that returns 1 if the input is true and 0 otherwise. A higher SuccessRate@k means better code search performance, as users can find the desired method by inspecting a shorter returned list. \vspace{5pt}\noindent\textbf{Precision@k} is the average percentage of relevant results in the top-k returned method list over all queries. It is calculated by $Precision@k=Q^{-1}\sum_{q=1}^{Q}r_{q}/k$, where $Q$ is the total number of queries; $r_{q}$ is the number of related results for a query $q$ \cite{gu2018deep}. Precision@k is useful and important because users often check many returned results when learning different code usages \cite{raghothaman2016swim}. A larger Precision@k indicates that a code search model returns less noisy results. \vspace{5pt}\noindent\textbf{MRR}, the average of the reciprocal ranks over all queries, where the reciprocal rank of a query is the inverse of the rank of the first relevant result ($FRank$) \cite{gu2018deep}. Thus, the formula is $MRR=Q^{-1}\sum_{q=1}^{Q}FRank_{q}^{-1}$. A larger MRR value means a higher ranking of the first relevant method. \subsection{The Importance of Model Simplification} \subsection{Pros and Cons of DL-based Model and IR-Based Model}\label{pros} The DL-based model has three major advantages over the IR-based model. The first is language processing ability.
By leveraging the embedding technique, it can better address complex queries and tolerate errors to some extent, according to Section \ref{disucss}. The second is the bilingual learning ability. With the joint embedding framework between a query written in natural language and code implemented in a programming language, their mapping relationship can be well learned, as described in Section \ref{rq1}. Finally, the DL-based model may require less upfront cost than the IR-based model, because the former requires much less domain knowledge and feature engineering. However, \bl{the DL-based models (DeepCS and UNIF) have} a limitation on a new and dynamic codebase, because \bl{the studied models} suffer from overfitting and out-of-vocabulary issues, as discussed in Section \ref{why}. These problems do not occur for the IR-based models (CodeHow and CodeMatcher). Meanwhile, running the IR-based model is substantially more efficient than the DL-based model, because the IR-based model requires no model training and can speed up the code search process using a framework like Elasticsearch, as shown in Section \ref{codematcher_evaluation}. To sum up, the two kinds of studied code search models complement each other; therefore, it is suggested to balance their pros and cons and fuse them in the future. \SG{1}{Combine the advantages of IR-based and DL-based models.} \subsection{The Importance of Method Name} Section \ref{why_well} indicates that CodeMatcher works well mainly because it can precisely match the \bl{semantics between query and the method name}, where CodeMatcher assigns higher importance to programming words (e.g., Inputstream or String) and considers their sequence in the query. Besides, Section \ref{disucss} also shows that if a method precisely matches \bl{the semantics} in a query, such a method is likely to contain the expected implementation in the method body.
This is because the method name is very similar to the query: \textit{(1) written in natural language.} There is \bl{a smaller} semantic gap between query and method name; \textit{(2) short in text.} They usually use the same keywords in order; \textit{(3) specific to code implementation.} Their semantic relationship to the code implementation is usually the same and straightforward. Therefore, a code search engine should assign higher weights to method names, whether for the DL-based or the IR-based model. Moreover, although CodeMatcher is capable of handling synonyms in a query by using WordNet, as described in Section \ref{method}, it has three limitations: \textit{(1) abbreviation.} It cannot match the word 'initialize' in a query to a method named with 'ini'; \textit{(2) acronym.} A method named with 'RMSD' would be missed for a query with the keyword ``root mean square deviation''; \textit{(3) low quality of method naming.} The method name is not a correct abstraction of its code implementation. Meanwhile, other code search models (e.g., DeepCS and CodeHow) also do not consider these situations. One solution to these limitations is to increase the scale of the codebase, because the increased search space may include methods whose names are similar to search queries (e.g., using abbreviations or acronyms). However, to solve these challenges, perhaps the best way is to require developers to strictly follow a method naming standard from the beginning. For example, a developer follows the Google Java style guide\footnote{https://google.github.io/styleguide/javaguide.html\#s5-naming} and writes method names as verb phrases with commonly used words, instead of self-defined synonyms, acronyms, and abbreviations. As to the existing large-scale source code in GitHub, perhaps a better solution is to format method names in a standard and unified way.
\SG{2}{The method name has a significant role in code search; improving the quality of developers' method names helps code search.} \section{Introduction}\label{intro} \input{introduction.tex} \section{Background}\label{back} \input{background.tex} \section{CodeMatcher}\label{method} \input{method.tex} \section{Experiment Setup}\label{exp} \input{experiment.tex} \section{Results}\label{result} \input{results.tex} \section{Discussion}\label{disucss} \input{discussion.tex} \section{Implication}\label{implication} \input{implication.tex} \section{Threats to Validity and Model Limitation}\label{threat} \input{threat.tex} \section{Related Work}\label{related} \input{related.tex} \section{Conclusion}\label{conclude} \input{conclusion.tex} \bibliographystyle{ACM-Reference-Format} \subsection{Motivation.} Different from the existing IR-based models, the DL-based model DeepCS can: (1) understand irrelevant/noisy keywords via the word embedding technique, including synonyms and words that have never appeared in the code methods; (2) capture sequential relationships between words in queries and code methods via the LSTM model; and (3) map query intent to the semantics of code methods by measuring their semantic similarity \cite{gu2018deep}. However, due to the high complexity of DeepCS, it takes hours to train the model, and the code search itself is also time-consuming for practical usage. Therefore, it is necessary to find a way to reduce the model complexity while retaining the beneficial features of the DL-based model. In this study, we simplify the DL-based model DeepCS into an IR-based model called CodeMatcher that maintains the advantages of DeepCS. Generally, we remove noisy keywords according to some collected metadata; replace irrelevant keywords with appropriate synonyms extracted from the codebase; and design a code reranking approach that measures the semantic similarity between query and code, which also considers the sequential relationship between words in the query/code.
The following subsection presents the implementation details of CodeMatcher. \subsection{Implementation}\label{implementation} The proposed model CodeMatcher performs code search in two phases following existing IR-based models, namely code indexing and code search, as described in Section \ref{back_ir}. Fig. \ref{fig_workflow} illustrates the overall framework of CodeMatcher. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{workflow.eps} \caption{Overall framework of CodeMatcher.} \label{fig_workflow} \end{figure} \vspace{5pt}\noindent\textbf{Phase-I: Codebase Preprocessing and Indexing.} The first phase leverages Elasticsearch \cite{gormley2015elasticsearch}, a Lucene-based text search engine, for code indexing. Similar to existing IR-based models, we represent a code method as two components (i.e., method name and method body) and create an index for the method by calling the APIs provided by Elasticsearch. To extract method components from source code files, we developed a tool, JAnalyzer\footnote{https://github.com/liuchaoss/janalyzer}, which transforms a method into an abstract syntax tree (AST) with the Javaparser\footnote{https://github.com/javaparser/javaparser} \cite{hosseini2013javaparser} library and extracts the method name (e.g., "convertInputStreamToString()" in Table \ref{tab_example}) and body by traversing the AST. \bl{As JAnalyzer depends on the Javaparser library, it can only support the Java language rather than other programming languages.} Similar to the IR-based model CodeHow \cite{lv2015codehow}, the method body is parsed as a sequence of fully qualified tokens (e.g., java.lang.String or System.io.File.readlines()) in method parameters, method body, and returned parameter. \vspace{5pt}\noindent\textbf{Phase-II: Code Search.} CodeMatcher takes three steps to perform code search on the indexed code methods for a query.
The first step extracts some metadata for query understanding and addresses noisy and irrelevant query words. The second step quickly finds a pool of highly related code methods from the codebase according to the collected metadata for query words. The last step reranks the methods found in the second step according to the matching degrees between the query and the candidate methods. \underline{\textit{Step-1: Query understanding.}} To address noisy words in a query, we removed the words that are not commonly used for programming. Specifically, we first identified each word's property (e.g., verb or noun) by using the Stanford Parser\footnote{https://nlp.stanford.edu/software/lex-parser.shtml} \cite{chen2014fast,zhang2017mining}. Then, we filtered out question words and related auxiliary verbs (e.g., 'how do'), and excluded verb-object/adpositional phrases on the programming language (e.g., 'in Java'), as they only identify the programming language, while our study focuses on Java projects as Gu et al. \cite{gu2018deep} did. Afterward, we removed the words that are not verbs, nouns, adjectives, adverbs, prepositions, or conjunctions, since the removed words (e.g., 'a', 'the', and non-English symbols) are rarely used for coding \cite{arnold2000java}. Moreover, to identify the irrelevant words in a query, we counted the frequency of each word occurring in the method names of the codebase. We replaced the irrelevant words (i.e., the words that do not appear in the codebase) with their synonyms generated by WordNet \cite{miller1998wordnet}. If WordNet generates multiple synonyms for a word, we chose the synonym with the highest frequency in the codebase. Subsequently, we stemmed\footnote{https://pythonprogramming.net/stemming-nltk-tutorial/} the remaining query words to improve their generalizability in code search.
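Step-1 can be sketched as follows. This is a simplified illustration under our own assumptions: the POS tags, the frequency table, and the synonym lists stand in for the outputs of the Stanford Parser, the codebase statistics, and WordNet, and stemming is omitted.

```python
# POS classes that are kept; everything else (articles, symbols, ...) is dropped.
KEPT_POS = {"verb", "noun", "adjective", "adverb", "preposition", "conjunction"}

def understand_query(tagged_words, freq, synonyms):
    """tagged_words: (word, pos) pairs; freq: word -> count in method names;
    synonyms: word -> synonym candidates (e.g., from WordNet)."""
    kept = []
    for word, pos in tagged_words:
        if pos not in KEPT_POS:            # drop noisy, non-programming words
            continue
        if freq.get(word, 0) == 0:         # irrelevant word: try its synonyms
            candidates = [s for s in synonyms.get(word, ())
                          if freq.get(s, 0) > 0]
            if candidates:                 # pick the most frequent synonym
                word = max(candidates, key=lambda s: freq[s])
        kept.append(word)
    return kept

# 'transform' never appears in method names, so it becomes 'convert'.
tagged = [("transform", "verb"), ("an", "article"), ("inputstream", "noun"),
          ("to", "preposition"), ("a", "article"), ("string", "noun")]
freq = {"convert": 39292, "inputstream": 3442, "to": 22, "string": 52369}
synonyms = {"transform": ["convert", "change"]}
print(understand_query(tagged, freq, synonyms))
# ['convert', 'inputstream', 'to', 'string']
```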
Moreover, in addition to the word property (e.g., verb or noun) and frequency, we identified a third metadata item named 'importance' for query words before performing code search. The word importance refers to how important a word is for programming. It can help CodeMatcher identify related important keywords between query and code methods. The importance is categorized into five levels, as shown in Table \ref{tb_level}. For example, the most important one (i.e., level 5) is the JDK noun, e.g., ``InputStream'' or ``readLine()''. These metadata help with query understanding and with forming keywords for code search. \bl{Verbs and nouns are important (level 4) as they can represent most of the semantics in code methods. This is because method/API names are typically made of verbs or verb phrases (such as ``stop'' or ``sendMessage'') while variable names are often written as nouns (e.g., ``student'' or ``processor''), as described in the Google Java Style Guide\footnote{https://google.github.io/styleguide/javaguide.html}. We assigned JDK nouns higher importance (level 5), because they are unique (namely, they cannot be replaced by synonyms) and important for representing the semantics in code methods. For example, the JDK nouns ``Inputstream'' and ``String'' are important for code with the semantics ``convert Inputstream to String''. Adjectives and adverbs are assigned lower importance (level 3) because they cannot indicate a precise meaning without their corresponding noun/verb. Next, we assigned prepositions/conjunctions importance level 2.
Finally, we assigned the other (often meaningless) symbols the lowest importance level (level 1).} \begin{table} \centering \caption{Five word importance levels for programming based on the word property (e.g., verb or noun) and whether the word is a class name in JDK.} \begin{tabular}{|cll|} \toprule \textbf{Level} & \textbf{Condition} & \textbf{Examples} \\ \midrule 5 & JDK Noun & ``Inputstream'' and ``readLine()''\\ 4 & Verb or Non-JDK Noun & ``convert'' and ``whitespace''\\ 3 & Adjective or Adverb & ``numeric'' and ``decimal''\\ 2 & Preposition or Conjunction & ``from'' and ``or''\\ 1 & Other & Number and Non-English Symbols\\ \bottomrule \end{tabular} \label{tb_level} \end{table} \underline{\textit{Step-2: Iterative fuzzy search.}} To map the semantics of the preprocessed query to the indexed code methods, we utilized the query words filtered in Step-1 to generate keywords for code search. Here we only match the keywords against indexed method names to quickly narrow the search space. To perform code search, we launched an iterative fuzzy match on indexed method names with a regular expression\footnote{https://docs.python.org/3/library/re.html}. The expression is formed by all remaining query words in order as "$.*word_{1}.*\cdots.*word_{n}.*$". We used the regular match instead of the term match to capture the sequential relationship between words, as DeepCS \cite{gu2018deep} does. For example, ``convert int to string'' should be different from ``convert string to int''. Afterward, if the total number of returned methods is no more than ten, we removed the least important word with the lowest frequency one at a time according to the metadata, and performed the fuzzy search again until no query word is left. In each search round, we filtered out repeated methods by comparing the MD5 hash\footnote{https://docs.python.org/2/library/hashlib.html} values of their source code.
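The iterative loop of Step-2 can be sketched as below. This is a sketch under our own assumptions: the Step-1 metadata is given as (importance, frequency) pairs, the index is a plain list of (name, source) pairs instead of Elasticsearch, and the threshold of ten results and the MD5 de-duplication follow the description above.

```python
import hashlib
import re

def iterative_fuzzy_search(words, meta, method_index, min_hits=10):
    """words: ordered query words; meta: word -> (importance, frequency);
    method_index: (method_name, source_code) pairs."""
    words = list(words)
    results, seen = [], set()
    while words:
        # ".*word1.*word2.*...": an ordered fuzzy match on method names
        pattern = re.compile(".*" + ".*".join(map(re.escape, words)) + ".*",
                             re.IGNORECASE)
        for name, source in method_index:
            digest = hashlib.md5(source.encode()).hexdigest()
            if pattern.fullmatch(name) and digest not in seen:
                seen.add(digest)           # skip duplicated source code
                results.append((name, source))
        if len(results) > min_hits:
            break
        # drop the least important (then least frequent) remaining word
        words.remove(min(words, key=lambda w: meta[w]))
    return results

meta = {"convert": (4, 39292), "inputstream": (5, 3442),
        "to": (2, 22), "string": (5, 52369)}
index = [("convertInputStreamToString", "src-1"),
         ("convertInputStream2String", "src-2")]
hits = iterative_fuzzy_search(["convert", "inputstream", "to", "string"],
                              meta, index)
print([name for name, _ in hits])
# ['convertInputStreamToString', 'convertInputStream2String']
```

With this tie-break, the word 'to' (importance 2) is dropped first, then 'convert' (importance 4), then 'inputstream' (lower frequency than 'string'), matching the order of the candidate regular expressions in the worked example below in the text.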
\underline{\textit{Step-3: Reranking.}} To refine the method rankings returned from Step-2, we designed a metric ($S_{name}$) to measure the matching degree between query and method name, as in Eq. (\ref{eq_score}). A larger value of $S_{name}$ indicates a higher-ranked method with more tokens overlapping, in order, between query and method name. Moreover, if two methods have equal $S_{name}$, we boosted the rank of the method with a higher matching score ($S_{body}$) between query and method body, as calculated in Eq. (\ref{eq_score2}). Similar to $S_{name}$, a higher $S_{body}$ value implies a better ordered token match between the query and the tokens in the method body. However, different from $S_{name}$, we added the last term in $S_{body}$ to represent the ratio of JDK APIs in the method, in terms of the fully qualified tokens, because developers favor methods with more basic APIs (e.g., JDK) \cite{sadowski2015developers,martie2015sameness}. After the above two rounds of ranking refinement, we returned the top-10 methods in the list. \bl{Note that, although our tool JAnalyzer can extract Javadocs and comments for code, CodeMatcher does not consider them when reranking the searched code, following all the related studies \cite{gu2018deep,lv2015codehow, cambronero2019deep,feng2020codebert,husain2019codesearchnet}. This is because the code search task aims to bridge the semantic gap between the query in natural language and the code in a programming language. Besides, whether the comments and Javadocs written in natural language are helpful for code search is highly dependent on their quality and quantity.
However, in practical usage, we cannot assume that any code always contains high-quality comments or Javadoc.} \begin{equation}\label{eq_score} \begin{split} S_{name} &= \frac{\#query\ words\ as\ keywords}{\#query\ words} \\ &\times{} \frac{\#characters\ in\ name\ orderly\ matched\ keywords}{\#characters\ in\ name}\\ \end{split} \end{equation} \begin{equation}\label{eq_score2} \begin{split} S_{body} &= \frac{\#API\ words\ matched\ query\ words}{\#query\ words} \\ &\times{} \frac{Max[\#API\ words\ orderly\ matched\ query\ words]}{\#query\ words}\\ &\times{} \frac{\#JDK\ APIs}{\#APIs}\\ \end{split} \end{equation} \vspace{5pt}\noindent\textbf{Example.} Table \ref{codematcher_example} illustrates an example for the first query \textit{``convert an inputstream to a string''} in Table \ref{queries50}. From the token metadata, we can notice that both 'inputstream' and 'string' have level-5 importance (i.e., they are JDK objects), and they are frequently used for method naming (frequency $\geq$ 3442). With this metadata, CodeMatcher successively generates four candidate regular match strings on indexed method names. For the two returned methods, the first one ranks higher due to its larger matching scores on method name ($S_{name}$) and body ($S_{body}$).
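As an illustration of Eq. (\ref{eq_score}), the first factor of $S_{name}$ is the share of query words used as keywords, and the second is the share of method-name characters covered by the ordered keyword match. The sketch below is our own simplification: it assumes the keyword set that matched a method is already known from Step-2, and $S_{body}$ would follow the same pattern on the API token sequence.

```python
def s_name(query_words, keywords, method_name):
    """S_name = (#query words used as keywords / #query words)
              x (#name characters covered by the ordered match / #name chars)."""
    name = method_name.lower()
    covered, pos = 0, 0
    for kw in keywords:                    # keywords must appear in order
        i = name.find(kw.lower(), pos)
        if i < 0:                          # ordered match failed
            return 0.0
        covered += len(kw)
        pos = i + len(kw)
    return (len(keywords) / len(query_words)) * (covered / len(name))

query = ["convert", "an", "inputstream", "to", "a", "string"]
print(round(s_name(query, ["convert", "inputstream", "to", "string"],
                   "convertInputStreamToString"), 2))   # 4/6 x 26/26 = 0.67
print(round(s_name(query, ["convert", "inputstream", "string"],
                   "convertInputStream2String"), 2))    # 3/6 x 24/25 = 0.48
```

The two printed values reproduce the $S_{name}$ scores worked out in the example in Table \ref{codematcher_example}.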
\begin{table} \centering \caption{An example for CodeMatcher} \begin{tabular}{|l|} \begin{tabular}{|l|c|c|c|c|c|c|} \toprule \mc{7}{\textbf{Query:} convert an inputstream to a string} \\ \midrule \mc{7}{\textbf{(0) Indexed Codebase}} \\ \midrule \mc{7}{\bl{Source Code 1:}}\\ \mc{7}{ public String \textbf{convertInputStreamToString}(InputStream is)\{}\\ \mc{7}{\quad{}InputStreamReader isr = new InputStreamReader(is);}\\ \mc{7}{\quad{}BufferedReader r = new BufferedReader(isr);}\\ \mc{7}{StringBuilder sb = new StringBuilder();}\\ \mc{7}{\quad{}String line;}\\ \mc{7}{\quad{}while ((line = r.readLine()) != null)\{}\\ \mc{7}{\quad{}\quad{}sb.append(line);}\\ \mc{7}{\quad{}\}}\\ \mc{7}{\quad{}return sb.toString();}\\ \mc{7}{\}}\\ \mc{7}{\bl{Method Name: }\rd{convertInputStreamToString}}\\ \mc{7}{\bl{API Sequence: }java.io.\rd{InputStream}, java.io.\rd{InputStream}Reader, java.lang.\rd{String}Builder,}\\ \mc{7}{java.lang.\rd{String}, java.lang.\rd{String}Builder.readline(), java.lang.\rd{String}Builder.append(),}\\ \mc{7}{java.lang.\rd{String}Builder.\rd{toString}(), java.io.\rd{String}}\\ \midrule \mc{7}{\bl{Source Code 2:} public String convertInputStream2String(InputStream is)\{}\\ \mc{7}{\quad{}return convert(is);}\\ \mc{7}{\}}\\ \mc{7}{\bl{Method Name: }\rd{convertInputStream}2\rd{String}}\\ \mc{7}{\bl{API Sequence: }java.io.\rd{InputStream}, Util.\rd{convert}(), java.lang.\rd{String}}\\ \midrule \mc{7}{\textbf{(1) Token Metadata}}\\ \midrule \bl{Token} & convert & an & inputstream & to & a & string\\ \midrule \bl{Property} & verb & other & noun & prep & other & noun \\ \midrule \bl{Frequency} & 39292 & 0 & 3442 & 22 & 0 & 52369\\ \midrule \bl{Importance} & 4 & 1 & 5 & 2 & 1 & 5\\ \midrule \mc{7}{\textbf{(2) Keywords for Code Search}}\\ \midrule \mc{7}{\bl{Regular Match String 1:} .*convert.*inputstream.*to.*string.*}\\ \mc{7}{\bl{Regular Match String 2:} .*convert.*inputstream.*string.*}\\ \mc{7}{\bl{Regular Match String 3:} .*inputStream.*string.*}\\ 
\mc{7}{\bl{Regular Match String 4:} .*String.*}\\ \midrule \mc{7}{\textbf{(3) Reranking}}\\ \midrule \mc{7}{\bl{Rank = 1, }\textbf{convertInputStreamToString()\{...\}}}\\ \mc{7}{\rd{$S_{name}=\frac{4}{6}\times\frac{26}{26}=0.67$}, \rd{$S_{body}=\frac{3}{6}\times\frac{3}{6}\times\frac{8}{8}=0.25$}}\\ \midrule \mc{7}{\bl{Rank = 2, }\textbf{convertInputStream2String()\{...\}}}\\ \mc{7}{\rd{$S_{name}=\frac{3}{6}\times\frac{24}{25}=0.48$}, \rd{$S_{body}=\frac{3}{6}\times\frac{2}{6}\times\frac{2}{3}=0.11$}}\\ \bottomrule \end{tabular} \end{tabular} \label{codematcher_example} \end{table} \subsection{Can CodeMatcher Outperform the \bl{Baseline} Models?}\label{rq1} \begin{table}[] \caption{Performance comparison of CodeMatcher and baseline models (DeepCS, CodeHow, and UNIF), where the model performance is measured by SuccessRate@1/5/10 (SR@1/5/10), Precision@1/5/10 (P@1/5/10), and MRR.} \begin{tabular}{|c|l|ccc|ccc|c|} \toprule \textbf{Queries} & \textbf{Model} & \textbf{SR@1} & \textbf{SR@5} & \textbf{SR@10} & \textbf{P@1} & \textbf{P@5} & \textbf{P@10} & \textbf{MRR} \\ \midrule \multirow{4}{*}{$Queries_{50}$} & DeepCS & 0.22 & 0.46 & 0.64 & 0.22 & 0.23 & 0.22 & 0.36 \\ & CodeHow & 0.30 & 0.52 & 0.62 & 0.30 & 0.23 & 0.21 & 0.41 \\ & UNIF & 0.22 & 0.58 & 0.68 & 0.22 & 0.21 & 0.17 & 0.40 \\ & CodeMatcher & 0.64 & 0.76 & 0.76 & 0.64 & 0.58 & 0.57 & 0.71 \\ \midrule \multirow{4}{*}{$Queries_{99}$} & DeepCS & 0.20 & 0.38 & 0.45 & 0.20 & 0.14 & 0.11 & 0.33 \\ & CodeHow & 0.22 & 0.45 & 0.54 & 0.22 & 0.21 & 0.19 & 0.36 \\ & UNIF & 0.33 & 0.49 & 0.56 & 0.33 & 0.21 & 0.15 & 0.43 \\ & CodeMatcher & 0.48 & 0.65 & 0.68 & 0.48 & 0.42 & 0.37 & 0.58 \\ \midrule \multirow{4}{*}{$Query_{25}$} & DeepCS & 0.16 & 0.20 & 0.28 & 0.16 & 0.06 & 0.06 & 0.26 \\ & CodeHow & 0.24 & 0.32 & 0.40 & 0.24 & 0.15 & 0.14 & 0.34 \\ & UNIF & 0.24 & 0.44 & 0.52 & 0.24 & 0.22 & 0.15 & 0.38 \\ & CodeMatcher & 0.28 & 0.56 & 0.56 & 0.28 & 0.26 & 0.19 & 0.46 \\ \midrule \multirow{4}{*}{$Queries_{all}$} & DeepCS & 
0.20 & 0.38 & 0.48 & 0.20 & 0.15 & 0.13 & 0.33 \\ & CodeHow & 0.25 & 0.45 & 0.54 & 0.25 & 0.20 & 0.19 & 0.37 \\ & UNIF & 0.29 & 0.51 & 0.59 & 0.29 & 0.21 & 0.16 & 0.41 \\ & CodeMatcher & 0.50 & 0.67 & 0.68 & 0.50 & 0.44 & 0.40 & 0.60 \\ \bottomrule \end{tabular} \label{tab:results} \end{table} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{baseline.eps} \caption{Violin plots of FRank for CodeMatcher and baseline models (DeepCS, CodeHow, and UNIF), where red squares and white dots indicate the mean and median FRank respectively; the values denote the mean FRank for each model; '*' indicates the significant difference (p-value $<$ 0.05) between a model and CodeMatcher, which is tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level.} \label{fig_frank_baseline} \end{figure} Table \ref{tab:results} compares the experimental results between the proposed model CodeMatcher and \bl{three baseline models (i.e., DeepCS, CodeHow, and UNIF)} on our large-scale testing data. 
We can notice that, for the studied 174 queries ($Queries_{all}$), DeepCS obtains an \bl{MRR of 0.33, where SuccessRate@1/5/10 = 0.20/0.38/0.48 and Precision@1/5/10 = 0.20/0.15/0.13.} Meanwhile, CodeHow achieves an \bl{MRR of 0.37 with SuccessRate@1/5/10 = 0.25/0.45/0.54 and Precision@1/5/10 = 0.25/0.20/0.19.} Besides, \bl{UNIF gains a better search performance with MRR = 0.41, where SuccessRate@1/5/10 = 0.29/0.51/0.59 and Precision@1/5/10 = 0.29/0.21/0.16.} Table \ref{tab:results} shows that the proposed model CodeMatcher achieves an \bl{MRR of 0.60 with SuccessRate@1/5/10 = 0.50/0.67/0.68 and Precision@1/5/10 = 0.50/0.44/0.40.} Compared with the baseline models \bl{(DeepCS, CodeHow, and UNIF)}, the MRR improved by \bl{81.8\%, 62.2\%, and 46.3\%} respectively; SuccessRate@1/5/10 increased by \bl{150\%/76.3\%/41.7\%, 100\%/48.9\%/25.9\%, and 72.4\%/31.4\%/15.3\%} respectively; and Precision@1/5/10 improved by \bl{150\%/193.3\%/207.7\%, 100\%/120\%/110.5\%, and 72.4\%/109.5\%/150\%} respectively. Moreover, Fig. \ref{fig_frank_baseline} shows the violin plots of FRank between models, where CodeMatcher obtains a better mean FRank (4.55) than the other baselines ($>$6). CodeMatcher's advantage in FRank is also statistically significant (p-value$<$0.05), after we tested the FRank between CodeMatcher and each baseline with the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level. The above experimental results imply that CodeMatcher performs well and clearly outperforms existing solutions. \RS{1}{Our CodeMatcher outperforms the baseline models \bl{(DeepCS, CodeHow, and UNIF)} substantially, indicating the simplification from DeepCS to CodeMatcher is reasonable and valuable.} \subsection{Is CodeMatcher Faster than \bl{the Baseline} Models?}\label{codematcher_evaluation} Table \ref{tb_time} compares the time efficiency between CodeMatcher and baseline models \bl{(DeepCS, CodeHow, and UNIF)}.
All these models ran on a server with an Intel-i7 CPU and an Nvidia Geforce GTX1060 GPU. \bl{In practical usage, developers expect that code search models can quickly respond to their search queries, namely with a short search time. The IR-based models (CodeMatcher and CodeHow) can quickly retrieve a small subset of methods from the codebase by leveraging the indexing technique (commonly $<$1s for a query) and sort the candidates with a light-weight ranking strategy. Note that the indexing technique needs to build indexes for all methods in advance to facilitate code search. Although building indexes may take a long time, this is acceptable as indexing commonly works offline. Meanwhile, to return the top-n relevant methods for a query, the DL-based models (DeepCS and UNIF) have to compute the cosine similarities between the query and all methods in the codebase in terms of high-dimensional vectors. As DL-based models cannot accelerate the search time like the IR-based models do with the indexing technique, they have to leverage the multi-threading technique to boost their performance.} The results show that DeepCS took 58.16h to train, 24.51h to preprocess the codebase (i.e., parse, encode, and vectorize method name/APIs/tokens), and 376.4s to search code for each query. \bl{In contrast, UNIF only spent 4.1h on model training because its network is simple, containing only embedding and attention layers instead of the complex LSTM layers used in DeepCS. However, UNIF required 455.3s to complete the search task for a query, much slower than DeepCS, because UNIF represents code/query as vectors with 500 dimensions \cite{cambronero2019deep} while DeepCS only needs vectors with 400 dimensions \cite{gu2018deep}. Although the dimension just increased by 25\%, UNIF takes 21\% more time for code search as the codebase is large, with more than 16 million methods, as described in Table \ref{codebase}.
Therefore, the similarity computation for the higher-dimensional vectors takes much more time, even though UNIF has a simple network.} Compared with DeepCS \bl{and UNIF}, our IR-based model CodeMatcher ran faster and \bl{substantially decreased the code search time from 370+s to 0.3s per query.} We can notice that CodeMatcher did not need long training \bl{as the DL-based models do, although the training usually happens rarely (or just once).} As to the preprocessing time, CodeMatcher only required 23.5h to preprocess the code (23.2h for code parsing and 0.3h for code indexing). \bl{We found that the DL-based models work slowly mainly because it is time-consuming ($>$300s for a query) to load the 23+GB vectorized codebase (with 16m methods) for computing its cosine similarities with the query, even when using the multi-threading technique. But the IR-based models do not have this issue, because they leverage the indexing technique to quickly ($<$1s for a query) reduce the search space scale (from 16m methods to hundreds of methods or fewer) so that the code search can be done in a very short time.
Therefore, further DL-based studies need to consider ways to solve this bottleneck.} Additionally, CodeMatcher works \bl{8 times faster than} the IR-based model CodeHow mainly because \bl{the fuzzy search component in CodeMatcher only retrieves a limited number of candidate methods from the codebase, as described in Section \ref{implementation}.} \begin{table} \centering \caption{Time efficiency comparison between CodeMatcher and baseline models in three different phases.} \setlength{\tabcolsep}{12pt}{ \begin{tabular}{|l|rrr|} \toprule \textbf{Model} & \textbf{Train} & \textbf{Preprocess} & \textbf{Search}\\ \midrule DeepCS & 58.2h & 24.5h & 376.4s/query\\ UNIF & 4.1h & 24.2h & 455.3s/query \\ CodeHow & - & 23.5h & 2.4s/query\\ CodeMatcher & - & 23.5h & 0.3s/query \\ \bottomrule \end{tabular}} \label{tb_time} \end{table} The 23.5 hours of code preprocessing seems time-consuming for CodeMatcher. However, there are about 17 million methods, as shown in Table \ref{codebase}, and each method only takes about 0.005s for code preprocessing on average. \bl{The low preprocessing time for the IR-based models means that the model can quickly parse and index the updated or newly added methods in the codebase.} Thus, CodeMatcher can support the dynamic and rapidly expanding code scale of GitHub, as this model requires no model optimization and changed or new methods can be rapidly parsed and indexed. \RS{2}{Our model CodeMatcher is faster than the DL-based models DeepCS \bl{and UNIF}, because CodeMatcher requires no model training and it can process a query with a \bl{more than 1.2k times} speedup. Meanwhile, CodeMatcher works \bl{8 times faster than} the IR-based model CodeHow.} \subsection{\bl{How do the CodeMatcher Components Contribute to the Code Search Performance?}}\label{why_well} \bl{Generally, CodeMatcher consists of three components: query understanding, fuzzy search, and reranking, as described in Section \ref{implementation}.
The query understanding aims to collect metadata for the fuzzy search and reranking. Note that the query understanding component leveraged the Stanford Parser to identify the property of query words (e.g., noun and verb). However, the property cannot always be precisely identified due to the limitations of the Stanford Parser. We found that the parser failed for 124 words across 7 case types, as shown in Table \ref{wrong_cases_for_sp}, which affects 55.2\% of the 174 queries. To investigate how the parser's accuracy affects the model performance, we manually corrected the wrong cases and applied them to the code search (CodeMatcher+SP$^{+}$). Table \ref{tab:components} shows that CodeMatcher+SP$^{+}$ obtains an MRR of 0.61, outperforming CodeMatcher by only 2\%. Fig. \ref{fig_frank_component} also indicates that the mean FRank (4.47 vs. 4.55) between CodeMatcher+SP$^{+}$ and CodeMatcher is not significantly different (p-value$>$0.05) after performing the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level. We found that the optimization seldom improved the search performance for two reasons. One is that some failed identifications, including ``Noun-\textgreater{}Verb'' and ``Verb-\textgreater{}Noun'', do not influence the importance level of words as described in Table \ref{tb_level}. The other reason is that many queries are short with a limited number of words, so that the other two components can still find the expected code. Thus, under this situation, the accuracy of the used Stanford Parser is acceptable for CodeMatcher.} \bl{Moreover, to investigate how the key components (fuzzy search and reranking) affect the model performance, we tested CodeMatcher by excluding the reranking component (CodeMatcher-Rerank). Table \ref{tab:components} shows that the MRR of CodeMatcher-Rerank is 0.47, which is reduced by 21.7\% compared with the MRR of CodeMatcher. Besides, Fig. \ref{fig_frank_component} shows that the mean FRank value is increased from 4.55 to 6.0, where the difference is significant (p-value$<$0.05).
These results indicate that the reranking plays an important role in searching code. Besides, we can observe that even the fuzzy search component alone (CodeMatcher-Rerank) outperforms the other baseline models (DeepCS, CodeHow, and UNIF), whose MRR values are below 0.42, by 42.4\%, 27.0\%, and 14.7\% respectively.} \begin{table}[] \caption{Performance comparison of CodeMatcher with different component settings (\bl{-Rerank, excluding the reranking component; -S$_{body}$, excluding the ranking strategy $S_{body}$ in the reranking component; +SP$^{+}$, using the corrected results generated by the Stanford Parser}), where the model performance is measured by SuccessRate@1/5/10 (SR@1/5/10), Precision@1/5/10 (P@1/5/10), and MRR.} \begin{tabular}{|c|l|ccc|ccc|c|} \toprule \textbf{Queries} & \textbf{Model} & \textbf{SR@1} & \textbf{SR@5} & \textbf{SR@10} & \textbf{P@1} & \textbf{P@5} & \textbf{P@10} & \textbf{MRR} \\ \midrule \multirow{4}{*}{$Queries_{50}$} & CodeMatcher & 0.64 & 0.76 & 0.76 & 0.64 & 0.58 & 0.57 & 0.71 \\ & CodeMatcher-Rerank & 0.54 & 0.68 & 0.68 & 0.54 & 0.42 & 0.40 & 0.63 \\ & CodeMatcher-$S_{body}$ & 0.60 & 0.72 & 0.74 & 0.60 & 0.53 & 0.50 & 0.68 \\ & CodeMatcher+$SP^+$ & 0.66 & 0.76 & 0.76 & 0.66 & 0.59 & 0.58 & 0.72 \\ \midrule \multirow{4}{*}{$Queries_{99}$} & CodeMatcher & 0.48 & 0.65 & 0.68 & 0.48 & 0.42 & 0.37 & 0.58 \\ & CodeMatcher-Rerank & 0.33 & 0.51 & 0.58 & 0.33 & 0.29 & 0.25 & 0.45 \\ & CodeMatcher-$S_{body}$ & 0.45 & 0.62 & 0.67 & 0.45 & 0.37 & 0.33 & 0.56 \\ & CodeMatcher+$SP^+$ & 0.49 & 0.66 & 0.69 & 0.49 & 0.42 & 0.38 & 0.60 \\ \midrule \multirow{4}{*}{$Queries_{25}$} & CodeMatcher & 0.28 & 0.56 & 0.56 & 0.28 & 0.26 & 0.19 & 0.46 \\ & CodeMatcher-Rerank & 0.12 & 0.16 & 0.24 & 0.12 & 0.11 & 0.09 & 0.21 \\ & CodeMatcher-$S_{body}$ & 0.32 & 0.52 & 0.52 & 0.32 & 0.28 & 0.22 & 0.46 \\ & CodeMatcher+$SP^+$ & 0.32 & 0.56 & 0.56 & 0.32 & 0.30 & 0.26 & 0.48 \\ \midrule \multirow{4}{*}{$Queries_{all}$} & CodeMatcher & 0.50 & 0.67 & 0.68 & 0.50 & 0.44 & 0.40 & 0.60 \\ &
CodeMatcher-Rerank & 0.36 & 0.51 & 0.56 & 0.36 & 0.30 & 0.27 & 0.47 \\ & CodeMatcher-$S_{body}$ & 0.48 & 0.63 & 0.67 & 0.48 & 0.40 & 0.37 & 0.58 \\ & CodeMatcher+$SP^+$ & 0.52 & 0.67 & 0.69 & 0.52 & 0.45 & 0.42 & 0.61 \\ \bottomrule \end{tabular} \label{tab:components} \end{table} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{component.eps} \caption{Violin plots of FRank for CodeMatcher \bl{with different settings (-Rerank, excluding the reranking component; -S$_{body}$, excluding the ranking strategy $S_{body}$ in the reranking component; +SP$^{+}$, using the corrected results generated by the Stanford Parser)}, where red squares and white dots indicate the mean and median FRank respectively; the values denote the mean FRank for each model; '*' indicates the significant difference (p-value $<$ 0.05) between a model and CodeMatcher, which is tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level.} \label{fig_frank_component} \end{figure} \begin{table}[] \caption{\bl{Cases in which the Stanford Parser in CodeMatcher wrongly identified the property of query words.}} \begin{tabular}{|c|c||c|c|} \toprule \textbf{Wrong Case} & \textbf{No.} & \textbf{Wrong Case} & \textbf{No.} \\ \midrule Noun-\textgreater{}Verb & 67 & Noun-\textgreater{}Adjective & 7 \\ Verb-\textgreater{}Noun & 23 & Verb-\textgreater{}Adjective & 2 \\ Adjective-\textgreater{}Verb & 18 & Conjunction-\textgreater{}Noun & 1 \\ Adjective-\textgreater{}Noun & 6 & \textbf{Total} & 124 \\ \bottomrule \end{tabular} \label{wrong_cases_for_sp} \end{table} For CodeMatcher as described in Section \ref{method}, $S_{name}$ and $S_{body}$ are two matching scores that determine the ranks of the searched results, but it is unknown how much each of them contributed to the performance of CodeMatcher. Thus, we used CodeMatcher for code search but removed the matching score $S_{body}$.
Table \ref{tab:components} shows that the model only using $S_{name}$ (CodeMatcher-$S_{body}$) obtains \bl{MRR = 0.58}. Compared with standard CodeMatcher, MRR is reduced by \bl{only 3.3\%}. Besides, the violin plots in Fig. \ref{fig_frank_component} show that the mean FRank values \bl{(4.55 vs. 4.81)} of CodeMatcher and CodeMatcher-$S_{body}$ are not significantly different (p-value $>$ 0.05 for the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level). These results indicate that the score $S_{name}$, which matches query keywords with the method name, dominates the performance of CodeMatcher, and imply that the method name is a significant bridge over the semantic gap between query and code. Furthermore, although the influence of $S_{body}$, which matches query keywords with the method body, is low, we cannot ignore its contribution. We observed that $S_{body}$ did not work as well as $S_{name}$ because it cannot fully connect the semantics of the query and the method body, which are written in natural language and programming language respectively. Therefore, the $S_{body}$ part requires further improvement.
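For concreteness, the metrics reported in these tables can be computed as in the following sketch (illustrative Python, not the authors' evaluation script; the per-query relevance judgments below, and the convention of counting a top-10 miss as FRank 11, are assumptions made only for this example):

```python
# Sketch: computing SuccessRate@k, Precision@k, MRR, and mean FRank from
# per-query relevance judgments. Each inner list marks the top-10 results of
# one query as relevant (1) or irrelevant (0); the data is made up.

def metrics(judgments, k=10):
    n = len(judgments)
    success = sum(1 for r in judgments if any(r[:k])) / n
    precision = sum(sum(r[:k]) / k for r in judgments) / n
    # Reciprocal rank is 0 when no relevant result appears in the list.
    mrr = sum(1.0 / (r.index(1) + 1) if 1 in r else 0.0 for r in judgments) / n
    # FRank: rank of the first hit; a miss is counted as 11 here (assumption).
    franks = [r.index(1) + 1 if 1 in r else 11 for r in judgments]
    return success, precision, mrr, sum(franks) / n

queries = [
    [1, 0, 1, 0, 0, 0, 0, 0, 0, 0],  # first hit at rank 1
    [0, 0, 1, 1, 1, 0, 0, 0, 0, 0],  # first hit at rank 3
    [0] * 10,                        # no hit in the top-10
]
sr10, p10, mrr, mean_frank = metrics(queries, k=10)
```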
\RS{3}{\bl{All the CodeMatcher components are necessary.} CodeMatcher works well mainly because it can precisely match the query with the names of relevant methods by considering the importance of the programming words in the query and the order of query tokens.} \subsection{Can CodeMatcher Outperform Existing Online Code Search Engines?}\label{rq4} \begin{table}[] \caption{Performance comparison of CodeMatcher and online search engines (Google and GitHub search), where the model performance is measured by SuccessRate@1/5/10 (SR@1/5/10), Precision@1/5/10 (P@1/5/10), and MRR.} \begin{tabular}{|c|l|ccc|ccc|c|} \toprule \textbf{Queries} & \textbf{Model} & \textbf{SR@1} & \textbf{SR@5} & \textbf{SR@10} & \textbf{P@1} & \textbf{P@5} & \textbf{P@10} & \textbf{MRR} \\ \midrule \multirow{4}{*}{$Queries_{50}$} & CodeMatcher & 0.64 & 0.76 & 0.76 & 0.64 & 0.58 & 0.57 & 0.71 \\ & GitHub Search & 0.28 & 0.60 & 0.64 & 0.28 & 0.21 & 0.17 & 0.44 \\ & Google Search & 0.32 & 0.80 & 0.90 & 0.32 & 0.34 & 0.30 & 0.51 \\ & CodeMatcher+Google & 0.72 & 0.90 & 0.94 & 0.72 & 0.42 & 0.34 & 0.80 \\ \midrule \multirow{4}{*}{$Queries_{99}$} & CodeMatcher & 0.48 & 0.65 & 0.68 & 0.48 & 0.42 & 0.37 & 0.58 \\ & GitHub Search & 0.27 & 0.57 & 0.64 & 0.27 & 0.21 & 0.17 & 0.42 \\ & Google Search & 0.27 & 0.56 & 0.66 & 0.27 & 0.29 & 0.24 & 0.42 \\ & CodeMatcher+Google & 0.49 & 0.71 & 0.76 & 0.49 & 0.34 & 0.26 & 0.61 \\ \midrule \multirow{4}{*}{$Queries_{25}$} & CodeMatcher & 0.28 & 0.56 & 0.56 & 0.28 & 0.26 & 0.19 & 0.46 \\ & GitHub Search & 0.16 & 0.40 & 0.48 & 0.16 & 0.14 & 0.14 & 0.31 \\ & Google Search & 0.28 & 0.52 & 0.60 & 0.28 & 0.24 & 0.24 & 0.41 \\ & CodeMatcher+Google & 0.32 & 0.56 & 0.68 & 0.32 & 0.25 & 0.24 & 0.45 \\ \midrule \multirow{4}{*}{$Queries_{all}$} & CodeMatcher & 0.50 & 0.67 & 0.68 & 0.50 & 0.44 & 0.40 & 0.60 \\ & GitHub Search & 0.26 & 0.55 & 0.61 & 0.26 & 0.20 & 0.17 & 0.41 \\ & Google Search & 0.29 & 0.62 & 0.72 & 0.29 & 0.30 & 0.26 & 0.45 \\ & CodeMatcher+Google & 0.53
& 0.74 & 0.80 & 0.53 & 0.35 & 0.28 & 0.64 \\ \bottomrule \end{tabular} \label{tab:search_engine} \end{table} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{online.eps} \caption{Violin plots of FRank for CodeMatcher and compared models (GitHub search, Google search, CodeMatcher + Google), where red squares and white dots indicate the mean and median FRank respectively; the values denote the mean FRank for each model; '*' indicates the significant difference (p-value $<$ 0.05) between a model and CodeMatcher, which is tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual}.} \label{fig_frank_engine} \end{figure} GitHub\footnote{https://github.com/search} and Google\footnote{https://www.google.com/} search engines are what developers commonly use for code search in the real world. Comparing CodeMatcher with GitHub/Google search is therefore helpful for understanding the usefulness of CodeMatcher. To investigate the advantages of CodeMatcher over the GitHub/Google search, we used the \bl{174 queries in Tables \ref{queries50}-\ref{queries25}} as their inputs and performed code search over the whole set of Java code methods on GitHub. To work on the GitHub codebase, the Google search engine performs code search with the following advanced settings: "site:github.com" and "filetype:java". However, we need to note that the GitHub and Google search engines differ from CodeMatcher in three aspects: \textit{(1) Larger-scale codebase.} The codebases of GitHub and Google contain more repositories, because these two online search engines cannot restrict the search scope to the codebase we collected for CodeMatcher. \textit{(2) Wider context.} GitHub/Google search matches keywords in a query against all text in code files (e.g., method, comment, and Javadoc), while CodeMatcher only uses the text of methods. \textit{(3) Code snippet vs. method.} GitHub/Google search returns a list of code snippets, which are not necessarily methods as returned by CodeMatcher.
When using the GitHub and Google search engines for code search, both of them return a list of code snippets with matched keywords, and we then inspect the top-10 code snippets to check whether they are relevant to the corresponding queries. We excluded any snippet that was returned just because its comment records a Stack Overflow link (e.g., \textit{"https://stackoverflow.com/questions/157944/create-arraylist-from-array"}) with the same string as the search query (e.g., "create arraylist from array"). This is because the query is also collected from the question title of that link, so this kind of code snippet would overestimate the performance of GitHub/Google search. \bl{As CodeMatcher does not search code based on comments or Javadoc, we do not need to filter the Stack Overflow links as we did for the GitHub and Google searches.} Table \ref{tab:search_engine} shows that the GitHub search achieves \bl{MRR = 0.41 with SuccessRate@1/5/10 = 0.26/0.55/0.61 and Precision@1/5/10 = 0.26/0.20/0.17.} Meanwhile, the Google search obtains an \bl{MRR of 0.45, where SuccessRate@1/5/10 = 0.29/0.62/0.72 while Precision@1/5/10 = 0.29/0.30/0.26} respectively. We can notice that CodeMatcher outperforms the GitHub and Google search engines by \bl{46.3\% and 33.3\%} respectively in terms of MRR; by \bl{92.3\%/21.8\%/11.5\% and 72.4\%/8.1\%/-5.6\%} in terms of SuccessRate@1/5/10; and by \bl{92.3\%/120\%/135.3\% and 72.4\%/46.7\%/53.8\%} in terms of Precision@1/5/10. Additionally, as illustrated in Fig. \ref{fig_frank_engine}, CodeMatcher achieves a better mean FRank \bl{(4.55)} than GitHub/Google search (mean FRank $>$ 5.1); the difference is statistically significant (p-value$<$0.05 tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level). These results indicate that CodeMatcher shows advantages in SuccessRate@1/5, Precision@1/5/10, and MRR as compared with the two existing online search engines.
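The percentage gains quoted here are relative improvements; a minimal sketch (using the $Queries_{all}$ MRR and SuccessRate@1 values from the table above) makes the arithmetic explicit:

```python
# Sketch of the relative-improvement arithmetic behind the reported gains,
# e.g. CodeMatcher (MRR 0.60) vs. GitHub search (0.41) and Google search (0.45).

def rel_gain(new, old):
    """Relative improvement of `new` over `old`, in percent (one decimal)."""
    return round(100 * (new - old) / old, 1)

mrr_gain_github = rel_gain(0.60, 0.41)  # 46.3% MRR gain over GitHub search
mrr_gain_google = rel_gain(0.60, 0.45)  # 33.3% MRR gain over Google search
sr1_gain_github = rel_gain(0.50, 0.26)  # 92.3% SuccessRate@1 gain over GitHub
```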
These experimental results indicate the practical merit of CodeMatcher over the GitHub/Google search. Moreover, we can notice that although Google cannot achieve precision as high as CodeMatcher when returning only one method for a query, it can successfully recommend at least one relevant method for more queries. Motivated by this observation, we investigated the code search performance of combining CodeMatcher and Google, where the first method returned by Google is replaced by the one recommended by CodeMatcher. The last row in Table \ref{tab:search_engine} shows that the combined model (CodeMatcher+Google) gains the best SuccessRate@1/5/10 \bl{(0.53/0.74/0.80), Precision@1 (0.53), and MRR (0.64)} compared with the other models in the table, even if there are some sacrifices in Precision@5/10 \bl{(0.35/0.28)}. \bl{We found that CodeMatcher+Google gains a higher SuccessRate@1 than CodeMatcher because CodeMatcher cannot find any results for some queries, which can be compensated by Google's results. Specifically, CodeMatcher and Google cannot return any correct code for 31.6\% and 28.2\% of the total queries respectively when we inspected the top-10 returned results. But the adopted combination strategy (CodeMatcher+Google) can reduce the failure rate to 20\%.} We can also notice that the combined model (CodeMatcher+Google) improved the mean FRanks of CodeMatcher \bl{(4.55) and Google (5.16) to 3.78}, as illustrated in Fig. \ref{fig_frank_engine}, although the improvement over CodeMatcher is not statistically significant (p-value$>$0.05 tested by the Wilcoxon signed-rank test \cite{wilcoxon1945individual} at a 5\% significance level). This implies that it is beneficial to incorporate the proposed model CodeMatcher into the Google search engine. \RS{4}{CodeMatcher is also advantageous over the existing online search engines, GitHub and Google search.
It is beneficial to incorporate CodeMatcher into the Google search for practical usage.} \subsection{Threats to Validity \rs{and Limitations}} There are some threats affecting the validity of our experimental results and conclusions. \vspace{5pt}\noindent\textbf{Manual Evaluation.} The relevancy of the returned code methods to the studied queries was manually identified, which could suffer from subjectivity bias. To mitigate this threat, the manual analysis was performed independently by two developers from Baidu Inc. \bl{For the first query set ($Queries_{50}$),} they reached a substantial agreement in terms of the value (0.62) of Cohen's Kappa statistic \cite{viera2005understanding}; and if the developers disagreed on a relevancy identification, they performed an open discussion to resolve it. \bl{For the second and third query sets ($Queries_{99}$ and $Queries_{25}$), the agreement improved to 0.72 in terms of Cohen's Kappa statistic due to the developers' increased evaluation experience.} In the future, we will mitigate this threat by inviting more developers. Moreover, in the relevancy identification, we only consider the top-10 returned code results following Gu et al. \cite{gu2018deep}. However, in real-world code search this setting is reasonable, because developers tend to inspect the top-10 results and ignore the rest due to limited time and patience. \vspace{5pt}\noindent\textbf{Limited Queries and Java Codebase.} Following Gu et al. \cite{gu2018deep}, we evaluated the model with popular questions from Stack Overflow, which may not be representative of all possible queries for code search engines. To mitigate this threat, the selected top-50 queries are the most frequently asked questions collected by Gu et al. \cite{gu2018deep} in a systematic procedure, as referred to in Section \ref{exp}.
\bl{We also added 99 more queries provided in the CodeSearchNet challenge \cite{husain2019codesearchnet} and 25 more queries from two related studies \cite{mishne2012typestate,keivanloo2014spotting}.} \bl{Besides, many studied queries are highly related to popular APIs, so CodeMatcher may not work for queries involving less popular APIs. In this case, the performance of CodeMatcher could be overestimated.} In the future, we will extend the \bl{scale, scope, and variety} of the code search queries. \bl{We also plan to investigate how to automatically evaluate model performance on a large-scale codebase}. Furthermore, we performed the experiments with large-scale open-source Java repositories, but we have not evaluated repositories in other programming languages, though the idea of extending CodeMatcher to any language is straightforward. Moreover, we collected about 41k GitHub projects with high-quality code (i.e., more than 5 stars) as the codebase. Such a search space is likely to overestimate a model, because such projects tend to have more accurate Javadoc and generally cleaner, easier-to-understand code that is more likely to be commented. Although we extended Gu et al.'s \cite{gu2018deep} codebase (around 1k projects with at least 20 stars) to a larger scale, this situation cannot be ignored. We plan to extend our codebase further in the near future. \vspace{5pt}\noindent\textbf{Baseline Reproduction.} To estimate DeepCS on our testing data, we preprocessed the testing data according to Gu et al. \cite{gu2018deep}, while keeping the source code and training data unchanged. Meanwhile, we re-implemented the baselines CodeHow \bl{and UNIF strictly following the papers \cite{lv2015codehow} and \cite{cambronero2019deep} respectively} because their authors provided no source code. Our baseline reproductions may threaten the validity of our model.
To mitigate this threat, we double-checked our code and also made all the replication packages public as described in Section \ref{baselines}. \vspace{5pt}\noindent\textbf{Model Limitation.} Like all the existing code search models, the proposed solution CodeMatcher may not work if a search query shares no words with the methods in the codebase. To address this limitation, CodeMatcher replaces the words that do not appear in the codebase with their synonyms extracted from the codebase. This limitation could be further mitigated by combining CodeMatcher with a DL-based model as described in Section \ref{pros}. We intend to investigate this direction in the near future. \bl{Section \ref{why_well} indicated that the Stanford Parser used in CodeMatcher is not fully accurate. Although the corrected results only improve the model performance slightly in our studies, we cannot ignore this limitation in further studies.}
\section{Introduction}\label{sec:intro} A vast number of today's technological advancements are due to the ever-advancing improvement in modeling and solving partial differential equation (PDE) problems. One class of especially challenging problems is PDE-constrained optimization, which has a broad range of applications, ranging from engineering design and control problems to medical applications like cancer treatment \cite{schenk2009,biegler2003large,fisher2009data}. We consider a typical PDE-constrained optimization problem on a space-time cylinder $[0,T] \times \Omega$ given by \begin{align} \min_{y,u} J(y,u) \label{eq:objective0} \\ \mbox{subject to } \dot{y} = \mathcal{L}(y) + u, \label{eq:constraint0} \end{align} where we are interested in the minimization of the functional $J(y,u)$ constrained by a differential operator $\mathcal{L}(y)$ with space- and time-dependent state $y$ and control $u$. An extensive analysis of this type of problem can be found, e.g., in \cite{Troeltzsch2010} or \cite{Hinze2009}. In this work, we will follow the popular approach of first discretizing the problem and then formulating the discrete first-order optimality conditions as described in \cite{book::IK08}. Finding an efficient numerical solution to the large system of equations resulting from these conditions is of high interest, and many approaches exist to solve the resulting saddle point system, such as using a block preconditioner and subsequently solving the preconditioned system with iterative solvers like {{\sc Minres}} (cf. \cite{stoll2010,pearson2012regularization,rees2010optimal,schoberl2007symmetric}). However, today's PDE models often result in millions of variables, and solving optimization problems governed by such huge models is still a challenging task due to their size and complexity. They often even prove impossible to solve on a normal computer due to the memory required to store the usually large and dense solution computed by standard algorithms. Therefore, memory-efficient solvers, which compute reduced approximations to the solution, are a desirable approach to solve these challenging problems.
We here propose a widely applicable low-rank solver, which computes a projection of the solution onto a small subspace. The method relies on the reformulation of the problem into a matrix equation, which will be illustrated in section~\ref{sec:pb}. The numerical solution of this PDE-constrained optimization problem poses a significant challenge regarding the complexity of both storage and computational resources. We show that it is possible to separate spatial and temporal data via a novel low-rank matrix equation solver in section~\ref{sec:lowrank}, where we introduce a method to reduce the system of equations to a single low-rank matrix equation. {In section~\ref{section:comp_specifications1} we discuss the choice of the approximation space, while in section~\ref{section:comp_specifications2} we analyze} stopping criteria and a fast update scheme for the residual computation. Finally, in section~\ref{sec:num}, we examine the performance and applicability of our method on various numerical examples. We show the method's robustness with respect to the various problem parameters, and assess the performance of our method compared with a previously proposed low-rank approach on a distributed heat equation model as well as a boundary control problem. Further, we show the applicability of our method to more challenging problems such as a partial state observation and a non-symmetric PDE constraint. \section{Problem formulation} \label{sec:pb} We consider a PDE-constrained optimal control problem on a space-time cylinder with time interval $[0,T]$ and a spatial domain $\Omega$ with a subdomain $\Omega_{1} \subseteq \Omega$ as in Equations \eqref{eq:objective0}-\eqref{eq:constraint0}.
The functional we want to minimize reads \begin{equation} J(y,u) = \frac{1}{2} \int_{0}^T \int_{\Omega_1} (y - \hat{y} )^2 \,\mathrm{d} x \,\mathrm{d} t + \frac{\beta}{2} \int_{0}^T \int_{\Omega} u^2 \,\mathrm{d} x \,\mathrm{d} t, \label{eq:objective} \end{equation} where $y$ is the state, $\hat{y}$ the given desired state, and $u$ the control, which is regularized by the control cost parameter $\beta$. For the PDE subject to which we want to minimize the functional $J(y,u)$, let us exemplarily consider the heat equation with $\mathcal{L}(y) = \Delta y$ and Dirichlet boundary condition, \begin{align} \dot{y} -\Delta y &= u \mbox{ in } \Omega, \label{eq:PDE}\\ y &= 0 \mbox{ on } \partial \Omega. \end{align} There are two distinct options to proceed with this problem: either we formulate the optimality conditions and discretize them subsequently, or we first discretize the problem and then formulate the discrete first-order optimality conditions, known as the Karush-Kuhn-Tucker (KKT) conditions \cite{book::IK08}. In this work we follow the second approach. We discretize space and time in an all-at-once approach. The spatial discretization is done with finite elements, and to discretize in time, we split the time interval into $n_T$ intervals of length $\tau = \frac{T}{n_T}$. Using a rectangle rule, the discretization of \eqref{eq:objective} becomes \begin{equation} \sum_{t=1}^{n_T} \frac{\tau}{2} (y_{t} - \hat{y}_{t})^T M_1 (y_{t} - \hat{y}_t) + \frac{\tau \beta}{2} u_t^T M u_t, \end{equation} where $y_t, \hat{y}_t$, and $u_t$ are spatial discretizations of $y,\hat{y}$, and $u$ of size $n$ for each time step $t = 1,\hdots,n_T$. Using an implicit Euler scheme, the discrete formulation of the PDE \eqref{eq:PDE} reads \begin{align} \frac{M (y_{t} - y_{t-1})}{\tau} + K y_t &= M u_t, \mbox{ for } t = 1,\hdots, n_T,\\ y_0 &= 0.
\end{align} Here, $M,N,M_1 \in \mathbb{R}^{n\times n}$ are mass matrices of the spatial discretization and $K\in \mathbb{R}^{n \times n}$ denotes the so-called stiffness matrix resulting from the discretization of the differential operator $\mathcal{L}(y)$. The boundary constraints are incorporated into the stiffness matrix. For further details about the discretization of PDE operators we refer the reader to \cite{elman2005}. We collect the discretizations of the variables in matrices $Y = [y_1,\hdots,y_{n_T}] \in \mathbb{R}^{n \times n_T}$ and denote their vectorizations by $\underline{Y}= \mbox{vec}(Y)$, and analogously for $\hat{y}$ and $u$. With this we write the optimization problem in compact form as \begin{align} \min_{Y,U} \frac{\tau}{2} (\underline{Y}-\underline{\hat{Y}})^T \mathcal{M}_1 (\underline{Y}-\underline{\hat{Y}}) &+ \frac{\tau \beta}{2} \underline{U}^T \mathcal{M} \underline{U}, \label{eq:oc_discrete1} \\ \mbox{s. t. } \mathcal{K}\underline{Y} - \tau \mathcal{N} \underline{U} &= 0, \label{eq:oc_discrete2} \end{align} with the matrices \begin{align} \mathcal{M} = \begin{bmatrix} M & & & \\ & M & & \\ & & \ddots & \\ & & & M \end{bmatrix},\qquad \mathcal{K} = \begin{bmatrix} M + \tau K & & & \\ -M & \ddots & & \\ & \ddots & \ddots \\ & & -M & M + \tau K \end{bmatrix}, \end{align} where the mass matrix $M\in \mathbb{R}^{n\times n}$ and the stiffness matrix $K \in \mathbb{R}^{n\times n}$ each repeat $n_T$ times. The matrices $\mathcal{N}$ and $\mathcal{M}_1$ are block diagonal matrices like $\mathcal{M}$ but with the mass matrices $N$ and $M_1$ respectively on the diagonal.
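To make the discrete constraint concrete, the following toy sketch (an assumption-laden illustration: 1D finite differences with a lumped mass matrix stand in for the finite element matrices $M$ and $K$) runs the implicit Euler recursion $(M+\tau K)y_t = My_{t-1} + \tau M u_t$ implied by the equations above:

```python
# Sketch: implicit Euler sweep for M (y_t - y_{t-1})/tau + K y_t = M u_t
# on a 1D toy mesh (illustrative stand-in for the FEM discretization).
import numpy as np

n, n_T, tau = 10, 50, 0.02
h = 1.0 / (n + 1)
M = np.eye(n) * h                                   # lumped (diagonal) mass matrix
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h             # 1D Laplacian stiffness matrix
U = np.ones((n, n_T))                               # some fixed control

Y = np.zeros((n, n_T + 1))                          # column 0 holds y_0 = 0
A = M + tau * K
for t in range(1, n_T + 1):
    # (M + tau K) y_t = M y_{t-1} + tau M u_t
    Y[:, t] = np.linalg.solve(A, M @ Y[:, t - 1] + tau * M @ U[:, t - 1])
```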
To solve the discrete problem \eqref{eq:oc_discrete1} - \eqref{eq:oc_discrete2}, we have to solve the system of equations resulting from the first-order optimality conditions \cite{benzi_2005}. They state that an optimal solution must be a saddle point of the Lagrangian of the problem, \begin{equation} \nabla \mathcal{L}(\underline{Y}^*, \underline{U}^*, \underline{\Lambda}^*) = 0. \end{equation} The Lagrangian of this problem reads \begin{equation} \mathcal{L}(\underline{Y},\underline{U},\underline{\Lambda}) = \frac{\tau}{2}(\underline{Y}-\underline{\hat{Y}})^T \mathcal{M}_1 (\underline{Y}-\underline{\hat{Y}}) + \frac{\tau \beta}{2} \underline{U}^T \mathcal{M} \underline{U} + \underline{\Lambda}^T (\mathcal{K} \underline{Y} - \tau \mathcal{N} \underline{U}). \end{equation} Thus, the optimal solution solves the following set of linear equations, \begin{align} 0 = \nabla_Y \mathcal{L}(\underline{Y},\underline{U},\underline{\Lambda}) &= \tau \mathcal{M}_1 (\underline{Y} - \underline{\hat{Y}}) + \mathcal{K}^T \underline{\Lambda}, \label{eq:KKT_conditions1} \\ 0 = \nabla_U \mathcal{L}(\underline{Y},\underline{U},\underline{\Lambda}) &= \tau \beta \mathcal{M} \underline{U} - \tau \mathcal{N}^T \underline{\Lambda}, \label{eq:KKT_conditions2} \\ 0 = \nabla_\Lambda \mathcal{L}(\underline{Y},\underline{U},\underline{\Lambda}) &= \mathcal{K} \underline{Y} - \tau \mathcal{N} \underline{U}. \label{eq:KKT_conditions3} \end{align} Memory-wise, an effective approach to solve this large-scale problem is to use a low-rank method, which finds a cheap representation of the solution matrix in terms of a low-rank subspace range($V$) and a reduced solution $Z$, whose number of columns is related to the number of employed time steps, \begin{equation} Y \approx V Z , \end{equation} and similarly for the other matrix variables $U$ and $\Lambda$.
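The viability of such a low-rank ansatz rests on the rapid singular value decay of smooth space-time solution matrices. A small NumPy illustration (purely synthetic data, for intuition only; the sample field is a sum of two separable modes and hence exactly rank 2):

```python
# Sketch: singular value decay of a smooth space-time matrix Y, motivating
# the low-rank representation Y ≈ V Z. The field below is synthetic.
import numpy as np

n, n_T = 200, 100
x = np.linspace(0, 1, n)[:, None]
t = np.linspace(0, 1, n_T)[None, :]
# A smooth decaying heat-like profile: sum of two separable space-time modes
Y = np.sin(np.pi * x) * np.exp(-t) + 0.1 * np.sin(3 * np.pi * x) * np.exp(-9 * t)

s = np.linalg.svd(Y, compute_uv=False)
# Only the first two singular values are non-negligible here, so two columns
# of V already represent Y to machine-level accuracy.
rel_err = s[2:].sum() / s.sum()
```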
In this work we show that we can compute such a low-rank solution more efficiently than recent approaches by rearranging the large system of equations into a generalized Sylvester matrix equation of the form \begin{equation} \label{eq:matrix_equation} A_1 X + X C + A_2 X I_0 + A_3 X D - F_1F_2^T = 0. \end{equation} The resulting matrix equation can be efficiently solved by using a tailored low-rank Krylov-subspace method. This new approach greatly reduces storage and time requirements while being robust to parameter changes. This matrix equation oriented methodology allows one to clearly identify space and time dimensions, and aims at reducing the problem dimension in the space variables; the time variable is handled using an all-at-once procedure. Similar strategies have been used, for instance, in \cite{Breiten.Simoncini.Stoll.16,Palitta.19}. One assumption we make is that we have a low-rank approximation or representation of the desired state as \begin{equation} \hat{Y} \approx Y_1 Y_2^T, \end{equation} with $Y_1 \in \mathbb{R}^{n \times r}$, $Y_2 \in \mathbb{R}^{n_T \times r}$ and $r < n_T$. First, we introduce the auxiliary matrices \begin{align} \mathcal{I} = \begin{bmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{bmatrix} \mbox{ and } C = \begin{bmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{bmatrix}, \end{align} to rewrite the system matrices as Kronecker products $\mathcal{M} = \mathcal{I} \otimes M$, $\mathcal{M}_1 = \mathcal{I} \otimes M_1$, $\mathcal{N} = \mathcal{I} \otimes N$ and $\mathcal{K} = \mathcal{I} \otimes \tau K + C \otimes M$.
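The Kronecker splitting of the all-at-once matrix can be verified numerically on small random matrices (an illustrative sanity check, not part of the solver):

```python
# Sketch: check that I ⊗ (tau K) + C ⊗ M equals the block-bidiagonal
# all-at-once matrix with diagonal blocks M + tau K and subdiagonal blocks -M.
import numpy as np

n, n_T, tau = 4, 3, 0.1
rng = np.random.default_rng(0)
M = np.diag(rng.uniform(1, 2, n))            # lumped mass matrix
K = rng.standard_normal((n, n))              # generic stiffness matrix

C = np.eye(n_T) - np.eye(n_T, k=-1)          # 1 on diagonal, -1 on subdiagonal
K_big = np.kron(np.eye(n_T), tau * K) + np.kron(C, M)

# Assemble the block form directly for comparison
K_blocks = np.zeros((n * n_T, n * n_T))
for i in range(n_T):
    K_blocks[i*n:(i+1)*n, i*n:(i+1)*n] = M + tau * K
    if i > 0:
        K_blocks[i*n:(i+1)*n, (i-1)*n:i*n] = -M
```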
With this our system of KKT conditions \eqref{eq:KKT_conditions1} - \eqref{eq:KKT_conditions3} becomes \begin{align} \tau (\mathcal{I} \otimes M_1) \underline{Y} + (\mathcal{I} \otimes \tau K^T + C^T \otimes M^T) \underline{\Lambda} - \tau (\mathcal{I} \otimes M_1) \underline{\hat{Y}} &=0, \label{eq:KKT_kronecker1}\\ \tau \beta (\mathcal{I} \otimes M) \underline{U} - \tau (\mathcal{I} \otimes N^T) \underline{\Lambda} &= 0, \label{eq:KKT_kronecker2}\\ (\mathcal{I} \otimes \tau K + C \otimes M) \underline{Y} - \tau (\mathcal{I} \otimes N) \underline{U} & = 0. \label{eq:KKT_kronecker3} \end{align} We exploit the relation \begin{equation} \label{eq:kronecker_relation} (W^T \otimes V) \mbox{vec}(Y) = \mbox{vec}(VYW) \end{equation} to rewrite the equations \eqref{eq:KKT_kronecker1} - \eqref{eq:KKT_kronecker3} as \begin{align} \tau M_1 Y + \tau K^T \Lambda + M^T\Lambda C - \tau M_1 \hat{Y} &= 0, \label{eq:system1} \\ \tau \beta M U - \tau N^T \Lambda &= 0, \label{eq:betau} \\ \tau K Y + M Y C^T - \tau N U &= 0. \label{eq:system3} \end{align} The mass matrix $M$ arising from a standard Galerkin method will always be symmetric and can be considered lumped, i.e., diagonal. Therefore, we can eliminate Equation \eqref{eq:betau} by setting $U = \frac{1}{\beta} M^{-1}N^T\Lambda$ in the remaining two equations, \begin{align} \tau M_1 Y + \tau K^T \Lambda + M^T \Lambda C - \tau M_1 \hat{Y} &= 0, \label{eq:orig1}\\ \tau K Y + M Y C^T - \frac{\tau}{\beta} N M^{-1} N^T \Lambda &= 0, \label{eq:orig2} \end{align} which are equivalent to \begin{align} M^{-1} M_1 Y + M^{-1}K^T \Lambda + \Lambda \tilde{C} - M^{-1}M_1 \hat{Y} & = 0, \\ M^{-1} K Y + Y \tilde{C}^T - \frac{1}{\beta} M^{-1}N M^{-1}N^T \Lambda &= 0, \end{align} where $\tilde{C} = \frac{1}{\tau}C$. For now, let us assume that $K = K^T$, but our approach can be easily generalized to non-symmetric $K$ as we will demonstrate later on.
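The vec identity \eqref{eq:kronecker_relation} can be checked in a couple of lines (note that $\mbox{vec}$ stacks columns, i.e., column-major/Fortran order):

```python
# Sketch: verify (W^T ⊗ V) vec(Y) = vec(V Y W) on random matrices,
# with vec taken in column-major (Fortran) order.
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 4))
Y = rng.standard_normal((4, 5))
W = rng.standard_normal((5, 2))

lhs = np.kron(W.T, V) @ Y.flatten(order="F")
rhs = (V @ Y @ W).flatten(order="F")
```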
Now this representation corresponds to \begin{equation} \begin{split} M^{-1}K \begin{bmatrix} Y & \Lambda\end{bmatrix} + \begin{bmatrix} Y & \Lambda\end{bmatrix} \underbrace{\begin{bmatrix} \tilde{C}^T & 0 \\ 0 & \tilde{C} \end{bmatrix}}_{C_1} + M^{-1}M_1 \begin{bmatrix} Y & \Lambda\end{bmatrix} \underbrace{\begin{bmatrix} 0 & \mathcal{I} \\ 0 & 0 \end{bmatrix}}_{I_0} &+ \\ M^{-1}N M^{-1}N^T \begin{bmatrix} Y & \Lambda\end{bmatrix} \underbrace{\begin{bmatrix} 0 & 0 \\ \frac{-\mathcal{I}}{\beta} & 0 \end{bmatrix}}_{D} - \underbrace{\begin{bmatrix} 0 & M^{-1}M_1 \hat{Y} \end{bmatrix}}_{F_1 F_2^T} &= 0. \end{split} \end{equation} We denote $X = \begin{bmatrix} Y & \Lambda\end{bmatrix}$, $A_1 = M^{-1}K$, $A_2 = M^{-1}M_1$, $A_3 = M^{-1}N M^{-1}N^T$, $F_1 = M^{-1}M_1 Y_1$ and $F_2 = [0_{n_T \times r}; Y_2] \in \mathbb{R}^{2n_T \times r}$, i.e., $Y_2$ stacked below an $n_T \times r$ zero block, where $\hat Y \approx Y_1 Y_2^T$ with $Y_1$ and $Y_2$ of low column rank as before. Then we get the desired format from Equation \eqref{eq:matrix_equation} as \begin{align} A_1 X + X C_1 + A_2 X I_0 + A_3XD - F_1 F_2^T = 0, \label{eq:matrix_equation_final} \end{align} where the left-hand coefficient matrices have size $n\times n$ while the right-hand ones have size $2n_T\times 2n_T$, so that $X \in {\mathbb R}^{n\times 2n_T}$. \section{Low rank solution}\label{sec:lowrank} The generalized Sylvester equation in (\ref{eq:matrix_equation_final}) replaces the large Kronecker product based system of equations \eqref{eq:KKT_kronecker1} - \eqref{eq:KKT_kronecker3}. Since the solution matrix $X \in \mathbb{R}^{n\times 2n_T}$ will be dense and potentially very large, to exploit the new setting it is desirable to find an appropriate approximation space and a low-rank reduced matrix approximation $Z\in \mathbb{R}^{p \times 2n_T}$ such that \begin{equation} \label{eq:lowRank_X} X \approx V_p Z, \end{equation} where the orthonormal columns of $V_p \in \mathbb{R}^{n \times p}$ generate the approximation space.
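A small numerical consistency check (random matrices, illustrative only) confirms that the block formulation reproduces the two coupled equations: the first $n_T$ columns of the residual match the state equation and the last $n_T$ columns match the adjoint equation.

```python
# Sketch: verify that A1 X + X C1 + A2 X I0 + A3 X D - F1 F2^T with
# X = [Y, Lambda] reproduces the two coupled KKT equations derived above.
import numpy as np

n, n_T, tau, beta = 3, 4, 0.1, 1e-2
rng = np.random.default_rng(2)
M = np.diag(rng.uniform(1, 2, n))                # lumped mass matrix
K = rng.standard_normal((n, n)); K = K + K.T     # symmetric stiffness
M1 = np.diag(rng.uniform(0, 1, n))
N = np.diag(rng.uniform(1, 2, n))
Yhat = rng.standard_normal((n, n_T))
Y, Lam = rng.standard_normal((n, n_T)), rng.standard_normal((n, n_T))

Minv = np.linalg.inv(M)
A1, A2, A3 = Minv @ K, Minv @ M1, Minv @ N @ Minv @ N.T
Ct = (np.eye(n_T) - np.eye(n_T, k=-1)) / tau     # tilde C
Z0 = np.zeros((n_T, n_T))
C1 = np.block([[Ct.T, Z0], [Z0, Ct]])
I0 = np.block([[Z0, np.eye(n_T)], [Z0, Z0]])
D = np.block([[Z0, Z0], [-np.eye(n_T) / beta, Z0]])

X = np.hstack([Y, Lam])
F = np.hstack([np.zeros((n, n_T)), Minv @ M1 @ Yhat])   # plays the role of F1 F2^T
R = A1 @ X + X @ C1 + A2 @ X @ I0 + A3 @ X @ D - F

# Residuals of the two coupled equations the block form should reproduce
R_state = A1 @ Y + Y @ Ct.T - A3 @ Lam / beta
R_adjoint = A2 @ Y + A1 @ Lam + Lam @ Ct - A2 @ Yhat
```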
With this setting we can construct a reduced version of the matrix equation \eqref{eq:matrix_equation_final}. Let us assume we compute an approximation as in \eqref{eq:lowRank_X}. Then the residual matrix associated with (\ref{eq:matrix_equation_final}) reads \begin{align} \label{eq:residual} R = A_1 V_p Z + V_p Z C_1 + A_2 V_p Z I_0 + A_3 V_p Z D - F_1 F_2^T. \end{align} We impose the Galerkin orthogonality of the residual matrix with respect to the approximation space, which in the matrix inner product is equivalent to writing $V_p^T R = 0$; using $V_p^T V_p = \mathcal{I}_p$, our equation becomes \begin{equation} V_p^T A_1 V_p Z + Z C_1 + V_p^T A_2 V_p Z I_0 + V_p^T A_3 V_p Z D - V_p^T F_1 F_2^T = 0. \end{equation} Let us denote the reduced $p\times p$ coefficient matrices as $A_{1,r} := V_p^T A_1 V_p$, $A_{2,r} := V_p^T A_2 V_p$, $A_{3,r} := V_p^T A_3 V_p$ and set $F_{1,r} = V_p^T F_1 \in \mathbb{R}^{p \times r}$, with $r$ the number of columns of $Y_1$. The resulting reduced equation \begin{equation} \label{matrix_equation_reduced} A_{1,r} Z + \mathcal{I}_p Z C_1 + A_{2,r} Z I_0 + A_{3,r} Z D - F_{1,r} F_2^T = 0 \end{equation} has the same structure as the original matrix equation \eqref{eq:matrix_equation_final} but its size is reduced to $p \times 2n_T$. By exploiting once again the relation in Equation \eqref{eq:kronecker_relation}, we get the small linear system of equations \begin{equation} \label{eq:lineq} \big ( (\mathcal{I}_{2n_T} \otimes A_{1,r}) + (C_1^T \otimes \mathcal{I}_p) + (I_0^T \otimes A_{2,r}) + (D^T \otimes A_{3,r}) \big ) \underline{Z} = \mbox{vec}(F_{1,r}F_2^T), \end{equation} with $\underline{Z} = \mbox{vec}(Z)$. For a small subspace size $p \ll n$ this system of equations is significantly easier to solve and we can either use a direct or an iterative method to do so. If the obtained approximate solution $V_pZ$ is not sufficiently accurate, then the space can be expanded and a new approximation constructed, giving rise to an iterative method.
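The projection framework can be illustrated with a small self-contained sketch: build the reduced matrices, solve the small Kronecker system, and verify the Galerkin condition $V_p^T R = 0$. All coefficient matrices below are random placeholders, not the actual FE matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, nT, p = 40, 4, 6                  # toy sizes, purely illustrative
m = 2 * nT
A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(3))
C1, I0, D = (rng.standard_normal((m, m)) for _ in range(3))
F1 = rng.standard_normal((n, 1))
F2 = rng.standard_normal((m, 1))

# orthonormal basis of the (here: random) approximation space
Vp, _ = np.linalg.qr(rng.standard_normal((n, p)))

# reduced coefficient matrices and right-hand side
A1r, A2r, A3r = (Vp.T @ A @ Vp for A in (A1, A2, A3))
F1r = Vp.T @ F1

# small Kronecker system for vec(Z), cf. the reduced equation
vec = lambda A: A.flatten(order="F")
Kr = (np.kron(np.eye(m), A1r) + np.kron(C1.T, np.eye(p))
      + np.kron(I0.T, A2r) + np.kron(D.T, A3r))
Z = np.linalg.solve(Kr, vec(F1r @ F2.T)).reshape(p, m, order="F")

# Galerkin condition: the residual is orthogonal to range(Vp)
X = Vp @ Z
R = A1 @ X + X @ C1 + A2 @ X @ I0 + A3 @ X @ D - F1 @ F2.T
assert np.linalg.norm(Vp.T @ R) < 1e-8
```

In the actual method the basis $V_p$ is of course built adaptively, as described in the next section, rather than chosen at random.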
{The use of low-rank methods within optimization of large-scale systems has been successfully documented in several articles, and we refer the reader to \cite{stoll2014,dolgov2016fast,cichocki2017tensor,benner2016block} for recent accounts.} \section{Subspace computation} \label{section:comp_specifications1} To construct the projection \eqref{eq:lowRank_X} we need an iterative subspace method, which constructs a relevant subspace for our problem. For this we make use of rational Krylov subspaces \begin{equation} \label{eq:Krylov} V_p(A,v,s) = \mbox{span} \big \{ v, (A+s_1I)^{-1}v, \hdots, \prod_{j=1}^{p-1}(A+s_j I)^{-1}v \big \} \end{equation} with shifts $s_j$ as described in \cite{Simoncini2016}. This approach has proven to be very well suited to solve similarly structured problems as the shifts allow for efficient updates of the subspace using relevant information on the matrices' eigenvalues. As an initial vector we take the right-hand side, $v_0 = F_1$, and to construct the subspace \eqref{eq:Krylov} we employ a tailored strategy to adapt to the different settings. More precisely: i) {\it Case $M_1=M$, $N$ square, full rank.} This corresponds to a setup where desired state and control are both distributed equally on the whole domain $\Omega$. We use the matrix $A = A_1$. We observe that in this case $A_2=I$ and $A_3$ is a diagonal nonsingular matrix. ii) {\it Case $M_1\ne M$, $N$ square, full rank.} This corresponds to a setup where, e.g., the resulting state is only observed on a partial domain. We construct a mixed subspace where we add the following two new vectors in step $k$, \begin{equation} \label{eq:subspace} \{ (A_1 + s^{(1)}_k I)^{-1}v_k, \,\, (A_2 + s^{(2)}_k I)^{-1}v_k \}, \end{equation} so that the space dimension grows by at most two per iteration, instead of one. iii) {\it Case $M_1= M$, $N$ tall.} In this case, we can only control a partial domain, e.g., the boundary. Here, $A_2=I$ while $A_3=M^{-1} N M_b^{-1} N^T$ is not invertible. 
We thus define $A_3(\alpha) = A_3 + \alpha A_1$; this choice is justified below. In this case we use the following two new vectors in step $k$, \begin{equation} \label{eq:subspace3} \{ (A_1 + s^{(1)}_k A_3(\alpha))^{-1}v_k, \,\, ( A_3(\alpha) + s^{(2)}_k A_1)^{-1}v_k \}, \end{equation} and once again the space dimension grows by at most two per iteration, instead of one. iv) {\it Case $M=M_1$, $N$ square, full rank, $K \neq K^T$.} For a number of PDE constraints, like convection-diffusion problems, the stiffness matrix is non-symmetric. In this case, we have to slightly modify the first component $A_1X$ in \eqref{eq:matrix_equation_final} to $A_1 X \begin{bmatrix} \mathcal{I} & 0 \\ 0 & 0 \end{bmatrix} + M^{-1}K^TX \begin{bmatrix} 0 & 0 \\ 0 & \mathcal{I} \end{bmatrix}$. Here, again we construct a mixed subspace where we add two new vectors in step $k$, \begin{equation} \{ (A_1 + s_k^{(1)}I)^{-1}v_k, \,\, (M^{-1}K^T + s_k^{(2)}I)^{-1}v_k \}. \end{equation} A strategy similar to (ii)-(iv) was successfully adopted in \cite{Powell2017} for a multiterm linear matrix equation in a different context. The effectiveness of the mixed approaches in (ii)-(iii) relies on the generated space being rich in eigencomponents of both matrices. In (ii), where $A_2$ is diagonal and singular with a bounded and tight nonzero spectral interval, good spectral homogeneity in the space construction can be reached by shifting the matrix $A_2$ by some $\alpha_2$ so that the spectra of $A_1$ and $A_2 + \alpha_2 I_n$ are contained in the same interval. Hence, we introduce a shift parameter $\alpha_2$ in (\ref{eq:matrix_equation_final}) to get \begin{equation} \label{eq:shift} A_1 X + X (C_1 - \alpha_2 I_0) + (A_2 + \alpha_2 I_n) X I_0 + A_3 X D - F_1 F_2^T = 0. \end{equation} With these premises, a good choice for $\alpha_2$ is a rough approximation to the largest eigenvalue of $A_1$.
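A generic construction of a basis for the rational Krylov subspace \eqref{eq:Krylov} might be sketched as follows. This is a dense toy version with fixed shifts; a production code would reuse sparse factorizations and select the shifts $s_j$ adaptively from spectral information, as described above:

```python
import numpy as np

def rational_krylov_basis(A, v, shifts):
    """Orthonormal basis for span{v, (A+s_1 I)^{-1}v, (A+s_2 I)^{-1}(A+s_1 I)^{-1}v, ...}.

    Rational-Arnoldi-style sketch: each resolvent is applied to the latest
    basis vector, which generates the same space for distinct shifts.
    """
    n = A.shape[0]
    q = v / np.linalg.norm(v)
    V = q.reshape(n, 1)
    for s in shifts:
        w = np.linalg.solve(A + s * np.eye(n), V[:, -1])
        for _ in range(2):                 # Gram-Schmidt, repeated for stability
            w -= V @ (V.T @ w)
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:                    # breakdown: nothing new to add
            break
        V = np.column_stack([V, w / nrm])
    return V

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 30))          # random stand-in for A_1
v = rng.standard_normal(30)                # stand-in for the start vector F_1
V = rational_krylov_basis(A, v, shifts=[0.5, 1.0, 2.0])
assert np.allclose(V.T @ V, np.eye(V.shape[1]), atol=1e-10)
```

For the mixed strategies (ii)-(iv), the loop body would alternate solves with the two relevant coefficient matrices, appending up to two vectors per step.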
Due to the good properties of the transformed spectrum, the shifts were computed by only using the projection of the matrix $A_1$ onto the generated space; more details on the shift selection can be found in \cite{simoncini2011}. In the cases i) and iv), $A_2$ is not singular but still the spectra of $A_1$ and $A_2$ differ greatly. Applying the same strategy to shift $A_2$ as in \eqref{eq:shift} also proved to be beneficial in these cases. In (iii), the structure and singularity of $A_3$ were significantly more challenging, because $\|A_3\|$ inversely depends on $\beta$, hence the spectrum of a shifted version of $A_3$ may not be enclosed in that of $A_1$. We propose to consider the equivalent problem \begin{equation} A_1 X(I-\alpha_3 D) + X C_1 + A_2 X I_0 + (A_3+\alpha_3 A_1) X D - F_1 F_2^T = 0, \end{equation} where $\alpha_3 = \frac 1 {\sqrt{\beta}}\|A_3\|_F/\|A_1\|_F$. With this formulation, using the projection of the pair $(A_1, A_3+\alpha_3 A_1)$ onto the current approximation space to determine the next shifts during the iteration proved particularly successful. {\it Further subspace reduction.} By computing $X \approx V_p Z$ we want to reduce storage requirements and computation time. This goal is only achieved if the approximation space dimension remains considerably smaller than $n_T$. By enriching the subspace as outlined above, however, the space dimension may in principle grow up to $n$ in case the solution is not sufficiently accurate. To keep the subspace as small as possible when it is not optimal we include a truncation scheme which bounds the subspace dimension to less than $n_T$. Thus, in the worst case the solution will have maximum rank. To reduce the dimension we compute a singular value decomposition of $Z = U \Sigma V^T$, with $\Sigma={\rm diag}(\sigma_1, \ldots, \sigma_p)$ where $\sigma_1 \geq \cdots \geq \sigma_p$ are the singular values of $Z$.
In case some of the singular values are too small, say $\sigma_i < \varepsilon$ for some $i$, it is clear that some of the subspace information is not needed for computing a good approximation to the solution. If this occurs we truncate the subspace with $\tilde{p} = i-1$ by setting \begin{equation} V_{\tilde{p}} = V_p U_{\tilde{p}}, \end{equation} with $U_{\tilde{p}}$ denoting the first $\tilde{p}$ columns of $U$. Note that the next subspace additions \eqref{eq:subspace} are still conducted using $v_k$ from the latest generated vectors in the original subspace $V_p$, and orthogonalized with respect to the reduced subspace. We refer the reader to the discussion of Table~\ref{table:memory} for an illustration of the achievable memory savings by adopting this strategy. \section{Residuals and stopping criteria} \label{section:comp_specifications2} To determine a computable stopping criterion we monitor the residuals of the two equations \eqref{eq:orig1} and \eqref{eq:orig2}, \begin{align} R_1 &= \tau M_1 Y + \tau K^T \Lambda + M^T \Lambda C - \tau M_1 \hat{Y}, \label{eq:res1} \\ R_2 &= \tau K Y + M Y C^T - \frac{\tau}{\beta} N M^{-1}N^T \Lambda, \label{eq:res2} \end{align} which are closely related to the original system. Computing the residuals poses a bottleneck in this scheme as straightforward computation is time consuming and would require forming the full solution $[Y \Lambda] = V_p Z$. To avoid this and substantially speed up the computation time, we rely on a low-rank representation of the residual and calculate its norm making use of an updated QR method. 
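The SVD-based subspace truncation described in the previous section can be sketched as follows (toy data; the threshold value is illustrative):

```python
import numpy as np

def truncate_subspace(Vp, Z, eps=1e-10):
    """Drop basis directions whose singular values in Z fall below eps.

    Returns the rotated, smaller basis V_ptilde = Vp @ U_ptilde and the
    correspondingly compressed factor, so that V_ptilde @ Z_t ~ Vp @ Z.
    """
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    p_t = int(np.count_nonzero(s >= eps))   # keep sigma_1 >= ... >= sigma_{p_t}
    V_trunc = Vp @ U[:, :p_t]               # V_ptilde = Vp U_ptilde
    Z_trunc = s[:p_t, None] * Vt[:p_t, :]   # Sigma_ptilde V^T (truncated)
    return V_trunc, Z_trunc

rng = np.random.default_rng(5)
n, p, m = 100, 20, 40
Vp, _ = np.linalg.qr(rng.standard_normal((n, p)))
# build a Z of numerical rank 8 (hypothetical data)
Z = rng.standard_normal((p, 8)) @ rng.standard_normal((8, m))
V_t, Z_t = truncate_subspace(Vp, Z, eps=1e-10)
assert V_t.shape[1] == 8
assert np.allclose(V_t @ Z_t, Vp @ Z, atol=1e-8)
```

The rotated basis stays orthonormal because $U_{\tilde p}$ has orthonormal columns, so subsequent subspace additions can be orthogonalized against it directly.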
The matrix residuals can be rewritten as \begin{align} R_1 &= \tau M_1 V_p Z_Y + \tau K^T V_p Z_{\Lambda} + M V_p Z_{\Lambda} C - \tau M_1 \hat{Y} \\ & = \begin{bmatrix} \tau M_1 V_p & \tau K^T V_p & M V_p & -\tau M_1 Y_1 \end{bmatrix} \begin{bmatrix} Z_Y \\ Z_{\Lambda} \\ Z_{\Lambda} C \\ Y_2 \end{bmatrix}, \\ R_2 &= \tau K V_p Z_Y + M V_p Z_Y C^T - \frac{\tau}{\beta} N M^{-1} N^T V_p Z_{\Lambda} \\ & = \begin{bmatrix} \tau K V_p & M V_p & - \frac{\tau}{\beta} N M^{-1} N^T V_p \end{bmatrix} \begin{bmatrix} Z_Y \\ Z_Y C^T \\ Z_{\Lambda} \end{bmatrix}, \end{align} where $Z_Y$ denotes the $Y$-component of $Z$ and $Z_{\Lambda}$ the component associated with $\Lambda$. Let either of the two quantities be denoted by \begin{equation} R = R_L R_R^T. \end{equation} For $R_1$ with a rank-one $\hat{Y}$ the factors have dimensions $R_L \in \mathbb{R}^{n \times (3p+1)}$ and $R_R \in \mathbb{R}^{n_T\times (3p+1)}$, while for $R_2$ they have $3p$ columns. We consider a reduced QR decomposition of the tall and skinny matrix $R_L$, \begin{equation} R_L = Q_1 R_1, \quad {\rm with} \quad Q_1 \in \mathbb{R}^{n \times (3p+1)},\quad R_1 \in \mathbb{R}^{(3p+1)\times (3p+1)}. \end{equation} At each iteration new columns are added to the matrices $R_L$ and $R_R$. To further reduce the computational cost, we update the QR decomposition so as to avoid a new factorization from scratch at each iteration. Assume that we have a current subspace of size $p$, so that $V_p = [v_1,\hdots, v_p]$ and we add another vector $v_{p+1}$ to the space. Thus, three new columns are added to $R_L$, which we append at the end of $R_L$, while keeping the same ordering for $R_R$; each new column is handled by the same one-column update. Adding a new column $v_{p+1}$ gives \begin{equation} \begin{bmatrix} R_L & v_{p+1} \end{bmatrix} = \begin{bmatrix} Q_1 R_1 & v_{p+1}\end{bmatrix} = \begin{bmatrix} Q_1 & q_2 \end{bmatrix} \begin{bmatrix} R_1 & R_{1,2} \\ 0 & R_{2,2} \end{bmatrix}, \end{equation} where only the two column vectors $q_2$ and $R_{1,2}$ and a scalar value $R_{2,2}$ have to be computed.
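The column-appending QR update just outlined, together with the factor-based norm evaluation derived next, might be sketched in numpy as follows (random toy factors standing in for $R_L$ and $R_R$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, nT, k = 200, 30, 12
RL = rng.standard_normal((n, k))     # tall-and-skinny left residual factor
RR = rng.standard_normal((nT, k))    # right residual factor
Q1, R1 = np.linalg.qr(RL)            # reduced QR: Q1 is n x k, R1 is k x k

# -- append one new column v without refactorizing from scratch --
v = rng.standard_normal(n)
R12 = Q1.T @ v
q2_hat = v - Q1 @ R12                # component of v orthogonal to range(Q1)
R22 = np.linalg.norm(q2_hat)
q2 = q2_hat / R22
Q1 = np.column_stack([Q1, q2])
R1 = np.block([[R1, R12[:, None]],
               [np.zeros((1, k)), np.array([[R22]])]])
assert np.allclose(Q1 @ R1, np.column_stack([RL, v]))

# -- Frobenius norm of R = RL RR^T without ever forming R --
RR = np.column_stack([RR, rng.standard_normal(nT)])   # right factor grown too
norm_cheap = np.linalg.norm(R1 @ RR.T, "fro")         # uses only small factors
norm_full = np.linalg.norm(np.column_stack([RL, v]) @ RR.T, "fro")
assert np.isclose(norm_cheap, norm_full)
```

The cheap norm works because left-multiplication by the orthonormal-column factor $Q_1$ leaves the Frobenius norm unchanged.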
We have \begin{equation} Q_1 R_{1,2} + q_2 R_{2,2} = v_{p+1}. \label{eq:newQR} \end{equation} Setting $R_{1,2} = Q_1^T v_{p+1}$ and constructing the vector \begin{equation} \hat{q}_2 = v_{p+1} - Q_1(Q_1^T v_{p+1}) = v_{p+1} - Q_1 R_{1,2}, \end{equation} we can set $q_2 = \frac{\hat{q}_2}{\|\hat{q}_2\|}$ and thus $R_{2,2} = \|\hat{q}_2\|$. With this, Equation \eqref{eq:newQR} holds and the new column $q_2$ is orthogonal to $Q_1$, \begin{equation} Q_1^T q_2 = Q_1^T \frac{\hat{q}_2}{R_{2,2}} = \frac{Q_1^T v_{p+1} - Q_1^T Q_1 Q_1^T v_{p+1}}{R_{2,2}} = 0. \end{equation} Now we can completely avoid forming the full residuals, as with $R = R_L R_R^T = Q_1 R_1 R_R^T$ the desired Frobenius norm of the residual becomes \begin{equation} \|R\|_F = \sqrt{\mbox{trace}(R^T R)} = \sqrt{\mbox{trace}(R_R R_1^T Q_1^T Q_1 R_1 R_R^T)} = \sqrt{\mbox{trace}(R_R R_1^T R_1 R_R^T)}. \label{eq:QR_final} \end{equation} To compute the residual norms in this form only a very small matrix of size $n_T \times n_T$ has to be computed. Both of these residuals can be computed without forming the approximations $Y$ and $\Lambda$. The use of the trace square root may lead to inaccuracy at the level of the square root of the machine epsilon. However, our convergence tolerance is in general significantly larger, say $10^{-4}$, hence these problems were not encountered. This computational procedure allows us to cheaply compute both $\|R_1\|_F$ and $\|R_2\|_F$, that is, the absolute residual norms. Following the guidelines in \cite{Higham2002} for linear matrix equations, we also monitor a scaled backward error norm of the two equations, that is \begin{equation} \begin{aligned} \rho_3 &= \frac{\|R_1\|_F}{\tau(\|M_1\|_F\|Z_Y\|_F + \|K\|_F \|Z_{\Lambda}\|_F + \|M_1\|_F \|\hat{Y}\|_F) + \|M\|_F\|Z_{\Lambda}\|_F\|C\|_F} \\ &+ \frac{\|R_2\|_F}{\tau(\|K\|_F \|Z_Y\|_F + \frac{1}{\beta}\|M\|_F\|Z_{\Lambda}\|_F) + \|M\|_F\|Z_Y\|_F\|C\|_F}.
\label{eq:resFull} \end{aligned} \end{equation} which takes into account the data order of magnitude. Summarizing, we stop our iteration whenever $$ \max \{ \|R_1\|_F, \|R_2\|_F, \rho_3 \} \le {\tt tol} $$ where {\tt tol} is a prescribed tolerance. \section{Numerical Results}\label{sec:num} We now present the performance and flexibility of our method on multiple examples for different PDE-constrained optimization problems. The spatial FE-discretizations of the PDE operator were conducted using the deal.II framework \cite{deal2007} with Q1 finite elements and the method described in the previous section was implemented in {{\sc matlab }} \, R2018b. All experiments were run on a desktop computer with an {{\sc Intel }} \, Core i7-4770 Quad-core processor running at 4 $\times$ 3400 MHz with 32 GB of RAM. We will show robustness with respect to the discretization sizes $n$ and $n_T$ as well as the control parameter $\beta$. Furthermore, we demonstrate the applicability of our method to multiple different setups such as partial observation of the desired state, boundary control and a different non-symmetric PDE-operator. {Other low-rank solvers.} To emphasize the competitiveness of our new approach we compare it with another low-rank method aimed at the same problem class. The idea of exploiting the system structure to avoid forming a large system of equations and finding a low-rank way to compactly represent the solution is not an entirely new approach. Earlier, in \cite{stoll2014} the authors developed a technique rewriting the problem in a low-rank context and solving it with a preconditioned Krylov subspace solver, namely {{\sc Minres }}\ introduced in \cite{paige1975}. We denote this solver as {{\sc lrminres }}\ (low-rank {{\sc Minres }}) \footnote[1]{The {{\sc lrminres }}\ MATLAB code used for our comparison is available under \hyperlink{https://www.tu-chemnitz.de/mathematik/wire/pubs/PubCodes.zip}{https://www.tu-chemnitz.de/mathematik/wire/pubs/PubCodes.zip}.}. 
Here, the variables are represented in a low-rank matrix format rather than vectors, as $Y = W_Y V_Y^T$, $U = W_U V_U^T$ and $\Lambda = W_{\Lambda} V_{\Lambda}^T$ and the system of equations \eqref{eq:system1} - \eqref{eq:system3} is split into a matrix product as \begin{equation} \begin{bmatrix} \tau M_1 W_Y & \tau K^T W_{\Lambda} & M W_{\Lambda} \end{bmatrix} \begin{bmatrix} V_Y^T \\ V_{\Lambda}^T \\ V_{\Lambda}^T C \end{bmatrix} = \tau M_1 \hat{Y} \end{equation} for Equation \eqref{eq:system1} and analogously for Equations \eqref{eq:betau} and \eqref{eq:system3}. From here, the system of equations is solved with a tailored version of {{\sc Minres }}. As opposed to the original version of {{\sc Minres }}, the residual and all other upcoming variables are formulated as low-rank matrices rather than vectors and subsequently truncated to keep memory requirements low. Combined with exploiting the structure of the low-rank equation system in the upcoming matrix vector products and preconditioning, this method provides a memory-efficient and fast alternative to established approaches for solving PDE constrained optimization problems. A bottleneck in this approach, however, is the setup of a preconditioner. To construct the necessary low-rank preconditioner, a matrix equation needs to be solved which significantly affects the overall performance. Therefore, our approach of directly starting from a matrix equation formulation proves to be superior in many setups. \subsection{Fully observed heat equation} As a first example PDE we use the heat equation with a full observation, thus $\Omega_1 = \Omega$ and $M_1 = M$. This leads to a simplified version of Equation \eqref{eq:matrix_equation} as \begin{equation} A_1X + X C_1 + X I_0 + A_3 X D - Y_1 F_2^T = 0.
\end{equation} First, we will use this simple setup to investigate the convergence behavior and our choice of stopping criteria by comparing the results to a solution obtained by a direct solver accurate up to machine precision. The constant-in-time desired state $\hat{Y}$ is displayed in Figure \ref{figure:desired_state}. This constant desired state leads to a rank-1 low-rank representation of the right-hand side. The reference solution was obtained by solving the linear equation system of a small setup with MATLAB's \textit{backslash} operator. We used a discretization of $n = 1089$ and $n_T = 100$. The monitored quantities \eqref{eq:res1}, \eqref{eq:res2}, \eqref{eq:resFull} and the actual relative errors to the reference solution are displayed in Figure \ref{figure:convergence}. We see that the monitored quantities are a good estimate of the magnitude of the errors, as $R_1$ stays just above the actual errors. Inserting the direct solution into Equation \eqref{eq:matrix_equation_final} gives a residual of $5.11 \cdot 10^{-9}$. Thus, the different scaling of the matrix equation prohibits further accuracy gains once a high accuracy is reached. This effect is reflected in Figure \ref{figure:convergence} as the last iterations do not provide further accuracy gains. \begin{figure}[htb] \centering \includegraphics[width=0.9\textwidth]{convergence.jpg} \caption{Observed stopping criteria and actual error progression.} \label{figure:convergence} \end{figure} \begin{example}\label{ex:constant-in-time} { \rm We continue with the above simple example, i.e., the heat equation with full observation, $\Omega_1 = \Omega$ and $M_1 = M$ and a constant-in-time desired state. We now investigate the performance of our method with respect to time and space discretization. We vary the number of time steps from 20 to 2500 and the number of spatial discretization nodes from $n=1024$ to $n=263169$, which corresponds to up to a total of roughly 78 million degrees of freedom.
Here, again we fix the control parameter to $\beta = 10^{-4}$. As seen in Table~\ref{table:time_varying}, increasing the discretization size barely impacts the resulting subspace sizes. Additionally, the time needed to solve the optimization problems increases only slowly. } \end{example} \begin{table} \centering \begin{tabular}{l l l l l l l l l l r} \toprule $n$ & \multicolumn{2}{c}{1024} & \multicolumn{2}{c}{4225} & \multicolumn{2}{c}{16641} & \multicolumn{2}{c}{66049} &\multicolumn{2}{c}{263169} \\ $n_T$ & $p$ & time(s) & $p$ & time(s) & $p$ & time(s) & $p$ & time(s) & $p$ & time(s)\\ \midrule 20 & 8 & 0.09 & 11 & 0.10 & 11 & 0.26 & 11 & 1.02 & 12 & 5.82 \\ 100 & 7 & 0.05 & 11 & 0.24 & 11 & 0.29 & 11 & 3.45 & 12 & 5.90 \\ 500 & 6 & 0.05 & 11 & 0.23 & 11 & 0.42 & 11 & 1.16 & 13 & 6.61 \\ 2500 & 6 & 0.19 & 11 & 0.91 & 11 & 1.09 & 10 & 1.61 & 15 & 9.52 \\ \bottomrule \end{tabular} \caption{Example \ref{ex:constant-in-time}. Subspace size $p$ and required time for different discretizations.} \label{table:time_varying} \end{table} \begin{table}[H] \centering \begin{tabular}{l l | r r r |r r r |r r r} \toprule $n$ & & \multicolumn{3}{c}{1089} & \multicolumn{3}{c}{4225} & \multicolumn{3}{c}{16641} \\ $\beta$ & method & p & r & time(s) & p & r & time(s) & p & r & time(s) \\ \midrule $10^{-1}$ & {{\sc sys2mateq }}\ & 6 & 6 & 0.09 & 5 & 5 & 0.05 & 5 & 5 & 0.14 \\ & {{\sc lrminres }}\ & 20 & 17 & 4.00 & 22 & 21 & 7.49 & 22 & 21 & 31.59 \\ $10^{-3}$ & {{\sc sys2mateq }}\ & 7 & 7 & 0.07 & 7 & 7 & 0.09 & 7 & 7 & 0.22 \\ & {{\sc lrminres }}\ & 11 & 7 & 1.80 & 14 & 11 & 5.46 & 11 & 10 & 24.75 \\ $10^{-5}$ & {{\sc sys2mateq }}\ & 14 & 13 & 0.19 & 14 & 13 & 0.23 & 14 & 13 & 0.53 \\ & {{\sc lrminres }}\ & 7 & 6 & 1.08 & 7 & 6 & 4.75 & - & - & - \\ \hline \end{tabular} \caption{Example~\ref{ex:comparisons}. Comparison between the new {{\sc sys2mateq }} and another low-rank scheme {{\sc lrminres }} regarding required memory and time for $\hat{Y}$ with rank 1.
\label{table:minres}} \end{table} \begin{example}\label{ex:comparisons} {\rm With the same data as in Example~\ref{ex:constant-in-time} we report on performance comparisons with the low-rank preconditioned {{\sc Minres }}\ ({{\sc lrminres }}) proposed in \cite{stoll2014} and introduced above. We fixed the number of time steps to $n_T = 100$ and computed solutions for different discretization sizes and control parameters $\beta$. The maximum discretization size here is $n=16641$. The results in Table~\ref{table:minres} reveal that our method's performance is superior regarding both required memory and CPU time. Here, our method is labeled as {{\sc sys2mateq }}. {The column denoted by $p$ states the subspace size for {{\sc sys2mateq }}, while for {{\sc lrminres }}\ it denotes the maximum rank per vector needed during the iterations, of which up to 15 are required. The column $r$ states the rank of the final solution in both methods. Note that even though sometimes the ranks achieved by {{\sc lrminres }}\ are smaller, the required memory to store the solution is still greater. This is because the {{\sc lrminres }}\ scheme needs to store a low-rank representation $U_1 U_2^T$ for each of the three variables as opposed to {{\sc sys2mateq }}\ where the same subspace $V_p$ is used for all variables. Additionally, during the iterations our method needs to store only one subspace of size $p$, whereas {{\sc lrminres }}\ requires storing multiple vectors of size $p$. Also note that our scheme} does not rely on the time-consuming computation of a preconditioner as in {{\sc lrminres }}\ which requires the solution of a matrix equation that gets increasingly difficult for large spatial discretizations combined with small $\beta$. For the last entry of Table \ref{table:minres} {{\sc lrminres }}\ did not reach convergence.
With the same settings as before, we now raise the rank of the desired state $\hat{Y}$ to 6 -- leading to a larger rank matrix $F_1 F_2^T$ -- and compare both methods once again. Table \ref{table:minres2} displays the results as before. Again, our method successfully converges with small subspace sizes within a very short amount of time, whereas {{\sc lrminres }}\ was considerably slower, especially for larger problem sizes, and did not converge for the largest discretization with $\beta$ being very small. \begin{table}[htb] \centering \begin{tabular}{l l |r r r |r r r |r r r } \toprule $n$ & & \multicolumn{3}{c}{1089} & \multicolumn{3}{c}{4225} & \multicolumn{3}{c}{16641} \\ $\beta$ & method & $p$ & r & time(s) & $p$ & r & time(s) & $p$ & r & time(s) \\ \midrule $10^{-1}$ & {{\sc sys2mateq }}\ & 21 & 21 & 0.38 & 21 & 21 & 0.47 & 19 & 19 & 0.72 \\ & {{\sc lrminres }}\ & 29 & 22 & 20.77 & 27 & 25 & 13.38 & 29 & 29 & 23.63 \\ $10^{-3}$ & {{\sc sys2mateq }}\ & 31 & 31 & 1.08 & 30 & 30 & 1.09 & 30 & 29 & 1.77 \\ & {{\sc lrminres }}\ & 15 & 11 & 2.22 & 17 & 11 & 11.19 & 19 & 15 & 31.05 \\ $10^{-5}$ & {{\sc sys2mateq }}\ & 49 & 43 & 3.75 & 49 & 44 & 3.97 & 51 & 41 & 5.88 \\ & {{\sc lrminres }}\ & 11 & 8 & 1.49 & 11 & 9 & 6.14 & - & - & - \\ \end{tabular} \caption{Example~\ref{ex:comparisons}. Comparison of {{\sc sys2mateq }} and {{\sc lrminres }} regarding required memory and CPU time for rank($\hat{Y}$)=6. \label{table:minres2}} \end{table} We are also interested in the memory savings our method provides. Therefore, we monitor the memory consumption of {{\sc sys2mateq }}, {{\sc lrminres }}\ and a full rank {{\sc Minres }}\ method for the same set-ups displayed in Table~\ref{table:minres2}. For {{\sc sys2mateq }}\ we monitor the memory needed to construct the subspace, the reduced solution and the setup of the reduced equation system \eqref{eq:lineq}. For {{\sc lrminres }}\ we monitor the memory for all vectors that are used.
For comparison we additionally report the memory consumed by a standard {{\sc Minres }} method only to store the required vectors, not taking into account the system matrix and preconditioner. The results are shown in Table~\ref{table:memory}. The quantities refer to the overall memory consumption in megabytes during the process of solving the equation system with the respective method. We see that even with constructing the reduced equation system \eqref{eq:lineq} the memory requirements stay well below those of a full-rank approach.} \end{example} \begin{table}[htb] \centering \begin{tabular}{l l | r r r } \toprule & & \multicolumn{3}{c}{Memory (MB)} \\ n & $\beta$ & {{\sc sys2mateq }}\ & {{\sc lrminres }}\ & {\sc minres} \\ \midrule 1089 & 1e-1 & 4.46 & 8.11 & 15.39\\ & 1e-3 & 8.06 & 6.14 & 15.39\\ & 1e-5 & 17.58 & 4.54 & 15.39\\ 4225 & 1e-1 & 8.58 & 19.21 & 60.46\\ & 1e-3 & 12.57 & 19.35 & 60.46\\ & 1e-5 & 24.36 & 16.15 & 60.46\\ 16641 & 1e-1 & 23.50 & 71.74 & 241.84\\ & 1e-3 & 31.86 & 80.56 & 241.84 \\ & 1e-5 & 51.91 & 205.85 & 241.84\\ \end{tabular} \caption{Example~\ref{ex:comparisons}. Memory requirements of all considered methods. \label{table:memory}} \end{table} \subsection{Partial observation on heat equation} Until now we only investigated the case where $M = M_1$ and therefore the matrix $A_2$ is the identity. One highly interesting and challenging problem is the optimization under partial observation, meaning the desired state is only of interest on a partial domain $\Omega_1 \subseteq \Omega$. This leads to a singular matrix $M_1$, which further increases the difficulty of the PDE-constrained optimization problem. When we set up the rational Krylov subspace with only $A_1$ for this setting, we do not reach convergence. Therefore, we add up to two new vectors in each iteration as outlined in Section \ref{section:comp_specifications1}.
\begin{example}\label{ex:partial_obs} {\rm We solved the problem with control parameter $\beta = 10^{-4}$ and 100 time steps on a grid of {1089} spatial DoFs. In Figure~\ref{figure:desired_state} we see the desired state for this problem and the white area is an example of $100$ non-observed nodes, {roughly 10\%}. Hence, the corresponding entries in $M_1$ are set to 0. Figures~\ref{figure:partial_state} and \ref{figure:partial_control} show the resulting state and control respectively for a fixed time step $t=25$. Table~\ref{table:partial} shows the results when increasing the number of unobserved nodes from $n_0 = 0$ to $n_0 = 900$ which is about 90\% of the nodes. Here, the constructed subspaces are not optimal. Thus, we make use of the truncation modification outlined in Section \ref{section:comp_specifications1}. We see that the time and iterations needed to solve this more challenging problem are higher than for the fully observed case in the table's first row. For the non-truncated case with full $V_p$, the columns denoted by $p$ for the subspace size and $r$ for the solution rank show a discrepancy which disappears with stricter truncation. When truncating with a tolerance of $10^{-10}$, both values coincide in all cases. In some cases applying this truncation greatly increases the number of iterations needed to find a solution, but in return the memory reduction is greater. Thus, this option can be toggled to reflect the desired behavior. Unfortunately, the {{\sc lrminres }}\ scheme from \cite{stoll2014} is not adapted for the case of $M \neq M_1$. Hence, we have no other low-rank method against which to compare the performance of our method.
} \end{example} \begin{figure}[htb] \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=0.9\textwidth]{desired_state.jpg} \caption{Partially observed desired state} \label{figure:desired_state} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=0.9\textwidth]{state_solution.jpg} \caption{State solution for partial observation} \label{figure:partial_state} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=0.9\textwidth]{control_solution.jpg} \caption{Control solution for partial observation} \label{figure:partial_control} \end{subfigure} \caption{Example \ref{ex:partial_obs}. Solution at a fixed time instant for a partial observation. \label{figure:partial}} \end{figure} \begin{table}[htb] \begin{tabular}{ l r r r r r r r r r r r r } \toprule & \multicolumn{4}{c}{full $V_p$} & \multicolumn{4}{c}{truncated $V_p$, $10^{-12}$} & \multicolumn{4}{c}{truncated $V_p$, $10^{-10}$} \\ $n_0$ & time(s) & iters & $p$ & $r$ & time(s) & iters & $p$ & $r$ & time(s) & iters & $p$ & $r$ \\ \midrule 0 & 0.06 & 7 & 7 & 7& 0.05 & 7 & 7 & 7& 0.06 & 7 & 7 & 7\\ 100 & 0.67 & 19 & 31 & 29& 0.67 & 19 & 30 & 29& 0.69 & 19 & 23 & 23\\ 300 & 1.46 & 26 & 42 & 34& 1.41 & 26 & 35 & 34& 1.20 & 28 & 26 & 26\\ 500 & 2.55 & 32 & 51 & 37& 2.52 & 34 & 38 & 36& 24.41 & 142 & 29 & 29\\ 700 & 1.50 & 26 & 42 & 37& 1.59 & 27 & 38 & 36& 2.46 & 42 & 29 & 29\\ 900 & 0.82 & 21 & 34 & 34& 0.84 & 21 & 33 & 33& 0.75 & 21 & 25 & 25\\ \bottomrule \end{tabular} \caption{Example \ref{ex:partial_obs}. Results for different levels of partial observation. \label{table:partial}} \end{table} \subsection{Boundary Control} Another setup of high interest is having a non-distributed control. Here, we take a boundary control problem as an example. This leads to the control $u$ not being distributed on $\Omega$ but only on a subdomain $\Omega_b \subset \Omega$. 
Therefore, we get a smaller mass matrix $M_b \in \mathbb{R}^{n_b \times n_b}$ associated with $U$ present on $n_b < n$ spatial nodes, and $N \in \mathbb{R}^{n \times n_b}$ is now rectangular (tall). Thus, we have a modification of Equation \eqref{eq:matrix_equation_final} as \begin{equation} A_1 X + X C_1 + X I_0 + A_3 X D - F_1 F_2^T = 0 \end{equation} with $A_1 = M^{-1}K$ and $A_3 = M^{-1}N M_b^{-1} N^T$. Here, the subspace we compute is derived from $A_1$ and $A_3$ as in \eqref{eq:subspace3}. \begin{example}\label{ex:bc} {\rm As before, we use $n_T = 100$ time steps and a convergence threshold of $10^{-4}$ for this setup. Results with a constant-in-time desired state for different levels of spatial discretization and different values for $\beta$ are displayed in Table \ref{table:minresbc}. {{\sc sys2mateq }}\ produces robust results with respect to the discretization size, as we see only a small increase in the ranks across discretizations. Additionally, the results are acquired within a short amount of time even for small $\beta$. For larger discretization sizes the required time grows only slightly, as opposed to {{\sc lrminres }}, where time requirements increase substantially.
} \end{example} \begin{table}[H] \centering \begin{tabular}{l l | r r r | r r r | r r r} \toprule $\quad n$ & & \multicolumn{3}{c}{1089} & \multicolumn{3}{c}{4225} & \multicolumn{3}{c}{16641} \\ $\beta$ & method & p & rank & time(s) & p & rank & time(s) & p & rank & time(s) \\ \midrule $10^{-1}$ & {{\sc sys2mateq }}\ & 72 & 47 & 3.92 & 80 & 47 & 7.33 & 80 & 47 & 16.35 \\ & {{\sc lrminres }}\ & 29 & 15 & 29.47 & 28 & 16 & 113.99 & 28 & 16 & 496.70 \\ $10^{-3}$ & {{\sc sys2mateq }}\ & 64 & 46 & 3.68 & 68 & 46 & 5.08 & 74 & 46 & 14.35 \\ & {{\sc lrminres }}\ & 25 & 12 & 12.70 & 24 & 14 & 35.53 & 24 & 14 & 158.82 \\ $10^{-5}$ & {{\sc sys2mateq }}\ & 104 & 45 & 12.36 & 116 & 44 & 18.95 & 107 & 44 & 28.48 \\ & {{\sc lrminres }}\ & 21 & 16 & 34.20 & 21 & 16 & 142.83 & 20 & 16 & 694.93 \\ \bottomrule \end{tabular} \caption{Example \ref{ex:bc}. Comparisons between the new {{\sc sys2mateq }}\ and {{\sc lrminres }}\ regarding required memory and time for a boundary control problem.} \label{table:minresbc} \end{table} \subsection{Non-symmetric convection-diffusion PDE} Until now we only investigated the performance of our method under the constraint of a heat equation. However, our method works just as well for more challenging PDEs such as the convection-diffusion equation, \begin{equation} \dot{y} = \varepsilon \nabla^2 y - v \cdot \nabla y + u, \end{equation} with diffusion coefficient $\varepsilon$ and velocity field $v$. When $\varepsilon \ll 1$, the equation is convection-dominated and solving it is a challenging task. As the stiffness matrix $K$ of the resulting equation system is non-symmetric, we have to adjust the matrix equation accordingly. Thus, we get another modification of Equation \eqref{eq:matrix_equation_final} for this setup as \begin{align} A_1 X \begin{bmatrix} \mathcal{I} & 0 \\ 0 & 0 \end{bmatrix} + M^{-1} K^T X \begin{bmatrix} 0 & 0 \\ 0 & \mathcal{I} \end{bmatrix} + X C_1 + A_2 X I_0 + A_3 X D - F_1 F_2^T = 0.
\end{align} \begin{example}\label{ex:cd} {\rm We consider Example 3.1.4 from \cite{elman2005} with a recirculating wind $v(x,y) = (2y(1-x^2) , -2x(1-y^2))$ as the underlying model. The velocity field $v$ is displayed in Figure~\ref{figure:wind}. In Figures~\ref{figure:CD_state}-\ref{figure:CD_control} the desired state and a snapshot of a control solution for $\beta = 10^{-5}$ are displayed for $n=4225$ and $n_T = 100$. The {{\sc sys2mateq }}\ method produces reliable results across a range of values of $\varepsilon$ and $\beta$, as reported in Table~\ref{table:CD}. Even very small values of $\varepsilon$ do not pose a problem for the solver. } \end{example} \begin{figure}[htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{wind.jpg} \caption{Velocity field $v$} \label{figure:wind} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\textwidth]{state_CD.jpg} \caption{Desired state} \label{figure:CD_state} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\textwidth]{control_CD.jpg} \caption{Control solution} \label{figure:CD_control} \end{subfigure} \caption{Example~\ref{ex:cd}. Solution at a fixed time instant for the convection-diffusion problem.
\label{figure:CD}} \end{figure} \begin{table}[htb] \centering \begin{tabular}{l l |l r r |l l r|l r r} \toprule $n$ & & \multicolumn{3}{c}{4225} & \multicolumn{3}{c}{16641} & \multicolumn{3}{c}{66049} \\ $\beta$ & $\varepsilon$ & mem & $p$ & time(s) & mem &$p$ & time(s) & mem &$p$ & time(s) \\ \midrule $10^{-1}$ & $ 1$ & 0.03 & 5 & 0.25& 0.01 & 1 & 0.43& 0.01 & 1 & 4.61 \\ & $ 10^{-1}$ & 0.05 & 9 & 0.27& 0.07 & 13 & 1.51& 0.05 & 9 & 7.87 \\ & $ 10^{-2}$ & 0.15 & 29 & 1.42& 0.10 & 19 & 2.56& 0.10 & 19 & 15.40 \\ & $ 10^{-3}$ & 0.23 & 43 & 3.04& 0.09 & 17 & 2.96& 0.09 & 17 & 17.16 \\ \midrule $10^{-3}$ & $ 1$ & 0.06 & 11 & 0.33& 0.05 & 9 & 1.10& 0.05 & 9 & 9.07 \\ & $ 10^{-1}$ & 0.09 & 17 & 0.51& 0.09 & 17 & 2.17& 0.05 & 9 & 7.94 \\ & $ 10^{-2}$ & 0.08 & 15 & 0.55& 0.10 & 19 & 2.71& 0.07 & 13 & 10.60 \\ & $ 10^{-3}$ & 0.34 & 65 & 8.29& 0.07 & 13 & 2.37& 0.09 & 17 & 16.97 \\ \midrule $10^{-5}$ & $ 1$ & 0.10 & 19 & 0.70& 0.08 & 15 & 2.08& 0.08 & 15 & 13.53 \\ & $ 10^{-1}$ & 0.02 & 3 & 0.08& 0.02 & 3 & 0.38& 0.16 & 31 & 26.91 \\ & $ 10^{-2}$ & 0.04 & 7 & 0.18& 0.02 & 3 & 0.48& 0.02 & 3 & 3.01 \\ & $ 10^{-3}$ & 0.08 & 15 & 0.56& 0.02 & 3 & 1.06& 0.02 & 3 & 6.10 \\ \bottomrule \end{tabular} \caption{Example~\ref{ex:cd}. Results for the convection-diffusion problem for various diffusion coefficients $\varepsilon$, parameters $\beta$, and spatial discretizations. \label{table:CD}} \end{table} \section{Conclusion and Outlook}\label{sec:conc} We proposed a new scheme to solve a class of large-scale PDE-constrained optimization problems in a low-rank format. This method relied on the reformulation of the KKT saddle-point system into a matrix equation, which was subsequently projected onto a low-dimensional space generated with rational Krylov-type iterations. We showed that the method's convergence is robust with respect to different discretizations and parameters.
Furthermore, we demonstrated greater memory and time savings compared to an established low-rank scheme. Additionally, the {{\sc sys2mateq }}\ method is very flexible with respect to different constraints such as non-symmetric PDE operators or partial state observation. In the future, we plan to further investigate the subspace choice and the performance of our scheme for other challenging setups. Further improvements are expected from realizing a truncation or restarting mechanism in cases where the subspaces become unexpectedly large. \section*{Acknowledgments} The work of the first and third authors was supported by the German Science Foundation (DFG) through grant 1742243256 - TRR 9. The second author is a member of INdAM-GNCS, whose support is gratefully acknowledged. This work was also supported by the DAAD-MIUR Mobility Program 2018 ``Optimization and low-rank solvers for isogeometric analysis (34876)'' between TU-Chemnitz and the Universit\`a di Bologna. \bibliographystyle{siamplain}
\section{Introduction} In \cite{Hit.1}, Nigel Hitchin describes, for a Riemann surface $S$, a connected component of the character variety $$\Rep(\pi_1(S), \PSL_n(\R)) = \Hom(\pi_1(S), \PSL_n(\R))/\PSL_n(\R)$$ which he parametrizes by holomorphic differentials. These components are called \textbf{Hitchin components}, and their study \emph{higher Teichmüller theory}. His approach uses Higgs bundle theory, more precisely the hyperkähler structure of the moduli space of polystable Higgs bundles. These components can also be described by representation-theoretic methods. For $\PSL_2(\R)$, Hitchin's component is \textbf{Teichmüller space}, the moduli space of various geometric structures on the underlying smooth surface $\S$, for example complex structures or hyperbolic structures. Thus, the question naturally arises \textit{whether there is a geometric structure on $\S$ whose moduli space gives Hitchin's component in higher rank}. In \cite{FockThomas} a candidate for such a geometric structure is constructed, called the \textbf{higher complex structure} or $n$-complex structure, since it generalizes the complex structure. The higher complex structure can be seen as a special $\mf{sl}_n$-valued 1-form. In local coordinates it is given by $\Phi=\Phi_1dz+\Phi_2d\bar{z}$ where $(\Phi_1, \Phi_2)$ is a pair of commuting nilpotent matrices. The construction of the $n$-complex structure uses the punctual Hilbert scheme of the plane and is reviewed in section \ref{highercomplex}. The group of symplectomorphisms of $T^*\S$ acts on 1-forms, and hence on the higher complex structure. We denote by $\T^n$ the moduli space of higher complex structures. A prominent role is played by the \emph{cotangent bundle $\cotang$}. Its elements consist of a higher complex structure together with a cotangent vector, described by a set of holomorphic differentials. In this paper we establish several steps towards a canonical diffeomorphism between the moduli space of higher complex structures $\T^n$ and Hitchin's component. Before outlining the structure of the paper, we compare our approach to Hitchin's, which motivates and clarifies our ideas.
\subsection{Comparison to Hitchin's approach} Hitchin's approach to construct components in the character variety is to use the hyperkähler structure of the moduli space of Higgs bundles $\mc{M}_H$. One starts from a Riemann surface $S$, i.e. a smooth surface $\S$ equipped with a \emph{fixed} complex structure. Then one considers \textbf{Higgs bundles} on $S$, i.e. pairs of a holomorphic bundle $V$ and a holomorphic $\End(V)$-valued 1-form $\Phi$, the \textbf{Higgs field}. We summarize this approach in one picture: the twistor space of $\mc{M}_H$, which encodes all Kähler structures at once. To a hyperk\"ahler manifold $M$ one associates the \textbf{twistor space} $X_M =\C P^1\times M$ endowed with the complex structure at the point $(\l, m)$ given by $I_{\l, m}=(I_0, I_\l)$ where $I_0$ is the standard structure of $\C P^1$ and $I_\l$ is the complex structure of $M$ associated to $\l \in \C P^1$. The projection $X_M \rightarrow \C P^1$ is holomorphic and a holomorphic section is called a \textbf{twistor line}. With some extra data, it is possible to \textit{reconstruct the hyperkähler manifold $M$ as the space of all real twistor lines}. This is a result of \cite{HKLR} (theorem 3.3). On the left hand side of figure \ref{HK} we draw the twistor space of the moduli space of Higgs bundles $\mc{M}_H$. In one complex structure, say at $\l=\infty$, we have the moduli space of Higgs bundles $\mc{M}_{H}$ (with its complex structure coming from the one of $S$). For $\l=0$, we see the conjugated complex structure. In all other $\l$, we see the complex structure of the character variety $\Rep(\pi_1(\S), G^{\C})$, which can be seen as hamiltonian reduction of the space of all connections $\mc{A}$ by all the gauge transformations $\mc{G}$ (Atiyah-Bott reduction for unitary gauge). Going from $\l=0$ to $\l=1$ is the \textbf{non-abelian Hodge correspondence}. Finally, there is the Hitchin fibration going from $\mc{M}_{H}$ to a space of holomorphic differentials. 
This fibration admits a section whose monodromy, via the non-abelian Hodge correspondence, is in the split real form. In this paper, we always consider the case $G=\PSL_n(\C)$, for which we get flat $\PSL_n(\R)$-connections. \begin{figure} \centering \begin{tikzpicture}[scale=1.5] \draw (0,0) circle (1cm); \draw [domain=180:360] plot ({cos(\x)},{sin(\x)/3}); \draw [domain=0:180, dotted] plot ({cos(\x)},{sin(\x)/3}); \draw [fill=white] (0,1) circle (0.04); \draw [fill=white] (0,-1) circle (0.04); \draw (0,1.2) node {$\overline{\mc{M}}_H$}; \draw[below] (0,1) node {$0$}; \draw (0,-1.2) node {$\mc{M}_H$}; \draw[above] (0,-1) node {$\infty$}; \draw (0,-1.5) node {$\downarrow$}; \draw (0,-1.9) node {$\bigoplus_{i=2}^n H^0(K^i)$}; \draw [domain=-30:32, dashed, ->] plot ({0.33*cos(\x)}, {0.33*sin(\x)-1.5}); \draw (1,-1.4) node {Hitchin}; \draw (1,-1.62) node {section}; \draw (-1.25,-1.4) node {Hitchin}; \draw (-1.25,-1.65) node {fibration}; \draw (1.85,0.15) node {$\Rep(\pi_1\S, G^{\C})$}; \draw (1.65,-0.18) node {$\cong \mc{A}//\mc{G}$}; \begin{scope}[xshift=4.6cm] \draw (0,0) circle (1cm); \draw [domain=180:360] plot ({cos(\x)},{sin(\x)/3}); \draw [domain=0:180, dotted] plot ({cos(\x)},{sin(\x)/3}); \draw [fill=white] (0,1) circle (0.04); \draw [fill=white] (0,-1) circle (0.04); \draw (0,1.2) node {$\overline{U}$}; \draw[below] (0,1) node {$0$}; \draw (0,-1.2) node {$U \subset \cotang$}; \draw[above] (0,-1) node {$\infty$}; \draw (0,-1.5) node {$\downarrow$}; \draw (0,-1.8) node {$\T^n$}; \draw [domain=-40:24, dashed, ->] plot ({0.33*cos(\x)}, {0.33*sin(\x)-1.5}); \draw (1,-1.4) node {zero-}; \draw (1,-1.6) node {section}; \draw (-1.25,-1.4) node {canonical}; \draw (-1.25,-1.65) node {projection}; \draw (2,0.15) node {$\Rep(\pi_1\S, G^{\C}) \cong$}; \draw (2,-0.18) node {$(\mc{A}//\mc{P})//\Symp_0$}; \end{scope} \end{tikzpicture} \caption{Twistor space for Higgs bundles and $\cotang$}\label{HK} \end{figure} In our approach, we start from a smooth surface $\S$ which we 
equip with a higher complex structure (which can vary). This structure is locally given by a 1-form $\Phi=\Phi_1dz+\Phi_2d\bar{z}$ where $(\Phi_1, \Phi_2)$ is a pair of commuting nilpotent matrices. The role of $\mc{M}_H$ is played by a neighborhood $U$ of the zero-section of the cotangent bundle $\cotang$ to the moduli space of higher complex structures. Conjecturally, this neighborhood $U$ carries a hyperkähler structure (see discussion around Conjecture \ref{hkcotang}). On the right-hand side of figure \ref{HK}, we draw the conjectural twistor space of $\cotang$. In complex structure at $\l=\infty$, we see the cotangent bundle $\cotang$. At the opposite point $\l=0$ we see the conjugated complex structure. In all other complex structures, we see an open dense subset of the character variety $\Rep(\pi_1(\S), G^{\C})$, this time obtained as a double reduction of the space of connections $\mc{A}$, by some parabolic subgroup $\mc{P}$ of all gauges, and then by higher diffeomorphisms $\Symp_0$ (details in section \ref{parabolicreduction}). The analog of the Hitchin fibration is simply the projection map $\cotang \rightarrow \T^n$ and the analog of the Hitchin section is the zero-section $\T^n\subset \cotang$. A flat connection associated to a point of the zero-section $\T^n\subset \cotang$ has monodromy in the split real form, using the reality constraint in the twistor space. For $G^\C = \PSL_2(\C)$, the situation is well understood thanks to the work of Trautwein \cite{Trautwein}. \medskip We stress again that most of the right-hand side is conjectural. 
In this paper we \begin{itemize} \item[$\bullet$] describe the space $\mc{A}//\mc{P}$ of connections, called ``parabolic'', and an infinitesimal action whose moment map gives the curvature, \item[$\bullet$] include this space into a family of flat $h$-connections ($h=\l^{-1}$), \item[$\bullet$] show that at the limit $\l \rightarrow \infty$ we get $\cotang$, \item[$\bullet$] give partial results for the existence of twistor lines, i.e. a canonical deformation of $\cotang$ to flat connections, \item[$\bullet$] prove the diffeomorphism between $\T^n$ and Hitchin's component assuming the existence and uniqueness of twistor lines. \end{itemize} \subsection{Summary and structure} In section \ref{highercomplex}, we review the construction of the higher complex structure. In particular, we describe the cotangent bundle $\cotang$ in subsection \ref{cotangs}. Then we give some new aspects: we describe a bundle induced by the $n$-complex structure in \ref{indbundle} and the conjugated structure in \ref{dualcomplexstructure}. In Section \ref{parabolicreduction} we construct and analyze the space of parabolic connections $\mc{A}//\mc{P}$. It allows an infinitesimal action whose moment map is the curvature (see Theorem \ref{moment-map-symp}). We generalize this reduction for $h$-connections in Section \ref{parabolicwithlambda}, such that in the limit when $h$ goes to zero we get $\cotang$ (see Theorem \ref{conditioncinconnection}). In Section \ref{finalstep} we impose a reality constraint on our space of $h$-connections and investigate the link to Hitchin's component. We first put the connections we look at in some standard form in \ref{standard-form}. We then give partial results and ideas of the existence of a canonical deformation of $\cotang$ to flat connections in \ref{flatconnectionlambda}. 
Finally, under the assumption that this canonical deformation exists and is unique, we prove that our moduli space $\T^n$ is diffeomorphic to Hitchin's component in Theorem \ref{mainthmm}. We include three appendices: in the first appendix \ref{appendix:A} we give some facts about the punctual Hilbert scheme of the plane. In appendices \ref{appendix:B} and \ref{appendix:C} we prove two technical points. \bigskip \noindent \textbf{\textit{Notations.}} Throughout the paper, $\Sigma$ denotes a smooth closed surface of genus $g \geq 2$. A complex local coordinate system on $\Sigma$ is denoted by $(z, \bar{z})$ and its conjugate coordinates on $T^{*\mathbb{C}}\Sigma$ by $p$ and $\bar{p}$. The canonical bundle is $K=T^{*(1,0)}\S$. The space of sections of a bundle $B$ is denoted by $\Gamma (B)$. The hamiltonian reduction (or symplectic reduction, or Marsden-Weinstein quotient) of a symplectic manifold $X$ by a group $G$ is denoted by $X//G$, where the reduction is over the zero-coadjoint orbit. The equivalence class of some element $a$ is denoted by $[a]$. \bigskip \noindent \textbf{\textit{Acknowledgments.}} I warmly thank Vladimir Fock for all the ideas and discussions he shared with me. This paper is part of my PhD thesis accomplished at the University of Strasbourg. \section{Higher complex structures}\label{highercomplex} The goal of this section is twofold: first, we give a summary of \cite{FockThomas}, in particular the construction of higher complex structures, their moduli space and the cotangent bundle to this moduli space. The main ingredient for the higher complex structure is the punctual Hilbert scheme of the plane (see also Appendix \ref{appendix:A}). Second, we describe a bundle with several extra structures induced by the higher complex structure. This bundle is crucial in the sequel.
\subsection{Higher complex structures} A complex structure on a surface is characterized by the \textbf{Beltrami differential} $\mu\in\Gamma(K^{-1}\otimes \bar{K})$ where $K$ is the canonical bundle. It determines the notion of a local holomorphic function $f$ by the condition $(\delbar-\mu\del)f=0$. The Beltrami differential determines a linear direction in $T^{\C}\S$, the direction generated by the vector $\delbar-\mu\del$. Since $\delbar-\mu\del$ and $\del-\bar{\mu}\delbar$ have to be linearly independent, we get the condition $\mu\bar{\mu}\neq 1$. Replacing the tangent bundle $T\S$ by the cotangent bundle $T^*\S$, we can say that the complex structure is \emph{entirely encoded in a section of $\bb{P}(T^{*\C}\S)$}. The idea of higher complex structures is to replace the linear direction by a polynomial direction, or more precisely an $n$-jet of a curve inside $T^{*\C}\S$. To get a precise definition, we use the \textbf{punctual Hilbert scheme} of the plane, denoted by $\Hilb^n(\C^2)$, which is defined by $$\Hilb^n(\C^2)=\{I \text{ ideal of } \C[x,y] \mid \dim \C[x,y]/I = n\}.$$ A generic point in $\Hilb^n(\C^2)$ is an ideal whose algebraic variety is a collection of $n$ distinct points in $\C^2$. A generic ideal can be written as $$\langle-x^n+t_1x^{n-1}+...+t_n, -y+\mu_1+\mu_2x+...+\mu_nx^{n-1} \rangle.$$ Moving around in $\Hilb^n(\C^2)$ corresponds to a movement of $n$ particles in $\C^2$. But whenever $k$ particles collide, the Hilbert scheme retains extra information: the $(k-1)$-jet of the curve along which the points entered into collision. The \textbf{zero-fiber}, denoted by $\Hilb^n_0(\C^2)$, consists of those ideals whose support is the origin. A generic point in $\Hilb^n_0(\C^2)$ is of the form $$\langle x^n, -y+\mu_2x+\mu_3x^2+...+\mu_nx^{n-1}\rangle,$$ which can be interpreted as an $(n-1)$-jet of a curve at the origin (see appendix \ref{appendix:A} for details).
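For instance, for $n=2$ this jet interpretation can be made explicit: two points colliding along the curve $y=\mu x$ correspond to the family of ideals $$I_\epsilon=\langle x(x-\epsilon),\; -y+\mu x\rangle \;\xrightarrow[\epsilon \to 0]{}\; \langle x^2, -y+\mu x\rangle \in \Hilb^2_0(\C^2),$$ so the limit ideal remembers the direction $y=\mu x$ along which the two points collided. Applied fiberwise to $T^{*\C}\S$, this direction is exactly the datum of a Beltrami differential.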
We can now give the definition of the higher complex structure: \begin{definition}[Def.2 in \cite{FockThomas}] A \textbf{higher complex structure} of order $n$ on a surface $\Sigma$, in short \textbf{$n$-complex structure}, is a section $I$ of $\Hilb^n_0(T^{*\mathbb{C}}\Sigma)$ such that at each point $z\in \S$ we have $I(z)+\bar{I}(z)=\langle p, \bar{p} \rangle$, the maximal ideal supported at the origin of $T_z^{*\mathbb{C}}\Sigma$. \end{definition} Notice that we apply the punctual Hilbert scheme pointwise, giving a Hilbert scheme bundle over $\S$. The condition on $I+\bar{I}$ ensures that $I$ is a generic ideal, so locally it can be written as \begin{equation}\label{I-expr} I(z,\bar{z})=\langle p^n, -\bar{p}+\mu_2(z, \bar{z})p+\mu_3(z, \bar{z}) p^2+...+\mu_n(z, \bar{z})p^{n-1}\rangle. \end{equation} The coefficients $\mu_k$ are called \textbf{higher Beltrami differentials}. A direct computation gives $\mu_k \in \Gamma(K^{1-k}\otimes \bar{K})$. The coefficient $\mu_2$ is the usual Beltrami differential. In particular, for $n=2$ we get the usual complex structure. The punctual Hilbert scheme admits an equivalent description as a space of pairs of commuting operators. To an ideal $I$ of $\C[x,y]$ of codimension $n$, one can associate the multiplication operators by $x$ and by $y$ in the quotient $\C[x,y]/I$, denoted by $M_x$ and $M_y$. This gives a pair of commuting operators. Conversely, to two commuting operators $(A,B)$ we can associate the ideal $I(A,B)=\{P\in\C[x,y] \mid P(A,B)=0\}$. For details see \ref{matrixviewhilb} in the appendix. The zero-fiber $\Hilb^n_0(\C^2)$ corresponds to nilpotent commuting operators. From this point of view, a higher complex structure is a \emph{gauge class of special matrix-valued 1-forms locally of the form $\Phi_1dz+\Phi_2d\bar{z}$ where $(\Phi_1, \Phi_2)$ is a pair of commuting nilpotent matrices with $\Phi_1$ principal nilpotent} (which means of maximal rank $n-1$).
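To illustrate the operator viewpoint, here is a small numerical sketch (with arbitrarily chosen sample values for the higher Beltrami coefficients) of the multiplication operators at a generic point of the zero-fiber, written in the basis $(1, x, \dots, x^{n-1})$ of $\C[x,y]/I$:

```python
import numpy as np

n = 4
mu = {2: 0.3, 3: -1.2, 4: 0.7}  # sample higher Beltrami coefficients

# For I = <x^n, -y + mu_2 x + ... + mu_n x^{n-1}> the quotient C[x,y]/I
# has basis (1, x, ..., x^{n-1}); multiplication by x is the shift matrix,
# and multiplication by y is the corresponding polynomial in it.
Mx = np.diag(np.ones(n - 1), -1)  # principal nilpotent
My = sum(mu[k] * np.linalg.matrix_power(Mx, k - 1) for k in range(2, n + 1))

assert np.allclose(Mx @ My, My @ Mx)                  # commuting pair
assert np.allclose(np.linalg.matrix_power(Mx, n), 0)  # both nilpotent
assert np.allclose(np.linalg.matrix_power(My, n), 0)
assert np.linalg.matrix_rank(Mx) == n - 1             # Mx principal nilpotent
print("commuting nilpotent pair (M_x, M_y) of size", n)
```

Up to notation, these are the local forms of the matrices $\Phi_1$ and $\Phi_2$ written out in section \ref{indbundle}.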
\medskip \noindent To define a finite-dimensional moduli space of higher complex structures, we have to define some equivalence relation. It turns out that the right notion is the following: \begin{definition}[Def.3 in \cite{FockThomas}] A \textbf{higher diffeomorphism} of a surface $\Sigma$ is a hamiltonian diffeomorphism of $T^*\Sigma$ preserving the zero-section $\Sigma \subset T^*\Sigma$ setwise. The group of higher diffeomorphisms is denoted by $\Symp_0(T^*\Sigma)$. \end{definition} Symplectomorphisms act on $T^{*\C}\S$, and hence on 1-forms. This is roughly how higher diffeomorphisms act on the $n$-complex structure, considered as the limit of an $n$-tuple of 1-forms. We then consider higher complex structures modulo higher diffeomorphisms, i.e. two structures are equivalent if one can be obtained from the other by applying a higher diffeomorphism. Locally, all $n$-complex structures are equivalent: \begin{thm}[Theorem 1 in \cite{FockThomas}]\label{loctriii} The $n$-complex structure can be locally trivialized, i.e. there is a higher diffeomorphism which sends the structure to $(\mu_2(z,\bar{z}),...,\mu_n(z,\bar{z}))=(0,...,0)$ for all small $z\in \C$. \end{thm} \medskip \noindent We define the \textbf{moduli space of higher complex structures}, denoted by $\T^n$, as the space of $n$-complex structures modulo higher diffeomorphisms. The main properties are given in the following theorem: \begin{thm}[Theorem 2 in \cite{FockThomas}]\label{mainresultncomplex} For a surface $\Sigma$ of genus $g\geq 2$ the moduli space $\T^n$ is a contractible manifold of complex dimension $(n^2-1)(g-1)$. Its cotangent space at any point $\mu=(\mu_2,...,\mu_n)$ is given by $$T^*_{\mu}\bm\hat{\mathcal{T}}^n = \bigoplus_{m=2}^{n} H^0(\Sigma,K^m).$$ In addition, there is a forgetful map $\bm\hat{\mathcal{T}}^n \rightarrow \bm\hat{\mathcal{T}}^{n-1}$ and a copy of Teichmüller space $\mc{T}^2\rightarrow \T^n$.
\end{thm} The forgetful map in coordinates is just given by forgetting the last Beltrami differential $\mu_n$. The copy of Teichmüller space is given by $\mu_3=...=\mu_n=0$ (this relation is unchanged under higher diffeomorphisms). We notice the similarity to Hitchin's component, especially the contractibility, the dimension and the copy of Teichmüller space inside. At the end of the paper in section \ref{finalstep} we indicate how to link $\T^n$ to Hitchin's component. Assuming a strong conjecture (an analog of the non-abelian Hodge correspondence in our setting), we prove that $\T^n$ is canonically diffeomorphic to Hitchin's component in theorem \ref{mainthmm}. \subsection{Cotangent bundle of higher complex structures}\label{cotangs} The main object to link higher complex structures to character varieties is the total cotangent bundle $\cotang$ which we describe here in detail. The punctual Hilbert scheme inherits a complex symplectic structure from $\C^2$. It can be described as follows: to an ideal $I \in \Hilb^n(\C^2)$, associate the two multiplication operators $M_x$ and $M_y$. The symplectic structure $\omega$ is given by $$\omega = \tr dM_x \wedge dM_y.$$ The zero-fiber is an isotropic subspace of dimension $n-1$. The dimension of $\Hilb^n(\C^2)$ being $2n$, the zero-fiber cannot be Lagrangian. The subspace $\Hilb^n_{red}(\C^2)$, called \textbf{reduced Hilbert scheme}, consisting of those ideals $I$ whose support has barycenter the origin (generically $n$ points with barycenter equal to the origin), is a symplectic submanifold of $\Hilb^n(\C^2)$ and the zero-fiber is Lagrangian inside the reduced Hilbert scheme. Hence its cotangent bundle is isomorphic to its normal bundle (using the symplectic form): $$T^*\Hilb^n_0(\C^2)\cong T^{normal}\Hilb^n_{0}(\C^2) \approx \Hilb^n_{red}(\C^2).$$ Near the zero-section, to the first order, the normal bundle can be identified with the whole space, here the reduced Hilbert scheme. 
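For $n=2$ this symplectic structure can be computed by hand: a generic ideal of $\Hilb^2_{red}(\C^2)$, applied fiberwise, is $I=\langle p^2-t_2, -\bar{p}+\mu_1+\mu_2 p\rangle$, and in the basis $(1,p)$ of $\C[p,\bar{p}]/I$ the multiplication operators read $$M_p=\begin{pmatrix} 0 & t_2 \\ 1 & 0\end{pmatrix}, \qquad M_{\bar{p}}=\mu_1\,\mathrm{Id}+\mu_2 M_p.$$ A direct computation then gives $$\omega=\tr\, dM_p\wedge dM_{\bar{p}}=dt_2\wedge d\mu_2$$ (the coefficient $\mu_1$, which vanishes on the reduced scheme for $n=2$, drops out of $\omega$ in any case), so $(t_2,\mu_2)$ are Darboux coordinates, consistent with the pairing of the differentials $t_k$ with the Beltrami coefficients $\mu_k$ below.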
There is a general fact stating that the cotangent bundle to a quotient space $X/G$ (where $X$ is a manifold and $G$ a Lie group) is a hamiltonian reduction: $T^*(X/G) \cong T^*X//G$. Using this we can compute \begin{align} \cotang &= T^*\left(\Gamma(\Hilb^n_0(T^{*\mathbb{C}}\Sigma))/\Symp_0(T^*\Sigma)\right) & \nonumber \\ &= \Gamma(T^*\Hilb^n_0(T^{*\mathbb{C}}\Sigma)) // \Symp_0(T^*\Sigma) \nonumber \\ &= \Gamma(T^{normal}\Hilb^n_0(T^{*\mathbb{C}}\Sigma))// \Symp_0(T^*\Sigma) \nonumber \\ &= \Gamma(\Hilb^n_{red}(T^{*\mathbb{C}}\Sigma)) // \Symp_0(T^*\Sigma) & \mod t^2. \label{hkquotientofmodulispace} \end{align} We see that $\cotang$ is obtained by a hamiltonian reduction of $\Hilb^n_{red}(T^{*\C}\S)$. An element of the latter Hilbert scheme bundle is an ideal \begin{equation}\label{ideal-I} I=\langle p^n-t_2p^{n-2}-...-t_n, -\bar{p}+\mu_1+\mu_2p+...+\mu_np^{n-1}\rangle. \end{equation} The coefficient $\mu_1$ is an explicit function of the other variables. So the $2n-2$ variables $(t_k, \mu_k)_{2\leq k \leq n}$ form a coordinate system. The fact that the normal bundle can be identified with the total space only near the zero-section is expressed by ``modulo $t^2$'', meaning that all quadratic and higher terms in the $t_k$ have to be dropped. To compute the moment map, we have to understand in more detail the action of higher diffeomorphisms on the Hilbert scheme bundle. The ideal $I$ has two generators which we put into the form $p^n-P(p)$ and $-\bar{p}+Q(p)$ where $P(p)=t_2p^{n-2}+...+t_n$ and $Q(p)=\mu_2p+...+\mu_np^{n-1}$. A higher diffeomorphism generated by some Hamiltonian $H$ acts on $I$ by changing the two polynomials. Their infinitesimal variations $\delta P$ and $\delta Q$ are given by \begin{align} \delta P &= \{H, p^n-P(p)\} \mod I \nonumber \\ \delta Q &= \{H, -\bar{p}+Q(p)\} \mod I. \label{idealvariation} \end{align} \begin{Remark} One can easily show that only the class $H \mod I$ acts, i.e. $H_1$ and $H_2$ with $H_1 = H_2 \mod I$ have the same action.
\hfill $\triangle$ \end{Remark} Using these variation formulas, one can compute the moment map, which gives: \begin{thm}[Theorem 3 in \cite{FockThomas}]\label{conditionC} The cotangent bundle to the moduli space of $n$-complex structures is given by \begin{align*} T^*\bm\hat{\mathcal{T}}^n= \Big\{& (\mu_2, ..., \mu_n, t_2,...,t_n) \mid \mu_k \in \Gamma(K^{1-k}\otimes \bar{K}), t_k \in \Gamma(K^k) \text{ and } \; \forall k\\ & (-\bar{\partial}\!+\!\mu_2\partial\!+\!k\partial\mu_2)t_{k}+\sum_{l=1}^{n-k}((l\!+\!k)\partial\mu_{l+2}+(l\!+\!1)\mu_{l+2}\partial)t_{k+l}=0 \Big\} \Big/\Symp_0(T^*\S) \end{align*} \end{thm} We call the condition coming from the moment map \textbf{condition $(\mc{C})$}. It is a \emph{generalized holomorphicity condition}: for $\mu_k=0$ for all $k$, we simply get $\delbar t_k =0$. The punctual Hilbert scheme $\Hilb^n(\C^2)$ inherits a hyperkähler structure from $\C^2$ (see \cite{Nakajima}). This should induce a hyperkähler structure on $\cotang$: \begin{conj}\label{hkcotang} The cotangent bundle $\cotang$ admits a hyperkähler structure near the zero-section. \end{conj} There are three good reasons to believe in the conjecture: \begin{itemize} \item[$\bullet$] \textit{Construction by hyperkähler quotient}: Equation \ref{hkquotientofmodulispace} points towards a possible hyperkähler reduction. Indeed, under some mild conditions a complex symplectic reduction $X//G^\C$ is isomorphic to a hyperkähler quotient $X///G^\R$. In our case $X=\Gamma(\Hilb^n_{red}(T^{*\mathbb{C}}\Sigma))$ is hyperkähler, since $\Hilb^n_{red}(\C^2)$ is. So it is plausible that $\cotang$ can be obtained as a hyperkähler quotient of $\Gamma(\Hilb^n_{red}(T^{*\mathbb{C}}\Sigma))$ by the real group $\Symp_0(T^*\S)$, so it gets a hyperkähler structure itself. Notice that the complexified Lie algebra of $\Symp_0(T^*\S)$, i.e.
the space of smooth complex-valued functions on $T^*\S$, has the same action on $X$ as the real Lie algebra, since one can prove that a Hamiltonian $H$ acts in the same way as $H\mod I$. \item[$\bullet$] \textit{Feix-Kaledin structure}: If Hitchin's component and our moduli space $\T^n$ are diffeomorphic, Hitchin's component inherits a complex structure. With its Goldman symplectic structure, there is good hope of obtaining a Kähler structure. A general result of Feix and Kaledin (see \cite{Feix} and \cite{Kaledin}) asserts that for a Kähler manifold $X$, there is a neighborhood of the zero-section in $T^*X$ which admits a hyperkähler structure. \item[$\bullet$] \textit{Construction by twistor approach}: The 1-parameter deformation of $\cotang$ described in this paper is a good candidate to be the twistor space of $\cotang$ (see figure \ref{HK}). \end{itemize} \subsection{Induced bundle}\label{indbundle} To any section $I$ of the Hilbert scheme bundle $\Hilb^n(T^{*\C}\S)$, we can canonically associate a vector bundle $V'$ of rank $n$ over $\S$ whose fiber over a point $z$ is $\mathbb{C}[p,\bar{p}]/I(z)$. We can also describe $V'$ by a global construction: consider $\mc{C}:=\mathcal{C}_{pf}(T^{*\C}\S)$, the space of functions on $T^{*\C}\S$ which are polynomial in each fiber. The bundle $V'$ is the quotient of $\mc{C}$ by $I$. By the matrix viewpoint of the punctual Hilbert scheme, we get $\Phi \in \Gamma(\End(V')\otimes T^{*\C}\S)$, i.e. a $\mf{gl}_n$-valued 1-form $\Phi = \Phi_1+\Phi_2$ which acts on $V'$ (locally $\Phi_1$ by multiplication by $p$, $\Phi_2$ by multiplication by $\bar{p}$). We now restrict attention to the case where $I$ is a higher complex structure, i.e. $(\Phi_1,\Phi_2)$ is a pair of nilpotent commuting matrices.
From the local structure of higher complex structures, we know that $$ I=\langle p^n, -\bar{p}+\mu_2 p+\mu_3p^2+...+\mu_np^{n-1} \rangle.$$ Hence, we see that there is a section $s$ of $V'$ such that $B=(s,\Phi_1s,...,\Phi_1^{n-1}s)$ is a basis of the fibers. It is sufficient to take for $s$ any element of $\mc{C}$ whose restriction on $\S$ is non-vanishing. Locally the basis can be written $(s,ps,...,p^{n-1}s)$. Under a coordinate change $z\mapsto w(z)$, this basis transforms in a diagonal way since $p^k \mapsto (\frac{dw}{dz})^kp^k$, so we get that $V'$ is topologically the bundle $\mathcal{O}\oplus K^{-1}\oplus K^{-2}\oplus...\oplus K^{-(n-1)}$, where $\mathcal{O}$ denotes the trivial line bundle. To get a vector bundle of degree 0, we fix a square root of the canonical bundle $K^{1/2}$ and we define $$V := V'\otimes K^{(n-1)/2} = K^{(n-1)/2}\oplus K^{(n-3)/2}\oplus...\oplus K^{-(n-1)/2}.$$ We still have a basis of the form $B=(s,ps,...,p^{n-1}s)$, but $s$ is now a section of $K^{(n-1)/2}$. The 1-form $\Phi$ also acts on $V$. There, we can write $\Phi_1$ globally as the principal nilpotent matrix with 1's under the main diagonal and $\Phi_2$ is given by the higher Beltrami differentials: $$\Phi_1 = \begin{pmatrix} &&&& \\ 1& &&& \\ &1 & && \\ &&\ddots && \\ &&&1&\end{pmatrix} \;\text{ and } \;\; \Phi_2 = \begin{pmatrix} &&&& \\ \mu_2 & &&& \\ \mu_3& \mu_2 & && \\ \vdots & \ddots &\ddots && \\ \mu_n&\cdots&\mu_3&\mu_2&\end{pmatrix}.$$ Here 1 has to be interpreted as the canonical section of $$\Hom(K^{(n-i)/2},K^{(n-i-2)/2}\otimes K) \cong \mathcal{O}.$$ There is an extra structure on $V$ coming from the higher complex structure: a filtration, i.e. a complete flag in each fiber. Put $I_k=I+\langle p,\bar{p} \rangle^k$, so we have $I_0=\C[p, \bar{p}] \supset I_1 \supset ...\supset I_n = I$. Define $$F_{n-k}=\ker (\mathbb{C}[p,\bar{p}]/I(z, \bar{z}) \rightarrow \C[p, \bar{p}]/I_k).$$ Then, the $F_k$ form an increasing complete flag with $\dim F_k=k$. 
In the local basis $B$, we have $F_k=\Span(p^{n-k}s,...,p^{n-1}s)$. \begin{prop} The filtration on $V$ induced by a higher complex structure is preserved under higher diffeomorphisms. \end{prop} \begin{proof} First, higher diffeomorphisms act on higher complex structures, and hence on all objects induced from them. In particular they act on $V$ and the filtration. Preserving the filtration is a local property, so we can work in the basis $B=(s,ps,...,p^{n-1}s)$. Further, take a Hamiltonian $H$ generating a higher diffeomorphism. An element $p^ks$ changes by $$\{H, p^ks\}=p^k\{H,s\}+kp^{k-1}(\del H)s.$$ Since $H$ has no constant terms, we get only terms with factor $p^l$ with $l\geq k$. So the action of $\Symp_0$ on $V$ is lower triangular, and the flag structure is preserved. \end{proof} From the filtration in $V$, we get a line subbundle $L^*$ in the dual bundle $V^*$ defined by $$L^* = (F_n / F_{n-1})^* \cong \Ann(F_{n-1})$$ where $\Ann(F_{n-1})$ is the annihilator of $F_{n-1}$. In local coordinates, $L^*$ is generated by $s^*$, the first element of the basis dual to $B$. This induced line subbundle will play an important role in the sequel. The link between higher complex structures and character varieties relies on the idea of deforming the 1-form $\Phi$ into a flat connection. \subsection{Conjugated higher complex structures}\label{dualcomplexstructure} There is a natural notion of conjugated space to $\cotang$ using the natural complex conjugation on the complexified cotangent bundle $T^{*\C}\Sigma$. We associate to an ideal $I\in \Hilb^n_{red}(T^{*\C}\S)$ the ideal $\bar{I}$ and then take $\Symp_0$-equivalence classes. In coordinates, we start from $$I=\langle p^n-t_2p^{n-2}-...-t_n, -\bar{p}+\mu_1+\mu_2p+...+\mu_np^{n-1} \rangle.$$ To get the conjugated structure, we have to express $\bar{I}$ in the same form as $I$, i.e.
as \begin{align*} \bar{I} &=\langle \bar{p}^n-\bar{t}_2\bar{p}^{n-2}-...-\bar{t}_n, -p+\bar{\mu}_1+\bar{\mu}_2\bar{p}+...+\bar{\mu}_n\bar{p}^{n-1} \rangle \\ &= \langle p^n-{}_2tp^{n-2}-...-{}_nt, -\bar{p}+{}_1\mu+{}_2\mu p+...+{}_n\mu p^{n-1} \rangle, \end{align*} where $({}_kt, {}_k\mu)$ are the parameters of the conjugate to $\cotang$. It is possible to explicitly express the conjugated coordinates $({}_kt, {}_k\mu)_k$ in terms of $(t_k, \mu_k)_k$. For example one gets ${}_2\mu=\frac{1}{\bar{\mu}_2}$ and ${}_nt = \bar{\mu}_2^n\bar{t}_n$. \section{Parabolic connections and reduction}\label{parabolicreduction} In this section, we describe the generic fiber of the twistor space of $\cotang$ from Figure \ref{HK}, which is a space of flat connections. The idea behind the deformation of $\cotang$ is to replace the polynomial functions on $T^*\S$ by differential operators. The higher complex structure is given by two polynomials (the generators of $I$), so in the deformation one gets a pair of differential operators. The space of pairs of differential operators can be obtained by a reduction of all connections by some specific parabolic gauge. This procedure was first introduced by Bilal, Fock and Kogan in \cite{BFK}. In that paper, the authors also describe some ideas for generalized complex and projective structures. Our higher complex structures are the mathematically rigorous version of their ideas. Our treatment of the parabolic reduction is independent of their paper and uses different notation. The question of how to impose the commutativity condition on the differential operators remained open in their paper. We show the existence of an infinitesimal action admitting a moment map, whose vanishing implies this commutativity, i.e. flatness of the associated connection.
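To fix ideas, for $n=2$ the heuristic reads as follows: the generators $p^2-t_2$ and $-\bar{p}+\mu_1+\mu_2p$ of $I$ are replaced by the differential operators $$\nabla^2-\bm\hat{t}_2 \quad \text{ and } \quad -\bar{\nabla}+\bm\hat{\mu}_1+\bm\hat{\mu}_2\nabla,$$ where $\nabla=\partial+A_1$ and $\bar{\nabla}=\bar{\partial}+A_2$ for a connection $A=A_1+A_2$. Requiring these two operators to commute, modulo the left-ideal they generate, is the commutativity condition just mentioned and amounts to flatness of the connection.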
\subsection{Atiyah-Bott reduction} Before going to the parabolic reduction, we recall the classical reduction of connections by gauge transforms, developed by Atiyah and Bott in their famous paper \cite{AtBott}. Let $\Sigma$ be a surface and $G$ be a semisimple Lie group with Lie algebra $\g$. Let $E$ be a trivial $G$-bundle over $\Sigma$. Denote by $\mathcal{A}$ the space of all $\g$-connections on $E$. It is an affine space modeled on the vector space of $\g$-valued 1-forms $\Omega^1(\Sigma, \g)$. Further, denote by $\mathcal{G}$ the space of all gauge transforms, i.e. bundle automorphisms. We can identify the gauge group with $G$-valued functions: $\mathcal{G}=\Omega^0(\Sigma,G)$. On the space of all connections $\mathcal{A}$, there is a natural symplectic structure given by $$\bm\hat{\omega} = \int_{\Sigma} \tr \;\delta A \wedge \delta A$$ where $\tr$ denotes the Killing form on $\g$ (the trace for matrix Lie algebras). Since $\mathcal{A}$ is an affine space, its tangent space at every point is canonically isomorphic to $\Omega^1(\Sigma, \g)$. So given $A \in \mathcal{A}$ and $A_1, A_2 \in T_A\mathcal{A} \cong \Omega^1(\Sigma, \g)$, we have $\bm\hat{\omega}_A(A_1, A_2) = \int_{\Sigma} \tr \; A_1\wedge A_2$. Note that $\bm\hat{\omega}$ is constant (independent of $A$), so $d\bm\hat{\omega} = 0$. Further, the 2-form $\bm\hat{\omega}$ is clearly antisymmetric and non-degenerate (since the Killing form is). Remark finally that this construction only works on a surface. We can now state the famous theorem of Atiyah--Bott (see the end of chapter 9 in \cite{AtBott} for the unitary case, and section 1.8 in Goldman's paper \cite{Goldman.2} for the general case): \textit{the action of gauge transforms on the space of connections is hamiltonian and the moment map is the curvature}. Thus, the hamiltonian reduction $\mathcal{A}//\mathcal{G}$ is the moduli space of flat connections.
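The computation behind this theorem is short; we sketch it here (signs depend on orientation conventions). For $X \in \Lie(\mathcal{G})=\Omega^0(\Sigma,\g)$, the induced vector field on $\mathcal{A}$ is $\delta_X A = -d_AX = -(dX+[A,X])$. For $B \in T_A\mathcal{A}$, using Stokes' theorem on the closed surface $\Sigma$ and $\delta F(A)(B)=d_AB$, we get $$\iota_{\delta_X}\bm\hat{\omega}(B) = -\int_{\Sigma}\tr\, d_AX\wedge B = \int_{\Sigma}\tr\, X\, d_AB = \delta\left(\int_{\Sigma}\tr\, XF(A)\right)(B).$$ Hence the moment map is $A \mapsto F(A)$, paired with $\Lie(\mathcal{G})$ via $\int_\Sigma \tr$.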
Let us explain the moment map in more detail: the moment map $m$ is a map from $\mathcal{A}$ to $\Lie(\mathcal{G})^*$. The Lie algebra $\Lie(\mathcal{G})$ is equal to $\Omega^0(\Sigma, \g)$, so its dual is isomorphic to $\Omega^2(\Sigma, \g)$ via the pairing $\int_{\Sigma} \tr$. On the other hand, given a connection $A$, its curvature $F(A)$ is a $\g$-valued 2-form, i.e. an element of $\Omega^2(\Sigma,\g)$. Hence, the map $m$ is well-defined. \newpage \subsection{Parabolic reduction} \subsubsection{Setting and coordinates}\label{settingparab} Consider a hermitian vector bundle $V$ of degree 0 over a surface $\S$ equipped with a fixed line subbundle $L^*$ in the dual bundle $V^*$. Using the hermitian structure on $V$, we can identify $L^*$ with a line subbundle $L$ in $V$. Fix a reference complex structure giving local coordinates $(z, \bar{z})$ on $\S$. We want to mimic the Atiyah--Bott reduction for $G=\SL_n(\mathbb{C})$ on $V$ with the extra constraint of preserving $L$. Thus, we consider the subgroup $\mathcal{P}\subset \mathcal{G}$, consisting of those gauge transforms preserving $L$. We can write these as functions from $\S$ to the space of matrices of the form $$\begin{pmatrix} * & \cdots & * & * \\ \vdots & & \vdots & \vdots \\ * & \cdots & *& * \\ 0 & \cdots& 0 & * \\ \end{pmatrix}$$ i.e. preserving the last direction in the dual space. We want to compute and analyze the hamiltonian reduction $\mathcal{A}//\mathcal{P}$, which we call the \textbf{space of parabolic connections}. Since $\mathcal{P}\subset \mathcal{G}$, we know by the Atiyah--Bott theorem that the action of $\mathcal{P}$ on the space of connections $\mathcal{A}$ is hamiltonian with moment map $m: A\mapsto i^*F(A)$ where $i: \mathcal{P} \hookrightarrow \mathcal{G}$ is the inclusion and $i^*: \Lie(\mathcal{G})^* \twoheadrightarrow \Lie(\mathcal{P})^*$ the induced surjection on the dual Lie algebras.
Since $G=\SL_n(\mathbb{C})$, the map $i^*$ is explicitly given by forgetting the first $n-1$ entries in the last column. This means that $m^{-1}(\{0\})$ is the space of all $A \in \mathcal{A}$ such that the curvature $F(A)$ has rank at most 1. More precisely, $F(A)$ is of the form $$\begin{pmatrix} 0& \cdots & 0 & \xi_n\\ \vdots & & \vdots &\vdots\\ \vdots & & \vdots & \xi_2\\ 0& \cdots &0 & 0 \\ \end{pmatrix}.$$ To get coordinates on the hamiltonian reduction $\mathcal{A}//\mathcal{P}$, take a connection $A\in \mathcal{A}$ and decompose it into its holomorphic and anti-holomorphic parts: $A=A_1 + A_2$. As a covariant derivative, we set $\nabla = \partial + A_1$ and $\bar{\nabla} = \bar{\partial}+ A_2$. For a generic connection $A$, via a parabolic gauge, it is possible to reduce $A_1$ to a companion matrix \begin{equation}\label{firstmatrix} A_1 \sim \begin{pmatrix} & & & \bm\hat{t}_n\\ 1 & & &\vdots \\ & \ddots & &\bm\hat{t}_2 \\ & & 1 & 0 \\ \end{pmatrix}dz. \end{equation} The existence of such a gauge is proven in Appendix \ref{appendix:B}. The precise condition on $A$ is the following: we need the existence of a basis of the form $(s,\nabla s, \nabla^2 s, ..., \nabla^{n-1}s)$ where $s$ is some section. This is equivalent to the Griffiths transversality condition, which in our setting says that the bundle $V$ admits a filtration $\mathcal{F}$ such that $$\nabla : \mc{F}_i/\mc{F}_{i-1} \rightarrow \mc{F}_{i+1}/\mc{F}_i\otimes K$$ is well-defined and an isomorphism for all $i$. \begin{Remark} If all the objects ($V, \bm\hat{t}_k$ and the section $s$) are chosen to be holomorphic, we recover the notion of an oper. \end{Remark} Reducing $A_1$ to the form above means choosing a basis of the form $B= (s, \nabla s, \nabla^2 s,..., \nabla^{n-1}s)$. This uses up all the gauge freedom.
A connection in $\mathcal{A}//\mathcal{P}$ verifies $[\nabla, \bar{\nabla}]\nabla^i s = 0$ for $i=0,1,...,n-2$, since $[\nabla, \bar{\nabla}] = F(A)$ is the curvature, which is concentrated in the last column. It follows that $\bar{\nabla}\nabla^i s = \nabla^i \bar{\nabla} s$ for all $i=1,...,n-1$. Thus, the connection is fully described by $\nabla^n s$ and $\bar{\nabla}s$. We can write these expressions in the basis $B$: \begin{equation}\label{eqqq1} \nabla^n s = \bm\hat{t}_{n}s + \bm\hat{t}_{n-1} \nabla s+...+\bm\hat{t}_{2}\nabla^{n-2}s = \bm\hat{P}(\nabla)s \end{equation} \begin{equation}\label{eqqq2} \bar{\nabla}s = \bm\hat{\mu}_1 s + \bm\hat{\mu}_2 \nabla s + ... + \bm\hat{\mu}_n \nabla^{n-1}s = \bm\hat{Q}(\nabla)s. \end{equation} Notice that $\bm\hat{t}_1=0$ since $\tr A_1=0$. The second part $A_2$ is uniquely determined by its first column, given by equation \eqref{eqqq2}. Since $\bar{\nabla}\nabla^i s = \nabla^i \bar{\nabla} s$ for $i=1,...,n-1$, the $i$-th column of $A_2$ is obtained by applying $\nabla$ $(i-1)$ times to the first column. We get a 1-form of the following type: \begin{equation}\label{secondmatrix} A_2 \sim \begin{pmatrix} \bm\hat{\mu}_1& \partial \bm\hat{\mu}_1+\bm\hat{\mu}_n \bm\hat{t}_n & \cdots\\ \bm\hat{\mu}_2 & \bm\hat{\mu}_1+\partial \bm\hat{\mu}_2+\bm\hat{\mu}_n \bm\hat{t}_{n-1} & \cdots \\ \vdots & \vdots & \vdots \\ \bm\hat{\mu}_{n-1} & \bm\hat{\mu}_{n-2}+\partial \bm\hat{\mu}_{n-1}+\bm\hat{\mu}_n \bm\hat{t}_2 & \cdots \\ \bm\hat{\mu}_n & \bm\hat{\mu}_{n-1} +\partial \bm\hat{\mu}_n & \cdots \end{pmatrix}d\bar{z}. \end{equation} \begin{Remark} Notice the similarity between Equations \eqref{eqqq1} and \eqref{eqqq2}, and the ideal from Equation \eqref{ideal-I}. If we formally replace $\nabla$ by $p$ and $s$ by $1$, we get precisely the generators of the ideal from Equation \eqref{ideal-I}.
\end{Remark} The functions $(\bm\hat{\mu}_2, ..., \bm\hat{\mu}_n, \bm\hat{t}_{2}, ..., \bm\hat{t}_{n})$ completely parameterize $\mathcal{A}//\mathcal{P}$ since it is possible to express $\bm\hat{\mu}_{1}$ in terms of these using that the second matrix is traceless. We call an element of $\mathcal{A}//\mathcal{P}$ a \textbf{parabolic connection}. We consider $\mc{A}//\mc{P}$ as a subspace of $\mc{A}$ by using the representative $A_1 + A_2$ with $A_1$ of the form \eqref{firstmatrix} and $A_2$ as in \eqref{secondmatrix}. \begin{Remark} The existence of a filtration on $V$ satisfying the Griffiths transversality condition depends on the complex structure on $\S$. So $(\bm\hat{\mu}_2, ..., \bm\hat{\mu}_n, \bm\hat{t}_{2}, ..., \bm\hat{t}_{n})$ are local coordinates on $\mc{A}//\mc{P}$ depending on the complex structure. By varying the complex structure on $\S$, we cover all parabolic connections. \end{Remark} The curvature of a parabolic connection is concentrated in the last column: $[\nabla, \bar{\nabla}]\nabla^{n-1}s = \xi_n s + \xi_{n-1} \nabla s + ...+ \xi_2 \nabla^{n-2}s$. The following proposition allows one to compute the parabolic curvature easily. \begin{prop}[Parabolic curvature]\label{thmcourbure} $[\nabla^n,\bar{\nabla}]s = \sum_{k=2}^n \xi_k\nabla^{n-k}s.$ \end{prop} \begin{proof} Since the first $n-1$ columns of the curvature $F(A)$ are 0, we have $[\nabla, \bar{\nabla}]\nabla^is = 0$ for $i=0,1,...,n-2$. Using Leibniz's rule and induction on $k$, we can prove that $[\nabla^k, \bar{\nabla}]s = 0$ for $k=1,...,n-1$. Indeed, it is true for $k=1$ and we have $[\nabla^{k+1}, \bar{\nabla}]s = \nabla [\nabla^k,\bar{\nabla}]s + [\nabla,\bar{\nabla}]\nabla^k s=0$ whenever $k\leq n-2$. \noindent Therefore, we get $$[\nabla^n, \bar{\nabla}]s = \nabla[\nabla^{n-1}, \bar{\nabla}]s + [\nabla, \bar{\nabla}]\nabla^{n-1}s = [\nabla, \bar{\nabla}]\nabla^{n-1}s = \sum_{k=2}^n \xi_k\nabla^{n-k}s$$ by the last column of the curvature.
\end{proof} Inside the non-commutative ring of differential operators, we define the left-ideal $\bm\hat{I}=\langle \nabla^n-\bm\hat{P}, -\bar{\nabla}+\bm\hat{Q} \rangle$ where $\bm\hat{P}$ and $\bm\hat{Q}$ are defined in equations \eqref{eqqq1} and \eqref{eqqq2} respectively. We can express the previous proposition as $$[\nabla^n,\bar{\nabla}]=\sum_{k=2}^n \xi_k\nabla^{n-k} \mod \bm\hat{I}.$$ Notice finally that the $\bm\hat{\mu}_k$ and $\bm\hat{t}_k$ are not tensors on $\S$ (their transformation rules under a coordinate change $z\mapsto w(z)$ are quite complicated). We will see in the following Section \ref{parabolicwithlambda} that if we introduce a parameter $\l$, we get tensors in the semiclassical limit. \subsubsection{Example \texorpdfstring{$n=2$}{n=2}}\label{casen2} Consider a parabolic $\SL(2, \C)$-connection $A=A_1+A_2$, decomposed into the $(1,0)$ and $(0,1)$-part. The first matrix $A_1$ is a companion matrix of the form $\left( \begin{smallmatrix} 0 & \bm\hat{t}_2 \\ 1 & 0 \end{smallmatrix} \right)$. Let us compute the transformed matrix $A_2$. It is the image of the operator $\bar{\nabla}$ in a basis $(s,\nabla s)$. Put $\bar{\nabla}s = \bm\hat{\mu}_1 s+ \bm\hat{\mu}_2 \nabla s.$ The second column can be computed using $\bar{\nabla}\nabla s = \nabla \bar{\nabla}s - [\nabla,\bar{\nabla}]s = \nabla \bar{\nabla}s$ and $\nabla^2 s = \bm\hat{t}_2s$. Since the trace of the matrix is zero, we get $\bm\hat{\mu}_1 = -\frac{1}{2}\partial \bm\hat{\mu}_2$. Hence $$A_2 = \left( \begin{array}{cc} -\frac{1}{2}\partial \bm\hat{\mu}_2 & -\frac{1}{2}\partial^2 \bm\hat{\mu}_2+\bm\hat{t}_2\bm\hat{\mu}_2 \\ \bm\hat{\mu}_2 & \frac{1}{2}\partial \bm\hat{\mu}_2 \end{array} \right).$$ The curvature is of the form $\left( \begin{smallmatrix} 0 & \xi_2 \\ 0 & 0 \end{smallmatrix} \right)$ where $$\xi_2 = (\bar{\partial}-\bm\hat{\mu}_2\partial-2\partial\bm\hat{\mu}_2)\bm\hat{t}_2+\frac{1}{2}\partial^3\bm\hat{\mu}_2.$$ Suppose now that the curvature $\xi_2$ vanishes.
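The expression for $\xi_2$ can be checked by computer algebra. The following \texttt{sympy} sketch (ours; $t$ and $m$ stand for $\bm\hat{t}_2$ and $\bm\hat{\mu}_2$) verifies, with the convention $F=\partial A_2-\bar{\partial}A_1+[A_1,A_2]$, that the curvature is concentrated in the upper-right entry and that this entry is $\xi_2$ up to an overall sign depending on the orientation convention:

```python
# Symbolic check (ours, not in the paper) of the n=2 curvature formula,
# with t standing for \hat{t}_2 and m for \hat{\mu}_2.
import sympy as sp

z, zb = sp.symbols('z zb')          # zb plays the role of \bar{z}
t = sp.Function('t')(z, zb)
m = sp.Function('m')(z, zb)
d  = lambda f: sp.diff(f, z)        # \partial
db = lambda f: sp.diff(f, zb)       # \bar{\partial}

A1 = sp.Matrix([[0, t], [1, 0]])                      # companion form
A2 = sp.Matrix([[-d(m)/2, -d(d(m))/2 + t*m],
                [m,        d(m)/2]])

# curvature of the connection (\partial + A1, \bar\partial + A2)
F = sp.diff(A2, z) - sp.diff(A1, zb) + A1*A2 - A2*A1

xi2 = db(t) - m*d(t) - 2*d(m)*t + d(d(d(m)))/2

# F is concentrated in the upper-right corner; the entry equals xi2 up
# to the overall sign coming from the orientation of dz wedge dzb.
assert sp.expand(F[0, 0]) == 0
assert sp.expand(F[1, 0]) == 0
assert sp.expand(F[1, 1]) == 0
assert sp.expand(F[0, 1] + xi2) == 0
```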
We can then look for flat sections $\Psi = (\psi_1,\psi_2)$. The first condition $(\partial+A_1)\Psi=0$ gives $\psi_1=-\partial \psi_2$ and $$(\partial^2-\bm\hat{t}_2)\psi_2 = 0.$$ The second condition $(\bar{\partial}+A_2)\Psi= 0$ only gives one extra condition: $$(\bar{\partial}-\bm\hat{\mu}_2\partial+\frac{1}{2}\partial\bm\hat{\mu}_2)\psi_2 = 0.$$ For $\bm\hat{\mu}_2=0$ this just means that $\psi_2$ is holomorphic and we get an ordinary differential equation $(\partial^2-\bm\hat{t}_2)\psi_2 = 0$. For $\bm\hat{\mu}_2\neq 0$, the second condition is still a holomorphicity condition, but with respect to another complex structure. For general $n$, a flat section $\Psi=(\psi_k)_{1\leq k \leq n}$ is determined by $\psi_n$: we have $\psi_{n-k}=(-1)^k\del^k\psi_n$ up to lower-order terms involving the $\bm\hat{t}_i$, and there are two equations on $\psi_n$. The first equation comes from the last column in $A_1$, so directly generalizes to $(\partial^n-\bm\hat{t}_1\partial^{n-1}-...-\bm\hat{t}_n)\psi_n = 0$. The generalized holomorphicity condition comes from the last row in $A_2$: \begin{equation}\label{diffops} (-\bar{\partial}+\bm\hat{\alpha}_{nn}+\bm\hat{\alpha}_{n,n-1}\partial+...+\bm\hat{\alpha}_{n,1}\partial^{n-1})\psi_n=0 \end{equation} where $\bm\hat{\alpha}_{ij}$ denote the entries of $A_2$, which have an explicit but complicated expression in terms of the $\bm\hat{\mu}_k$ and $\bm\hat{t}_k$. \subsection{Infinitesimal action and flat connections}\label{symponconnections} To go from parabolic connections to flat connections, we would like to apply a second hamiltonian reduction. We define an infinitesimal action on $\mc{A}//\mc{P}$, coming from changing the line subbundle $L^*$, and show that it admits a moment map whose zero-set consists of flat connections. We conjecture that the infinitesimal action can be integrated to an action of the group of higher diffeomorphisms.
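Returning to the flat-section computation for $n=2$ above, the reduction of $(\partial+A_1)\Psi=0$ to the scalar equation $(\partial^2-\bm\hat{t}_2)\psi_2=0$ can also be verified mechanically; the following \texttt{sympy} sketch (ours, with $t$ standing for $\bm\hat{t}_2$) checks it:

```python
# Check (ours) that for n = 2 a flat section of d/dz + A1, with A1 in
# companion form, is Psi = (-psi2', psi2) where psi2'' = t * psi2.
import sympy as sp

z = sp.symbols('z')
t = sp.Function('t')(z)          # \hat{t}_2
psi2 = sp.Function('psi2')(z)

A1 = sp.Matrix([[0, t], [1, 0]])
Psi = sp.Matrix([-sp.diff(psi2, z), psi2])   # candidate flat section

residual = sp.diff(Psi, z) + A1 * Psi
# second component vanishes identically; the first is t*psi2 - psi2'',
# i.e. flatness is exactly the scalar ODE (d^2/dz^2 - t) psi2 = 0.
assert sp.expand(residual[1]) == 0
assert sp.expand(residual[0] - (t*psi2 - sp.diff(psi2, z, 2))) == 0
```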
\subsubsection{Infinitesimal action} Recall that the description in coordinates of the space of parabolic connections relies on a basis $B$ of the form $(s, \nabla s, ..., \nabla^{n-1}s)$. Using this basis, we have seen in \ref{indbundle} that the line subbundle $L^*$ is generated by the dual to $s$. So varying $L^*$ is equivalent to varying $s$. A variation $\delta s$ of the section $s$ can be expressed in the basis $B$: $$\delta s = v_1 s+v_2\nabla s +...+v_{n}\nabla^{n-1}s = \bm\hat{H}s$$ where $\bm\hat{H}=v_1+v_2\nabla+...+v_{n}\nabla^{n-1}$ is a differential operator of degree $n-1$. It generates an infinitesimal action on the whole basis $B$, and thus on the space of parabolic connections. Let us describe how to compute the matrix $X$ describing the infinitesimal base change induced by $\delta s$. Write the base change as $$(s,\nabla s,...,\nabla^{n-1}s) \mapsto (s,\nabla s,...,\nabla^{n-1}s)+\varepsilon (\delta s,\nabla \delta s,...,\nabla^{n-1}\delta s).$$ So the first column of $X$ is just given by $Xs = \delta s = v_1s+v_2\nabla s+...+v_n\nabla^{n-1}s$. The second column is given by $X\nabla s = \nabla \delta s = \nabla(v_1s+v_2\nabla s+...+v_n\nabla^{n-1}s)$. We notice that the construction of this matrix $X$ is exactly the same as for the matrix $A_2$ (see equation \eqref{secondmatrix}) with the only difference that the variables in $A_2$ are called $\bm\hat{\mu}_k$ instead of $v_k$. Since both matrices are traceless, the expressions for $v_1$ and $\bm\hat\mu_1$ coincide as well. \begin{prop}\label{computeX} The matrix $X$ of the gauge coming from a variation of $s$ is given by $$X=A_2 \mid_{\bm\hat{\mu}_k\mapsto v_k}.$$ \end{prop} Notice that if $X$ is a parabolic gauge, i.e. the first $n-1$ entries of the last column are zero, then $v_k=0$ for all $k$, so $X=0$. Let us compute the action on our coordinates $(\bm\hat{t}_k, \bm\hat{\mu}_k)$.
The coordinates $\bm\hat{t}_k$ are given by the relation $\nabla^n s = \bm\hat{P}s$ where $\bm\hat{P}=\bm\hat{t}_2\nabla^{n-2}+...+\bm\hat{t}_n$. Similarly, the coordinates $\bm\hat{\mu}_k$ are given by $\bar{\nabla}s = \bm\hat{Q}s$ where $\bm\hat{Q} = \bm\hat{\mu}_1+\bm\hat{\mu}_2\nabla+...+\bm\hat{\mu}_n\nabla^{n-1}$. \begin{prop}\label{variation-infinit} The infinitesimal action induces the variations $\delta \bm\hat{P}$ and $\delta \bm\hat{Q}$ given by \begin{align*} \delta \bm\hat{P} &= [\bm\hat{H}, -\nabla^n+\bm\hat{P}] \mod \bm\hat{I} \\ \delta \bm\hat{Q} &= [\bm\hat{H}, -\bar{\nabla}+\bm\hat{Q}] \mod \bm\hat{I} \end{align*} where $\bm\hat{I} = \langle \nabla^n-\bm\hat{P}, -\bar{\nabla}+\bm\hat{Q} \rangle$ is a left-ideal of differential operators. \end{prop} \begin{proof} The proposition follows directly from the observation that the variation $\delta \bm\hat{P}$ satisfies $$\nabla^n (s+\varepsilon \bm\hat{H}s) = (\bm\hat{P}+\varepsilon \delta \bm\hat{P})(s+\varepsilon \bm\hat{H}s).$$ The same applies to the variation $\delta \bm\hat{Q}$. \end{proof} \subsubsection{Moment map} The infinitesimal action on the space of parabolic connections admits a moment map. This is not surprising since we can see $\mc{A}//\mc{P}$ as a subset of $\mc{A}$ and the gauge action on $\mc{A}$ is hamiltonian. The moment map is nothing but the parabolic curvature: \begin{thm}\label{moment-map-symp} The infinitesimal action on the space of parabolic connections $\mathcal{A}//\mathcal{P}$ is hamiltonian with moment map $$m(\bm\hat{t}_i,\bm\hat{\mu}_j).(v_2,...,v_n) = \int_{\Sigma} \sum_{i=1}^n x_{n,n+1-i}\xi_i$$ where $x_{i,j}$ are the matrix elements of the gauge $X$ and $\xi_i$ is the parabolic curvature of the parabolic connection described by $(\bm\hat{t}_i,\bm\hat{\mu}_i)_{2\leq i \leq n}$. \end{thm} Some explanation for the moment map is necessary: $m$ should be a map from $\mathcal{A}//\mathcal{P}$ to the dual of the Lie algebra of the acting group.
In our case, we only have an infinitesimal action, but we know that this action is by special gauge transformations $X$ parametrized by $(v_2,...,v_n)$. So the image of $m$ is some subset of the dual Lie algebra of the group of all gauge transformations. So we are in the setting of Atiyah--Bott. \begin{proof} Our computation is analogous to the Atiyah--Bott reduction. An infinitesimal gauge transform given by $X$ affects $A_1$ and $A_2$ by \begin{align*} \chi(A_1) = [X,A_1]-\partial X \\ \chi(A_2) = [X,A_2] -\bar{\partial}X \end{align*} The symplectic form on $\mathcal{A}//\mathcal{P}$ is the restriction of the one on $\mathcal{A}$, so we can compute \begin{align*} \iota_{\chi}\omega_{\mathcal{A}//\mathcal{P}} &= \int \tr \left(\chi(A_1)\delta A_2-\chi(A_2)\delta A_1 \right)\\ &= \int \tr ([X,A_1]-\partial X )\delta A_2-([X,A_2] -\bar{\partial}X)\delta A_1 \\ &= \int \tr ([A_1,\delta A_2]+\delta\partial A_2-[A_2,\delta A_1]-\delta\bar{\partial}A_1)X \\ &= \int \tr \delta (\partial A_2-\bar{\partial}A_1+[A_1,A_2])X \\ &= \delta \int \tr F(A)X \\ &= \delta \int \sum_{i=1}^n x_{n,n+1-i}\xi_i. \end{align*} Therefore $$m= \int_{\Sigma} \sum_{i=1}^n x_{n,n+1-i}\xi_i.$$ \end{proof} As a consequence, the vanishing of the moment map implies flatness of the parabolic connection. \begin{conj} The infinitesimal action on the space of parabolic connections can be integrated to an action of the group of higher diffeomorphisms, or a deformation of this group. The double hamiltonian reduction $(\mc{A}//\mc{P})//\Symp_0$ is an open dense subset of the character variety. \end{conj} \section{Parabolic reduction of \texorpdfstring{$h$}{h}-connections}\label{parabolicwithlambda} In this section, we study the parabolic reduction on $h$-connections to get very close to the twistor space description from Figure \ref{HK}. 
The main idea is the following: a point in $\cotang$ is a $\Symp_0$-equivalence class of ideals of the form $$I=\langle -p^n+t_2p^{n-2}+...+t_n, -\bar{p}+\mu_1+\mu_2 p+...+\mu_n p^{n-1} \rangle.$$ Replace the polynomials by the components of an $h$-connection using the rule $p \mapsto \nabla=h\del + A_1(h)$ and $\bar{p} \mapsto \bar{\nabla}=h\delbar+A_2(h)$ where $h$ is a formal parameter. This corresponds to the deformation of a higher complex structure $\Phi$ to $\Phi+hd+hA+h^2\Psi=h(d+\l\Phi+A+\l^{-1}\Psi)$ where $\l=h^{-1}$. For $h\neq 0$ we divide the connection by $h$ to get a usual connection with parameter $\l$. For $\l \in \C^*$ fixed, we get the same space as described in the previous Section \ref{parabolicreduction}, i.e. the space of flat parabolic connections. For $\l \rightarrow \infty$ we get the cotangent bundle $\cotang$. \subsection{Parametrization}\label{param-lambda} Consider a surface $\S$ with a reference complex structure. We consider two higher complex structures $I$ and $I'$ on $\S$. From \ref{indbundle}, the higher complex structure $I$ induces a bundle $V$ and a line subbundle $L^*$ in $V^*$. We also get a bundle $W$ from $I'$. Choose an isomorphism $\chi$ between $V$ and $W$ and a hermitian structure on $V$. The hermitian structure allows us to identify $L^*$ with a line subbundle $L$ in $V$. We consider connections on $V$ of the form $$C(\l)=\l \Phi + A + \l^{-1}\Psi$$ where $\Phi$ and $\Psi$ are induced from the higher complex structures $I$ and $I'$, and $A$ is some generic connection on $V$. We define $C_1(\l)=\l \Phi_1 + A_1 + \l^{-1}\Psi_2$ and $C_2(\l)=\l \Phi_2 + A_2 + \l^{-1}\Psi_1$, i.e. the $(1,0)$-part and $(0,1)$-part of $C(\l)$. We also define $\nabla=\partial + C_1(\l)$ and $\bar{\nabla}=\bar{\partial}+C_2(\l)$.
As for the case without parameter, assuming Griffiths transversality, there is a parabolic gauge which transforms $C(\l)$ to \begin{equation}\label{paragauge} \left(\begin{array}{cccc} & & & \bm\hat{t}_n(\l) \\ 1& & & \vdots \\ & \ddots & & \bm\hat{t}_2(\l) \\ & & 1& 0 \end{array}\right) dz + \left(\begin{array}{cc} \bm\hat{\mu}_1(\l) & \\ \bm\hat{\mu}_2(\l)& \bm\hat{\alpha}_{ij}(\l)\\ \vdots & \\ \bm\hat{\mu}_n(\l) & \end{array}\right) d\bar{z} \end{equation} where $\bm\hat{\alpha}_{ij}(\l)$ and $\bm\hat{\mu}_1(\l)$ are explicit functions of the other variables. Thus, the space is parametrized by $(\bm\hat{t}_i(\l), \bm\hat{\mu}_i(\l))_{i=2,...,n}$. This representative comes from a basis of the form $(s, \nabla s, ..., \nabla^{n-1}s)$ for some section $s$. We then get our coordinates by \begin{equation}\label{paraboliccoord1} \nabla^n s = \bm\hat{t}_{n}(\l)s + \bm\hat{t}_{n-1}(\l) \nabla s+...+\bm\hat{t}_{2}(\l)\nabla^{n-2}s \end{equation} \begin{equation}\label{paraboliccoord2} \bar{\nabla}s = \bm\hat{\mu}_1(\l) s + \bm\hat{\mu}_2(\l) \nabla s + ... + \bm\hat{\mu}_n(\l) \nabla^{n-1}s. \end{equation} One can compute the $\bm\hat{\alpha}_{ij}(\l)$ using $\bar{\nabla}\nabla^ks = \nabla^k\bar{\nabla}s$ for $k\leq n-1$ which holds since the curvature $[\nabla, \bar{\nabla}]$ is concentrated in the last column. \begin{example}\label{examplen2} Take $n=2$ and consider $\Phi_1 = \left(\begin{smallmatrix} 0 & 0 \\ b_1 & 0\end{smallmatrix}\right)$, $A_1 = \left(\begin{smallmatrix} a_0 & a_1 \\ a_2 & -a_0\end{smallmatrix}\right)$, $\Phi_2=\mu_2\Phi_1$. We consider the special case where $C(\l)$ satisfies a reality constraint giving $A_2 = -A_1^{\dagger}$ and $\Psi = \Phi^{\dagger}$.
So we have $$C_1(\l)=\begin{pmatrix} a_0 & a_1+\l^{-1}\bar{\mu}_2\bar{b}_1 \\ a_2+\l b_1 & -a_0\end{pmatrix}\; \text{,} \; C_2(\l)=\begin{pmatrix} -\bar{a}_0 & -\bar{a}_2+\l^{-1}\bar{b}_1 \\ -\bar{a}_1+\l\mu_2b_1 & \bar{a}_0\end{pmatrix}.$$ We look for $P=\left(\begin{smallmatrix} p_1 & p_2 \\ 0 & 1/p_1\end{smallmatrix}\right)$ such that $$P C_1(\l)P^{-1}+ P\partial P^{-1} = \begin{pmatrix} 0 & \bm\hat{t}_2(\l) \\ 1 & 0\end{pmatrix}.$$ Multiplying by $P$ from the right, one can solve the system. One finds $p_1=(\l b_1+a_2)^{1/2}$ and $p_2=-\frac{a_0}{p_1}+\frac{\partial p_1}{p_1^2}$. Hence $$\bm\hat{t}_2(\l)=\l a_1b_1+ \text{ constant term }+\l^{-1}\bar{\mu}_2 a_2\bar{b}_1.$$ Transforming $C_2(\l)$ with $P$ we get $$\bm\hat{\mu}_2(\l) = \frac{-\bar{a}_1+\l\mu_2 b_1}{\l b_1+a_2}.$$ Assume in addition that $C(\l)$ is flat. Then we have $a_2=-\bar{\mu}_2\bar{a}_1$, so we can study the behavior of $\bm\hat{\mu}_2(\l)$ near $\infty$ and 0. For $\l\rightarrow \infty$, we can expand the rational expression for $\bm\hat{\mu}_2(\l)$ to get $$\bm\hat{\mu}_2(\l)=\mu_2+(\mu_2\bar{\mu}_2-1)\sum_{k=1}^\infty \frac{\bar{\mu}_2^{k-1}\bar{a}_1^k}{b_1^k}\l^{-k}.$$ For $\l \rightarrow 0$, we get $$\bm\hat{\mu}_2(\l)=\frac{1}{\bar{\mu}_2}+(1-\mu_2\bar{\mu}_2)\sum_{k=1}^\infty \frac{b_1^k}{\bar{\mu}_2^{k+1}\bar{a}_1^k}\l^{k}.$$ Notice that we get ${}_2\mu=1/\bar{\mu}_2$ as leading term (see Section \ref{dualcomplexstructure}). \end{example} The example shows several phenomena which hold in general: \begin{prop}\label{parametrisationlambda} The $\bm\hat{\mu}_k(\l)$ are rational functions in $\l$. The highest term in $\l$ when $\l \rightarrow \infty$ is $\l^{2-k}\mu_k$ where $\mu_k$ is the higher Beltrami differential from the $n$-complex structure. The $\bm\hat{t}_k(\l)$ are also rational functions in $\l$.
For $\l \rightarrow \infty$, the highest term is given by $\l^{k-1}t_k$ where $$t_k=\tr A_1\Phi_1^{k-1}.$$ \end{prop} We will see later that $(\mu_k, t_k)$ is a point of the cotangent bundle $\cotang$, which justifies the notation. For $\l\rightarrow 0$, we get the same properties for coordinates coming from $\Psi$. In Section \ref{finalstep} we will impose a reality constraint so that these coordinates become a conjugated higher complex structure with cotangent vector. \begin{proof} The whole point is to analyze equations \eqref{paraboliccoord1} and \eqref{paraboliccoord2} in detail. Let us start with $$\bar{\nabla}s = \bm\hat{\mu}_1(\l) s + \bm\hat{\mu}_2(\l) \nabla s + ... + \bm\hat{\mu}_n(\l) \nabla^{n-1}s.$$ Since $\bar{\nabla}s=(\bar{\partial}+\l\Phi_2+A_2+\l^{-1}\Psi_1)s$, the highest $\l$-term is $\l\Phi_2s=\l\mu_2\Phi_1s+...+\l\mu_n\Phi_1^{n-1}s$. On the other hand, the highest term of $\nabla^ks$ is $\l^k\Phi_1^ks$ for $0\leq k \leq n-1$. For generic $s$ the set $(s, \Phi_1s, ..., \Phi_1^{n-1}s)$ is a basis. Hence, we can compare the highest terms and deduce that for $\l\rightarrow \infty$: $$\bm\hat{\mu}_k(\l) = \l^{2-k}\mu_k+\text{ lower terms}.$$ In any case, we can decompose $\nabla^ks$ and $\bar{\nabla}s$ in the basis $(s,\Phi_1s,...,\Phi_1^{n-1}s)$ and notice that the defining equation expresses $\bm\hat{\mu}_k$ as a quotient of two polynomials in $\l$, i.e. $\bm\hat{\mu}_k$ is a rational function in $\l$. The same decomposition gives that $\bm\hat{t}_k$ is a rational function in $\l$. It remains to study the asymptotic behavior of $\bm\hat{t}_k$. For that, we have to study $$\nabla^n s = \bm\hat{t}_{n}(\l)s + \bm\hat{t}_{n-1}(\l) \nabla s+...+\bm\hat{t}_{2}(\l)\nabla^{n-2}s.$$ The highest term of $\nabla^ns$ is not $\l^n \Phi_1^n s$ since $\Phi_1^n=0$. The next term is given by $$\l^{n-1}\sum_{l=0}^{n-1}\Phi_1^l \circ(\partial+A_1)\circ\Phi_1^{n-1-l}s$$ where $\circ$ denotes the composition of differential operators.
On the other hand, the highest terms are given by $\bm\hat{t}_k\l^{n-k}\Phi_1^{n-k}s$. When $\l$ goes to infinity, we compare coefficients in the basis $(s, \Phi_1s, ..., \Phi_1^{n-1}s)$ as before. Using Dirac's ``bra-ket'' notation, we get \begin{align*} \l^{n-k}\bm\hat{t}_k &= \l^{n-1} \langle \Phi_1^{n-k}s \mid \sum_{l=0}^{n-1}\Phi_1^l \circ(\partial+A_1)\circ\Phi_1^{n-1-l} \mid s \rangle \\ &= \l^{n-1}\sum_{l=0}^{n-k} \langle \Phi_1^{n-k-l}s\mid (\partial +A_1)\circ\Phi_1^{n-1-l}\mid s\rangle \\ &= \l^{n-1}\sum_{l=0}^{n-k} \langle \Phi_1^{n-k-l}s\mid (\partial +A_1)\circ\Phi_1^{k-1}\mid \Phi_1^{n-k-l}s\rangle \\ &= \l^{n-1}\tr(\partial+A_1)\circ\Phi_1^{k-1}\\ &= \l^{n-1}\tr A_1 \Phi_1^{k-1}. \end{align*} In the last line, we used that $\tr \del \circ \Phi_1^{k-1} = 0$ since $\Phi_1$ is strictly lower triangular, a property preserved under differentiation. This precisely gives the expression for $t_k$ as stated in the proposition. \end{proof} At the end of Subsection \ref{settingparab} we have noticed that $\bm\hat{t}_k$ and $\bm\hat{\mu}_k$ do not transform as tensors. We now show that the highest terms, $t_k$ and $\mu_k$, are tensors. Recall that $K=T^{*(1,0)}\Sigma$ is the canonical bundle and that $\Gamma(.)$ denotes the space of sections. \begin{prop}\label{highesttermtensor} We have $t_i \in \Gamma(K^i)$ and $\mu_i \in \Gamma(K^{1-i}\otimes \bar{K})$. \end{prop} \begin{proof} Consider a holomorphic coordinate change $z\mapsto w(z)$. We compute how $\mu_i(z)$ and $t_i(z)$ change. For $\mu_i$, notice that $\Phi_1dz \mapsto \Phi_1 \frac{dz}{dw}dw$, so using $$\Phi_2d\bar{z} = \mu_2(z)\Phi_1dz+...+\mu_n\Phi_1^{n-1}dz^{n-1}$$ we easily get $\mu_i(z)=\frac{d\bar{z}/d\bar{w}}{(dz/dw)^{i-1}}\mu_i(w)$. For $t_i$, we use $t_i=\tr(\Phi_1^{i-1}A_1)$ where $\Phi_1$ and $A_1$ are both $(1,0)$-forms, thus $t_i$ is an $(i,0)$-form, i.e. a section of $K^i$.
\end{proof} \subsection{Infinitesimal action and higher diffeomorphisms}\label{inf-action-2} In \ref{symponconnections} we have described an infinitesimal action on the space of parabolic connections $\mc{A}//\mc{P}$. The same action exists on the space of parabolic $h$-connections $\mc{A}(h)//\mc{P}$. In particular, the moment map computed in Theorem \ref{moment-map-symp} stays the same. Here we analyze the infinitesimal action on $\mc{A}(h)//\mc{P}$, in particular its effect on the highest terms $\mu_k$ and $t_k$. We will see that on the highest terms, the action can be integrated to an action of higher diffeomorphisms on higher complex structures. There are two steps: a local analysis and a global analysis. \subsubsection{Local analysis} We prove that the infinitesimal action on the highest terms $\mu_k$ of the parabolic reduction is precisely the infinitesimal action of higher diffeomorphisms on the $n$-complex structure. So we can integrate this action to the group of higher diffeomorphisms. Take a change of section $\delta s = \bm\hat{v}_1s+\bm\hat{v}_2\nabla s+...+\bm\hat{v}_{n}\nabla^{n-1}s=\bm\hat{H}s$. We have previously seen in Proposition \ref{variation-infinit} that the change of coordinates $\delta \bm\hat{\mu}_k$ can be computed by $$\delta \bm\hat{Q}=[\bm\hat{H}, -\bar{\nabla}+\bm\hat{Q}] \mod \bm\hat{I}$$ where $\bm\hat{I}=\langle -\nabla^n+\bm\hat{t}_2\nabla^{n-2}+...+\bm\hat{t}_n, -\bar{\nabla}+\bm\hat{\mu}_1+\bm\hat{\mu}_2\nabla+...+\bm\hat{\mu}_n\nabla^{n-1} \rangle$ is a left-ideal in the space of differential operators. Since we have a parameter $\l$ in our setting, the variations $\bm\hat{v}_k$ also depend on $\l$. More precisely, for $k\geq 2$ we have that $\bm\hat{v}_k(\l)$ is a rational function in $\l$ with highest term $\l^{2-k}v_k$ when $\l \rightarrow \infty$. Notice that $\bm\hat{v}_1$ is not a free parameter, but depends on the others. It ensures that the trace of the gauge transform is zero.
One can compute that $\bm\hat{v}_1$ has highest term of degree 0. We can now state: \begin{thm}\label{actionsymponlambdaconn} The infinitesimal action on the highest terms $\mu_k$ of the coordinates $\bm\hat{\mu}_k(\l)$ of the space of parabolic connections with parameter is the same as the infinitesimal action of higher diffeomorphisms on the $n$-complex structure. Therefore, it can be integrated to an action of $\Symp_0(T^*\S)$. \end{thm} The reason the theorem holds is, roughly speaking, that the Poisson bracket is the semiclassical limit of the commutator of differential operators. The strategy of the proof is the following: we prove the theorem first for $\mu_2$, and then for $\mu_k$ ($k>2$) supposing $\mu_2=...=\mu_{k-1}=0$, which simplifies the computations. From \cite[Proposition 3]{FockThomas}, we know that the infinitesimal action of a Hamiltonian $H=v_2p+...+v_np^{n-1}$ on the higher Beltrami differentials is given by $$\delta \mu_2 = (\bar{\partial}-\mu_2\partial+\partial\mu_2)v_2$$ for $\mu_2$ and for $\mu_k$, supposing $\mu_2=...=\mu_{k-1}=0$, we simply have $$\delta \mu_k = \bar{\partial}v_{k}.$$ \begin{proof} First, we compute the variation of $\mu_2$ using Proposition \ref{variation-infinit} (we omit $\mod \bm\hat{I}$): $$\delta \bm\hat{\mu}_1+\delta \bm\hat{\mu}_2\nabla+...+\delta \bm\hat{\mu}_n\nabla^{n-1}=[\bm\hat{v}_1+\bm\hat{v}_2\nabla +...+\bm\hat{v}_{n}\nabla^{n-1}, -\bar{\nabla}\!+\!\bm\hat{\mu}_1\!+\!\bm\hat{\mu}_2\nabla+...+\bm\hat{\mu}_n\nabla^{n-1}].$$ Since the highest $\l$-term of $\bm\hat{\mu}_2$ is of degree 0, we are interested in the part of degree $0$ of the coefficient of $\nabla$ in $[\bm\hat{v}_1+\bm\hat{v}_2\nabla +...+\bm\hat{v}_{n}\nabla^{n-1}, -\bar{\nabla}+\bm\hat{\mu}_1+\bm\hat{\mu}_2\nabla+...+\bm\hat{\mu}_n\nabla^{n-1}] \mod \bm\hat{I}$.
We first look at contributions coming from $[\bm\hat{v}_k\nabla^{k-1}, \bm\hat{\mu}_l\nabla^{l-1}]$ for $k, l\geq 2$: If $k+l-3<n$ then we do not reduce modulo $\bm\hat{I}$, so the highest term in $\l$ is of degree $4-(k+l)$. Since we have $k, l\geq 2$, the highest term comes from $k=l=2$, which gives $v_2\del\mu_2-\mu_2\del v_2$. If $k+l-3\geq n$, we can have terms with $\nabla^m$ with $n\leq m \leq k+l-3$. So we have to use $\bm\hat{I}$ to reduce it. This reduction gives $\nabla^m = c(\l)\nabla+\text{ other terms}$, and the highest term of $c(\l)$ is of degree $m-2 \leq k+l-5$. Hence, the highest term for $[\bm\hat{v}_k\nabla^{k-1}, \bm\hat{\mu}_l\nabla^{l-1}]$ is of degree $4-(k+l)+k+l-5=-1$. The contributions from $\bm\hat{\mu}_1$ and $\bm\hat{v}_1$ also have degree at most $-1$. There is one more contribution in degree 0 coming from $[\bm\hat{v}_2\nabla, -\bar{\nabla}]$, which gives $\delbar v_2$. Therefore, we have $$\delta \mu_2 = (\bar{\partial}-\mu_2\partial+\partial\mu_2)v_2.$$ Now, suppose $\mu_2=...=\mu_{k-1}=0$ and compute the variation $\delta \mu_k$ under an action generated by $\bm\hat{v}_k\nabla^{k-1}+...+\bm\hat{v}_n\nabla^{n-1}$. From (again we omit $\mod \bm\hat{I}$) $$\delta \bm\hat{\mu}_k\nabla^{k-1}+...+\delta \bm\hat{\mu}_n\nabla^{n-1}=[\bm\hat{v}_k\nabla^{k-1} +...+\bm\hat{v}_{n}\nabla^{n-1}, -\bar{\nabla}+\bm\hat{\mu}_1+\bm\hat{\mu}_2\nabla+...+\bm\hat{\mu}_n\nabla^{n-1}]$$ we can analyze, as above, the contribution to the term of degree $2-k$ of the coefficient of $\nabla^{k-1}$. Since $\bm\hat{v}_l$ is of degree at most $2-l$ and $\bm\hat{\mu}_l$ of degree at most $1-l$ for $l<k$ (since we suppose that $\mu_l=0$), we can see that $[\bm\hat{v}_l\nabla^{l-1}, \bm\hat{\mu}_m\nabla^{m-1}]$ cannot contribute to the highest degree. The only contribution comes from the term with $-\bar{\nabla}$. Thus, $$\delta \mu_k = \bar{\partial}v_k.$$ This concludes the proof since the action of higher diffeomorphisms on the $n$-complex structure has the same expression.
\end{proof} \begin{Remark} We see that a term $\bm\hat{v}_k\nabla^{k-1}$ can influence $\bm\hat{\mu}_i$ with $i<k$ (unlike the case of higher complex structures, where $H$ acts like $H \mod I$), but it does not influence the highest term $\mu_i$. In the same vein, a term $\bm\hat{v}_k\nabla^{k-1}$ with $k>n$ acts on parabolic connections, but not on the highest terms. \end{Remark} \subsubsection{Global analysis} We show that the highest term in $\l$ in the zero-curvature condition relates $(\mu_k, t_k)$ to the cotangent bundle $T^*\bm\hat{\mc{T}}^n$. We know that the moment map of the Hamiltonian action of $\Symp_0(T^*\Sigma)$ on $\mc{A}//\mc{P}$ is given by $\xi_k=0$, i.e. the remaining curvature of a parabolic connection has to vanish. For connections with parameter $\l$, this gives $\xi_k(\l)=0$. \begin{thm}\label{conditioncinconnection} The highest term in $\l$ of $\xi_k(\l)=0$ gives the condition $(\mc{C})$ of the cotangent bundle $T^*\bm\hat{\mc{T}}^n$ (see theorem \ref{conditionC}). \end{thm} The proof strategy is to reduce the analysis of the highest term in the parabolic curvature to the expression $\xi_k \mod \bm\hat{t}^2 \mod \partial^2$. The following lemma shows that we then get condition $(\mc{C})$. \begin{lemma}\label{curvaturemodmod} The parabolic curvature modulo $\bm\hat{t}^2$ and $\del^2$ gives condition $(\mathcal{C})$ on $T^*\bm\hat{\mathcal{T}}^n$: $$\xi_k = (\bar{\partial}\!-\!\bm\hat{\mu}_2\partial\!-\!k\partial\bm\hat{\mu}_k)\bm\hat{t}_k-\sum_{l=1}^{n-k}\left((l\!+\!k)\partial\bm\hat{\mu}_{l+2}+(l\!+\!1)\bm\hat{\mu}_{l+2}\del\right)\bm\hat{t}_{k+l} \mod \bm\hat{t}^2 \mod \partial^2.$$ \end{lemma} The proof of this technical lemma is given in appendix \ref{appendix:C}. Using the lemma, we can prove theorem \ref{conditioncinconnection}: \begin{proof} From the explicit expression of $\xi_k(\l)$, we know that only derivatives, $\bm\hat{t}_k$'s and $\bm\hat{\mu}_k$'s appear.
Since we are only interested in the highest term, we can replace $\bm\hat{t}_k$ by $\l^{k-1}t_k$ and $\bm\hat{\mu}_k$ by $\l^{2-k}\mu_k$. Hence, we get an expression which is a tensor, since both $t_k$ and $\mu_k$ are tensors (by proposition \ref{highesttermtensor}). Since one term is $\bar{\partial}t_k$, we know that the highest term of $\xi_k(\l)$ is a section of $K^k\otimes \bar{K}$ and is of degree $k-1$ in $\l$. In addition, we know that every term in $\xi_k$, apart from $\bar{\partial}t_k$, has at least one partial derivative $\partial$, which adds a $K$-factor to the tensor. The rest is thus at most of type $K^{k-1}\otimes \bar{K}$. The $\bar{K}$-factor comes from a unique $\mu_m$ in each term. Once this $\mu_m$ is fixed, only partial derivatives $\partial$ and $t_k$'s contribute to the $K$-factor. Since $t_k$ comes with a factor $\l^{k-1}$, we see that whenever there is a term with a factor $t_it_j$, the contribution in $\l$ is $\l^{i+j-2}$, which is not optimal, since $t_{i+j}$ would contribute with $\l^{i+j-1}$. In the same vein, whenever there is a term with at least two $\partial$, so that the rest is a tensor of type at most $K^{k-2}\otimes \bar{K}$, this term does not have an optimal contribution in $\l$. Therefore, the highest term in $\xi_k(\l)$ is the same as in $\xi_k(\l) \mod \bm\hat{t}^2 \mod \partial^2$. Finally, the statement of the previous lemma \ref{curvaturemodmod} concludes the proof of theorem \ref{conditioncinconnection}. \end{proof} With the previous theorem, we now understand the global meaning of the highest terms $(\mu_k, t_k)$: the $\mu_k$ are the higher Beltrami differentials coming from the higher complex structure, whereas the $t_k$ form a cotangent vector to that higher complex structure. We can say that the \textit{semi-classical limit of $\mc{A}//\mc{P}//\Symp_0$ is $\cotang$}, which confirms the twistor space picture \ref{HK}. \begin{Remark} We have seen in proposition \ref{parametrisationlambda} that $t_k = \tr\Phi_1^{k-1}A_1$.
The previous theorem applied for trivial $n$-complex structure $\mu_k=0 \;\forall k$ gives $\delbar t_k = 0$. It can be checked directly that $\delbar \tr\Phi_1^{k-1}A_1 = 0$ using the flatness of $C(\l)$. \hfill $\triangle$ \end{Remark} The question remains how to determine the coefficients of lower degree in $\bm\hat{\mu}_k$ and $\bm\hat{t}_k$. This will be discussed in the next section. \section{Conjectural geometric approach to Hitchin components}\label{finalstep} In this section, we link our moduli space of higher complex structures $\T^n$ to Hitchin's component assuming a conjectural analog of the non-abelian Hodge correspondence in our setting: the existence and uniqueness of real twistor lines. For this, we impose a reality constraint and show that the monodromy of the zero-section is in $\PSL_n(\R)$. \subsection{Reality constraint} Take the setting of Section \ref{param-lambda}. The only difference is that now, we consider a higher complex structure $I$ and its conjugated higher complex structure $I' = \bar{I}$ from Subsection \ref{dualcomplexstructure}. We choose an isomorphism $\chi$ between the two induced bundles $V$ and $W$, and a hermitian structure $h$ on $V$. Recall from Section \ref{indbundle} that the bundles are equipped with a filtration, say $F$ on $V$ and $G$ on $W$. \begin{Remark}\label{hermitian-diag-assumption} Although it is not necessary, we believe that the isomorphism $\chi$ has to be chosen such that the filtrations $F$ and $\chi(G)$ are transversal, inducing a direct sum decomposition of $V$ into line bundles (a complete flag in each fiber). Further, we believe that the hermitian structure $h$ has to be diagonal with respect to this line bundle decomposition. \end{Remark} Consider $C(\l)=\l\Phi + A + \l^{-1}\Psi$ where $\Phi$ and $\Psi$ are given by $I$ and $I'$ respectively. We impose the reality condition $$-C(-1/\bar{\l})^{*_h}=C(\l)$$ where the star denotes the hermitian conjugation. 
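Expanding both sides in powers of $\l$ makes the content of this condition explicit (a routine verification added here; we treat $C(\l)$ coefficient-wise and suppress the interchange of the $dz$- and $d\bar{z}$-parts under conjugation):

```latex
% Using $\overline{-1/\bar{\l}} = -1/\l$ and $\overline{-\bar{\l}} = -\l$:
$$-C(-1/\bar{\l})^{*_h}
  \;=\; -\left(-\tfrac{1}{\l}\,\Phi^{*_h} + A^{*_h} - \l\,\Psi^{*_h}\right)
  \;=\; \l\,\Psi^{*_h} - A^{*_h} + \l^{-1}\,\Phi^{*_h}.$$
% Comparing with $C(\l)=\l\Phi+A+\l^{-1}\Psi$ term by term in $\l$ yields
% $\Psi^{*_h}=\Phi$, $A^{*_h}=-A$ and $\Phi^{*_h}=\Psi$.
```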
Notice that $-1/\bar{\l}$ is the diametrically opposed point of $\l$ in $\C P^1$. This condition is equivalent to $$\Psi = \Phi^{*_h} \;\text{ and } \; A = -A^{*_h}.$$ The analog of the non-abelian Hodge correspondence in our setting can be formulated as follows: given a point in $\cotang$ near the zero-section (a point in $U$), it induces a higher complex structure $I$ and a bundle $V$. Find an isomorphism $\chi$ and a hermitian structure $h$ such that $C(\l)=\l\Phi+A+\l^{-1}\Phi^{*_h}$ is a flat connection where $A$ is the Chern connection. Further, if the point is in the zero-section, show that the monodromy of $C(\l)$ is in $\PSL_n(\R)$. Admitting that the hermitian structure is nearly determined by the isomorphism $\chi$ (see Remark \ref{hermitian-diag-assumption}), we see that the most important ingredient is the isomorphism $\chi$. For trivial higher complex structure ($\Phi_2=0$), we are in the setting of Higgs bundles with a principal nilpotent Higgs field. In this case, we choose the same hermitian structure $h$. In the sequel, we assume that Griffiths transversality holds for $C(\l)$ for all $\l$ in order to apply the methods from Section \ref{parabolicwithlambda}. This transversality condition is satisfied for $\cotang$, since the induced bundle $V$ has a basis of the form $(s,\Phi_1s,...,\Phi_1^{n-1}s)$ (see Subsection \ref{indbundle}). Since Griffiths transversality is an open condition, $C(\l)$ satisfies this condition for $\l$ near infinity. \subsection{Standard form}\label{standard-form} We want to understand the differential equations implied by the flatness of $C(\l)=\l\Phi + A + \l^{-1}\Phi^{*_h}$. For this we reduce it to a standard form on a local chart where we decompose $\Phi=\Phi_1dz+\Phi_2d\bar{z}$. \begin{lemma}\label{phi1lower} There is a unitary gauge such that $\Phi_1$ becomes lower triangular with entries of coordinates $(i+1,i)$ given by positive real numbers of the form $e^{\varphi_i}$ for all $i=1,...,n-1$. 
\end{lemma} \begin{proof} The gauge acts by conjugation on $\Phi_1(z)$. Since $\Phi_1(z)$ is nilpotent, for every $z\in \S$, there is an invertible matrix $G(z)\in \GL_n(\C)$ such that $G\Phi_1G^{-1}$ is strictly lower triangular. Since $\Phi_1(z)$ varies smoothly with $z$, so does $G(z)$. We omit the dependence on $z$ in the rest of the proof. We decompose $G$ as $G=TU$ where $T$ is lower triangular (not strict) and $U$ is unitary (Gram-Schmidt). Then the matrix $U\Phi_1U^{-1} = T^{-1}(G\Phi_1G^{-1})T$ is already lower triangular. So we have conjugated $\Phi_1$ to a lower triangular matrix via a unitary gauge. Finally, we use a diagonal unitary gauge to change the arguments of the matrix elements with coordinates $(i+1,i)$ to zero. Since $\Phi_1$ is principal nilpotent, all these elements are non-zero, and after this gauge they are strictly positive real numbers, which can be written as $e^{\varphi_i}$ with $\varphi_i \in \R$. \end{proof} Notice that the unitary gauge preserves the operation $*$, so the form $\l\Phi+A+\l^{-1}\Phi^*$ is preserved. Now, we show that for $\mu=0$, the matrix $A_1$ is upper triangular. Notice the importance of $\Phi_1$ being principal nilpotent. \begin{lemma}\label{a1upper} For $\Phi_2=0$ (trivial higher complex structure) and $\Phi_1$ lower triangular, the flatness of $C(\l)$ implies that $A_1$ is upper triangular. \end{lemma} \begin{proof} We write $A_1=A_l + A_u$ where $A_l$ and $A_u$ are respectively the strictly lower and the (not strictly) upper part of $A_1$. Thus we have $A_2 = -A_l^{*}-A_u^{*}$. The flatness condition at the term $\l$ gives $$0=\bar{\partial}\Phi_1 + [\Phi_1,A_u^{*}]+[\Phi_1,A_l^{*}].$$ Since the first two terms are lower triangular (the operation $*$ exchanges upper and lower triangular matrices), so is the third term $[\Phi_1,A_l^{*}]$. A simple computation shows that a commutator between a principal nilpotent lower triangular matrix and a non-zero strictly upper triangular matrix can never be strictly lower triangular. Thus, $A_l=0$.
\end{proof} \subsection{Case \texorpdfstring{$n=2$}{n=2} and \texorpdfstring{$n=3$}{n=3}}\label{n2n3} Let us study the examples of smallest rank, those with $n=2$ and $n=3$. We work locally, so we can suppose that the $n$-complex structure is trivial, i.e. $\mu_k=0$ for $k=2, 3$. We use the standard form from subsection \ref{standard-form}. For $n=2$, write $\Phi_1 = \left(\begin{smallmatrix} 0 & 0 \\ e^{\varphi} & 0\end{smallmatrix}\right)$, $A_1 = \left(\begin{smallmatrix} a_0 & a_1 \\ a_2 & -a_0\end{smallmatrix}\right)$ and $A_2 = -A_1^{\dagger}$. So we have $$C(\l)=\begin{pmatrix} a_0 & a_1 \\ a_2+\l e^{\varphi} & -a_0\end{pmatrix}dz+ \begin{pmatrix} -\bar{a}_0 & -\bar{a}_2+\l^{-1}e^{\varphi} \\ -\bar{a}_1 & \bar{a}_0\end{pmatrix}d\bar{z}.$$ Notice that this is example \ref{examplen2} with $\mu_2=0$ and $b_1=e^{\varphi}$. The flatness equation gives $$ \left \{\begin{array}{cl} a_2 e^{\varphi} &= \; 0 \\ \bar{\partial}\varphi &= \; -2\bar{a}_0 \\ \bar{\partial}a_1 &=\; 2\bar{a}_0a_1 \\ \partial \bar{a}_0+\bar{\partial}a_0 &= \; -a_1\bar{a}_1-e^{2\varphi}. \end{array}\right. $$ The first equation gives $a_2=0$, the second $a_0=-\frac{\del \varphi}{2}$, the third is automatic once we write $a_1=t_2e^{-\varphi}$, where $t_2=\tr \Phi_1A_1$ is the holomorphic quadratic differential. Finally, the last equation gives $$\partial\bar{\partial} \varphi = e^{2\varphi} + t_2\bar{t}_2e^{-2\varphi}$$ which is the so-called $\mathbf{\cosh}$\textbf{-Gordon equation}, which is elliptic for small $t_2$. So we see that the flat connection is uniquely determined by $\mu_2=0, t_2$ and a solution to the $\cosh$-Gordon equation. More details for this case can be found in \cite{Fock}, in particular a link to minimal surface sections in $\S\times \R$. \medskip \noindent For $n=3$, take $\Phi_1 = \left(\begin{smallmatrix} & & \\ c_1 & & \\ b_2 & c_2 &\end{smallmatrix}\right)$. As for $n=2$ the matrix $A_1$ is upper triangular. 
Thus, we get $$C(\l)=\begin{pmatrix} a_0 & b_0 & c_0 \\ \l c_1 & a_1 & b_1 \\ \l b_2 & \l c_2 & a_2 \end{pmatrix}dz+ \begin{pmatrix} -\bar{a}_0 & \l^{-1}\bar{c}_1 & \l^{-1}\bar{b}_2 \\ -\bar{b}_0 & -\bar{a}_1 & \l^{-1}\bar{c}_2 \\ -\bar{c}_0 & -\bar{b}_1 & -\bar{a}_2\end{pmatrix}d\bar{z}.$$ With a diagonal gauge, we can suppose $c_1=e^{\varphi_1}, c_2=e^{\varphi_2} \in \R_+$. Further, the expressions for the holomorphic differentials are $t_3=\tr \Phi_1^2A_1 = c_0c_1c_2$ and $t_2= \tr\Phi_1A_1 = b_0c_1+b_1c_2+b_2c_0$, hence $c_0=t_3e^{-\varphi_1-\varphi_2}$ and $b_1=t_2e^{-\varphi_2}-e^{\varphi_1-\varphi_2}b_0-b_2t_3e^{-2\varphi_2-\varphi_1}$. The flatness condition and the zero trace condition then give $a_0=-\frac{2}{3}\partial\varphi_1-\frac{1}{3}\partial\varphi_2$, $a_1=\frac{1}{3}\partial\varphi_1-\frac{1}{3}\partial\varphi_2$ and $a_2=-a_0-a_1$. Let us consider the case where $t_2=t_3=0$. Then $c_0=0$ and $b_1=-e^{\varphi_1-\varphi_2}b_0$. The remaining flatness equations are $$ \left \{\begin{array}{cl} \bar{\partial}b_2 &= \; b_2(\bar{\partial}\varphi_1+\bar{\partial}\varphi_2)-\bar{b}_0(e^{\varphi_2}+e^{2\varphi_1-\varphi_2}) \\ -\bar{\partial}b_0 &= \; b_0\bar{\partial}\varphi_1+\bar{b}_2e^{\varphi_2} \\ 2\partial\bar{\partial}\varphi_1 &= \; 2e^{2\varphi_1}-e^{2\varphi_2}+b_2\bar{b}_2+b_0\bar{b}_0(2-e^{2\varphi_1-2\varphi_2}) \\ 2\partial\bar{\partial}\varphi_2 &= \; 2e^{2\varphi_2}-e^{2\varphi_1}+b_2\bar{b}_2+b_0\bar{b}_0(-1+2e^{2\varphi_1-2\varphi_2}). \end{array}\right. $$ For $b_0=b_2=0$ we get the \textbf{Toda integrable system} for $\mf{sl}_3$. This is the same solution as the one obtained from the non-abelian Hodge correspondence applied to the principal nilpotent Higgs field. We see that we need some extra data in order to impose $b_0=b_2=0$. The two variables $b_0$ and $b_2$ are solutions to a system of differential equations. Thus, we only need some initial conditions.
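The trace formulas $t_3=c_0c_1c_2$ and $t_2=b_0c_1+b_1c_2+b_2c_0$ for the $n=3$ matrices above are elementary; here is a small numerical sanity check (ours, with arbitrary placeholder values for the entries):

```python
# Numerical sanity check (ours) of the n = 3 trace formulas
# t_2 = tr(Phi_1 A_1) and t_3 = tr(Phi_1^2 A_1) for the matrices of
# this subsection.  The numeric values are arbitrary placeholders.

def matmul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(A):
    return sum(A[i][i] for i in range(3))

c1, c2, b2 = 2, 3, 5                            # entries of Phi_1
a0, b0, c0, a1, b1, a2 = 1, 7, 11, 13, 17, 19   # entries of A_1 (upper triangular)

Phi1 = [[0, 0, 0], [c1, 0, 0], [b2, c2, 0]]
A1 = [[a0, b0, c0], [0, a1, b1], [0, 0, a2]]

t2 = trace(matmul(Phi1, A1))
t3 = trace(matmul(matmul(Phi1, Phi1), A1))

assert t2 == b0 * c1 + b1 * c2 + b2 * c0
assert t3 == c0 * c1 * c2
```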
Note that if the hermitian structure is diagonal, then $A$ is diagonal as well, implying $b_0=b_2=0$ (see Remark \ref{hermitian-diag-assumption}). For $t_2=0$ and $t_3\neq 0$, if we impose $b_0=b_1=b_2=0$ and $\varphi_1=\varphi_2=\varphi$, the flatness becomes \textbf{\c{T}i\c{t}eica's equation} \begin{equation}\label{Titeica} 2\del\delbar \varphi = e^{2\varphi}+t_3\bar{t}_3e^{-4\varphi}. \end{equation} From \cite{Loftin}, we know that \c{T}i\c{t}eica's equation is linked to affine spheres, minimal embeddings and Hitchin representations. Before going to the general case, we push the similarity to Higgs bundles further by choosing a special gauge. \subsection{Higgs gauge}\label{Higgsgauge} Up to now, we have seen the flat connection $C(\l)$ in two gauges. The first, which we call \textit{symmetric gauge}, is the form $C(\l)=\l\Phi+A+\l^{-1}\Phi^*$ where $A_2=-A_1^*$ and the $*$-operator is the hermitian conjugate. The second, which we call \textit{parabolic gauge} and which in the literature is sometimes called \textit{$W$-gauge} or \textit{Drinfeld-Sokolov gauge}, is the form described in equation \eqref{paragauge} where our parameters $\tilde{t}_k(\l)$ and $\tilde{\mu}_k(\l)$ appear. The existence of the parabolic gauge (see subsection \ref{existence-para-gauge}) ensures that one can pass from the symmetric to the parabolic gauge. In the theory of Higgs bundles, a third gauge is used, which we call \textit{Higgs gauge}, characterized by $A_2=0$ and by the fact that $\Phi_1$ is a companion matrix. Here we show that for trivial higher complex structure, the Higgs gauge exists in our setting. \medskip\noindent We start with the existence of the Higgs gauge for trivial higher complex structure. We denote by $\mc{E}_-$ the sum of the negative simple roots, i.e. $\mc{E}_-=\left(\begin{smallmatrix} 0&&& \\ 1 &0&&\\ &\ddots &\ddots& \\ &&1& 0\end{smallmatrix}\right)$.
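A minimal instance of the gauge constructed in the proposition below (our illustration, with arbitrary numerical values): for $n=2$ and $\Phi_1$ principal nilpotent, an explicit lower triangular $P$ solves $P\Phi_1=\mc{E}_-P$, i.e. conjugates $\Phi_1$ to $\mc{E}_-$.

```python
# Illustrative check (ours): for n = 2 and Phi_1 principal nilpotent,
# a lower-triangular gauge P solves P * Phi_1 = E_minus * P.
# The values of c, q, r are arbitrary (c and q nonzero).

def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c, q, r = 2.0, 3.0, 5.0
Phi1 = [[0.0, 0.0], [c, 0.0]]
E_minus = [[0.0, 0.0], [1.0, 0.0]]
P = [[c * q, 0.0], [r, q]]   # lower triangular, invertible since c*q*q != 0

assert matmul2(P, Phi1) == matmul2(E_minus, P)
```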
\begin{prop} For $\mu=0$ and a flat connection $\l\Phi+A+\l^{-1}\Phi^*$ in symmetric gauge, there is a lower triangular gauge $P$ transforming $\Phi_1$ to $\mc{E}_-$ and $A_2$ to 0. \end{prop} \begin{proof} The statement is equivalent to the following two equations: $$P\Phi_1=\mc{E}_-P \;\text{ and }\; PA_2-\delbar P = 0.$$ The first matrix equation allows us to express all entries $p_{i,j}$ of $P$ in terms of the last row $(p_{n,k})_{1\leq k \leq n}$. We then put $\Phi_1 = P^{-1}\mc{E}_-P$ into the flatness equation $0=\delbar\Phi_1+[A_2, \Phi_1]$. After some manipulation, we get $$0=[\mc{E}_-, (\delbar P)P^{-1}-PA_2P^{-1}].$$ We know that the centralizer of $\mc{E}_-$ consists of the polynomials in $\mc{E}_-$. Hence we get $$\delbar P-PA_2=\begin{pmatrix} 0&&&\\ w_2&0&&\\ \vdots&\ddots&\ddots&\\ w_n &\cdots &w_2 &0 \end{pmatrix}P.$$ Looking at the $n$ equations given by the last row, we can choose $(p_{n,k})_{1\leq k \leq n}$ such that $w_2=...=w_n=0$. Therefore $\delbar P =PA_2$, i.e. $A_2$ is transformed to 0. \end{proof} In the Higgs gauge, our flat connection takes the following form: \begin{prop} We suppose $\mu=0$. The flat connection $C(\l)$ in Higgs gauge is locally given by $$(\l\mc{E}_-+A)dz+\l^{-1}\mc{E}_-^*d\bar{z}$$ where the $*$-operation is given by $M^*=HM^{\dagger}H^{-1}$ for some hermitian matrix $H$. Further, we have $\tr \mc{E}_-^kA = t_{k+1}$ and $A=-(\del H) H^{-1}$. \end{prop} \begin{proof} From the existence of the Higgs gauge, we know that $\Phi_1=\mc{E}_-$ and $A_2=0$. Since $\mu=0$, we also have $\Phi_2=0$. A direct computation shows that if $P$ denotes the matrix from the Higgs gauge, the matrix $\Phi_1^*$ transforms to $PP^{\dagger}\mc{E}_-^{\dagger}(PP^{\dagger})^{-1}$. So $H=PP^{\dagger}$, which is indeed a hermitian matrix. \noindent Since $P$ is lower triangular, $t_{k+1}=\tr \Phi_1^kA_1$ transforms to $t_{k+1}=\tr \mc{E}_-^kA$.
\noindent Finally, since $A_2=P^{-1}\delbar P$ and $A_2=-A_1^{\dagger}$, we get $A_1=-(\del P^{\dagger})P^{\dagger \;-1}$ which transforms under $P$ to $A=-\del(PP^{\dagger})(PP^{\dagger})^{-1}=-(\del H)H^{-1}$. \end{proof} We see that $C(\l)$ in the Higgs gauge becomes close to a Higgs bundle. But in our setting the \emph{holomorphic differentials are in $A$, and not in the Higgs field}. We illustrate the similarity for $n=2$. \begin{example} For $n=2$ and $\mu=0$, we have seen in the previous subsection \ref{n2n3} that in symmetric gauge, our connection reads $$C(\l)=\begin{pmatrix} -\frac{\del \varphi}{2} & t_2e^{-\varphi} \\ \l e^\varphi & \frac{\del \varphi}{2} \end{pmatrix}dz+\begin{pmatrix} \frac{\delbar \varphi}{2} & \l^{-1} e^{\varphi} \\ -\bar{t}_2 e^{-\varphi} & -\frac{\delbar \varphi}{2} \end{pmatrix}d\bar{z}.$$ The flatness condition is equivalent to the $\cosh$-Gordon equation $\del\delbar\varphi = e^{2\varphi}+t_2\bar{t}_2e^{-2\varphi}$. A direct computation gives the form in parabolic gauge: $$C(\l)=\begin{pmatrix} 0 & \bm\hat{t}_2(\l) \\ 1 & 0 \end{pmatrix}dz+\begin{pmatrix} -\frac{1}{2}\del \bm\hat{\mu}_2 & -\frac{1}{2}\del^2 \bm\hat{\mu}_2+\bm\hat{t}_2\bm\hat{\mu}_2 \\ \bm\hat{\mu}_2(\l) & \frac{1}{2}\del \bm\hat{\mu}_2 \end{pmatrix}d\bar{z}$$ where $\bm\hat{t}_2(\l)=\l t_2 + (\del \varphi)^2-\del^2\varphi$ and $\bm\hat{\mu}_2(\l)=-\l^{-1}\bar{t}_2e^{-2\varphi}$. In Higgs gauge, we get $$C(\l)=\begin{pmatrix} -\del\varphi-t_2p_2e^{-\varphi/2} & t_2 \\ \l-a_1 & \del\varphi+t_2p_2e^{-\varphi/2}\end{pmatrix}dz+\begin{pmatrix} -\l^{-1}p_2e^{3\varphi/2} & \l^{-1} e^{2\varphi} \\ -\l^{-1} p_2^2 e^{\varphi} & \l^{-1}p_2e^{3\varphi/2} \end{pmatrix}d\bar{z}$$ where $a_1=(\del p_2+\frac{3}{2}p_2\del\varphi+t_2p_2^2e^{-\varphi/2}) e^{-\varphi/2}$ and $p_2$ comes from the matrix of the Higgs gauge and satisfies $\delbar p_2=-\bar{t}_2e^{-3\varphi/2}+p_2\frac{\delbar \varphi}{2}$. 
Finally, we can compare to the non-abelian Hodge correspondence which gives $$C(\l)=\begin{pmatrix} -\del\varphi & 0 \\ \l & \del\varphi\end{pmatrix}dz+\begin{pmatrix} 0 & \l^{-1} e^{2\varphi} \\ 0 & 0 \end{pmatrix}d\bar{z}.$$ The flatness condition is equivalent to Liouville's equation $\del\delbar\varphi=e^{2\varphi}$. Notice that we get this connection in our setting in the Higgs gauge for $t_2=0$ (then $p_2=0$). \hfill $\triangle$ \end{example} For non-trivial $n$-complex structure $\mu\neq 0$, there is no Higgs gauge. Even for $n=2$, one can check that there is no $P$ satisfying $P\Phi_1=\mc{E}_-P$ and $PA_2-\delbar P = 0$. \subsection{General case}\label{flatconnectionlambda} Set $t=(t_2,...,t_n)$ and $\mu=(\mu_2,...,\mu_n)$. To examine the existence of an analog to the non-abelian Hodge correspondence, we discuss the cases when $t=0$ or $\mu=0$. \bigskip \noindent \textbf{\textit{Case $t=0$ and $\mu=0$.}} For the trivial structure we find the following result, generalizing the observations for $n=2$ and $n=3$ from the previous subsection \ref{n2n3}. \begin{prop}\label{linktohiggs} For $\Phi_2=0$ and $t=0$, the flat connection $C(\l)$ is uniquely determined up to some finite initial data. There is a choice of initial data such that the flatness equations are equivalent to the Toda integrable system. In particular $C(\l)$ is the same as the connection given by the non-abelian Hodge correspondence applied to a principal nilpotent Higgs field. \end{prop} \begin{proof} Using lemmas \ref{phi1lower} and \ref{a1upper}, we can write $C_1(\l)$ in the following form: $$C_1(\l)=a_0+a_1T+...+a_nT^n$$ where $a_i$ are diagonal matrices and $T$ is given by \begin{equation}\label{matrixT} T=\begin{pmatrix} & 1 & & \\ &&\ddots & \\ &&& 1 \\ \l &&& \end{pmatrix}. \end{equation} We denote by $a_{i,j}$ the $j$-th entry of the diagonal matrix $a_i$ and $a_i'$ the shifted matrix with $a'_{i,j} = a_{i,j+1}$. We write $a^{(k)}$ for the shift applied $k$ times. 
Notice that $aT=Ta^{(n-1)}$. We can then write $$C_2(\l)= a_0^*+T^{-1}a_1^*+...+T^{-n}a_n^*$$ where $a^*_{i,j}=\pm \bar{a}_{i,j}$; the sign depends on whether the coefficient comes with a $\l$ or not in $C_2(\l)$. By the standard form (lemma \ref{phi1lower}) we can further impose $a_{n,i}=e^{\varphi_i}$ for $i=1,...,n-1$ and $a_{n,0}=0$ since $0=t_n=\prod_i a_{n,i}$. One of the flatness equations gives $\delbar a_n = a_n (a_0^{(n-1)}-a_0)$. Together with the condition that the trace is 0, we can compute $a_0$. We get \begin{equation}\label{a0i} a_{0,i}= \sum_{k=1}^{i-1}\frac{k}{n}\del\varphi_k-\sum_{k=i}^{n-1}\frac{n-k}{n}\del\varphi_k. \end{equation} The other equations give a system of differential equations in $a_1, ..., a_{n-1}$ which is quadratic. It allows the solution $a_i=0$ for all $i=1,...,n-1$. In that case, using a diagonal gauge $\diag (1,\lambda, ..., \lambda^{n-1})$, the connection $C(\l)$ becomes \begin{equation}\label{mu0t0} C(\l)=\begin{pmatrix} *&&& \\ e^{\varphi_1} &* && \\ & \ddots &* & \\ && e^{\varphi_{n-1}} & * \end{pmatrix}dz+\begin{pmatrix}* & e^{\varphi_1} && \\ &* & \ddots & \\ &&* & e^{\varphi_{n-1}} \\ &&&* \end{pmatrix}d\bar{z} \end{equation} where on the diagonals are the $a_{0,i}$ and $-\bar{a}_{0,i}$ given by equation \eqref{a0i}. This is precisely the form of the Toda system. It is known that the Hitchin equations for a principal nilpotent Higgs field are the Toda equations for $\mf{sl}_n$ (see \cite{AF}, proposition 3.1). \end{proof} Notice that in particular the gauge class of the connection $C(\l)$ is independent of $\l\in \C^*$ (i.e. we have a variation of Hodge structure). This is an intrinsic property which might be used to fix the initial data.
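The bookkeeping with the matrix $T$ and the shifted diagonals used in the proof above is easy to verify numerically; the following sketch (ours, with $n=3$, $\l=2$ and arbitrary diagonal entries) checks $T^n=\l\cdot\mathrm{Id}$ and the commutation rule $aT=Ta^{(n-1)}$, with the shift understood cyclically.

```python
# Numerical illustration (ours) of the matrix T from equation (matrixT):
# ones on the superdiagonal and lam in the bottom-left corner.  We check
# T^n = lam * Id and the commutation rule a T = T a^{(n-1)} for a
# diagonal matrix a, where the shift of the entries is cyclic mod n.

n, lam = 3, 2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[0] * n for _ in range(n)]
for i in range(n - 1):
    T[i][i + 1] = 1
T[n - 1][0] = lam

def diag(entries):
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

def shift(entries, k):
    """Cyclic shift: the j-th entry of a^{(k)} is a_{j+k} (indices mod n)."""
    return [entries[(j + k) % n] for j in range(n)]

# T^n = lam * Id
Tn = T
for _ in range(n - 1):
    Tn = matmul(Tn, T)
assert Tn == diag([lam] * n)

# a T = T a^{(n-1)}
a = [5, 7, 11]
assert matmul(diag(a), T) == matmul(T, diag(shift(a, n - 1)))
```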
Putting \eqref{mu0t0} in parabolic gauge, we get the following explicit formula for our coordinates $\tilde{t}(\l)$ and $\tilde{\mu}(\l)$ (see also propositions 3.1 and 4.4 in \cite{AF}): \begin{prop} For $\mu=0$ and $t=0$, one can choose initial conditions such that $\tilde{\mu}_k(\l)=0$ and $\tilde{t}_k(\l)=w_k$ for all $k$, where the $w_k$ are given by $\det (\del-A_1)=\prod_i(\del-a_{0,i}) = \del^n+w_2\del^{n-2}+...+w_n$ (a ``Miura transform''). Furthermore, $A_1$ is diagonal, given by equation \eqref{a0i}, and the parabolic gauge is upper triangular. \end{prop} \bigskip \noindent \textbf{\textit{Case $t=0$.}} For a point in the zero-section $\T^n \subset \cotang$, we want the monodromy of the flat connection $C(\l)$ to be in $\PSL_n(\R)$. We can prove this claim under the assumption that the flat connection $C(\l)$ is uniquely determined by the point in $\T^n$. \begin{prop}\label{monodromyreal} Assuming Conjecture \ref{nahc}, the flat connection $C(\l)$ associated to a point in $\T^n\subset \cotang$ has monodromy in $\PSL_n(\R)$. \end{prop} The proof imitates Hitchin's strategy in \cite{Hit.1}, using a specific involution associated to the split real form. \begin{proof} In \cite[Section 6 and 7]{Hit.1}, Hitchin constructs a Lie algebra involution $\sigma$ such that $\tau=\sigma \rho$ is an anti-involution corresponding to the split real form (where $\rho$ denotes the anti-involution corresponding to the compact real form). We use a gauge where $\Phi_1 = \sum_{i=1}^{n-1} f_i$, where $f_i=E_{i+1,i}$ in the standard basis of matrices. The involution $\sigma$ satisfies $\sigma(f_i) = -f_i \;\forall i$. Thus $\sigma(\Phi_1) = -\Phi_1$. Let us prove that $\sigma(\Phi_2) = -\Phi_2$. For $\mf{sl}_n\subset M_n(\C)$ the involution $\sigma$ satisfies $\sigma(AB) = -\sigma(B)\sigma(A)$ for all matrices $A$ and $B$. Hence $\sigma(\Phi_1^k) = -\Phi_1^k$. Since $\Phi_2$ is a polynomial in $\Phi_1$ without constant term, we get $\sigma(\Phi_2) = -\Phi_2$.
Consider a pair $(\Phi,A)$ associated to a point in the zero-section $\T^n\subset \cotang$ satisfying all the conditions from Conjecture \ref{nahc}. Then $(-\Phi,A)$ is also a valid pair (associated to another point in $\T^n$). Since $\sigma$ is a Lie algebra involution, $(\sigma(\Phi), \sigma(A)) = (-\Phi,\sigma(A))$ is also a valid pair. By uniqueness in Conjecture \ref{nahc}, we get $\sigma(A) = A$. Finally, the same computations as in \cite[Section 7]{Hit.1} show that $\tau(A) = A$ and $\tau(\Phi+\Phi^*) = \Phi^*+\Phi$. Therefore the monodromy of $C(\l)$ has to be in the fixed point set of $\tau$, so in $\PSL_n(\R)$. \end{proof} \bigskip \noindent \textbf{\textit{Case $\mu=0$.}} For trivial $n$-complex structure, the standard form from lemmas \ref{phi1lower} and \ref{a1upper} allows us to consider $C(\l)$ as an affine connection with special properties. We denote by $\mc{L}(\mf{sl}_n)$ the loop algebra of $\mf{sl}_n$. It is defined by $\mc{L}(\mf{sl}_n) = \mf{sl}_n \otimes \C[\lambda,\lambda^{-1}]$, the space of Laurent polynomials with matrix coefficients. There is another way to think of elements of $\mc{L}(\mf{sl}_n)$: as an infinite periodic matrix $(M_{i,j})_{i,j \in \mathbb{Z}}$ with $M_{i,j}=M_{i+n,j+n}$ and finite width (i.e. $M_{i,j}=0$ for all $\left| i-j \right|$ big enough). The isomorphism is given as follows: to $\sum_{i=-N}^N N_i\l^i$ we associate $M_{i,j}=(N_{k_j-k_i})_{r_i,r_j}$ where $i=k_in+r_i$ and $j=k_jn+r_j$ are the Euclidean divisions of $i$ and $j$ by $n$ (so $0\leq r_i,r_j <n$), see also figure \ref{affine-matrix}.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=1] \draw (0,0)--(3,0); \draw (0,1)--(3,1); \draw (0,2)--(3,2); \draw (0,3)--(3,3); \draw (0,0)--(0,3); \draw (1,0)--(1,3); \draw (2,0)--(2,3); \draw (3,0)--(3,3); \draw (-0.3,1.5) node {$\hdots$}; \draw (3.3,1.5) node {$\hdots$}; \draw (1.5,-0.3) node {$\vdots$}; \draw (1.5,3.3) node {$\vdots$}; \draw (-0.3,3.3) node {$\ddots$}; \draw (3.3,-0.3) node {$\ddots$}; \draw [dashed, gray] (-0.3,2.3)--(2.3,-0.3); \draw [dashed, gray] (0.7,3.3)--(3.3,0.7); \draw [white, fill=white] (0.5,1.5) circle (0.3); \draw [white, fill=white] (1.5,0.5) circle (0.3); \draw [white, fill=white] (1.5,2.5) circle (0.3); \draw [white, fill=white] (2.5,1.5) circle (0.3); \draw (1.5,1.5) node {$N_0$}; \draw (2.5,0.5) node {$N_0$}; \draw (0.5,2.5) node {$N_0$}; \draw (1.5,2.5) node {$N_1$}; \draw (2.5,1.5) node {$N_1$}; \draw (2.5,2.5) node {$N_2$}; \draw (0.5,1.5) node {$N_{-1}$}; \draw (1.5,0.5) node {$N_{-1}$}; \draw (0.5,0.5) node {$N_{-2}$}; \draw [domain=-20:20] plot ({0.2+3.5*cos(\x)},{1.4+5.5*sin(\x)}); \draw [domain=159:199] plot ({2.7+3.5*cos(\x)},{1.4+5.5*sin(\x)}); \end{tikzpicture} \caption{Affine matrix as infinite periodic matrix} \label{affine-matrix} \end{figure} In the second viewpoint, a connection $\l\Phi+A+\l^{-1}\Phi^*$ with $\Phi_1$ lower triangular, $\Phi_2=0$ and thus $A_1$ upper triangular, is precisely an \emph{infinite matrix with period $n$ and width $n$} (shown in figure \ref{affine-matrix} by dashed lines). The $(1,0)$-part $C_1(\l)$ is upper triangular ($\Phi_1$ is lower triangular but $\l\Phi_1$ is upper triangular in the infinite matrix) and the $(0,1)$-part $C_2(\l)$ is lower triangular. Thus, the flatness of $C(\l)$ is a \emph{generalized Toda system}, replacing the tridiagonal property by ``width equal to periodicity''. For $t_i=0 \;\forall i$ we have seen in the proof of Proposition \ref{linktohiggs} that we get exactly the Toda integrable system.
For $t_i=0$ with $i=2,...,n-1$ but $t_n\neq 0$, we get the affine Toda system for $\mc{L}(\mf{sl}_n)$. \begin{Remark} In order to describe $h$-connections, we can include parameters into $\mc{L}(\mf{sl}_n)$ by considering its central extension $\widehat{\mf{sl}}_n$ or central coextension. \hfill $\triangle$ \end{Remark} Since for $t=0$ we get an elliptic system, and since ellipticity is an open condition, the system stays elliptic at least for small $t\neq 0$. So the generalized Toda system can be solved for small $t$. The study of this generalized Toda system is the subject of future research. \bigskip \noindent \textbf{\textit{General case.}} For $\mu\neq 0$ and $t\neq 0$, the system is still elliptic at least for small $t$, since it is for $t=0$. We should get a generalized Toda system with differentials $t_k$ satisfying the higher holomorphicity condition $(\mc{C})$. We conjecture that the connection $C(\l)=\l\Phi+A+\l^{-1}\Phi^*$ is uniquely determined by $\mu$ and $t$. To be more precise: \begin{conj}\label{nahc} Given an element near the zero-section $[(\mu_k, t_k)]\in U \subset T^*\bm\hat{\mc{T}}^n$, there is a unique (up to unitary gauge) flat connection $C(\l)=\l\Phi+A+\l^{-1}\Phi^*$ satisfying \begin{enumerate} \item $\Phi$ is induced by the higher complex structure $[(\mu_2,...,\mu_n)]$, \item $C(\l)$ satisfies the reality condition: $-C(-1/\bar{\l})^*=C(\l)$, \item $t_k = \tr \Phi_1^{k-1}A_1$. \end{enumerate} \end{conj} Assuming this conjecture, we get the desired link to Hitchin's component: \begin{thm}\label{mainthmm} If conjecture \ref{nahc} holds true, there is a canonical diffeomorphism between our moduli space $\T^n$ and Hitchin's component $\mc{T}^n$. \end{thm} \begin{proof} With conjecture \ref{nahc} we get a canonical way to associate a flat connection $C(\l=1)$ to a point in $\cotang$. By proposition \ref{monodromyreal} the monodromy of $C(\l)$ for $t=0$ is in $\PSL_n(\R)$.
Following Hitchin's argument from theorem 7.5 in \cite{Hit.1}, we prove that the zero-section in $\cotang$ where $t=0$ describes a connected component of $\Rep(\pi_1(\S), \PSL_n(\R))$. Since $\T^n$ is closed in $\cotang$, the image of the map $s:\T^n \rightarrow \Rep(\pi_1(\S), \PSL_n(\R))$ is a closed submanifold. Furthermore both spaces have the same dimension by theorem \ref{mainresultncomplex}. Therefore the image of $s$ is an open and closed submanifold, i.e. a connected component. Finally, for $\mu=0$ we get the same connection $C(\l)$ as by the non-abelian Hodge correspondence of the principal nilpotent Higgs field. So the component described by $\T^n$ and Hitchin's component $\mc{T}^n$ coincide. \end{proof} Notice that the map between $\mc{T}^n$ and $\T^n$ is something like an exponential map. For $n=2$ Hitchin's description of Teichmüller space is exactly via the exponential map identifying a fiber of the cotangent bundle $T^*_\mu\mc{T}^2$ to $\mc{T}^2$. \begin{coro} Hitchin's component has a natural complex structure. Further, there is a natural action of the mapping class group on it, preserving the complex structure. \end{coro} The first statement follows from theorem \ref{mainresultncomplex} since we explicitly know the cotangent space at a point. The second simply follows by the description of Hitchin's component as moduli space of some geometric structure on the surface. Labourie describes this action in \cite{Lab.3} and shows that it is properly discontinuous using cross ratios.
\section{Introduction} The solar atmosphere is full of energetic particles. They are produced when ambient plasma particles are accelerated out of thermal equilibrium by a strong electric field or by
rebounding off moving magnetic elements. Upon leaving the site of acceleration, their available trajectories are limited by the Lorentz force, which compels charged particles to follow the direction of the magnetic field. Consequently, the accelerated particles form coherent beams. The interactions of these beams with the ambient plasma are believed to be a key mechanism in solar flares. It is generally accepted that flares are powered by the relaxation of stresses in the magnetic field through the process of magnetic reconnection. The release of magnetic energy manifests as electric field enhancements and plasma flows, leading to strong currents with associated resistive heating as well as jets of outflowing plasma and magnetoacoustic waves. This also creates conditions favourable for particle acceleration. The ensuing beams of energetic particles, which may account for a significant portion of the flare energy \citep{Lin1971}, transfer energy to the plasma along their trajectories through Coulomb interactions. The X-ray bremsstrahlung emitted in these interactions can escape the atmosphere relatively unaffected, and thus it provides valuable information about the energy distribution of the particles and the plasma conditions at the site of emission. Signatures of accelerated particles, in particular non-thermal electrons, are found in observed hard X-ray spectra from active region flaring events, ranging from large flares with energies up to $10^{32}$ erg \citep[see][for a review]{Benz2017} to $10^{27}$ erg microflares at the sensitivity limit of current instruments \citep[e.g.][]{Christe2008}. This suggests that particle beams play an active role in flares of all sizes. Beyond hard X-ray detectability, \citet{Parker1988} predicted frequent impulsive heating events with energies of the order of $10^{24}$ erg that are associated with small-scale reconnection due to the continuous interweaving of the magnetic field by photospheric convective motions. 
Signs of these types of events, dubbed nanoflares, have been observed down to $10^{25}$ erg as ultraviolet (UV) and soft X-ray flashes in the chromosphere and transition region of the quiet Sun \citep{Krucker1998, Benz2002}. Based on detailed 1D simulations, \citet{Testa2014} found that non-thermal electron beams were required to reproduce the UV spectra in their observations of nanoflares. Early models of particle beams were mostly based on simple analytical expressions for the mean collisional change in velocity of energetic particles moving through the atmospheric plasma \citep{Brown1972, Syrovatskii1972, Emslie1978}. The response of the atmosphere to an injected beam of non-thermal electrons has been studied by incorporating these expressions into the energy equation of 1D hydrodynamics simulations \citep{Somov1981, MacNeice1984, Nagai1984}. More recently, the realism of these types of simulations was improved significantly by the inclusion of detailed radiative transfer \citep{Hawley1994, Abbett1999, Allred2005}. A more general treatment of the accelerated particles is possible by numerically solving the Fokker--Planck equation governing the evolution of the particle distribution. Due to high computational demand, this method was, initially, only used to study the detailed propagation and bremsstrahlung emission of non-thermal electrons in simple static model atmospheres \citep{Leach1981, Leach1983}. However, in state-of-the-art 1D flare simulations \citep{Liu2009, Allred2015, RubiodaCosta2015}, it has now largely replaced the approximate heating expressions derived from mean scattering theory. The high level of detail in these simulations makes them a powerful tool for studying flare dynamics and for generating synthetic diagnostics of flaring atmospheres. Yet, by nature of their dimensionality, simulations of this kind can only consider a single flaring loop at a time, and these loops do not live in isolation. 
They are part of a continuous magnetic field embedded in a 3D plasma environment. Magnetic reconnection and associated acceleration of energetic particle beams are driven by the overall evolution of the atmosphere, which in turn is influenced by the collective interaction of the beams with the ambient plasma. With the drastic increase in computing power along with the advent of advanced 3D radiative magnetohydrodynamics (MHD) codes in the past couple of decades \citep{Voegler2005, Felipe2010, Gudiksen2011}, realistic simulations that self-consistently reproduce the overall structure and evolution of diverse features of the solar atmosphere are now possible. Incorporating acceleration and propagation of energetic particle beams into these types of 3D simulations would greatly benefit our understanding of the role of particle beams on the Sun. We have taken the first step towards this goal, and here present a simple treatment of energy transport by accelerated particles applied to a realistic 3D simulation of the quiet solar atmosphere. This work is a further development of the model introduced in \citet{Bakke2018}. A related approach was recently used by \citet{Ruan2020} to incorporate electron beams into a 2.5D MHD simulation of a large flare. A brief description of the radiative MHD code that we employ is given in Sect. \ref{sec:atmospheric_simulation}, where we also present the simulated atmosphere used for this paper. In Sect. \ref{sec:accelerated_particles}, we present the inclusion of accelerated particle beams, starting with the method of detecting reconnection sites, followed by the acceleration model and the particle transport model. Methods for reducing the computational demand, as well as for selecting values for the free parameters, are respectively discussed in Sects. \ref{sec:tuning} and \ref{sec:effect_p_delta}. Section \ref{sec:results} contains our results for the transport of energy by particle beams. These are discussed in Sect. 
\ref{sec:discussion}, where we also consider future work. \section{Methods} \subsection{Atmospheric simulation} \label{sec:atmospheric_simulation} We used the Bifrost code \citep{Gudiksen2011} for simulating a 3D region of the upper solar atmosphere, spanning from the top of the convection zone to the corona. Bifrost solves the resistive MHD equations with the inclusion of radiative transfer and field-aligned thermal conduction. The equation of state is computed under the assumption of local thermodynamic equilibrium (LTE), using the Uppsala Opacity Package \citep{Gustafsson1975}. The radiative transfer computation encompasses optically thin emission from the upper chromosphere and corona, approximated non-LTE radiative losses from chromospheric hydrogen, calcium and magnesium \citep{Carlsson2012}, and full radiative transfer with scattering and LTE plasma opacities in the photosphere and convection zone \citep{Hayek2010}. To maintain numerical stability, the code employs a slightly enhanced overall diffusivity in combination with spatially adaptive local diffusion (so-called hyper diffusion). For this paper, the atmospheric environment for the accelerated particles was provided by a horizontally periodic Bifrost simulation of a $24 \times 24\;\mathrm{Mm}$ patch of the atmosphere that spans vertically from $2.5\;\mathrm{Mm}$ below the photosphere to $14.3\;\mathrm{Mm}$ above it. The simulation has a resolution of 768 grid cells along each dimension, with a uniform grid cell extent of $31\;\mathrm{km}$ in the horizontal directions. Along the vertical direction, the grid cell extent is about $12\;\mathrm{km}$ in the layer between the photosphere (at height zero) and the height of $4\;\mathrm{Mm}$, in order to resolve the abrupt local variations near the transition region. Away from this layer, the extent increases evenly in both directions to about $21\;\mathrm{km}$ near the bottom of the simulation box and $80\;\mathrm{km}$ near the top. 
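As an illustration of such a stretched grid, the sketch below builds vertical cell extents from a uniformly resolved layer followed by a linear ramp; this is a hypothetical stretching rule for illustration only, not the actual Bifrost grid generator, and the cell counts are assumptions:

```python
import numpy as np

def vertical_cell_extents(n_uniform, dz_uniform, n_ramp, dz_boundary):
    """Cell extents [km]: a uniformly resolved layer, then a linear
    ramp coarsening towards one boundary (hypothetical stretching law)."""
    uniform = np.full(n_uniform, dz_uniform)
    ramp = np.linspace(dz_uniform, dz_boundary, n_ramp)
    return np.concatenate([uniform, ramp])

# Illustrative numbers inspired by the text: ~12 km cells in the
# refinement layer, coarsening to ~80 km towards the top boundary.
dz = vertical_cell_extents(n_uniform=330, dz_uniform=12.0, n_ramp=130,
                           dz_boundary=80.0)
print(dz[0], dz[-1], dz.sum() / 1e3)   # finest cell, coarsest cell, extent in Mm
```

The same ramp applied downwards (towards ${\sim}21\;\mathrm{km}$) would produce the coarsening below the refinement layer.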
Convective motions are maintained in the sub-photospheric part of the simulation by injection of heat through the bottom boundary, balanced by radiative cooling in the photosphere. These motions lead to acoustic shocks and braiding of magnetic field lines, which produce a hot chromosphere and corona. The corona was initially configured with an ambient magnetic field roughly oriented along the $x$-direction. A magnetic flux emergence scenario was then incited by injecting a 2000 G $y$-directed magnetic field, covering $x \in [4, 18]$ Mm and the full extent in $y$, through the bottom boundary. The flux sheet was broken up by convective motions as it rose to the photosphere. Here, the strongest concentrations of the field broke through and expanded into the upper atmosphere, carrying with it cool photospheric plasma. The expanding magnetic bubbles were eventually confined by the ambient coronal field. Interactions between these bubbles and with the ambient coronal field lead to magnetic reconnection at various heights and ensuing explosive events such as Ellerman bombs and UV bursts. See \citet{Hansteen2019} for a more detailed description and analysis of the simulation. It should be noted that all flaring events produced in the simulation are small (at most $\sim 10^{25}$ erg), and can generally be characterised as nanoflares. 
\begin{figure}[!thb] \includegraphics{magnetogram} \centering \caption{Vertical magnetic field $B_\mathrm{v}$ in the photosphere (height zero) of the simulation snapshot.} \label{fig:magnetogram} \end{figure} \begin{figure}[!thb] \includegraphics{atmospheric_height_profiles} \centering \caption{Horizontally averaged mass density $\rho$ (panel (a)), temperature $T$ (panel (b)) and magnetic field strength $B$ (panel (c)) as a function of height in the simulation snapshot.} \label{fig:atmospheric_height_profiles} \end{figure} This paper considers a snapshot of the atmosphere at a single instant of the continuously evolving dynamic simulation, 8220 s after the magnetic flux sheet was injected. Figure \ref{fig:magnetogram} shows the vertical component of the photospheric magnetic field at this time. The variation in horizontally averaged mass density, temperature, and magnetic field strength with height are shown in Fig. \ref{fig:atmospheric_height_profiles}. In this figure, the presence of the relatively dense and cool magnetic bubbles filling parts of the corona is apparent in the density and temperature profiles. The noticeable break in the density profile near the height of 8 Mm corresponds to the top of the main bubble. \subsection{Accelerated particles} \label{sec:accelerated_particles} Energetic electrons and ions are produced through various acceleration mechanisms during magnetic reconnection. Due to the Lorentz force, the particles follow a gyrating trajectory around the magnetic field as they travel away from the reconnection site. At the same time, they exchange energy and momentum with the background plasma through Coulomb collisions. Based on these processes, we developed a model for the production and transport of accelerated particles suitable for integration into a 3D MHD simulation. The first step was to identify the grid cells of the simulation domain occupying locations with magnetic reconnection. 
In each of these grid cells, the energy distribution of the locally accelerated particles was estimated. Finally, the heating of the ambient plasma by the passing energetic particle beam was computed along the length of the magnetic field line going through the centre of each grid cell. The following sections describe the steps of this model in detail. \subsubsection{Reconnection sites} \label{sec:reconnection_sites} It is well established that particle acceleration is associated with magnetic reconnection. Reconnection takes place where regions of opposite magnetic polarity come together and produce a strong rotation of the magnetic field. A magnetic diffusion region arises around the interface between the two reconnecting magnetic domains, where the gradients are strong enough to break the coupling between the magnetic field and the plasma. Inside this diffusion region, free magnetic energy is released in several different ways. The electric field induced by the rotation of the magnetic field creates a thin layer of strong current, which heats the local plasma through Joule heating. In addition, plasma is propelled away from the reconnection site from the ends of the current sheet by the magnetic tension force. Finally, a fraction of the local charged particles are accelerated to very high energies, as we discuss further in Sect. \ref{sec:initial_particle_distributions}. \citet{Biskamp2005a} derived the following criterion for conservation of magnetic topology in the context of resistive MHD: \begin{equation} \label{eq:reconnection_criterion} \left\lVert\mathbf{B}\times\left(\nabla\times\mathbf{S}\right)\right\rVert = 0, \end{equation} where \begin{equation} \label{eq:electric_field_projection} \mathbf{S} = \left(\frac{\mathbf{E} \cdot \mathbf{B}}{\mathbf{B} \cdot \mathbf{B}}\right)\mathbf{B} \end{equation} is the projection of the electric field onto the magnetic field direction. Reconnection takes place where Eq. \eqref{eq:reconnection_criterion} is violated. 
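Violation of Eq. \eqref{eq:reconnection_criterion} can be checked numerically. The sketch below evaluates $\left\lVert\mathbf{B}\times\left(\nabla\times\mathbf{S}\right)\right\rVert$ with finite differences on a uniform grid; the toy fields, grid size, and function names are illustrative assumptions, not part of the Bifrost implementation:

```python
import numpy as np

def curl(F, d):
    """Curl of a vector field F with shape (3, nx, ny, nz), uniform spacing d."""
    dFz_dy = np.gradient(F[2], d, axis=1)
    dFy_dz = np.gradient(F[1], d, axis=2)
    dFx_dz = np.gradient(F[0], d, axis=2)
    dFz_dx = np.gradient(F[2], d, axis=0)
    dFy_dx = np.gradient(F[1], d, axis=0)
    dFx_dy = np.gradient(F[0], d, axis=1)
    return np.stack([dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy])

def reconnection_measure(E, B, d):
    """||B x (curl S)||, with S the projection of E onto the B direction."""
    S = (np.sum(E * B, axis=0) / np.sum(B * B, axis=0)) * B
    return np.linalg.norm(np.cross(B, curl(S, d), axis=0), axis=0)

# Toy fields on a small grid (pure illustration, not simulation data).
n, d = 8, 1.0
x, y, z = np.meshgrid(*[np.arange(n) * d] * 3, indexing="ij")
B = np.stack([np.ones_like(x), np.zeros_like(x), np.zeros_like(x)])

# Ideal-MHD-like case: E perpendicular to B, so S = 0 and the measure vanishes.
E_ideal = np.stack([np.zeros_like(x), np.zeros_like(x), np.ones_like(x)])
print(reconnection_measure(E_ideal, B, d).max())   # 0.0

# Field-aligned E varying across the field: the criterion is violated.
E_par = np.stack([y, np.zeros_like(x), np.zeros_like(x)])
print(reconnection_measure(E_par, B, d).max())     # 1.0
```

A field with $\mathbf{E}\cdot\mathbf{B}=0$ everywhere gives an identically vanishing measure, while an $\mathbf{E}$ component along $\mathbf{B}$ that varies across the field produces a nonzero value.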
This can thus be used as a criterion for identifying reconnection sites. However, in the context of a numerical simulation, the onset of reconnection only occurs once the value \begin{equation} \label{eq:krec} K = \left\lVert\mathbf{B}\times\left(\nabla\times\mathbf{S}\right)\right\rVert \end{equation} exceeds some finite threshold $K_\mathrm{min}$ due to limited precision in the employed numerical scheme. An example of how $K$ varies with position in our simulated atmosphere is shown in Fig. \ref{fig:krec_slice}. \begin{figure*}[!thb] \includegraphics{krec_slice} \centering \caption{Values of the reconnection factor $K$ (Eq. \eqref{eq:krec}) in a slice through the $y$-axis of the simulation snapshot, at $y = 10.67\;\mathrm{Mm}$.} \label{fig:krec_slice} \end{figure*} We discuss how we determined a suitable value for $K_\mathrm{min}$ in Sect. \ref{sec:selecting_reconnection_sites}. \subsubsection{Initial particle distributions} \label{sec:initial_particle_distributions} There is a range of possible mechanisms that can produce energetic particles during reconnection. One alternative is direct acceleration by the coherent electric field induced by the rotation of the magnetic field across the diffusion region \citep{Speiser1965, Martens1990, Litvinenko1993}. Test particle simulations of this kind of acceleration have been run for various magnetic configurations, including reconnecting Harris current sheets \citep{Zharkova2005a, Zharkova2005}, magnetic X-points \citep{Heerikhuisen2002, Wood2005}, and fan and spine reconnection \citep{Dalla2006, Dalla2008, Stanier2012}. These simulations generally produce particle populations with energy distributions that resemble power-laws. Power-law distributions are also found in more realistic particle-in-cell simulations, which include the changes in the electric field induced by the accelerated particles in a self-consistent manner \citep{Baumann2013a, Li2019}. 
As can often be seen in these kinds of kinetic simulations, direct acceleration is likely to be accompanied by other types of acceleration processes. One example is first-order Fermi acceleration \citep{Fermi1954}, where particles gain energy by repeatedly scattering back and forth between converging magnetic elements, such as the ends of a shrinking plasmoid \citep{Drake2006} or in a collapsing magnetic trap \citep{Somov1997}. If the scattering agents move in a random rather than systematic fashion, second-order Fermi acceleration \citep{Fermi1949} can take place. Here, the particles experience a fluctuating energy increase owing to the higher likelihood of (accelerating) head-on collisions compared to (decelerating) rear-end collisions. This kind of stochastic acceleration is, like direct acceleration, typically predicted to produce a power-law energy distribution for the accelerated particles, both in models based on the Fokker--Planck formalism \citep{Miller1996, Stackhouse2018} and in test particle simulations \citep{Dmitruk2003, Onofri2006}. It is widely accepted that energetic electrons play an important role in energy transport during solar flares. The detection of gamma-rays from large flares has revealed that also ions can play a part in these events \citep{Chupp1973}. However, for small flares, the effect of accelerated ions is likely to be minor. This is because ions, owing to their high masses, have significantly lower velocities than electrons for a given kinetic energy. As a consequence, they experience a much higher rate of collisions with ambient particles (the frequency of Coulomb collisions decreases with the cube of the velocity), and thus they lose their energy to the background plasma faster. For example, consider an electron or proton with mass $m$ travelling through an ionised hydrogen plasma with number density $n_\mathrm{H}$. 
The change in kinetic energy $E$ with distance $s$ for the particle is given by \citep[e.g.][]{Emslie1978} \begin{equation} \label{eq:particle_energy_loss} \frac{\mathrm{d}E}{\mathrm{d}s} = -\left(\frac{m}{m_\mathrm{e}}\right)\frac{2\pi e^4 n_\mathrm{H}\ln\Lambda}{E}, \end{equation} where $e$ is the elementary charge and $\ln\Lambda$ is the Coulomb logarithm (discussed further in Sect. \ref{sec:particle_energy_deposition}). It is clear from this that a proton ($m = m_\mathrm{p}$) deposits its energy much faster than an electron ($m = m_\mathrm{e}$), by a factor of $m_\mathrm{p}/m_\mathrm{e} \approx 1800$. Hence, unless the ions are accelerated to very high velocities, their energy can be expected to end up close to the reconnection sites. This suggests that it is safe to omit ions when modelling the long-range energy transport in small flares. Simulating the formation of an accelerated electron distribution during a reconnection event is a computationally expensive task. When many events have to be considered, a detailed simulation of the acceleration is therefore not a viable option on current hardware. However, it is reasonable to assume that the result of the acceleration process will be a non-thermal population of electrons with energies distributed according to a power-law: \begin{equation} \label{eq:non_thermal_distribution} n_\mathrm{NT}(E \geq E_\mathrm{c}) = n_\mathrm{acc}\left(\frac{\delta - 1/2}{E_\mathrm{c}}\right)\left(\frac{E}{E_\mathrm{c}}\right)^{-(\delta + 1/2)}. \end{equation} Here, $n_\mathrm{acc} = \int_{E_\mathrm{c}}^\infty n_\mathrm{NT}(E)\mathrm{d}E$ is the number density of accelerated electrons and $E_\mathrm{c}$ is a lower cut-off energy below which electrons are not considered non-thermal. The power-law index $\delta$ controls how rapidly the number of electrons diminishes with higher energy. 
It is usually defined in terms of the non-thermal electron flux $F_\mathrm{NT} = n_\mathrm{NT}v$ (where $v \propto E^{1/2}$ is the electron speed), so that $F_\mathrm{NT}(E) \propto E^{-\delta}$. The range of possible values for the power-law index $\delta$ in Eq. \eqref{eq:non_thermal_distribution} is subject to some loose observational constraints. Spectral analysis of hard X-ray bursts has shown that the non-thermal bremsstrahlung emission due to the interactions of accelerated electrons with the ambient plasma tends to have a single or double power-law distribution in energy \citep[e.g.][]{Kane1970, Lin1987}. Working backwards from the observed spectrum one can attempt to infer the initial distribution of the non-thermal electrons by considering the bremsstrahlung emission process inside the X-ray source and the energy loss of the electrons during their journey to the source from the acceleration region \citep{Holman2011, Kontar2011}. Studies of this type, both of regular flares \citep{Kontar2002, Kontar2003, Sui2005, Battaglia2006, Krucker2010} and microflares \citep{Lin2001, Krucker2002, Christe2008, Hannah2008, Glesener2020}, suggest that the initial distribution follows a power-law with $\delta$ varying between 2 and 10, typically with larger values for less energetic events. There is some observational evidence for a linear-log relationship between $\delta$ and the X-ray flux measured at a fixed energy \citep{Grigis2004}, which has been reproduced in numerical models of stochastic acceleration \citep{Grigis2005, Grigis2006}. However, in the absence of a proper acceleration simulation for predicting its value, the least speculative way of specifying $\delta$ is to treat it as a free parameter. Section \ref{sec:effect_p_delta} discusses the effect of varying $\delta$ in our model. 
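Since the subsequent derivations rely on the normalisation of this power-law tail, a small numerical sketch can confirm that Eq. \eqref{eq:non_thermal_distribution} integrates to $n_\mathrm{acc}$ and that its energy content equals $n_\mathrm{acc}\,(2\delta-1)/(2\delta-3)\,E_\mathrm{c}$; all parameter values below are hypothetical:

```python
import numpy as np

def n_NT(E, n_acc, delta, E_c):
    """Non-thermal number density per unit energy (power-law tail)."""
    return n_acc * (delta - 0.5) / E_c * (E / E_c) ** (-(delta + 0.5))

def trapezoid(y, x):
    """Simple trapezoidal quadrature."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Hypothetical parameter values.
n_acc, delta, E_c = 1.0e8, 4.0, 1.0        # [cm^-3], [-], [keV]

E = E_c * np.logspace(0.0, 6.0, 200001)    # energy grid from E_c upwards
total = trapezoid(n_NT(E, n_acc, delta, E_c), E)
u_acc = trapezoid(E * n_NT(E, n_acc, delta, E_c), E)

print(total / n_acc)                                           # ~1
print(u_acc / (n_acc * (2*delta - 1) / (2*delta - 3) * E_c))   # ~1
```

The second ratio reproduces the closed-form energy density used below when relating $u_\mathrm{acc}$ to $n_\mathrm{acc}$.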
The total power $P_\mathrm{acc}$ going into the acceleration of an electron population generally corresponds to some fraction $p$ of the rate of magnetic energy release $P_\mathrm{rec}$ at the reconnection site: \begin{equation} \label{eq:acceleration_power} P_\mathrm{acc} = p P_\mathrm{rec}. \end{equation} If the volume of the reconnection site is $V$, the average acceleration power per volume is \begin{equation} \label{eq:acceleration_power_density} e_\mathrm{acc} = \frac{P_\mathrm{acc}}{V}, \end{equation} and if the acceleration process lasts for a duration $\Delta t$, the energy density of accelerated electrons in the reconnection site is \begin{equation} \label{eq:acceleration_energy_density} u_\mathrm{acc} = e_\mathrm{acc}\Delta t. \end{equation} This quantity is also related to the number density of non-thermal electrons: \begin{equation} \label{eq:acceleration_energy_density_vs_number_density} u_\mathrm{acc} = \int_{E_\mathrm{c}}^\infty E\;n_\mathrm{NT}(E)\;\mathrm{d}E = n_\mathrm{acc}\left(\frac{2\delta - 1}{2\delta - 3}\right)E_\mathrm{c}. \end{equation} So knowledge of the fraction $p$ together with basic properties of the reconnection event enables the determination of $u_\mathrm{acc}$, which can be used to compute $n_\mathrm{acc}$ through Eq. \eqref{eq:acceleration_energy_density_vs_number_density}. In a pure resistive MHD context, all the dissipated magnetic energy is released through Joule heating in the reconnection current sheets. Therefore, the local Joule heating prior to inclusion of particle acceleration can be used as a proxy for the reconnection energy, giving the relation \begin{equation} \label{eq:acceleration_power_density_qjoule} e_\mathrm{acc} = p Q_\mathrm{Joule}, \end{equation} where $Q_\mathrm{Joule}$ is the Joule heating rate per volume. Because a fraction $p$ of the energy that would previously go into Joule heating now is used for electron acceleration, $Q_\mathrm{Joule}$ must be reduced accordingly after application of Eq. 
\eqref{eq:acceleration_power_density_qjoule}. Observational studies of the energy partition in flares suggest that typical values of $p$ could range from $10\%$ \citep{Emslie2004, Emslie2012} to as high as $50\%$ \citep{Lin1971}, and kinetic reconnection simulations support that values of these magnitudes indeed are conceivable \citep{Tsiklauri2007, Baumann2013a}. However, just like for $\delta$, the way $p$ depends on the details of the acceleration mechanism and the local conditions is subject to a great deal of uncertainty, so it is best kept as a free parameter. The effect of $p$ in our model is discussed in Sect. \ref{sec:effect_p_delta}. The treatment of particle acceleration presented here builds on the premise that some unspecified acceleration mechanism will add a power-law tail with a known number density $n_\mathrm{acc}$ and index $\delta$ to the local thermal distribution of ambient electrons. This non-thermal component can then be isolated by defining the lower cut-off energy $E_\mathrm{c}$ as the energy where the power-law distribution intersects the thermal distribution. The original number density of thermal electrons should in principle be adjusted to account for some of them being accelerated. However, when dealing with the relatively minor energy releases associated with small flares, it is safe to assume that only a small fraction of the available electrons are accelerated. This correction can then be omitted. The thermal electron population follows the Maxwell--Boltzmann distribution \begin{equation} \label{eq:maxwell_boltzmann_distribution} n_\mathrm{T}(E) = n_\mathrm{e}\sqrt{\frac{4E}{\pi(k_\mathrm{B}T)^3}}e^{-E/k_\mathrm{B} T}, \end{equation} where $n_\mathrm{e}$ is the number density of thermal electrons, $T$ is the local temperature, and $k_\mathrm{B}$ is the Boltzmann constant. The above definition of $E_\mathrm{c}$ can then be written as \begin{equation} n_\mathrm{NT}(E_\mathrm{c}) = n_\mathrm{T}(E_\mathrm{c}). 
\end{equation} After inserting Eqs. \eqref{eq:non_thermal_distribution} and \eqref{eq:maxwell_boltzmann_distribution}, and substituting $n_\mathrm{acc}$ using Eq. \eqref{eq:acceleration_energy_density_vs_number_density} to remove the implicit dependence on $E_\mathrm{c}$, we find after some rearranging \begin{equation} \label{eq:lower_cutoff_energy} {E_\mathrm{c}}^{5/2}e^{-E_\mathrm{c}/k_\mathrm{B} T} = (\delta - 3/2)\left(\frac{u_\mathrm{acc}}{n_\mathrm{e}}\right)\sqrt{\frac{\pi(k_\mathrm{B}T)^3}{4}}. \end{equation} This can be solved numerically for $E_\mathrm{c}$ using, for example, the Newton--Raphson method. Only the highest-energy solution is relevant in this case. The resulting cut-off energy is roughly proportional to temperature, but is not sensitive to $u_\mathrm{acc}$ or $n_\mathrm{e}$, as shown in Fig. \ref{fig:Ec_parameter_study}. A temperature of $10^6$ K results in a cut-off energy of the order of 1 keV. \begin{figure}[!thb] \includegraphics{lower_cutoff_energy_parameter_study} \centering \caption{Temperature dependence of the lower cut-off energy $E_\mathrm{c}$, for a selection of values of the non-thermal energy per thermal electron, $u_\mathrm{acc}/n_\mathrm{e}$, representative of the conditions in a relatively quiet atmosphere. A power-law index of $\delta = 4$ was used, but any realistic value would give practically identical results. The shaded area is where the cut-off energy would be lower than the average thermal energy.} \label{fig:Ec_parameter_study} \end{figure} With the energy distribution of the non-thermal electrons in place, the next aspect to consider is their directions of motion. This can be described in terms of their distribution of pitch angles $\beta$, defined as the angle between the direction of motion $\hat{\mathbf{v}}$ and the magnetic field direction $\hat{\mathbf{B}}$: \begin{equation} \cos\beta = \hat{\mathbf{v}}\cdot\hat{\mathbf{B}}.
\end{equation} The pitch angle distribution, just like the energy distribution, depends on the nature of the acceleration mechanism. Typically, direct acceleration models predict that the non-thermal electrons have most of their velocity along the magnetic field direction, while stochastic acceleration models predict more isotropic populations. When it comes to transport calculations, the simplest approach is to adopt the view of a peaked initial pitch angle distribution and assume that all the electrons accelerated at a given reconnection site will leave the site with the same initial magnitude of the pitch angle cosine $|\mu_0| = |\cos\beta_0|$. If the underlying acceleration mechanism is assumed to only affect the average speed $v_\parallel$ of the electrons parallel to the magnetic field, any deviation from $|\mu_0| = 1$ must come from the average perpendicular speed $v_\perp$ of the electrons before acceleration. This speed corresponds to the average thermal speed \begin{equation} v_\perp = \sqrt{\frac{8 k_\mathrm{B} T}{\pi m_\mathrm{e}}}. \end{equation} The total average speed of the accelerated electrons can be written as \begin{equation} v_\mathrm{mean} = \sqrt{{v_\perp}^2 + {v_\parallel}^2}. \end{equation} This speed can also be computed as the expected value of $v = \sqrt{2E/m_\mathrm{e}}$ for the power-law distribution, which becomes \begin{equation} v_\mathrm{mean} = \frac{2\delta - 1}{2\delta - 2}\sqrt{\frac{2 E_\mathrm{c}}{m_\mathrm{e}}}. \end{equation} The average magnitude of the pitch angle cosine can then be estimated as \begin{equation} |\mu_0| = \frac{v_\parallel}{v_\mathrm{mean}} = \sqrt{1 - \left(\frac{v_\perp}{v_\mathrm{mean}}\right)^2}. \end{equation} We note that the case $v_\perp = v_\mathrm{mean}$, and correspondingly, $\mu_0 = 0$, occurs for $E_\mathrm{c} \approx k_\mathrm{B} T$. As shown in Fig. 
\ref{fig:Ec_parameter_study}, $E_\mathrm{c}$ will usually exceed $k_\mathrm{B} T$ by about one order of magnitude, so this approach will tend to give $|\mu_0| \approx 1$ in practice. The direction in which the electron beam leaves the acceleration region must also be determined. This can be parallel or anti-parallel to the magnetic field direction, or both, again depending on the nature of the acceleration mechanism. Without a more detailed specification of this mechanism, the method of deciding the directions will naturally be somewhat ad hoc. However, it seems reasonable that the overall electric field direction $\hat{\mathbf{E}}$ in the acceleration region could provide some indication. If $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}$ is close to $\pm 1$, one might expect most of the electrons to escape in the $\mp\mathbf{B}$-direction (the opposite sign is due to their negative charge). On the other hand, if $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}$ is closer to zero, there is no immediate reason to prefer one direction over the other, and the electrons would probably partition more evenly between both directions. Based on this, a sensible strategy is to split the available non-thermal power $P_\mathrm{acc}$ between a forward propagating beam ($+\hat{\mathbf{B}}$-direction) with power $P_\mathrm{beam}^+$ and a backward propagating beam with power $P_\mathrm{beam}^-$. The power can be partitioned in the following way: \begin{equation} \label{eq:beam_power_partition} P_\mathrm{beam}^\pm = \frac{1 \mp \hat{\mathbf{E}}\cdot\hat{\mathbf{B}}}{2}P_\mathrm{acc}. \end{equation} So if, for example, $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}} = -0.2$, the forward propagating beam gets $60\%$ of the power and the backward propagating beam gets $40\%$. At any reconnection site, $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}$ is necessarily non-zero, and the smallest possible magnitude it can have depends on the choice of $K_\mathrm{min}$. 
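The chain of expressions in this section can be sketched end to end: solving Eq. \eqref{eq:lower_cutoff_energy} for $E_\mathrm{c}$ by Newton--Raphson, estimating $|\mu_0|$, and partitioning the beam power as in Eq. \eqref{eq:beam_power_partition}. All numerical inputs are illustrative assumptions:

```python
import math

k_B_keV = 8.617e-8          # Boltzmann constant [keV/K]

# Illustrative inputs (assumptions): quiet coronal conditions.
T = 1.0e6                   # temperature [K]
u_over_n = 1.0e-3           # u_acc / n_e [keV per thermal electron]
delta = 4.0                 # power-law index

kT = k_B_keV * T
rhs = (delta - 1.5) * u_over_n * math.sqrt(math.pi * kT**3 / 4.0)

# Newton--Raphson on the logarithm of the cut-off equation, started to the
# right of the maximum at E = (5/2) kT so the highest-energy root is found.
E_c = 20.0 * kT
for _ in range(100):
    f = 2.5 * math.log(E_c) - E_c / kT - math.log(rhs)
    fp = 2.5 / E_c - 1.0 / kT
    E_c -= f / fp

# Pitch-angle cosine of the accelerated electrons (speeds in units of c).
mec2 = 511.0                                        # electron rest energy [keV]
v_perp = math.sqrt(8.0 * kT / (math.pi * mec2))
v_mean = (2*delta - 1) / (2*delta - 2) * math.sqrt(2.0 * E_c / mec2)
mu0 = math.sqrt(1.0 - (v_perp / v_mean) ** 2)

# Beam power partition for an example alignment E_hat . B_hat = -0.2.
EdotB = -0.2
P_acc = 1.0
P_plus = 0.5 * (1.0 - EdotB) * P_acc    # forward-propagating beam
P_minus = 0.5 * (1.0 + EdotB) * P_acc   # backward-propagating beam

print(E_c, mu0, P_plus, P_minus)
```

With these inputs the cut-off lands near $0.8$ keV, roughly ten times $k_\mathrm{B}T$, so $|\mu_0|\approx 0.95$, and the alignment $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}=-0.2$ reproduces the $60\%/40\%$ split quoted above.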
\subsubsection{Particle energy deposition} \label{sec:particle_energy_deposition} A particle with charge $q$ leaving a reconnection site with velocity $\mathbf{v}$ experiences a Lorentz force $\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$ due to the local electric and magnetic field. The $\mathbf{v} \times \mathbf{B}$ term gives the particle a helical motion around the magnetic field direction, without affecting its kinetic energy. The relative magnitude of $\mathbf{v}$ and $\mathbf{B}$ decides the radius of the helical motion, which for a typical electron in a normal coronal environment is smaller than a metre. If an electric field is present, the motion of the particle can be influenced in two different ways. Firstly, the particle will be accelerated along the magnetic field direction if the electric field component in this direction is non-zero. However, this can only take place in magnetic diffusion regions where ideal MHD breaks down. Secondly, the centre of the helical motion will drift away from the original field line if the electric field has a component perpendicular to the magnetic field. This effect is generally negligible, because the bulk plasma velocity $\mathbf{u}$ would have to be comparable to the particle velocity to induce an electric field $\mathbf{E} \approx -\mathbf{u} \times \mathbf{B}$ with a magnitude that is comparable to the $\mathbf{v} \times \mathbf{B}$ term. When drift away from the field line is ignored, it is convenient to describe the particle's motion in terms of its kinetic energy $E$, pitch angle $\beta$, and one-dimensional position $s$ along the field line. Because the gyroradius of the particle generally is very small compared to its typical travel distance (which is of the order of megametres), the offset of the particle perpendicular to the field line can safely be disregarded. 
Additionally, the journey of the particle through the atmosphere is typically so brief that it can be considered instantaneous compared to the time scale of the atmosphere's response to the particle beam. For example, a beam of 1 keV electrons traverses a 10 Mm coronal loop in about 0.5 s, while a pressure change due to the heating at a footpoint would need $\sim 100$ s to propagate the same distance back along the loop (assuming a sound speed of $c_\mathrm{s} \approx 10^4\sqrt{T}\;\mathrm{cm}/\mathrm{s}$ and a temperature of $T = 10^6\;\mathrm{K}$). When the particle enters a region with a stronger magnetic field, it starts to gyrate more rapidly around the field axis due to the increased $\mathbf{v} \times \mathbf{B}$ force. This force, being perpendicular to the direction of motion, does not affect the kinetic energy, so the velocity of the particle parallel to the field axis decreases accordingly. If the increase in the magnetic field strength becomes sufficiently large, the movement of the particle along the field will eventually stop and then continue in the opposite direction. This magnetic mirroring effect could thus potentially trap particles in the coronal part of a magnetic loop. However, since the coronal magnetic field strength typically increases relatively slowly with depth, magnetic trapping is unlikely to drastically inhibit the particles from reaching the lower atmosphere. Therefore, we have ignored the effect of varying magnetic field strength in this initial treatment of particle propagation. As the particle travels along the field line it exchanges energy and momentum with the ambient plasma through Coulomb interactions, both with free electrons and ions, and with electrons bound in neutral atoms. Collectively, these types of collisions have the effect of reducing and randomising the velocities of the accelerated particles, until their distribution merges with the background thermal distribution. 
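The timescale argument above (near-instantaneous beam traversal versus the atmosphere's hydrodynamic response) is easy to check numerically. The sketch below uses the same illustrative values as the text (1 keV electrons, a 10 Mm loop at $T = 10^6\;\mathrm{K}$), in CGS units.

```python
import math

ME = 9.1093837e-28   # electron mass [g]
KEV = 1.602177e-9    # erg per keV
MM = 1e8             # cm per Mm

# Time for a 1 keV electron to traverse a 10 Mm coronal loop
v_e = math.sqrt(2.0 * 1.0 * KEV / ME)   # non-relativistic speed [cm/s]
t_beam = 10 * MM / v_e                  # ~0.5 s

# Sound-crossing time of the same loop at T = 1 MK,
# with c_s ~ 1e4 sqrt(T) cm/s
c_s = 1e4 * math.sqrt(1e6)              # [cm/s]
t_sound = 10 * MM / c_s                 # 100 s
```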
The energy loss of the particles manifests as a heating of the local plasma. A simple and widely used approach for modelling Coulomb collisions is to approximate the evolution of the energy and pitch angle of a single particle based on the mean rate of energy and pitch angle dissipation \citep[as derived by][]{Spitzer1962}. This was done by \citet{Brown1972} for non-thermal electrons in an ionised hydrogen plasma. \citet{Emslie1978} generalised Brown's treatment to allow for a hydrogen plasma with an arbitrary, but uniform, degree of ionisation, and also obtained the rate of energy deposition as a function of depth for the full population of accelerated electrons by convolving the mean energy loss of a single electron with the initial non-thermal number distribution. \citet{Hawley1994} showed how an approximation in the derivations of Emslie can allow for an ionisation degree that varies with depth without having to resort to numerical integration. Following their approach, the rate of energy deposition per volume, $Q$, at distance $s$ along the field line can be written as \begin{equation} \label{eq:beam_heating_per_volume} Q(s) = n_\mathrm{H}(s)\left(\frac{\pi e^4 (\delta - 2) F_\mathrm{beam}}{|\mu_0| {E_\mathrm{c}}^2}\right)\gamma(s) B\left(\kappa(s); \frac{\delta}{2}, \frac{1}{3}\right)\left(\frac{N^*(s)}{N_\mathrm{c}^*}\right)^{-\delta/2}. \end{equation} This equation assumes that the electrons all have the same initial pitch angle cosine $\mu_0$ and initial energies given by a power-law distribution as described by Eq. \eqref{eq:non_thermal_distribution}. $F_\mathrm{beam}$ is the energy flux of the beam of accelerated electrons leaving the reconnection site. 
The quantity $\gamma$, given by \begin{equation} \gamma(s) = x(s)\ln\Lambda + (1 - x(s))\ln\Lambda', \end{equation} is a hybrid Coulomb logarithm that merges the contributions of the free electron Coulomb logarithm $\ln\Lambda$ and the neutral hydrogen Coulomb logarithm $\ln\Lambda'$ depending on the local ionisation fraction $x(s)$. $B$ is the incomplete beta function, defined by \begin{equation} B(\kappa; a, b) = \int_0^\kappa t^{a - 1}(1 - t)^{b - 1}\;\mathrm{d}t. \end{equation} The integration limit used for $B$ is a ramp function given by \begin{equation} \kappa(s) = \mathrm{min}\left(\frac{N(s)}{N_\mathrm{c}(s)}, 1\right), \end{equation} where \begin{equation} N(s) = \int_0^s n_\mathrm{H}(s')\;\mathrm{d}s' \end{equation} is the hydrogen column depth and \begin{equation} N_\mathrm{c}(s) = \frac{|\mu_0| {E_\mathrm{c}}^2}{6\pi e^4 \gamma(s)} \end{equation} is the stopping column depth for an electron with energy $E_\mathrm{c}$. The ionised column depth $N^*(s)$ is defined analogously to $N(s)$ as \begin{equation} N^*(s) = \int_0^s \left(\frac{\gamma(s')}{\ln\Lambda}\right)n_\mathrm{H}(s')\;\mathrm{d}s'. \end{equation} Similarly, the ionised stopping column depth $N_\mathrm{c}^*$ corresponds to $N_\mathrm{c}$ with $\gamma = \ln\Lambda$. The rate of energy deposition per distance, $\mathrm{d}\mathcal{E}/\mathrm{d}s$, can be found by integrating Eq. \eqref{eq:beam_heating_per_volume} over the cross-sectional area $A$ of the beam. If $Q(s)$ is assumed uniform across the beam cross-section, this gives $\mathrm{d}\mathcal{E}/\mathrm{d}s = AQ(s)$. Furthermore, if $A$ is assumed constant along the beam trajectory, it can be written as $A = P_\mathrm{beam}/F_\mathrm{beam}$, giving \begin{equation} \label{eq:beam_heating_per_distance} \frac{\mathrm{d}\mathcal{E}}{\mathrm{d}s} = \left(\frac{P_\mathrm{beam}}{F_\mathrm{beam}}\right)Q(s). \end{equation} Fig.
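As a concrete illustration of Eq. \eqref{eq:beam_heating_per_volume}, the sketch below evaluates $Q(s)$ for a uniform slab, where all column depths reduce to products of $n_\mathrm{H}$ and $s$. The incomplete beta function is computed by brute-force midpoint quadrature (its integrand is singular but integrable at $t = 1$ for $b = 1/3$), and the beta-function limit saturates at one beyond the stopping depth. All names and parameter values are illustrative assumptions, in CGS units.

```python
import math

E4 = (4.803204e-10) ** 4  # electron charge to the 4th power [CGS]
KEV = 1.602177e-9         # erg per keV

def incomplete_beta(kappa, a, b, n=200000):
    """B(kappa; a, b) by midpoint quadrature."""
    h = kappa / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

def heating_uniform_slab(s, n_H, x, lnL, lnLp, F_beam, mu0, E_c, delta):
    """Q(s) for constant n_H and ionisation fraction x (CGS units)."""
    gamma = x * lnL + (1.0 - x) * lnLp        # hybrid Coulomb logarithm
    N = n_H * s                               # column depth
    N_c = mu0 * E_c ** 2 / (6.0 * math.pi * E4 * gamma)
    N_star = (gamma / lnL) * N                # ionised column depth
    N_c_star = mu0 * E_c ** 2 / (6.0 * math.pi * E4 * lnL)
    kappa = min(N / N_c, 1.0)                 # beta limit saturates at 1
    return (n_H * math.pi * E4 * (delta - 2.0) * F_beam / (mu0 * E_c ** 2)
            * gamma * incomplete_beta(kappa, delta / 2.0, 1.0 / 3.0)
            * (N_star / N_c_star) ** (-delta / 2.0))

# Fully ionised slab at coronal density; E_c = 2 keV, delta = 4
args = dict(n_H=1e9, x=1.0, lnL=20.0, lnLp=6.0,
            F_beam=1e9, mu0=1.0, E_c=2.0 * KEV, delta=4.0)
Q_near = heating_uniform_slab(5e8, **args)   # near the stopping depth
Q_far = heating_uniform_slab(5e9, **args)    # well beyond it
```

For these parameters the stopping column depth corresponds to roughly 5 Mm, and the heating rate falls off steeply beyond it, as expected from the $(N^*/N_\mathrm{c}^*)^{-\delta/2}$ factor.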
\ref{fig:single_beam_parameter_study} shows examples of the evolution of $\mathrm{d}\mathcal{E}/\mathrm{d}s$ with depth, computed from Eqs. \eqref{eq:beam_heating_per_volume} and \eqref{eq:beam_heating_per_distance}, for an electron beam injected into the FAL-C model atmosphere \citep{Fontenla1993}. \begin{figure*}[!thb] \includegraphics{single_beam_parameter_study} \centering \caption{Heating from an electron beam injected into the transition region of an average quiet sun atmosphere. The rate of energy deposition per distance is plotted against the height above the photosphere, for different values of the lower cut-off energy $E_\mathrm{c}$ (panel (a)), initial pitch angle $\beta_0$ (panel (b)) and power-law index $\delta$ (panel (c)). In all cases, the beam originates 2.3 Mm above the photosphere with a power of $P_\mathrm{beam} = 10^{18}\;\mathrm{erg}/\mathrm{s}$. Additionally we have used $E_\mathrm{c} = 2\;\mathrm{keV}$ for panels (b) and (c), $\beta_0 = 0^\circ$ for panels (a) and (c) and $\delta = 4$ for panels (a) and (b). The dashed curve is the temperature profile. The atmosphere corresponds to model C of \citet{Fontenla1993}, extrapolated to coronal temperatures.} \label{fig:single_beam_parameter_study} \end{figure*} The Coulomb logarithm $\ln\Lambda$ emerges in the calculation of the mean rate of velocity change from Coulomb collisions between free particles, $\langle\mathrm{d}v/\mathrm{d}t\rangle$, which involves an integral of the differential collision cross-section over all impact parameters $b$ \citep[e.g.][]{Rosenbluth1957}. The long-range nature of the Coulomb force causes the integral to diverge in the limit of large $b$, but this can be resolved by considering the screening of the force at long distances due to the response of the nearby charge carriers to each particle's electrostatic field. 
This screening imposes a maximum value $b_\mathrm{max}$ on the impact parameter, enabling the integral for $\langle\mathrm{d}v/\mathrm{d}t\rangle$ to be solved. The solution is $\langle\mathrm{d}v/\mathrm{d}t\rangle \propto \ln\Lambda$, where $\Lambda = b_\mathrm{max}/b_\mathrm{min}$ and $b_\mathrm{min}$ is the minimum impact parameter. For a particle with charge $ze$ and speed $v$ interacting with a stationary particle with charge $Ze$, energy considerations give $b_\mathrm{min} = zZe^2/m v^2$, where $m$ is the reduced mass of the two particles. The Debye screening length $\lambda_\mathrm{D}$ is often used for $b_\mathrm{max}$. In the context of an energetic particle beam, a more appropriate choice might be the particle mean free path $\eta = v/\nu$, where $\nu = \sqrt{4\pi e^2 n_\mathrm{e}/m_\mathrm{e}}$ is the plasma frequency, or the gyroradius $r_\mathrm{g}$, depending on which is smaller \citep{Emslie1978}. Although the Coulomb logarithm in principle varies with particle energy, local conditions, and the masses of the colliding particles, the logarithmic scaling should keep these variations relatively small. Equation \eqref{eq:beam_heating_per_volume} consequently assumes $\ln\Lambda$ to be constant, so the value of $\ln\Lambda$ should simply be computed in each acceleration region and used throughout the transport calculations for the associated electron beam. Because the electrons will not experience very strong magnetic fields, one can expect that $\eta < r_\mathrm{g}$, and hence use $b_\mathrm{max} = \eta$.
For collisions with ambient free electrons, we have $z = Z = 1$ and $m = m_\mathrm{e}/2$, so we get \begin{equation} \label{eq:electron_coulomb_logarithm} \ln\Lambda = \ln\sqrt{\frac{{E_\mathrm{mean}}^3}{2\pi e^6 n_\mathrm{e}}}, \end{equation} where we have used $v = \sqrt{2 E_\mathrm{mean}/m_\mathrm{e}}$, which is the speed corresponding to the mean energy \begin{equation} E_\mathrm{mean} = \left(\frac{2\delta - 1}{2\delta - 3}\right)E_\mathrm{c} \end{equation} of the electrons in the initial distribution. Collisions with ambient protons only account for a tiny fraction of the electron velocity change $\langle\mathrm{d}v/\mathrm{d}t\rangle$ due to the high mass of protons compared to electrons, and are thus ignored in the derivations leading to Eq. \eqref{eq:beam_heating_per_volume}. For collisions with neutral hydrogen, the resulting energy loss rate can be expressed analogously to that of collisions with free electrons \citep[see e.g.][]{Mott1949a, Emslie1978}, with an effective Coulomb logarithm of \begin{equation} \label{eq:neutral_hydrogen_coulomb_logarithm} \ln\Lambda' = \ln\left(\frac{2 E_\mathrm{mean}}{1.105 \chi}\right), \end{equation} where $\chi$ is the ionisation potential of hydrogen. For simplicity, the less important contributions from collisions with helium and heavier elements are not included here. The effective Coulomb logarithm for collisions with neutral helium is similar to Eq. \eqref{eq:neutral_hydrogen_coulomb_logarithm}, and has a comparable magnitude \citep{Evans1955a}. Considering the roughly 20\% abundance of helium, the inclusion of helium collisions would lead to at most a 20\% increase in the rate of energy deposition in the neutral regions of the atmosphere, and less in the partially ionised regions. The treatment of Coulomb collisions outlined here disregards randomisation of energy and direction, which manifests as a diffusion of the energy and pitch angle distribution with propagation depth. 
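The two Coulomb logarithms can be evaluated directly from Eqs. \eqref{eq:electron_coulomb_logarithm} and \eqref{eq:neutral_hydrogen_coulomb_logarithm}. The sketch below (illustrative parameter values, CGS units) gives $\ln\Lambda \approx 24$ and $\ln\Lambda' \approx 6$ for typical coronal beam parameters.

```python
import math

E_CHARGE = 4.803204e-10        # electron charge [statC]
KEV = 1.602177e-9              # erg per keV
CHI_H = 13.6 * 1.602177e-12    # hydrogen ionisation potential [erg]

def coulomb_logarithms(E_c, delta, n_e):
    """Free-electron and neutral-hydrogen Coulomb logarithms,
    evaluated at the mean energy of the power-law distribution."""
    E_mean = (2 * delta - 1) / (2 * delta - 3) * E_c
    lnL = 0.5 * math.log(E_mean ** 3 / (2 * math.pi * E_CHARGE ** 6 * n_e))
    lnLp = math.log(2 * E_mean / (1.105 * CHI_H))
    return lnL, lnLp

# Illustrative values: E_c = 2 keV, delta = 4, coronal n_e = 1e9 cm^-3
lnL, lnLp = coulomb_logarithms(2.0 * KEV, 4, 1e9)
```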
As long as the speeds of the ambient particles are negligible compared to the speeds of the accelerated particles (the so-called cold-target approximation), energy and pitch angle diffusion are unimportant. This is because the target particles can be considered effectively stationary, leading to a deterministic evolution of each accelerated particle. \citet{Jeffrey2019} found that the cold-target approximation tends to underestimate the amount of energy deposited in the lower atmosphere compared to the results of a full warm-target model because the electrons that thermalise in the corona will eventually diffuse down to the lower atmosphere and deposit their energy there. However, this conclusion was based on work not including standard thermal conduction, and the inclusion of thermal conduction would mitigate some of the discrepancies between the cold- and warm-target models. Until the difference between these models has been investigated further, we do not implement the more computationally expensive warm-target treatment in our model. Moreover, when using the simple acceleration model presented in Sect. \ref{sec:initial_particle_distributions}, the lowest energies obtained for the accelerated electrons (which tend to come from sites with coronal temperatures) are typically of the order of 1 keV. This can be seen from Fig. \ref{fig:Ec_parameter_study}, which also shows that the target plasma would need to have a temperature of at least $10^7$ K in order for the average thermal energy to be comparable to the typical accelerated electron energy. Although the impact region over time could be heated to this temperature \citep[e.g.][]{Mariska1989, Allred2005}, the relatively low acceleration energies involved in minor flare events make this unlikely. A beam of energetic electrons departing from an acceleration region takes away negative charge and distributes it along its trajectory, leading to charge separation.
This imbalance produces an electrostatic field that drives a counter-flowing return current of ambient electrons \citep{Knight1977, Emslie1980}. A steady state where the return current continuously compensates for the charge separation is reached on a timescale comparable to the electron--ion collision time \citep{Larosa1989}. Because the current associated with the beam then is cancelled by the return current, this mechanism prevents any induction of a significant electromagnetic field by the beam. As long as the beam flux is weak, the energy loss incurred by the beam electrons from moving through the opposing electrostatic potential is negligible compared to their energy loss from Coulomb collisions with the ambient plasma. We confirmed this for our simulation by evaluating the energy loss contributions due to collisions and return currents (given respectively by Eqs. (4) and (6) in \citet{Emslie1980}) in the acceleration regions, where the return current energy loss is at its highest. The ratio of return current to collisional energy loss was found to be at most $10^{-4}$. The accelerated electrons are also subject to a small radiative energy loss. They emit synchrotron radiation due to their gyrating motion around the magnetic field lines \citep{Petrosian1985} as well as bremsstrahlung due to collisions \citep{Brown1971, Haug2004}. A comparison between the energy loss terms from synchrotron and bremsstrahlung emission with the collisional loss term shows that both forms of radiative losses are completely negligible compared to collisional losses under ordinary conditions, and can safely be ignored. There are a variety of considerations in addition to those covered above that a comprehensive particle transport model would need to address. 
This includes collisional ionisation of neutral chromospheric hydrogen \citep{Ricchiazzi1983, Abbett1999} and helium \citep{Allred2015}, the potential occurrence of a two-stream instability resulting in the generation of plasma oscillations and turbulence \citep{Emslie1984} as well as a fully relativistic treatment of the transport process \citep{McTiernan1990}. However, these effects tend to be more important for larger flares involving higher particle numbers and energies. For application to weaker acceleration events, the transport model presented here, in which only energy dissipation through Coulomb collisions is included, should be a reasonable first step. \subsection{Model tuning} \label{sec:tuning} \subsubsection{Selection of reconnection sites} \label{sec:selecting_reconnection_sites} The method of identifying reconnection sites that is presented in Sect. \ref{sec:reconnection_sites} relies on an appropriate choice of the threshold $K_\mathrm{min}$. It should be set to a value small enough to include all the potentially important reconnection sites. However, it can not simply be set to zero, because limited spatial resolution and numerical diffusion in the MHD simulation prevent $K$ from ever becoming exactly zero in practice. Every point would then be classified as a reconnection site, which would be both unrealistic and prohibitively computationally expensive. From Eqs. \eqref{eq:electric_field_projection} and \eqref{eq:krec} it can be seen that $K$ scales linearly with the strength of the magnetic field, $B$. The magnetic energy density $u_\mathrm{B}$ is proportional to $B^2$, meaning that $K$ is proportional to $\sqrt{u_\mathrm{B}}$. As $K_\mathrm{min}$ is lowered, the additional reconnection sites that are included thus produce less energetic particle distributions on average. On the other hand, the number of included sites also increases rapidly with decreasing $K_\mathrm{min}$. 
The choice of $K_\mathrm{min}$ is thus a compromise between the inclusion of more reconnection energy and the computational cost of simulating more electron beams. Fortunately, as shown in Fig. \ref{fig:global_heating_krec_lim}, the growth in the number of sites is balanced by the decrease in energy, and the total energy contained in all included beams begins to stagnate as $K_\mathrm{min}$ becomes sufficiently small. As a reasonable trade-off, we used $K_\mathrm{min} = 10^{-4}$ (in internal Bifrost units) for our results. \begin{figure}[!thb] \includegraphics{global_heating_krec_lim} \centering \caption{Variation in acceleration power (Eq. \eqref{eq:acceleration_power}) with the reconnection factor threshold $K_\mathrm{min}$, for a simulation with $\delta = 4$ and $p = 0.2$. The solid curve is the total power for all included reconnection sites, and the dashed curve is the average power per site (multiplied by $10^5$ for display purposes). The total number of included sites is shown by the dotted curve. Short-range beams have been filtered out in the manner discussed in Sect. \ref{sec:short_range_exclusion}.} \label{fig:global_heating_krec_lim} \end{figure} \subsubsection{Exclusion of short-range beams} \label{sec:short_range_exclusion} Not all of the identified acceleration regions produce electron beams that are worth considering. Most importantly, beams that deposit all their energy in the immediate vicinity of the acceleration region add nothing to the model. This is because the slight displacement of heat would quickly be evened out by other energy transport mechanisms such as thermal conduction, plasma advection, or radiative transfer. The outcome would thus be nearly the same as if all the reconnection energy had been converted directly into thermal energy at the reconnection site in the first place. To filter out the short-range beams, we first had to establish a criterion for when a beam is considered depleted. It can be seen from Eq.
\eqref{eq:beam_heating_per_volume} that $Q(s)$ approaches zero only asymptotically with distance. Physically, this can be explained by the presence of arbitrarily energetic electrons in the tail of the power-law distribution. Because the collisional cross-section decreases with electron energy, extremely energetic electrons will practically never thermalise, and hence there will always be some non-thermal energy remaining in the beam. However, once $Q$ becomes sufficiently small, the rest of the beam energy can safely be disregarded, provided that the reason for the small heating rate is the depletion of energy and not that the beam happens to pass through a low-density region. This second criterion can be ensured by considering the part of Eq. \eqref{eq:beam_heating_per_volume} representing energy depletion, which is the monotonically decreasing factor \begin{equation} \label{eq:residual_factor} r(s) = \left(\frac{N^*(s)}{N_\mathrm{c}^*}\right)^{-\delta/2}. \end{equation} This is a convenient heuristic for the amount of energy remaining in the beam, as shown in Fig. \ref{fig:deposited_percentage_vs_residual_factor}, where the percentage of the initial beam power that has been deposited can be seen to approach $100\%$ as $r$ becomes smaller. \begin{figure}[!thb] \includegraphics{deposited_percentage_vs_residual_factor} \centering \caption{Energy deposition as a function of $r(s)$ (Eq. \eqref{eq:residual_factor}). The deposited power per distance, $\mathrm{d}\mathcal{E}/\mathrm{d}s$, is plotted along the trajectories of a representative subset of the electron beams in a simulation with $\delta = 4$ and $p = 0.2$, with colours indicating the height above the photosphere. For each beam, the corresponding proportion of the initial beam power that has been deposited at each $r(s)$, given by ${P_\mathrm{beam}}^{-1}\int_0^s \mathrm{d}\mathcal{E}/\mathrm{d}s(s')\;\mathrm{d}s'$, is indicated by a red curve.
These red curves are all overlapping.} \label{fig:deposited_percentage_vs_residual_factor} \end{figure} Using $(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min}$ and $r_\mathrm{min}$ to denote lower thresholds for $\mathrm{d}\mathcal{E}/\mathrm{d}s$ and $r$, respectively, a depletion criterion can thus be defined as \begin{equation} \label{eq:depletion_criterion} \frac{\mathrm{d}\mathcal{E}}{\mathrm{d}s}(s) < \left(\frac{\mathrm{d}\mathcal{E}}{\mathrm{d}s}\right)_\mathrm{min}\quad \mathrm{and} \quad r(s) < r_\mathrm{min}. \end{equation} It is clear from the figure that the vast majority of the initial beam power is depleted once $r$ is below $\sim 10^{-5}$. Therefore, this paper uses $r_\mathrm{min} = 10^{-5}$. Moreover, we set $(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min} = 10^5\;\mathrm{erg}/\mathrm{s}/\mathrm{cm}$. As the figure shows, this enables the beams to reach deep into the lower atmosphere before they are considered depleted. Based on the criteria in Eq. \eqref{eq:depletion_criterion}, an estimate $\tilde{s}_\mathrm{dep}$ for the depletion distance can be computed under the assumption that the plasma properties are approximately uniform between $s = 0$ and $s = s_\mathrm{dep}$, so that $N^*(s_\mathrm{dep}) \approx (n_\mathrm{H}(s=0)\gamma(s=0)/\ln\Lambda) s_\mathrm{dep}$. This assumption holds as long as $s_\mathrm{dep}$ is reasonably short. 
Equations \eqref{eq:beam_heating_per_volume}, \eqref{eq:beam_heating_per_distance}, \eqref{eq:residual_factor}, and \eqref{eq:depletion_criterion} then yield the following estimate for the depletion distance: \begin{equation} \label{eq:estimated_depletion_distance} \tilde{s}_\mathrm{dep} = \left(\frac{N_\mathrm{c}^*\ln\Lambda}{n_\mathrm{H}(0)\gamma(0)}\right)\mathrm{max}\left(c_Q, c_r \right)^{2/\delta}, \end{equation} where \begin{equation} c_Q = \frac{n_\mathrm{H}(0) \gamma(0)}{(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min}}\left(\frac{\pi e^4 (\delta - 2) P_\mathrm{beam}}{|\mu_0| {E_\mathrm{c}}^2}\right) B\left(1; \frac{\delta}{2}, \frac{1}{3}\right) \end{equation} and \begin{equation} c_r = \frac{1}{r_\mathrm{min}}. \end{equation} The derivation also assumes that $\kappa(s_\mathrm{dep}) = 1$, which is always satisfied when $r < 1$. By evaluating Eq. \eqref{eq:estimated_depletion_distance} at each reconnection site, it can be decided whether the resulting electron beam is worth considering further. The beam can be excluded if $\tilde{s}_\mathrm{dep}$ is shorter than an assigned minimum distance $s_\mathrm{min}$. \begin{figure}[!thb] \includegraphics{depletion_distances} \centering \caption{Estimated depletion distances $\tilde{s}_\mathrm{dep}$ plotted against actual depletion distances $s_\mathrm{dep}$ for a representative subset of the electron beams in the same simulation as in Fig. \ref{fig:deposited_percentage_vs_residual_factor}. Points lying on the dashed line correspond to correct estimates. The colour indicates the mass density $\rho$ in the acceleration region.} \label{fig:depletion_distances} \end{figure} Figure \ref{fig:depletion_distances} confirms that Eq. \eqref{eq:estimated_depletion_distance} is accurate for small values of $s_\mathrm{dep}$. In the cases when $s_\mathrm{dep}$ is over-estimated, it could lead to the inclusion of a beam that turns out to propagate shorter than expected, but this has no impact on the accuracy of the result. 
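Equation \eqref{eq:estimated_depletion_distance} translates directly into a short routine. The sketch below (CGS units; all parameter values are illustrative assumptions) evaluates $\tilde{s}_\mathrm{dep}$ for a powerful beam accelerated in a fully ionised region of transition-region density, using the thresholds $(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min} = 10^5\;\mathrm{erg}/\mathrm{s}/\mathrm{cm}$ and $r_\mathrm{min} = 10^{-5}$ quoted above.

```python
import math

E4 = (4.803204e-10) ** 4  # electron charge to the 4th power [CGS]
KEV = 1.602177e-9         # erg per keV

def complete_beta(a, b, n=200000):
    """B(1; a, b) by midpoint quadrature (integrable singularity at t = 1)."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

def estimated_depletion_distance(n_H0, x0, lnL, lnLp, P_beam, mu0, E_c, delta,
                                 dEds_min=1e5, r_min=1e-5):
    """Depletion-distance estimate, assuming uniform conditions near the site."""
    gamma0 = x0 * lnL + (1.0 - x0) * lnLp
    N_c_star = mu0 * E_c ** 2 / (6.0 * math.pi * E4 * lnL)
    c_Q = (n_H0 * gamma0 / dEds_min
           * math.pi * E4 * (delta - 2.0) * P_beam / (mu0 * E_c ** 2)
           * complete_beta(delta / 2.0, 1.0 / 3.0))
    c_r = 1.0 / r_min
    return (N_c_star * lnL / (n_H0 * gamma0)) * max(c_Q, c_r) ** (2.0 / delta)

# Fully ionised site at n_H = 1e12 cm^-3 with a 1e18 erg/s beam
s_dep = estimated_depletion_distance(n_H0=1e12, x0=1.0, lnL=24.0, lnLp=6.0,
                                     P_beam=1e18, mu0=1.0, E_c=2.0 * KEV,
                                     delta=4.0)   # ~1.8e9 cm (~18 Mm)
```

For weaker beams the $c_r$ term dominates the maximum, and the estimate is then set purely by the residual-factor threshold.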
More problematically, an under-estimation of $s_\mathrm{dep}$ could lead to the rejection of a beam that indeed would contribute to the long-range energy transport. However, cases like these appear to be relatively uncommon. With Fig. \ref{fig:depletion_distances} as a guideline, we chose a minimum distance of $s_\mathrm{min} = 0.5\;\mathrm{Mm}$ for our results. We note that the figure shows a clear inverse relationship between depletion distance and density, and that practically all beams accelerated at densities higher than about $10^{-12}\;\mathrm{g}/\mathrm{cm}^3$ will be rejected. Consequently, all non-thermal energy that is transported a significant distance in this model comes from the corona and upper transition region. \subsubsection{Exclusion of low-energy beams} \label{sec:weak_exclusion} As discussed in Sect. \ref{sec:selecting_reconnection_sites}, the reconnection factor $K$ correlates with the available magnetic energy. However, because $K$ also depends on the configuration of the electromagnetic field, there will still be reconnection sites with $K > K_\mathrm{min}$ that have very low acceleration energies. As a way of reducing computational cost, these sites can be excluded with little consequence for the accuracy of the model by imposing a suitable lower limit $e_\mathrm{min}$ on the acceleration power density $e_\mathrm{acc}$ in Eq. \eqref{eq:acceleration_power_density_qjoule}. The total power in all included acceleration regions saturates as $e_\mathrm{min}$ is reduced, as shown in Fig. \ref{fig:global_heating_min_beam_en}. \begin{figure}[!thb] \includegraphics{global_heating_min_beam_en} \centering \caption{Variation in global acceleration power with the lower limit $e_\mathrm{min}$ on the local acceleration power density. Like in Fig. \ref{fig:global_heating_krec_lim}, the simulated beams have $\delta = 4$ and $p = 0.2$, and the method discussed in Sect. \ref{sec:short_range_exclusion} was used to filter out short-range beams. 
The curves have the same meaning as in Fig. \ref{fig:global_heating_krec_lim}.} \label{fig:global_heating_min_beam_en} \end{figure} Much like the situation in Fig. \ref{fig:global_heating_krec_lim}, this is because the increase in the number of included sites is counterbalanced by the decrease in the average power at each site. For very small values of $e_\mathrm{min}$, no additional beams are excluded, so the total power reaches a constant value. For this paper, we used $e_\mathrm{min} = 10^{-2}\;\mathrm{erg}/\mathrm{s}/\mathrm{cm}^3$, which led to a drastic reduction in the number of acceleration regions with a relatively minor loss in included power. \subsection{Effect of $p$ and $\delta$} \label{sec:effect_p_delta} Variation in the acceleration power fraction $p$ in Eq. \eqref{eq:acceleration_power} effectively leads to a proportional scaling of the energy deposition $Q(s)$ at every depth.\footnote{In principle, the relation is not directly proportional since $p$ also affects the cut-off energy $E_\mathrm{c}$ through $u_\mathrm{acc}$ in Eq. \eqref{eq:lower_cutoff_energy}. However, as Fig. \ref{fig:Ec_parameter_study} shows, this dependence is so weak as to be negligible.} Therefore, $p$ does not affect the spatial distribution of deposited beam energy, and the exact choice of its value has a limited qualitative bearing on the energy transport. Of course, if $p$ were extremely small, any effect of non-thermal electrons would be completely negligible regardless of how the electrons distributed their energy. Yet, based on the current understanding of the acceleration mechanisms taking place during reconnection, this seems unlikely. A value of $p = 0.2$ was therefore chosen for this paper. In contrast to $p$, the choice of $\delta$ has a major influence on the resulting spatial distribution of deposited beam energy. Panel (c) in Fig.
\ref{fig:single_beam_parameter_study} shows that a larger value of $\delta$ leads to a significantly faster rate of energy deposition with distance, and thus to a shorter penetration depth for the beam. This can be understood mathematically from the $-\delta/2$ power in Eq. \eqref{eq:beam_heating_per_volume}, and physically from the lower fraction of electrons in the high-energy tail of the non-thermal distribution. Because $\delta$ has the unfortunate feature of being both important and uncertain, we present results for a range of $\delta$-values where appropriate. Otherwise, we used a value of $\delta = 4$, which aids the analysis of the energy transport by giving the beams some penetrative power, while still being a realistic value lying well within the observed range. \section{Results} \label{sec:results} \subsection{Global energy transport} \label{sec:global_transport} Particle acceleration predominantly occurs in localised regions that are aligned with the major magnetic field structures. This can be seen in Figs. \ref{fig:xz_power_change_beams} and \ref{fig:horizontal_power_change_beams}, which show the net electron beam heating power accumulated respectively horizontally and vertically over the simulation domain. \begin{figure*}[!thb] \includegraphics{xz_power_change_beams} \centering \caption{Change in heating power in each grid cell due to the inclusion of acceleration and transport of non-thermal electrons. The power changes are accumulated along the $y$-axis of the simulation snapshot. Blue regions indicate a net reduction of thermal energy compared to the case without non-thermal electrons, which is due to the fraction $p$ of the local reconnection energy being injected into accelerated electrons instead of heating the ambient plasma. Orange regions show where the non-thermal electron energy eventually is deposited into the plasma as heat. 
The simulated beams have $\delta = 4$ and $p = 0.2$.} \label{fig:xz_power_change_beams} \end{figure*} \begin{figure*}[!thb] \includegraphics{horizontal_power_change_beams} \centering \caption{Same as Fig. \ref{fig:xz_power_change_beams}, but with the power changes accumulated over the full height of the simulation snapshot instead of the $y$-axis.} \label{fig:horizontal_power_change_beams} \end{figure*} Due to the exclusion of short-range electron beams discussed in Sect. \ref{sec:short_range_exclusion}, all significant acceleration takes place within the tenuous plasma above the transition region. The acceleration regions (apparent as blue areas in the figures) have lengths ranging from 1 to 15 Mm, and cross-sections typically smaller than 1 Mm. Longer acceleration regions tend to occur higher in the corona, where the magnetic field is more homogeneous. Interestingly, despite the decrease of magnetic field strength with height (panel (c) in Fig. \ref{fig:atmospheric_height_profiles}), these high regions typically exhibit equally energetic acceleration as regions at lower heights. The most intense acceleration can be found in a thin sheet centred on $x = 11$ Mm and $y = 10.67$ Mm (the $y$-coordinate is the same as for the plane of Fig. \ref{fig:krec_slice}). This is the current sheet associated with the Ellerman bomb and UV burst that are analysed by \citet{Hansteen2019}. Energy deposition from the non-thermal electrons (shown in orange in Figs. \ref{fig:xz_power_change_beams} and \ref{fig:horizontal_power_change_beams}) takes place throughout the corona, with higher concentrations near the ends of the acceleration regions and along the dominant magnetic structures. The strongest non-thermal heating typically occurs in the transition region near low-lying acceleration regions. At these locations, the electrons are often able to reach significant chromospheric depths. 
Numerous electron beams entering the lower atmosphere from different directions aggregate horizontally due to the convergence of the magnetic field with depth. This produces collections of thin, semi-vertical strands of concentrated non-thermal heating in the chromosphere, which are anchored in the photosphere at locations with a strong vertical magnetic field. \subsection{Selected sets of electron beams} \label{sec:selected_beams} To analyse the beam heating in the lower atmosphere more closely, we consider the three subsets of electron beams shown in Fig. \ref{fig:xz_power_change_selected_beams}. \begin{figure}[!thb] \centering \includegraphics{xz_power_change_selected_beams} \caption{Selected sets of electron beams, plotted in the same manner as Fig. \ref{fig:xz_power_change_beams}. Set 1 is a coherent bundle of beams that originates in a long acceleration region at the top of a coronal loop and terminates at one of the footpoints. Set 2 consists of several electron beam bundles coming from various locations in the simulation domain, all converging at the same chromospheric site. Set 3 encompasses electrons that are accelerated in the strong central current sheet and ejected along one of the magnetic `legs' that connect the current sheet with the lower chromospheric plasma.} \label{fig:xz_power_change_selected_beams} \end{figure} They represent various ways in which electron beams can join together to produce significant localised heating in the lower atmosphere. This includes a single long bundle originating high up in the corona (set 1), the convergence of multiple thin bundles coming from separate acceleration regions (set 2), and a short bundle associated with an acceleration region that lies just above the transition region (set 3). In Fig. \ref{fig:heating_comparison}, the horizontal average of $Q_\mathrm{beam}$ in the core of the cone that penetrates the lower atmosphere is plotted with height for each beam set.
In order to demonstrate the effect of the power-law index $\delta$, each beam heating profile is plotted for $\delta$ ranging from 3 to 6. The shapes of the local transition regions are apparent from the included temperature profiles. \begin{figure*}[!thb] \includegraphics{heating_comparison} \centering \caption{Horizontal averages of heating due to electron beams and thermal conduction with height along the three selected sets of electron trajectories. The averages are taken over the grid cells for which the aggregated $Q_\mathrm{beam}$ exceeds its 75th percentile within each horizontal layer. The dashed line shows the corresponding average temperature profile. We note that the height axes have the same extent, but separate bounds.} \label{fig:heating_comparison} \end{figure*} Transition region beam heating can be seen to be relatively robust to variations in $\delta$. At the same time, this is where the difference between the origins of the electron beams for the three beam sets primarily manifests itself. The reason for this is that the number of incoming low- and intermediate-energy electrons, which make up the bulk of the transition region heating, does not change considerably with $\delta$, but is highly sensitive to the amount of coronal plasma that the beam has propagated through. The electrons accelerated at the top of the coronal loop (set 1) produce a pronounced peak in the beam heating, centred on the bottom of the transition region. This is because most electrons with too little energy to make it through the transition region would already have stopped on the way through the coronal loop. For the converging electron beam bundles coming from separate locations (set 2), the corresponding peak is less distinct. Some of the beams in this set stem from just above the transition region, and their low-energy electrons provide a significant amount of heat to the upper transition region, making the peak less pronounced.
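The layer-wise conditional average described in the caption of Fig. \ref{fig:heating_comparison} is straightforward to reproduce. The following NumPy sketch is purely illustrative and not the actual analysis code; the array layout (height as the leading axis) and the function name are assumptions:

```python
import numpy as np

def percentile_masked_average(q, percentile=75.0):
    """Average each horizontal layer of q (shape nz, ny, nx) over the
    grid cells whose value exceeds that layer's given percentile."""
    profile = np.empty(q.shape[0])
    for k, layer in enumerate(q):
        threshold = np.percentile(layer, percentile)
        mask = layer > threshold
        # Guard against degenerate (e.g. uniform) layers with an empty mask
        profile[k] = layer[mask].mean() if mask.any() else 0.0
    return profile
```

The strict `>` comparison matches the phrase "exceeds its 75th percentile"; restricting the average to the strongest quartile of each layer keeps the profile representative of the beam cores rather than the quiet surroundings.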
This situation is most evident for the beams coming from the strong current sheet that resides in the lower corona near the centre of the simulation domain (set 3). Here, the full spectrum of electron energies is injected directly into the transition region. As a result, the heating culminates near the top of the transition region and decreases monotonically with depth in the lower atmosphere. Below the transition region, the decrease with depth of the average beam heating rate is highly dependent on $\delta$. At a given chromospheric depth, the decrease in $Q_\mathrm{beam}$ with increasing $\delta$ appears to be approximately exponential. However, the average slopes of the beam heating profiles for a given value of $\delta$ are similar for all three beam sets. They are slightly steeper for sets 1 and 3 than for set 2, but this is because the mass densities at these locations are somewhat higher. In contrast to the situation in the transition region, only electrons in the high-energy tail of the incoming distribution can significantly penetrate the chromosphere. This explains the sensitivity to $\delta$, which controls the relative portion of high-energy electrons in the distribution. The electrons in the high-energy tail are not significantly influenced by the coronal plasma, so the distance travelled by the electrons through the corona has little bearing on the distribution of electrons that enter the chromosphere. As a result, any difference between the shapes of the chromospheric heating profiles, barring local variations in mass density, must be caused by a difference between the shapes of the initial electron distributions in the acceleration regions. In the acceleration model used here, this requires significant variations between the temperatures of the involved acceleration regions, which would lead to different values of the lower cut-off energy $E_\mathrm{c}$ (Fig. \ref{fig:Ec_parameter_study}). 
The selected beam sets all have similar temperatures in their acceleration regions, so the shape of the electron distribution that penetrates the chromosphere is comparable in all three cases. Energy transport by accelerated electrons plays a similar role as thermal conduction, in that it transports energy from the corona to the transition region along the magnetic field. However, the relative importance of these mechanisms differs greatly with depth in the lower atmosphere. This is evident from the red curve in Fig. \ref{fig:heating_comparison}, which shows the conductive heating along the three sets of beam trajectories. In all cases, conductive heating is 10--100 times stronger than electron beam heating throughout most of the transition region. The strong conductive heating stems from the abrupt drop from coronal to chromospheric temperatures. But as the temperature decreases towards the chromosphere, so does the thermal conductivity of the plasma, causing the conductive heating to nearly vanish at the bottom of the transition region. On the other hand, the heat deposited by non-thermal electrons is close to its peak value at this location, owing to the sudden rise in mass density with depth. For all values of $\delta$, beam heating exceeds conductive heating by many orders of magnitude throughout the chromosphere. \subsection{Energetics} The total power of all accelerated electrons in the simulation snapshot is roughly $10^{24}\;\mathrm{erg}/\mathrm{s}$. Beam sets 1 and 2 each produce approximately $3\cdot 10^{21}\;\mathrm{erg}/\mathrm{s}$ of non-thermal electron power. This value is representative of a typical collection of electron beams forming a coherent lower atmospheric heating site in the simulation. Beam set 3, which is associated with a particularly energetic event, exceeds this power by two orders of magnitude. 
For $\delta = 4$, roughly $1\%$ of the total beam power in the atmosphere is deposited at densities higher than $10^{-11}\;\mathrm{g}/\mathrm{cm}^3$, in what might be considered chromospheric plasma. Varying $\delta$ was found to produce an approximately power-law variation in the percentage of non-thermal power deposited in the chromosphere, with significantly smaller percentages for higher values of $\delta$. However, the power-law exponent describing this relationship is highly dependent on the individual beam trajectory. For beam sets 1, 2, and 3, the percentage of chromospheric power is respectively $10\%$, $1\%$, and $6\%$ for $\delta = 4$. The percentage decreases by 1--3 orders of magnitude when going from $\delta = 3$ to $\delta = 6$. \section{Discussion and conclusions} \label{sec:discussion} The key factor in determining the amount and energy of accelerated electrons in any part of the corona is the magnetic topology. Although a stronger magnetic field provides a larger source of energy, it is the magnetic topology that determines the potential for this energy to be released by reconnection. This is evident in the distribution of acceleration regions in our simulation. The upper corona, where the expanding magnetic bubbles collide with the weak overlying ambient field, contains acceleration regions that are as energetic as those in magnetically stronger, but topologically simpler parts of the lower corona. Consequently, the overall complexity of the magnetic field configuration is likely to be the main indicator of the significance of non-thermal energy transport. In our simulation, the non-thermal power deposited at notable beam heating sites in the lower atmosphere ranges from $10^{18}$ to $10^{22}\;\mathrm{erg}/\mathrm{s}$, depending on the particular site and value of $\delta$.
A typical small-scale beam heating event in the atmospheric conditions modelled here may then be estimated to release $10^{20}$--$10^{24}$ erg of non-thermal energy in the lower atmosphere, assuming the events to last $\sim 100$ s. Other heating mechanisms, including local Joule heating, thermal conduction, and magnetoacoustic shocks, will make a significant additional contribution to the total energy release in some of these events \citep{Archontis2014, Hansteen2017, Hansteen2019}, one example being the heating near the strong central current sheet (beam set 3). Most of the beam heating events are nevertheless relatively weak, even for nanoflares. But they are highly abundant, and a $10 \times 10\;\mathrm{Mm}$ horizontal area of the chromosphere is likely to host a significant number of small beam heating events at any given time. Even though the particle beams in this simulation are weak, their heating effect on the chromosphere is many orders of magnitude stronger than that of thermal conduction. This demonstrates that heating by energetic particles and thermal conduction in the lower atmosphere are qualitatively different, even under relatively quiet solar conditions. Because efficient thermal conduction requires a hot plasma, conductive transport always ceases at the bottom of the transition region. Incoming energetic particles, on the other hand, are not directly affected by the transition region temperature drop. The increase in mass density causes them to thermalise more quickly, but this occurs more gradually with depth than the abrupt shut-down of thermal conduction. The inclusion of chromospheric heating by electron beams in atmospheric simulations such as the one used here could potentially account for discrepancies between synthetic diagnostics and observations. \citet{Testa2014} found that thermal conduction alone could not explain observed blueshifts of the Si IV spectral line in small-scale brightenings at coronal loop footpoints.
Instead, their simulations, together with the extended analysis of \citet{Polito2018}, show that non-thermal electron beams can provide sufficient heating at the depths required to produce the upflows responsible for the blueshifts. An advantage of considering the transport of accelerated particles in a 3D rather than a 1D atmospheric model is that it paints a realistic picture of how the available non-thermal energy is distributed in space. Although coherent large-scale flaring events may be reasonably approximated in a 1D coronal loop model with accelerated particles injected at the top, these types of idealised configurations are probably not representative of the situation in most active regions most of the time, and even less so outside of active regions. The quiet solar magnetic field tends to be tangled and inhomogeneous, which can lead to acceleration at any height in the corona and gives a complicated mapping from the acceleration regions to the locations where the non-thermal energy is deposited. Because acceleration takes place over extended regions in which the magnetic field changes topology, particles that are associated with the same reconnection event may end up on completely different trajectories through the atmosphere. Furthermore, the convergence of the magnetic field with depth can lead energetic particles that originate in separate acceleration regions to deposit their energy near the same location in the lower atmosphere (as exemplified by beam set 2 in Fig. \ref{fig:xz_power_change_selected_beams}). The fact that beam heating sites can receive significant contributions of non-thermal electrons from several acceleration regions may have important observational consequences. The incoming electron beams could have been accelerated under different conditions, and thus do not necessarily have the same initial energy distributions. 
Moreover, beams coming from separate locations are influenced to varying degrees by collisions in the corona due to the different trajectories they take to the beam heating site. For instance, beams traversing a high column mass of coronal plasma lose a large share of their low-energy electrons, which gives them a hard energy distribution upon impact with the lower atmosphere. The total distribution of non-thermal electrons incident on the beam heating site could thus be a superposition of several distinct distributions. Consequently, the common assumption of a power-law distribution for the bremsstrahlung-emitting electrons in flares may not be applicable in all cases. In future work, the response of the atmosphere to the electron beams will be investigated. It is also of interest to generate synthetic spectra from the beam heating sites. Furthermore, the development of the energetic particle model presented here is ongoing, and various improvements could be implemented. Currently, our particle transport model does not include the effects of magnetic gradient forces. However, the strengthening of the magnetic field with depth in the chromosphere could significantly increase the pitch angle of the incoming electrons. If this effect is sufficiently strong, it will hamper the penetration of the most energetic electrons. Instead of thermalising below the photosphere, they might only reach the middle chromosphere, resulting in more beam heating at this depth. In general, a numerical treatment of the energy transport problem is required when considering magnetic gradient forces, although a simplified analytical approach has been suggested by \citet{Chandrashekar1986}. The atmospheric simulation used for this paper assumes LTE and statistical equilibrium in its equation of state.
The effects of non-equilibrium hydrogen ionisation are likely to alter the ionisation fraction and electron number densities in the chromosphere \citep{Leenaarts2007}, and could thus have a notable impact on the resulting distribution of beam heating. Moreover, because collisions with the non-thermal electrons can ionise neutral hydrogen atoms, the electron beams themselves contribute to an increase in the hydrogen ionisation rate \citep[see e.g.][]{Fang1993}. The resulting increase in the electron number density will, in turn, affect the chromospheric beam heating. When enhanced collisional ionisation is taken into account, it may also be important to consider electrons accelerated below the transition region \citep[as done by][]{Fang2006}. Although the electrons are unable to transfer energy a significant distance away from the acceleration region due to the high plasma density, they still produce a local increase in the ionisation rate which would not be present if all the reconnection energy was converted directly into Joule heating. \begin{acknowledgements} This research was supported by the Research Council of Norway, project number 250810, through its Centres of Excellence scheme, project number 262622, and through grants of computing time from the Programme for Supercomputing. \end{acknowledgements} \bibliographystyle{aa} \section{Introduction} The solar atmosphere is full of energetic particles. They are produced when ambient plasma particles are accelerated out of thermal equilibrium by a strong electric field or by rebounding off moving magnetic elements. Upon leaving the site of acceleration, their available trajectories are limited by the Lorentz force, which compels charged particles to follow the direction of the magnetic field. Consequently, the accelerated particles form coherent beams. The interactions of these beams with the ambient plasma are believed to be a key mechanism in solar flares. 
It is generally accepted that flares are powered by the relaxation of stresses in the magnetic field through the process of magnetic reconnection. The release of magnetic energy manifests as electric field enhancements and plasma flows, leading to strong currents with associated resistive heating as well as jets of outflowing plasma and magnetoacoustic waves. This also creates conditions favourable for particle acceleration. The ensuing beams of energetic particles, which may account for a significant portion of the flare energy \citep{Lin1971}, transfer energy to the plasma along their trajectories through Coulomb interactions. The X-ray bremsstrahlung emitted in these interactions can escape the atmosphere relatively unaffected, and thus it provides valuable information about the energy distribution of the particles and the plasma conditions at the site of emission. Signatures of accelerated particles, in particular non-thermal electrons, are found in observed hard X-ray spectra from active region flaring events, ranging from large flares with energies up to $10^{32}$ erg \citep[see][for a review]{Benz2017} to $10^{27}$ erg microflares at the sensitivity limit of current instruments \citep[e.g.][]{Christe2008}. This suggests that particle beams play an active role in flares of all sizes. Beyond hard X-ray detectability, \citet{Parker1988} predicted frequent impulsive heating events with energies of the order of $10^{24}$ erg that are associated with small-scale reconnection due to the continuous interweaving of the magnetic field by photospheric convective motions. Signs of these types of events, dubbed nanoflares, have been observed down to $10^{25}$ erg as ultraviolet (UV) and soft X-ray flashes in the chromosphere and transition region of the quiet Sun \citep{Krucker1998, Benz2002}. Based on detailed 1D simulations, \citet{Testa2014} found that non-thermal electron beams were required to reproduce the UV spectra in their observations of nanoflares. 
Early models of particle beams were mostly based on simple analytical expressions for the mean collisional change in velocity of energetic particles moving through the atmospheric plasma \citep{Brown1972, Syrovatskii1972, Emslie1978}. The response of the atmosphere to an injected beam of non-thermal electrons has been studied by incorporating these expressions into the energy equation of 1D hydrodynamics simulations \citep{Somov1981, MacNeice1984, Nagai1984}. More recently, the realism of these types of simulations was improved significantly by the inclusion of detailed radiative transfer \citep{Hawley1994, Abbett1999, Allred2005}. A more general treatment of the accelerated particles is possible by numerically solving the Fokker--Planck equation governing the evolution of the particle distribution. Due to its high computational demand, this method was initially only used to study the detailed propagation and bremsstrahlung emission of non-thermal electrons in simple static model atmospheres \citep{Leach1981, Leach1983}. However, in state-of-the-art 1D flare simulations \citep{Liu2009, Allred2015, RubiodaCosta2015}, it has now largely replaced the approximate heating expressions derived from mean scattering theory. The high level of detail in these simulations makes them a powerful tool for studying flare dynamics and for generating synthetic diagnostics of flaring atmospheres. Yet, by nature of their dimensionality, simulations of this kind can only consider a single flaring loop at a time, and these loops do not live in isolation. They are part of a continuous magnetic field embedded in a 3D plasma environment. Magnetic reconnection and associated acceleration of energetic particle beams are driven by the overall evolution of the atmosphere, which in turn is influenced by the collective interaction of the beams with the ambient plasma.
With the drastic increase in computing power along with the advent of advanced 3D radiative magnetohydrodynamics (MHD) codes in the past couple of decades \citep{Voegler2005, Felipe2010, Gudiksen2011}, realistic simulations that self-consistently reproduce the overall structure and evolution of diverse features of the solar atmosphere are now possible. Incorporating acceleration and propagation of energetic particle beams into these types of 3D simulations would greatly benefit our understanding of the role of particle beams on the Sun. We have taken the first step towards this goal, and here present a simple treatment of energy transport by accelerated particles applied to a realistic 3D simulation of the quiet solar atmosphere. This work is a further development of the model introduced in \citet{Bakke2018}. A related approach was recently used by \citet{Ruan2020} to incorporate electron beams into a 2.5D MHD simulation of a large flare. A brief description of the radiative MHD code that we employ is given in Sect. \ref{sec:atmospheric_simulation}, where we also present the simulated atmosphere used for this paper. In Sect. \ref{sec:accelerated_particles}, we present the inclusion of accelerated particle beams, starting with the method of detecting reconnection sites, followed by the acceleration model and the particle transport model. Methods for reducing the computational demand, as well as for selecting values for the free parameters, are respectively discussed in Sects. \ref{sec:tuning} and \ref{sec:effect_p_delta}. Section \ref{sec:results} contains our results for the transport of energy by particle beams. These are discussed in Sect. \ref{sec:discussion}, where we also consider future work. \section{Methods} \subsection{Atmospheric simulation} \label{sec:atmospheric_simulation} We used the Bifrost code \citep{Gudiksen2011} for simulating a 3D region of the upper solar atmosphere, spanning from the top of the convection zone to the corona. 
Bifrost solves the resistive MHD equations with the inclusion of radiative transfer and field-aligned thermal conduction. The equation of state is computed under the assumption of local thermodynamic equilibrium (LTE), using the Uppsala Opacity Package \citep{Gustafsson1975}. The radiative transfer computation encompasses optically thin emission from the upper chromosphere and corona, approximated non-LTE radiative losses from chromospheric hydrogen, calcium and magnesium \citep{Carlsson2012}, and full radiative transfer with scattering and LTE plasma opacities in the photosphere and convection zone \citep{Hayek2010}. To maintain numerical stability, the code employs a slightly enhanced overall diffusivity in combination with spatially adaptive local diffusion (so-called hyper diffusion). For this paper, the atmospheric environment for the accelerated particles was provided by a horizontally periodic Bifrost simulation of a $24 \times 24\;\mathrm{Mm}$ patch of the atmosphere that spans vertically from $2.5\;\mathrm{Mm}$ below the photosphere to $14.3\;\mathrm{Mm}$ above it. The simulation has a resolution of 768 grid cells along each dimension, with a uniform grid cell extent of $31\;\mathrm{km}$ in the horizontal directions. Along the vertical direction, the grid cell extent is about $12\;\mathrm{km}$ in the layer between the photosphere (at height zero) and the height of $4\;\mathrm{Mm}$, in order to resolve the abrupt local variations near the transition region. Away from this layer, the extent increases evenly in both directions to about $21\;\mathrm{km}$ near the bottom of the simulation box and $80\;\mathrm{km}$ near the top. Convective motions are maintained in the sub-photospheric part of the simulation by injection of heat through the bottom boundary, balanced by radiative cooling in the photosphere. These motions lead to acoustic shocks and braiding of magnetic field lines, which produce a hot chromosphere and corona. 
The corona was initially configured with an ambient magnetic field roughly oriented along the $x$-direction. A magnetic flux emergence scenario was then triggered by injecting a 2000 G $y$-directed magnetic field, covering $x \in [4, 18]$ Mm and the full extent in $y$, through the bottom boundary. The flux sheet was broken up by convective motions as it rose to the photosphere. Here, the strongest concentrations of the field broke through and expanded into the upper atmosphere, carrying with them cool photospheric plasma. The expanding magnetic bubbles were eventually confined by the ambient coronal field. Interactions between these bubbles and with the ambient coronal field led to magnetic reconnection at various heights and ensuing explosive events such as Ellerman bombs and UV bursts. See \citet{Hansteen2019} for a more detailed description and analysis of the simulation. It should be noted that all flaring events produced in the simulation are small (at most $\sim 10^{25}$ erg), and can generally be characterised as nanoflares. \begin{figure}[!thb] \includegraphics{magnetogram} \centering \caption{Vertical magnetic field $B_\mathrm{v}$ in the photosphere (height zero) of the simulation snapshot.} \label{fig:magnetogram} \end{figure} \begin{figure}[!thb] \includegraphics{atmospheric_height_profiles} \centering \caption{Horizontally averaged mass density $\rho$ (panel (a)), temperature $T$ (panel (b)) and magnetic field strength $B$ (panel (c)) as a function of height in the simulation snapshot.} \label{fig:atmospheric_height_profiles} \end{figure} This paper considers a snapshot of the atmosphere at a single instant of the continuously evolving dynamic simulation, 8220 s after the magnetic flux sheet was injected. Figure \ref{fig:magnetogram} shows the vertical component of the photospheric magnetic field at this time. The variations in horizontally averaged mass density, temperature, and magnetic field strength with height are shown in Fig.
\ref{fig:atmospheric_height_profiles}. In this figure, the presence of the relatively dense and cool magnetic bubbles filling parts of the corona is apparent in the density and temperature profiles. The noticeable break in the density profile near the height of 8 Mm corresponds to the top of the main bubble. \subsection{Accelerated particles} \label{sec:accelerated_particles} Energetic electrons and ions are produced through various acceleration mechanisms during magnetic reconnection. Due to the Lorentz force, the particles follow a gyrating trajectory around the magnetic field as they travel away from the reconnection site. At the same time, they exchange energy and momentum with the background plasma through Coulomb collisions. Based on these processes, we developed a model for the production and transport of accelerated particles suitable for integration into a 3D MHD simulation. The first step was to identify the grid cells of the simulation domain that lie at sites of magnetic reconnection. In each of these grid cells, the energy distribution of the locally accelerated particles was estimated. Finally, the heating of the ambient plasma by the passing energetic particle beam was computed along the length of the magnetic field line going through the centre of each grid cell. The following sections describe the steps of this model in detail. \subsubsection{Reconnection sites} \label{sec:reconnection_sites} It is well established that particle acceleration is associated with magnetic reconnection. Reconnection takes place where regions of opposite magnetic polarity come together and produce a strong rotation of the magnetic field. A magnetic diffusion region arises around the interface between the two reconnecting magnetic domains, where the gradients are strong enough to break the coupling between the magnetic field and the plasma. Inside this diffusion region, free magnetic energy is released in several different ways.
The electric field induced by the rotation of the magnetic field creates a thin layer of strong current, which heats the local plasma through Joule heating. In addition, plasma is propelled away from the reconnection site from the ends of the current sheet by the magnetic tension force. Finally, a fraction of the local charged particles are accelerated to very high energies, as we discuss further in Sect. \ref{sec:initial_particle_distributions}. \citet{Biskamp2005a} derived the following criterion for conservation of magnetic topology in the context of resistive MHD: \begin{equation} \label{eq:reconnection_criterion} \left\lVert\mathbf{B}\times\left(\nabla\times\mathbf{S}\right)\right\rVert = 0, \end{equation} where \begin{equation} \label{eq:electric_field_projection} \mathbf{S} = \left(\frac{\mathbf{E} \cdot \mathbf{B}}{\mathbf{B} \cdot \mathbf{B}}\right)\mathbf{B} \end{equation} is the projection of the electric field onto the magnetic field direction. Reconnection takes place where Eq. \eqref{eq:reconnection_criterion} is violated. This can thus be used as a criterion for identifying reconnection sites. However, in the context of a numerical simulation, the onset of reconnection only occurs once the value \begin{equation} \label{eq:krec} K = \left\lVert\mathbf{B}\times\left(\nabla\times\mathbf{S}\right)\right\rVert \end{equation} exceeds some finite threshold $K_\mathrm{min}$ due to limited precision in the employed numerical scheme. An example of how $K$ varies with position in our simulated atmosphere is shown in Fig. \ref{fig:krec_slice}. \begin{figure*}[!thb] \includegraphics{krec_slice} \centering \caption{Values of the reconnection factor $K$ (Eq. \eqref{eq:krec}) in a slice through the $y$-axis of the simulation snapshot, at $y = 10.67\;\mathrm{Mm}$.} \label{fig:krec_slice} \end{figure*} We discuss how we determined a suitable value for $K_\mathrm{min}$ in Sect. \ref{sec:selecting_reconnection_sites}. 
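For illustration, the reconnection factor of Eqs. \eqref{eq:electric_field_projection} and \eqref{eq:krec} can be evaluated on a uniform grid with finite differences. The following NumPy sketch is a hedged illustration, not Bifrost's actual (staggered-grid) implementation; the array layout, grid spacings, and function name are assumptions, and the magnetic field is assumed nonzero everywhere:

```python
import numpy as np

def reconnection_factor(E, B, dx, dy, dz):
    """Compute K = ||B x (curl S)|| with S = ((E.B)/(B.B)) B.

    E, B: arrays of shape (3, nz, ny, nx) in consistent units,
    with the vector component as the leading axis.
    """
    # Field-aligned projection of the electric field (Eq. for S);
    # assumes |B| > 0 in every grid cell
    coeff = np.sum(E * B, axis=0) / np.sum(B * B, axis=0)
    S = coeff * B

    # Centred finite differences; spatial axis order is (z, y, x)
    dSx = np.gradient(S[0], dz, dy, dx)  # [dSx/dz, dSx/dy, dSx/dx]
    dSy = np.gradient(S[1], dz, dy, dx)
    dSz = np.gradient(S[2], dz, dy, dx)
    curl = np.stack([dSz[1] - dSy[0],   # (curl S)_x = dSz/dy - dSy/dz
                     dSx[0] - dSz[2],   # (curl S)_y = dSx/dz - dSz/dx
                     dSy[2] - dSx[1]])  # (curl S)_z = dSy/dx - dSx/dy

    # K vanishes wherever magnetic topology is conserved
    return np.linalg.norm(np.cross(B, curl, axis=0), axis=0)
```

As a sanity check, a uniform $\mathbf{B}$ with a constant parallel $\mathbf{E}$ gives a constant $\mathbf{S}$ and hence $K = 0$ everywhere, consistent with conserved magnetic topology.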
\subsubsection{Initial particle distributions} \label{sec:initial_particle_distributions} There is a range of possible mechanisms that can produce energetic particles during reconnection. One alternative is direct acceleration by the coherent electric field induced by the rotation of the magnetic field across the diffusion region \citep{Speiser1965, Martens1990, Litvinenko1993}. Test particle simulations of this kind of acceleration have been run for various magnetic configurations, including reconnecting Harris current sheets \citep{Zharkova2005a, Zharkova2005}, magnetic X-points \citep{Heerikhuisen2002, Wood2005}, and fan and spine reconnection \citep{Dalla2006, Dalla2008, Stanier2012}. These simulations generally produce particle populations with energy distributions that resemble power-laws. Power-law distributions are also found in more realistic particle-in-cell simulations, which include the changes in the electric field induced by the accelerated particles in a self-consistent manner \citep{Baumann2013a, Li2019}. As can often be seen in these kinds of kinetic simulations, direct acceleration is likely to be accompanied by other types of acceleration processes. One example is first-order Fermi acceleration \citep{Fermi1954}, where particles gain energy by repeatedly scattering back and forth between converging magnetic elements, such as the ends of a shrinking plasmoid \citep{Drake2006} or in a collapsing magnetic trap \citep{Somov1997}. If the scattering agents move in a random rather than systematic fashion, second-order Fermi acceleration \citep{Fermi1949} can take place. Here, the particles experience a fluctuating energy increase owing to the higher likelihood of (accelerating) head-on collisions compared to (decelerating) rear-end collisions. 
This kind of stochastic acceleration is, like direct acceleration, typically predicted to produce a power-law energy distribution for the accelerated particles, both in models based on the Fokker--Planck formalism \citep{Miller1996, Stackhouse2018} and in test particle simulations \citep{Dmitruk2003, Onofri2006}. It is widely accepted that energetic electrons play an important role in energy transport during solar flares. The detection of gamma-rays from large flares has revealed that ions can also play a part in these events \citep{Chupp1973}. However, for small flares, the effect of accelerated ions is likely to be minor. This is because ions, owing to their high masses, have significantly lower velocities than electrons for a given kinetic energy. As a consequence, they experience a much higher rate of collisions with ambient particles (the frequency of Coulomb collisions decreases with the cube of the velocity), and thus they lose their energy to the background plasma faster. For example, consider an electron or proton with mass $m$ travelling through an ionised hydrogen plasma with number density $n_\mathrm{H}$. The change in kinetic energy $E$ with distance $s$ for the particle is given by \citep[e.g.][]{Emslie1978} \begin{equation} \label{eq:particle_energy_loss} \frac{\mathrm{d}E}{\mathrm{d}s} = -\left(\frac{m}{m_\mathrm{e}}\right)\frac{2\pi e^4 n_\mathrm{H}\ln\Lambda}{E}, \end{equation} where $e$ is the elementary charge and $\ln\Lambda$ is the Coulomb logarithm (discussed further in Sect. \ref{sec:particle_energy_deposition}). It is clear from this that a proton ($m = m_\mathrm{p}$) deposits its energy much faster than an electron ($m = m_\mathrm{e}$), by a factor of $m_\mathrm{p}/m_\mathrm{e} \approx 1800$. Hence, unless the ions are accelerated to very high velocities, their energy can be expected to be deposited close to the reconnection sites. This suggests that it is safe to omit ions when modelling the long-range energy transport in small flares. 
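To make the mass scaling concrete, integrating Eq. \eqref{eq:particle_energy_loss} from an initial energy $E_0$ down to zero gives a stopping distance $s_\mathrm{stop} = E_0^2/(2C\,m/m_\mathrm{e})$ with $C = 2\pi e^4 n_\mathrm{H}\ln\Lambda$. A minimal sketch in cgs units, with illustrative coronal values of density and Coulomb logarithm (assumed for the example, not taken from the simulation):

```python
import numpy as np

# Illustrative coronal parameters (cgs units, assumed for this example)
e = 4.803e-10          # elementary charge [esu]
n_H = 1e9              # hydrogen number density [cm^-3]
lnLambda = 20.0        # Coulomb logarithm
E0 = 10e3 * 1.602e-12  # initial kinetic energy: 10 keV [erg]
m_ratio_p = 1836.2     # proton-to-electron mass ratio m_p/m_e

# Integrating dE/ds = -(m/m_e) C / E gives s_stop = E0^2 / (2 C m/m_e)
C = 2 * np.pi * e**4 * n_H * lnLambda
s_electron = E0**2 / (2 * C)             # m/m_e = 1
s_proton = E0**2 / (2 * C * m_ratio_p)   # m/m_e = m_p/m_e

print(f"electron stopping distance: {s_electron / 1e8:.0f} Mm")
print(f"proton stopping distance:   {s_proton / 1e8:.3f} Mm")
```

With these numbers a 10 keV electron travels of the order of $10^2$ Mm before stopping, while a proton of the same energy is stopped within a fraction of a megametre.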
Simulating the formation of an accelerated electron distribution during a reconnection event is a computationally expensive task. When many events have to be considered, a detailed simulation of the acceleration is therefore not a viable option on current hardware. However, it is reasonable to assume that the result of the acceleration process will be a non-thermal population of electrons with energies distributed according to a power-law: \begin{equation} \label{eq:non_thermal_distribution} n_\mathrm{NT}(E \geq E_\mathrm{c}) = n_\mathrm{acc}\left(\frac{\delta - 1/2}{E_\mathrm{c}}\right)\left(\frac{E}{E_\mathrm{c}}\right)^{-(\delta + 1/2)}. \end{equation} Here, $n_\mathrm{acc} = \int_{E_\mathrm{c}}^\infty n_\mathrm{NT}(E)\mathrm{d}E$ is the number density of accelerated electrons and $E_\mathrm{c}$ is a lower cut-off energy below which electrons are not considered non-thermal. The power-law index $\delta$ controls how rapidly the number of electrons diminishes with higher energy. It is usually defined in terms of the non-thermal electron flux $F_\mathrm{NT} = n_\mathrm{NT}v$ (where $v \propto E^{1/2}$ is the electron speed), so that $F_\mathrm{NT}(E) \propto E^{-\delta}$. The range of possible values for the power-law index $\delta$ in Eq. \eqref{eq:non_thermal_distribution} is subject to some loose observational constraints. Spectral analysis of hard X-ray bursts has shown that the non-thermal bremsstrahlung emission due to the interactions of accelerated electrons with the ambient plasma tends to have a single or double power-law distribution in energy \citep[e.g.][]{Kane1970, Lin1987}. Working backwards from the observed spectrum one can attempt to infer the initial distribution of the non-thermal electrons by considering the bremsstrahlung emission process inside the X-ray source and the energy loss of the electrons during their journey to the source from the acceleration region \citep{Holman2011, Kontar2011}. 
Studies of this type, both of regular flares \citep{Kontar2002, Kontar2003, Sui2005, Battaglia2006, Krucker2010} and microflares \citep{Lin2001, Krucker2002, Christe2008, Hannah2008, Glesener2020}, suggest that the initial distribution follows a power-law with $\delta$ varying between 2 and 10, typically with larger values for less energetic events. There is some observational evidence for a linear-log relationship between $\delta$ and the X-ray flux measured at a fixed energy \citep{Grigis2004}, which has been reproduced in numerical models of stochastic acceleration \citep{Grigis2005, Grigis2006}. However, in the absence of a proper acceleration simulation for predicting its value, the least speculative way of specifying $\delta$ is to treat it as a free parameter. Section \ref{sec:effect_p_delta} discusses the effect of varying $\delta$ in our model. The total power $P_\mathrm{acc}$ going into the acceleration of an electron population generally corresponds to some fraction $p$ of the rate of magnetic energy release $P_\mathrm{rec}$ at the reconnection site: \begin{equation} \label{eq:acceleration_power} P_\mathrm{acc} = p P_\mathrm{rec}. \end{equation} If the volume of the reconnection site is $V$, the average acceleration power per volume is \begin{equation} \label{eq:acceleration_power_density} e_\mathrm{acc} = \frac{P_\mathrm{acc}}{V}, \end{equation} and if the acceleration process lasts for a duration $\Delta t$, the energy density of accelerated electrons in the reconnection site is \begin{equation} \label{eq:acceleration_energy_density} u_\mathrm{acc} = e_\mathrm{acc}\Delta t. \end{equation} This quantity is also related to the number density of non-thermal electrons: \begin{equation} \label{eq:acceleration_energy_density_vs_number_density} u_\mathrm{acc} = \int_{E_\mathrm{c}}^\infty E\;n_\mathrm{NT}(E)\;\mathrm{d}E = n_\mathrm{acc}\left(\frac{2\delta - 1}{2\delta - 3}\right)E_\mathrm{c}. 
\end{equation} Knowledge of the fraction $p$, together with basic properties of the reconnection event, thus enables the determination of $u_\mathrm{acc}$, which can be used to compute $n_\mathrm{acc}$ through Eq. \eqref{eq:acceleration_energy_density_vs_number_density}. In a pure resistive MHD context, all the dissipated magnetic energy is released through Joule heating in the reconnection current sheets. Therefore, the local Joule heating prior to inclusion of particle acceleration can be used as a proxy for the reconnection energy, giving the relation \begin{equation} \label{eq:acceleration_power_density_qjoule} e_\mathrm{acc} = p Q_\mathrm{Joule}, \end{equation} where $Q_\mathrm{Joule}$ is the Joule heating rate per volume. Because a fraction $p$ of the energy that would previously go into Joule heating is now used for electron acceleration, $Q_\mathrm{Joule}$ must be reduced accordingly after application of Eq. \eqref{eq:acceleration_power_density_qjoule}. Observational studies of the energy partition in flares suggest that typical values of $p$ could range from $10\%$ \citep{Emslie2004, Emslie2012} to as high as $50\%$ \citep{Lin1971}, and kinetic reconnection simulations support the notion that values of these magnitudes are indeed conceivable \citep{Tsiklauri2007, Baumann2013a}. However, just like for $\delta$, the way $p$ depends on the details of the acceleration mechanism and the local conditions is subject to a great deal of uncertainty, so it is best kept as a free parameter. The effect of $p$ in our model is discussed in Sect. \ref{sec:effect_p_delta}. The treatment of particle acceleration presented here builds on the premise that some unspecified acceleration mechanism will add a power-law tail with a known number density $n_\mathrm{acc}$ and index $\delta$ to the local thermal distribution of ambient electrons. 
This non-thermal component can then be isolated by defining the lower cut-off energy $E_\mathrm{c}$ as the energy where the power-law distribution intersects the thermal distribution. The original number density of thermal electrons should in principle be adjusted to account for some of them being accelerated. However, when dealing with the relatively minor energy releases associated with small flares, it is safe to assume that only a small fraction of the available electrons are accelerated. This correction can then be omitted. The thermal electron population follows the Maxwell--Boltzmann distribution \begin{equation} \label{eq:maxwell_boltzmann_distribution} n_\mathrm{T}(E) = n_\mathrm{e}\sqrt{\frac{4E}{\pi(k_\mathrm{B}T)^3}}e^{-E/k_\mathrm{B} T}, \end{equation} where $n_\mathrm{e}$ is the number density of thermal electrons, $T$ is the local temperature, and $k_\mathrm{B}$ is the Boltzmann constant. The above definition of $E_\mathrm{c}$ can then be written as \begin{equation} n_\mathrm{NT}(E_\mathrm{c}) = n_\mathrm{T}(E_\mathrm{c}). \end{equation} After inserting Eqs. \eqref{eq:non_thermal_distribution} and \eqref{eq:maxwell_boltzmann_distribution}, and substituting $n_\mathrm{acc}$ using Eq. \eqref{eq:acceleration_energy_density_vs_number_density} to remove the implicit dependence on $E_\mathrm{c}$, we find after some rearranging \begin{equation} \label{eq:lower_cutoff_energy} {E_\mathrm{c}}^{5/2}e^{-E_\mathrm{c}/k_\mathrm{B} T} = (\delta - 3/2)\left(\frac{u_\mathrm{acc}}{n_\mathrm{e}}\right)\sqrt{\frac{\pi(k_\mathrm{B}T)^3}{4}}. \end{equation} This can be solved numerically for $E_\mathrm{c}$ using, for example, the Newton--Raphson method. Only the highest-energy solution is relevant in this case. The resulting cut-off energy is roughly proportional to temperature, but is not sensitive to $u_\mathrm{acc}$ or $n_\mathrm{e}$, as shown in Fig. \ref{fig:Ec_parameter_study}. A temperature of $10^6$ K results in a cut-off energy of the order of 1 keV. 
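This solution step can be sketched in a few lines (illustrative values for $T$, $u_\mathrm{acc}/n_\mathrm{e}$, and $\delta$ are assumed; cgs units). Since the left-hand side of Eq. \eqref{eq:lower_cutoff_energy} peaks at $E_\mathrm{c} = (5/2)k_\mathrm{B}T$ and decreases monotonically beyond it, bracketing the root above the peak automatically selects the highest-energy solution:

```python
import numpy as np
from scipy.optimize import brentq

kB = 1.381e-16  # Boltzmann constant [erg/K]

def lower_cutoff_energy(T, u_acc_per_ne, delta):
    """Solve Ec^(5/2) exp(-Ec/kT) = (delta - 3/2)(u_acc/n_e) sqrt(pi (kT)^3 / 4)
    for the lower cut-off energy E_c [erg].

    The left-hand side peaks at Ec = 2.5 kT, so bracketing the root in
    [2.5 kT, 100 kT] picks out the highest-energy solution.
    """
    kT = kB * T
    rhs = (delta - 1.5) * u_acc_per_ne * np.sqrt(np.pi * kT**3 / 4)
    f = lambda Ec: Ec**2.5 * np.exp(-Ec / kT) - rhs
    return brentq(f, 2.5 * kT, 100 * kT)

# Coronal temperature and an assumed non-thermal energy per thermal electron
E_c = lower_cutoff_energy(T=1e6, u_acc_per_ne=3e-13, delta=4)
print(E_c / 1.602e-9, "keV")  # roughly 1 keV for these inputs
```

Sweeping the inputs reproduces the qualitative behaviour described above: $E_\mathrm{c}$ scales roughly linearly with $T$ and only weakly (logarithmically) with $u_\mathrm{acc}/n_\mathrm{e}$.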
\begin{figure}[!thb] \includegraphics{lower_cutoff_energy_parameter_study} \centering \caption{Temperature dependence of the lower cut-off energy $E_\mathrm{c}$, for a selection of values of the non-thermal energy per thermal electron, $u_\mathrm{acc}/n_\mathrm{e}$, representative of the conditions in a relatively quiet atmosphere. A power-law index of $\delta = 4$ was used, but any realistic value would give practically identical results. The shaded area is where the cut-off energy would be lower than the average thermal energy.} \label{fig:Ec_parameter_study} \end{figure} With the energy distribution of the non-thermal electrons in place, the next aspect to consider is their directions of motion. This can be described in terms of their distribution of pitch angles $\beta$, defined as the angle between the direction of motion $\hat{\mathbf{v}}$ and the magnetic field direction $\hat{\mathbf{B}}$: \begin{equation} \cos\beta = \hat{\mathbf{v}}\cdot\hat{\mathbf{B}}. \end{equation} The pitch angle distribution, just like the energy distribution, depends on the nature of the acceleration mechanism. Typically, direct acceleration models predict that the non-thermal electrons have most of their velocity along the magnetic field direction, while stochastic acceleration models predict more isotropic populations. When it comes to transport calculations, the simplest approach is to adopt the view of a peaked initial pitch angle distribution and assume that all the electrons accelerated at a given reconnection site will leave the site with the same initial magnitude of the pitch angle cosine $|\mu_0| = |\cos\beta_0|$. If the underlying acceleration mechanism is assumed to only affect the average speed $v_\parallel$ of the electrons parallel to the magnetic field, any deviation from $|\mu_0| = 1$ must come from the average perpendicular speed $v_\perp$ of the electrons before acceleration. 
This speed corresponds to the average thermal speed \begin{equation} v_\perp = \sqrt{\frac{8 k_\mathrm{B} T}{\pi m_\mathrm{e}}}. \end{equation} The total average speed of the accelerated electrons can be written as \begin{equation} v_\mathrm{mean} = \sqrt{{v_\perp}^2 + {v_\parallel}^2}. \end{equation} This speed can also be computed as the expected value of $v = \sqrt{2E/m_\mathrm{e}}$ for the power-law distribution, which becomes \begin{equation} v_\mathrm{mean} = \frac{2\delta - 1}{2\delta - 2}\sqrt{\frac{2 E_\mathrm{c}}{m_\mathrm{e}}}. \end{equation} The average magnitude of the pitch angle cosine can then be estimated as \begin{equation} |\mu_0| = \frac{v_\parallel}{v_\mathrm{mean}} = \sqrt{1 - \left(\frac{v_\perp}{v_\mathrm{mean}}\right)^2}. \end{equation} We note that the case $v_\perp = v_\mathrm{mean}$, and correspondingly, $\mu_0 = 0$, occurs for $E_\mathrm{c} \approx k_\mathrm{B} T$. As shown in Fig. \ref{fig:Ec_parameter_study}, $E_\mathrm{c}$ will usually exceed $k_\mathrm{B} T$ by about one order of magnitude, so this approach will tend to give $|\mu_0| \approx 1$ in practice. The direction in which the electron beam leaves the acceleration region must also be determined. This can be parallel or anti-parallel to the magnetic field direction, or both, again depending on the nature of the acceleration mechanism. Without a more detailed specification of this mechanism, the method of deciding the directions will naturally be somewhat ad hoc. However, it seems reasonable that the overall electric field direction $\hat{\mathbf{E}}$ in the acceleration region could provide some indication. If $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}$ is close to $\pm 1$, one might expect most of the electrons to escape in the $\mp\mathbf{B}$-direction (the opposite sign is due to their negative charge). 
On the other hand, if $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}$ is closer to zero, there is no immediate reason to prefer one direction over the other, and the electrons would probably partition more evenly between both directions. Based on this, a sensible strategy is to split the available non-thermal power $P_\mathrm{acc}$ between a forward propagating beam ($+\hat{\mathbf{B}}$-direction) with power $P_\mathrm{beam}^+$ and a backward propagating beam with power $P_\mathrm{beam}^-$. The power can be partitioned in the following way: \begin{equation} \label{eq:beam_power_partition} P_\mathrm{beam}^\pm = \frac{1 \mp \hat{\mathbf{E}}\cdot\hat{\mathbf{B}}}{2}P_\mathrm{acc}. \end{equation} So if, for example, $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}} = -0.2$, the forward propagating beam gets $60\%$ of the power and the backward propagating beam gets $40\%$. At any reconnection site, $\hat{\mathbf{E}}\cdot\hat{\mathbf{B}}$ is necessarily non-zero, and the smallest possible magnitude it can have depends on the choice of $K_\mathrm{min}$. \subsubsection{Particle energy deposition} \label{sec:particle_energy_deposition} A particle with charge $q$ leaving a reconnection site with velocity $\mathbf{v}$ experiences a Lorentz force $\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$ due to the local electric and magnetic field. The $\mathbf{v} \times \mathbf{B}$ term gives the particle a helical motion around the magnetic field direction, without affecting its kinetic energy. The relative magnitude of $\mathbf{v}$ and $\mathbf{B}$ decides the radius of the helical motion, which for a typical electron in a normal coronal environment is smaller than a metre. If an electric field is present, the motion of the particle can be influenced in two different ways. Firstly, the particle will be accelerated along the magnetic field direction if the electric field component in this direction is non-zero. 
However, this can only take place in magnetic diffusion regions where ideal MHD breaks down. Secondly, the centre of the helical motion will drift away from the original field line if the electric field has a component perpendicular to the magnetic field. This effect is generally negligible, because the bulk plasma velocity $\mathbf{u}$ would have to be comparable to the particle velocity to induce an electric field $\mathbf{E} \approx -\mathbf{u} \times \mathbf{B}$ with a magnitude that is comparable to the $\mathbf{v} \times \mathbf{B}$ term. When drift away from the field line is ignored, it is convenient to describe the particle's motion in terms of its kinetic energy $E$, pitch angle $\beta$, and one-dimensional position $s$ along the field line. Because the gyroradius of the particle generally is very small compared to its typical travel distance (which is of the order of megametres), the offset of the particle perpendicular to the field line can safely be disregarded. Additionally, the journey of the particle through the atmosphere is typically so brief that it can be considered instantaneous compared to the time scale of the atmosphere's response to the particle beam. For example, a beam of 1 keV electrons traverses a 10 Mm coronal loop in about 0.5 s, while a pressure change due to the heating at a footpoint would need $\sim 100$ s to propagate the same distance back along the loop (assuming a sound speed of $c_\mathrm{s} \approx 10^4\sqrt{T}\;\mathrm{cm}/\mathrm{s}$ and a temperature of $T = 10^6\;\mathrm{K}$). When the particle enters a region with a stronger magnetic field, it starts to gyrate more rapidly around the field axis due to the increased $\mathbf{v} \times \mathbf{B}$ force. This force, being perpendicular to the direction of motion, does not affect the kinetic energy, so the velocity of the particle parallel to the field axis decreases accordingly. 
If the increase in the magnetic field strength becomes sufficiently large, the movement of the particle along the field will eventually stop and then continue in the opposite direction. This magnetic mirroring effect could thus potentially trap particles in the coronal part of a magnetic loop. However, since the coronal magnetic field strength typically increases relatively slowly with depth, magnetic trapping is unlikely to drastically inhibit the particles from reaching the lower atmosphere. Therefore, we have ignored the effect of varying magnetic field strength in this initial treatment of particle propagation. As the particle travels along the field line it exchanges energy and momentum with the ambient plasma through Coulomb interactions, both with free electrons and ions, and with electrons bound in neutral atoms. Collectively, these types of collisions have the effect of reducing and randomising the velocities of the accelerated particles, until their distribution merges with the background thermal distribution. The energy loss of the particles manifests as a heating of the local plasma. A simple and widely used approach for modelling Coulomb collisions is to approximate the evolution of the energy and pitch angle of a single particle based on the mean rate of energy and pitch angle dissipation \citep[as derived by][]{Spitzer1962}. This was done by \citet{Brown1972} for non-thermal electrons in an ionised hydrogen plasma. \citet{Emslie1978} generalised Brown's treatment to allow for a hydrogen plasma with an arbitrary, but uniform, degree of ionisation, and also obtained the rate of energy deposition as a function of depth for the full population of accelerated electrons by convolving the mean energy loss of a single electron with the initial non-thermal number distribution. \citet{Hawley1994} showed how an approximation in the derivations of Emslie can allow for an ionisation degree that varies with depth without having to resort to numerical integration. 
Following their approach, the rate of energy deposition per volume, $Q$, at distance $s$ along the field line can be written as \begin{equation} \label{eq:beam_heating_per_volume} Q(s) = n_\mathrm{H}(s)\left(\frac{\pi e^4 (\delta - 2) F_\mathrm{beam}}{|\mu_0| {E_\mathrm{c}}^2}\right)\gamma(s) B\left(\kappa(s); \frac{\delta}{2}, \frac{1}{3}\right)\left(\frac{N^*(s)}{N_\mathrm{c}^*}\right)^{-\delta/2}. \end{equation} This equation assumes that the electrons all have the same initial pitch angle cosine $\mu_0$ and initial energies given by a power-law distribution as described by Eq. \eqref{eq:non_thermal_distribution}. $F_\mathrm{beam}$ is the energy flux of the beam of accelerated electrons leaving the reconnection site. The quantity $\gamma$, given by \begin{equation} \gamma(s) = x(s)\ln\Lambda + (1 - x(s))\ln\Lambda', \end{equation} is a hybrid Coulomb logarithm that merges the contribution of the free electron Coulomb logarithm $\ln\Lambda$ and the neutral hydrogen Coulomb logarithm $\ln\Lambda'$ depending on the local ionisation fraction $x(s)$. $B$ is the incomplete beta function, defined by \begin{equation} B(\kappa; a, b) = \int_0^\kappa t^{a - 1}(1 - t)^{b - 1}\;\mathrm{d}t. \end{equation} The integration limit used for $B$ is a saturating ramp function given by \begin{equation} \kappa(s) = \mathrm{min}\left(\frac{N(s)}{N_\mathrm{c}(s)}, 1\right), \end{equation} where \begin{equation} N(s) = \int_0^s n_\mathrm{H}(s')\;\mathrm{d}s' \end{equation} is the hydrogen column depth and \begin{equation} N_\mathrm{c}(s) = \frac{\mu_0 {E_\mathrm{c}}^2}{6\pi e^4 \gamma(s)} \end{equation} is the stopping column depth for an electron with energy $E_\mathrm{c}$. The ionised column depth $N^*(s)$ is defined analogously to $N(s)$ as \begin{equation} N^*(s) = \int_0^s \left(\frac{\gamma(s')}{\ln\Lambda}\right)n_\mathrm{H}(s')\;\mathrm{d}s'. \end{equation} Similarly, the ionised stopping column depth $N_\mathrm{c}^*$ corresponds to $N_\mathrm{c}$ with $\gamma = \ln\Lambda$. 
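For reference, the heating profile of Eq. \eqref{eq:beam_heating_per_volume} can be sketched numerically for the simplified case of a fully ionised column ($x = 1$, so that $\gamma = \ln\Lambda$ and $N^* = N$), with the incomplete beta function obtained from SciPy's regularised version:

```python
import numpy as np
from scipy.special import beta, betainc

def beam_heating(s, n_H, F_beam, E_c, delta, mu0, lnLambda):
    """Volumetric heating Q(s) for a fully ionised column (x = 1),
    where gamma = lnLambda and N* = N. Inputs in cgs units:
    s, n_H: position [cm] and hydrogen density [cm^-3] along the field line.
    """
    e = 4.803e-10                                 # elementary charge [esu]
    dN = 0.5 * (n_H[1:] + n_H[:-1]) * np.diff(s)  # trapezoidal column steps
    N = np.concatenate(([0.0], np.cumsum(dN)))    # hydrogen column depth
    N_c = mu0 * E_c**2 / (6 * np.pi * e**4 * lnLambda)  # stopping column depth
    kappa = np.minimum(N / N_c, 1.0)              # integration limit, saturates at 1
    # Unregularised B(kappa; a, b) from the regularised betainc
    incB = betainc(delta / 2, 1 / 3, kappa) * beta(delta / 2, 1 / 3)
    # B(kappa; ...) * (N/N_c)^(-delta/2) stays finite as N -> 0; we simply
    # set the value at the injection point (N = 0) to zero here.
    with np.errstate(divide="ignore"):
        resid = np.where(N > 0, (N / N_c) ** (-delta / 2), 0.0)
    prefac = np.pi * e**4 * (delta - 2) * F_beam / (abs(mu0) * E_c**2)
    return n_H * prefac * lnLambda * incB * resid
```

For a sufficiently deep column, integrating the returned $Q(s)$ over $s$ recovers essentially the full injected flux $F_\mathrm{beam}$, as expected from energy conservation.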
The rate of energy deposition per distance, $\mathrm{d}\mathcal{E}/\mathrm{d}s$, can be found by integrating Eq. \eqref{eq:beam_heating_per_volume} over the cross-sectional area $A$ of the beam. If $Q(s)$ is assumed uniform across the beam cross-section, this gives $\mathrm{d}\mathcal{E}/\mathrm{d}s = AQ(s)$. Furthermore, if $A$ is assumed constant along the beam trajectory, it can be written as $A = P_\mathrm{beam}/F_\mathrm{beam}$, giving \begin{equation} \label{eq:beam_heating_per_distance} \frac{\mathrm{d}\mathcal{E}}{\mathrm{d}s} = \left(\frac{P_\mathrm{beam}}{F_\mathrm{beam}}\right)Q(s). \end{equation} Fig. \ref{fig:single_beam_parameter_study} shows examples of the evolution of $\mathrm{d}\mathcal{E}/\mathrm{d}s$ with depth, computed from Eqs. \eqref{eq:beam_heating_per_volume} and \eqref{eq:beam_heating_per_distance}, for an electron beam injected into the FAL-C model atmosphere \citep{Fontenla1993}. \begin{figure*}[!thb] \includegraphics{single_beam_parameter_study} \centering \caption{Heating from an electron beam injected into the transition region of an average quiet sun atmosphere. The rate of energy deposition per distance is plotted against the height above the photosphere, for different values of the lower cut-off energy $E_\mathrm{c}$ (panel (a)), initial pitch angle $\beta_0$ (panel (b)) and power-law index $\delta$ (panel (c)). In all cases, the beam originates 2.3 Mm above the photosphere with a power of $P_\mathrm{beam} = 10^{18}\;\mathrm{erg}/\mathrm{s}$. Additionally we have used $E_\mathrm{c} = 2\;\mathrm{keV}$ for panels (b) and (c), $\beta_0 = 0^\circ$ for panels (a) and (c) and $\delta = 4$ for panels (a) and (b). The dashed curve is the temperature profile. 
The atmosphere corresponds to model C of \citet{Fontenla1993}, extrapolated to coronal temperatures.} \label{fig:single_beam_parameter_study} \end{figure*} The Coulomb logarithm $\ln\Lambda$ emerges in the calculation of the mean rate of velocity change from Coulomb collisions between free particles, $\langle\mathrm{d}v/\mathrm{d}t\rangle$, which involves an integral of the differential collision cross-section over all impact parameters $b$ \citep[e.g.][]{Rosenbluth1957}. The long-range nature of the Coulomb force causes the integral to diverge in the limit of large $b$, but this can be resolved by considering the screening of the force at long distances due to the response of the nearby charge carriers to each particle's electrostatic field. This screening imposes a maximum value $b_\mathrm{max}$ on the impact parameter, enabling the integral for $\langle\mathrm{d}v/\mathrm{d}t\rangle$ to be solved. The solution is $\langle\mathrm{d}v/\mathrm{d}t\rangle \propto \ln\Lambda$, where $\Lambda = b_\mathrm{max}/b_\mathrm{min}$ and $b_\mathrm{min}$ is the minimum impact parameter. For a particle with charge $ze$ and speed $v$ interacting with a stationary particle with charge $Ze$, energy considerations give $b_\mathrm{min} = zZe^2/m v^2$, where $m$ is the reduced mass of the two particles. The Debye screening length $\lambda_\mathrm{D}$ is often used for $b_\mathrm{max}$. In the context of an energetic particle beam, a more appropriate choice might be the particle mean free path $\eta = v/\nu$, where $\nu = \sqrt{4\pi e^2 n_\mathrm{e}/m_\mathrm{e}}$ is the plasma frequency, or the gyroradius $r_\mathrm{g}$, depending on which is smaller \citep{Emslie1978}. Although the Coulomb logarithm in principle varies with particle energy, local conditions, and the masses of the colliding particles, the logarithmic scaling should keep these variations relatively small. 
Equation \eqref{eq:beam_heating_per_volume} consequently assumes $\ln\Lambda$ to be constant, so the value of $\ln\Lambda$ should simply be computed in each acceleration region and used throughout the transport calculations for the associated electron beam. Because the electrons will not experience very strong magnetic fields, one can expect that $\eta < r_\mathrm{g}$, and hence use $b_\mathrm{max} = \eta$. For collisions with ambient free electrons, we have $z = Z = 1$ and $m = m_\mathrm{e}/2$, so we get \begin{equation} \label{eq:electron_coulomb_logarithm} \ln\Lambda = \ln\sqrt{\frac{{E_\mathrm{mean}}^3}{2\pi e^6 n_\mathrm{e}}}, \end{equation} where we have used $v = \sqrt{2 E_\mathrm{mean}/m_\mathrm{e}}$, which is the speed corresponding to the mean energy \begin{equation} E_\mathrm{mean} = \left(\frac{2\delta - 1}{2\delta - 3}\right)E_\mathrm{c} \end{equation} of the electrons in the initial distribution. Collisions with ambient protons only account for a tiny fraction of the electron velocity change $\langle\mathrm{d}v/\mathrm{d}t\rangle$ due to the high mass of protons compared to electrons, and are thus ignored in the derivations leading to Eq. \eqref{eq:beam_heating_per_volume}. For collisions with neutral hydrogen, the resulting energy loss rate can be expressed analogously to that of collisions with free electrons \citep[see e.g.][]{Mott1949a, Emslie1978}, with an effective Coulomb logarithm of \begin{equation} \label{eq:neutral_hydrogen_coulomb_logarithm} \ln\Lambda' = \ln\left(\frac{2 E_\mathrm{mean}}{1.105 \chi}\right), \end{equation} where $\chi$ is the ionisation potential of hydrogen. For simplicity, the less important contributions from collisions with helium and heavier elements are not included here. The effective Coulomb logarithm for collisions with neutral helium is similar to Eq. \eqref{eq:neutral_hydrogen_coulomb_logarithm}, and has a comparable magnitude \citep{Evans1955a}. 
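As a numerical check, both Coulomb logarithms can be evaluated directly from Eqs. \eqref{eq:electron_coulomb_logarithm} and \eqref{eq:neutral_hydrogen_coulomb_logarithm}; the acceleration-region values below are illustrative assumptions, in cgs units:

```python
import numpy as np

# Illustrative acceleration-region values (assumed for this example; cgs)
e = 4.803e-10    # elementary charge [esu]
chi = 2.179e-11  # hydrogen ionisation potential, 13.6 eV [erg]
n_e = 1e9        # thermal electron number density [cm^-3]
E_c = 1.602e-9   # lower cut-off energy, 1 keV [erg]
delta = 4.0      # power-law index

# Mean energy of the initial power-law distribution
E_mean = (2 * delta - 1) / (2 * delta - 3) * E_c

# Free-electron and neutral-hydrogen Coulomb logarithms
lnLambda = np.log(np.sqrt(E_mean**3 / (2 * np.pi * e**6 * n_e)))
lnLambda_prime = np.log(2 * E_mean / (1.105 * chi))
```

For these values $\ln\Lambda \approx 23$ and $\ln\Lambda' \approx 5$, so the energy loss per unit column depth is several times larger in fully ionised gas than in neutral gas.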
Considering the roughly 20\% abundance of helium, the inclusion of helium collisions would lead to at most a 20\% increase in the rate of energy deposition in the neutral regions of the atmosphere, and less in the partially ionised regions. The treatment of Coulomb collisions outlined here disregards randomisation of energy and direction, which manifests as a diffusion of the energy and pitch angle distribution with propagation depth. As long as the speeds of the ambient particles are negligible compared to the speeds of the accelerated particles (the so-called cold-target approximation), energy and pitch angle diffusion are unimportant. This is because the target particles can be considered effectively stationary, leading to a deterministic evolution of each accelerated particle. \citet{Jeffrey2019} found that the cold-target approximation tends to underestimate the amount of energy deposited in the lower atmosphere compared to the results of a full warm-target model because the electrons that thermalise in the corona eventually will diffuse down to the lower atmosphere and deposit their energy there. However, this conclusion was based on work not including standard thermal conduction, and the inclusion of thermal conduction would mitigate some of the discrepancies between the cold- and warm-target models. Until the difference between these models has been investigated further, we do not implement the more computationally expensive warm-target treatment in our model. Moreover, when using the simple acceleration model presented in Sect. \ref{sec:initial_particle_distributions}, the lowest energies obtained for the accelerated electrons (which tend to come from sites with coronal temperatures) are typically of the order of 1 keV. This can be seen from Fig. 
\ref{fig:Ec_parameter_study}, which also shows that the target plasma would need to have a temperature of at least $10^7$ K in order for the average thermal energy to be comparable to the typical accelerated electron energy. Although the impact region over time could be heated to this temperature \citep[e.g.][]{Mariska1989, Allred2005}, the relatively low acceleration energies involved in minor flare events make this unlikely. A beam of energetic electrons departing from an acceleration region takes away negative charge and distributes it along its trajectory, leading to charge separation. This imbalance produces an electrostatic field that drives a counter-flowing return current of ambient electrons \citep{Knight1977, Emslie1980}. A steady state where the return current continuously compensates for the charge separation is reached on a timescale comparable to the electron--ion collision time \citep{Larosa1989}. Because the current associated with the beam is then cancelled by the return current, this mechanism prevents any induction of a significant electromagnetic field by the beam. As long as the beam flux is weak, the energy loss incurred by the beam electrons from moving through the opposing electrostatic potential is negligible compared to their energy loss from Coulomb collisions with the ambient plasma. We confirmed this for our simulation by evaluating the energy loss contributions due to collisions and return currents (given respectively by Eqs. (4) and (6) in \citet{Emslie1980}) in the acceleration regions, where the return current energy loss is at its highest. The ratio of return current to collisional energy loss was found to be at most $10^{-4}$. The accelerated electrons are also subject to a small radiative energy loss. They emit synchrotron radiation due to their gyrating motion around the magnetic field lines \citep{Petrosian1985} as well as bremsstrahlung due to collisions \citep{Brown1971, Haug2004}. 
A comparison between the energy loss terms from synchrotron and bremsstrahlung emission with the collisional loss term shows that both forms of radiative losses are completely negligible compared to collisional losses under ordinary conditions, and can safely be ignored. There are a variety of considerations in addition to those covered above that a comprehensive particle transport model would need to address. This includes collisional ionisation of neutral chromospheric hydrogen \citep{Ricchiazzi1983, Abbett1999} and helium \citep{Allred2015}, the potential occurrence of a two-stream instability resulting in the generation of plasma oscillations and turbulence \citep{Emslie1984} as well as a fully relativistic treatment of the transport process \citep{McTiernan1990}. However, these effects tend to be more important for larger flares involving higher particle numbers and energies. For application to weaker acceleration events, the transport model presented here, in which only energy dissipation through Coulomb collisions is included, should be a reasonable first step. \subsection{Model tuning} \label{sec:tuning} \subsubsection{Selection of reconnection sites} \label{sec:selecting_reconnection_sites} The method of identifying reconnection sites that is presented in Sect. \ref{sec:reconnection_sites} relies on an appropriate choice of the threshold $K_\mathrm{min}$. It should be set to a value small enough to include all the potentially important reconnection sites. However, it can not simply be set to zero, because limited spatial resolution and numerical diffusion in the MHD simulation prevent $K$ from ever becoming exactly zero in practice. Every point would then be classified as a reconnection site, which would be both unrealistic and prohibitively computationally expensive. From Eqs. \eqref{eq:electric_field_projection} and \eqref{eq:krec} it can be seen that $K$ scales linearly with the strength of the magnetic field, $B$. 
The magnetic energy density $u_\mathrm{B}$ is proportional to $B^2$, meaning that $K$ is proportional to $\sqrt{u_\mathrm{B}}$. As $K_\mathrm{min}$ is lowered, the additional reconnection sites that are included thus produce less energetic particle distributions on average. On the other hand, the number of included sites also increases rapidly with decreasing $K_\mathrm{min}$. The choice of $K_\mathrm{min}$ is thus a compromise between the inclusion of more reconnection energy and the computational cost of simulating more electron beams. Fortunately, as shown in Fig. \ref{fig:global_heating_krec_lim}, the growth in the number of sites is balanced by the decrease in energy, and the total energy contained in all included beams begins to stagnate as $K_\mathrm{min}$ becomes sufficiently small. As a reasonable trade-off, we used $K_\mathrm{min} = 10^{-4}$ (in internal Bifrost units) for our results. \begin{figure}[!thb] \includegraphics{global_heating_krec_lim} \centering \caption{Variation in acceleration power (Eq. \eqref{eq:acceleration_power}) with the reconnection factor threshold $K_\mathrm{min}$, for a simulation with $\delta = 4$ and $p = 0.2$. The solid curve is the total power for all included reconnection sites, and the dashed curve is the average power per site (multiplied by $10^5$ for composition purposes). The total number of included sites is shown by the dotted curve. Short-range beams have been filtered out in the manner discussed in Sect. \ref{sec:short_range_exclusion}.} \label{fig:global_heating_krec_lim} \end{figure} \subsubsection{Exclusion of short-range beams} \label{sec:short_range_exclusion} Not all of the identified acceleration regions produce electron beams that are worth considering. Most importantly, beams that deposit all their energy in the immediate vicinity of the acceleration region add nothing to the model.
This is because the slight displacement of heat would quickly be evened out by other energy transport mechanisms such as thermal conduction, plasma advection, or radiative transfer. The outcome would thus be nearly the same as if all the reconnection energy had been converted directly into thermal energy at the reconnection site in the first place. To filter out the short-range beams, we first had to establish a criterion for when a beam is considered depleted. It can be seen from Eq. \eqref{eq:beam_heating_per_volume} that $Q(s)$ approaches zero only asymptotically with distance. Physically, this can be explained by the presence of arbitrarily energetic electrons in the tail of the power-law distribution. Because the collisional cross-section decreases with electron energy, extremely energetic electrons will practically never thermalise, and hence there will always be some non-thermal energy remaining in the beam. However, once $Q$ becomes sufficiently small, the rest of the beam energy can safely be disregarded, provided that the reason for the small heating rate is the depletion of energy and not that the beam happens to pass through a low-density region. This second criterion can be ensured by considering the part of Eq. \eqref{eq:beam_heating_per_volume} representing energy depletion, which is the monotonically decreasing factor \begin{equation} \label{eq:residual_factor} r(s) = \left(\frac{N_*(s)}{N_\mathrm{c}^*}\right)^{-\delta/2}. \end{equation} This is a convenient heuristic for the amount of energy remaining in the beam, as shown in Fig. \ref{fig:deposited_percentage_vs_residual_factor}, where the percentage of the initial beam power that has been deposited can be seen to approach $100\%$ as $r$ becomes smaller. \begin{figure}[!thb] \includegraphics{deposited_percentage_vs_residual_factor} \centering \caption{Energy deposition as a function of $r(s)$ (Eq. \eqref{eq:residual_factor}).
The deposited power per distance, $\mathrm{d}\mathcal{E}/\mathrm{d}s$, is plotted along the trajectories of a representative subset of the electron beams in a simulation with $\delta = 4$ and $p = 0.2$, with colours indicating the height above the photosphere. For each beam, the corresponding proportion of the initial beam power that has been deposited at each $r(s)$, given by ${P_\mathrm{beam}}^{-1}\int_0^s \mathrm{d}\mathcal{E}/\mathrm{d}s(s')\;\mathrm{d}s'$, is indicated by a red curve. These red curves are all overlapping.} \label{fig:deposited_percentage_vs_residual_factor} \end{figure} Using $(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min}$ and $r_\mathrm{min}$ to denote lower thresholds for $\mathrm{d}\mathcal{E}/\mathrm{d}s$ and $r$, respectively, a depletion criterion can thus be defined as \begin{equation} \label{eq:depletion_criterion} \frac{\mathrm{d}\mathcal{E}}{\mathrm{d}s}(s) < \left(\frac{\mathrm{d}\mathcal{E}}{\mathrm{d}s}\right)_\mathrm{min}\quad \mathrm{and} \quad r(s) < r_\mathrm{min}. \end{equation} It is clear from the figure that the vast majority of the initial beam power is depleted once $r$ is below $\sim 10^{-5}$. Therefore, this paper uses $r_\mathrm{min} = 10^{-5}$. Moreover, we set $(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min} = 10^5\;\mathrm{erg}/\mathrm{s}/\mathrm{cm}$. As the figure shows, this enables the beams to reach deep into the lower atmosphere before they are considered depleted. Based on the criteria in Eq. \eqref{eq:depletion_criterion}, an estimate $\tilde{s}_\mathrm{dep}$ for the depletion distance can be computed under the assumption that the plasma properties are approximately uniform between $s = 0$ and $s = s_\mathrm{dep}$, so that $N_*(s_\mathrm{dep}) \approx (n_\mathrm{H}(s=0)\gamma(s=0)/\ln\Lambda) s_\mathrm{dep}$. This assumption holds as long as $s_\mathrm{dep}$ is reasonably short.
Equations \eqref{eq:beam_heating_per_volume}, \eqref{eq:beam_heating_per_distance}, \eqref{eq:residual_factor}, and \eqref{eq:depletion_criterion} then yield the following estimate for the depletion distance: \begin{equation} \label{eq:estimated_depletion_distance} \tilde{s}_\mathrm{dep} = \left(\frac{N_\mathrm{c}^*\ln\Lambda}{n_\mathrm{H}(0)\gamma(0)}\right)\mathrm{max}\left(c_Q, c_r \right)^{2/\delta}, \end{equation} where \begin{equation} c_Q = \frac{n_\mathrm{H}(0) \gamma(0)}{(\mathrm{d}\mathcal{E}/\mathrm{d}s)_\mathrm{min}}\left(\frac{\pi e^4 (\delta - 2) P_\mathrm{beam}}{|\mu_0| {E_\mathrm{c}}^2}\right) B\left(1; \frac{\delta}{2}, \frac{1}{3}\right) \end{equation} and \begin{equation} c_r = \frac{1}{r_\mathrm{min}}. \end{equation} The derivation also assumes that $\kappa(s_\mathrm{dep}) = 1$, which is always satisfied when $r < 1$. By evaluating Eq. \eqref{eq:estimated_depletion_distance} at each reconnection site, it can be decided whether the resulting electron beam is worth considering further. The beam can be excluded if $\tilde{s}_\mathrm{dep}$ is shorter than an assigned minimum distance $s_\mathrm{min}$. \begin{figure}[!thb] \includegraphics{depletion_distances} \centering \caption{Estimated depletion distances $\tilde{s}_\mathrm{dep}$ plotted against actual depletion distances $s_\mathrm{dep}$ for a representative subset of the electron beams in the same simulation as in Fig. \ref{fig:deposited_percentage_vs_residual_factor}. Points lying on the dashed line correspond to correct estimates. The colour indicates the mass density $\rho$ in the acceleration region.} \label{fig:depletion_distances} \end{figure} Figure \ref{fig:depletion_distances} confirms that Eq. \eqref{eq:estimated_depletion_distance} is accurate for small values of $s_\mathrm{dep}$. In the cases when $s_\mathrm{dep}$ is over-estimated, it could lead to the inclusion of a beam that turns out to propagate shorter than expected, but this has no impact on the accuracy of the result. 
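Concretely, the filter built from Eqs. \eqref{eq:residual_factor}, \eqref{eq:depletion_criterion}, and \eqref{eq:estimated_depletion_distance} can be sketched in a few lines of Python. This is an illustrative reimplementation, not the actual simulation code; the function and argument names are ours, and the coefficient $c_Q$ is assumed to have been precomputed from the beam parameters as given in the text.

```python
# Filter thresholds quoted in the text (cgs units where applicable).
R_MIN = 1e-5     # residual-factor threshold, dimensionless
S_MIN = 0.5e8    # minimum useful propagation distance: 0.5 Mm in cm


def residual_factor(N_star, N_c_star, delta):
    """r(s) = (N_*(s)/N_c^*)^(-delta/2): heuristic fraction of beam
    energy remaining; decreases monotonically with column depth N_*."""
    return (N_star / N_c_star) ** (-0.5 * delta)


def estimated_depletion_distance(N_c_star, ln_Lambda, n_H0, gamma0, c_Q,
                                 delta, r_min=R_MIN):
    """Analytic estimate of the depletion distance, valid when the plasma
    is roughly uniform along the path, so that
    N_*(s) ~ (n_H(0) * gamma(0) / ln Lambda) * s.
    c_Q is the energy-deposition coefficient from the text; c_r = 1/r_min."""
    c_r = 1.0 / r_min
    column_scale = N_c_star * ln_Lambda / (n_H0 * gamma0)
    return column_scale * max(c_Q, c_r) ** (2.0 / delta)


def is_short_range(s_dep_est, s_min=S_MIN):
    """Beams estimated to deplete within s_min add nothing and are excluded."""
    return s_dep_est < s_min
```

Because the column-depth prefactor scales as $1/n_\mathrm{H}(0)$, the estimate directly reproduces the inverse relationship between depletion distance and acceleration-region density seen in Fig. \ref{fig:depletion_distances}.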
More problematically, an under-estimation of $s_\mathrm{dep}$ could lead to the rejection of a beam that indeed would contribute to the long-range energy transport. However, cases like these appear to be relatively uncommon. With Fig. \ref{fig:depletion_distances} as a guideline, we chose a minimum distance of $s_\mathrm{min} = 0.5\;\mathrm{Mm}$ for our results. We note that the figure shows a clear inverse relationship between depletion distance and density, and that practically all beams accelerated at densities higher than about $10^{-12}\;\mathrm{g}/\mathrm{cm}^3$ will be rejected. Consequently, all non-thermal energy that is transported a significant distance in this model comes from the corona and upper transition region. \subsubsection{Exclusion of low-energy beams} \label{sec:weak_exclusion} As discussed in Sect. \ref{sec:selecting_reconnection_sites}, the reconnection factor $K$ correlates with the available magnetic energy. However, because $K$ also depends on the configuration of the electromagnetic field, there will still be reconnection sites with $K > K_\mathrm{min}$ that have very low acceleration energies. As a way of reducing computational cost, these sites can be excluded with little consequence for the accuracy of the model by imposing a suitable lower limit $e_\mathrm{min}$ on the acceleration power density $e_\mathrm{acc}$ in Eq. \eqref{eq:acceleration_power_density_qjoule}. The total power in all included acceleration regions saturates as $e_\mathrm{min}$ is reduced, as shown in Fig. \ref{fig:global_heating_min_beam_en}. \begin{figure}[!thb] \includegraphics{global_heating_min_beam_en} \centering \caption{Variation in global acceleration power with the lower limit $e_\mathrm{min}$ on the local acceleration power density. Like in Fig. \ref{fig:global_heating_krec_lim}, the simulated beams have $\delta = 4$ and $p = 0.2$, and the method discussed in Sect. \ref{sec:short_range_exclusion} was used to filter out short-range beams. 
The curves have the same meaning as in Fig. \ref{fig:global_heating_krec_lim}.} \label{fig:global_heating_min_beam_en} \end{figure} Much like the situation in Fig. \ref{fig:global_heating_krec_lim}, this is because the increase in the number of included sites is counterbalanced by the decrease in the average power at each site. For very small values of $e_\mathrm{min}$, no additional beams are excluded, so the total power reaches a constant value. For this paper, we used $e_\mathrm{min} = 10^{-2}\;\mathrm{erg}/\mathrm{s}/\mathrm{cm}^3$, which led to a drastic reduction in the number of acceleration regions with a relatively minor loss in included power. \subsection{Effect of $p$ and $\delta$} \label{sec:effect_p_delta} Variation in the acceleration power fraction $p$ in Eq. \eqref{eq:acceleration_power} effectively leads to a proportional scaling of the energy deposition $Q(s)$ at every depth.\footnote{In principle, the relation is not directly proportional since $p$ also affects the cut-off energy $E_\mathrm{c}$ through $u_\mathrm{acc}$ in Eq. \eqref{eq:lower_cutoff_energy}. However, as Fig. \ref{fig:Ec_parameter_study} shows, this dependence is so weak as to be negligible.} Therefore, $p$ does not affect the spatial distribution of deposited beam energy, and the exact choice of its value has a limited qualitative bearing on the energy transport. Of course, if $p$ were extremely small, any effect of non-thermal electrons would be completely negligible regardless of how the electrons distributed their energy. Yet, based on the current understanding of the acceleration mechanisms taking place during reconnection, this seems unlikely. A value of $p = 0.2$ was therefore chosen for this paper. In contrast to $p$, the choice of $\delta$ has a major influence on the resulting spatial distribution of deposited beam energy. Panel (c) in Fig.
\ref{fig:single_beam_parameter_study} shows that a larger value of $\delta$ leads to a significantly faster rate of energy deposition with distance, and thus to a shorter penetration depth for the beam. This can be understood mathematically from the $-\delta/2$ power in Eq. \eqref{eq:beam_heating_per_volume}, and physically from the lower fraction of electrons in the high-energy tail of the non-thermal distribution. Because $\delta$ has the unfortunate feature of being both important and uncertain, we present results for a range of $\delta$-values where appropriate. Otherwise, we used a value of $\delta = 4$, which aids the analysis of the energy transport by giving the beams some penetrative power, while still being a realistic value lying well within the observed range. \section{Results} \label{sec:results} \subsection{Global energy transport} \label{sec:global_transport} Particle acceleration predominantly occurs in localised regions that are aligned with the major magnetic field structures. This can be seen in Figs. \ref{fig:xz_power_change_beams} and \ref{fig:horizontal_power_change_beams}, which show the net electron beam heating power accumulated respectively horizontally and vertically over the simulation domain. \begin{figure*}[!thb] \includegraphics{xz_power_change_beams} \centering \caption{Change in heating power in each grid cell due to the inclusion of acceleration and transport of non-thermal electrons. The power changes are accumulated along the $y$-axis of the simulation snapshot. Blue regions indicate a net reduction of thermal energy compared to the case without non-thermal electrons, which is due to the fraction $p$ of the local reconnection energy being injected into accelerated electrons instead of heating the ambient plasma. Orange regions show where the non-thermal electron energy eventually is deposited into the plasma as heat. 
The simulated beams have $\delta = 4$ and $p = 0.2$.} \label{fig:xz_power_change_beams} \end{figure*} \begin{figure*}[!thb] \includegraphics{horizontal_power_change_beams} \centering \caption{Same as Fig. \ref{fig:xz_power_change_beams}, but with the power changes accumulated over the full height of the simulation snapshot instead of the $y$-axis.} \label{fig:horizontal_power_change_beams} \end{figure*} Due to the exclusion of short-range electron beams discussed in Sect. \ref{sec:short_range_exclusion}, all significant acceleration takes place within the tenuous plasma above the transition region. The acceleration regions (apparent as blue areas in the figures) have lengths ranging from 1 to 15 Mm, and cross-sections typically smaller than 1 Mm. Longer acceleration regions tend to occur higher in the corona, where the magnetic field is more homogeneous. Interestingly, despite the decrease of magnetic field strength with height (panel (c) in Fig. \ref{fig:atmospheric_height_profiles}), these high regions typically exhibit acceleration that is as energetic as in regions at lower heights. The most intense acceleration can be found in a thin sheet centred on $x = 11$ Mm and $y = 10.67$ Mm (the $y$-coordinate is the same as for the plane of Fig. \ref{fig:krec_slice}). This is the current sheet associated with the Ellerman bomb and UV burst that are analysed by \citet{Hansteen2019}. Energy deposition from the non-thermal electrons (shown in orange in Figs. \ref{fig:xz_power_change_beams} and \ref{fig:horizontal_power_change_beams}) takes place throughout the corona, with higher concentrations near the ends of the acceleration regions and along the dominant magnetic structures. The strongest non-thermal heating typically occurs in the transition region near low-lying acceleration regions. At these locations, the electrons are often able to reach significant chromospheric depths.
Numerous electron beams entering the lower atmosphere from different directions aggregate horizontally due to the convergence of the magnetic field with depth. This produces collections of thin, semi-vertical strands of concentrated non-thermal heating in the chromosphere, which are anchored in the photosphere at locations with a strong vertical magnetic field. \subsection{Selected sets of electron beams} \label{sec:selected_beams} To analyse the beam heating in the lower atmosphere more closely, we consider the three subsets of electron beams shown in Fig. \ref{fig:xz_power_change_selected_beams}. \begin{figure}[!thb] \centering \includegraphics{xz_power_change_selected_beams} \caption{Selected sets of electron beams, plotted in the same manner as Fig. \ref{fig:xz_power_change_beams}. Set 1 is a coherent bundle of beams that originates in a long acceleration region at the top of a coronal loop and terminates at one of the footpoints. Set 2 consists of several electron beam bundles coming from various locations in the simulation domain, all converging at the same chromospheric site. Set 3 encompasses electrons that are accelerated in the strong central current sheet and ejected along one of the magnetic 'legs' that connect the current sheet with the lower chromospheric plasma.} \label{fig:xz_power_change_selected_beams} \end{figure} They represent various ways in which electron beams can join together to produce significant localised heating in the lower atmosphere. This includes a single long bundle originating high up in the corona (set 1), the convergence of multiple thin bundles coming from separate acceleration regions (set 2), and a short bundle associated with an acceleration region that lies just above the transition region (set 3). In Fig. \ref{fig:heating_comparison}, the horizontal average of $Q_\mathrm{beam}$ in the core of the cone that penetrates the lower atmosphere is plotted with height for each beam set. 
In order to demonstrate the effect of the power-law index $\delta$, each beam heating profile is plotted for $\delta$ ranging from 3 to 6. The shapes of the local transition regions are apparent from the included temperature profiles. \begin{figure*}[!thb] \includegraphics{heating_comparison} \centering \caption{Horizontal averages of heating due to electron beams and thermal conduction with height along the three selected sets of electron trajectories. The averages are taken over the grid cells for which the aggregated $Q_\mathrm{beam}$ exceeds its 75th percentile within each horizontal layer. The dashed line shows the corresponding average temperature profile. We note that the height axes have the same extent, but separate bounds.} \label{fig:heating_comparison} \end{figure*} Transition region beam heating can be seen to be relatively robust to variations in $\delta$. At the same time, this is where the difference between the origins of the electron beams for the three beam sets primarily manifests. The reason for this is that the number of incoming low- and intermediate-energy electrons, which make up the bulk of the transition region heating, does not change considerably with $\delta$, but is highly sensitive to the amount of coronal plasma that the beam has propagated through. The electrons accelerated at the top of the coronal loop (set 1) produce a pronounced peak in the beam heating, centred on the bottom of the transition region. This is because most electrons with too little energy to make it through the transition region would already have stopped on the way through the coronal loop. For the converging electron beam bundles coming from separate locations (set 2), the corresponding peak is less distinct. Some of the beams in this set stem from just above the transition region, and their low-energy electrons provide a significant amount of heat to the upper transition region, making the peak less pronounced.
This situation is most evident for the beams coming from the strong current sheet that resides in the lower corona near the centre of the simulation domain (set 3). Here, the full spectrum of electron energies is injected directly into the transition region. As a result, the heating culminates near the top of the transition region and decreases monotonically with depth in the lower atmosphere. Below the transition region, the decrease with depth of the average beam heating rate is highly dependent on $\delta$. At a given chromospheric depth, the decrease in $Q_\mathrm{beam}$ with increasing $\delta$ appears to be approximately exponential. However, the average slopes of the beam heating profiles for a given value of $\delta$ are similar for all three beam sets. They are slightly steeper for sets 1 and 3 than for set 2, but this is because the mass densities at these locations are somewhat higher. In contrast to the situation in the transition region, only electrons in the high-energy tail of the incoming distribution can significantly penetrate the chromosphere. This explains the sensitivity to $\delta$, which controls the relative portion of high-energy electrons in the distribution. The electrons in the high-energy tail are not significantly influenced by the coronal plasma, so the distance travelled by the electrons through the corona has little bearing on the distribution of electrons that enter the chromosphere. As a result, any difference between the shapes of the chromospheric heating profiles, barring local variations in mass density, must be caused by a difference between the shapes of the initial electron distributions in the acceleration regions. In the acceleration model used here, this requires significant variations between the temperatures of the involved acceleration regions, which would lead to different values of the lower cut-off energy $E_\mathrm{c}$ (Fig. \ref{fig:Ec_parameter_study}). 
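The sensitivity of chromospheric penetration to $\delta$ can be illustrated with a toy calculation in Python. It assumes nothing more than a pure power-law flux spectrum $F(E)\propto E^{-\delta}$ above $E_\mathrm{c}$; the threshold $E = 10\,E_\mathrm{c}$ is an arbitrary stand-in for "energetic enough to reach the chromosphere", chosen only for illustration.

```python
def tail_fraction(E, E_c, delta):
    """Fraction of electrons with energy above E in a power-law spectrum
    F(E') ~ E'^(-delta) truncated below E_c (requires delta > 1)."""
    if E <= E_c:
        return 1.0
    return (E / E_c) ** (1.0 - delta)


# The high-energy tail able to penetrate the chromosphere thins out
# rapidly with delta: at E = 10 E_c, each unit increase in delta
# reduces the available fraction by another factor of 10.
for delta in (3, 4, 5, 6):
    print(delta, tail_fraction(10.0, 1.0, delta))
```

This factor-of-ten thinning per unit of $\delta$ at fixed $E/E_\mathrm{c}$ qualitatively matches the approximately exponential decrease of chromospheric $Q_\mathrm{beam}$ with $\delta$ described above.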
The selected beam sets all have similar temperatures in their acceleration regions, so the shape of the electron distribution that penetrates the chromosphere is comparable in all three cases. Energy transport by accelerated electrons plays a similar role as thermal conduction, in that it transports energy from the corona to the transition region along the magnetic field. However, the relative importance of these mechanisms differs greatly with depth in the lower atmosphere. This is evident from the red curve in Fig. \ref{fig:heating_comparison}, which shows the conductive heating along the three sets of beam trajectories. In all cases, conductive heating is 10--100 times stronger than electron beam heating throughout most of the transition region. The strong conductive heating stems from the abrupt drop from coronal to chromospheric temperatures. But as the temperature decreases towards the chromosphere, so does the thermal conductivity of the plasma, causing the conductive heating to nearly vanish at the bottom of the transition region. On the other hand, the heat deposited by non-thermal electrons is close to its peak value at this location, owing to the sudden rise in mass density with depth. For all values of $\delta$, beam heating exceeds conductive heating by many orders of magnitude throughout the chromosphere. \subsection{Energetics} The total power of all accelerated electrons in the simulation snapshot is roughly $10^{24}\;\mathrm{erg}/\mathrm{s}$. Beam sets 1 and 2 each produce approximately $3\cdot 10^{21}\;\mathrm{erg}/\mathrm{s}$ of non-thermal electron power. This value is representative of a typical collection of electron beams forming a coherent lower atmospheric heating site in the simulation. Beam set 3, which is associated with a particularly energetic event, exceeds this power by two orders of magnitude. 
For $\delta = 4$, roughly $1\%$ of the total beam power in the atmosphere is deposited at densities higher than $10^{-11}\;\mathrm{g}/\mathrm{cm}^3$, in what might be considered chromospheric plasma. Varying the value of $\delta$ was found to give a roughly power-law variation in the percentage of non-thermal power deposited in the chromosphere, with significantly smaller percentages for higher values of $\delta$. However, the power-law exponent describing this relationship is highly dependent on the individual beam trajectory. For beam sets 1, 2, and 3, the percentages of chromospheric power are respectively $10\%$, $1\%$, and $6\%$ for $\delta = 4$. The percentage is down-scaled by 1--3 orders of magnitude when going from $\delta = 3$ to $\delta = 6$. \section{Discussion and conclusions} \label{sec:discussion} The key factor in determining the amount and energy of accelerated electrons in any part of the corona is the magnetic topology. Although a stronger magnetic field provides a larger source of energy, it is the magnetic topology that determines the potential for this energy to be released by reconnection. This is evident in the distribution of acceleration regions in our simulation. The upper corona, where the expanding magnetic bubbles collide with the weak overlying ambient field, contains acceleration regions that are as energetic as those in magnetically stronger, but topologically simpler parts of the lower corona. Consequently, the overall complexity of the magnetic field configuration is likely to be the main indicator of the significance of non-thermal energy transport. In our simulation, the non-thermal power deposited at notable beam heating sites in the lower atmosphere ranges from $10^{18}$ to $10^{22}\;\mathrm{erg}/\mathrm{s}$, depending on the particular site and value of $\delta$.
A typical small-scale beam heating event in the atmospheric conditions modelled here may then be estimated to release $10^{20}$--$10^{24}$ erg of non-thermal energy in the lower atmosphere, assuming the events to last $\sim 100$ s. Other heating mechanisms, including local Joule heating, thermal conduction, and magnetoacoustic shocks, will make a significant additional contribution to the total energy release in some of these events \citep{Archontis2014, Hansteen2017, Hansteen2019}, one example being the heating near the strong central current sheet (beam set 3). Most of the beam heating events are nevertheless relatively weak, even for nanoflares. But they are highly abundant, and a $10 \times 10\;\mathrm{Mm}$ horizontal area of the chromosphere is likely to host a significant number of small beam heating events at any given time. Even though the particle beams in this simulation are weak, their heating effect on the chromosphere is many orders of magnitude stronger than that of thermal conduction. This demonstrates that heating by energetic particles and thermal conduction in the lower atmosphere are qualitatively different, even under relatively quiet solar conditions. Because efficient thermal conduction requires a hot plasma, conductive transport always ceases at the bottom of the transition region. Incoming energetic particles, on the other hand, are not directly affected by the transition region temperature drop. The increase in mass density causes them to thermalise more quickly, but this occurs more gradually with depth than the abrupt shut-down of thermal conduction. The inclusion of chromospheric heating by electron beams in atmospheric simulations such as the one used here could potentially account for discrepancies between synthetic diagnostics and observations. \citet{Testa2014} found that thermal conduction alone could not explain observed blueshifts of the Si IV spectral line in small-scale brightenings at coronal loop footpoints.
Instead, their simulations, together with the extended analysis of \citet{Polito2018}, show that non-thermal electron beams can provide sufficient heating at the depths required to produce the upflows responsible for the blueshifts. An advantage of considering the transport of accelerated particles in a 3D rather than a 1D atmospheric model is that it paints a realistic picture of how the available non-thermal energy is distributed in space. Although coherent large-scale flaring events may be reasonably approximated in a 1D coronal loop model with accelerated particles injected at the top, these types of idealised configurations are probably not representative of the situation in most active regions most of the time, and even less so outside of active regions. The quiet solar magnetic field tends to be tangled and inhomogeneous, which can lead to acceleration at any height in the corona and gives a complicated mapping from the acceleration regions to the locations where the non-thermal energy is deposited. Because acceleration takes place over extended regions in which the magnetic field changes topology, particles that are associated with the same reconnection event may end up on completely different trajectories through the atmosphere. Furthermore, the convergence of the magnetic field with depth can lead energetic particles that originate in separate acceleration regions to deposit their energy near the same location in the lower atmosphere (as exemplified by beam set 2 in Fig. \ref{fig:xz_power_change_selected_beams}). The fact that beam heating sites can receive significant contributions of non-thermal electrons from several acceleration regions may have important observational consequences. The incoming electron beams could have been accelerated under different conditions, and thus do not necessarily have the same initial energy distributions. 
Moreover, beams coming from separate locations are influenced to varying degrees by collisions in the corona due to the different trajectories they take to the beam heating site. For instance, beams traversing a high column mass of coronal plasma lose a large share of their low-energy electrons, which gives them a hard energy distribution upon impact with the lower atmosphere. The total distribution of non-thermal electrons incident on the beam heating site could thus be a superposition of several distinct distributions. Consequently, the common assumption of a power-law distribution for the bremsstrahlung-emitting electrons in flares may not be applicable in all cases. In future work, the response of the atmosphere to the electron beams will be investigated. It is also of interest to generate synthetic spectra from the beam heating sites. Furthermore, the development of the energetic particle model presented here is ongoing, and various improvements could be implemented. Currently, our particle transport model does not include the effects of magnetic gradient forces. However, the strengthening of the magnetic field with depth in the chromosphere could significantly increase the pitch angle of the incoming electrons. If this effect is sufficiently strong, it will hamper the penetration of the most energetic electrons. Instead of thermalising below the photosphere, they might only reach the middle chromosphere, resulting in more beam heating at this depth. In general, a numerical treatment of the energy transport problem is required when considering magnetic gradient forces, although a simplified analytical approach has been suggested by \citet{Chandrashekar1986}. The atmospheric simulation used for this paper assumes LTE and statistical equilibrium in its equation of state.
The effects of non-equilibrium hydrogen ionisation are likely to alter the ionisation fraction and electron number densities in the chromosphere \citep{Leenaarts2007}, and could thus have a notable impact on the resulting distribution of beam heating. Moreover, because collisions with the non-thermal electrons can ionise neutral hydrogen atoms, the electron beams themselves contribute to an increase in the hydrogen ionisation rate \citep[see e.g.][]{Fang1993}. The resulting increase in the electron number density will, in turn, affect the chromospheric beam heating. When enhanced collisional ionisation is taken into account, it may also be important to consider electrons accelerated below the transition region \citep[as done by][]{Fang2006}. Although the electrons are unable to transfer energy a significant distance away from the acceleration region due to the high plasma density, they still produce a local increase in the ionisation rate which would not be present if all the reconnection energy was converted directly into Joule heating. \begin{acknowledgements} This research was supported by the Research Council of Norway, project number 250810, through its Centres of Excellence scheme, project number 262622, and through grants of computing time from the Programme for Supercomputing. \end{acknowledgements} \bibliographystyle{aa}
arxiv
\title{Multiresolution Decomposition of Areal Count Data} \titlerunning{Multiresolution Decomposition of Areal Count Data} \author{R. Flury$^1$ and R. Furrer$^2$} \authorrunning{R. Flury \emph{et al.}} \address{$^1$ Department of Mathematics, University of Zurich, Switzerland; roman.flury@math.uzh.ch\\ $^2$ Department of Mathematics \& Department of Computational Science, University of Zurich, Switzerland; reinhard.furrer@math.uzh.ch} \abstract{ Multiresolution decomposition is commonly understood as a procedure to capture scale-dependent features in random signals. Such methods were first established for image processing and typically rely on raster or regularly gridded data. In this article, we extend a particular multiresolution decomposition procedure to areal count data, i.e.~discrete, irregularly gridded data. More specifically, we incorporate concepts and distributions from the so-called Besag--York--Molli\'{e} model into a new model in order to include a priori demographical knowledge. These adaptations and the subsequent changes in the computation schemes are carefully outlined below, whereas the main idea of the original multiresolution decomposition remains unchanged. Finally, we demonstrate the extension's feasibility by applying it to oral cavity cancer counts in Germany.} \keywords{ Spatial scales; Lattice data; Intrinsic GMRF; Besag--York--Molli\'{e} model; MCMC. } \begin{document} \maketitle \thispagestyle{empty} \section{Introduction} Decomposing an observed signal or spatial field into scale-dependent components allows recognizing its inherent and prominent features. 
Those features give insight into where local or global phenomena manifest themselves and assist in understanding the structure of hierarchical information. Holmstr\"om et al.~(2011) proposed a procedure in the tradition of image processing, which is hence applicable to Gaussian data distributed on regular grids~\cite{H01}. We extend this method to count data, potentially observed on an irregular grid, often termed \lq{areal count data}\rq~\cite{C93}. The original multiresolution decomposition approach can be divided into three individual steps: 1)~spatial field resampling based on a Bayesian hierarchical model, 2)~smoothing on multiple scales and calculating differences between these smooths to specify details for each resampled field separately, and 3)~posterior credibility analysis. In the following paragraphs we summarize a) the Bayesian hierarchical model for step 1) and b) how to calculate differences between smooths in step 2). These are the parts of the procedure relevant for the proposed extension, outlined in Section~2. The original multiresolution decomposition assumes that an observed field $\boldsymbol{y}$ consists of the true field $\boldsymbol{x}$ plus additive noise. Based on these flexible model assumptions the hierarchical model is constructed. a) Bayesian hierarchical model: the true field $\boldsymbol{x}$ is presumed to follow a Gaussian distribution, which implies a Gaussian likelihood function. Its positive-valued variance is modeled with a scaled--inv--$\chi^2$ prior, and the spatial component of the field $\boldsymbol{x}$ is captured with an intrinsic Gaussian Markov random field (IGMRF) using a precision matrix $\boldsymbol{Q}$~\cite{R05}. With those choices, the resulting marginal posterior is of closed form and corresponds to a multivariate t-distribution~\cite{E05}. 
b) Calculate differences between smooths: the proposed penalty smoother is defined as $\boldsymbol{S}_{\lambda} = (\mathbf{I} + \lambda\boldsymbol{Q})^{-1}$, where $\lambda$ is the scale or smoothing parameter, such that $0 = \lambda_1 < \lambda_2 < \ldots < \lambda_L = \infty$. The spatial field $\boldsymbol{x}$ is interpreted as a random vector, $\boldsymbol{S}_{\lambda_1}\boldsymbol{x} = \boldsymbol{x}$ defines the identity mapping and $\boldsymbol{S}_{\lambda_L}\boldsymbol{x} =~\boldsymbol{S}_{\infty}\boldsymbol{x}$ the mean field. Based on these preliminaries, $\boldsymbol{x}$ can be decomposed as differences of consecutive smooths: $\boldsymbol{x} = \sum_{l=1}^{L-1} \left( \boldsymbol{S}_{\lambda_l} - \boldsymbol{S}_{\lambda_{l+1}} \right)\boldsymbol{x} + \boldsymbol{S}_{\infty}\boldsymbol{x}$. Scale-dependent details are then formalized as $\boldsymbol{z}_l = \left(\boldsymbol{S}_{\lambda_l} - \boldsymbol{S}_{\lambda_{l+1}} \right)\boldsymbol{x}$ for $l = 1, \ldots, L-1$ and $\boldsymbol{z}_L = \boldsymbol{S}_{\infty}\boldsymbol{x}$. Pivotal for a) and b) is the definition of the precision matrix~$\boldsymbol{Q}$: \begin{equation} \boldsymbol{x}^\top\boldsymbol{Qx} = \sum_j \biggl( \sum\limits_{i \sim j} x_i - 4 x_j \biggr)^2, \end{equation} where $i{\sim}j$ denotes neighboring grid locations. To ensure four neighbors at every grid location $i$, the boundary values of $\boldsymbol{x}$ are extended across the initial grid. This definition inherently requires the data to be allocated on a regular grid but bears the advantage that individual computational steps can be optimized based on $\boldsymbol{Q}$'s fast eigendecomposition, such that large-dimensional problems can be solved efficiently. \section{Extension}\label{sec:method} To decompose areal count data, first the resampling pattern described in a) needs modification. 
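Before turning to the modifications, the decomposition in b) can be illustrated with a minimal numerical sketch. The example below is not the paper's implementation: it uses a 1-D first-difference precision matrix as a stand-in for $\boldsymbol{Q}$ and checks that the scale-dependent details sum back to the original field, a telescoping identity that holds for any choice of smoothing parameters.

```python
import numpy as np

# Sketch of x = sum_l (S_{lambda_l} - S_{lambda_{l+1}}) x + S_inf x with
# penalty smoothers S_lambda = (I + lambda*Q)^{-1}. A 1-D first-difference
# precision matrix stands in for the paper's grid-based Q (illustration only).
n = 50
rng = np.random.default_rng(0)
D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference operator
Q = D.T @ D                               # intrinsic precision, null space = constants
x = rng.normal(size=n)

def smooth(field, lam):
    """Penalty smoother S_lambda applied to a field."""
    return np.linalg.solve(np.eye(n) + lam * Q, field)

smooths = [smooth(x, lam) for lam in (0.0, 1.0, 25.0)]
smooths.append(np.full(n, x.mean()))      # S_infinity x: projection onto ker(Q)

details = [smooths[l] - smooths[l + 1] for l in range(3)]
details.append(smooths[-1])               # z_L = S_infinity x

assert np.allclose(sum(details), x)       # details reassemble the field exactly
```

As $\lambda\to\infty$ the smoother converges to the projection onto the null space of $\boldsymbol{Q}$, which for this intrinsic model is the constant (mean) field.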
We assume that the $n$ observed counts $\boldsymbol{y}=(y_1,\dots,y_n)^\top$ are realizations of conditionally independent Poisson distributions and that the expected counts $\boldsymbol{e}=(e_1,\dots,e_n)^\top$ are known for every location in the spatial field. The Poisson rate for a location $i$ is defined as the product of the expected count $e_i$ and the respective relative risk, denoted as $\exp{(\eta_i)}$. To resample the spatial field, we construct the hierarchical model with the likelihood function \begin{equation} \pi(\boldsymbol{y}|\eta_1,\dots,\eta_n) \propto \prod_{i=1}^{n} \exp{\bigl(y_i\eta_i - e_i\exp{(\eta_i)}\bigr)}, \end{equation} which corresponds to the classical Besag--York--Molli{\'e} (BYM) model~\cite{B91}. Here, $\boldsymbol{\eta}$ is modeled as the composition of the true log-relative risk $\boldsymbol{u}$ and a normal zero-mean noise term $\boldsymbol{v}$, with unknown precision parameter $\kappa_{\boldsymbol{v}}$. Analogous to the original model, we use a first-order IGMRF process to model the spatial component with accompanying precision parameter $\kappa_{\boldsymbol{u}}$, such that \begin{equation} \pi(\boldsymbol{u}|\kappa_{\boldsymbol{u}}) \propto \kappa_{\boldsymbol{u}}^{\frac{n-1}{2}} \exp{\left( -\frac{\kappa_{\boldsymbol{u}}}{2} \sum_{i \sim j} (u_i - u_j)^2 \right)} = \kappa_{\boldsymbol{u}}^{\frac{n-1}{2}} \exp{\left( -\frac{\kappa_{\boldsymbol{u}}}{2} \boldsymbol{u}^\top \boldsymbol{R} \boldsymbol{u} \right)}. \end{equation} Again $i{\sim}j$ denotes neighboring lattice locations, here in the sense of regions sharing a common border. Assigning Gamma priors to both precision parameters implies a posterior distribution of non-closed form. Hence, we use a Gibbs sampler with a Metropolis--Hastings (MH) step to resample the log-relative risks $\boldsymbol{u}$, the noise components $\boldsymbol{v}$ and the parameters~\cite{G15}. 
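The identity $\sum_{i \sim j} (u_i - u_j)^2 = \boldsymbol{u}^\top \boldsymbol{R} \boldsymbol{u}$ used above corresponds to the first-order structure matrix $\boldsymbol{R} = \operatorname{diag}(n_i) - \boldsymbol{A}$, with $n_i$ the number of neighbors of region $i$ and $\boldsymbol{A}$ the binary adjacency matrix. A small sketch verifies this; the four-region adjacency below is illustrative, not the German district graph.

```python
import numpy as np

# First-order IGMRF structure matrix from region adjacency:
# R = diag(#neighbours) - A, so that u' R u equals the sum of
# (u_i - u_j)^2 over neighbouring pairs i ~ j.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])              # made-up symmetric adjacency
R = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(1)
u = rng.normal(size=4)
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j]]
quad = sum((u[i] - u[j]) ** 2 for i, j in pairs)
assert np.isclose(u @ R @ u, quad)

# R is singular with the constant vector in its null space, which is why
# the prior above carries the rank-deficient normalization kappa^{(n-1)/2}.
assert np.allclose(R @ np.ones(4), 0.0)
```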
Finally, we exploit that the mean of a Poisson distribution equals its rate and reconstruct the spatial field with $\boldsymbol{e} \cdot \exp{(\boldsymbol{u} + \boldsymbol{v})}$, for every sampled field $\boldsymbol{u}$ and $\boldsymbol{v}$. We still form the scale-dependent details with a penalty smoother. Instead of using the matrix $\boldsymbol{Q}$ from the original model, we include the precision matrix $\boldsymbol{R}$ of the first-order IGMRF~\cite{R05}. The definition of $\boldsymbol{R}$ does not require the data to be associated with a regular grid, as it can be constructed from the adjacency relations of the respective observations. Since we use a different precision matrix, the optimized implementation relying on $\boldsymbol{Q}$ cannot be employed, but we alternatively take advantage of the precision's sparse structure and apply tailored algorithms~\cite{F10}. \section{Application}\label{sec:application} The extension's feasibility is demonstrated on the German oral cavity cancer dataset~\cite{K00}. This dataset includes cancer counts for 544 districts of Germany over 1986--1990, as well as the expected number of cases derived demographically. The main bulk of the oral cavity counts range between one and one hundred counts per district, but single highly populated districts have up to 500. The data, including additional relevant information, are available via the \textsc{R} package \textbf{spam}~\cite{F10}. Following the multiresolution decomposition steps, we first resample the areal counts using suitable sampler specifications~\cite{G15} and verify the convergence of the MH sampler with common diagnostic tools~\cite{B98}. Figure~\ref{fig1} shows how well the reconstructed field corresponds to the original data. Only in northeastern Germany, where the field is less smooth, are the differences larger. 
Since the BYM model was designed not to be oversensitive to extreme counts, part of the resampling difference can be explained through its damping effect~\cite{W10}. \begin{figure}[!htb] \centerline{\includegraphics[scale=0.53]{germany_yanderror.png}} \caption{Oral cavity cancer data on logarithmic scale. Left: the observed number of cases; middle: the mean of the reconstructed fields; right: the difference between the left and the middle panels.} \label{fig1} \end{figure} In the second step, we choose suitable scales~\cite{P13} $\lambda_1 = 0$, $\lambda_2 = 1$ and $\lambda_3 = 25$ and form the scale-dependent details (Figure~\ref{fig2}). Completing the decomposition, we calculate pointwise probability maps~\cite{H01} (Figure~\ref{fig3}). The detail $\boldsymbol{z}_1$ reflects spatial noise as well as the relatively low or high counts in the data. This is also supported by its pointwise probability map, where no large red or blue clusters are visible. $\boldsymbol{z}_2$ catches larger patches of districts and shows local peculiarities. Detail $\boldsymbol{z}_3$ covers the largest scale range and shows the east--west and nationwide trends, although these are less distinct than the more local ones, as indicated by the legends of each panel. \begin{figure}[!htb] \centerline{\includegraphics[scale=0.53]{germany_smMean.png}} \caption{Scale-dependent details $\boldsymbol{z}_l=\boldsymbol{S}_{\lambda_l}{\log(\boldsymbol{e} \cdot \exp{(\boldsymbol{u} + \boldsymbol{v})} ) } - \boldsymbol{S}_{\lambda_{l+1}}{\log(\boldsymbol{e} \cdot \exp{(\boldsymbol{u} + \boldsymbol{v})} )}$, summarized by their posterior means. Left:~$\text{E}(\boldsymbol{z}_1|\boldsymbol{y})$; middle:~$\text{E}(\boldsymbol{z}_2|\boldsymbol{y})$; right:~$\text{E}(\boldsymbol{z}_3|\boldsymbol{y})$.} \label{fig2} \end{figure} \begin{figure}[!htb] \centerline{\includegraphics[scale=0.53]{germany_pw.png}} \caption{Pointwise probability maps. 
Left:~$\boldsymbol{z}_1$; middle:~$\boldsymbol{z}_2$; right:~$\boldsymbol{z}_3$. The maps indicate which features are jointly credible: blue and red areas mark jointly credible negative and positive features, respectively.} \label{fig3} \end{figure} \section{Discussion}\label{sec:discussion} We extended the multiresolution decomposition approach from Holmstr\"om et al.~(2011), which originally processes data from a Gaussian distribution on a regular grid, to areal count data. Establishing an MH sampling model makes it possible to resample count data and to use an arbitrary precision matrix. Employing the BYM model to include prior demographical knowledge, in the form of the known expected counts, enables us to model the data without being oversensitive to possible outliers. The \textsc{R} code to reproduce this example is available at https://git.math.uzh.ch/roflur/bymresa. \bibliographystyle{plain}
\section{Introduction} We consider parabolic control systems on $L_p(\R^d)$, $p\in [1,\infty)$, of the form \begin{equation} \label{eq:system-intro} \dot{x}(t) = -A_p x(t) + \1_\thickset u(t
),\quad t\in (0,T],\quad x(0) = x_0\in L_p(\R^d), \end{equation} where $-A_p$ is a strongly elliptic differential operator of order $m \in \N$ with constant coefficients, $\1_\thickset \from L_p (\thickset)\to L_p(\R^d)$ is the embedding from a measurable set $\thickset \subset \R^d$ to $\R^d$, $T>0$, and where $u \in L_r ((0,T);L_p (\thickset))$ with some $r \in [1,\infty]$. Hence, the influence of the control function $u$ is restricted to the subset $\thickset$. Note that we allow for lower-order terms in the strongly elliptic differential operator. The focus of this paper is on null-controllability, that is, for any initial condition $x_0\in L_p(\R^d)$ there is a control function $u\in L_r ((0,T);L_p (\thickset))$ such that the mild solution of \eqref{eq:system-intro} at time $T$ equals zero. We will also be concerned with the notion of approximate null-controllability, which means that for any $\epsilon > 0$ and any $x_0\in L_p(\R^d)$ we can find a control function $u \in L_r ((0,T);L_p (\thickset))$ such that the mild solution of \eqref{eq:system-intro} at time $T$ has norm smaller than $\epsilon$; in reflexive spaces, these two notions agree (see \cite{Carja-88}). By linearity, (approximate) null-controllability implies that any target state in the range of the semigroup generated by $-A_p$ can be reached (up to an error $\epsilon$) within time $T$. \par We will show that if $\thickset$ is a so-called thick set, then the system is approximately null-controllable for $p=1$ and null-controllable if $p\in (1,\infty)$. Note that the case $p \in (1,\infty)$ is already covered by \cite{GallaunST-20}. However, our new proof unifies the cases $p = 1$ and $p \in (1,\infty)$. Moreover, we provide upper bounds on the control cost, i.e.\ on the norm of the control function $u$ which steers the system (approximately) to zero at time $T$, that are explicit in terms of geometric properties of the thick set $\thickset$ and of the final time $T$. 
Controllability for systems on $L_p(\Omega)$, where $\Omega$ is a bounded domain and $p\in[1,\infty)$, has been studied earlier in the literature, for instance in \cite{FabrePZ-95} in the context of semilinear heat equations, which includes approximate null-controllability of the linear heat equation as a special case. For further results in this direction, we refer to \cite{FernandezZ-00} and the survey article \cite{Zuazua-06}. In comparison, we obtain (approximate) null-controllability of linear differential operators of higher orders with $u\in L_r((0,T);L_p(\R^d))$ where $r\in[1,\infty]$. Note that from the physical point of view the case $p=1$ is probably most interesting, since we can then interpret the states as heat densities (and their norms give the total heat content). \par An equivalent formulation of approximate null-controllability is final-state observability of the adjoint system to \eqref{eq:system-intro}. This means that there is a constant $C_{\mathrm{obs}} \geq 0$ such that for all $\varphi \in L_p(\R^d)'$ we have \begin{equation*} \norm{S'_T \varphi}_{L_p(\R^d)'} \leq \begin{cases} C_{\mathrm{obs}} \left(\int_0^T \norm{(S'_t \varphi)|_\thickset}_{L_p(\thickset)'}^{r'} \drm t\right)^{1/r'} & \text{if } r'\in [1,\infty),\\ C_{\mathrm{obs}}\esssup\limits_{t\in [0,T]} \norm{(S'_t \varphi)|_\thickset}_{L_p(\thickset)'} & \text{if } r'=\infty, \end{cases} \end{equation*} where $(S_t)_{t\geq 0}$ is the $C_0$-semigroup generated by $-A_p$ and $r' \in [1,\infty]$ is such that $1/r + 1/r' = 1$. This equivalence follows from Douglas' lemma, see \cite{Douglas-66} for Hilbert spaces, and \cite{Embry-73,DoleckiR-77,Harte-78,CurtainP-78,Carja-85,Carja-88,Forough-14} for Banach spaces. \par In Section \ref{sec:application} we formulate our results on final-state observability in Theorem \ref{thm:obs} and (approximate) null-controllability in Theorem \ref{thm:null-control}. 
The proof of Theorem \ref{thm:obs} rests on an abstract observability estimate stated in the appendix (see Theorem \ref{thm:spectral+diss-obs}) and is provided in Section \ref{sec:dissipation}. \par The main strategy we follow to prove observability was first described in \cite{LebeauR-95,LebeauZ-98,JerisonL-99} for the Hilbert space case (i.e.\ $p=r=2$), and further studied, e.g., in \cite{Miller-10,TenenbaumT-11,WangZ-17,BeauchardP-18,NakicTTV-20}. However, far less is known about its generalization to Banach spaces; to the best of our knowledge, we are only aware of \cite{GallaunST-20}. Note that strong continuity of the semigroup $(S_t)_{t\geq0}$ is assumed there. However, approximate null-controllability in $L_1$ requires observability in $L_\infty$, where strongly continuous semigroups are rather rare \cite{Lotz-85}. Theorem \ref{thm:spectral+diss-obs} provides a generalization of \cite{GallaunST-20} to not necessarily strongly continuous semigroups. \section{Observability and Null-controllability in \texorpdfstring{$L_p$}{Lp}-Spaces} \label{sec:application} In order to formulate our main theorems we review some basic facts from Fourier analysis. For details we refer, e.g., to the textbook \cite{Grafakos-14}. We denote by $\mathcal{S}(\R^d)$ the Schwartz space of rapidly decreasing functions, which is dense in $L_p(\R^d)$ for all $p \in[1,\infty)$. The space of tempered distributions, i.e.\ the topological dual space of $\mathcal{S}(\R^d)$, is denoted by $\mathcal{S}'(\R^d)$. For $f\in \mathcal{S}(\R^d)$ let $\F f\from\R^d\to\C$ be the Fourier transform of $f$ defined by \[\F f (\xi) := \int_{\R^d} f(x) \euler^{-\ii\xi\cdot x}\drm x.\] Then $\F\from \mathcal{S}(\R^d)\to \mathcal{S}(\R^d)$ is bijective, continuous and has a continuous inverse, given by \[\F^{-1} f(x) = \frac{1}{(2\pi)^d} \int_{\R^d} f(\xi) \euler^{\ii x\cdot \xi}\drm \xi\] for all $f\in \mathcal{S}(\R^d)$. 
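As a quick sanity check of these conventions (illustrative only, in one dimension), one can verify numerically that the transform of the Gaussian $f(x) = \euler^{-x^2/2}$ is $\sqrt{2\pi}\,\euler^{-\xi^2/2}$:

```python
import numpy as np

# 1-D numerical check of the convention F f(xi) = int f(x) e^{-i xi x} dx
# for the Gaussian f(x) = exp(-x^2/2), whose transform is sqrt(2 pi) e^{-xi^2/2}.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

def fourier(xi):
    """Riemann-sum approximation of the Fourier integral at frequency xi."""
    return np.sum(f * np.exp(-1j * xi * x)) * dx

for xi in (0.0, 0.5, 1.0, 2.0):
    expected = np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)
    assert abs(fourier(xi) - expected) < 1e-8
```

The equally spaced Riemann sum is essentially exact here because the integrand is smooth and decays rapidly.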
For $u\in \mathcal{S}'(\R^d)$ the Fourier transform is again denoted by $\F$ and is given by $(\F u)(\phi) = u(\F \phi)$ for $\phi\in \mathcal{S}(\R^d)$. By duality, the Fourier transform is bijective on $\mathcal{S}'(\R^d)$ as well. \par Let $m \in \N$ and \begin{equation*} a(\xi) = \sum _{\abs{\alpha}_1 \leq m} a_\alpha\xi^\alpha, \quad \xi \in \R^d, \end{equation*} be a polynomial of degree $m$ with coefficients $a_\alpha \in \C$. We say that the polynomial $a$ is \emph{strongly elliptic} if there exist constants $c > 0$ and $\omega \in \R$ such that $a$ satisfies for all $\xi \in \R^d$ the lower bound \begin{equation} \label{eq:strongly_elliptic} \re a (\xi) \geq c\abs{\xi}^m - \omega . \end{equation} Note that strong ellipticity implies that $m$ is even. Given a strongly elliptic polynomial $a$ and $p \in [1,\infty]$, we define the associated heat semigroup $S : [0,\infty) \to \mathcal{L} (L_p (\R^d))$ by \begin{equation} \label{eq:semigroup} S_tf = \F^{-1}\euler^{-ta}\F f = \F^{-1}\euler^{-ta}*f . \end{equation} Note that the second equality holds since $\euler^{-ta} \in \mathcal{S}(\R^d)$. It is well known that the operator semigroup $(S_t)_{t\geq 0}$ is strongly continuous if $p \in [1,\infty)$. For $p=\infty$ the semigroup is the dual semigroup of a strongly continuous semigroup on $L_1(\R^d)$ and hence it is only weak*-continuous in general. For details we refer, e.g., to \cite{Arendt-02}. By \cite{TerElstR-96}, the integral kernel $k_t = \mathcal{F}^{-1} \euler^{-ta}$ satisfies the following heat kernel estimate: There exist $c_1,c_2>0$ such that for all $x\in \R^d$ and $t>0$ we have \begin{equation}\label{eq:kernelbound} \lvert k_t(x)\rvert \leq c_1 \euler^{\omega t} t^{-d/m} \euler^{-c_2\left(\frac{\lvert x \rvert^m}{t}\right)^{\frac{1}{m-1}}}. \end{equation} This implies that there is $M \geq 1$ and $\omega \in \R$ such that for all $t \geq 0$ we have \begin{equation} \label{eq:realpart} \lVert S_t \rVert \leq M \euler^{\omega t},\quad t\geq 0. 
\end{equation} In order to formulate our main result we introduce the notion of a thick subset $\thickset$ of $\R^d$. \begin{Definition} Let $\rho\in (0,1]$ and $L\in (0,\infty)^d$. A set $\thickset \subset \R^d$ is called \emph{$(\rho,L)$-thick} if $\thickset$ is measurable and for all $x \in \R^d$ we have \[ \left\lvert \thickset \cap \left( \bigtimes_{i=1}^d (0,L_i) + x \right) \right\rvert \geq \rho \prod_{i=1}^d L_i . \] Here, $\lvert \cdot \rvert$ denotes Lebesgue measure in $\R^d$. \end{Definition} The following theorem yields a final-state observability estimate for $(S_t)_{t\geq0}$ on thick sets. \begin{Theorem}\label{thm:obs} Let $m \in \N$, $a\from \R^d \to \C$ a strongly elliptic polynomial of order $m$, $c>0$ and $\omega\in\R$ as in \eqref{eq:strongly_elliptic}, and $(S_t)_{t\geq0}$ as in \eqref{eq:semigroup}. Let $\rho\in(0,1]$, $L \in (0,\infty)^d$, $\thickset \subset \R^d$ a $(\rho,L)$-thick set, $p,r \in [1,\infty]$, and $T>0$. Then we have for all $f \in L_p(\R^d)$ \begin{equation*} \norm{S_T f}_{L_p(\R^d)} \leq \begin{cases} \displaystyle{C_{\mathrm{obs}} \left(\int_0^T \norm{(S_t f)|_\thickset}_{L_p(\thickset)}^r \drm t\right)^{1/r}} & \text{if } r\in [1,\infty),\\ \displaystyle{ C_{\mathrm{obs}}\esssup\limits_{t\in [0,T]} \norm{(S_t f)|_\thickset}_{L_p(\thickset)}} & \text{if } r=\infty, \end{cases} \end{equation*} where \[ C_{\mathrm{obs}} = \frac{K_a}{T^{1/r}} \left( \frac{K_d}{\rho} \right)^{K_d(1+\lvert L \rvert_1 \lambda^*)} \exp \left(\frac{K_m (\lvert L \rvert_1 \ln (K_d / \rho))^{m/(m-1)}}{(cT)^{1 / (m - 1)}} + K\max\{\omega,0\} T\right). \] Here, $\lambda^* = (2^{m+4} \max \{\omega , 0\} / c)^{1/m}$, $K>0$ is an absolute constant, and $K_a, K_d, K_m > 0$ are constants depending only on the polynomial $a$, on $d$, or on $m$, respectively. \end{Theorem} By duality, we thus obtain (approximate) null-controllability for \eqref{eq:system-intro}. 
\begin{Theorem} \label{thm:null-control} Let $m \in \N$, $a\from \R^d \to \C$ a strongly elliptic polynomial of order $m$, $c>0$ and $\omega\in\R$ as in \eqref{eq:strongly_elliptic}, and $(S_t)_{t\geq0}$ as in \eqref{eq:semigroup}. Let $\rho\in(0,1]$, $L \in (0,\infty)^d$, $\thickset \subset \R^d$ a $(\rho,L)$-thick set, $r \in [1,\infty]$, and $T>0$. \begin{enumerate}[(a)] \item For any $f \in L_1 (\R^d)$ and any $\epsilon > 0$ there exists $u \in L_r ((0,T);L_1 (E))$ with \[ \lVert u \rVert_{L_r ((0,T);L_1 (E))} \leq C_{\mathrm{obs}} \lVert f \rVert_{L_1 (\R^d)} \] such that \[ \left\lVert S_T f + \int_0^T S_{T-t} \1_E u (t) \drm t \right\rVert_{L_1 (\R^d)} < \epsilon . \] \item Let $p \in (1,\infty)$. Then for any $f \in L_p (\R^d)$ there exists $u \in L_r ((0,T);L_p (E))$ with \[\lVert u \rVert_{L_r ((0,T);L_p (E))} \leq C_{\mathrm{obs}} \lVert f \rVert_{L_p (\R^d)}\] such that \[ S_T f + \int_0^T S_{T-t} \1_E u (t) \drm t = 0 . \] \end{enumerate} Here, $C_{\mathrm{obs}}$ is as in Theorem~\ref{thm:obs} with $r$ replaced by $r'$ where $r' \in [1,\infty]$ such that $1/r + 1/r' = 1$. \end{Theorem} \begin{Remark}[Discussion on observability and null-controllability] For $p\in [1,\infty)$ let $-A_p$ be the generator of the $C_0$-semigroup $(S_t)_{t\geq 0}$ on $L_p (\R^d)$. Note that for all $f\in \mathcal{S}(\R^d)$ we have \[ A_p f = \sum _{\abs{\alpha}_1 \leq m} a_\alpha (-\ii)^{\abs{\alpha}_1} \partial^\alpha f . \] Then, the statement of Theorem~\ref{thm:obs} corresponds to a final-state observability estimate for the system \begin{equation*} \begin{aligned} \dot{x}(t) & = -A_p x(t), \quad & &t\in (0,T] ,\quad x(0) = x_0 \in L_p(\R^d), \\ y(t) &= x(t)|_\thickset, \quad & & t\in [0,T]. \end{aligned} \end{equation*} \par Let us now turn to the discussion on null-controllability. 
For a measurable set $\thickset\subset \R^d$ and $T>0$ we consider the linear control problem \begin{align*} \dot{x}(t) & = -A_p x(t) + \1_\thickset u(t), \quad t \in (0,T], \quad x(0) = x_0 \in L_p(\R^d) \end{align*} where $u\in L_r ((0,T);L_p(\thickset))$ with $r\in[1,\infty]$. The unique mild solution is given by Duhamel's formula \begin{equation*} x(t) = S_t x_0 + \mathcal{B}^t u, \quad\text{where}\quad \mathcal{B}^t u = \int_0^t S_{t-\tau} \1_\thickset u(\tau) \drm\tau . \end{equation*} Then, statement (a) of Theorem~\ref{thm:null-control} corresponds to \emph{approximate null-controllability in time $T$}, that is, for all $\epsilon > 0$ and all $x_0 \in L_1 (\R^d)$, there exists a $u \in L_r((0,T);L_p(\thickset))$ such that $\lVert x(T) \rVert_{L_p(\R^d)} =\lVert S_T x_0 + \mathcal{B}^T u \rVert_{L_p(\R^d)} < \epsilon$. Statement (b) of Theorem~\ref{thm:null-control} corresponds to \emph{null-controllability in time $T$}, that is, for all $x_0 \in L_p (\R^d)$, $p \in (1,\infty)$, there exists a $u \in L_r((0,T);L_p(\thickset))$ such that $x(T)= 0$. Note that in the case $p\in (1,\infty)$ null-controllability and approximate null-controllability are equivalent, see, e.g.,\ \cite{Carja-88}. \end{Remark} It is a standard duality argument that Theorem~\ref{thm:null-control} follows from Theorem~\ref{thm:obs} by means of Douglas' lemma. For the sake of completeness we give a short proof. \begin{proof}[Proof of Theorem~\ref{thm:null-control}] Let $p \in [1,\infty)$, $r \in [1,\infty]$ and $\mathcal{B}^T \colon L_r ((0,T); L_p (E)) \to L_p (\R^d)$ be given by \begin{equation*} \mathcal{B}^T u = \int_0^T S_{T-t} \1_E u (t) \drm t . 
\end{equation*} Then, by \cite[Theorem 2.1]{Vieru-05} we have for all $g \in L_{p'}(\R^d)$ \[ \lVert (\mathcal{B}^T)'g\rVert_{L_{r} ((0,T); L_{p} (E))'} = \sup_{\tau\in [0,T]} \lVert (S'_{T-\tau} g) |_E\rVert_{L_{p'} (E)} = \sup_{t\in [0,T]} \lVert (S'_t g)|_E \rVert_{L_{p'} (E)} \] if $r = 1$, and \[ \lVert (\mathcal{B}^T)'g \rVert_{L_{r} ((0,T); L_{p} (E))'} = \left( \int_0^T \lVert (S'_{T-\tau} g)|_E \rVert_{L_{p'} (E)}^{r'} \drm \tau \right)^{1/{r'}} = \left( \int_0^T \lVert (S'_t g)|_E \rVert_{L_{p'} (E)}^{r'} \drm t\right)^{1/{r'}} \] if $r \in (1,\infty]$, where $r' \in [1,\infty]$ is such that $1 / r + 1/ r' = 1$ and $p' \in (1,\infty]$ is such that $1 / p + 1/ p' = 1$. Since $\mathcal{F}S_t' = \euler^{-ta(-\cdot)}\mathcal{F}$, we have that $(S_t')_{t\geq0}$ is associated to the symbol $a(-\cdot)$, which is strongly elliptic with the same constant $c>0$. Moreover, since the associated heat kernel is given by $(\mathcal{F}^{-1}\euler^{-ta})(-\cdot)$, we have $\norm{S_t'} \leq M\euler^{\omega t}$ with the same $M$ and $\omega$ as in \eqref{eq:realpart}. Thus, Theorem~\ref{thm:obs} and the above equalities imply for all $g \in L_{p'} (\R^d)$ \begin{equation*} \lVert S_T' g \rVert_{L_{p'}(\R^d)} \leq C_{\mathrm{obs}} \lVert (\mathcal{B}^T)' g \rVert_{L_{r'} ((0,T);L_{p'} (E))} = C_{\mathrm{obs}} \lVert (\mathcal{B}^T)' g \rVert_{L_{r} ((0,T);L_{p} (E))'}, \end{equation*} where $C_{\mathrm{obs}}$ is as in Theorem~\ref{thm:obs} with $r$ replaced by $r'$. By Douglas' lemma, see e.g.\ \cite{Harte-78,Carja-85,Carja-88}, we conclude \[ \{S_T f \colon \lVert f \rVert_{L_p (\R^d)} \leq 1\} \subset \overline{\{ \mathcal{B}^T u \colon \lVert u \rVert_{L_r ((0,T);L_p (E))} \leq C_{\mathrm{obs}} \}} \quad \text{if} \quad p = 1 \] and \[ \{S_T f \colon \lVert f \rVert_{L_p (\R^d)} \leq 1\} \subset \{ \mathcal{B}^T u \colon \lVert u \rVert_{L_r ((0,T);L_p (E))} \leq C_{\mathrm{obs}} \} \quad \text{if} \quad p \in (1,\infty) . \] By scaling, this implies the statement of the theorem. 
\end{proof} \section{Proof of Theorem~\ref{thm:obs}} \label{sec:dissipation} For the proof of Theorem~\ref{thm:obs} we apply the abstract observability estimate in Theorem~\ref{thm:spectral+diss-obs}. For this purpose, we define a family of operators $P_\lambda$ and verify the uncertainty principle \eqref{eq:ass:uncertainty} and the dissipation estimate \eqref{eq:ass:dissipation}. We start by defining the operators $P_\lambda$. Let $\eta\in C_{\mathrm c}^\infty ([0,\infty) )$ with $0\leq\eta\leq 1$ such that $\eta (r) = 1$ for $r\in [0,1/2]$ and $\eta (r) = 0$ for $r\geq 1$. For $\lambda > 0$ we define $\chi_\lambda\from \R^d\to \R$ by $\chi_\lambda (\xi) = \eta (\lvert \xi \rvert / \lambda)$. Since $\chi_\lambda \in \mathcal{S}(\R^d)$, we have $\mathcal{F}^{-1}\chi_\lambda \in \mathcal{S}(\R^d) \subset L_1(\R^d)$ and for all $p \in [1,\infty]$ we define $P_\lambda \from L_p(\R^d) \to L_p(\R^d)$ by $P_\lambda f = (\mathcal{F}^{-1}\chi_\lambda) * f$. By Young's inequality we have for all $f\in L_p(\R^d)$ \[ \lVert P_\lambda f \rVert_{L_p(\R^d)} = \lVert (\mathcal{F}^{-1} \chi_\lambda) \ast f \rVert_{L_p(\R^d)} \leq \lVert \mathcal{F}^{-1} \chi_\lambda \rVert_{L_1(\R^d)} \lVert f \rVert_{L_p(\R^d)}. \] Moreover, the norm $\lVert \mathcal{F}^{-1} \chi_\lambda \rVert_{L_1(\R^d)}$ is independent of $\lambda >0$. Indeed, by the scaling property of the Fourier transform and by change of variables we have for all $\lambda > 0$ \begin{align} \label{eq:IndependedOfLambda} \lVert \mathcal{F}^{-1} \chi_\lambda \rVert_{L_1(\R^d)} = \lvert \lambda \rvert^d \lVert (\mathcal{F}^{-1} \chi_1) (\lambda \cdot)\rVert_{L_1(\R^d)} = \lVert \mathcal{F}^{-1} \chi_1 \rVert_{L_1(\R^d)}. \end{align} Hence, for all $\lambda > 0$ the operator $P_\lambda$ is a bounded linear operator and the family $(P_\lambda)_{\lambda>0}$ is uniformly bounded by $\lVert \mathcal{F}^{-1} \chi_1 \rVert_{L_1(\R^d)}$. 
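The scaling identity \eqref{eq:IndependedOfLambda} can be checked numerically in one dimension. The sketch below is illustrative only: it uses one concrete $C^\infty$ cutoff $\eta$ of the kind assumed above and approximates the inverse transform and its $L_1$ norm by direct quadrature.

```python
import numpy as np

# 1-D numerical check that ||F^{-1} chi_lambda||_{L_1} does not depend on lambda.
# eta is one concrete C^infinity cutoff with eta = 1 on [0,1/2] and eta = 0 on
# [1,infinity); chi_lambda(xi) = eta(|xi| / lambda).
def eta(r):
    r = np.asarray(r, dtype=float)
    out = np.ones_like(r)
    out[r >= 1.0] = 0.0
    mid = (r > 0.5) & (r < 1.0)
    t = (r[mid] - 0.5) / 0.5
    psi = lambda s: np.exp(-1.0 / s)     # standard smooth-transition bump
    out[mid] = psi(1.0 - t) / (psi(1.0 - t) + psi(t))
    return out

def l1_norm(lam):
    """L1 norm of the inverse transform of chi_lambda, by direct quadrature."""
    xi = np.linspace(-lam, lam, 1001)    # support of chi_lambda
    dxi = xi[1] - xi[0]
    x = np.arange(-80.0, 80.0, 0.05)
    chi = eta(np.abs(xi) / lam)
    kernel = (chi[None, :] * np.exp(1j * x[:, None] * xi[None, :])).sum(axis=1)
    kernel *= dxi / (2.0 * np.pi)        # (1/2pi) int chi(xi) e^{i x xi} d xi
    return np.sum(np.abs(kernel)) * 0.05

n1, n2 = l1_norm(1.0), l1_norm(2.0)
assert abs(n1 - n2) / n1 < 0.05          # equal up to discretization error
```

Since $\int \mathcal{F}^{-1}\chi_\lambda = \chi_\lambda(0) = 1$, the computed norms are also bounded below by roughly one.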
Next, we observe that the uncertainty principle \eqref{eq:ass:uncertainty} is a consequence of the following Logvinenko--Sereda theorem from \cite{Kovrijkine-01}, see also \cite{LogvinenkoS-74, Kovrijkine-00} for predecessors. \begin{Theorem}[Logvinenko--Sereda theorem] \label{Thm:Logvinenko-Sereda_Rd} There exists $K\geq 1$ such that for all $p\in [1,\infty]$, $\lambda > 0$, $\rho \in (0,1]$, $L \in (0,\infty)^d$, $(\rho,L)$-thick sets $\thickset \subset \R^d$, and $f\in L_p(\R^d)$ satisfying $\supp \F f \subset [-\lambda,\lambda]^d$ we have \[ \norm{f}_{L_p(\R^d)} \leq d_0 \euler^{d_1\lambda} \norm{f}_{L_p(\thickset)} , \] where \begin{equation} \label{eq:d0d1} d_0 = \euler^{K d \ln (K^d / \rho)} \quad\text{and}\quad d_1 = 2 \lvert L \rvert_1 \ln (K^d / \rho) . \end{equation} \end{Theorem} Concerning the dissipation estimate \eqref{eq:ass:dissipation}, we first consider the semigroups associated to powers of the Laplacian. \begin{Proposition}\label{Prop:Diss_Laplace} Let $m \in \N$ be even, and $G_t\from L_p(\R^d) \to L_p(\R^d)$ be given by \[ G_tf = \mathcal{F}^{-1}e^{-t\abs{\cdot}^m}\mathcal{F}f\,. \] Then for all $p\in[1,\infty]$, $f \in L_p (\R^d)$, $\lambda > 0$, and $t\geq0$ we have \begin{equation*} \lVert (\operatorname{Id} - P_\lambda) G_t f \rVert_{L_p (\R^d)} \leq K_{m,d} \euler^{-2^{-m-2} t\lambda^m} \lVert f \rVert_{L_p (\R^d)} , \end{equation*} where $K_{m,d} > 0$ is a constant depending only on $m$ and $d$. \end{Proposition} \begin{proof} Let us set $a = \abs{\cdot}^m$. Hence we have for all $f\in L_p(\R^d)$ that \begin{equation*} (\operatorname{Id} - P_\lambda) G_t f = \F^{-1}((1 - \chi_\lambda)\euler^{-ta})\ast f , \end{equation*} and by Young's inequality we obtain for all $\lambda , t > 0$ and all $f \in L_p (\R^d)$ \begin{equation*} \lVert (\operatorname{Id} - P_\lambda) G_t f \rVert_{L_p (\R^d)} \le \lVert \F^{-1}((1 - \chi_\lambda) \euler^{-ta}) \rVert_{L_1 (\R^d)} \lVert f \rVert_{L_p (\R^d)}. 
\end{equation*} For $\mu > 0$ we define $k_\mu \from \R^d \to \R$ by $k_{\mu} = \F^{-1}\left((1 - \chi_\mu) \euler^{-a}\right)$. By substitution first in Fourier space, and then in direct space we obtain, using $\lvert t^{1/m}\xi \rvert^m = \abs{t}\lvert \xi \rvert^m$, for all $\lambda , t > 0$ \begin{align*} \lVert \F^{-1}((1 -&\chi_\lambda) \euler^{-ta}) \rVert_{L_1 (\R^d)} \\ &= \int_{\R^d} \frac{1}{(2\pi)^{d}} \frac{1}{t^{d/m}} \abs{ \int_{\R^d}\euler^{\ii x \cdot (t^{-1/m} \xi)} (1-\chi_{t^{1/m} \lambda} (\xi))\euler^{-\lvert \xi \rvert^m} \drm \xi } \drm x \\ &= \int_{\R^d} \frac{1}{(2\pi)^{d}} \abs{ \int_{\R^d}\euler^{\ii y \cdot \xi} (1-\chi_{t^{1/m} \lambda} (\xi)) \euler^{-\lvert \xi \rvert^m} \drm \xi } \drm y = \lVert k_{t^{1/m} \lambda} \rVert_{L_1 (\R^d)} . \end{align*} We denote by $K_{m,d} > 0$ constants which depend only on $m$ and the dimension $d$. We allow these constants to change with each occurrence. By Young's inequality and \eqref{eq:IndependedOfLambda}, we have \begin{align*} \lVert \F^{-1}(\chi_\mu \euler^{-a}) \rVert_{L_1 (\R^d)} &= \lVert \mathcal{F}^{-1} \chi_\mu \ast \F^{-1} \euler^{-a} \rVert_{L_1 (\R^d)} \leq K_{m,d} . \end{align*} Hence we find for all $\mu > 0$ the uniform bound \begin{equation} \label{eq:kmu-bounded} \lVert k_\mu \rVert_{L_1 (\R^d)} \leq \lVert \mathcal{F}^{-1} \euler^{-a} \rVert_{L_1 (\R^d)} + \lVert \mathcal{F}^{-1} (\chi_\mu \euler^{-a}) \rVert_{L_1 (\R^d)} \leq K_{m,d} . \end{equation} Next we show that the $L_1$-norm of $k_\mu$ decays even exponentially as $\mu$ tends to infinity. For this purpose, let now $\mu \geq 1$, $\alpha \in \N_0^d$ with $\lvert \alpha \rvert_1 \leq d+1$, and denote by $M_\alpha$ the multiplication with $x^\alpha$. 
By differentiation properties of the Fourier transform we have \[ M_\alpha k_\mu = M_\alpha \mathcal{F}^{-1} [(1 - \chi_\mu)\euler^{-a}]= \mathcal{F}^{-1} D_\xi^{\alpha} [(1 - \chi_\mu)\euler^{-a}] \] and hence for all $x \in \R^d$ \begin{align} \lvert x^\alpha k_\mu (x) \rvert &= \biggl\lvert \frac{1}{(2\pi)^d}\int_{\R^d} \euler^{i x \cdot \xi} D_\xi^\alpha [(1 - \chi_\mu (\xi))\euler^{-\lvert \xi \rvert^m}] \drm \xi \biggr\rvert \nonumber \\ &\leq \frac{1}{(2\pi)^d}\int_{\R^d} \bigl\lvert D_\xi^\alpha [(1 - \chi_\mu (\xi))\euler^{-\lvert \xi \rvert^m}] \bigr\rvert \drm \xi . \label{eq:MultSatz} \end{align} On the integrand of the right-hand side we apply the product rule and the triangle inequality to obtain \begin{align} \label{eq:product-rule} \bigl\lvert D_\xi^\alpha [(1 - \chi_\mu (\xi))\euler^{-\lvert \xi \rvert^m}] \bigr\rvert & \leq \sum_{\genfrac{}{}{0pt}{2}{\beta \in \N_0^d}{\beta \leq \alpha}} \binom{\alpha}{\beta} \bigl\lvert D_\xi^{\alpha - \beta} (1 - \chi_\mu (\xi)) \bigr\rvert \bigl\lvert D_\xi^\beta \euler^{-\lvert \xi \rvert^m}\bigr\rvert . \end{align} For all $\beta\in \N_0^d$ and $\beta \leq \alpha$ we have \[ \lvert D_\xi^\beta \euler^{-\lvert \xi \rvert^m} \rvert \leq K_{m,d} (1 + \lvert \xi \rvert)^{\lvert \beta \rvert_1 (m-1)} \euler^{-\lvert \xi \rvert^m} \le K_{m,d} \euler^{-\lvert \xi \rvert^m/2}, \] where for the last inequality we used that $\xi\mapsto (1 + \lvert \xi \rvert)^{\lvert \beta \rvert_1 (m-1)} \euler^{-\lvert \xi \rvert^m / 2}$ is bounded on $\R^d$. 
Since $\mu \geq 1$, for all $\beta\in\N_0^d$, $\beta \leq \alpha$ we have \begin{align*} \bigl\lvert D_\xi^{\alpha - \beta} (1 - \chi_\mu (\xi)) \bigr\rvert &\leq \mu^{\lvert \beta \rvert_1 - \lvert \alpha \rvert_1} (D^{\alpha - \beta}_\xi \chi_1)(\xi/\mu) \leq \sup_{\gamma \leq \alpha} \sup_{\xi \in \R^d} \lvert (D_\xi^\gamma \chi_1) (\xi) \rvert \1_{\R^d \setminus \ballv{\R^d}{\mu/2} }(\xi) \end{align*} and hence \begin{align*} \bigl\lvert D_\xi^{\alpha - \beta} (1 - \chi_\mu (\xi)) \bigr\rvert \bigl\lvert D_\xi^\beta \euler^{-\lvert \xi \rvert^m}\bigr\rvert &\leq K_{m,d} \euler^{-\lvert \xi \rvert^m/2} \1_{\R^d \setminus \ballv{\R^d}{\mu/2} }(\xi) \leq K_{m,d} \euler^{-\lvert \xi \rvert^m/4} \euler^{- \mu^m/ 2^{m+2}} . \end{align*} Thus, \eqref{eq:product-rule} and $\lvert \alpha \rvert_1 \leq d+1$ imply for all $\xi \in \R^d$ that \[ \bigl\lvert D_\xi^\alpha [(1 - \chi_\mu (\xi))\euler^{-\lvert \xi \rvert^m}] \bigr\rvert \leq K_{m,d} \euler^{-\lvert \xi \rvert^m/4} \euler^{-\mu^m/ 2^{m+2}} \sum_{\genfrac{}{}{0pt}{2}{\beta \in \N_0^d}{\beta \leq \alpha}} \binom{\alpha}{\beta} \leq K_{m,d} \euler^{-\lvert \xi \rvert^m/4} \euler^{-\mu^m/ 2^{m+2}}. \] Hence, from \eqref{eq:MultSatz}, for all $x \in \R^d$ we obtain \begin{equation}\label{eq:x^alphakmu} \lvert x^\alpha k_\mu (x) \rvert \leq K_{m,d} \euler^{- \mu^m/ 2^{m+2}} \int_{\R^d} \euler^{-\lvert \xi \rvert^m / 4} \drm \xi = K_{m,d} \euler^{- \mu^m/ 2^{m+2}} . \end{equation} In particular, for $j \in \{1,2,\ldots , d\}$ and $\alpha_j = (d+1)e_j$, where $e_j$ denotes the $j$-th canonical unit vector in $\R^d$, we obtain $\lvert x_j \rvert^{d+1} \lvert k_\mu (x) \rvert \leq K_{m,d} \euler^{- \mu^m/ 2^{m+2}}$, hence $\lVert x \rVert_\infty^{d+1} \lvert k_\mu (x) \rvert \leq K_{m,d} \euler^{- \mu^m/ 2^{m+2}}$, and consequently for all $x \in \R^d$ and all $\mu \geq 1$ we find \begin{equation}\label{eq:consequence} \lvert x \rvert^{d+1} \lvert k_\mu (x) \rvert \leq K_{m,d} \euler^{-\mu^m/ 2^{m+2}}. 
\end{equation} From \eqref{eq:x^alphakmu} with $\alpha = 0$ and \eqref{eq:consequence} we obtain for all $\mu \geq 1$ that \begin{align*} \lVert k_\mu \rVert_{L_1 (\R^d)} &\leq K_{m,d} \euler^{-\mu^m/ 2^{m+2}} \int_{\ballv{\R^d}{1}} \drm x + K_{m,d} \euler^{- \mu^m/ 2^{m+2}} \int_{\R^d \setminus \ballv{\R^d}{1}} \lvert x \rvert^{-d-1} \drm x \\ &\leq K_{m,d} \euler^{-\mu^m/ 2^{m+2}} . \end{align*} From this inequality and \eqref{eq:kmu-bounded} we obtain for all $\mu > 0$ that \begin{equation*} \lVert k_\mu \rVert_{L_1 (\R^d)} \leq K_{m,d} \euler^{-\mu^m/ 2^{m+2}} . \qedhere \end{equation*} \end{proof} We are now in the position to show the dissipation estimate for the general case. \begin{Proposition} \label{Prop:Dissipation} Let $m \in \N$, $a\from \R^d \to \C$ a strongly elliptic polynomial of order $m$, $c>0$ and $\omega\in\R$ as in \eqref{eq:strongly_elliptic}, $(S_t)_{t\geq0}$ as in \eqref{eq:semigroup}, and $(P_\lambda)_{\lambda>0}$ as above. Then for all $p\in[1,\infty]$, $f\in L_p(\R^d)$, $\lambda > (2^{m+4} \max \{\omega , 0\} / c )^{1/m}$, and $t\geq 0$ we have \begin{equation*} \lVert (\operatorname{Id} - P_\lambda) S_t f \rVert_{L_p (\R^d)} \leq K_a \euler^{-2^{-m-4}c t\lambda^m} \lVert f \rVert_{L_p (\R^d)}, \end{equation*} where $K_a \geq 1$ is a constant depending only on the polynomial $a$ (and therefore also on $m$ and $d$). \end{Proposition} \begin{proof}[Proof of Proposition \ref{Prop:Dissipation}] We introduce $\tilde a\from \R^d \to \C$, $\tilde a (\xi) = (c/2)\lvert \xi \rvert^m$. Then $\tilde a$ and $(a - \tilde a)$ are strongly elliptic polynomials of order $m \in \N$. Note that the semigroup associated to $\tilde a$ is $G_{(c/2) t}$ for all $t\geq0$, where $(G_t)_{t\geq 0}$ is as in Proposition~\ref{Prop:Diss_Laplace}. Moreover, let $T_t$ be the semigroup associated to $a - \tilde a$. Since $a = a -\tilde a + \tilde a$, it follows that $S_t = T_t G_{(c/2) t}$. 
One can obtain a corresponding heat kernel bound for the kernel of the semigroup $(T_t)_{t\geq 0}$ with the same growth rate $\omega$ as for $(S_t)_{t\geq 0}$ as follows: Since $m$ has to be even, by Young's inequality for products there exist $\sigma,\tilde{\sigma},C \geq 0$ such that \begin{align*} \re (a - \tilde a)(\xi+\ii\eta) &= \re a(\xi+\ii\eta) - (c/2)\lvert \xi +\ii\eta \rvert^m \\ &\geq (3/4)c \lvert \xi \rvert^m - \sigma \lvert \eta \rvert^m - \omega - (c/2)\lvert \xi +\ii\eta \rvert^m \\ &\geq (3/4)c \lvert \xi \rvert^m - \sigma \lvert \eta \rvert^m - \omega - (c/2)\sum_{k=0}^{m/2} \binom{m/2}{k} (\abs{\xi}^2)^k (\abs{\eta}^2)^{m/2-k} \\ &\geq (3/4)c \lvert \xi \rvert^m - \sigma \lvert \eta \rvert^m - \omega - (c/2)(1+1/4)\lvert \xi\rvert^m - C\lvert \eta \rvert^m \\ &\geq (1/8)c \lvert \xi \rvert^m - \tilde{\sigma} \lvert \eta \rvert^m - \omega , \end{align*} which yields \eqref{eq:strongly_elliptic} with $a$ replaced by $a-\tilde{a}$ and $\xi$ replaced by $\xi + \ii \eta$. Then, arguing as in \cite[Proposition 2.1]{TerElstR-96}, one can prove via a heat kernel bound as in \eqref{eq:kernelbound} that there exists $\tilde M \geq 1$ such that $\lVert T_t \rVert \le \tilde M\euler^{\omega t}$ for all $t \geq 0$. By Proposition~\ref{Prop:Diss_Laplace} and since Fourier multipliers commute, we obtain for all $f \in L_p (\R^d)$ \begin{align*} \lVert (\operatorname{Id} - P_\lambda) S_t f \rVert_{L_p (\R^d)} &= \lVert S_t (\operatorname{Id} - P_\lambda) f \rVert_{L_p (\R^d)} \\ &\leq \lVert T_t \rVert \lVert G_{(c/2)t} (\operatorname{Id} - P_\lambda) f \rVert_{L_p (\R^d)} \\ &\leq \tilde M K_{m,d} \euler^{-t(2^{-m-2} (c/2) \lambda^m - \omega)} \lVert f \rVert_{L_p (\R^d)} , \end{align*} where $K_{m,d} > 0$ is a constant depending only on $m$ and $d$. Since $\lambda > (2^{m+4} \max\{\omega,\allowbreak 0\} / c)^{1/m}$, we have $2^{-m-2}(c/2) \lambda^m - \omega > 2^{-m-2} c \lambda^m / 4 = 2^{-m-4} c \lambda^m$. 
\end{proof} We can finally prove Theorem~\ref{thm:obs}. \begin{proof}[Proof of Theorem~\ref{thm:obs}] Let $(P_\lambda)_{\lambda > 0}$ be the family of operators defined at the beginning of this section. Then we have $\supp \F(P_\lambda f) \subset [-\lambda,\lambda]^d$ for all $\lambda > 0$ and all $f \in L_p(\R^d)$. Thus, Theorem~\ref{Thm:Logvinenko-Sereda_Rd} implies that for all $f \in L_p(\R^d)$ and all $\lambda > 0$ we have \begin{equation*} \norm{P_\lambda f}_{L_p(\R^d)} \leq d_0 \euler^{d_1\lambda}\norm{P_\lambda f}_{L_p(\thickset)} , \end{equation*} where $d_0$ and $d_1$ are as in \eqref{eq:d0d1}. Moreover, according to Proposition~\ref{Prop:Dissipation}, for all $\lambda > \lambda^*$ and all $f \in L_p(\R^d)$ we have \begin{equation*} \norm{(I - P_\lambda)S_t f}_{L_p(\R^d)} \leq d_2\euler^{-d_3\lambda^mt}\norm{f}_{L_p(\R^d)} , \end{equation*} where $\lambda^* = (2^{m+4} \max \{\omega , 0\} / c)^{1/m}$, $d_2 \geq 1$ depends only on the polynomial $a$, and where $d_3 = 2^{-m-4} c$. Moreover, the function $t\mapsto \norm{(S_t f)|_\thickset}_{L_p (\thickset)}$ is Borel-measurable for all $f\in L_p(\R^d)$. Indeed, if $p \in [1,\infty)$ the semigroup $(S_t)_{t \geq 0}$ is strongly continuous and the measurability follows. If $p = \infty$, measurability is a consequence of duality and the representation of the norm in $L_\infty(\thickset)$ by means of the Hahn--Banach theorem. 
Hence we can apply Theorem~\ref{thm:spectral+diss-obs} with $X = L_p(\R^d)$, $Y = L_p(\thickset)$, $C\from X \to Y$ given by the restriction map on $\thickset$, and obtain that the statement of the theorem holds with $C_{\mathrm{obs}}$ replaced by \begin{align*} \tilde C_{\mathrm{obs}} = \frac{C_1}{T^{1/r}} \exp \left(\frac{C_2}{T^{\frac{1}{m - 1}}} + C_3 T\right), \end{align*} where $T^{1/r} = 1$ if $r=\infty$, and \begin{align*} C_1 &= (4 M d_0) \max \Bigl\{\left( 4d_2 M^2 (d_0 +1) \right)^{8/(\euler \ln 2)}, \euler^{4d_1 2\lambda^*}\Bigr\}, \\ C_2 &= 4 \bigl(2 \cdot 8^\frac{m}{m-1} d_1^{m} / d_3 \bigr)^{\frac{1}{m-1}} , \\[1ex] C_3 & = \max\{\omega , 0\} \bigl(1 + 10 / (\euler \ln 2) \bigr), \end{align*} with $M$ as in \eqref{eq:realpart}. We denote by $K_d$, $K_m$, and $K_a$ positive constants which depend only on the dimension $d$, on $m$, or on the polynomial $a$, respectively. A straightforward calculation shows that \begin{equation*} C_1 \leq K_a \left( \frac{K_d}{\rho} \right)^{K_d(1+\lvert L \rvert_1 \lambda^*)} \quad\text{and}\quad C_2 \leq \frac{K_m (\lvert L \rvert_1 \ln (K_d / \rho))^{m/(m-1)}}{c^{1/(m-1)}} . \end{equation*} Thus we obtain \[ \tilde C_{\mathrm{obs}} \leq \frac{K_a}{T^{1/r}} \left( \frac{K_d}{\rho} \right)^{K_d(1+\lvert L \rvert_1 \lambda^*)} \exp \left(\frac{K_m (\lvert L \rvert_1 \ln (K_d / \rho))^{m/(m-1)}}{(cT)^{1 / (m - 1)}} + C_3 T\right) =: C_{\mathrm{obs}} . \qedhere \] \end{proof}
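The interplay between the sharp frequency cutoff and the semigroup decay in Proposition~\ref{Prop:Diss_Laplace} can also be observed numerically. The following Python sketch is purely illustrative and not part of the proofs: it uses a discrete FFT on a periodic grid in place of the continuous Fourier transform, for the 1D heat semigroup ($m=2$); the grid parameters, the test function, and the cutoff $\lambda=3$ are arbitrary choices.

```python
import numpy as np

# 1D periodic grid; the discrete FFT stands in for the continuous Fourier transform.
N, box = 4096, 40.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=box / N)

f = np.exp(-x ** 2) * np.cos(7.0 * x)  # test function with spectral mass above the cutoff
fhat = np.fft.fft(f)

m, lam = 2, 3.0                        # heat semigroup a(xi) = |xi|^2, cutoff lambda
high = np.abs(xi) > lam                # sharp cutoff standing in for Id - P_lambda

def tail_mass(t):
    # total spectral mass of (Id - P_lambda) G_t f above the cutoff
    return np.sum(np.abs(np.exp(-t * np.abs(xi) ** m) * fhat)[high])

# On |xi| > lam we have e^{-t2 xi^2} <= e^{-t1 xi^2} e^{-(t2 - t1) lam^2}, so the
# high-frequency mass must decay at least at rate exp(-(t2 - t1) lam^m).
ratio = tail_mass(2.0) / tail_mass(0.5)
print(ratio, np.exp(-1.5 * lam ** m))
```

The printed ratio sits below $\euler^{-(t_2-t_1)\lambda^m}$, consistent with the exponential dissipation rate in the proposition (the constants, of course, are not reproduced by this discrete toy computation).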
\section{Introduction} We study learning the best possible single neuron that captures the relationship between the input $x\in \mathbb{R}^d$ and the output label $y\in \mathbb{R}$ as measured by the
expected square loss over some unknown but fixed distribution $(x,y)\sim \mathcal{D}$. In particular, for a given activation function $\sigma:\mathbb{R}\to \mathbb{R}$, we define the population risk $F(w)$ associated with a set of weights $w$ as \begin{equation} F(w) := (1/2) \mathbb{E}_{(x,y) \sim \mathcal{D}} \l[ \l( \sigma(w^\top x) - y \r)^2\r]. \label{eq:optim.obj} \end{equation} The activation function is assumed to be non-decreasing and Lipschitz, and includes nearly all activation functions used in neural networks such as the rectified linear unit (ReLU), sigmoid, $\tanh$, and so on. In the agnostic PAC learning setting \citep{kearns.agnostic}, no structural assumption is made regarding the relationship of the input and the label, and so the best-fitting neuron could, in the worst case, have nontrivial population risk. Concretely, if we denote \begin{equation} v := \mathrm{argmin}_{\pnorm w2 \leq 1} F(w),\quad \mathsf{OPT}:= F(v), \label{def:v.pop.risk.minimizer} \end{equation} then the goal of a learning algorithm is to (efficiently) return weights $w$ such that the population risk $F(w)$ is close to the best possible risk $\mathsf{OPT}$. The agnostic learning framework stands in contrast to the \textit{realizable} PAC learning setting, where one assumes $\mathsf{OPT}=0$, so that there exists some $v$ such that the labels are given by $y=\sigma(v^\top x)$. The learning algorithm we consider in this paper is empirical risk minimization using vanilla gradient descent. We assume we have access to a set of i.i.d. samples $\{(x_i,y_i)\}_{i=1}^n\sim \mathcal{D}^n$, and we run gradient descent with a fixed step size on the empirical risk $\hat F(w) = (1/2n)\textstyle \summ i n (\sigma(w^\top x_i)-y_i)^2$. 
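For concreteness, the full-batch update on $\hat F$ can be written out in a few lines of NumPy. The sketch below is illustrative only: the sigmoid activation, the synthetic noisy-teacher data, the step size, and the iteration count are placeholder choices, not quantities taken from the analysis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_emp_risk(w, X, y):
    # nabla F_hat(w) = (1/n) sum_i (sigma(w.x_i) - y_i) sigma'(w.x_i) x_i,
    # with sigma'(z) = sigma(z)(1 - sigma(z)) for the sigmoid.
    s = sigmoid(X @ w)
    return X.T @ ((s - y) * s * (1.0 - s)) / len(y)

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d)) / np.sqrt(d)        # features with norm roughly one
v = rng.normal(size=d)
v /= np.linalg.norm(v)                          # unit-norm target neuron
y = sigmoid(X @ v) + 0.01 * rng.normal(size=n)  # noisy-teacher labels

emp_risk = lambda w: 0.5 * np.mean((sigmoid(X @ w) - y) ** 2)

w = np.zeros(d)                                 # initialize at the origin
eta = 1.0                                       # fixed step size
risk0 = emp_risk(w)
for _ in range(2000):
    w = w - eta * grad_emp_risk(w, X, y)
print(risk0, emp_risk(w))
```

Despite the nonconvexity of $\hat F$, on this toy instance plain gradient descent from the origin drives the empirical risk down to roughly the noise floor.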
A number of early neural network studies pointed out that the landscape of the empirical risk of a single neuron has unfavorable properties, such as a large number of spurious local minima~\citep{brady1989,auer1995}, and led researchers to instead study gradient descent on a convex surrogate loss~\citep{helmbold95worstcase,helmbold99relativeloss}. Despite this, we are able to show that gradient descent on the empirical risk itself finds weights that not only have small empirical risk but small population risk as well. Surprisingly little is known about neural networks trained by minimizing the empirical risk with gradient descent in the agnostic PAC learning setting. We are aware of two works \citep{allenzhu.3layer,allenzhu.kernel} in the \textit{improper} agnostic learning setting, where the goal is to return a hypothesis $h\in \mathcal{H}$ that achieves population risk close to $\hat \mathsf{OPT}$, where $\hat \mathsf{OPT}$ is the smallest possible population risk achieved by a different set of hypotheses $\hat \mathcal{H}$. Another work considered the random features setting where only the final layer of the network is trained and the marginal distribution over $x$ is uniform on the unit sphere~\citep{vempala}. But none of these address the simplest possible neural network: that of a single neuron $x\mapsto \sigma(w^\top x)$. We believe a full characterization of what we can (or cannot) guarantee for gradient descent in the single neuron setting will help us understand what is possible in the more complicated deep neural network setting. Indeed, two of the most common hurdles in the analysis of deep neural networks trained by gradient descent---nonconvexity and nonsmoothness---are also present in the case of the single neuron. We hope that our analysis in this relatively simple setup will be suggestive of what is possible in more complicated neural network models. Our main contributions can be summarized as follows. 
\begin{enumerate}[1)] \item \textbf{Agnostic setting} (Theorem \ref{theorem:agnostic}). Without any assumptions on the relationship between $y$ and $x$, and assuming only boundedness of the marginal distributions of $x$ and $y$, we show that for any $\varepsilon>0$, gradient descent finds a point $w_t$ with population risk $O(\mathsf{OPT}) + \varepsilon$ with sample complexity $O(\varepsilon^{-2})$ and runtime $O(\varepsilon^{-1})$ when $\sigma(\cdot)$ is strictly increasing and Lipschitz. When $\sigma$ is ReLU, we obtain a population risk guarantee of $O(\mathsf{OPT}^{1/2})+\varepsilon$ with sample complexity $O(\varepsilon^{-4})$ and runtime $O(\varepsilon^{-2})$ when the marginal distribution of $x$ satisfies a nondegeneracy condition (Assumption \ref{assumption:marginal.spread}). The sample and runtime complexities are independent of the input dimension for both strictly increasing activations and ReLU. \item \textbf{Noisy teacher network setting} (Theorem \ref{theorem:glm}). When $y = \sigma(v^\top x) + \xi$, where $\xi|x$ is zero-mean and sub-Gaussian (and possibly dependent on $x$), we demonstrate that gradient descent finds $w_t$ satisfying $F(w_t) \leq \mathsf{OPT} + \varepsilon$ for activation functions that are strictly increasing and Lipschitz assuming only boundedness of the marginal distribution over $x$. The same result holds for ReLU under a marginal spread assumption (Assumption \ref{assumption:marginal.spread}). The runtime and sample complexities are of order $\tilde O(\varepsilon^{-2})$, with a logarithmic dependence on the input dimension. When the noise is bounded, our guarantees are dimension independent. If we further know $\xi \equiv 0$, i.e. the learning problem is in the realizable rather than agnostic setting, we can improve the runtime and sample complexity guarantees from $O(\varepsilon^{-2})$ to $O(\varepsilon^{-1})$ by using online stochastic gradient descent (Theorem \ref{theorem:gd.loss}). 
\end{enumerate} \section{Related work} Below, we provide a high-level summary of related work in the agnostic learning and teacher network settings. Detailed comparisons with the most related works will appear after we present our main theorems in Sections \ref{sec:agnostic} and \ref{sec:noisy}. In Appendix \ref{appendix:comparisons}, we provide tables that describe the assumptions and complexity guarantees of our work in comparison to related results. \noindent \textbf{Agnostic learning:} The simplest version of the agnostic regression problem is to find a hypothesis that matches the performance of the best \textit{linear} predictor. In our setting, this corresponds to $\sigma$ being the identity function. This problem is completely characterized:~\citet{shamir15} showed that any algorithm that returns a linear predictor $v$ has risk $\mathsf{OPT} + \Omega(\varepsilon^{-2}\wedge d\varepsilon^{-1})$ when the labels satisfy $|y|\leq 1$ and the marginal distribution over $x$ is supported on the unit ball, matching upper bounds proved by~\citet{srebro.mirror} using mirror descent. When $\sigma$ is not the identity, related works are scarce.~\citet{goel2017relupoly} studied agnostic learning of the ReLU on distributions supported on the unit sphere but had runtime and sample complexity exponential in $\varepsilon^{-1}$. In another work on learning a single ReLU,~\citet{goel2019relugaussian} showed that learning up to risk $\mathsf{OPT}+\varepsilon$ in polynomial time is as hard as the problem of learning sparse parities with noise, long believed to be computationally intractable. Additionally, they provided an approximation algorithm that could learn up to $O(\mathsf{OPT}^{2/3})+\varepsilon$ risk in $\poly(d, \varepsilon^{-1})$ time and sample complexity when the marginal distribution over $x$ is a standard Gaussian. 
In a related but incomparable set of results in the improper agnostic learning setting,~\citet{allenzhu.3layer} and \citet{allenzhu.kernel} showed that multilayer ReLU networks trained by gradient descent can match the population risk achieved by multilayer networks with smooth activation functions.~\citet{vempala} studied agnostic learning of a one-hidden-layer neural network when the first layer is fixed at its (random) initial values and the second layer is trained. A very recent work by \citet{diakonikolas2020approximation} showed that population risk $O(\mathsf{OPT})+\varepsilon$ can be achieved for the single ReLU neuron by appealing to gradient descent on a convex surrogate for the empirical risk. \noindent\textbf{Teacher network:} The literature refers to the case of $y= \sigma(v^\top x) + \xi$ for some possible zero mean noise $\xi$ variously as the ``noisy teacher network'' or ``generalized linear model'' (GLM) setting, and is related to the probabilistic concepts model~\citep{kearns.probabilistic}. In the GLM setting, $\sigma$ plays the role of the inverse link function; in the case of logistic regression, $\sigma$ is the sigmoid function. The results in the teacher network setting can be broadly characterized by (1) whether they cover arbitrary distributions over $x$ and (2) the presence of noise (or lack thereof). 
The GLMTron algorithm proposed by~\citet{kakade2011}, itself a modification of the Isotron algorithm of~\citet{kalai2009isotron}, is known to learn a noisy teacher network up to risk $\mathsf{OPT}+\varepsilon$ for any Lipschitz and non-decreasing $\sigma$ and any distribution with bounded marginals over $x$.~\citet{mei2016landscape} showed that gradient descent learns the noisy teacher network under a smoothness assumption on the activation function for a large class of distributions.~\citet{foster2018} provided a meta-algorithm for translating $\varepsilon$-stationary points of the empirical risk to points of small population risk in the noisy teacher network setting. A recent work by~\citet{mukherjee} develops a modified SGD algorithm for learning a ReLU with bounded adversarial noise on distributions where the input is bounded. Of course, any guarantee that holds for a neural network with a single fully connected hidden layer of arbitrary width holds for the single neuron, so in this sense our work can be connected to a larger body of work on the analysis of gradient descent used for learning neural networks. The majority of such works are restricted to particular input distributions, whether it is Gaussian or uniform distributions~\citep{soltanolkotabi2017relus,tian2017relu,soltanolkotabi2019theoretical,zhanggu2019,goel.convotron,cao2019cnn}.~\citet{du2017} showed that in the noiseless (a.k.a. realizable) setting, a single neuron can be learned with SGD if the input distribution satisfies a certain subspace eigenvalue property.~\citet{yehudai20} studied the properties of learning a single neuron for a variety of increasing and Lipschitz activation functions using gradient descent, as we do in this paper, although their analysis was restricted to the noiseless setting. 
\section{Agnostic learning setting} \label{sec:agnostic} We begin our analysis by assuming there is no \textit{a priori} relationship between $x$ and $y$, so the population risk $\mathsf{OPT}$ of the population risk minimizer $v$ defined in \eqref{def:v.pop.risk.minimizer} may, in general, be a large quantity. If $\mathsf{OPT} =0$, then $\sigma(v^\top x) = y$ a.s. and the problem is in the realizable PAC learning setting. In this case, we can use a modified proof technique to get stronger guarantees for the population risk; see Appendix \ref{appendix:realizable} for the complete theorems and proofs in this setting. We will thus assume without loss of generality that $0 < \mathsf{OPT} \leq 1$. The gradient descent method we use in this paper is as follows. We assume we have a training sample $\{(x_i,y_i)\}_{i=1}^n\stackrel{\rm i.i.d.}{\sim} \mathcal{D}^n$, and define the empirical risk for weight $w$ by \[ \hat F(w) = (1/2n)\textstyle \summ i n (\sigma(w^\top x_i) - y_i)^2.\] We perform full-batch gradient updates on the empirical risk using a fixed step size $\eta$, \begin{equation} w_{t+1} = w_t - \eta \nabla \hat F(w_t) = w_t - (\eta/n) \textstyle \summ i n (\sigma(w_t^\top x_i) - y_i) \sigma'(w_t^\top x_i) x_i, \label{eq:gd.updates} \end{equation} where $\sigma'(\cdot)$ is the derivative of $\sigma(\cdot)$. If $\sigma$ is not differentiable at a point $z$, we will use its subderivative. We begin by describing one set of activation functions under consideration in this paper. \begin{assumption} \begin{enumerate}[(a)] \item $\sigma$ is continuous, non-decreasing, and differentiable almost everywhere. \item For any $\rho > 0$, there exists $\gamma >0$ such that $\inf_{|z| \leq \rho} \sigma'(z) \geq \gamma > 0$. If $\sigma$ is not differentiable at $z\in[-\rho,\rho]$, assume that every subderivative $g$ on the interval satisfies $g(z)\geq \gamma$. \item $\sigma$ is $L$-Lipschitz, i.e. 
$|\sigma(z_1)-\sigma(z_2)|\leq L|z_1-z_2|$ for all $z_1,z_2$. \end{enumerate} \label{assumption:activation.fcn} \end{assumption} We note that if $\sigma$ is strictly increasing and continuous, then $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}(b) since its derivative is never zero. In particular, the assumption covers the typical activation functions in neural networks like leaky ReLU, softplus, sigmoid, tanh, etc., but excludes ReLU. \citet{yehudai20} recently showed that when $\sigma$ is ReLU, there exists a distribution $\mathcal{D}$ supported on the unit ball and unit length target neuron $v$ such that \textit{even in the realizable case} of $y = \sigma(v^\top x)$, if the weights are initialized randomly using a product distribution, there exists a constant $c_0$ such that with high probability, $F(w_t) \geq c_0 >0$ throughout the trajectory of gradient descent. This suggests that gradient-based methods for learning ReLUs are likely to fail without additional assumptions. Because of this, they introduced the following marginal spread assumption to handle the learning of ReLU. \begin{assumption} There exist constants $\alpha, \beta >0$ such that the following holds. For any $w\neq u$, denote by $\mathcal{D}_{w,u}$ the marginal distribution of $\mathcal{D}$ on $\mathrm{span}(w,u)$, viewed as a distribution over $\mathbb{R}^2$, and let $p_{w,u}$ be its density function. Then $\inf_{z\in \mathbb{R}^2:\norm{z}\leq \alpha} p_{w,u}(z) \geq \beta$. \label{assumption:marginal.spread} \end{assumption} This assumption covers, for instance, log-concave distributions like the Gaussian and uniform distribution with $\alpha, \beta = O(1)$~\citep{lovasz}. We note that a similar assumption was used in recent work on learning halfspaces with Massart noise~\citep{diakonikolas2020}. We will use this assumption for all of our results when $\sigma$ is ReLU. 
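To illustrate Assumption \ref{assumption:marginal.spread}: every two-dimensional marginal of a standard Gaussian on $\mathbb{R}^d$ is again a standard 2D Gaussian, whose radially decreasing density attains its infimum over the disc of radius $\alpha$ on the boundary. The following small numerical check (with the arbitrary choice $\alpha = 1$, so $\beta = e^{-1/2}/(2\pi)$) is only a sanity check, not part of the argument.

```python
import numpy as np

# Standard 2D Gaussian density; any 2D marginal p_{w,u} of a standard Gaussian
# on R^d has exactly this form, and it is radially decreasing.
p = lambda z: np.exp(-0.5 * np.sum(z * z, axis=-1)) / (2.0 * np.pi)

alpha = 1.0
beta = np.exp(-0.5 * alpha ** 2) / (2.0 * np.pi)   # analytic infimum over the disc

rng = np.random.default_rng(2)
zs = rng.uniform(-alpha, alpha, size=(100_000, 2))
zs = zs[np.linalg.norm(zs, axis=1) <= alpha]       # keep points inside the disc
print(p(zs).min(), beta)                           # sampled minimum stays above beta
```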
Additionally, although the ReLU is not differentiable at the origin, we will denote by $\sigma'(0)$ its subderivative, with the convention that $\sigma'(0)=1$. Such a convention is consistent with the implementation of ReLUs in modern deep learning software packages. With the above in hand, we can describe our main theorem. \begin{theorem}\label{theorem:agnostic} Suppose the marginals of $\mathcal{D}$ satisfy $\pnorm{x}2\leq B_X$ a.s. and $|y|\leq B_Y$ a.s. Let $a:=(|\sigma(B_X)|+B_Y)^2$. When $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}, let $\gamma>0$ be the constant corresponding to $\rho=2B_X$ and fix a step size $\eta \leq (1/8)\gamma L^{-3} B_X^{-2}$. For any $\delta>0$, with probability at least $1-\delta$, gradient descent initialized at the origin and run for $T = \ceil{\eta^{-1}\gamma^{-1} L^{-1} B_X^{-1} [\mathsf{OPT} + a n^{-1/2} \log^{1/2}(4/\delta)]^{-1}}$ iterations finds weights $w_t$, $t<T$, such that \begin{equation} F(w_t) \leq C_1 \mathsf{OPT} + C_2 n^{-1/2}, \label{eq:agnostic.F.bound} \end{equation} where $C_1 = 12\gamma^{-3} L^3 + 2$ and $C_2 = O(L^3 B_X^2\sqrt{\log(1/\delta)} + C_1 a \sqrt{\log(1/\delta)})$. When $\sigma$ is ReLU, further assume that $\mathcal{D}_x$ satisfies Assumption \ref{assumption:marginal.spread} for constants $\alpha, \beta >0$, and let $\nu = \alpha^4 \beta / 8 \sqrt 2$. Fix a step size $\eta \leq (1/4) B_X^{-2}$. For any $\delta>0$, with probability at least $1-\delta$, gradient descent initialized at the origin and run for $T = \ceil{\eta^{-1}B_X^{-1}[\mathsf{OPT} +an^{-1/2}\log^{1/2}(4/\delta)]^{-1/2}}$ iterations finds a point $w_t$ such that \begin{equation} F(w_t) \leq C_1 \mathsf{OPT}^{1/2} + C_2 n^{-1/4}+ C_3 n^{-1/2}, \label{eq:agnostic.F.bound.relu} \end{equation} where $C_1 = O(B_X \nu^{-1})$, $C_2 = O( C_1 a^{1/2} \log^{1/4} (1/\delta))$, and $C_3 = O(B_X^2 \nu^{-1} \log^{1/2}(1/\delta))$. 
\end{theorem} We remind the reader that the optimization problem for the empirical risk is highly nonconvex~\citep{auer1995} and thus any guarantee for the empirical risk, let alone the population risk, is nontrivial. This makes us unsure if the suboptimal guarantee of $O(\mathsf{OPT}^{1/2})$ for ReLU is an artifact of our analysis or a necessary consequence of nonconvexity. In comparison to recent work,~\citet{goel2019relugaussian} considered the agnostic setting for the ReLU activation when the marginal distribution over $x$ is a standard Gaussian and showed that learning up to risk $\mathsf{OPT}+\varepsilon$ is as hard as learning sparse parities with noise. By using an approximation algorithm of~\citet{awasthi}, they were able to show that one can learn up to $O(\mathsf{OPT}^{2/3})+\varepsilon$ with $O(\mathrm{poly}(d, \varepsilon^{-1}))$ runtime and sample complexity. In a very recent work, \citet{diakonikolas2020approximation} improved the population risk guarantee for the ReLU to $O(\mathsf{OPT}) + \varepsilon$ when the features are sampled from an isotropic log-concave distribution by analyzing gradient descent on a convex surrogate loss. Projected gradient descent on this surrogate loss produces the weight updates of the GLMTron algorithm of~\citet{kakade2011}. Using the solution found by gradient descent on the surrogate loss, they proposed an improper learning algorithm that improves the population risk guarantee from $O(\mathsf{OPT})+\varepsilon$ to $(1+\delta) \mathsf{OPT} + \varepsilon$ for any $\delta>0$. By contrast, we show that gradient descent on the empirical risk learns up to a population risk of $O(\mathsf{OPT})+\varepsilon$ for \textit{any} joint distribution with bounded marginals when $\sigma$ is strictly increasing and Lipschitz, even though the optimization problem is nonconvex. 
In the case of ReLU, our guarantee holds for the class of bounded distributions over $x$ that satisfy the marginal spread condition of Assumption \ref{assumption:marginal.spread} and hence covers (bounded) log-concave distributions, although the guarantee is $O(\mathsf{OPT}^{1/2})$ in this case. For all activation functions we consider, the runtime and sample complexity guarantees do not have (explicit) dependence on the dimension.\footnote{We note that for some distributions, the $B_X$ term may hide an implicit dependence on $d$; more detailed comments on this are given in Appendix \ref{appendix:comparisons}.} Moreover, we shall see in the next section that if the data is known to come from a noisy teacher network, the guarantees of gradient descent improve to $\mathsf{OPT}+\varepsilon$ for both strictly increasing activations and ReLU. In the remainder of this section we will prove Theorem \ref{theorem:agnostic}. Our proof relies upon the following auxiliary errors for the true risk $F$: \begin{align} \nonumber G(w) &:= (1/2) \mathbb{E}_{(x,y)\sim \mathcal{D}} \l[ \l( \sigma(w^\top x) - \sigma(v^\top x) \r)^2 \r],\\ \label{def:auxiliary.loss} H(w) &:= (1/2)\mathbb{E}_{(x,y)\sim \mathcal{D}} \l[ \l( \sigma(w^\top x) - \sigma(v^\top x) \r)^2 \sigma'(w^\top x) \r]. \end{align} We will denote the corresponding empirical risks by $\hat G(w)$ and $\hat H(w)$. We first note that $G$ trivially upper bounds $F$: this follows by a simple application of Young's inequality and, when $\mathbb{E}[y|x]~=~\sigma(v^\top x)$, by using iterated expectations. \begin{claim} \label{claim:Gbound:implies:Fbound} For any joint distribution $\mathcal{D}$, for any vector $u$, and any continuous activation function $\sigma$, $F(u) \leq 2 G(u) + 2 F(v)$. If additionally we know that $\mathbb{E}[y|x] = \sigma(v^\top x)$, we have $F(u)~=~G(u)~+~F(v)$. \end{claim} This claim shows that in order to show the population risk is small, it suffices to show that $G$ is small. 
It is easy to see that if $\inf_{z\in \mathbb{R}} \sigma'(z) \geq \gamma > 0$, then $H(w)\leq \varepsilon$ implies $G(w) \leq \gamma^{-1} \varepsilon$, but the only typical activation function that satisfies this condition is the leaky ReLU. Fortunately, when $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}, or when $\sigma$ is ReLU and $\mathcal{D}$ satisfies Assumption \ref{assumption:marginal.spread}, Lemma \ref{lemma:Hsurrogate} below shows that $H$ is still an upper bound for $G$. The proof is deferred to Appendix~\ref{appendix:Hsurrogate}. \begin{lemma} \label{lemma:Hsurrogate} \label{lemma:relu.of.implies.f} If $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}, $\pnorm{x}2\leq B$ a.s., and $\pnorm{w}2\leq W$, then for $\gamma$ corresponding to $\rho = W B$, $H(w)\leq \varepsilon$ implies $G(w)\leq \gamma^{-1} \varepsilon$. If $\sigma$ is ReLU and $\mathcal{D}$ satisfies Assumption \ref{assumption:marginal.spread} for some constants $\alpha, \beta >0$, and if for some $\varepsilon>0$ the bound $H(w)\leq \beta \alpha^4 \varepsilon / 8 \sqrt 2$ holds, then $\pnorm{w-v}2 \leq 1$ implies $G(w) \leq \varepsilon$. \end{lemma} Claim \ref{claim:Gbound:implies:Fbound} and Lemma \ref{lemma:Hsurrogate} together imply that if gradient descent finds a point with auxiliary error $H(w_t) \leq O(\mathsf{OPT}^\alpha)$ for some $\alpha \leq 1$, then gradient descent achieves population risk $O(\mathsf{OPT}^\alpha)$. In the remainder of this section, we will show that this is indeed the case. In Section \ref{sec:strictly.increasing.activation}, we first consider activations satisfying Assumption \ref{assumption:activation.fcn}, for which we are able to show $H(w_t) \leq O(\mathsf{OPT})$. In Section \ref{sec:relu.activation}, we show $H(w_t)\leq O(\mathsf{OPT}^{1/2})$ for the ReLU. 
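Claim~\ref{claim:Gbound:implies:Fbound} is a pointwise consequence of Young's inequality $(a+b)^2 \leq 2a^2 + 2b^2$ applied to $a = \sigma(u^\top x) - \sigma(v^\top x)$ and $b = \sigma(v^\top x) - y$, and as such it holds for any reference vector $v$, not only the risk minimizer. A quick Monte Carlo sanity check on a synthetic empirical distribution (all data below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

# A made-up finite sample standing in for the distribution D; the labels are
# deliberately not realizable, so F(v) > 0.
n, d = 1000, 4
X = rng.normal(size=(n, d))
v = rng.normal(size=d)
v /= np.linalg.norm(v)
y = relu(X @ v) + rng.uniform(-0.5, 0.5, size=n)

F = lambda w: 0.5 * np.mean((relu(X @ w) - y) ** 2)            # risk of w
G = lambda w: 0.5 * np.mean((relu(X @ w) - relu(X @ v)) ** 2)  # auxiliary error

# F(u) <= 2 G(u) + 2 F(v) must hold for every u, so the largest gap is negative.
gap = max(F(u) - 2.0 * G(u) - 2.0 * F(v) for u in rng.normal(size=(100, d)))
print("largest gap:", gap)
```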
\subsection{Strictly increasing activations}\label{sec:strictly.increasing.activation} In Lemma \ref{lemma:agnostic.key.strictly.increasing} below, we show that $\hat H(w_t)$ is a natural quantity of the gradient descent algorithm that in a sense measures how good a direction the gradient points in at time $t$, and that $\hat H(w_t)$ can be as small as $O(\hat F(v))$. Our proof technique is similar to that of~\citet{kakade2011}, who studied the GLMTron algorithm in the (non-agnostic) noisy teacher network setup. \begin{lemma}\label{lemma:agnostic.key.strictly.increasing} Suppose that $\pnorm{x}2 \leq B_X$ a.s. under $\mathcal{D}_x$. Suppose $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}, and let $\gamma$ be the constant corresponding to $\rho=2B_X$. Assume $\hat F(v)>0$. Gradient descent with fixed step size $\eta \leq (1/8) \gamma L^{-3} B_X^{-2}$ initialized at $w_0=0$ finds weights $w_t$ satisfying $\hat H(w_t) \leq 6 L^3\gamma^{-2} \hat F(v)$ within $T = \ceil{ \eta^{-1} \gamma^{-1} L^{-1} B_X^{-1} \hat F(v)^{-1}}$ iterations, with $\pnorm {w_t-v}2 \leq 1$ for each $t=0, \dots, T-1$. \end{lemma} Before beginning the proof, we first note the following fact, which allows us to connect terms that appear in the gradient to the square loss. \begin{fact}\label{fact:sigma.strictly.increasing} If $\sigma$ is strictly increasing on an interval $[a,b]$ with $\sigma'(z) \geq \gamma>0$ for all $z\in [a,b]$, and if $z_1,z_2\in [a,b]$, then it holds that \begin{equation} \gamma (z_1 - z_2)^2 \leq \l( \sigma(z_1) - \sigma(z_2) \r) (z_1 -z_2). \label{eq:lb.identity.zs} \end{equation}\end{fact} \begin{proof}[Proof of Lemma \ref{lemma:agnostic.key.strictly.increasing}] The proof rests on the following induction claim. We claim that for every $t\in \mathbb{N}$, either (a) $\hat H(w_\tau) \leq 6 L^3 \gamma^{-2} \hat F(v)$ for some $\tau < t$, or (b) $\pnorm{w_t-v}2^2 \leq \pnorm{w_{t-1}-v}2^2 - \eta L\hat F(v)$ holds.
If this claim is true, then until gradient descent finds a point where $\hat H(w_t) \leq 6 L^3 \gamma^{-2} \hat F(v)$, the squared distance $\pnorm{w_t-v}2^2$ decreases by $\eta L \hat F(v)$ at every iteration. Since $\pnorm{w_0-v}2^2 = 1$, this means there can be at most $1/(\eta L\hat F(v)) = \eta^{-1} L^{-1} \hat F(v)^{-1}$ iterations until we reach $\hat H(w_t) \leq 6 L^3 \gamma^{-2} \hat F(v)$. So let us now suppose the induction hypothesis holds for $t$, and consider the case $t+1$. If (a) holds, then we are done. So now consider the case that for every $\tau \leq t$, we have $\hat H(w_\tau) > 6 L^3 \gamma^{-2} \hat F(v)$. Since (a) does not hold, $\pnorm{w_\tau -v}2^2 \leq \pnorm{w_{\tau-1}-v}2^2 - \eta L \hat F(v)$ holds for each $\tau=1, \dots, t$, and so $\pnorm{w_0-v}2=1$ implies \begin{equation} \pnorm{w_\tau - v}2 \leq 1\ \forall \tau \leq t. \label{eq:bounded.weights.agnostic} \end{equation} In particular, $\pnorm {w_\tau}2 \leq 1 + \pnorm{v}2 \leq 2$ holds for all $\tau\leq t$. By Cauchy--Schwarz, this implies $|w_\tau^\top x|\vee |v^\top x| \leq 2 B_X$ a.s. By defining $\rho = 2 B_X$ and letting $\gamma$ be the constant from Assumption \ref{assumption:activation.fcn}, this implies $\sigma'(z) \geq \gamma>0$ for all $|z|\leq 2 B_X$. Fact \ref{fact:sigma.strictly.increasing} therefore implies \begin{equation} \sigma'(w_\tau^\top x) \geq \gamma > 0 \quad \text { and } \quad (\sigma(w_\tau^\top x) - \sigma(v^\top x))\cdot(w_\tau^\top x - v^\top x) \geq \gamma (w_\tau^\top x- v^\top x)^2 \quad \forall \tau \leq t. \label{eq:lb.identity} \end{equation} We proceed with the proof by demonstrating an appropriate lower bound for the quantity \[ \pnorm{w_t-v}2^2 - \pnorm{w_{t+1}-v}2^2 = 2\eta \ip{\nabla \hat F(w_t)}{w_t-v} - \eta^2 \pnorm{\nabla \hat F(w_t)}2^2.\] We begin with the inner product term. 
We have \begin{align} \nonumber \big \langle \nabla \hat F(w_t) , w_t-v\big \rangle &= (1/n) \summ i n \l( \sigma(w_t^\top x_i) - \sigma(v^\top x_i) \r) \sigma'(w_t^\top x_i) (w_t^\top x_i - v^\top x_i) \\ \nonumber &\quad + (1/n) \summ i n \l( \sigma(v^\top x_i) - y_i \r) \gamma^{-1/2} \cdot \gamma^{1/2} \sigma'(w_t^\top x_i)(w_t^\top x_i -v^\top x_i)\\ \nonumber &\geq (\gamma/n) \summ i n \l( w_t^\top x_i - v^\top x_i \r)^2 \sigma'(w_t^\top x_i)\\ \nonumber &\quad - \frac {\gamma^{-1}}{2n} \summ i n \l( \sigma(v^\top x_i) - y_i\r)^2 \sigma'(w_t^\top x_i) - \frac \gamma {2n} \summ i n \l( w_t^\top x_i - v^\top x_i\r)^2 \sigma'(w_t^\top x_i)\\ \nonumber &\geq \frac \gamma {2n} \summ i n (w_t^\top x_i-v^\top x_i)^2 \sigma'(w_t^\top x_i) - L \gamma^{-1} \hat F(v)\\ &\geq \gamma L^{-2} \hat H(w_t) - L \gamma^{-1} \hat F(v). \label{eq:ip.lb.strictly.increasing} \end{align} In the first inequality we used \eqref{eq:lb.identity} for the first term and Young's inequality for the second (and that $\sigma'\geq 0$). For the final two inequalities, we use that $\sigma$ is $L$-Lipschitz. For the gradient upper bound, \begin{align} \nonumber \norm{\nabla \hat F(w)}^2 &\leq 2 \norm{\frac 1n \summ i n (\sigma(w^\top x_i) - \sigma(v^\top x_i)) \sigma'(w^\top x_i) x_i}^2 \\ \nonumber &\quad + 2 \norm{\frac 1 n \summ i n (\sigma(v^\top x_i) - y_i) \sigma'(w^\top x_i) x_i}^2 \\ \nonumber &\leq \frac 2n \summ i n (\sigma(w^\top x_i) - \sigma(v^\top x_i))^2 \sigma'(w^\top x_i)^2 \pnorm{x_i}2^2 \\ \nonumber &\quad + \frac 2 n \summ i n (\sigma(v^\top x_i) - y_i)^2 \sigma'(w^\top x_i)^2 \pnorm{ x_i}2^2 \\ \nonumber &\leq \frac {2L B_X^2} n \summ i n (\sigma(w^\top x_i)- \sigma(v^\top x_i))^2\sigma'(w^\top x_i) + 4L^2 B_X^2 \hat F(v)\\ &= 4L B_X^2 \hat H(w) + 4L^2 B_X^2 \hat F(v). \label{eq:grad.ub.strictly.increasing} \end{align} The first inequality is due to Young's inequality, and the second is due to Jensen's inequality.
The last inequality holds because $\sigma$ is $L$-Lipschitz and $\pnorm{x}2\leq B_X$ a.s. Putting \eqref{eq:ip.lb.strictly.increasing} and \eqref{eq:grad.ub.strictly.increasing} together and taking $\eta \leq (1/8) L^{-3} B_X^{-2} \gamma$, \begin{align*} \norm{w_t-v}^2 - \norm{w_{t+1}-v}^2 &\geq 2 \eta ( \gamma L^{-2} \hat H(w_t) - L\gamma^{-1} \hat F(v)) - 4\eta^2 (L B_X^2 \hat H(w_t) + L^2 B_X^2 \hat F(v))\\ &\geq 2 \eta \l( \frac {\gamma L^{-2}} 2 \hat H(w_t) - \frac 5 2 L \gamma^{-1} \hat F(v)\r)\\ &\geq \eta L \gamma^{-1} \hat F(v). \end{align*} The last inequality uses the induction assumption that $\hat H(w_t) \geq 6 L^3 \gamma^{-2} \hat F(v)$, completing the proof. \end{proof} Since the auxiliary error $\hat H(w)$ is controlled by $\hat F(v)$, we need to bound $\hat F(v)$. When the marginals of $\mathcal{D}$ are bounded, Lemma \ref{lemma:F(v).opt.concentration} below shows that $\hat F(v)$ concentrates around $F(v)=\mathsf{OPT}$ at rate $n^{-1/2}$ by Hoeffding's inequality; for completeness, the proof is given in Appendix \ref{appendix:simple.proofs}. \begin{lemma}\label{lemma:F(v).opt.concentration} If $\pnorm{x}2 \leq B_X$ and $|y|\leq B_Y$ a.s. under $\mathcal{D}_x$ and $\mathcal{D}_y$ respectively, and if $\sigma$ is non-decreasing, then for $a := \l( |\sigma(B_X)| + B_Y\r)^2$ and $\pnorm{v}2\leq 1$, we have with probability at least $1-\delta$, \begin{align*} |\hat F(v) - \mathsf{OPT}| \leq 3a \sqrt{n^{-1} \log(2/\delta)}. \end{align*} \end{lemma} The final ingredient of the proof is translating the bound for the empirical risk into one for the population risk. Since $\mathcal{D}_x$ is bounded and we showed in Lemma \ref{lemma:agnostic.key.strictly.increasing} that $\pnorm{w_t-v}2\leq 1$ throughout the gradient descent trajectory, we can use standard properties of Rademacher complexity to do so. The proof of Lemma \ref{lemma:rademacher.complexity} can be found in Appendix \ref{appendix:simple.proofs}.
\begin{lemma}\label{lemma:rademacher.complexity} Suppose $\sigma$ is $L$-Lipschitz and $\pnorm{x}2\leq B_X$ a.s. Denote $\ell(w; x) := (1/2) \l( \sigma(w^\top x) - \sigma(v^\top x)\r)^2$. For a training set $S\sim \mathcal{D}^n$, let $\mathfrak{R}_S(\mathcal{G})$ denote the empirical Rademacher complexity of the following function class \[ \mathcal{G} := \{ x\mapsto w^\top x : \pnorm{w-v}2\leq 1, \ \pnorm v 2 = 1 \}. \] Then we have \[ \mathfrak R(\ell \circ \sigma \circ \mathcal{G}) = \mathbb{E}_{S\sim \mathcal{D}^n} \mathfrak R_S(\ell \circ \sigma \circ \mathcal{G}) \leq 2L^3 B_X^2/\sqrt n.\] \end{lemma} With Lemmas \ref{lemma:agnostic.key.strictly.increasing}, \ref{lemma:F(v).opt.concentration} and \ref{lemma:rademacher.complexity} in hand, the bound for the population risk follows in a straightforward manner. \begin{proof}[Proof of Theorem \ref{theorem:agnostic} for strictly increasing activations.] By Lemma \ref{lemma:agnostic.key.strictly.increasing}, there exists some $w_t$ with $t<T$ and $\pnorm{w_t-v}2\leq 1$ such that $\hat H(w_t) \leq 6L^3 \gamma^{-2} \hat F(v)$. By Lemmas \ref{lemma:Hsurrogate} and \ref{lemma:F(v).opt.concentration}, this implies that with probability at least $1-\delta/2$, \begin{equation} \hat G(w_t) \leq 6 L^3 \gamma^{-3} \l( \mathsf{OPT} + 3a n^{-1/2} \log^{1/2}(4/\delta)\r).\label{eq:agnostic.nonrelu.hatG.bound} \end{equation} Since $\pnorm{w-v}2\leq 1$ implies $\ell(w; x) = (1/2)(\sigma(w^\top x) - \sigma(v^\top x))^2 \leq L^2 B_X^2/2$, standard results from Rademacher complexity (e.g., Theorem 26.5 of~\cite{shalevschwartz}) imply that with probability at least $1-\delta/2$, \[ G(w_t) \leq \hat G(w_t) + \mathbb{E}_{S\sim \mathcal{D}^n} \mathfrak{R}_S(\ell \circ \sigma \circ \mathcal{G}) + 2 L^2 B_X^2 \sqrt{\frac{ 2 \log(8/\delta)}{n}},\] where $\ell$ is the loss and $\mathcal{G}$ is the function class defined in Lemma \ref{lemma:rademacher.complexity}.
We can combine \eqref{eq:agnostic.nonrelu.hatG.bound} with Lemma \ref{lemma:rademacher.complexity} and a union bound to get that with probability at least $1-\delta$, \[ G(w_t) \leq 6 L^3 \gamma^{-3} \l( \mathsf{OPT} + 3a\sqrt{\frac{ \log(4/\delta)}{n}}\r)+ \frac{ 2 L^3 B_X^2}{\sqrt n} + \frac{ 2L^2 B_X^2 \sqrt{2\log(8/\delta)}}{\sqrt n} .\] This shows that $G(w_t) \leq O(\mathsf{OPT} + n^{-1/2})$. By Claim \ref{claim:Gbound:implies:Fbound}, we have \[ F(w_t) \leq 2 G(w_t) + 2 \mathsf{OPT} \leq O(\mathsf{OPT} + n^{-1/2}),\] completing the proof for those $\sigma$ satisfying Assumption \ref{assumption:activation.fcn}.\end{proof} \subsection{ReLU activation}\label{sec:relu.activation} The proof above crucially relies upon the fact that $\sigma$ is strictly increasing so that we may apply Fact \ref{fact:sigma.strictly.increasing} in the proof of Lemma \ref{lemma:agnostic.key.strictly.increasing}. In particular, it is difficult to show a strong lower bound for the gradient direction term in \eqref{eq:ip.lb.strictly.increasing} if it is possible for $(z_1-z_2)^2$ to be arbitrarily large when $(\sigma(z_1)-\sigma(z_2))^2$ is small. To get around this, we will use the same proof technique wherein we show that the gradient lower bound involves a term that relates the auxiliary error $\hat H(w_t)$ to $\hat F(v)$, but our bound will involve a term of the form $O(\hat F(v)^{1/2})$ rather than $O(\hat F(v))$. To do so, we will use the following property of non-decreasing Lipschitz functions. \begin{fact}\label{fact:sigma.L.lipschitz} If $\sigma$ is non-decreasing and $L$-Lipschitz, then for any $z_1, z_2$ in the domain of $\sigma$, it holds that $(\sigma(z_1) - \sigma(z_2))(z_1 - z_2) \geq L^{-1}(\sigma(z_1)-\sigma(z_2))^2$. \end{fact} With this fact we can present the analogue to Lemma \ref{lemma:agnostic.key.strictly.increasing} that holds for a general non-decreasing and Lipschitz activation and hence includes the ReLU. 
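For completeness, Fact \ref{fact:sigma.L.lipschitz} can be verified directly: since $\sigma$ is non-decreasing, $\sigma(z_1)-\sigma(z_2)$ and $z_1-z_2$ have the same sign, so
\begin{equation*}
\l(\sigma(z_1) - \sigma(z_2)\r)(z_1 - z_2) = |\sigma(z_1)-\sigma(z_2)|\, |z_1 - z_2| \geq |\sigma(z_1)-\sigma(z_2)| \cdot L^{-1} |\sigma(z_1)-\sigma(z_2)|,
\end{equation*}
where the last step uses the $L$-Lipschitz property $|\sigma(z_1)-\sigma(z_2)| \leq L |z_1-z_2|$.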
\begin{lemma}\label{lemma:agnostic.key.relu} Suppose that $\pnorm{x}2 \leq B_X$ a.s. under $\mathcal{D}_x$. Suppose $\sigma$ is non-decreasing and $L$-Lipschitz. Assume $\hat F(v) \in (0,1)$. Gradient descent with fixed step size $\eta \leq (1/4) L^{-2} B_X^{-2}$ initialized at $w_0=0$ finds weights $w_t$ satisfying $\hat H(w_t) \leq 2 L^2 B_X \hat F(v)^{1/2}$ within $T = \ceil{\eta^{-1} L^{-1} B_X^{-1} \hat F(v)^{-1/2}}$ iterations, with $\pnorm {w_t-v}2 \leq 1$ for each $t=0, \dots, T-1$. \end{lemma} \begin{proof} Just as in the proof of Lemma \ref{lemma:agnostic.key.strictly.increasing}, the lemma is proved if we can show that for every $t\in \mathbb{N}$, either (a) $\hat H(w_\tau) \leq 2 L^2 B_X \hat F(v)^{1/2}$ for some $\tau < t$, or (b) $\pnorm{w_t-v}2^2 \leq \pnorm{w_{t-1}-v}2^2 - \eta LB_X \hat F(v)^{1/2}$ holds. To this end we assume the induction hypothesis holds for some $t\in \mathbb{N}$, and since we are done if (a) holds, we assume (a) does not hold and thus for every $\tau \leq t$, we have $\hat H(w_\tau) > 2 L^2 B_X\hat F(v)^{1/2}$. Since (a) does not hold, $\pnorm{w_\tau -v}2^2 \leq \pnorm{w_{\tau-1}-v}2^2 - \eta L B_X \hat F(v)^{1/2}$ holds for each $\tau=1, \dots, t$ and hence the bound \begin{equation} \pnorm{w_\tau - v}2 \leq 1 \quad \forall \tau \leq t, \label{eq:bounded.weights.agnostic.relu} \end{equation} holds. We now proceed with showing the analogues of \eqref{eq:ip.lb.strictly.increasing} and \eqref{eq:grad.ub.strictly.increasing}.
We begin with the lower bound, \begin{align} \nonumber \big \langle \nabla \hat F(w_t) , w_t-v\big \rangle &= (1/n) \summ i n \l( \sigma(w_t^\top x_i) - \sigma(v^\top x_i) \r) \sigma'(w_t^\top x_i) (w_t^\top x_i - v^\top x_i) \\ \label{eq:agnostic.glm.key.difference} &\quad + \big \langle (1/n) \summ i n \l( \sigma(v^\top x_i) - y_i \r) \sigma'(w_t^\top x_i) x_i , w_t-v\big \rangle\\ \nonumber &\geq (1/Ln) \summ i n \l( \sigma(w_t^\top x_i) - \sigma(v^\top x_i) \r)^2 \sigma'(w_t^\top x_i)\\ \nonumber &\quad - \pnorm{w_t-v}2 \bigg \| (1/n) \summ i n \l( \sigma(v^\top x_i) - y_i \r) \sigma'(w_t^\top x_i) x_i \bigg\|_2 \\ &\geq 2 L^{-1} \hat H(w_t) - L B_X \hat F(v)^{1/2}. \label{eq:agnostic.glm.ip.lb} \end{align} In the first inequality, we have used Fact \ref{fact:sigma.L.lipschitz} and that $\sigma'(z) \geq 0$ for the first term. For the second term, we use Cauchy--Schwarz. The last inequality is a consequence of \eqref{eq:bounded.weights.agnostic.relu}, Cauchy--Schwarz, and that $\sigma'(z) \leq L$ and $\pnorm{x}2\leq B_X$. As for the gradient upper bound at $w_t$, the bound \eqref{eq:grad.ub.strictly.increasing} still holds since it only uses that $\sigma$ is $L$-Lipschitz. The choice of $\eta\leq (1/4) L^{-2} B_X^{-2}$ then ensures \begin{align} \nonumber \pnorm{w_t-v}2^2 - \pnorm{w_{t+1}-v}2^2 &\geq 2 \eta \l( 2 L^{-1} \hat H(w_t) - L B_X\hat F(v)^{1/2} \r) \\ \nonumber &\quad - \eta^2 \l( 4B_X^2 L \hat H(w_t) + 4L^2 B_X^2 \hat F(v)\r) \\ \nonumber &\geq \eta \l( 3L^{-1} \hat H(w_t) - 3 L B_X \l( \hat F(v) \vee \hat F(v)^{1/2} \r)\r)\\ &\geq \eta L B_X \hat F(v)^{1/2}, \end{align} where the last line comes from the induction hypothesis that $\hat H(w_t) \geq 2 L^2 B_X \hat F(v)^{1/2}$ and since $\hat F(v)\in (0,1)$. This completes the proof. \end{proof} With this lemma in hand, the proof of Theorem \ref{theorem:agnostic} follows just as in the strictly increasing case. 
\begin{proof}[Proof of Theorem \ref{theorem:agnostic} for ReLU] We highlight here the main technical differences with the proof for the strictly increasing case. Although Lemma \ref{lemma:rademacher.complexity} applies to the loss function $\ell(w; x) = (1/2) \l(\sigma(w^\top x) - \sigma(v^\top x)\r)^2$, the same results hold for the loss function $\tilde \ell(w; x) = \ell(w; x) \sigma'(w^\top x)$ for ReLU, since $\nabla \sigma'(w^\top x) \equiv 0$ a.e. Thus $\tilde \ell$ is still $B_X$-Lipschitz, and we have \begin{equation} \mathbb{E}_{S\sim \mathcal{D}^n} \mathfrak{R}_S \l( \tilde \ell \circ \sigma \circ \mathcal{G} \r) \leq \frac{ 2 B_X^2}{\sqrt n}. \label{eq:rademacher.relu} \end{equation} With this in hand, the proof is essentially identical: By Lemmas \ref{lemma:agnostic.key.relu} and \ref{lemma:F(v).opt.concentration}, with probability at least $1-\delta/2$ gradient descent finds a point with \begin{equation} \hat H(w_t) \leq 2 B_X \hat F(v)^{1/2} \leq 2 B_X \l( \mathsf{OPT}^{1/2} + \frac{ \sqrt {3a} \log^{1/4} (4/\delta)}{n^{1/4}}\r). \end{equation} We can then use \eqref{eq:rademacher.relu} to get that with probability at least $1-\delta$, \begin{equation} H(w_t) \leq 2 B_X \l( \mathsf{OPT}^{1/2} + \frac{ \sqrt {3a} \log^{1/4} (4/\delta)}{n^{1/4}}\r) + \frac{ 2B_X^2}{\sqrt n} + 2B_X^2 \sqrt{\frac{ 2\log(8/\delta)}n}. \end{equation} Since $\mathcal{D}_x$ satisfies Assumption \ref{assumption:marginal.spread} and $\pnorm{w_t-v}2\leq 1$, Lemma \ref{lemma:Hsurrogate} yields $G(w_t)\leq 8 \sqrt 2 \alpha^{-4} \beta^{-1} H(w_t)$. Then applying Claim \ref{claim:Gbound:implies:Fbound} completes the proof. \end{proof} \begin{remark} An examination of the proof of Theorem \ref{theorem:agnostic} shows that when $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}, any initialization with $\pnorm{w_0-v}2$ bounded by a universal constant will suffice. 
In particular, if we use Gaussian initialization $w_0\sim N(0,\tau^2 I_d)$ for $\tau^2=O(1/d)$, then by concentration of the chi-square distribution the theorem holds with (exponentially) high probability over the random initialization. For ReLU, initialization at the origin greatly simplifies the proof since Lemma \ref{lemma:agnostic.key.relu} shows that $\pnorm{w_{t}-v}2\leq \pnorm{w_0-v}2$ for all $t$. When $w_0=0$, this implies $\pnorm{w_t-v}2\leq 1$ and allows for an easy application of Lemma \ref{lemma:Hsurrogate}. For isotropic Gaussian initialization, one can show that, with probability approaching 1/2, $\pnorm{w_0-v}2<1$ holds provided the variance satisfies $\tau^2 = O(1/d)$ (see e.g. Lemma 5.1 of~\citet{yehudai20}). In this case, the theorem will hold with constant probability over the random initialization. \end{remark} \section{Noisy teacher network setting} \label{sec:noisy} In this section, we consider the teacher network setting, where the joint distribution of $(x,y)\sim \mathcal{D}$ is given by a target neuron $v$ (with $\pnorm{v}2\leq 1$) plus zero-mean $s$-sub-Gaussian noise, \[ y | x \sim \sigma(v^\top x) + \xi,\quad \mathbb{E}[\xi \,|\, x] =0.\] We assume throughout this section that $\xi\not \equiv 0$; we deal with the realizable setting separately (and achieve improved sample complexity) in Appendix \ref{appendix:realizable}. We note that this is precisely the setup of the generalized linear model with (inverse) link function $\sigma$. We further note that we only assume that $\mathbb{E}[y|x] = \sigma(v^\top x)$, i.e., the noise is \textit{not} assumed to be independent of the input $x$, and thus the setting falls into the probabilistic concept learning model of~\citet{kearns.probabilistic}. With the additional structural assumption of a noisy teacher, we can improve the agnostic result from $O(\mathsf{OPT})+\varepsilon$ (for strictly increasing activations) and $O(\mathsf{OPT}^{1/2})+\varepsilon$ (for ReLU) to $\mathsf{OPT}+\varepsilon$.
The key difference from the proof in the agnostic setting is that when trying to show the gradient points in a good direction as in \eqref{eq:ip.lb.strictly.increasing} and \eqref{eq:agnostic.glm.key.difference}, since we know $\mathbb{E}[y|x] = \sigma(v^\top x)$, the average of terms of the form $a_i (\sigma(v^\top x_i) - y_i)$ with fixed and bounded coefficients $a_i$ will concentrate around zero. This allows us to improve the lower bound from $\langle\nabla \hat F(w_t), w_t-v\rangle \geq C( \hat H(w) - \hat F(v)^{\alpha})$ to one of the form $\geq C( \hat H(w) - \varepsilon)$, where $C$ is an absolute constant. The full proof of Theorem \ref{theorem:glm} is given in Appendix \ref{appendix:glm}. \begin{theorem}\label{theorem:glm} Suppose $\mathcal{D}_x$ satisfies $\pnorm{x}2\leq B_X$ a.s. and $\mathbb{E}[y | x] = \sigma(v^\top x)$ for some $\pnorm{v}2 \leq 1$. Assume that $\sigma(v^\top x) - y$ is $s$-sub-Gaussian. Assume gradient descent is initialized at $w_0=0$ and fix a step size $\eta \leq (1/4) L^{-2} B_X^{-2}$. If $\sigma$ satisfies Assumption \ref{assumption:activation.fcn}, let $\gamma$ be the constant corresponding to $\rho = 2B_X$. There exists an absolute constant $c_0>0$ such that for any $\delta >0$, with probability at least $1-\delta$, gradient descent for $T = \eta^{-1} \sqrt n / (c_0 LB_X s\sqrt{\log(4d/\delta)})$ iterations finds weights $w_t$, $t<T$, satisfying \begin{equation} F(w_t) \leq \mathsf{OPT} + C_1 n^{-1/2} + C_2 n^{-1/2} \sqrt{\log(8/\delta) }+ C_3 n^{-1/2}\sqrt{ \log(4d/\delta)}, \label{eq:glm.thm.bound} \end{equation} where $C_1 = 4 L^3 B_X^2$, $C_2 = 2\sqrt 2 L^2 B_X^2$, and $C_3 = 4 c_0 \gamma^{-1} L^2 sB_X$. When $\sigma$ is ReLU, further assume that $\mathcal{D}_x$ satisfies Assumption \ref{assumption:marginal.spread} for constants $\alpha, \beta>0$, and let $\nu = \alpha^{4}\beta/(8\sqrt 2)$. Then \eqref{eq:glm.thm.bound} holds for $C_1 = B_X^2\nu^{-1}$, $C_2 = 2C_1$, and $C_3 = 4 c_0 s \nu^{-1} B_X$.
\end{theorem} We first note that although \eqref{eq:glm.thm.bound} contains a $\log(d)$ term, the dependence on the dimension can be removed if we assume that the noise is bounded rather than sub-Gaussian; details for this are given in Appendix \ref{appendix:glm}. As mentioned previously, if we are in the realizable setting, i.e. $\xi \equiv 0$, we can improve the sample and runtime complexities to $O(\varepsilon^{-1})$ by using online SGD and a martingale Bernstein bound. For details on the realizable case, see Appendix \ref{appendix:realizable}. In comparison with existing literature,~\citet{kakade2011} proposed GLMTron to show the learnability of the noisy teacher network for a non-decreasing and Lipschitz activation $\sigma$ when the noise is bounded.\footnote{A close inspection of the proof shows that sub-Gaussian noise can be handled with the same concentration of norm sub-Gaussian random vectors that we use for our results.} In GLMTron, updates take the form $w_{t+1} = w_t - \eta \tilde g_t$ where $\tilde g_t = ( \sigma(w_t^\top x) - y)x$, while in gradient descent, the updates take the form $w_{t+1} = w_t - \eta g_t$ where $g_t = \tilde g_t \sigma'(w_t^\top x)$. Intuitively, when the weights are in a bounded region and $\sigma$ is strictly increasing and Lipschitz, the derivative satisfies $\sigma'(w_t^\top x) \in [\gamma, L]$ and so the additional $\sigma'$ factor will not significantly affect the algorithm. For ReLU this is more complicated as the gradient could in the worst case be zero in a large region of the input space, preventing effective learnability using gradient-based optimization, as was demonstrated in the negative result of~\citet{yehudai20}. For this reason, a type of nondegeneracy condition like Assumption \ref{assumption:marginal.spread} is essential for gradient descent on ReLUs. 
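The two update rules can be written side by side in a short sketch (our own notation, not from either paper); for the identity activation, where $\sigma' \equiv 1$, the extra derivative factor disappears and the two updates coincide exactly.

```python
import numpy as np

def glmtron_step(w, X, y, sigma, eta):
    # GLMTron: w <- w - eta * (1/n) sum_i (sigma(w^T x_i) - y_i) x_i
    r = sigma(X @ w) - y
    return w - eta * (X.T @ r) / len(y)

def gd_step(w, X, y, sigma, dsigma, eta):
    # Gradient descent on \hat F: the same update with an extra sigma'(w^T x_i) factor.
    z = X @ w
    return w - eta * (X.T @ ((sigma(z) - y) * dsigma(z))) / len(y)
```

For ReLU the two updates differ precisely on the examples where $w_t^\top x_i < 0$, which is why the nondegeneracy condition discussed above matters for gradient descent but not for GLMTron.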
In terms of other results for ReLU, recent work by~\citet{mukherjee} introduced another modified version of SGD, where updates now take the form $w_{t+1}=w_t - \eta \hat g_t$, with $\hat g_t = \tilde g_t {\mathbbm{1}}(y>\theta)$, and $\theta$ is an upper bound for an adversarial noise term. They showed that this modified SGD recovers the parameter $v$ of the teacher network under the nondegeneracy condition that the matrix $\mathbb{E}_x [ xx^\top {\mathbbm{1}}(v^\top x\geq 0)]$ is positive definite. A similar assumption was used by~\citet{du2017} in the realizable setting. Our GLM result is also comparable to recent work by~\citet{foster2018}, where the authors provide a meta-algorithm for translating guarantees for $\varepsilon$-stationary points of the empirical risk to guarantees for the population risk provided that the population risk satisfies the so-called ``gradient domination'' condition and the algorithm can guarantee that the weights remain bounded (see their Proposition 3). By considering GLMs with bounded, strictly increasing, Lipschitz activations, they show the gradient domination condition holds, and any algorithm that can find a stationary point of an $\ell^2$-regularized empirical risk objective is guaranteed to have a population risk bound. In contrast, our result concretely shows that vanilla gradient descent learns the GLM, even in the ReLU setting. \section{Conclusion and remaining open problems} In this work, we considered the problem of learning a single neuron with the squared loss by using gradient descent on the empirical risk. We first analyzed this in the agnostic PAC learning framework and showed that if the activation function is strictly increasing and Lipschitz, then gradient descent finds weights with population risk $O(\mathsf{OPT}) + \varepsilon$, where $\mathsf{OPT}$ is the smallest possible population risk achieved by a single neuron.
When the activation function is ReLU, we showed that gradient descent finds a point with population risk at most $O(\mathsf{OPT}^{1/2})+\varepsilon$. Under the more restricted noisy teacher network setting, we showed the population risk guarantees improve to $\mathsf{OPT}+\varepsilon$ for both strictly increasing activations and ReLU. Our work points towards a number of open problems. Does gradient descent on the empirical risk provably achieve population risk with a better dependence on $\mathsf{OPT}$ than we have shown in this work, or are there distributions for which this is impossible? Recent work by~\citet{goel2020sqlowerbounds} provides a statistical query lower bound for learning a sigmoid with respect to the correlation loss $\mathbb{E}[\ell(y \sigma(w^\top x))]$, but we are not aware of lower bounds for learning non-ReLU single neurons under the squared loss. It thus remains a possibility that gradient descent (or another algorithm) can achieve $\mathsf{OPT}+\varepsilon$ risk for such activation functions. For ReLU,~\citet{diakonikolas2020approximation} showed that gradient descent on a convex surrogate for the empirical risk can achieve $O(\mathsf{OPT}) + \varepsilon$ population risk for log-concave distributions; it would be interesting if such bounds could be shown for gradient descent on the empirical risk itself. \section*{Acknowledgement} We thank Adam Klivans for his helpful comments on our work. We also thank Surbhi Goel for pointing out how to extend the results of~\citet{diakonikolas2020approximation} to more general distributions and to leaky-ReLU-type activation functions.
\section{Introduction} Volterra processes appear naturally in models with non-local features. In this article, we will investigate the analytic and probabilistic properties of Volterra processes constructed as pathwise integrals of a kernel $K$ against a Gaussian process $W$. For generality, we will assume that the Gaussian process $W$ takes values in a Hilbert space $H$ with a covariance operator $Q_W$, and that the kernel $(t,s)\mapsto K(t,s)$ for $t>s$ is a linear operator on the same Hilbert space. In particular, we define the process $X:[0,T]\rightarrow H$ formally by the integral \begin{equation}\label{f v p} X(t)=\int_0^t K(t,s)dW(s). \end{equation} At a discrete level, one can think of this process as assigning different weights through the kernel $K$ to the increments of $W$. Volterra processes have received much attention in the field of stochastic analysis over the past decades. The canonical examples are the Ornstein-Uhlenbeck process, where $K(t,s)=\exp(-\alpha(t-s))$ and $W$ is a Brownian motion, and the fractional Brownian motion, where $K(t,s)=(t-s)^{H-\frac{1}{2}}$ and $W$ is a Brownian motion. These processes are typically used to model phenomena where some sort of memory is inherent in the dynamics, and applications are found in various fields ranging from physics and turbulence modelling \cite{BarndorffSchmiege2007} to biology \cite{PangPardoux2020} and financial mathematics \cite{GathJaiRosen2018,Benth_2020}. See also \cite{BBV} and the references therein for an introduction to these processes and their applications. In order to make sense of the integral appearing on the right-hand side of \eqref{f v p}, one must assume some type of regularity conditions on $K$ and $W$. The type of regularity conditions needed typically depends on the choice of integral that is used in the construction of $X$. For example, if $W$ is a $Q_W$-Wiener process (the infinite dimensional extension of the classical Brownian motion), one would need the map $s\mapsto K(t,s)$ to be (Bochner) square integrable up to (and including) $t$ (see e.g. \cite{RockLiu}). However, for general real-valued Gaussian processes, this is not a sufficient criterion.
Indeed, in the case of general Gaussian processes on the real line, it is well known that the Volterra process appearing in \eqref{f v p} makes sense as a Wiener integral if \begin{equation}\label{covar rep smooth} \int_0 ^T \int_0 ^T K(T,r)K(T,r')\frac{\partial^2}{\partial r\partial r'}Q_W(r,r')dr dr' <\infty, \end{equation} where $Q_W$ is the real-valued covariance of the Gaussian process $W$ (see e.g. \cite{HuCam}). This construction requires of course that $Q_W$ is differentiable in both variables, or at least of bounded variation simultaneously in both variables, which excludes several interesting Gaussian processes (particular examples of which will be discussed in detail later). An extension of the above condition to the infinite dimensional setting when $W$ is a Hilbert-valued process is quite straightforward, but one would still require strong regularity of the covariance operator $Q_W$ (say Fr\'echet differentiable). In several interesting examples, such regularity requirements on the covariance operator are too strong. For example, consider a Gaussian process $(B(t))_{t\in [0,T]}$ time-changed along an irregular (possibly deterministic) path $(Z(t))_{t\in [0,T]}$, given as the composition process $(B(Z(t)))_{t\in [0,T]}$. The regularity of the covariance would typically be given as the composition of the regularity of the covariance associated to $B$ and the regularity of $Z$. Thus if $t\mapsto Z(t)$ is only H\"older continuous, one would not expect to get better regularity of the covariance than that of $Z$. The canonical example of such processes is the iterated Brownian motion given as \begin{equation*} \mathbb{B}(t,\omega_1,\omega_2) = B^1(\omega_1,|B^2(\omega_2,t)|) \end{equation*} where $B^1:[0,T]\times \Omega_1\rightarrow \mathbb{R}$ and $B^2:[0,T]\times \Omega_2\rightarrow \mathbb{R}$ are two independent Brownian motions on the real line.
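Conditionally on the trajectory of $B^2$, the process $t\mapsto \mathbb{B}(t)$ is a centered Gaussian process evaluated at the inner times $|B^2(t)|$, so its conditional covariance at two times $t,s$ is ${\rm min}(|B^2(t)|,|B^2(s)|)$. A minimal Monte Carlo sketch (our own construction; the fixed values of $B^2$ below are hypothetical) checks this for one pair of times:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fix (hypothetical) values of the trajectory of B^2 at two times t and s.
B2_t, B2_s = -0.8, 0.5
a, b = abs(B2_t), abs(B2_s)        # inner times at which B^1 is evaluated

# Sample B^1 jointly at times a and b: the value at the earlier time is shared,
# and the remaining increment is independent.
m, M = min(a, b), max(a, b)
n = 200_000
Z1 = rng.normal(size=n)
Z2 = rng.normal(size=n)
B1_early = np.sqrt(m) * Z1
B1_late = B1_early + np.sqrt(M - m) * Z2
emp_cov = np.mean(B1_early * B1_late)  # empirical Cov(B^1(a), B^1(b)) given B^2
```

Here `emp_cov` is close to ${\rm min}(|B^2(t)|,|B^2(s)|)$, so the conditional covariance is exactly as rough in $t$ as the fixed trajectory of $B^2$.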
Such processes have received much attention due to their curious probabilistic properties, as well as applications towards modelling of diffusions in cracks \cite{orsingher2009,Burdzy1993,BurdzyKhoshnevisan1998}. If we now fix a trajectory of $B^2$, it is readily seen that $\mathbb{B}(\cdot,\omega_2)$ is Gaussian, with covariance given by \begin{equation*} Q_{\mathbb{B}(\omega_2)}(t,s)={\rm min}(|B^2(t,\omega_2)|,|B^2(s,\omega_2)|), \end{equation*} and therefore the regularity of $Q_{\mathbb{B}}$ is inherited from the regularity of $t\mapsto B^2(t,\omega_2)$. Hence, the covariance function associated to a Volterra process driven by $\mathbb{B}(\cdot,\omega_2)$ cannot be constructed as in \eqref{covar rep smooth}, but an extension of this construction is needed. In recent years, pathwise analysis of stochastic processes has become prevalent in the literature. This plays a fundamental role when applying these processes in the theory of rough paths \cite{LyonsLevy,FriHai}, where analytic properties of the paths and associated ``iterated integrals'' constitute the main ingredients. The advantage of the rough path theory lies in the flexibility to construct pathwise solutions to controlled ODEs of the form \begin{equation*} dY(t)=f(Y(t))dX(t),\qquad Y(0)=y\in H, \end{equation*} even when $X$ is not a semimartingale. Furthermore, one directly obtains stability in the solution mapping $(X,y)\mapsto \Gamma(X,y)$ induced by the equation above. The rough path theory opens the door to considering equations controlled by noise given as Volterra processes, which are typically of a non-semimartingale nature due to the kernel $K$. Much work has therefore been devoted to the construction of the so-called rough path above a given Volterra process driven by a Brownian motion \cite{nualart2011,Unterberger2010}. On the other hand, to the best of our knowledge, there is no construction of the rough path above Volterra processes driven by Gaussian noise with irregular covariance structures. In \cite[Sec.
10.2]{FriHai} the authors provide a simple criterion for the existence of a geometric rough path connected to a given Gaussian process, given that the covariance structure of this process is sufficiently regular. This requires of course the existence of a covariance function, which in the case of Volterra processes driven by Gaussian noise is given by \eqref{covar rep smooth}. A relaxation of the existence criteria for \eqref{covar rep smooth} to the case of non-smooth covariances $Q_W$ and with singular Volterra kernels $K$ is therefore needed in order to construct the rough path associated to this class of processes. The main goal of this article is therefore to extend the sufficient conditions for construction of the covariance operator of the form \eqref{covar rep smooth} to the case when $W$ is an infinite dimensional stochastic process and $Q_W$ is possibly nowhere differentiable in both variables. To this end, we start by giving a pathwise description of the Volterra process $X$ stated in \eqref{f v p}. Given a sample path of a Gaussian process $W$ which is $\alpha$-H\"older regular, we will show that \eqref{f v p} can be constructed in a pathwise sense through a slight modification of the newly developed Volterra Sewing Lemma from \cite{HarTind}. In this way, one directly obtains the regularity of the process $X$ as the composition of the possible singularity coming from $K$ and the regularity of $W$. On a heuristic level, if the kernel $K(t,s)$ behaves locally like $(t-s)^{-\eta}$ for $t\sim s$, and the Gaussian process has H\"older continuous trajectories of order $\alpha\in (0,1)$, then \begin{equation}\label{a min e} |K(t,s)(W(t)-W(s))|_H \lesssim (t-s)^{\alpha-\eta}, \end{equation} and hence this composition remains finite as $t\rightarrow s$ only when $\alpha-\eta>0$.
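To see the heuristic \eqref{a min e} at work numerically, consider the purely illustrative, deterministic choices $W(s)=s^{\alpha}$ (which is $\alpha$-H\"older on $[0,1]$) and $K(t,s)=(t-s)^{-\eta}$ with $\alpha-\eta>0$; the left-point Riemann sums below converge, as one may check against the exact value $\alpha B(\alpha,1-\eta)$ of $\int_0^1(1-s)^{-\eta}\,dW(s)$:

```python
import math
import numpy as np

alpha, eta, t = 0.75, 0.25, 1.0   # alpha - eta > 0, so the sums should converge

def riemann_sum(n):
    """Left-point sum  sum_k K(t, u_k)(W(u_{k+1}) - W(u_k))  on a uniform grid."""
    u = np.linspace(0.0, t, n + 1)
    K = (t - u[:-1]) ** (-eta)    # kernel is singular only as u -> t
    dW = np.diff(u ** alpha)      # increments of the Hoelder path W(s) = s^alpha
    return float(np.sum(K * dW))

# Exact value: int_0^1 (1-s)^{-eta} d(s^alpha) = alpha * B(alpha, 1 - eta).
exact = alpha * math.gamma(alpha) * math.gamma(1.0 - eta) / math.gamma(alpha + 1.0 - eta)
approx = riemann_sum(20000)
```

Note that the singular factor $K(t,u_k)$ is always evaluated at a left endpoint $u_k<t$, so every summand is finite; convergence as the mesh shrinks is exactly the statement made rigorous by the sewing argument below.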
Recalling that both in the classical probabilistic framework and in the modern approach of rough path theory, one would construct the integral as the limit when the mesh size of the partition $\mathcal{P}$ of $[0,t]$ goes to zero in the Riemann-type sum (in either $L^2(\Omega)$ if possible, or pathwise topology induced by variation or H\"older norms, see e.g. \cite{HarTind}) \begin{equation}\label{sum a min e} \sum_{[u,v]\in \mathcal{P}} K(t,u)(W(v)-W(u)). \end{equation} Thus at first glance, in order for this sum to converge, it seems natural to require $\alpha-\eta>0$ to (at least) avoid any explosions when the mesh of the partition $\mathcal{P}$ goes to $0$. We next extend the Volterra Sewing Lemma to the two dimensional case, in order to construct two dimensional operator-valued Volterra integrals of the form \begin{equation}\label{covariance} \bar{Q}:=\int_0^T\int_0^{T'} K(T,r)d^2Q(r,r')K(T',r')^* \end{equation} for linear operators $K$ and $Q$ on the Hilbert space $H$, and where $(s,t)\mapsto K(t,s)$ is possibly singular on the diagonal as described above. In this expression, $K^*$ is the adjoint operator of $K$, and the ordering of the integral appears naturally when considering operator-valued integrands which might be non-commutative. Our construction is based on Young-type integration theory with Volterra kernels, and only requires that $Q$ is H\"older regular, and that $K$ does not blow up too fast at its singular point(s). In particular, we do not assume that $Q$ is differentiable or of bounded variation, and thus our construction truly extends the notion of the integral given in \eqref{covar rep smooth}. An immediate consequence of our construction is stability of the two dimensional Volterra integral with respect to changes in the Volterra operator $K$ and the operator $Q$.
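As a scalar caricature of \eqref{covariance} (all concrete choices below are ours, for illustration only), take $Q(r,r')=\min(r,r')$, the Brownian covariance, whose mixed derivative exists only as a distribution, and $K(T,s)=(T-s)^{-\eta}$. The two dimensional Riemann sums built from rectangular increments of $Q$ then converge to $\int_0^1(1-r)^{-2\eta}\,dr$, which equals $2$ for $\eta=1/4$:

```python
import numpy as np

eta, T = 0.25, 1.0

def Q(r, rp):
    """Brownian covariance; not differentiable across the diagonal."""
    return min(r, rp)

def K(t, s):
    """Singular Volterra kernel (t - s)^{-eta}."""
    return (t - s) ** (-eta)

def two_dim_sum(n):
    """sum_{i,j} K(T, u_i) * (rectangular increment of Q) * K(T, u_j)."""
    u = np.linspace(0.0, T, n + 1)
    total = 0.0
    for i in range(n):
        for j in range(n):
            box = (Q(u[i + 1], u[j + 1]) - Q(u[i + 1], u[j])
                   - Q(u[i], u[j + 1]) + Q(u[i], u[j]))
            if box != 0.0:
                total += K(T, u[i]) * box * K(T, u[j])
    return total

val = two_dim_sum(400)  # approaches int_0^1 (1 - r)^{-1/2} dr = 2
```

Only the diagonal cells contribute here, since the rectangular increment of $\min$ vanishes on disjoint rectangles; this is the distributional identity $\partial^2\min(r,r')/\partial r\partial r'=\delta(r-r')$ seen at the level of Riemann sums.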
Through a consideration of the characteristic functional associated to the Volterra process \eqref{f v p}, we next show that when $Q=Q_W$ is of sufficient regularity, then the covariance operator $Q_X$ associated to the Volterra process $X$ from \eqref{f v p} is given by $\bar{Q}$ in \eqref{covariance}. In the end we discuss several application areas of our results, including an analysis of the covariance structure arising from general Gaussian Volterra iterated processes, the construction of the rough path associated to Volterra processes driven by Gaussian processes with irregular covariance structures, as well as a representation of the covariance structure of certain linear fractional stochastic differential equations of Ornstein-Uhlenbeck type in Hilbert space. In the last example, we discuss the potential application towards rough volatility modelling, proposing an extension of the rough Heston model to infinite dimensions. Already in 2002, Towghi \cite{Towghi2002} proved that for a function $f:[0,T]^2\rightarrow \mathbb{R}$ and a covariance function $Q_W$, the following integral \begin{equation}\label{eq:Young 2D} \int_{[0,T]^2} f(s,t)dQ_W(s,t) \end{equation} makes sense as the limit of a two dimensional Riemann type sum under suitable assumptions of complementary regularity between $f$ and $Q_W$, and can thus be seen as an extension of the classical Young integral developed in \cite{Young}. The construction of the integral \eqref{covariance} can therefore be seen as an infinite dimensional extension of \eqref{eq:Young 2D} to the case when the integrand is given as a Volterra operator of singular type. In the case when the covariance $Q_W$ itself is a real-valued covariance function associated to a Volterra process, Lim provided in \cite{Lim2020} a relaxation of the complementary regularity conditions originally proposed in \cite{Towghi2002} for existence of the two dimensional integral in \eqref{eq:Young 2D}.
There, $f$ is assumed to be a sufficiently regular function, and thus singular Volterra kernels as we consider here fall outside the scope of that article. Furthermore, in the current article we do not impose any structure on the covariance operator other than regularity, in order to keep it as general as possible; this generality will prove useful in the applications. \subsection{Outline of the article} The article is structured into the following sections: \begin{enumerate} \item[Sec. 2] We give an introductory account of Gaussian processes in Hilbert space, as well as continuity of their trajectories. \item[Sec. 3] We introduce the concept of Volterra paths (as described in \cite{HarTind}), and provide a pathwise construction of Gaussian Volterra paths from the regularity of the trajectories of the Gaussian noise as well as the possible singularity of the Volterra kernel. \item[Sec. 4] This section is devoted to proving that pathwise Volterra processes driven by Gaussian processes are again Gaussian. To this end, we show the construction of a general covariance operator, even when the covariance function of the driving noise is nowhere differentiable. \item[Sec. 5] We discuss a series of applications, including the construction of a rough path above Volterra processes driven by Gaussian noise with irregular covariance structures, and compute explicitly the covariance operators of various well-known Volterra processes driven by Gaussian noise. \end{enumerate} We provide background material on fractional calculus and some proofs of auxiliary results in two appendices. \subsection{Notation}\label{subsec:notation} We assume $(\Omega,\mathcal F, \mathbb P)$ to be a complete probability space equipped with a filtration $(\mathcal F_t)_{t\geq 0}$ satisfying the {\it usual hypotheses}. We will work with a separable Hilbert space which will be denoted by $H$. The inner product in $H$ is denoted $\langle\cdot,\cdot\rangle_H$ with associated norm $\vert\cdot\vert_H$.
The (Banach) space of bounded linear operators from $H$ to $E$, $E$ being another Hilbert space, is denoted $\mathcal L(H,E)$, with $\mathcal L(H):=\mathcal L(H,H)$. Sometimes $E$ may also be a general Banach space, but this will be clear from the context. We will frequently use the $n$-simplex $\Delta_n^T$ over an interval $[0,T]$ defined by \begin{equation}\label{n-simp} \Delta_n^T:=\{(s_1,\ldots,s_n)\in[0,T]^n\,|\,s_1\geq \ldots \geq s_n\}. \end{equation} Also, define the diagonal in $[0,T]^n$ by $\mathrm{D}_{n}^T$, i.e. \begin{equation}\label{n-diag} \mathrm{D}_{n}^T:=\{(s_1,\ldots,s_n)\in[0,T]^n\,|\,s_1=\ldots=s_n\}. \end{equation} We will denote by $\mathcal{C}^\gamma([0,T],H)$ the space of $\gamma$-H\"older continuous functions $f:[0,T]\rightarrow H$, with the norm $\|f\|_{\mathcal{C}^\gamma} =|f(0)|_H+\|f\|_{\gamma,[0,T]}$ where \begin{equation*} \|f\|_{\gamma,[0,T]}=\sup_{(t,s)\in\Delta_{2}^T}\frac{|f(t)-f(s)|_{H}}{|t-s|^\gamma}. \end{equation*} Whenever the interval $[0,T]$ is clear from the context, we will write $\|f\|_{\gamma}$ for the quantity $\|f\|_{\gamma,[0,T]}$. Aiming towards an analysis of possibly non-smooth (i.e., only H\"older continuous) covariance functions, we will also be working with increments of two-parameter functions. To this end, we will need to introduce some new notation. Consider two points $s=(s_1,s_2)$ and $t=(t_1,t_2)$ in $[0,T]^2$ and a function $f:[0,T]^2\rightarrow H$. Let us denote by $\square_{s,t}f$ the generalized (or rectangular) increment of $f$ over the rectangle $[s,t]=[s_1,t_1]\times[s_2,t_2]\subset [0,T]^2 $ (notice the implied partial order of the variables in $s=(s_1,s_2)$ and $t=(t_1,t_2)$) given by \begin{equation}\label{eq:rec increment} \square_{s,t}f=f(t_1,t_2)-f(t_1,s_2)-f(s_1,t_2)+f(s_1,s_2).
\end{equation} Note in particular that if $f$ has a mixed partial derivative $\partial ^2f(r_1,r_2)/\partial r_1 \partial r_2$ which is integrable over the rectangle $[s,t]$, we have \begin{equation}\label{2d ftc} \square_{s,t}f=\int_{s_1}^{t_1}\int_{s_2}^{t_2}\frac{\partial ^2f(r_1,r_2)}{\partial r_1 \partial r_2}dr_2dr_1 . \end{equation} We remark in passing that in the literature $\square_{s,t}f$ is sometimes referred to as the $f$-volume of the rectangle $[s,t]$. \section{Gaussian Stochastic Processes in Hilbert Spaces}\label{sec: inf dim gaussian analysis} One of the main objectives in this article is to study the regularity properties of various stochastic processes in Hilbert spaces, together with their covariance operators. In this Section we provide some background material on the important class of Gaussian stochastic processes in Hilbert space which will be at the core of our studies. For a Gaussian process in Hilbert space, one associates a covariance operator on the Hilbert space where the process lives, which describes the covariance structure of the process. A special case of such Gaussian processes is the $Q$-Wiener process, where the covariance operator $Q$ is a non-negative definite trace class linear operator. This process can be seen as an infinite dimensional extension of the well known Brownian motion, as these processes share many of the same probabilistic and analytic properties. The infinite dimensional Wiener process is a special case of a more general class of Hilbert-valued Gaussian stochastic processes. Below we give a general definition of Hilbert-valued Gaussian random variables, and then extend this definition to Hilbert-valued Gaussian processes. We highlight this definition with the example of the construction of the Hilbert-valued fractional Brownian motion. We say that an $H$-valued random variable $X$ is square-integrable if $\mathbb E[\vert X\vert_H^2]<\infty$.
If $X$ is square-integrable with zero mean, that is, $\mathbb E[X]=0$ where $0\in H$ is the zero element and the expectation is in the sense of Bochner integration with respect to the probability $\mathbb P$, we introduce the {\it covariance functional} $Q$ associated to $X$ by \begin{equation*} Q=\mathbb E[X\otimes X]. \end{equation*} Here, $\otimes$ is the tensor product such that for any $g,h,x\in H$, $(g\otimes h)(x)=\langle g,x\rangle_H h$. Note that by square-integrability of $X$, the expectation defining $Q$ is well-defined as a Bochner integral. It is known that $Q\in\mathcal L(H)$ is a symmetric, positive semi-definite trace class operator. In fact, we have $\text{Tr}(Q)=\mathbb E[\vert X\vert_H^2]$ and $$ \mathbb E[\langle X,g\rangle_H\langle X,h\rangle_H]=\langle Qg,h\rangle_H, $$ for any $g,h\in H$. We have the following standard definition of a Gaussian random variable in Hilbert space: \begin{defn} \label{def:Hilbert Gaussian variable} An $H$-valued random variable $X$ is said to be {\it Gaussian} if $\langle X,h\rangle_H$ is a real-valued Gaussian random variable for every $h\in H$. \end{defn} We remark that Gaussian variables in Hilbert space are square-integrable (see \cite[Thm. 3.31]{PesZab}). We introduce a Gaussian process in Hilbert space by the following definition (see \cite[Def. 3.30]{PesZab}): \begin{defn} \label{def:Hilbert Gaussian process} An $H$-valued stochastic process $(X(t))_{t\geq 0}$ is said to be {\it Gaussian} if for every $n\in\mathbb N$, $0\leq t_1<t_2<\cdots<t_n<\infty$, $(X(t_1),X(t_2),\ldots,X(t_n))$ is an $H^{n}$-valued Gaussian random variable. \end{defn} By definition, we have that a Gaussian process can be equivalently characterised by saying that for every $n\in\mathbb N$, $0\leq t_1<t_2<\cdots<t_n<\infty$ and $h_1,\ldots,h_n\in H$, $(\langle X(t_1),h_1\rangle_H,\ldots,\langle X(t_n),h_n\rangle_H)$ is an $n$-variate Gaussian random variable on $\mathbb R^n$.
We have a covariance operator defined as (for $s,t\geq 0$) \begin{equation*} Q(s,t):=\mathbb E[X(s)\otimes X(t)]\in\mathcal L(H). \end{equation*} Here we have implicitly assumed that the process has zero mean. Note that generally $Q(s,t)\neq Q(t,s)$. However, $$ \langle Q(s,t)g,h\rangle_H=\mathbb E[\langle X(s),g\rangle_H\langle X(t),h\rangle_H]=\langle g,Q(t,s)h\rangle_H, $$ and thus, $Q(s,t)^*=Q(t,s)$. On the other hand, $Q(t,t)$ is a positive semi-definite and symmetric trace class operator. An important Gaussian process in Hilbert space is the $Q$-Wiener process, which has a covariance operator $Q(s,t)=Q\min(s,t)$ where $Q$ is a symmetric positive definite trace class operator. As in \cite{TindelTudorViens2003,DuncanPasikMaslowski2002,GRECKSCH1999} we can define a $Q$-fractional Brownian motion with values in Hilbert space by letting \begin{equation}\label{eq:covar-fbm} Q(s,t):=R^h(s,t)Q, \end{equation} for a symmetric positive definite trace class operator $Q$ and the real-valued function \begin{equation} \label{eq:fbm-r-func} R^h(s,t)=\frac12\left(s^{2h}+t^{2h}-\vert t-s\vert^{2h}\right), \end{equation} with the Hurst index $h\in(0,1)$ and $s,t\geq 0$. Letting $h=1/2$, the $Q$-fractional Brownian motion is a $Q$-Wiener process. In our analysis, the continuity properties of paths play an important role. For this purpose, we recall the Kolmogorov continuity theorem (see e.g. \cite[Thm. 3.3]{DaPraZab}, where a full proof of the below statement can be found) \begin{thm} \label{thm:Kolmogorov} {\rm (Kolmogorov's continuity theorem)} Let $W:\Omega \times [0,T]\rightarrow H$ be a stochastic process such that for some positive constants $C>0 $, $\epsilon>0$, $\delta>1$ and all $(t,s)\in \Delta_2^T$ the following inequality holds \begin{equation*} \mathbb{E}\left[\vert W(t)-W(s)\vert_H^\delta\right] \leq C|t-s|^{1+\epsilon}. \end{equation*} Then there exists a pathwise continuous modification $\widetilde{W}$ of $W$.
More specifically, the mapping $t\mapsto \widetilde{W}(\omega,t)$ is $\alpha$-H\"older continuous for any $\alpha<\frac{\epsilon}{\delta}$, $\mathbb{P}-a.s$. \end{thm} For a $Q$-Wiener process, we readily see that \begin{equation*} \mathbb E[\vert W(t)-W(s)\vert_H^2]=\vert t-s\vert \text{Tr}(Q), \end{equation*} while for the fractional Brownian motion with covariance operator defined in \eqref{eq:covar-fbm} we have \begin{equation*} \mathbb E[\vert W(t)-W(s)\vert_H^2]=\vert t-s\vert^{2h} \text{Tr}(Q). \end{equation*} We have the following result on the H\"older continuity of the fractional Brownian motion (which seems to be known but we include a proof for the convenience of the reader): \begin{prop}\label{Prop: Holder cont of fbm} Let $W$ be a $Q$-fractional Brownian motion with values in $H$ and covariance operator given in \eqref{eq:covar-fbm} with Hurst parameter $h\in(0,1)$. Then, for $(t,s)\in\Delta_2^T$ $$ \mathbb E[\vert W(t)-W(s)\vert_H^{2n}]\leq \vert t-s\vert^{2hn}({\rm Tr}(Q))^n\mathbb E[Z^{2n}] $$ for any $n\in\mathbb N$ and with $Z$ being a standard normal random variable in $\mathbb{R}$. Moreover, there exists a version of $W$ which is H\"older continuous of order $\alpha<h$, $\mathbb P-$ a.s. \end{prop} \begin{proof} Let $(e_i)_{i\in\mathbb N}$ be an orthonormal basis of eigenvectors of $Q$, with the covariance operator $Q(s,t)$ of $W$ defined in \eqref{eq:covar-fbm}. We have that $W(t)-W(s)$ is a Gaussian mean-zero random variable, and a straightforward calculation yields that it has the covariance operator $\vert t-s\vert^{2h} Q$. Thus, $X_i:=\langle W(t)-W(s),e_i\rangle_H$ is a mean-zero real-valued Gaussian random variable, with variance equal to $\vert t-s\vert^{2h}\lambda_i$. Here, $\lambda_i>0$ is the $i$th eigenvalue of $Q$. As $(e_i)_{i\in\mathbb N}$ are the eigenvectors of $Q$, $X_i$ is independent of $X_j$ for any $i\neq j$, $i,j\in\mathbb N$.
Let $(Z_i)_{i\in\mathbb N}$ be a sequence of independent identically distributed real-valued standard normal variables. Then, in distribution, we have $X_i=\vert t-s\vert^h\sqrt{\lambda_i}Z_i$. Parseval's equality yields \begin{align*} \mathbb E[\vert W(t)-W(s)\vert_H^{2n}]&=\mathbb E\left[\left(\sum_{i=1}^{\infty}\langle W(t)-W(s),e_i\rangle_H^2\right) ^n\right] \\ &=\vert t-s\vert^{2hn}\mathbb E\left[\left(\sum_{i=1}^{\infty}\lambda_i Z_i^2\right)^n\right]. \end{align*} If $n=1$, we are done. Suppose that $n\geq 2$. For $p>1$ and $q$ its conjugate exponent (so that $1/p+1/q=1$), we find by H\"older's inequality \begin{align*} \sum_{i=1}^{\infty}\lambda_i Z_i^2&=\sum_{i=1}^{\infty}\lambda_i^{1/q}\lambda_i^{1/p}Z_i^2 \\ &\leq\left(\sum_{i=1}^{\infty}\lambda_i\right)^{1/q} \left(\sum_{i=1}^{\infty}\lambda_iZ_i^{2p}\right)^{1/p} \\ &=(\text{Tr}(Q))^{1/q}\left(\sum_{i=1}^{\infty}\lambda_iZ_i^{2p}\right)^{1/p}. \end{align*} Choosing $p=n>1$ and $q=n/(n-1)$, we find \begin{align*} \mathbb E[\vert W(t)-W(s)\vert_H^{2n}]&\leq\vert t-s\vert^{2hn}(\text{Tr}(Q))^{n-1}\sum_{i=1}^{\infty}\lambda_i\mathbb E[Z_i^{2n}], \end{align*} and the first result of the Proposition follows. For the second conclusion, suppose that $n\in\mathbb N$ is such that $2hn>1$. Then Kolmogorov's continuity theorem \ref{thm:Kolmogorov} yields a version of $W$ which is $\alpha$-H\"older continuous for any $\alpha<h-\frac1{2n}$. As $n$ can be chosen arbitrarily large, we conclude that there exists a version of $W$ which is H\"older continuous of order $\alpha<h$, $\mathbb P$-a.s. \end{proof} As a simple consequence of the above, we see that a $Q$-Wiener process has a version with H\"older continuous paths of order $\alpha<1/2$. In the analysis that follows in the next sections, we will make use of processes with specific regularity properties of the paths. The discussion in this Section shows that we have available specific cases of (Gaussian) stochastic processes with various H\"older regularity of the paths.
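Two elementary identities behind the computations above can be checked directly from \eqref{eq:fbm-r-func}: for $h=1/2$ the function $R^h$ reduces to the Wiener covariance $\min(s,t)$, and the increment variance satisfies $R^h(t,t)-2R^h(s,t)+R^h(s,s)=|t-s|^{2h}$, which is the scalar version of the moment identity used in the proof. A quick numerical sanity check (illustrative only, with sample points chosen arbitrarily):

```python
def r_h(s, t, h):
    """Scalar fBm covariance R^h(s,t) = (s^{2h} + t^{2h} - |t-s|^{2h}) / 2."""
    return 0.5 * (s ** (2 * h) + t ** (2 * h) - abs(t - s) ** (2 * h))

def increment_variance(s, t, h):
    """Variance of the increment: R^h(t,t) - 2 R^h(s,t) + R^h(s,s)."""
    return r_h(t, t, h) - 2.0 * r_h(s, t, h) + r_h(s, s, h)
```

The first identity explains why the $Q$-fractional Brownian motion with $h=1/2$ is a $Q$-Wiener process, and the second is exactly the scalar factor $|t-s|^{2h}$ appearing in $\mathbb E[\vert W(t)-W(s)\vert_H^2]=\vert t-s\vert^{2h}\,\mathrm{Tr}(Q)$.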
Gaussian processes will constitute our canonical class of models, and whenever we refer to such processes we will have their H\"older continuous version in mind. \section{Pathwise Volterra processes in Hilbert Spaces} In this Section we introduce and study Volterra processes of the form \eqref{f v p}. In order to give a pathwise description of Volterra integrals driven by generic H\"older paths, we will apply a variant of the celebrated Sewing Lemma from the theory of rough paths, modified to accommodate the Volterra structure inherent in the processes of interest. This lemma was first proved in \cite{HarTind} where the authors extend aspects of the theory of rough paths to the analysis of Volterra equations with singular kernels driven by irregular paths. In order to discuss Volterra integration in a pathwise manner, we will introduce an abstract space of Volterra paths, as defined in \cite{HarTind}. This definition allows us to discuss the continuity properties of Volterra paths, independent of the Volterra integral representation. However, it will be instructive for the reader to think of the expression \begin{equation}\label{vp} X^{\tau}(t):=X(\tau,t)=\int_{0}^{t}K(\tau,r)dW(r), \end{equation} where we have chosen to let a Volterra process have two arguments by decoupling the first argument $\tau$ in the kernel, and the upper integration parameter $t$, with $\tau\geq t$. The classical Volterra process is of course given by the mapping $t\mapsto X^t(t)$ (recall \eqref{f v p}). Thus, if $W:[0,T]\rightarrow H$ is a smooth path,\footnote{Notice here that $W$ is a general path, not necessarily a Gaussian process as we discussed in the previous section.
However, in typical cases we have $W$ being a $Q$-Wiener or fractional Brownian motion, which explains why we use the notation $W$.} then the integral in \eqref{vp} can be interpreted in the Riemann sense, provided that the $\mathcal{L}(H)$-valued kernel $K$ is Riemann integrable with respect to $W$, and thus we can view $X$ as a path from $\Delta_2^T$ to $H$. Note that we can then measure the regularity of $X$ in both $t$ and $\tau$ separately, where at least at a heuristic level, the regularity of $X$ in the $\tau$ parameter can be expected to be inherited from the regularity of the kernel $K$ in $\tau$. On the other hand the regularity of $X$ in the $t$ parameter will typically be inherited from the path $W$. \begin{defn}\label{Volterra Holder Space} Let $\gamma,\eta \in(0,1)$ and assume $\gamma-\eta>0$. We denote by $\mathcal{V}^{(\gamma,\eta)}(\Delta_{2}^{T},H)$ the space of all functions $f:\Delta_{2}^T \rightarrow H$ such that \begin{equation*} \|f\|_{(\gamma,\eta)}:=\|f\|_{(\gamma,\eta),1}+\|f\|_{(\gamma,\eta),1,2}<\infty \end{equation*} where we define the semi-norms by \begin{equation}\label{eq:Volterra holder norms} \begin{aligned} \|f\|_{(\gamma,\eta),1}&:=\sup_{\left(\tau,t,s\right)\in\Delta_{3}^T}\frac{\vert f^{\tau}(t)-f^\tau(s)\vert_H}{[|\tau-t|^{-\eta}|t-s|^{\gamma}]\wedge |\tau-s|^{\gamma-\eta}} \\ \|f\|_{(\gamma,\eta),1,2}&:=\sup_{\substack{\left(\tau',\tau,t,s\right)\in\Delta_{4}^T \\ \theta\in [0,1],\zeta\in [0,\gamma-\eta)}}\frac{\vert f^{\tau'}(t)-f^\tau(t)-f^{\tau'}(s)+f^\tau(s)\vert_H}{|\tau'-\tau|^{\theta}|\tau-t|^{-\theta+\zeta}\left\{[|\tau-t|^{-\eta-\zeta}|t-s|^{\gamma}]\wedge |\tau-s|^{\gamma-\eta-\zeta}\right\}}. \end{aligned} \end{equation} Here we have used the notation $f^{\tau}(t):=f(\tau,t)$ for $(\tau,t)\in\Delta_2^T$.
\end{defn} \begin{rem} Consider a subspace $\hat{\mathcal{V}}^{(\gamma,\eta)}(\Delta_2^T,H) \subset \mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$ containing all Volterra paths $f\in \mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$ such that $f_0:=f^\tau(0)=c\in H$ for all $\tau\in [0,T]$. Under the norm \begin{equation*} \|f\|_{(\gamma,\eta),*}:=|f_0|_H+\|f\|_{(\gamma,\eta)} \end{equation*} the space $\hat{\mathcal{V}}^{(\gamma,\eta)}(\Delta_2^T,H)$ is a Banach space, see e.g. \cite{HarTind}. \end{rem} \begin{rem}\label{rem: two variable extension of Votlerra-Holder space} We can extend the definition of $\mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$ above to functions $f:\Delta_3^T\rightarrow H$, where $f$ has one upper variable and two lower variables, that is, \begin{equation*} (\tau,t,s)\mapsto f^\tau(t,s). \end{equation*} In this case, we consider the semi-norms $\|f\|_{(\gamma,\eta),1}$ and $\|f\|_{(\gamma,\eta),1,2}$ to be given by \begin{align*} \|f\|_{(\gamma,\eta),1}&:=\sup_{\left(\tau,t,s\right)\in\Delta_{3}^T}\frac{\vert f^{\tau}(t,s)\vert_H}{|\tau-t|^{-\eta}|t-s|^{\gamma}\wedge |\tau-s|^{\gamma-\eta}} \\ \|f\|_{(\gamma,\eta),1,2}&:=\sup_{\substack{\left(\tau',\tau,t,s\right)\in\Delta_{4}^T \\ \theta \in [0,1],\zeta\in [0,\gamma-\eta) }}\frac{\vert f^{\tau'}(t,s)-f^\tau(t,s)\vert_H}{|\tau'-\tau|^{\theta}|\tau-t|^{-\theta+\zeta}\left[|\tau-t|^{-\eta-\zeta}|t-s|^{\gamma}\wedge |\tau-s|^{\gamma-\eta-\zeta}\right]}. \end{align*} We denote the space of such three-variable functions by $\mathcal{V}^{(\gamma,\eta)}_3(\Delta_3^T,H)$. \end{rem} The next proposition shows the relation between the space of classical H\"older paths $\mathcal{C}^\rho([0,T],H)$ and $\mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$ when $\gamma-\eta=\rho>0$. \begin{prop}\label{prop: Volterra implies Holder} Suppose $f\in \mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$ with $\gamma-\eta=\rho>0$ and $f^\tau(0)=c\in H$ is constant (in $H$) for all $\tau\in [0,T]$. 
Then the restriction $\tilde{f}(t):=f^t(t)$ of $f$ to the diagonal of $\Delta_2^T$ is $\zeta$-H\"older continuous for any $\zeta\in [0,\rho)$, i.e. $\tilde{f}\in \mathcal{C}^{\zeta}([0,T],H)$. \end{prop} \begin{proof} By assumption it follows that $f^t(0)-f^s(0)=0$ for all $s,t\in [0,T]$. Furthermore, by definition of the norms, we have that \begin{equation*} \begin{aligned} |f^t(t)-f^s(s)|_H& \leq |f^t(t)-f^t(s)|_H+|f^t(s)-f^s(s)|_H\\ &\leq |f^t(t)-f^t(s)|_H + |f^t(s)-f^s(s) -f^t(0)+f^s(0)|_H \\ &\leq \|f\|_{(\gamma,\eta),1}|t-s|^{\rho}+ T^{\gamma-\eta-\zeta}\|f\|_{(\gamma,\eta),1,2}|t-s|^{\zeta}. \end{aligned} \end{equation*} In the second majorization of the last inequality above, we applied the definition of $\|\cdot\|_{(\gamma,\eta),1,2}$ in \eqref{eq:Volterra holder norms} with $\theta=\zeta\in [0,\gamma-\eta)$, i.e., for any $(\tau',\tau,t,s)\in\Delta_4^T$, the following relation holds \begin{equation*} |f^{\tau'}(t)-f^\tau(t) -f^{\tau'}(s)+f^\tau(s)|_H\leq \|f\|_{(\gamma,\eta),1,2} |\tau'-\tau|^\zeta|\tau-t|^0|\tau-s|^{\gamma-\eta-\zeta}. \end{equation*} Thus, for $(\tau',\tau,t,s):=(t,s,s,0)$ we get the desired inequality after observing that $s^{\gamma-\eta-\zeta}\leq T^{\gamma-\eta-\zeta}$. As $\zeta\in [0,\rho)$ is arbitrary, we see that $\tilde{f}\in \mathcal{C}^{\zeta}([0,T],H)$ and the result follows. \end{proof} In order to accommodate pathwise Volterra integrals, we will need a modified version of the Sewing Lemma. But first we define a suitable space of abstract Volterra integrands. In the sequel, we will work with integrals taking values in a space of linear operators on Hilbert spaces. We therefore state the Volterra Sewing Lemma in general Banach spaces. \begin{defn}\label{abstract integrnds} Consider a Banach space $E$, and suppose $\gamma,\eta\in (0,1)$, $\beta\in (1,\infty)$ and $\kappa\in (0,1)$ are such that the following relation holds: $\beta-\kappa\geq \gamma-\eta>0$.
Denote by $\mathscr{V}^{(\gamma,\eta)(\beta,\kappa)}\left(\Delta_{3}^T,E\right)$, the space of all functions $\Xi:\Delta_{3}^T \rightarrow E$ such that \begin{equation}\label{abstract integrand space} \vertiii{\Xi}_{(\gamma,\eta)(\beta,\kappa)}:=\|\Xi\|_{\left(\gamma,\eta\right)}+\vertiii{\delta \Xi}_{\left(\beta,\kappa\right)}<\infty. \end{equation} Here $\delta$ is the operator defined for any $s\leq u\leq t\leq \tau$ acting on functions $g$ by \begin{equation}\label{delta} \delta_{u}g^\tau(t,s)=g^\tau(t,s)-g^\tau(t,u)-g^\tau(u,s). \end{equation} The norm $\|\Xi \|_{(\gamma,\eta)}$ is given as in Remark \ref{rem: two variable extension of Votlerra-Holder space}, while the quantity $\vertiii{\delta \Xi}_{(\beta,\kappa)}$ is a slight modification of the norms from Remark \ref{rem: two variable extension of Votlerra-Holder space} defined by \begin{equation*} \vertiii{\delta\Xi}_{\left(\beta,\kappa\right)}:=\vertiii{\delta\Xi}_{\left(\beta,\kappa\right),1}+\vertiii{\delta\Xi}_{\left(\beta,\kappa\right),1,2} \end{equation*} where \begin{equation}\label{dd3} \begin{aligned} \vertiii{\delta\Xi}_{\left(\beta,\kappa\right),1}&:=\sup_{\left(\tau,t,u,s\right)\in\Delta_{4}^T}\frac{|\delta_{u}\Xi^{\tau}(t,s)|_E}{[|\tau-t|^{-\kappa}|t-s|^{\beta}]\wedge |\tau-s|^{\beta-\kappa}}, \\ \vertiii{\delta\Xi}_{\left(\beta,\kappa\right),1,2}&:=\sup_{\substack{\left(\tau',\tau,t,u,s\right)\in\Delta_{5}^T \\ \theta \in [0,1],\zeta\in [0,\beta-\kappa) }}\frac{|\delta_{u}\left[\Xi^{\tau'}(t,s)-\Xi^{\tau}(t,s)\right]|_E}{|\tau'-\tau|^{\theta}|\tau-t|^{-\theta+\zeta}\left[|\tau-u|^{-\kappa-\zeta}|t-s|^{\beta}\right]}. \end{aligned} \end{equation} Here $\Xi^{\tau}(t,s):=\Xi(\tau,t,s)$. In the sequel we call $\mathscr{V}^{(\gamma,\eta)(\beta,\kappa)}(\Delta_3^T,E)$ the space of all abstract Volterra integrands. \end{defn} We are now ready to state the Sewing Lemma adapted to Volterra integrands.
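To build intuition for the operator $\delta$ defined in \eqref{delta}, the following scalar Python sketch (all concrete kernel, path and point choices are ours, for illustration only) verifies the algebraic identity $\delta_u\big[K(\tau,s)(W(t)-W(s))\big]=(K(\tau,s)-K(\tau,u))(W(t)-W(u))$, which is the cancellation making the sewing estimates tractable for integrands of Volterra type:

```python
def delta(g, tau, t, u, s):
    """delta_u g^tau(t,s) = g^tau(t,s) - g^tau(t,u) - g^tau(u,s)."""
    return g(tau, t, s) - g(tau, t, u) - g(tau, u, s)

# Scalar caricature: K(tau, s) = (tau - s)^{-1/4}, W(t) = t^{3/4}.
K = lambda tau, s: (tau - s) ** (-0.25)
W = lambda t: t ** 0.75
Xi = lambda tau, t, s: K(tau, s) * (W(t) - W(s))

tau, t, u, s = 1.0, 0.8, 0.5, 0.2   # a point of the simplex s <= u <= t <= tau
lhs = delta(Xi, tau, t, u, s)
rhs = (K(tau, s) - K(tau, u)) * (W(t) - W(u))  # the identity above
```

Note how $\delta_u\Xi$ pairs an increment of the kernel in its second variable with an increment of the path, so its size is controlled by the mixed H\"older-type bounds imposed on $K$ and $W$ below.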
The following lemma is a trivial extension of \cite[Lemma 21]{HarTind} to the case of Banach-valued Volterra kernels. \begin{lem}\label{lem:(Volterra-sewing-lemma)} \emph{(Volterra Sewing Lemma)} Let $E$ be a Banach space, and consider parameters $\gamma,\eta\in (0,1),$ $\beta\in (1,\infty)$, and $\kappa\in (0,1)$ such that $\beta-\kappa\geq \gamma-\eta>0$. There exists a unique continuous map $\mathcal{I}:\mathscr{V}^{(\gamma,\eta)(\beta,\kappa)}\left(\Delta_{3}^T,E\right)\rightarrow\mathcal{V}^{\left(\gamma,\eta\right)}\left(\Delta_{2}^T,E\right)$ such that the following statements hold true \begin{itemize}[leftmargin=0.7cm] \setlength\itemsep{.1in} \item[{\rm (i)}] The quantity $\mathcal{I}(\Xi^{\tau})(t,s):=\lim_{|\mathcal{P}|\rightarrow 0} \sum_{[u,v]\in\mathcal{P}} \Xi^{\tau}(v,u) $ exists (in $E$) for all tuples $(\tau,t,s)\in \Delta_{3}^T$, where $\mathcal{P}$ is a generic partition of $[s,t]$ and $|\mathcal{P}|$ denotes the mesh size of the partition. \item[{\rm (ii)}] For all $(\tau,t,s)\in \Delta_{3}^T$ the following inequality holds \begin{equation}\label{sy lemma bound1} \vert\mathcal{I}\left(\Xi^{\tau}\right)(t,s)-\Xi^{\tau}(t,s)\vert_E\lesssim\vertiii{\delta\Xi}_{\left(\beta,\kappa\right),1}\left(|\tau-t|^{-\kappa}|t-s|^{\beta}\wedge |\tau-s|^{\beta-\kappa}\right), \end{equation} \item[{\rm (iii)}] For all $(\tau',\tau,t,s)\in \Delta_{4}^T$, $\theta\in[0,1]$ and $\zeta\in [0,\beta-\kappa)$, we denote by $\Xi^{\tau',\tau}(t,s)=\Xi^{\tau'}(t,s)-\Xi^{\tau}(t,s)$, and the following inequality holds \begin{multline}\label{sy lemma bound2} \vert\mathcal{I}(\Xi^{\tau',\tau})(t,s)-\Xi^{\tau',\tau}(t,s)\vert_E \\ \lesssim\vertiii{\delta\Xi}_{\left(\beta,\kappa\right),1,2}\left(|\tau'-\tau|^\theta|\tau-t|^{-\theta+\zeta}\left[|\tau-t|^{-\kappa-\zeta}|t-s|^{\beta}\wedge |\tau-s|^{\beta-\kappa-\zeta}\right]\right).
\end{multline} \end{itemize} Moreover, $t\mapsto \mathcal{I}(\Xi^\tau)(t):=\mathcal{I}(\Xi^\tau)(t,0)$ is additive, in the sense that $\mathcal{I}(\Xi^\tau)(t,s)=\mathcal{I}(\Xi^\tau)(t,0)-\mathcal{I}(\Xi^\tau)(s,0)$, and we conclude that $(\tau,t)\mapsto \mathcal{I}\left(\Xi^{\tau}\right)(t)\in \mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,E)$. \end{lem} We are frequently going to work with Volterra kernels in various contexts, and we therefore state a common hypothesis on the regularity of the kernels we consider in this article. \begin{defn}\label{hyp} For $\eta\in(0,1)$, suppose $K:\Delta_2^T \rightarrow \mathcal{L}(H)$ is an operator-valued kernel on the Hilbert space $H$ which satisfies for $(\tau,t,s,r)\in\Delta_4^T$ and any $\theta,\nu\in [0,1]$ the following inequalities \begin{align}\label{bound1} \vert K(t,s)f\vert_H &\lesssim |t-s|^{-\eta}\vert f\vert_H \\\label{bound2} \vert (K(t,s)-K(t,r))f\vert_H&\lesssim |t-s|^{-\eta-\theta}|s-r|^\theta\vert f\vert_H. \\ \label{bound 22} \vert (K(\tau,s)-K(t,s))f\vert_H&\lesssim |t-s|^{-\eta-\theta}|\tau-t|^\theta\vert f\vert_H. \\ \label{bound 3} \vert(K(\tau,s)-K(\tau,r)-K(t,s)+K(t,r))f\vert_H&\lesssim |\tau-r|^{-\nu-\theta-\eta} |\tau-t|^\theta|r-s|^\nu \vert f\vert_H. \end{align} for every $f\in H$. Then we say that the kernel $K$ is a Volterra kernel of order $\eta$. We denote the space of all Volterra kernels $K$ of order $\eta\in(0,1)$ satisfying \eqref{bound1}-\eqref{bound 3} by $\mathcal{K}_{\eta}$.
We equip this space with the following semi-norm \begin{equation}\label{Knorm} \|K\|_{\mathcal{K}_{\eta}}:=\|K\|_{\eta,1}+\|K\|_{\eta,2}+\|K\|_{\eta,3}+\|K\|_{\eta,4}, \end{equation} where we define the four semi-norms on the right-hand side above by \begin{align}\label{Knorm 1} &\|K\|_{\eta,1}:=\sup_{(t,s) \in \Delta_2^T} \frac{\|K(t,s)\|_{\text{op}}}{|t-s|^{-\eta}}, \\\label{Knorm 2} & \|K\|_{\eta,2}:=\sup_{\substack{(t,u,s) \in \Delta_3^T \\ \theta \in [0,1]}} \frac{\|K(t,s)-K(u,s)\|_{\text{op}}}{|t-u|^\theta|u-s|^{-\theta-\eta}}, \\ \label{Knorm 3} & \|K\|_{\eta,3}:=\sup_{\substack{(t,u,s) \in \Delta_3^T \\ \theta \in [0,1]}} \frac{\|K(t,u)-K(t,s)\|_{\text{op}}}{|u-s|^\theta|t-u|^{-\theta-\eta}}, \\\label{Knorm 4} & \|K\|_{\eta,4}:=\sup_{\substack{(\tau',\tau,s,r) \in \Delta_4^T \\ \theta,\nu \in [0,1]}} \frac{\|K(\tau',s)-K(\tau',r)-K(\tau,s)+K(\tau,r)\|_{\text{op}}}{|\tau-r|^{-\nu-\theta-\eta} |\tau'-\tau|^\nu|r-s|^\theta}, \end{align} with $\Vert\cdot\Vert_{\text{op}}$ denoting the operator norm. \end{defn} \begin{rem} Note that if $K\in \mathcal{K}_{\eta}$, then also $K^*\in \mathcal{K}_{\eta}$. Indeed, this follows from the well-known fact that $\|K(t,s)\|_{\text{op}}=\|K^*(t,s)\|_{\text{op}}$ for any $(t,s)\in\Delta_2^T$. \end{rem} \begin{rem} We restrict our analysis here to $K(t,s)\in \mathcal{L}(H)$. One could easily extend our results to operators $K(t,s)\in \mathcal{L}(H,H')$ for some general Hilbert spaces $H$ and $H'$, or even to $K(t,s)\in\mathcal L(E,E')$ for some general Banach spaces $E$ and $E'$, by adjusting the spaces of paths and functions accordingly. However, to increase readability we confine our considerations to $\mathcal{L}(H)$. \end{rem} With Definition \ref{hyp} at hand we will now show that we can construct Volterra processes from H\"older paths in a deterministic manner using the Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)}. \begin{prop}\label{regularity W} Suppose $\gamma,\eta\in(0,1)$ are such that $\gamma-\eta>0$.
Let $W\in\mathcal{C}^{\gamma}\left([0,T],H\right)$ and consider a kernel $K\in \mathcal{K}_{\eta}$ as introduced in Definition \ref{hyp}. Let the abstract integrand $\Xi$ be given as \begin{equation}\label{spec varxi} \Xi^{\tau}(t,s):=K(\tau,s)\left(W(t)-W(s)\right). \end{equation} Then we define the pathwise Volterra process as the integral \begin{equation}\label{fractional process} X^\tau(t):=\int_{0}^{t}K(\tau,s)dW(s):=\mathcal{I}\left(\Xi^{\tau}\right)(t), \end{equation} where, for a partition $\mathcal{P}$ of $[0,t]$, the integral $\mathcal{I}\left(\Xi^{\tau}\right)$ is defined as in Lemma \ref{lem:(Volterra-sewing-lemma)} by \begin{equation}\label{sum int spec} \mathcal{I}(\Xi^{\tau})(t):=\lim_{|\mathcal{P}|\rightarrow 0} \sum_{[u,v]\in\mathcal{P}} \Xi^{\tau}(v,u), \end{equation} and the limit is taken in $H$. Moreover, we have that $(\tau,t)\mapsto X^\tau(t)\in \mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$. \end{prop} \begin{proof} With Lemma \ref{lem:(Volterra-sewing-lemma)} in mind, we recall that in order to show convergence of the integral in \eqref{sum int spec}, it is sufficient to prove that $\Xi$ given as in \eqref{spec varxi} satisfies the following conditions \begin{align*} \|\Xi\|_{(\gamma,\eta)}&=\|\Xi\|_{(\gamma,\eta),1}+\|\Xi\|_{(\gamma,\eta),1,2}<\infty \qquad {\rm and} \\ \vertiii{\delta\Xi}_{(\beta,\kappa)} & =\vertiii{\delta\Xi}_{(\beta,\kappa),1}+\vertiii{\delta\Xi}_{(\beta,\kappa),1,2}<\infty , \end{align*} for some $(\beta,\kappa)\in(1,\infty)\times[0,1)$ with $\beta-\kappa\geq\gamma-\eta$. The fact that $\|\Xi\|_{(\gamma,\eta)}<\infty$ follows directly from the assumptions on the noise $W$ and the kernel $K$: indeed, $K\in \mathcal{K}_{\eta}$ and $W\in \mathcal{C}^\gamma([0,T],H)$ yield \begin{equation*} \vert K(\tau,s)(W(t)-W(s))\vert_H\lesssim \|K\|_{\eta,1}\|W\|_{\gamma} |\tau-s|^{-\eta}|t-s|^{\gamma}.
\end{equation*} Notice that since $\tau\geq t\geq s$ we have \begin{equation}\label{simple inequality} |\tau-s|^{-\eta}|t-s|^{\gamma}\leq \left[|\tau-t|^{-\eta}|t-s|^{\gamma}\right]\wedge|\tau-s|^{\gamma-\eta}. \end{equation} This shows that $\Vert\Xi\Vert_{(\gamma,\eta),1}<\infty.$ For the second part of the semi-norm $\Vert\Xi\Vert_{(\gamma,\eta)}$, we argue as follows. Firstly, for $\tau'\geq\tau$, we find $$ \Xi^{\tau'}(t,s)-\Xi^{\tau}(t,s)=(K(\tau',s)-K(\tau,s))(W(t)-W(s)). $$ Hence, from the semi-norm in \eqref{Knorm 2}, we find \begin{equation} \vert(K(\tau',s)-K(\tau,s))(W(t)-W(s))\vert_H\leq \|W\|_{\gamma}\Vert K\Vert_{\eta,2}\vert\tau'-\tau\vert^{\theta}\vert\tau-s\vert^{-\theta-\eta}|t-s|^{\gamma} \end{equation} for any $\theta\in[0,1]$. Invoking \eqref{simple inequality}, it is readily seen that also $\|\Xi\|_{(\gamma,\eta),1,2}<\infty$. Next we show the finiteness of $\vertiii{\delta\Xi}_{(\beta,\kappa)}$: First, we investigate the action of $\delta$ on the integrand $\Xi$ given in \eqref{spec varxi}. By elementary algebraic manipulations, we observe that for $(\tau,t,u,s)\in \Delta_4^T$ the following relation holds \begin{equation*} \delta_{u}\Xi^\tau(t,s)=\left(K(\tau,s)-K(\tau,u)\right)\left(W(t)-W(u)\right). \end{equation*} Again using that the kernel $K\in \mathcal{K}_{\eta}$ and the assumption that $W\in \mathcal{C}^\gamma([0,T],H)$, it is readily checked that for any $\theta\in[0,1]$ \begin{equation*} \vert\delta_{u}\Xi^\tau(t,s)\vert_H\leq \|K\|_{\eta,3}\|W\|_\gamma |\tau-s|^{-\eta-\theta}\vert s-u\vert^{\theta}|t-u|^{\gamma}\lesssim \vert\tau-s\vert^{-\eta-\theta}\vert t-s\vert^{\gamma+\theta}. \end{equation*} We therefore set $\beta=\gamma+\theta$ and $\kappa=\eta+\theta$, and choose $\theta\in [0,1]$ such that $(\beta,\kappa)\in (1,\infty)\times(0,1)$ (we note that this is always possible due to the restriction $\gamma-\eta>0$).
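The elementary bound \eqref{simple inequality} is readily tested numerically in the scalar setting. The following sketch samples random ordered triples $s\leq t\leq\tau$ and verifies the inequality; the exponents $\gamma=0.6$, $\eta=0.3$ and the sampling are illustrative choices, not taken from the text.

```python
import random

# Scalar sanity check of the elementary bound used above: for s <= t <= tau,
#   (tau - s)^(-eta) * (t - s)^gamma
#     <= min( (tau - t)^(-eta) * (t - s)^gamma, (tau - s)^(gamma - eta) ).
# The exponents and the random sampling are illustrative choices only.
def check_simple_inequality(gamma=0.6, eta=0.3, trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        s, t, tau = sorted(rng.uniform(0.0, 1.0) for _ in range(3))
        if tau - t < 1e-9 or t - s < 1e-9:
            continue  # skip near-degenerate triples
        lhs = (tau - s) ** (-eta) * (t - s) ** gamma
        rhs = min((tau - t) ** (-eta) * (t - s) ** gamma,
                  (tau - s) ** (gamma - eta))
        if lhs > rhs * (1.0 + 1e-12):
            return False
    return True
```

The check succeeds for any exponents with $\gamma,\eta>0$, reflecting that the bound only uses the ordering $s\leq t\leq\tau$.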
Then we see that $\vert\tau-s\vert\geq\vert t-s\vert$ and $\vert\tau-s\vert\geq\vert\tau-t\vert$, and therefore $$ \vert\tau-s\vert^{-\kappa}\vert t-s\vert^{\beta}\leq \left[ \vert\tau-t\vert^{-\kappa}\vert t-s\vert^{\beta}\right]\wedge\vert t-s\vert^{\beta-\kappa}. $$ It follows that $ \vertiii{\delta\Xi}_{(\beta,\kappa),1}<\infty$. We also point out that $\beta-\kappa=\gamma-\eta$. To prove that also $\vertiii{\delta\Xi}_{(\beta,\kappa),1,2}<\infty$, we follow in the same direction as outlined above. However, rather than invoking \eqref{bound2}, we will need to make use of \eqref{bound 3}. In particular, we need to consider the increment in the upper variables in $\Xi$, i.e.\ \begin{equation*} \Xi^{\tau',\tau}(t,s)=\Xi^{\tau'}(t,s)-\Xi^{\tau}(t,s), \end{equation*} and then the action of $\delta_u$ on $\Xi^{\tau',\tau}(t,s)$ is given by \begin{equation*} \delta_u\Xi^{\tau',\tau}(t,s)=\left(K(\tau',s)-K(\tau,s)-K(\tau',u)+K(\tau,u)\right)\left(W(t)-W(u)\right). \end{equation*} Thus, invoking \eqref{bound 3} on the kernel $K$, we can follow the exact same routine as in the proof that $\vertiii{\delta\Xi}_{(\beta,\kappa),1}<\infty$. One sees that for any parameters $\theta,\nu\in [0,1]$ and $(\tau',\tau,t,u,s)\in \Delta_5^T$ we have \begin{equation*} \vert\delta_u\Xi^{\tau',\tau}(t,s)\vert_H \leq \|K\|_{\eta,4}\|W\|_\gamma |\tau'-\tau|^\nu |\tau-u|^{-\nu-\theta-\eta}|u-s|^\theta |t-u|^\gamma. \end{equation*} Using that for any $0\leq \zeta\leq \nu$ we have $|\tau-u|^{-\nu-\theta-\eta}\leq |\tau-t|^{-\nu+\zeta}|\tau-u|^{-\eta-\theta-\zeta}$, we obtain that \begin{equation} \vert\delta_u\Xi^{\tau',\tau}(t,s)\vert_H \leq \|K\|_{\eta,4}\|W\|_\gamma |\tau'-\tau|^\nu|\tau-t|^{-\nu+\zeta}\left[|\tau-u|^{-\eta-\theta-\zeta}|t-s|^{\gamma+\theta}\right]. \end{equation} We then choose $\theta\in [0,1]$ such that $\gamma+\theta>1$ and $\theta+\eta+\zeta<1$, which is possible by restricting $\zeta\in [0,\gamma-\eta)$, where $\gamma-\eta>0$ by assumption.
We therefore set $\beta=\theta+\gamma$ and $\kappa=\theta+\eta$, and it follows that $\vertiii{ \delta \Xi}_{(\beta,\kappa),1,2}<\infty $, where we recall that this norm is defined in \eqref{dd3}. Thus we may invoke Lemma \ref{lem:(Volterra-sewing-lemma)} for the construction of the integral $\mathcal{I}(\Xi)$ as given in \eqref{sum int spec}, and we get that this integral exists as a unique limit. It follows directly from Lemma \ref{lem:(Volterra-sewing-lemma)} that $X\in \mathcal{V}^{(\gamma,\eta)}(\Delta_2^T,H)$. This concludes the proof. \end{proof} Let us illustrate Proposition \ref{regularity W} by providing an example which will be discussed in the applications, Section \ref{sect:applications}. \begin{example}\label{example fbm} For $(\tau,s)\in\Delta_2^T$, assume that the kernel $K(\tau,s)\in \mathcal{L}(H)$ is given in the form $K(\tau,s)=(\tau-s)^{-\eta}A$, where $\eta \in (0,\frac{1}{2})$ and $A\in \mathcal{L}(H)$. Furthermore, for any $\alpha\in (0,\frac{1}{2})$ such that $\alpha>\eta$, consider an $\alpha$-H\"older continuous trajectory of an $H$-valued $Q$-Wiener process $W$. Then we can give a pathwise construction of an infinite-dimensional version of what is known as the Riemann-Liouville fractional Brownian motion by setting \begin{equation*} X^\tau (t)=\int_0^t(\tau-s)^{-\eta}AdW(s)=\mathcal{I}\left(\Xi^\tau\right)(t,0), \end{equation*} where the integral is constructed in terms of Proposition \ref{regularity W}. An interesting observation here is that the construction of this process is given as a purely deterministic functional $\mathscr{I}$ applied to the Wiener process $W$, i.e. $X=\mathscr{I}\left(W\right)$.
This tells us in particular that when we have constructed a Wiener processes on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, and according to Kolmogorov's continuity theorem \ref{thm:Kolmogorov} found the set $\mathcal{N}^c\subset \Omega$ of full measure such that for each $\omega\in \mathcal{N}^c$ the mapping $t\mapsto W(\omega,t)$ has $\alpha-$H\"older continuous trajectories for $\alpha\in (0,\frac{1}{2})$, then the trajectory $(\tau,t)\mapsto X^\tau(\omega,t)\in\mathcal{V}^{(\alpha,\eta)}(\Delta_2^T,H)$. Recall in particular from Proposition \ref{prop: Volterra implies Holder} that the restriction mapping $t\mapsto X(\omega,t):=X^t(\omega,t)$ is $\rho$-H\"older continuous with $\rho=\alpha-\eta$. This illustrates the point that simply from a probabilistic construction of the Wiener process, and the identification of the set of $\mathcal{N}^c\subset \Omega$ on which the Wiener process is continuous, one can construct a vast class of processes $X:\mathcal{N}^c\times[0,T]\rightarrow H$ given by $X(\omega,t)=\mathscr{I}(W(\omega,\cdot))(t)$. In the next section we will show that under mild conditions on the deterministic operators $K$, the random variable \begin{equation*} \omega \mapsto X(\omega,t)=\mathscr{I}(W(\omega,\cdot))(t) \end{equation*} is Gaussian on the probability space $(\Omega,\mathcal{F},\mathbb{P})$, with an explicit covariance operator given as a two-dimensional, possibly singular, integral with respect to the covariance operator of W. \end{example} \section{Gaussian Volterra Processes}\label{sec: Gaussain Volterra processes} With the Sewing Lemma \ref{lem:(Volterra-sewing-lemma)} at hand, we are now ready to investigate Volterra paths driven by Gaussian processes. The processes we consider will be constructed in a pathwise manner, as limits of Riemann-type sums through the application of Lemma \ref{lem:(Volterra-sewing-lemma)}. 
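To make such Riemann-type sums concrete, the following is a minimal numerical sketch in the scalar case $H=\mathbb{R}$ for the Riemann-Liouville kernel of Example \ref{example fbm}; a smooth placeholder path stands in for a sampled Wiener trajectory, and all parameter values are illustrative assumptions only.

```python
import math

# Riemann-sum approximation of X^tau(t) = int_0^t (tau - s)^(-eta) dW(s)
# in the scalar case H = R, mirroring the sewing-lemma construction as a
# limit of left-point sums.  W is a smooth placeholder path (a sampled
# Wiener trajectory would be used instead); tau, t, eta are illustrative.
def volterra_riemann_sum(W, tau, t, eta, n):
    """Left-point sum of (tau - u)^(-eta) * (W(v) - W(u)) over 2^n intervals of [0, t]."""
    total, N = 0.0, 2 ** n
    for k in range(N):
        u, v = t * k / N, t * (k + 1) / N
        total += (tau - u) ** (-eta) * (W(v) - W(u))
    return total

W = math.sin                     # placeholder Hoelder path
tau, t, eta = 1.0, 0.8, 0.25     # tau > t, so the kernel stays bounded here
approx = [volterra_riemann_sum(W, tau, t, eta, n) for n in (4, 6, 8, 10)]
gaps = [abs(a - b) for a, b in zip(approx, approx[1:])]
```

Successive dyadic refinements stabilize, as the sewing lemma predicts; replacing `W` by an interpolated fBm sample would give a pathwise approximation of the Riemann-Liouville process.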
When the deterministic Volterra kernel $K$ takes values in the linear operators on $H$ with sufficient regularity of the singularity, we show that these processes are again Gaussian. More specifically, we consider Volterra processes of the form \begin{equation}\label{eq:general Volt proc} X^\tau(t)=\int_0^t K(\tau,s)dW(s), \end{equation} where $\tau\geq t$ and $W$ is a Gaussian process with zero mean and a sufficiently regular covariance operator \begin{equation}\nonumber Q_W(u,u'):=\mathbb{E}[W(u)\otimes W(u')]. \end{equation} Recall from Section \ref{sec: inf dim gaussian analysis} that the covariance operator is a bounded linear operator on $H$. When the Volterra kernel $K\in \mathcal{K}_{\eta}$ for some $\eta\in [0,1)$, and the covariance function $Q_W$ is sufficiently regular, we show that $X$ given in \eqref{eq:general Volt proc} is again a Gaussian process. We derive the characteristic functional of $X$, and from this give an explicit computation of the covariance structure of $X$, denoted by $Q_X$. In fact, we show that the covariance operator $Q_X$ can be written as a deterministic functional of the kernel $K$ and the covariance of $W$. That is, the covariance operator $Q_X$ can be written as \begin{equation} \label{covar-functional-motivation} Q_X=\mathscr{I}\left(K,Q_W\right), \end{equation} where $\mathscr{I}$ is an integral operator, given as a double Young-Volterra integral. Furthermore, we prove that the operator $\mathscr{I}$ is Lipschitz continuous in both of its arguments. Stability of the covariance operator tells us in particular that if we make small (sufficiently regular) perturbations of the covariance associated to a Gaussian process $W$, then the covariance associated to the Gaussian process $X$ changes by no more than the size of these perturbations. In view of statistical estimation, this demonstrates robustness of the model with respect to data.
Let us begin by motivating the construction of the integral functional $\mathscr{I}$ in \eqref{covar-functional-motivation}. The covariance operator $Q_X$ associated to $X$ will be defined by the double integral from $(0,0)$ to a point $(t,t') \in [0,T]^2$ as follows \begin{equation}\label{eq:Qw integral rep} Q_X ^{\tau,\tau'}(t,t') = \int_0^t \int_0 ^{t'}K(\tau,r)d^2Q_W(r, r')K(\tau',r')^*, \end{equation} where $K^*$ denotes the adjoint operator of $K$, and the differential $d^2Q_W$ will be given meaning below. If $Q_W$ is smooth, then we can think of this as given by the mixed partial derivative $d^2Q_W(r,r')= \frac{\partial^2Q_W}{\partial t\partial s}(r,r')drdr'$. From the proposed representation \eqref{eq:Qw integral rep}, we see that $Q_X^{\tau,\tau'}(t,t')f\in H$ for any $f\in H$, since $K$ and $Q_W$ take values in the bounded linear operators on $H$. At this stage, we would like to comment that the order of the integrands in \eqref{eq:Qw integral rep} is natural when working with operator-valued integrals corresponding to covariance functions. Since $Q_W$ and $K$ take values in the space of linear operators on $H$, their non-commutative nature requires special care. Indeed, first recall that for $(\tau,v,u), (\tau',v',u')\in \Delta_3^T$ and $f,g\in H$ we have \begin{equation*} \mathbb{E}\left[ \langle K(\tau,u) W(u),f\rangle_H \langle K(\tau',v)W(v),g\rangle_H \right] =\mathbb{E}\left[ \langle W(u),K(\tau,u)^*f\rangle_H \langle W(v),K(\tau',v)^*g\rangle_H \right]. \end{equation*} Since $X$ given in \eqref{eq:general Volt proc} is constructed as a limit of a Riemann sum as in Proposition \ref{regularity W}, let us motivate the construction of \eqref{eq:Qw integral rep} by considering an approximation of $X$ given by a partition $\mathcal P$ of $[0,t]$ as \begin{equation} X^\tau_{\mathcal{P}}(t):=\sum_{[u,v]\in \mathcal{P}} K(\tau,u)( W(v)-W(u)).
\end{equation} Then, the covariance operator between $X^\tau_{\mathcal{P}}(t)$ and $X^{\tau'}_{\mathcal{P}'}(t')$ (where $\mathcal{P}'$ is a partition of $[0,t']$) is computed in the following way \begin{equation}\label{eq: co-variance comp} \begin{aligned} \mathbb{E}\bigg[& \langle \sum_{[u,v]\in \mathcal{P}} K(\tau,u)( W(v)-W(u)),f\rangle_H \langle \sum_{[u',v']\in \mathcal{P}'}K(\tau',u')(W(v')-W(u')),g\rangle_H \bigg] \\ &= \sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} \mathbb{E}\left[ \langle( W(v)-W(u)), K(\tau,u)^*f\rangle_H \langle (W(v')-W(u')),K(\tau',u')^*g\rangle_H \right] \\ &= \sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} \left\langle \square_{(u,u'),(v,v')}Q_W K(\tau,u)^*f, K(\tau',u')^*g\right\rangle_H \\ &= \langle \sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} K(\tau',u') \square_{(u,u'),(v,v')}Q_W K(\tau,u)^* f, g\rangle_H. \end{aligned} \end{equation} Here, we used the duality of linear operators and \begin{equation} \square_{(u,u'),(v,v')}Q_W= \mathbb{E}[(W(v)-W(u))\otimes(W(v')-W(u'))] \end{equation} by recalling the definition of the increment operator $\square$ in \eqref{eq:rec increment}. If $Q_W$ is mixed-differentiable in its two variables, we have \begin{equation} \square_{(u,u'),(v,v')}Q_W\simeq \frac{\partial^2 Q_W}{\partial t\partial s}(u,u')(v-u)(v'-u') \end{equation} whenever $v$ is close to $u$ and $v'$ is close to $u'$. However, we would like to allow for possibly singular covariance functions where the mixed partial derivative $\frac{\partial^2Q_W}{\partial t\partial s} $ does not exist (possibly everywhere).
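As a scalar sanity check of the double sum in \eqref{eq: co-variance comp}: when $K\equiv 1$, the sum of rectangular increments telescopes exactly to $\square_{(0,0),(t,t')}Q_W=Q_W(t,t')$ for any pair of partitions, since the fBm covariance below vanishes whenever one of its arguments is $0$. The Hurst parameter and the partitions are illustrative choices only.

```python
# Scalar check of the double Riemann sum in the special case K identically 1:
# the sum of rectangular increments telescopes to square_{(0,0),(t,t')}Q_W
# = Q_W(t,t') for *any* pair of partitions, since R^h(0,.) = R^h(.,0) = 0.
def R(h, u, v):
    # R^h(u,v) = (|u|^{2h} + |v|^{2h} - |u-v|^{2h}) / 2
    return 0.5 * (abs(u) ** (2 * h) + abs(v) ** (2 * h) - abs(u - v) ** (2 * h))

def square_increment(Q, u, up, v, vp):
    # rectangular increment square_{(u,u'),(v,v')} Q
    return Q(v, vp) - Q(u, vp) - Q(v, up) + Q(u, up)

def double_sum(Q, P, Pp):
    # double Riemann sum with K = 1 over partitions P of [0,t] and P' of [0,t']
    return sum(square_increment(Q, u, up, v, vp)
               for (u, v) in P for (up, vp) in Pp)

h, t, tp = 0.3, 0.7, 0.9
Q = lambda u, v: R(h, u, v)
P = [(0.0, 0.2), (0.2, 0.5), (0.5, 0.7)]    # partition of [0, t]
Pp = [(0.0, 0.45), (0.45, 0.9)]             # partition of [0, t']
err = abs(double_sum(Q, P, Pp) - Q(t, tp))
```

For a non-constant kernel $K$ the sums no longer telescope, which is precisely why the two-dimensional sewing argument below is needed.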
Thus, taking the limit when $|\mathcal{P}|\vee|\mathcal{P}'| \rightarrow 0$ in $X^{\tau}_{\mathcal{P}}$ and $X^{\tau'}_{\mathcal{P}'}$, one would need to show that the corresponding covariance integral appearing as the limit \begin{equation} \lim_{|\mathcal{P}|\vee|\mathcal{P}'| \rightarrow 0} \sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} K(\tau',u') \square_{(u,u'),(v,v')}Q_W K(\tau,u)^* \end{equation} converges in $\mathcal{L}(H)$. \subsection{Construction of irregular covariance functions} Our first goal will be to show the existence of the integral appearing on the right-hand side of \eqref{eq:Qw integral rep}. To this end, we will give an extension of the Volterra Sewing Lemma presented in Lemma \ref{lem:(Volterra-sewing-lemma)}, to allow for two-dimensional operator-valued integrals. As the integrals we are concerned with have the very specific form given in \eqref{eq:Qw integral rep}, we will tailor the construction of the two-dimensional integral to this specific case. Our second goal is to show that the process defined in \eqref{eq:general Volt proc} is Gaussian if $W$ is Gaussian and $K$ is deterministic, whenever the integral on the right-hand side of \eqref{eq:Qw integral rep} exists. Before moving on to the construction of the double integral in \eqref{eq:Qw integral rep}, we give a definition of a class of suitable two-parameter functions $Q$ which shall be used in the sequel for the construction of covariance operators. \begin{defn}\label{def:reg covar} Let $\alpha\in (0,1)$ and let $Q:[0,T]^2\rightarrow \mathcal{L}(H)$.
We say that $Q$ is an $\alpha$-regular covariance operator if it satisfies \begin{equation}\label{QNorm} \|Q\|_{\mathcal{Q}_\alpha}:= \|Q\|_{\alpha, (1,0)}+\|Q\|_{\alpha, (0,1)}+\|Q\|_{\alpha, (1,1)}<\infty, \end{equation} where we define \begin{align}\label{Qnorm1} &\|Q\|_{\alpha, (1,0)}:=\sup_{\substack{(t,s)\in \Delta_2^T \\ t' \in [0,T]}} \frac{\|Q(t,t')-Q(s,t')\|_{op}}{|t-s|^\alpha} \\\label{Qnorm2} &\|Q\|_{\alpha, (0,1)}:=\sup_{\substack{t\in [0,T] \\ (t',s') \in \Delta_2^T}} \frac{\|Q(t,t')-Q(t,s')\|_{op}}{|t'-s'|^\alpha} \\\label{Qnorm3} & \|Q\|_{\alpha, (1,1)}:=\sup_{\substack{(t,s)\in \Delta^T_2 \\ (t',s') \in \Delta_2^T}} \frac{\|\square_{(s,s'),(t,t')}Q\|_{op}}{\left[|t-s||t'-s'|\right]^\alpha}, \end{align} where we recall the rectangular increment is given by \begin{equation}\nonumber \square_{(u,u'),(v,v')}Q=Q(v,v')-Q(u,v')-Q(v,u')+Q(u,u'). \end{equation} We denote the class of all $\alpha$-regular covariance operators by $\mathcal{Q}_\alpha$. \end{defn} The reader should notice that the space $\mathcal Q_{\alpha}$ of $\alpha$-regular covariance operators is larger than the space of true covariance operators $Q:[0,T]^2\rightarrow\mathcal L(H)$ (with the same path-regularity, of course). Indeed, if $Q(s,t)=\mathbb E[X(s)\otimes X(t)]$ is the covariance operator of a mean-zero, square-integrable $H$-valued stochastic process $X$, then $Q(t,t)$ is a symmetric, positive-semidefinite trace-class operator, a restriction not imposed on the elements of $\mathcal Q_{\alpha}$. Thus, our results in the next subsection cover a larger family of mappings $Q:[0,T]^2\rightarrow\mathcal L(H)$ than merely those which arise as covariance operators. We prefer to keep the adjective ``covariance'' associated to this larger class simply because we typically have such operators in mind. \begin{rem}\label{rem: zero boundary co-variance} If $Q:[0,T]^2\rightarrow \mathcal{L}(H)$ is $0$ when one of the variables is $0$, i.e.
$Q(0,t)=Q(t,0)=0$, then $Q\in \mathcal{Q}_\alpha$ if $\|Q\|_{\alpha,(1,1)}<\infty$. Indeed, by subtraction of $0=Q(t,0)-Q(s,0)$ in \eqref{Qnorm1}, we observe that \begin{equation*} \|Q\|_{\alpha,(1,0)}=\sup_{\substack{(t,s)\in \Delta^T_2 \\ t' \in [0,T]}} \frac{\|Q(t,t')-Q(s,t')-Q(t,0)+Q(s,0)\|_{op}}{|t-s|^\alpha} \leq \|Q\|_{\alpha,(1,1)}T^\alpha. \end{equation*} Similarly, we can bound $\|Q\|_{\alpha,(0,1)}$. \end{rem} The space $\mathcal{Q}_\alpha$ is somewhat non-standard (at least from the point of view of covariance functions), and thus we provide below an example of a covariance operator contained in this space. We consider here the covariance operator of a fractional Brownian motion, and show that it is contained in such a space. For conciseness we only consider the case of fractional Brownian motion with Hurst parameter $0<h\leq \frac{1}{2}$, as this is the case which will be discussed in later applications. \begin{example}\label{co-variance of fBm} Let $Q(t,s)=R^h(t,s)Q$ be the covariance operator of a fractional Brownian motion on a Hilbert space $H$ with Hurst parameter $h\in (0,\frac{1}{2}]$, where $R^h:[0,T]^2\rightarrow \mathbb{R}$ is given as in \eqref{eq:fbm-r-func}. Then, we have that \begin{equation} \square_{(u,u'),(v,v')}R^h=\frac{1}{2}( -|v-v'|^{2h} +|v'-u|^{2h}+|v-u'|^{2h}-|u-u'|^{2h}). \end{equation} Using that for $\alpha\in (0,1]$ there exists a $c>0$ such that for any two numbers $a,b\in \mathbb{R}$, $||a|^\alpha-|b|^\alpha|\leq c |a-b|^\alpha$, it follows that \begin{equation} |\square_{(u,u'),(v,v')}R^h|\leq c |v-u|^{2h} \wedge |v'-u'|^{2h}. \end{equation} By using the interpolation inequality $a\wedge b\leq a^\theta b^{1-\theta}$ for any $\theta\in [0,1]$ and $a,b\in \mathbb{R}_+$, we find that \begin{equation} |\square_{(u,u'),(v,v')}R^h|\leq c |v-u|^{h} |v'-u'|^{h}.
\end{equation} It follows that the covariance operator $R^h(t,s)Q$ associated to a fractional Brownian motion with Hurst parameter $h\in (0,\frac{1}{2}]$ is contained in the space $\mathcal{Q}_h$. \end{example} The following theorem can be viewed as an extension (or combination) of the Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)} proven in \cite{HarTind} and the multi-parameter Sewing lemma found in \cite{Harang}. \begin{thm}\label{thm:general covariance integrals} Let $\alpha\in (0,1)$, $\eta\in [0,1)$ such that $\alpha-\eta>0$. Consider a covariance operator $Q:[0,T]^2 \rightarrow \mathcal{L}(H)$ in $\mathcal{Q}_\alpha$, and suppose $K\in \mathcal{K}_{\eta}$ is a Volterra kernel. For partitions $\mathcal{P}$ of $[0,t]$ and $\mathcal{P}'$ of $[0,t']$, define the approximating Volterra covariance function by \begin{equation}\label{partition M} M_{\mathcal{P}\times \mathcal{P}'}^{\tau,\tau'}(t,t'):=\sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} K(\tau,u)\square_{(u,u'),(v,v')}Q K(\tau',u')^*. 
\end{equation} Then there exists a unique operator in $\mathcal{L}(H)$ given as the limit (in operator-norm) \begin{equation}\label{integral def} \mathcal{I}(K,Q)^{\tau,\tau'}(t,t'):=\int_{0}^t\int_{0}^{t'} K(\tau,r)d^2Q(r,r')K(\tau',r')^*:=\lim_{\substack{|\mathcal{P}|\rightarrow 0 \\ |\mathcal{P}'|\rightarrow 0}}M_{\mathcal{P}\times \mathcal{P}'}^{\tau,\tau'}(t,t'), \end{equation} satisfying the additivity relation \begin{equation}\label{additivity} \square_{(u,u'),(v,v')} \mathcal{I}(K,Q)^{\tau,\tau'}=\int_{u}^v\int_{u'}^{v'} K(\tau,r)d^2Q(r,r')K(\tau',r')^*. \end{equation} Furthermore, there exists a pair $(\beta,\kappa)\in (1,\infty)\times[0,1)$ with $\beta-\kappa\geq \rho:=\alpha-\eta$ and a constant~$C>0$ such that the following statements hold \begin{itemize}[leftmargin=0.8cm] \setlength\itemsep{.1in} \item[{\rm (i)}] For $(\tau,t,s),(\tau',t',s')\in \Delta_3^T$ the following inequality holds \begin{multline}\label{reg of covar} \|\int_{s}^t \int_{s'}^{t'} \left(K(\tau,r)-K(\tau,s)\right)d^2Q(r,r')\left(K(\tau',r')^*-K(\tau',s')^*\right)\|_{op} \\ \leq C \|K\|_{\mathcal{K}_{\eta}}^2\|Q\|_{\mathcal{Q}_\alpha} \left( \left[|\tau-t||\tau-t'|\right]^{-\kappa}\left[|t'-s'||t-s|\right]^{\beta} \right)\wedge \left[|\tau'-s'||\tau-s|\right]^{\beta-\kappa}, \end{multline} \item[{\rm (ii)} ] For $(\tau,t,s)\in \Delta_3^T$, $(\tau_1',\tau_2',t',s')\in \Delta_4^T$ and any $\zeta\in [0,\rho)$ we have \begin{multline}\label{reg of covar inc rec} \qquad \|\int_{s}^t \int_{s'}^{t'} \left(K(\tau,r)-K(\tau,s)\right)d^2Q(r,r')\left(\square_{(\tau_2',s'),(\tau_1',r')}K^*\right)\|_{op} \leq C \|K\|_{\mathcal{K}_{\eta}}^2\|Q\|_{\mathcal{Q}_\alpha} |\tau_1'-\tau_2'|^\theta \\ \times |\tau_2'-t'|^{-\theta+\zeta}\left( \left[|\tau-t||\tau-t'|^{-\zeta}\right]^{-\kappa}\left[|t'-s'||t-s|\right]^{\beta} \right)\wedge \left[|\tau-s||\tau'-s'|^{-\zeta}\right]^{\beta-\kappa}.
\end{multline} \item[{\rm (iii)} ] For $(\tau_1,\tau_2,t,s)\in \Delta_4^T$, $(\tau',t',s')\in \Delta_3^T $ and any $\zeta\in [0,\rho)$ we have \begin{multline}\label{reg of covar rec inc} \qquad \|\int_{s}^t \int_{s'}^{t'} \left(\square_{(\tau_2,s),(\tau_1,r)}K\right)d^2Q(r,r')\left(K(\tau',r')^*-K(\tau',s')^*\right)\|_{op} \leq C \|K\|_{\mathcal{K}_{\eta}}^2\|Q\|_{\mathcal{Q}_\alpha} |\tau_1-\tau_2|^\theta \\ \times |\tau_2-t|^{-\theta+\zeta}\left( \left[|\tau-t||\tau-t'|^{-\zeta}\right]^{-\kappa}\left[|t'-s'||t-s|\right]^{\beta} \right)\wedge \left[|\tau-s||\tau'-s'|^{-\zeta}\right]^{\beta-\kappa}. \end{multline} \item[{\rm (iv)} ] For $(\tau_1,\tau_2,t,s),(\tau_1',\tau_2',t',s')\in \Delta_4^T$ and any $\zeta\in [0,\rho)$ we have \begin{multline}\label{reg2 of covar} \qquad \|\int_{s}^t \int_{s'}^{t'} \left(\square_{(\tau_2,s),(\tau_1,r)}K\right)d^2Q(r,r')\left(\square_{(\tau_2',s'),(\tau_1',r')}K^*\right)\|_{op} \leq C \|K\|_{\mathcal{K}_{\eta}}^2\|Q\|_{\mathcal{Q}_\alpha} [|\tau_1-\tau_2||\tau_1'-\tau_2'|]^\theta \\ \times [|\tau_2-t||\tau_2'-t'|]^{-\theta+\zeta}\left( \left[|\tau-t||\tau-t'|^{-\zeta}\right]^{-\kappa}\left[|t'-s'||t-s|\right]^{\beta} \right)\wedge \left[|\tau-s||\tau'-s'|^{-\zeta}\right]^{\beta-\kappa}. \end{multline} \end{itemize} \end{thm} \begin{proof} The first objective of this proof is to show the existence and uniqueness of the two-dimensional integral defined in \eqref{integral def}. To this end, we will also encounter one-dimensional integrals formed from the two-dimensional integrand $K(\tau,u)\square_{(u,u'),(v,v')}Q K(\tau',u')^*$ used in the definition \eqref{partition M} which will be called boundary integrals. Since these integrals are simply constructed from the one-dimensional Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)} in the Banach space $\mathcal{L}(H)$, we will only briefly comment on their construction here, and focus on the two-dimensional integral. 
The one-dimensional integrals are given in the form \begin{align}\label{N1} \mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t') :=& \int_{s}^t K(\tau,r)\left[dQ(r,t')-dQ(r,s')\right]K(\tau',s')^* \\\nonumber =&\int_{s}^t\int_{s'}^{t'} K(\tau,r)d^2Q(r,r')K(\tau',s')^*, \end{align} where we note that there is no dependence on the integration variable $r'$ in the integrand of the second integral, and thus the integral in this variable exists naturally as an integral over a constant. The differential $dQ(r,t')$ with fixed second argument $t'$ is then meant as the regular one-variable differential. We can define the second boundary integral $\mathcal{I}_2$ in the same way, by integrating over the interval $[s',t']$. In the sequel, we will frequently analyse the mapping $$ (s,s',t,t',\tau,\tau')\mapsto K(\tau,s)\square_{(s,s'),(t,t')}QK(\tau',s')^*. $$ From time to time, we will simply write $K\square QK^*$ as a generic notation. At this point we let $\delta^1$ and $\delta^2$ denote the $\delta$ given in \eqref{delta} restricted to the first and third, and second and fourth variable, respectively, of a four-variable function $f(s,s',t,t')$. That is, the action of $\delta^1$ on $f$ for $s\leq u\leq t$ is given by \begin{equation}\label{delta action} \delta^1_uf(s,s',t,t')=f(s,s',t,t')-f(u,s',t,t')-f(s,s',u,t'), \end{equation} and the action of $\delta^2$ is defined similarly over the variables $s' \leq u' \leq t'$.
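The action \eqref{delta action} on the generic integrand $K\square QK^*$ can be verified numerically in the scalar case: by additivity of the rectangular increment in its first pair of variables, $\delta^1$ produces an increment of the left kernel factor times a rectangular increment of $Q$. The functions below are arbitrary smooth stand-ins, chosen only for illustration.

```python
import math

# Scalar check of delta^1 acting on f(s,s',t,t') = K(tau,s) * sq_{(s,s'),(t,t')}Q * K(tau',s'),
# using delta^1_z f = f(s,s',t,t') - f(z,s',t,t') - f(s,s',z,t').
# By additivity of the rectangular increment in its first pair of variables,
# this collapses to (K(tau,s) - K(tau,z)) * sq_{(z,s'),(t,t')}Q * K(tau',s').
# K and Q are arbitrary smooth scalar stand-ins.
K = lambda tau, s: math.exp(-0.5 * (tau - s))
Q = lambda u, v: math.sin(u) * math.cos(v) + u * v
sq = lambda u, up, v, vp: Q(v, vp) - Q(u, vp) - Q(v, up) + Q(u, up)

def f(tau, taup, s, sp, t, tp):
    return K(tau, s) * sq(s, sp, t, tp) * K(taup, sp)

tau, taup = 2.0, 1.9
s, z, t = 0.1, 0.4, 0.8      # s <= z <= t
sp, tp = 0.2, 0.6
lhs = (f(tau, taup, s, sp, t, tp)
       - f(tau, taup, z, sp, t, tp)
       - f(tau, taup, s, sp, z, tp))
rhs = (K(tau, s) - K(tau, z)) * sq(z, sp, t, tp) * K(taup, sp)
```

The identity is exact (up to floating-point rounding), since it only uses the additivity of $\square$ and elementary algebra.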
Then, using the Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)} we can show that there exists a pair $(\beta,\kappa)\in (1,\infty)\times[0,1)$ such that \begin{align}\label{1 dim reg N} \|\mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t')-&K(\tau,s)\square_{(s,s'),(t,t')}Q K(\tau',s')^*\|_{op} \\\nonumber \leq &C \vertiii{\delta^1 \left(K(\cdot,\cdot) \square_{(\cdot,s'),(\cdot,t')} QK(\tau',s')^*\right)}_{(\beta,\kappa),1}\left[|\tau-t|^{-\kappa}|t-s|^\beta\right]\wedge |\tau-s|^{\beta-\kappa}. \end{align} Indeed, in order to apply Lemma \ref{lem:(Volterra-sewing-lemma)} we need to show that $\delta^1$ acting on the increment $K\square Q K^*$ is sufficiently regular. By elementary algebraic manipulations, we observe in particular that \begin{align}\label{delta 1 on incr realtion} \delta^1_{z} K(\tau,u)&\square_{(u,s'),(v,t')}QK(\tau',s')^* =\left(K(\tau,u)-K(\tau,z)\right)\square_{(z,s'),(v,t')}Q K(\tau',s')^*. \end{align} With this relation at hand, invoking the assumption that $K\in \mathcal{K}_{\eta}$ and the regularity condition on $Q$ given by \eqref{Qnorm3}, we obtain that for any $\theta\in [0,1]$ \begin{align}\label{1 dim delta bound} \| \delta^1_{z} K(\tau,u)\square_{(u,s'),(v,t')}QK(\tau',s')^*\|_{op} \leq \|K\|_{\eta,3} \|Q\|_{\alpha,(1,1)} \|K\|_{\eta,1} |\tau-z|^{-\eta-\theta} |v-u|^{\alpha+\theta} T^{\alpha-\eta}. \end{align} Here we have used that \begin{equation} \| \square_{(u,s'),(v,t')}Q\|_{op} \lesssim |v-u|^\alpha|t'-s'|^\alpha \qquad {\rm and}\qquad \|K(\tau',s')\|_{op}\lesssim |\tau'-s'|^{-\eta}, \end{equation} and thus, since $|t'-s'|\leq |\tau'-s'|$ and $\alpha>\eta$, \begin{equation} \| \square_{(u,s'),(v,t')}Q\|_{op}\|K(\tau',s')\|_{op}\lesssim |v-u|^\alpha T^{\alpha-\eta}, \end{equation} where we recall that $\rho=\alpha-\eta$. In the same way, one can verify a similar bound for~$\delta^2K\square Q K^*$.
It follows that the bound in~\eqref{1 dim reg N} holds by setting $\beta=\alpha+\theta$ and $\kappa=\eta+\theta$ and choosing $\theta\in[0,1]$ such that $(\beta,\kappa)\in (1,\infty)\times[0,1)$ (which is always possible due to the fact that $\eta<\alpha$). The fact that $C$ in \eqref{1 dim reg N} can be chosen uniformly in all the time variables follows from the assumption that $\eta<\alpha$, which in particular implies that any singularity coming from $K$ as $s' \rightarrow \tau'$ is killed by the regularity of $\square_{(\cdot,s'),(\cdot,t')} Q$, since $s' \leq t' \leq \tau'$. Similarly, we can define the integral $\mathcal{I}_2$ in the same way by letting the integrand be independent of the first integration variable $r$. By the same analysis as above, with application of the one-dimensional Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)}, one gets that also $\mathcal{I}_2$ is a well-defined integral, satisfying a similar bound to \eqref{1 dim reg N}. Now let us focus on the two-dimensional integral operator $\mathcal{I}$ in \eqref{integral def}. First, we note that the additivity relation in \eqref{additivity} is a straightforward consequence of the additivity of the limit of the one-dimensional Riemann sum, corresponding to the property \begin{equation}\nonumber \int_0^t f(r)dr-\int_0^sf(r)dr=\int_s^t f(r)dr. \end{equation} See \cite{FriHai} or \cite{Harang} for more details on this property in connection with the Sewing lemma both in the one-parameter and multi-parameter setting. For the {\it uniqueness} of the integral defined in \eqref{integral def}, assume for now that the integral exists and satisfies {\rm (i)-(iv)}, and consider the following argument: Assume $\mathcal{M}$ and $\bar{\mathcal{M}}$ are two candidates for $\mathcal{I}$, both constructed from the integrand $K\square Q K^*$. We obtain for both $\mathcal M$ and $\bar{\mathcal M}$ one-dimensional integrals, which are unique by the one-dimensional sewing lemma.
Thus, the boundary integrals $\mathcal I_1$ and $\mathcal I_2$ of both are identical. Note that this implies in particular that the boundary integrals corresponding to the difference $\mathcal{M}-\bar{\mathcal{M}}$ are equal to $0$. Invoking this fact, it follows from \eqref{reg of covar} that the increment $\square_{(s,s'),(t,t')}(\mathcal{M}-\bar{\mathcal{M}})$ satisfies the following bound for $s\leq t\leq \tau$ and $s'\leq t'\leq \tau'$ and some $(\beta,\kappa)\in (1,\infty)\times [0,1)$ \begin{equation}\label{bound diff M} \|\square_{(s,s'),(t,t')}(\mathcal{M}-\bar{\mathcal{M}})\|_{op} \leq C\left[|\tau-t||\tau-t'|\right]^{-\kappa}\left[|t'-s'||t-s|\right]^{\beta}. \end{equation} Furthermore, due to the additive property in \eqref{additivity} of the increment we have that \begin{equation}\label{sum exp} \mathcal{M}^{\tau,\tau'}(s,s',t,t')-\bar{\mathcal{M}}^{\tau,\tau'}(s,s', t,t')=\sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} \square_{(u,u'),(v,v')}\left[\mathcal{M}^{\tau,\tau'}-\bar{\mathcal{M}}^{\tau,\tau'}\right], \end{equation} where $\mathcal{P}$ and $\mathcal{P}'$ are now partitions of $[s,t]$ and $[s',t']$ respectively. Taking $s=s'=0$, thanks to~\eqref{bound diff M} we can bound the left-hand side of \eqref{sum exp} in the following way \begin{align*} \| \mathcal{M}^{\tau,\tau'}(0,0,t,t')-\bar{\mathcal{M}}^{\tau,\tau'}(0,0, t,t')\|_{op} &\leq C\sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} \left[|\tau-v||\tau-v'|\right]^{-\kappa}\left[|v'-u'||v-u|\right]^{\beta}\\ &\leq C\left[|\mathcal{P}||\mathcal{P}'|\right]^{\beta-1}\int_{0}^t\int_{0}^{t'} \left[|\tau-r||\tau-r'|\right]^{-\kappa} dr'dr, \end{align*} where we have appealed to the restriction of the parameters $(\beta,\kappa)\in (1,\infty)\times [0,1)$ in the last inequality, as well as recalling that $|\mathcal{P}|$ denotes the size of the mesh of the partition $\mathcal{P}$.
We can now choose the partition arbitrarily fine, which implies that the difference $\mathcal{M}^{\tau,\tau'}(0,0,t,t')-\bar{\mathcal{M}}^{\tau,\tau'}(0,0, t,t')\equiv 0$. We conclude that the integral constructed in \eqref{integral def} is unique. We continue with the proof of the {\it existence} of the integral $ \mathcal{I}(K,Q)$ given in \eqref{integral def}. To shorten the notation slightly, from now on we write $ \mathcal{I}:= \mathcal{I}(K,Q)$ for the two-dimensional integral. In our argument, we first consider a sequence of approximating integrals constructed from dyadic partitions, and show that this sequence is Cauchy. We show in particular that the integral constructed from dyadic partitions satisfies the regularity condition in {\rm (i)}. Second, we show that the definition may be extended to any partition, and thus the limit in Equation \eqref{integral def} is independent of the chosen partition~$\mathcal{P}$. Consider now dyadic partitions $\mathcal{P}^n$ for $n\in \mathbb{N}_0$ defined in the following way: $\mathcal{P}^0=\{[s,t]\}$ and $\mathcal{P}^n$ is defined iteratively for $n\geq 1$ by \begin{equation}\label{dyadic part} \mathcal{P}^n:=\bigcup_{[u,v]\in \mathcal{P}^{n-1}} \{[u,z],[z,v]\}, \end{equation} where the point $z=(v+u)/2$. It is readily checked that each interval $[u,v]\in \mathcal{P}^n$ is of length $2^{-n}|t-s|$, and that $\mathcal{P}^n$ consists of $2^n$ intervals. Construct the dyadic partition $\mathcal{P}^{\prime,n'}$ of $[s',t']$ similarly for $n'\in\mathbb N_0$. For $n,n'\in \mathbb{N}$, observe that \begin{equation} M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}-M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'-1}}^{\tau,\tau'} = \sum_{\substack{ [u,v]\in \mathcal{P}^{n} \\ [u',v']\in \mathcal{P}^{\prime,n'-1}}}\delta^2_{z'} K(\tau,u)\square_{(u,u'),(v,v')}Q K(\tau',u')^*.
\end{equation} From this we see that the following relation holds \begin{align}\label{M diff} M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}-M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'-1}}^{\tau,\tau'}-&M_{\mathcal{P}^{n-1}\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'} + M_{\mathcal{P}^{n-1}\times\mathcal{P}^{\prime,n'-1}}^{\tau,\tau'} \\\nonumber &= \sum_{\substack{ [u,v]\in \mathcal{P}^{n-1} \\ [u',v']\in \mathcal{P}^{\prime,n'-1}}}\delta^1_{z}\delta^2_{z'} K(\tau,u)\square_{(u,u'),(v,v')}Q K(\tau',u')^*. \end{align} Here $\delta^1\delta^2=\delta^1\circ \delta^2=\delta^2\circ \delta^1$ is the composition of the one-dimensional deltas given in \eqref{delta action}. We have already seen the action of $\delta^i $ for $i=1,2$ applied to the increment~$K\square Q K^*$ in \eqref{delta 1 on incr realtion}, and we will therefore now compute the action of the composition $\delta^1\delta^2$. We get for $(v,z,u),(v',z',u')\in \Delta_3^T$ that \begin{equation}\label{eq:d1d2 relation} \delta^1_{z}\delta^2_{z'} K(\tau,u) \square_{(u,u'),(v,v')}Q K(\tau',u')^* =\left(K(\tau,u)-K(\tau,z)\right)\square_{(z,z'),(v,v')}Q \left(K(\tau',u')-K(\tau',z')\right)^*. \end{equation} Using the relation in \eqref{eq:d1d2 relation} together with the assumption that $K,K^*\in \mathcal{K}_{\eta}$ and that the covariance $Q$ is $\alpha$-regular, we obtain the following bound for any $\theta\in [0,1]$ \begin{align}\label{deltadelta bound} \| \delta^1_{z}\delta^2_{z'} K(\tau,u)\square_{(u,u'),(v,v')}&Q K(\tau',u')^*\|_{op} \\\nonumber &\leq \|K\|_{\eta,3}^2\|Q\|_{\alpha,(1,1)} \left[|\tau-z| |\tau'-z'|\right]^{-\eta-\theta}\left[ |v-u| |v'-u'|\right]^{\alpha+\theta}, \end{align} where we have used that $|z-u|^\theta\leq |v-u|^\theta$, and similarly for the difference $|z'-u'|^\theta$. With this inequality at hand, we will now go back to the difference in \eqref{M diff}.
By telescoping sums, we observe that for $n'>m'$ we have \begin{equation}\label{incr} M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}-M_{\mathcal{P}^n\times\mathcal{P}^{\prime,m'}}^{\tau,\tau'}=\sum_{i=m'+1}^{n'} M_{\mathcal{P}^n\times\mathcal{P}^{\prime,i}}^{\tau,\tau'}-M_{\mathcal{P}^n\times\mathcal{P}^{\prime,i-1}}^{\tau,\tau'}, \end{equation} with the same type of relation for the difference $M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}-M_{\mathcal{P}^m\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}$ when $n>m$. Combining the two and inserting the relation in \eqref{M diff} yields that \begin{align}\label{M diff full} M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}-M_{\mathcal{P}^n\times\mathcal{P}^{\prime,m'}}^{\tau,\tau'}-&M_{\mathcal{P}^m\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}+ M_{\mathcal{P}^m\times\mathcal{P}^{\prime,m'}}^{\tau,\tau'} \\\nonumber &=\sum_{\substack{i\in \{m+1,\ldots,n\} \\ j\in \{m'+1,\ldots,n'\}}} \sum_{\substack{[u,v]\in \mathcal{P}^i \\ [u',v'] \in \mathcal{P}^{\prime,j}}} \delta^1_{z}\delta^2_{z'} K(\tau,u)\square_{(u,u'),(v,v')} Q K(\tau',u')^*. \end{align} Invoking the inequality obtained in \eqref{deltadelta bound}, we can bound the left-hand side of \eqref{M diff full} in the following way \begin{align}\label{eq:gen cauchy two d} \| & M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}-M_{\mathcal{P}^n\times\mathcal{P}^{\prime,m'}}^{\tau,\tau'}-M_{\mathcal{P}^m\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}+ M_{\mathcal{P}^m\times\mathcal{P}^{\prime,m'}}^{\tau,\tau'}\|_{op} \\\nonumber &\leq \|K\|_{\eta,3}^2\|Q\|_{\alpha,(1,1)} \sum_{\substack{i\in \{m+1,\ldots,n\} \\ j\in \{m'+1,\ldots,n'\}}} \sum_{\substack{[u,v]\in \mathcal{P}^i \\ [u',v'] \in \mathcal{P}^{\prime,j}}} \left[|\tau-z| |\tau'-z'|\right]^{-\eta-\theta} \left[|v-u| |v'-u'|\right]^{\alpha+\theta} =: V_{\mathcal{P},\mathcal{P}'}.
\end{align} Then we choose $\kappa=\theta+\eta$ and $\beta=\theta+\alpha$ such that $(\beta,\kappa)\in (1,\infty)\times [0,1)$ (again we note that this is always possible since $\alpha-\eta>0$). Using that for any $[u,v]\in \mathcal{P}^n$ it holds that $|v-u|=2^{-n}|t-s|$, we obtain that \begin{align}\nonumber V_{\mathcal{P},\mathcal{P}'}&\leq C \left[|t-s||t'-s'|\right]^{\beta-1}\sum_{\substack{i\in \{m+1,\ldots,n\} \\ j\in \{m'+1,\ldots,n'\}}} 2^{-(i+j)(\beta-1)}\int_{s}^t\int_{s'}^{t'} \left[|\tau-r||\tau'-r'|\right]^{-\kappa}dr' dr \\\label{eq:two i j} & \leq C \left( \left[|\tau-t||\tau'-t'|\right]^{-\kappa}\left[|t-s||t' -s'|\right]^{\beta}\right) \wedge \left[|\tau-s||\tau'-s'|\right]^{\beta-\kappa} \sum_{\substack{i\in \{m+1,\ldots,n\} \\ j\in \{m'+1,\ldots,n'\}}} 2^{-(i+j)(\beta-1)}. \end{align} Inserting \eqref{eq:two i j} into the right-hand side of \eqref{eq:gen cauchy two d}, it follows that $\{M_{\mathcal{P}^n\times \mathcal{P}^{\prime, n'}}^{\tau,\tau'}\}_{(n,n')\in \mathbb{N}^2}$ is a Cauchy sequence (with multi-index $(n,n')\in \mathbb{N}^2$). We define the limit of this sequence as $n,n' \rightarrow \infty $ to be \begin{equation*} \mathcal{I}^{\tau,\tau'}\left(s,s',t,t'\right):=\lim_{n,n'\rightarrow \infty } M_{\mathcal{P}^n\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}. \end{equation*} It now follows directly from the additivity property \eqref{additivity} proven at the beginning of the proof that the following identity holds \begin{equation*} \mathcal{I}^{\tau,\tau'}\left(s,s',t,t'\right)=\square_{(s,s'),(t,t')} \mathcal{I}^{\tau,\tau'}(0,0,\cdot,\cdot). \end{equation*} Furthermore, due to the relations \eqref{1 dim delta bound} and \eqref{incr}, and by deriving a one-dimensional estimate similar to \eqref{eq:gen cauchy two d}, it follows that the boundary terms $\left\{M_{\mathcal{P}^n\times[s',t']}^{\tau,\tau'}\right\}_{n\in \mathbb{N}}$ and $\left\{M_{[s,t]\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}\right\}_{n' \in \mathbb{N}}$ are both Cauchy sequences.
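To make the Cauchy property explicit, note that for $n>m$ and $n'>m'$ the double geometric tail in \eqref{eq:two i j} satisfies
\begin{equation*}
\sum_{\substack{i\in \{m+1,\ldots,n\} \\ j\in \{m'+1,\ldots,n'\}}} 2^{-(i+j)(\beta-1)}\leq \frac{2^{-(m+1)(\beta-1)}}{1-2^{-(\beta-1)}}\cdot \frac{2^{-(m'+1)(\beta-1)}}{1-2^{-(\beta-1)}},
\end{equation*}
which tends to $0$ as $m,m'\rightarrow \infty$ since $\beta>1$.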
Moreover, we observe that the boundary integrals are given as \begin{align*} \mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t')&=\lim_{n\rightarrow\infty} M_{\mathcal{P}^n\times[s',t']}^{\tau,\tau'}, \\ \mathcal{I}_{2}^{\tau,\tau'}(s,s',t,t')&=\lim_{n'\rightarrow\infty} M_{[s,t]\times\mathcal{P}^{\prime,n'}}^{\tau,\tau'}. \end{align*} Note that these two objects are only additive in one pair of their variables, i.e. we have \begin{equation*} \mathcal{I}_{1}^{\tau,\tau'}(0,s',t,t')- \mathcal{I}_{1}^{\tau,\tau'}(0,s',s,t')= \mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t') \end{equation*} while on the other hand \begin{equation*} \mathcal{I}_{1}^{\tau,\tau'}(s,0,t,t')- \mathcal{I}_{1}^{\tau,\tau'}(s,0,t,s') \neq \mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t'). \end{equation*} The opposite relation holds for $ \mathcal{I}_2$. This is due to the nature of the integrand $K\square Q K^*$ and the fact that \begin{equation*} \mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t')=\lim_{n\rightarrow \infty} M_{\mathcal{P}^n\times[s',t']}^{\tau,\tau'}=\lim_{n\rightarrow \infty} \sum_{[u,v]\in\mathcal{P}^n} K(\tau,u)\square_{(u,s'),(v,t')}Q K(\tau',s')^*. \end{equation*} Recall that the boundary integral $ \mathcal{I}_1$ should be thought of as the integral appearing in \eqref{N1}, and similarly for $ \mathcal{I}_{2}$.
Now, we observe from \eqref{eq:gen cauchy two d} with $m=m'=0$ that if we let \begin{equation*} \mathcal{H}^{\tau,\tau'}(s,s',t,t'):= \mathcal{I}^{\tau,\tau'}\left(s,s',t,t'\right)- \mathcal{I}_{1}^{\tau,\tau'}(s,s',t,t') - \mathcal{I}_{2}^{\tau,\tau'}(s,s',t,t')+K(\tau,s)\square_{(s,s'),(t,t')}Q K(\tau',s')^* \end{equation*} then we have \begin{multline}\label{4 comp ineq} \| \mathcal{H}^{\tau,\tau'}(s,s',t,t')\|_{op} \leq C\|K\|_{\eta,3}^2\|Q\|_{\alpha,(1,1)} \\ \left[\frac{1}{1-2^{1-\beta}}\right]^2 \left( \left[|\tau-t||\tau'-t'|\right]^{-\kappa}\left[|t-s||t' -s'|\right]^{\beta}\right) \wedge \left[|\tau-s||\tau'-s'|\right]^{\beta-\kappa}, \end{multline} where $(\beta,\kappa)\in (1,\infty)\times[0,1)$ is chosen according to the rules specified below \eqref{eq:gen cauchy two d}. We conclude that the limiting objects $ \mathcal{I}$, $ \mathcal{I}_1$ and $ \mathcal{I}_2$ constructed from limits over dyadic partitions exist uniquely and satisfy the regularity condition in \eqref{reg of covar}. Note that we can construct the limiting object $ \mathcal{I}$ as a limit of a Riemann sum over either $ \mathcal{I}_1$ or $ \mathcal{I}_2$. By this we mean that the two-dimensional integral can be obtained as a limit of a Riemann sum over the one-dimensional boundary integral. In particular we have that \begin{equation}\label{build M from N} \mathcal{I}^{\tau,\tau'}(s,s',t,t')=\lim_{n'\rightarrow \infty} \sum_{[u',v']\in \mathcal{P}^{\prime,n'}} \mathcal{I}_1^{\tau,\tau'}(s,u',t,v'), \end{equation} and similarly for $ \mathcal{I}_2$, where integration is done over a dyadic partition of $[s,t]$. Indeed, recall that $ \mathcal{I}_1$ is additive in the first variable, by which we mean that for any partition $\mathcal{P}$ we have \begin{equation}\label{sum relation} \mathcal{I}_1^{\tau,\tau'}(s,s',t,t')=\sum_{[u,v]\in \mathcal{P}} \mathcal{I}_1^{\tau,\tau'}(u,s',v,t'). \end{equation} A similar property holds for $\mathcal{I}_2$.
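For completeness, the corresponding additivity for $ \mathcal{I}_2$ reads: for any partition $\mathcal{P}'$ of $[s',t']$ we have
\begin{equation*}
\mathcal{I}_2^{\tau,\tau'}(s,s',t,t')=\sum_{[u',v']\in \mathcal{P}'} \mathcal{I}_2^{\tau,\tau'}(s,u',t,v'),
\end{equation*}
and the analogue of \eqref{build M from N} is $ \mathcal{I}^{\tau,\tau'}(s,s',t,t')=\lim_{n\rightarrow \infty} \sum_{[u,v]\in \mathcal{P}^{n}} \mathcal{I}_2^{\tau,\tau'}(u,s',v,t')$.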
Given the structure of the integrand $K\square QK^*$, and in the spirit of Lemma \ref{lem:(Volterra-sewing-lemma)} together with linearity of the integral, it is readily checked that for any $\theta\in[0,1]$ \begin{equation}\label{N1 delta bound} \| \delta^2_{m'} \mathcal{I}_{1}^{\tau,\tau'}(s,u',t,v') \|_{op}\leq C \|K\|_{\eta,1}\|Q\|_{\alpha,(0,1)}\|K\|_{\eta,2} [|\tau'-m'|^{-\eta-\theta}|v'-u'|^{\alpha+\theta}]|\tau-s|^{-\eta}|t-s|^\alpha, \end{equation} where the estimate is uniform in the ordered variables $(s,t,\tau)$. Then again setting $\beta=\alpha+\theta$ and $\kappa=\eta+\theta$ and choosing $\theta\in [0,1]$ such that $(\beta,\kappa)\in (1,\infty)\times[0,1)$, and next invoking \eqref{sum relation} together with \eqref{N1 delta bound}, it follows from the Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)} that \begin{multline}\label{M diff N1} \| \mathcal{I}^{\tau,\tau'}(s,s',t,t')- \mathcal{I}_1^{\tau,\tau'}(s,s',t,t')\|_{op} \\ \leq C\|K\|_{\eta,1}\|Q\|_{\alpha,(0,1)}\|K\|_{\eta,2} [|\tau'-t'|^{-\kappa}|t'-s'|^\beta \wedge |\tau'-s'|^{\beta-\kappa}]|\tau-s|^{-\eta}|t-s|^\alpha, \end{multline} and thus relation \eqref{build M from N} follows directly. Similarly, one can show that \begin{align*} \| \mathcal{I}^{\tau,\tau'}(s,s',t,t')&- \mathcal{I}_2^{\tau,\tau'}(s,s',t,t')\|_{op} \\ &\leq C \|K\|_{\eta,2}\|Q\|_{\alpha,(1,0)}\|K\|_{\eta,1}[|\tau-t|^{-\kappa}|t-s|^\beta \wedge |\tau-s|^{\beta-\kappa}]|\tau'-s'|^{-\eta}|t'-s'|^\alpha. \end{align*} Our next goal is to show that the limiting object $\mathcal{I}$ is independent of the chosen partition $\mathcal{P}$. Note that the one-dimensional integral terms are in fact independent of the partition chosen, as a consequence of the one-dimensional Volterra Sewing Lemma \ref{lem:(Volterra-sewing-lemma)}.
Therefore, following from the relation \eqref{build M from N}, it is sufficient to show that the differences \begin{align}\label{M-N1 generic} & \mathcal{I}^{\tau,\tau'}(s,s',t,t') - \sum_{[u',v']\in \mathcal{P}^{\prime}} \mathcal{I}_1^{\tau,\tau'}(s,u',t,v'), \\\label{M-N2 generic} & \mathcal{I}^{\tau,\tau'}(s,s',t,t') - \sum_{[u,v]\in \mathcal{P}} \mathcal{I}_2^{\tau,\tau'}(u,s',v,t'), \end{align} converge to zero for generic partitions $\mathcal{P}'$ and $\mathcal{P}$, where $\vert\mathcal{P}'\vert \rightarrow 0$ and $\vert\mathcal{P}\vert \rightarrow 0$. Let us prove this for \eqref{M-N1 generic}; the same result for \eqref{M-N2 generic} follows by an analogous argument. By additivity of $ \mathcal{I}$ and $ \mathcal{I}_1$, we can write \begin{equation}\label{M diff MPprime} \mathcal{I}^{\tau,\tau'}\left(s,s',t,t'\right)- \sum_{[u',v']\in \mathcal{P}^{\prime}} \mathcal{I}_1^{\tau,\tau'}(s,u',t,v')= \sum_{[u',v']\in \mathcal{P}^{\prime}} \left[\mathcal{I}^{\tau,\tau'}\left(s,u',t,v'\right)- \mathcal{I}_1^{\tau,\tau'}(s,u',t,v')\right]. \end{equation} Invoking the bounds we found in \eqref{M diff N1}, we can majorize the right-hand side of \eqref{M diff MPprime}, which yields that \begin{align}\nonumber \| \mathcal{I}^{\tau,\tau'}\left(s,s',t,t'\right)- \sum_{[u',v']\in \mathcal{P}^{\prime}} \mathcal{I}_1^{\tau,\tau'}(s,u',t,v')\|_{op} &\leq C \sum_{[u',v']\in \mathcal{P}' } |\tau'-v'|^{-\kappa}|v'-u'|^\beta \\\nonumber &\leq C |\mathcal{P}' |^{\beta-1} \int_{s'}^{t'}|\tau'-r|^{-\kappa}dr, \end{align} where the integral is convergent since $\kappa<1$, and the constant $C>0$ may depend on $T^\rho$. Thus, letting $|\mathcal{P}'|\rightarrow 0$ we observe that \begin{equation*} \| \mathcal{I}^{\tau,\tau'}\left(s,s',t,t'\right)- \sum_{[u',v']\in \mathcal{P}^{\prime}} \mathcal{I}_1^{\tau,\tau'}(s,u',t,v')\|_{op}\rightarrow 0, \end{equation*} since $\beta>1$, and we conclude that the integral $ \mathcal{I}$ in \eqref{integral def} is independent of the choice of partition.
We conclude that the limit in \eqref{integral def} exists uniquely, and it follows from \eqref{4 comp ineq} that the inequality in {\rm (i)} holds. It now remains to show that {\rm (ii)-(iv)} hold as well. From the proof above, all the integrals appearing in these expressions exist, and the regularity estimates differ from {\rm (i)} only in that they involve increments in the upper parameters of the Volterra kernels. As the proofs of these inequalities are essentially identical to the proof of {\rm (i)} above, we will only show the inequality in {\rm (iv)} here, and leave the details for {\rm (ii)-(iii)} to the reader. We do so because {\rm (ii)-(iii)} can be seen as mixtures of {\rm (i)} and {\rm (iv)}, and it is therefore simple to verify that these inequalities hold as well. To illustrate this point, for $(\tau_1,\tau_2,t,s),(\tau_1',\tau_2',t',s')\in \Delta_4^T$ define $G^{\tau_1,\tau_2}(r)=K(\tau_1,r)-K(\tau_2,r)$, and observe that \begin{multline} \int_{s}^t \int_{s'}^{t'} \left(\square_{(\tau_2,s),(\tau_1,r)}K\right)d^2Q(r,r')\left(\square_{(\tau_2',s'),(\tau_1',r')}K^*\right) \\ = \int_{s}^t \int_{s'}^{t'} \left(G^{\tau_1,\tau_2}(r)-G^{\tau_1,\tau_2}(s)\right)d^2Q(r,r')\left(G^{\tau_1',\tau_2'}(r')^*-G^{\tau_1',\tau_2'}(s')^*\right). \end{multline} The right-hand side is an integral expression of the same form as in {\rm (i)}, but with a different Volterra kernel. Similarly, we observe that {\rm (ii)} and {\rm (iii)} can be written as mixtures of integrals over the kernels $K$ and $G$ defined above. Following the strategy outlined above to prove {\rm (i)}, we now consider the integrand \begin{equation} G^{\tau_1,\tau_2}(s)\square_{(s,s'),(t,t')}Q G^{\tau_1',\tau_2'}(s')=(K(\tau_1,s)-K(\tau_2,s))\square_{(s,s'),(t,t')} Q (K(\tau_1',s')^*-K(\tau_2',s')^*), \end{equation} and by the same techniques as above our goal is to obtain an analytic inequality as in {\rm (iv)}.
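The rewriting above rests on the elementary identity for the rectangular increment of the kernel:
\begin{equation*}
\square_{(\tau_2,s),(\tau_1,r)}K=K(\tau_1,r)-K(\tau_1,s)-K(\tau_2,r)+K(\tau_2,s)=G^{\tau_1,\tau_2}(r)-G^{\tau_1,\tau_2}(s),
\end{equation*}
and analogously in the primed variables.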
Consider the approximating integral given by \begin{equation*} N_{\mathcal{P}\times \mathcal{P}^{\prime}} := \sum_{\substack{[u,v]\in \mathcal{P} \\ [u',v']\in \mathcal{P}'}} (K(\tau_1,u)-K(\tau_2,u))\square_{(u,u'),(v,v')} Q (K(\tau_1',u')^*-K(\tau_2',u')^*). \end{equation*} We then obtain the inequality in {\rm (iv)} by following exactly the same steps as in the existence proof for the integrand $K\square Q K^*$. However, instead of relying on the norm $\|K\|_{\eta,3}$ as given in \eqref{Knorm 3} to obtain our bounds, we need to use $\|K\|_{\eta,4}$ given in \eqref{Knorm 4}, as this represents the regularity of the kernel over the rectangular increment (i.e. in both upper and lower variables). Indeed, when arriving at the step similar to \eqref{eq:d1d2 relation}, we set \begin{equation*} \Xi^{\tau_1,\tau_2,\tau_1',\tau_2'}(u,u',v,v'):=(K(\tau_1,u)-K(\tau_2,u))\square_{(u,u'),(v,v')} Q (K(\tau_1',u')^*-K(\tau_2',u')^*) \end{equation*} and observe that \begin{multline*} \delta^1_z\delta^2_{z'} \Xi^{\tau_1,\tau_2,\tau_1',\tau_2'}(u,u',v,v')= (K(\tau_1,u)-K(\tau_2,u)-K(\tau_1,z)+K(\tau_2,z)) \\ \times \square_{(z,z'),(v,v')} Q (K(\tau_1',u')^*-K(\tau_2',u')^*-K(\tau_1',z')^*+K(\tau_2',z')^*). \end{multline*} We then need to bound this expression in a similar way as we did in \eqref{deltadelta bound}. Using the quantity defined in \eqref{Knorm 4}, it is readily seen that for $\theta_1,\theta_2\in [0,1]$ \begin{multline*} \| \delta^1_z\delta^2_{z'} \Xi^{\tau_1,\tau_2,\tau_1',\tau_2'}(u,u',v,v')\|_{{\rm op}} \lesssim \|K\|_{\eta,4}^2\|Q\|_{\alpha,(1,1)} \\ \times [|\tau_1-\tau_2||\tau_1'-\tau_2'|]^{\theta_1}[|\tau_2-v||\tau_2'-v'|]^{-\theta_1-\theta_2-\eta}[|v-u||v'-u'|]^{\alpha+\theta_2}. \end{multline*} We then observe that for a parameter $\zeta\in [0,1]$ we have \begin{equation*} |\tau_2-v|^{-\theta_1-\theta_2-\eta}\leq |\tau_2-t|^{-\theta_1+\zeta}|\tau_2-v|^{-\zeta-\theta_2-\eta} \end{equation*} and similarly for the parameters $(\tau_2',t',v')\in \Delta_3^T$.
It follows that \begin{multline*} \| \delta^1_z\delta^2_{z'} \Xi^{\tau_1,\tau_2,\tau_1',\tau_2'}(u,u',v,v')\|_{{\rm op}} \lesssim \|K\|_{\eta,4}^2\|Q\|_{\alpha,(1,1)} \\ \times [|\tau_1-\tau_2||\tau_1'-\tau_2'|]^{\theta_1}[|\tau_2-t||\tau_2'-t'|]^{-\theta_1+\zeta}[|\tau_2-v||\tau_2'-v'|]^{-\theta_2-\eta-\zeta}[|v-u||v'-u'|]^{\alpha+\theta_2}. \end{multline*} Note that we are not integrating over the variables $(\tau_1,\tau_2,t),(\tau_1',\tau_2',t')\in \Delta_3^T$, and these will therefore not affect the sewing arguments in \eqref{eq:gen cauchy two d} and below. We now choose $\theta_1,\theta_2,\zeta\in [0,1]$ such that $\beta:=\alpha+\theta_2>1$ and $\kappa:=\eta+\theta_2+\zeta<1$. Since $\rho=\alpha-\eta>0$ we can choose $\zeta\in [0,\rho)$. One can then check that there exists a $\theta_2\in [0,1]$ such that $\beta>1$ and $\kappa<1$. By following the steps from \eqref{deltadelta bound} and below, one can conclude that {\rm (iv)} holds. This completes the proof. \end{proof} \begin{rem}\label{rem: linearity of integral} We point out that the integral $\int_0^t \int_0^{t'} K(\tau,r)d^2Q(r,r')K(\tau',r')^*$ is linear in $Q$ and bilinear in $K$. By this we mean that for $Q,\tilde{Q}\in \mathcal{Q}_\alpha$ \begin{multline} \int_0^t \int_0^{t'} K(\tau,r)d^2[Q+\tilde{Q}](r,r')K(\tau',r')^*\\ =\int_0^t \int_0^{t'} K(\tau,r)d^2Q(r,r')K(\tau',r')^*+\int_0^t \int_0^{t'} K(\tau,r)d^2\tilde{Q}(r,r')K(\tau',r')^*, \end{multline} and similarly for the bilinearity with respect to $K$. This follows directly from the construction of the integral as a limit of Riemann sums, and a simple verification can be done by going through the proof above using the integrand $K(\tau,u)\square_{(u,u'),(v,v')}[Q+\tilde{Q}]K(\tau',u')^*$. For conciseness we omit a more detailed proof here. \end{rem} \begin{rem} From the derivations in \eqref{eq: co-variance comp}, a different notation for the integral $\int_0^t \int_0^{t'} K(\tau,r)d^2Q(r,r')K(\tau',r')^*$ could be used.
By $$ \int_0^{t}K(\tau,r)\int_0^{t'}Q(dr,dr')K(\tau',r')^* $$ we mean the integration of $K(\tau',r')^*$ with respect to $Q(r,dr')$ to form the integral $\int_0^{t'}Q(r,dr')K(\tau',r')^*$, followed by the integration of $K(\tau,r)$ with respect to the integrand $\int_0^{t'}Q(dr,dr')K(\tau',r')^*$. \end{rem} A direct consequence of Theorem \ref{thm:general covariance integrals} is that the covariance function constructed from $K\in \mathcal{K}_{\eta}$ and $Q\in \mathcal{Q}_\alpha$ is again a covariance function in $\mathcal{Q}_{\zeta}$ for any $\zeta\in [0,\alpha-\eta)$. We summarize this in the next proposition. \begin{prop}\label{Holder reg of covar} Restricting the domain of $\mathcal{I}$ to the square $[0,T]^2$ by considering the map $(t,t')\mapsto \mathcal{I}^{t,t'}(K,Q)(0,0,t,t')$, the integration map $\mathcal{I}$ is a continuous operator from $\mathcal{K}_\eta \times \mathcal{Q}_\alpha$ to $ \mathcal{Q}_{\zeta}$ for any $\zeta\in [0,\alpha-\eta)$. Moreover, we have that \begin{equation}\label{IQ mapto Q} \|\mathcal{I}(K,Q)\|_{\mathcal{Q}_{\zeta}}\leq C \|K\|^2_{\mathcal{K}_\eta} \|Q\|_{\mathcal{Q}_\alpha}. \end{equation} \end{prop} \begin{proof} This follows from a combination of the estimates in {\rm(i)-(iv)} given in Theorem \ref{thm:general covariance integrals}. We denote by $\Delta_{s,t}K(\cdot,r)$ the increment $K(t,r)-K(s,r)$. Observe that \begin{equation}\label{square inc exp} \begin{aligned} \square_{(s,s'),(t,t')} \int_{0}^\cdot \int_{0}^{\cdot'}& K(\cdot,r) d^2Q(r,r')K(\cdot',r')^* = \int_s^t \int_{s'}^{t'}K(t,r)d^2Q(r,r')K(t',r')^* \\ &+\int_0^s\int_{s'}^{t'} \Delta_{s,t}K(\cdot,r)d^2Q(r,r')K(t',r')^* + \int_s^t\int_{0}^{s'} K(t,r)d^2Q(r,r')\Delta_{s',t'}K(\cdot',r')^* \\ & + \int_0^s\int_{0}^{s'} \Delta_{s,t}K(\cdot,r)d^2Q(r,r')\Delta_{s',t'}K(\cdot',r')^*.
\end{aligned} \end{equation} Our goal is to check that \begin{equation} \|\square_{(s,s'),(t,t')} \int_{0}^\cdot \int_{0}^{\cdot'} K(\cdot,r) d^2Q(r,r')K(\cdot',r')^*\|_{op} \lesssim [|t-s||t'-s'|]^{\alpha-\eta}, \end{equation} and thus, by verifying that each of the integrals on the right-hand side of \eqref{square inc exp} satisfies the above bound, we are done. Each of the four terms on the right-hand side above corresponds to the inequalities in {\rm (i)-(iv)} in Theorem \ref{thm:general covariance integrals} plus some one-dimensional integral terms which can be treated with the one-dimensional Volterra Sewing Lemma~\ref{lem:(Volterra-sewing-lemma)}. We will illustrate this by considering the first term on the right-hand side of the above equality \eqref{square inc exp}. It is readily checked that by addition and subtraction of the three terms $$ \int_s^t\int_{s'}^{t'} K(t,s)d^2Q(r,r')K(t',r')^*, \,\,\, \int_s^t\int_{s'}^{t'} K(t,r)d^2Q(r,r')K(t',s')^*,\,\,\, K(t,s)\square_{(s,s'),(t,t')}Q\, K(t',s')^*, $$ it follows that \begin{align*} \int_s^t \int_{s'}^{t'}K(t,r)d^2Q(r,r')K(t',r')^*&= \int_s^t \int_{s'}^{t'}[K(t,r)-K(t,s)]d^2Q(r,r')[K(t',r')-K(t',s')]^* \\ &\qquad+ \int_s^t \int_{s'}^{t'}K(t,s)d^2Q(r,r')K(t',r')^* \\ &\qquad+\int_s^t \int_{s'}^{t'}K(t,r)d^2Q(r,r')K(t',s')^*-K(t,s)\square_{(s,s'),(t,t')} Q K(t',s')^*. \end{align*} We can bound the first integral expression on the right-hand side by application of {\rm (i)} in Theorem \ref{thm:general covariance integrals}. The two other integral terms are one-dimensional in the sense that we are only integrating one of the kernels $K$ in either $r$ or $r'$.
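The addition-and-subtraction step above is, schematically (suppressing the covariance sandwiched between the kernels), the elementary identity
\begin{equation*}
ab=(a-a_0)(b-b_0)+a_0b+ab_0-a_0b_0,
\end{equation*}
with $a=K(t,r)$, $a_0=K(t,s)$, $b=K(t',r')^*$ and $b_0=K(t',s')^*$; integrating the constant term $a_0b_0$ against $d^2Q(r,r')$ over $[s,t]\times[s',t']$ produces the rectangular increment $K(t,s)\square_{(s,s'),(t,t')}Q\,K(t',s')^*$.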
By the inequality obtained in \eqref{1 dim reg N} and \eqref{1 dim delta bound}, it follows that \begin{align}\label{eq:bound for 1 int 1} \|\int_s^t \int_{s'}^{t'}K(t,s)d^2Q(r,r')K(t',r')^*-K(t,s)&\square_{(s,s'),(t,t')} Q K(t',s')\|_{op} \nonumber \\ &\qquad\lesssim \|K\|^2_{\mathcal{K}_\eta}\|Q\|_{\alpha} |t-s|^{\alpha-\eta}|t'-s'|^{\alpha-\eta}, \end{align} and similarly we get \begin{align}\label{eq:bound for 1 int 2} \|\int_s^t \int_{s'}^{t'}K(t,r)d^2Q(r,r')K(t',s')^*-K(t,s)&\square_{(s,s'),(t,t')} Q K(t',s')\|_{op} \nonumber \\ &\qquad\lesssim \|K\|^2_{\mathcal{K}_\eta}\|Q\|_{\alpha} |t-s|^{\alpha-\eta}|t'-s'|^{\alpha-\eta}. \end{align} Lastly, it is readily checked that also \begin{equation}\label{eq: bound for 0 int} \|K(t,s)\square_{(s,s'),(t,t')} Q K(t',s')\|_{op} \lesssim \|K\|^2_{\mathcal{K}_\eta}\|Q\|_{\alpha} |t-s|^{\alpha-\eta}|t'-s'|^{\alpha-\eta}. \end{equation} Combining \eqref{eq:bound for 1 int 1}, \eqref{eq:bound for 1 int 2} and \eqref{eq: bound for 0 int} with the bound in {\rm (i)} of Theorem \ref{thm:general covariance integrals}, we obtain \begin{equation*} \|\int_s^t \int_{s'}^{t'}K(t,r)d^2Q(r,r')K(t',r')^*\|_{op} \lesssim \|K\|^2_{\mathcal{K}_\eta}\|Q\|_{\alpha} |t-s|^{\alpha-\eta}|t'-s'|^{\alpha-\eta}. \end{equation*} By a similar analysis, one obtains equivalent bounds for the three other integral terms on the right-hand side of \eqref{square inc exp} by appealing to {\rm (ii)-(iv)} of Theorem \ref{thm:general covariance integrals} as well as bounds for one-dimensional integral terms treated (as done above) by application of Lemma \ref{lem:(Volterra-sewing-lemma)}. However, in this case, the bound will be with respect to any exponent $\zeta\in [0,\alpha-\eta)$, as the inequalities in {\rm (ii)-(iv)} satisfy this type of regularity condition.
It therefore follows that the left-hand side of \eqref{square inc exp} satisfies \begin{equation*} \|\square_{(s,s'),(t,t')} \int_{0}^\cdot \int_{0}^{\cdot'} K(\cdot,r) d^2Q(r,r')K(\cdot',r')^*\|_{op} \lesssim \|K\|^2_{\mathcal{K}_\eta}\|Q\|_{\alpha} |t-s|^{\zeta}|t'-s'|^{\zeta}, \end{equation*} for any $\zeta\in [0,\alpha-\eta)$. Since the map $(t,t')\mapsto \int_{0}^t \int_{0}^{t'} K(\cdot,r) d^2Q(r,r')K(\cdot',r')^*$ is zero on the boundary of $[0,T]^2$, we conclude by Remark \ref{rem: zero boundary co-variance} that the covariance operator is contained in $\mathcal{Q}_{\zeta}$ for any $\zeta\in [0,\alpha-\eta)$. \end{proof} Another consequence of the construction of the double Young-Volterra integral is stability estimates in terms of the driving covariance $Q$ and the Volterra kernel $K$. We summarize this in the following proposition. \begin{prop} \label{prop:stability} Let $K,\tilde{K}\in \mathcal{K}_{\eta}$, with $\eta\in(0,1)$. For a constant $\alpha\in (\eta,1]$, assume that $Q$ and $\tilde{Q}$ are both $\alpha$-regular covariance functions in $\mathcal{Q}_\alpha$. Furthermore, let $M>0 $ be a constant such that $\|K\|_{\mathcal{K}_{\eta}}\vee \|\tilde{K}\|_{\mathcal{K}_{\eta}} \vee \|Q\|_{\mathcal{Q}_\alpha}\vee \|\tilde{Q}\|_{\mathcal{Q}_\alpha}\leq M$. Then the following stability estimate holds for any $\zeta\in [0,\alpha-\eta)$: \begin{equation}\label{eq:stability} \| \mathcal{I}(K,Q)- \mathcal{I}(\tilde{K},\tilde{Q})\|_{\mathcal{Q}_{\zeta}}\leq C_M\left(\|K-\tilde{K} \|_{\mathcal{K}_{\eta}} +\|Q-\tilde{Q}\|_{\mathcal{Q}_\alpha}\right). \end{equation} \end{prop} \begin{proof} This follows directly from the proof of Theorem \ref{thm:general covariance integrals}, and Proposition \ref{Holder reg of covar}.
First, it is readily checked that Theorem \ref{thm:general covariance integrals} may canonically be extended to integrals of the form \begin{equation*} \mathcal{I}(K,Q,L)(t,t'):=\int_0 ^t \int_{0}^{t'}K(t,r)d^2Q(r,r')L(t',r')^*, \end{equation*} where $K,L\in \mathcal{K}_{\eta}$ and $Q\in \mathcal{Q}_\alpha$. We therefore assume at this point that the above integral is well defined in the same way as shown in Theorem \ref{thm:general covariance integrals}. This leads to an extension of inequality \eqref{IQ mapto Q} of the form \begin{equation}\label{int KQL} \| \mathcal{I}(K,Q,L)\|_{\mathcal{Q}_{\zeta}} \leq C \|K\|_{\mathcal{K}_\eta} \|Q\|_{\mathcal{Q}_\alpha} \|L\|_{\mathcal{K}_\eta}, \end{equation} for $\zeta\in [0,\alpha-\eta)$. Observe that the difference $\mathcal{I}(K,Q)- \mathcal{I}(\tilde{K},\tilde{Q})$ is equal to \begin{equation}\label{rel DKK DQQ} \mathcal{I}(K,Q)- \mathcal{I}(\tilde{K},\tilde{Q})=D(K,\tilde{K})+ D(Q,\tilde{Q}) \end{equation} where we define \begin{equation*} D(K,\tilde{K}):=\mathcal{I}(K,Q)-\mathcal{I}(\tilde{K},Q) \,\,\, {\rm and} \,\,\, D(Q,\tilde{Q}):=\mathcal{I}(\tilde{K},Q)-\mathcal{I}(\tilde{K},\tilde{Q}). \end{equation*} Recall from Remark \ref{rem: linearity of integral} that the integral operator is bilinear in $K$ and linear in $Q$. Moreover, since $K$ and $\tilde{K}$ are both linear operators on $H$, their difference is also a linear operator on $H$, and since $\mathcal{K}_\eta$ is a linear space, it follows that $K-\tilde{K}\in \mathcal{K}_\eta$. Similarly, $Q-\tilde{Q}\in \mathcal{Q}_{\alpha}$. This yields \begin{align*} D(K,\tilde{K})(t,t')&= \int_0^t\int_{0}^{t'} \tilde{K}(t,r)d^2Q(r,r')(K-\tilde{K})(t',r')^*\\ &\qquad+ \int_0^t\int_{0}^{t'} (K-\tilde{K})(t,r)d^2Q(r,r')K(t',r')^* \\ &=\mathcal I(\tilde{K},Q,K-\tilde{K})(t,t')+\mathcal I(K-\tilde{K},Q,K)(t,t').
\end{align*} Invoking the inequality \eqref{int KQL} twice, we find \begin{equation}\label{DKK} \|D(K,\tilde{K})\|_{\mathcal{Q}_{\zeta}} \leq C_M \|K\|_{\mathcal{K}_\eta}\|K-\tilde{K}\|_{\mathcal{K}_\eta}\|Q\|_{\mathcal{Q}_\alpha}. \end{equation} Through similar manipulations using that $Q-\tilde{Q}\in \mathcal{Q}_\alpha$ it is seen from Proposition \ref{Holder reg of covar} that $D(Q,\tilde{Q})$ can be bounded by \begin{equation}\label{DQQ} \|D(Q,\tilde{Q})\|_{\mathcal{Q}_{\zeta}}\leq C_M \|K\|_{\mathcal{K}_\eta} \|Q-\tilde{Q}\|_{\mathcal{Q}_\alpha}. \end{equation} We can now majorize the difference on the left-hand side of \eqref{eq:stability} by using relation \eqref{rel DKK DQQ} and the triangle inequality, as well as the estimates in \eqref{DKK} and \eqref{DQQ}, to obtain \begin{equation*} \| \mathcal{I}(K,Q)- \mathcal{I}(\tilde{K},\tilde{Q})\|_{\mathcal{Q}_{\zeta}}\leq C_M\left(\|K-\tilde{K} \|_{\mathcal{K}_\eta} +\|Q-\tilde{Q}\|_{\mathcal{Q}_\alpha}\right), \end{equation*} which proves our claim. \end{proof} The stability estimate in Proposition \ref{prop:stability} tells us that the Volterra processes are Lipschitz continuous in both the kernel $K$ and the covariance functional $Q$ of the noise. Thus, small model errors or statistical estimation errors in the kernel $K$ and/or the covariance functional $Q$ lead to small errors in the resulting Volterra processes. This holds $\omega$-wise and is therefore a very strong form of stability in a probabilistic context. \subsection{Characteristic functionals of Volterra processes driven by Gaussian noise} An important question to ask is whether the pathwise Volterra process constructed in Proposition \ref{regularity W} is a Gaussian process when the driving noise $W$ is a Hilbert-valued Gaussian process. The next proposition gives an affirmative answer to this question.
\begin{prop}\label{prop: pathwise process is gaussian } Consider a Hilbert-valued zero-mean Gaussian process $W:[0,T]\times \Omega \rightarrow H$ with covariance operator $Q_W:[0,T]^2\rightarrow \mathcal{L} (H)$, and assume $t\mapsto W(t,\omega)$ is $\beta$-H\"older continuous with $\beta\in (0,1)$ for $\omega\in \mathcal{N}^c\in\mathcal F$, where $\mathcal{N}^c$ is of full measure. Let $K\in \mathcal{K}_{\eta}$ with $\zeta:=\beta-\eta >0$, and assume that the covariance operator $Q_W\in \mathcal{Q}_\alpha$ with $\rho=\alpha-\eta>0$. For any $\omega\in \mathcal{N}^c$, let $X^\cdot(\cdot,\omega)$ be given as the Volterra process \begin{equation}\label{eq:general gaussian proc} X^\tau(t,\omega)=\int_{0}^{t}K(\tau,s)dW(s,\omega), \end{equation} where the integral is constructed as in Proposition \ref{regularity W}. Then $(t,\omega)\mapsto X^t(t,\omega)$ is a Hilbert-valued zero-mean Gaussian process on the probability space $\left(\Omega,\mathcal{F},\mathbb{P}\right)$, and the characteristic functional of $X$ is given by \begin{equation}\label{char func X} \mathbb{E}\left[\exp\left(i\langle X^\tau(t),f\rangle \right)\right]=\exp\left(-\frac{1}{2}\langle \int_0^t\int_0 ^t K(\tau,r)d^2Q_W(r,r')K(\tau,r')^* f,f\rangle \right), \end{equation} for any $f\in H$. \end{prop} \begin{proof} We begin by proving that for each $(\tau,t)\in \Delta_2^T$, $X^\tau(t,\cdot)$ is a Gaussian random variable. To this end, it is sufficient to prove that the characteristic functional of $X^\tau(t,\cdot)$ is that of a Gaussian, and that it is given by \eqref{char func X}.
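The computation below hinges on the classical scalar fact that for a real zero-mean Gaussian random variable $\xi$ we have
\begin{equation*}
\mathbb{E}\left[e^{i\lambda \xi}\right]=e^{-\frac{\lambda^2}{2}\mathbb{E}[\xi^2]},\qquad \lambda\in \mathbb{R},
\end{equation*}
which we will apply with $\lambda=1$ and $\xi=\sum_{[u,v]\in \mathcal{P}}\langle W(v)-W(u),K(\tau,u)^*f\rangle$.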
Observe that by continuity of the exponential function and the construction of $X$ as the limit of a Riemann type sum as given in Proposition \ref{regularity W}, we have \begin{align}\label{int to sum lim} \mathbb{E}\bigg[\exp\bigg(i\langle \int_{0}^{t}K(\tau,s)dW(s,\omega),f\rangle \bigg)\bigg] &=\mathbb{E}\bigg[\lim_{|\mathcal{P}|\rightarrow0}\exp\bigg(i\sum_{[u,v]\in \mathcal{P}} \langle K(\tau,u)(W(v)-W(u)),f\rangle \bigg)\bigg] \\\nonumber &=\mathbb{E}\bigg[\lim_{|\mathcal{P}|\rightarrow0}\exp\bigg(i\sum_{[u,v]\in \mathcal{P}} \langle W(v)-W(u),K(\tau,u)^*f\rangle \bigg)\bigg]. \end{align} Since $|\exp\left(i\langle g,f\rangle\right)|\leq 1$ for any $f,g\in H$, it follows from the dominated convergence theorem that \begin{align}\label{lim outside} &\mathbb{E}\bigg[\lim_{|\mathcal{P}|\rightarrow 0}\exp\bigg(i\sum_{[u,v]\in \mathcal{P}} \langle W(v) -W(u),K(\tau,u)^*f\rangle \bigg)\bigg] \\\nonumber &\qquad\qquad= \lim_{|\mathcal{P}|\rightarrow 0}\mathbb{E}\bigg[\exp\bigg(i\sum_{[u,v]\in \mathcal{P}} \langle W(v)-W(u),K(\tau,u)^*f\rangle \bigg)\bigg]. \end{align} Using that the sum $\sum_{[u,v]\in \mathcal{P}} \langle W(v)-W(u),K(\tau,u)^* f\rangle$ is Gaussian, since $W$ is a Gaussian process (see Definition \ref{def:Hilbert Gaussian process}), and by computations similar to those in \eqref{eq: co-variance comp}, we obtain that the following identity holds \begin{align}\label{car gaussian step} \mathbb{E}\bigg[\exp\bigg(i\sum_{[u,v]\in \mathcal{P}} \langle& W(v)-W(u),K(\tau,u)^*f\rangle \bigg)\bigg] \\\nonumber = \exp\bigg(-\frac{1}{2}\sum_{\substack{[u,v]\in \mathcal{P}\\ [u',v']\in \mathcal{P}}} \langle \square_{(u,u'),(v,v')} Q_W K(\tau,u' )^*f,K(\tau,u)^*f\rangle \bigg).
\end{align} By using the dual formulation of the operators again, and moving the double sum inside the inner product, we recognise that \begin{align*} \exp\bigg(-\frac{1}{2}&\sum_{\substack{[u,v]\in \mathcal{P}\\ [u',v']\in \mathcal{P}}} \langle \square_{(u,u'),(v,v')} Q_W K(\tau,u' )^* f,K(\tau,u)^*f\rangle \bigg) \\ = \exp\bigg(-\frac{1}{2} \langle \sum_{\substack{[u,v]\in \mathcal{P}\\ [u',v']\in \mathcal{P}}} K(\tau,u) \square_{(u,u'),(v,v')} Q_W K(\tau,u' )^*f,f\rangle \bigg). \end{align*} Taking limits as the mesh of the partition goes to zero, we obtain exactly the operator-valued integral \begin{equation}\label{covar} \int_0^t\int_0 ^t K(\tau,r)d^2Q_W(r,r')K(\tau,r')^* = \lim_{|\mathcal{P}|\rightarrow 0} \sum_{\substack{[u,v]\in \mathcal{P}\\ [u',v']\in \mathcal{P}}} K(\tau,u) \square_{(u,u'),(v,v')} Q_W K(\tau,u' )^*. \end{equation} By again recalling the derivations in \eqref{eq: co-variance comp}, we have that $$ \mathbb E\left[(X_{\mathcal P}^{\tau}(t))^{\otimes 2}\right]= \sum_{\substack{[u,v]\in \mathcal{P}\\ [u',v']\in \mathcal{P}}} K(\tau,u) \square_{(u,u'),(v,v')} Q_W K(\tau,u' )^*. $$ This shows that the right-hand side is a symmetric and positive semi-definite operator, and these properties are preserved after taking limits. Thus, the operator in \eqref{covar} is a bounded linear operator on $H$ which is symmetric and positive semi-definite. Combining our considerations and the identities obtained in \eqref{covar}, \eqref{car gaussian step}, \eqref{lim outside} and \eqref{int to sum lim}, we see that \begin{equation} \mathbb{E}\left[\exp\left(i\langle \int_{0}^{t}K(\tau,s)dW(s),f\rangle \right)\right]=\exp\left(-\frac{1}{2}\langle \int_0^t\int_0 ^t K(\tau,r)d^2Q_W(r,r')K(\tau,r')^* f,f\rangle \right). 
\end{equation} Recognising that this is the characteristic functional of a Gaussian random variable in a Hilbert space with trace class covariance operator $Q^{\tau,\tau}_X(t,t)\in \mathcal{L}(H)$ given by \begin{equation} Q^{\tau,\tau}_X(t,t)=\int_0^t\int_0 ^{t} K(\tau,r)d^2Q_W(r,r')K(\tau,r')^*, \end{equation} proves that $X^\tau(t)=\int_0^t K(\tau,s)dW(s)$ is a Gaussian random variable in $H$ for each $(\tau,t)\in \Delta_2^T$. In order to prove that $t\mapsto X(t)=X^t(t)=\int_0^tK(t,s)dW(s)$ is a Gaussian {\it process}, recall from Definition \ref{def:Hilbert Gaussian process} that we need to show that for any $n\geq 1$, $\{t_i\}_{i=1}^n\subset [0,T]$, and $\{f_i\}_{i=1}^n\in H^{\times n}$, $(\langle X(t_1),f_1\rangle ,\ldots,\langle X(t_n),f_n\rangle )$ is an $n$-variate Gaussian random variable in $\mathbb{R}^{ n}$. We prove this claim for $n=2$; the case $n>2$ follows by a similar argument that is notationally more involved. For $t_1,t_2\in [0,T]$, we consider \begin{equation}\label{2 dim volterra process} \begin{pmatrix} \int_0^{t_1}K(t_1,r)dW(r) \\ \int_0^{t_2}K(t_2,r)dW(r) \end{pmatrix} \in H^2. \end{equation} Define an operator $G:[0,T]^4\rightarrow \mathcal{L}(H^2)$ by \begin{equation} G(t_1,t_2,u_1,u_2)= \begin{pmatrix} K(t_1,u_1) & 0 \\ 0 & K(t_2,u_2) \end{pmatrix}. 
\end{equation} Both integrals in \eqref{2 dim volterra process} are constructed as limits of Riemann type sums (as in Proposition \ref{regularity W}) in the following way: let $\mathcal{P}^1$ be a partition of $[0,t_1]$ and $\mathcal{P}^2$ a partition of $[0,t_2]$; then we have that \begin{equation} \begin{pmatrix} \int_0^{t_1}K(t_1,r)dW(r) \\ \int_0^{t_2}K(t_2,r)dW(r) \end{pmatrix} =\lim_{|\mathcal{P}^1|\rightarrow 0} \lim_{|\mathcal{P}^2|\rightarrow 0} \sum_{[u_1,v_1]\in \mathcal{P}^1 } \sum_{[u_2,v_2]\in \mathcal{P}^2 } G(t_1,t_2,u_1,u_2)\begin{pmatrix} W(v_1)-W(u_1) \\ W(v_2)-W(u_2) \end{pmatrix}. \end{equation} Set $F=(f_1,f_2)\in H^2$, $u=(u_1,u_2),v=(v_1,v_2),t=(t_1,t_2)\in [0,T]^2$, and define $$ Z(v)-Z(u):=\begin{pmatrix} W(v_1)-W(u_1) \\ W(v_2)-W(u_2) \end{pmatrix}. $$ It is then readily checked that \begin{multline} \mathbb{E} \left[\langle G(t,u) (Z(v)-Z(u)),F\rangle_{H^2} \langle G(t,u') (Z(v')-Z(u')),F\rangle_{H^2}\right] \\ = \mathbb{E} \left[\langle Z(v)-Z(u),G(t,u)^*F\rangle_{H^2} \langle Z(v')-Z(u'),G(t,u')^*F\rangle_{H^2}\right], \end{multline} and by similar computations as in \eqref{eq: co-variance comp} we obtain the following expression \begin{multline*} \mathbb{E} \left[\langle G(t,u) (Z(v)-Z(u)),F\rangle_{H^2} \langle G(t,u') (Z(v')-Z(u')),F\rangle_{H^2}\right] =\langle G(t,u) \square_{(u,u'),(v,v')} Q_Z G(t,u')^* F, F\rangle_{H^2} . \end{multline*} Let us first investigate the covariance $Q_Z$ associated to $Z$. 
By definition of $Z$, it follows that \begin{equation*} \begin{aligned} \square_{(u,u'),(v,v')} & Q_Z \\ &=\begin{pmatrix} \mathbb{E}[(W(v_1)-W(u_1))\otimes (W(v_1')-W(u_1'))] & \mathbb{E}[(W(v_1)-W(u_1))\otimes (W(v_2')-W(u_2'))] \\ \mathbb{E}[(W(v_2)-W(u_2))\otimes (W(v_1')-W(u_1'))] & \mathbb{E}[(W(v_2)-W(u_2))\otimes (W(v_2')-W(u_2'))] \end{pmatrix} \\ &=\begin{pmatrix} \square_{(u_1,u_1'),(v_1,v_1')}Q_W & \square_{(u_1,u_2'),(v_1,v_2')}Q_W \\ \square_{(u_1',u_2),(v_1',v_2)}Q_W &\square_{(u_2,u_2'),(v_2,v_2')}Q_W \end{pmatrix}. \end{aligned} \end{equation*} The above expression for the covariance leads to the following expression for the appropriate composition of operators \begin{multline*} G(t,u)\square_{(u,u'),(v,v')}Q_Z G(t,u')^* \\ = \begin{pmatrix} K(t_1,u_1)\square_{(u_1,u_1'),(v_1,v_1')}Q_W K(t_1,u_1')^* &K(t_1,u_1)\square_{(u_1,u_2'),(v_1,v_2')}Q_W K(t_2,u_2')^* \\ K(t_2,u_2)\square_{(u_1',u_2),(v_1',v_2)}Q_W K(t_1,u_1')^* & K(t_2,u_2)\square_{(u_2,u_2'),(v_2,v_2')}Q_W K(t_2,u_2')^* \end{pmatrix}. \end{multline*} The key observation here is that each of the elements in the above matrix only depends on four variables (in addition to $t_1$ and $t_2$). With this expression at hand, let $\mathcal{P}:=\mathcal{P}^1\times \mathcal{P}^2$ and $\mathcal{P}':=\mathcal{P}^{\prime,1}\times \mathcal{P}^{\prime,2}$ be two partitions of the rectangle $[0,t_1]\times[0,t_2]$. In particular, for $[u,v]=[u_1,v_1]\times[u_2,v_2]\in \mathcal{P}$, $[u_1,v_1]\in \mathcal{P}^1$ and $[u_2,v_2]\in \mathcal{P}^2$. For notational ease define $\sum_{\mathcal{P}^i\times \mathcal{P}^j}:=\sum_{[u_i,v_i]\in \mathcal{P}^i}\sum_{[u_j,v_j]\in \mathcal{P}^j}$ for $i,j=1,2$. 
We then have that \begin{multline*} \sum_{[u,v]\in \mathcal{P}} \sum_{[u',v']\in \mathcal{P}'} G(t,u) \square_{(u,u'),(v,v')} Q_Z G(t,u')^* \\ = \begin{pmatrix} \sum_{\mathcal{P}^1\times \mathcal{P}^{\prime,1}} K(t_1,u_1)\square_{(u_1,u_1'),(v_1,v_1')}Q_W K(t_1,u_1')^* & \sum_{\mathcal{P}^1\times \mathcal{P}^{\prime,2}}K(t_1,u_1)\square_{(u_1,u_2'),(v_1,v_2')}Q_W K(t_2,u_2')^* \\ \sum_{\mathcal{P}^2\times \mathcal{P}^{\prime,1}} K(t_2,u_2)\square_{(u_1',u_2),(v_1',v_2)}Q_W K(t_1,u_1')^* & \sum_{\mathcal{P}^2\times \mathcal{P}^{\prime,2}} K(t_2,u_2)\square_{(u_2,u_2'),(v_2,v_2')}Q_W K(t_2,u_2')^* \end{pmatrix}. \end{multline*} On the right-hand side we obtain four double-sums approximating different covariance operators, as constructed in Theorem \ref{thm:general covariance integrals}. In particular, we have, for $i,j=1,2$, \begin{equation*} \lim_{\substack{|\mathcal{P}^i|\rightarrow 0 \\ |\mathcal{P}^j|\rightarrow 0}} \sum_{\mathcal{P}^i\times \mathcal{P}^j} K(t_i,u_i)\square_{(u_i,u_j),(v_i,v_j)}Q_W K(t_j,u_j)^* = \int_0^{t_i}\int_0^{t_j} K(t_i,r)d^2Q_W(r,r')K(t_j,r')^*, \end{equation*} from which we conclude that the following expression is also well-defined as a linear operator on $H^2$ \begin{equation*} \lim_{\substack{|\mathcal{P}|\rightarrow 0 \\ |\mathcal{P}'|\rightarrow 0}} \sum_{[u,v]\in \mathcal{P}} \sum_{[u',v']\in \mathcal{P}'} G(t,u) \square_{(u,u'),(v,v')} Q_Z G(t,u')^* = \int_0^t\int_0^t G(t,s)d^2Q_Z(s,s')G(t,s')^*, \end{equation*} where $|\mathcal{P}|=|\mathcal{P}^1|\vee |\mathcal{P}^2|$, and similarly for $\mathcal{P}'$. 
With all these tools at hand, we follow along the same lines of arguments leading to the proof that $\int_0^tK(t,s)dW(s)$ is a Gaussian random variable on $H$ as done in the first part of this proof, to see that \begin{equation*} \mathbb{E} \left[ \exp(i\langle \begin{pmatrix} \int_0^{t_1}K(t_1,r)dW(r) \\ \int_0^{t_2}K(t_2,r)dW(r) \end{pmatrix},F \rangle_{H^2})\right] =\exp\left( -\frac{1}{2} \langle \int_0^t\int_0^t G(t,s)d^2Q_Z(s,s')G(t,s')^* F,F\rangle_{H^2}\right), \end{equation*} where $\int_0^t=\int_0^{t_1}\int_0^{t_2}$. From this it follows that $\begin{pmatrix} \int_0^{t_1}K(t_1,r)dW(r) \\ \int_0^{t_2}K(t_2,r)dW(r) \end{pmatrix}$ is a Gaussian random variable on $H^2$. A similar argument can be extended to any collection of times $t_1,\ldots,t_n\in [0,T]$, and thus we conclude that $t\mapsto \int_0^{t}K(t,r)dW(r)$ is a Gaussian process. \end{proof} We remark in passing that the proof of Proposition \ref{prop: pathwise process is gaussian } shows more than only the covariance operator of $X^{\tau}(t)$. Indeed, the proof provides (by inductive arguments) the covariance operator associated with the $H^n$-valued random variable $(X(t_1),\ldots,X(t_n))$ for any sequence of times $\{t_i\}_{i=1}^n\subset[0,T]^n$, where $X(t):=X^t(t)$. \section{Applications}\label{sect:applications} In this Section we have collected some possible applications of our results on Gaussian Volterra processes in Hilbert space and the corresponding covariance functionals. \subsection{Iterated stochastic process and their covariance operators} Iterated stochastic processes has received much attention (e.g. \cite{orsingher2009,Burdzy1993,BurdzyKhoshnevisan1998,ThiullenVigot2017}). In \cite{BurdzyKhoshnevisan1998}, the authors propose to model a diffusion in a crack by iterated Brownian motions. In particular, one considers two independent Brownian motions $B^i:[0,T]\times \Omega_i \rightarrow \mathbb{R}^n$ for $i=1,2$, and then studies properties of the process $B^1(|B^2(t)|)$. 
We refer to $B^1$ as the state process and $B^2$ as the time process. Several interesting probabilistic and analytic properties can be obtained from these processes; see in particular \cite{Burdzy1993} for a study of the pathwise properties of these processes, and \cite{orsingher2009} for relations with higher order fractional parabolic PDEs. A natural extension would be to consider infinite dimensional Gaussian processes indexed by irregular paths. The advantage of this pathwise approach is that the time process and the state process do not need to be independent. By this we mean that we fix an $\omega_2\in \Omega_2$ such that $t\mapsto B^2(t,\omega_2)$ is a continuous path, and one looks at the conditional process $\mathbb{B}(t)=B^1(|B^2(t,\omega_2)|)$ as a random variable. This process is then a Gaussian process, and its covariance function is given by the composition of the covariance function of $B^1$ with the path $|B^2(t,\omega_2)|$. Due to the fact that $t\mapsto B^2(t,\omega_2)$ is H\"older continuous of order $\alpha<\frac{1}{2}$, the regularity of the covariance function is reduced accordingly. More generally, one can study infinite dimensional Gaussian processes with irregular time shifts. Let $I\subset \mathbb{R}_+$, $\alpha\in (0,1)$, and suppose $X:[0,T]\rightarrow I$ is a nowhere differentiable path which is $\alpha$-H\"older continuous. Let $W:I\times \Omega\rightarrow H$ be a Gaussian process with a $\gamma$-regular covariance function $Q_W:I\times I\rightarrow \mathcal{L}(H)$ (according to Definition \ref{def:reg covar}). Then the composition $W\circ X:[0,T]\rightarrow H$ is a Gaussian process, with covariance function \begin{equation*} Q_W\circ X (t,s)(f,g)=\mathbb{E}[\langle W\circ X(t),f\rangle \langle W\circ X(s),g\rangle ]= Q_W(X(t),X(s))\langle f,g \rangle. \end{equation*} It follows that the covariance $Q_W\circ X$ is $\alpha \gamma$-regular with $\alpha\gamma\in(0,1)$. 
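This loss of regularity under composition can be illustrated by a toy scalar check, which is purely a numerical illustration and not part of the construction above: take the model path $X(t)=t^\alpha$ and the $\gamma$-H\"older function $q(x)=x^\gamma$ on $[0,1]$, so that $q\circ X$ is exactly $\alpha\gamma$-H\"older.

```python
import numpy as np

# Toy check of Hölder-exponent composition: if X is alpha-Hölder and
# q is gamma-Hölder, then q(X(.)) is (alpha*gamma)-Hölder.
alpha, gamma = 0.4, 0.7           # illustrative exponents, freely chosen
t = np.linspace(0.0, 1.0, 400)

X = t ** alpha                    # alpha-Hölder model path on [0, 1]
qX = X ** gamma                   # composition, equals t**(alpha*gamma)

# Hölder quotient |q(X(t)) - q(X(s))| / |t - s|**(alpha*gamma) on a grid
T, S = np.meshgrid(t, t)
off_diag = ~np.isclose(T, S)
quotient = (np.abs(qX[None, :] - qX[:, None])[off_diag]
            / np.abs(T - S)[off_diag] ** (alpha * gamma))

# For beta in (0, 1], |t**beta - s**beta| <= |t - s|**beta,
# so the quotient stays bounded by 1.
print(float(quotient.max()))
```

The bound in the final comment is sharp (it is attained as $s\to 0$), which is why the printed maximum sits at $1$ up to floating-point error.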
Furthermore, one can study the Volterra process $Y(t)=\int_0^t K(t,r)d(W\circ X) (r)$, in order to introduce memory in the iterated process. Then Proposition \ref{prop: pathwise process is gaussian } tells us that $Y$ is again Gaussian, given that the singularity of $K$ is integrable with respect to the regularity of the covariance function $Q_W$. In fact, since the covariance operator $Q_W\circ X$ is only H\"older continuous, it needs to be constructed in terms of Theorem \ref{thm:general covariance integrals}, in order to make sense of this integral. This is of course due to the fact that $Q_W\circ X(t,s)$ is nowhere differentiable in a Fr\'{e}chet sense, and thus classical constructions of the covariance functions of Gaussian Volterra processes (for example given in \cite{HuCam}) are not applicable. \subsection{Construction of the rough path lift of Gaussian processes with irregular covariance functions} At the core of rough path theory lies a solution theory for controlled differential equations of the form \begin{equation*} dY(t)=f(Y(t))dX(t),\qquad Y(0)=y\in H, \end{equation*} where $f$ is a sufficiently regular function and $X$ is an $\alpha$-H\"older continuous signal with $\alpha\in (0,1)$. If $\frac{1}{3}<\alpha\leq \frac{1}{2}$, one needs to lift the signal $X$ into a tuple $(X,\mathbb{X})$, where $\mathbb{X}:[0,T]^2\rightarrow H \otimes H$ represents the iterated integral of $X$. This tuple is then called the rough path corresponding to $X$. In fact, one requires the following two conditions to hold for $X$ and $\mathbb{X}$ for $s\leq u\leq t$, \begin{align*} \mathbb{X}(s,t)-\mathbb{X}(s,u)-\mathbb{X}(u,t)&=(X(u)-X(s))\otimes (X(t)-X(u)), \end{align*} and \begin{align*} \sup_{t\neq s\in [0,T]} \frac{|X(t)-X(s)|_H}{|t-s|^\alpha}<\infty \qquad &{\rm and} \qquad \sup_{t\neq s\in [0,T]} \frac{|\mathbb{X}(s,t)|_H}{|t-s|^{2\alpha}}<\infty. 
\end{align*} Therefore, much attention is given to constructing an object $\mathbb{X}$ which satisfies the above conditions for a given path $X$. In \cite[Sec. 10.2]{FriHai}, the construction of this object corresponding to a Gaussian noise is shown under a sufficient smoothness condition on the covariance function. This smoothness condition is stated in terms of two-dimensional $p$-variation norms, which can be seen to be equivalent to the H\"older continuity of the covariance operators introduced in Definition \ref{def:reg covar} under the assumption of continuity of the $p$-variation functions. In particular, in order to construct a ``geometric'' version of $\mathbb{X}$ when $X$ is a centred Gaussian process, \cite[Thm. 10.4]{FriHai} tells us that it is sufficient that the covariance operator $Q_X$ is contained in $\mathcal{Q}_\gamma$ with $\gamma>\frac{1}{2}$\footnote{The condition is actually stated in terms of a two-dimensional $\rho$-variation norm for the covariance function, with~$\rho\in [1,2)$. It is readily checked that the H\"older norms in Definition \ref{def:reg covar} are equivalent to this variation norm, under the assumption of continuity.}. Thus, the construction of covariance operators and their corresponding regularity provided in Theorem \ref{thm:general covariance integrals} and Proposition \ref{Holder reg of covar} opens up for the construction of a rough path for Volterra processes driven by Gaussian paths with nowhere differentiable covariance operators. Such processes are, for example, illustrated in the above subsection by the class of iterated processes. \subsection{Fractional Ornstein-Uhlenbeck process driven by irregular paths} Fractional differential equations (FDEs) provide an alternative to classical ODEs, by introducing memory in the evolution of the process. This results in a non-local equation with interesting applications to several physical and social systems (e.g. 
\cite{FracDiff2019,EuchRosen,SINGH2018}). Our concern here is an $H$-valued fractional Ornstein-Uhlenbeck stochastic differential equation on a given time interval $[0,T]$. Consider two parameters $(\alpha,\gamma)\in \mathbb{R}_+\times(0,1)$ with the relation $\alpha+\gamma-1>0$, and consider the equation formally given by \begin{equation}\label{FDE} D^{\alpha}\left(Y-y\right)(t)=AY(t)+\dot{W}(t). \end{equation} Here, $y\in H$, $A\in\mathcal L(H)$, $W\in\mathcal{C}^{\gamma}([0,T],H)$ and $D^{\alpha}$ is the fractional time-derivative of order $\alpha$, given as in Definition \ref{fractional integral and derivative} in the Appendix. The object $\dot{W}$ is interpreted only formally and corresponds to the time-derivative $\frac{d}{dt}W(t)$. Since $W$ is only H\"older continuous, the derivative $\frac{d}{dt}W(t)$ does not exist, and thus we rather consider an integrated version of~\eqref{FDE}. With $I^{\alpha}$ being the fractional integral operator (see Definition \ref{fractional integral and derivative} in the appendix), let us denote by $X=I^\alpha(\dot{W})$ the process which we interpret as the integral \begin{equation}\label{X=I(W)} X(t)=\int_0^t(t-s)^{\alpha-1}dW(s). \end{equation} This integral is understood in the sense of Proposition \ref{regularity W} with $K(t,s)=(t-s)^{\alpha-1}I$ and~$I\in \mathcal{L}(H)$ being the identity operator on $H$. The integral exists due to the assumption that $\alpha+\gamma-1>0$. Applying the fractional integral operator $I^\alpha$ to both sides of~\eqref{FDE}, we obtain the equation \begin{equation}\label{mildFDE} Y(t)=y+I^{\alpha}\left(AY\right)(t)+X(t), \qquad t\in[0,T]. \end{equation} We will need a few extra tools to obtain an explicit representation of its solution, as well as the associated covariance operator in the Gaussian case. First we present a version of Fubini's theorem, showing that we can exchange the order of integration in double integrals involving Riemann integration and Volterra-Young integration. 
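Before turning to the Fubini theorem, we note that the defining integral \eqref{X=I(W)} can be sanity-checked numerically; the following sketch is purely illustrative (it uses naive left-point Riemann sums on a smooth test path, not the sewing-lemma construction of Proposition \ref{regularity W}). For $W(s)=s$ the exact value is $X(t)=t^{\alpha}/\alpha$.

```python
import numpy as np

# Left-point Riemann-sum sketch of the Volterra integral
#   X(t) = int_0^t (t - s)**(alpha - 1) dW(s),
# evaluated for the smooth test path W(s) = s, where the exact
# value is X(t) = t**alpha / alpha.  Illustration only.
alpha, t, n = 0.75, 1.0, 200_000
s = np.linspace(0.0, t, n + 1)            # partition of [0, t]
W = s                                      # smooth test path W(s) = s
kernel = (t - s[:-1]) ** (alpha - 1.0)     # kernel at left endpoints
X_approx = float(np.sum(kernel * np.diff(W)))

print(X_approx, t ** alpha / alpha)
```

The kernel is singular at $s=t$, but with left endpoints the last summand is $h^{\alpha-1}\cdot h = h^{\alpha}\to 0$, so the sum converges to the exact value $1/0.75\approx 1.3333$ as the mesh is refined.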
This property, which may be interesting in itself, is a crucial tool in proving a specific analytic representation of the fractional Ornstein-Uhlenbeck process in \eqref{mildFDE}. As a corollary to our Fubini theorem, we show that the order of the fractional integral operator and a Young-Volterra integral can be interchanged. For conciseness, all proofs in this section are relegated to Appendix \ref{A: auxiliary proofs}. \begin{prop}\label{Fubinis prop} For $\gamma,\eta>0$ with $\rho:=\gamma-\eta>0$, let $Z:[0,T]\rightarrow H$ be given as \begin{equation*} Z(t)=\int_0 ^t K(t,s)dW(s), \end{equation*} for $K\in \mathcal{K}_\eta$ and $W\in \mathcal{C}^\gamma([0,T],H)$, with the integral being defined as in Proposition \ref{regularity W}. Assume $G:\Delta_2^T\rightarrow \mathcal{L}(H)$ is in $\mathcal{K}_\kappa$ for some $\kappa\in (0,1)$. Then the following equality holds \begin{equation}\label{fubini} \int_{0}^{t}G(t,s)Z(s)ds=\int_{0}^{t}\int_{s}^{t}G(t,r)K(r,s)drdW(s), \end{equation} where the integral on the right-hand side is again interpreted in terms of Lemma \ref{lem:(Volterra-sewing-lemma)} with $\Xi^{\tau}(t,s):=\int_{s}^{\tau}G(\tau,r)K(r,s)dr\left(W(t)-W(s)\right)$. \end{prop} As already indicated, we apply the Fubini theorem to the fractional integral operator, and as we see in the next Corollary, we can further establish a connection to Mittag-Leffler functions. \begin{cor} \label{cor:Fubini with cont path}Let $0<\alpha<\gamma<1$ and $\alpha+\gamma-1>0$ and $W\in\mathcal{C}^{\gamma}\left([0,T],H\right)$. Let furthermore $X$ be defined as in Proposition \ref{regularity W}, with $K(t,s)=\frac{1}{\Gamma(1-\eta)}(t-s)^{-\eta}$ and $\gamma-\eta>0$. In particular, $X$ is given as the Volterra integral \begin{equation}\nonumber X(t)=\Gamma(1-\eta)^{-1}\int_0^t (t-s)^{-\eta}dW(s). 
\end{equation} Then, for $A\in \mathcal{L}(H)$, the following relation holds \begin{equation} \label{infinite sume mittag lef} \sum_{i=0}^{\infty}A^{\circ i}I^{i\alpha}\left( X\right)(t)=\int_{0}^{t}\left(t-s\right)^{-\eta}E_{\alpha,1-\eta}\left(A \left(t-s\right)^{\alpha}\right)dW(s), \end{equation} where $E_{\alpha,\beta}\left(At\right):=\sum_{i=0}^{\infty}\frac{A^{\circ i}t^{i}}{\Gamma\left(\alpha i+\beta\right)}$ for $t\in [0,T]$ is called the Mittag-Leffler operator, and the integrals are interpreted in sense of Proposition \ref{regularity W}. Indeed, since $x\mapsto E_{\alpha,\beta}(x^\alpha)$ is smooth everywhere except at $0$ where it is $\alpha$-H\"older continuous, we interpret the right-hand side of \eqref{infinite sume mittag lef} using Proposition \ref{regularity W} with $K(t,s)=(t-s)^{-\eta}E_{\alpha,1-\eta}(A(t-s)^\alpha)$. \end{cor} \begin{rem} The fact that the Mittag-Leffler operator is a bounded linear operator on $H$ is readily checked: for any $f\in H$ we have by the triangle inequality that \begin{equation}\label{tripple ineq} |E_{\alpha,\beta}(At)f|_H=|\sum_{i=0}^{\infty}\frac{A^{\circ i}t^if}{\Gamma\left(\alpha i+\beta\right)}|_H\leq \sum_{i=0}^{\infty}\frac{|A^{\circ i}t^i f|_H}{\Gamma\left(\alpha i+\beta\right)}\leq |f|_H E_{\alpha,\beta}(\|A\|_{op}t), \end{equation} where, in the last inequality, we have used that for a bounded linear operator $A$ and for any $i\geq 0$ we have $|A^{\circ i}f|_H\leq \|A\|_{op}^i |f|_H$. The expression $E_{\alpha,\beta}(\|A\|_{op}t)$ appearing on the right-hand side of \eqref{tripple ineq} is the classical Mittag-Leffler function evaluated at $\Vert A\Vert_{op}t$. \end{rem} \begin{thm}\label{thm:FDE} For some $\gamma\in\left(0,1\right)$, let $W\in\mathcal{C}^{\gamma}\left(\left[0,T\right],H\right)$, and assume that $A\in \mathcal{L}(H)$. For any $\alpha>1-\gamma$ let $X=I^\alpha (W)$ as given in \eqref{X=I(W)}, and assume $y\in H$. 
Then there exists a unique solution $Y\in\mathcal{C}^{\rho}([0,T],H)$ with $\rho<\gamma+\alpha-1 $ to the equation \begin{equation} Y(t)=y+AI^{\alpha}\left(Y\right)(t)+X(t).\label{eq: fractional equation} \end{equation} Moreover, the solution satisfies the following analytic formula \begin{equation}\label{eq: representation mittag lefler} Y(t)=E_{\alpha,1}\left(At^{\alpha}\right)y+\int_{0}^{t}\left(t-s\right)^{\alpha-1}E_{\alpha,\alpha}\left(A\left(t-s\right)^{\alpha}\right)dW(s), \end{equation} where the integral on the right-hand side of \eqref{eq: representation mittag lefler} is interpreted in the sense of Corollary \ref{cor:Fubini with cont path}. \end{thm} We observe from our analysis in Sections \ref{sec: inf dim gaussian analysis} and \ref{sec: Gaussain Volterra processes} that $Y$ is a Gaussian process. In the next Corollary we apply Theorem \ref{thm:general covariance integrals} to state the covariance operator of $Y$. \begin{cor} \label{cor:ML-covariance} Consider parameters $\gamma,\alpha\in(0,1)$ such that $\rho=\gamma+\alpha-1>0$ and $\beta>0$ such that $\beta+\alpha-1>0$. Let $W$ be a Gaussian process in $\mathcal{C}^\gamma([0,T],H)$, with covariance operator $Q_W\in \mathcal{Q}_\beta$, and suppose $Y\in \mathcal{C}^\rho([0,T],H)$ is the solution to the fractional Ornstein-Uhlenbeck equation given in Theorem \ref{thm:FDE} driven by $W$ with linear operator $A\in \mathcal{L}(H)$. Then the covariance operator associated to $Y$ is given by \begin{equation} Q_Y(t,t')=\int_{0}^t\int_0^{t'}(t-r)^{\alpha-1}E_{\alpha,\alpha}(A(t-r)^\alpha) d^2Q_W(r,r')(t'-r')^{\alpha-1}E_{\alpha,\alpha}(A^*(t'-r')^\alpha), \end{equation} and $Q_Y\in \mathcal{Q}_{\eta}$ for any $\eta<\beta+\alpha-1$. 
\end{cor} \begin{rem} Observe that the regularity of $Y$ constructed as the Volterra process in \eqref{eq: representation mittag lefler} is of order $0<\rho<\gamma+\alpha-1$, where $\gamma$ is the regularity of $W$. However, the regularity of the covariance $Q_Y$ is of order $\eta<\beta+\alpha-1$, where $\beta$ is the regularity of the covariance $Q_W$. A priori, there is no imposed relationship between $\beta$ and $\gamma$, although in typical examples they will be strongly related (if not the same, see Example \ref{example fbm} for the case of fractional Brownian motion). On the other hand, given that we know the regularity of the covariance operator $Q_W$, then through Kolmogorov's continuity theorem \ref{thm:Kolmogorov}, one can deduce the regularity of $W$ (which then relates $\gamma$ to $\beta$). However, this theorem is not an if and only if statement, and thus given that a stochastic process is $\gamma$-H\"older continuous, it is not obvious what regularity its covariance might have. \end{rem} \subsection{Rough stochastic volatility models} In this subsection we discuss various infinite dimensional extensions of {\it rough} stochastic volatility models that have attracted interest in recent years. Our starting point is the fractional Ornstein-Uhlenbeck process $Y$ defined in~\eqref{eq: representation mittag lefler}, where we for simplicity assume $y=0$. Consider first a state space $H=\mathbb R$, and let the risk-neutral stock price dynamics with stochastic volatility be $$ \frac{dS(t)}{S(t)}=\sigma(t,Y(t))dB(t), $$ for some Brownian motion $B$, possibly correlated with $W$, and where we suppose the risk-free interest rate to be zero. Recall that $W$ is the Gaussian process driving the fractional Ornstein-Uhlenbeck dynamics of $Y$. For example, choosing $\sigma(t,y)=\exp(y)$ would give a rough stochastic volatility model extending the class of models proposed by Gatheral, Jaisson and Rosenbaum \cite{GathJaiRosen2018}. 
In their paper, an Ornstein-Uhlenbeck process driven by a fractional Brownian motion is shown to provide an excellent fit to the volatility of stock prices. We extend this class of models to allow for a fractional time-derivative in the dynamics as well, opening for further flexibility in the modelling. Furthermore, we can define a simple rough Heston model as the variance process $$ V(t):=Y^2(t) $$ or, more generally, taking $n$ independent copies of $\mathbb R$-valued processes $W_i, i=1,\ldots,n$ driving $Y_i(t)$ as in \eqref{eq: representation mittag lefler}, $$ V(t)=\sum_{i=1}^n Y_i^2(t). $$ Choosing $\sigma(t,v)=\sqrt{v}$ would give a rough Heston stochastic volatility model, providing a possible extension of the class of models considered by El Euch and Rosenbaum \cite{EuchRosen}. Let us return to a general separable Hilbert space $H$. Forward and futures prices can be realized as infinite dimensional stochastic processes, which calls for operator-valued stochastic volatility models (see Benth, R\"udiger and S\"uss \cite{BRS} and Benth and Kr\"uhner \cite{BK-SIFINpaper}). To this end, let $H$ be the state space of the forward curves, given by some separable Hilbert space of real-valued functions on $\mathbb R_+$. We restrict to $\mathbb R_+$ as this plays the role of the time to maturity. A possible (simplistic) model for the risk-neutral forward price at time $t\geq 0$ is defined as \begin{equation} df(t)=\partial_xf(t)dt+\Sigma(t)dB(t), \end{equation} where $B$ is some $H$-valued Wiener process with covariance operator $Q_B$. A direct extension of the rough stochastic volatility model could be the following: supposing that $H$ is a Banach algebra, we define $$ \Sigma(t):=\exp(Y(t)). $$ From the assumed algebra-structure of $H$, we can conclude that $\Sigma(t)$ is again an element of $H$. Moreover, $\Sigma(t)$ defines a linear operator on $H$, given as the multiplication operator $\Sigma(t)(f)=\Sigma(t)f, f\in H$. 
An example of a natural Hilbert space $H$ to use for modelling forward prices is the Filipovic space, which also happens to be a Banach algebra (see \cite{BK-COMSpaper}). The detailed knowledge of the covariance operator of $Y$ (recall Corollary \ref{cor:ML-covariance}) provides a starting point for empirical analysis of the volatility and its dependency across maturities for forward prices. For a fixed time to maturity, we will have a dynamics following a fractional stochastic volatility model similar to the one in \cite{GathJaiRosen2018} as discussed above. We refer to the recent paper \cite{AN} where clear evidence of rough stochastic volatility in commodity forward markets has been found (see also \cite{GathJaiRosen2018}). In particular, they show that for front month contracts, the roughness of the stochastic volatility is in general lower than for stock markets. Indeed, the authors find empirical evidence of Hurst parameters below 0.05 for metals and below 0.15 in other commodity markets. We can also introduce infinite dimensional extensions of the fractional Heston model. To this end, following Benth and Simonsen \cite{BS}, for some $H$-valued adapted process $Z$ with $\vert Z(t)\vert_H=1$, define $$ \Sigma(t):=Y(t)\otimes Z(t). $$ Then, $\Sigma(t)(f)=\langle Y(t),f\rangle Z(t)$, and moreover, $\Sigma(t)^*=Z(t)\otimes Y(t)$. We take $\Sigma(t)$ as our infinite dimensional volatility process, where we observe that $$ \Sigma(t)^*\Sigma(t)=Y^{\otimes 2}(t) $$ I.e., $\Sigma(t)$ is in a sense the Cholesky decomposition of $Y^{\otimes 2}$. Notice that we use the convention $(f\otimes g)(h)=\langle f,h\rangle g$. We are concerned with the variance/volatility of elements like $$ U(t)=\mathcal L\int_0^t\Sigma(s)dB(s) $$ where $\mathcal L\in H^*$, that is, a linear functional on $H$. 
If for any $x\geq 0$ the evaluation operator $e_x: f\mapsto e_xf:=f(x)$ is a continuous linear functional on $H$\footnote{This is the case for the Filipovic space, say.}, we can think of $\mathcal L:=e_x$ as the noise process of the forward contract with time to maturity $x$. In power markets, say, the forwards deliver electricity over a settlement period. From e.g. \cite{BK-COMSpaper}, one finds that $\mathcal L$ in this case can be represented as some integral operator which is averaging over the maturities $x\geq 0$ in some domain (corresponding to the settlement period). The {\it total quadratic variation} of $U$ is given by the operator angle bracket process, see Cor. 8.17 in \cite{PesZab}, $$ \langle\langle U,U\rangle\rangle(t)=\int_0^t\mathcal L\Sigma(s)Q\Sigma(s)^*\mathcal L^*1ds. $$ The {\it instantaneous quadratic variation} is the time-derivative of this expression, thus, \begin{equation} \sigma_{\mathcal L}^2(t):=\mathcal L\Sigma(t)Q\Sigma(t)^*\mathcal L^*1. \end{equation} The instantaneous quadratic variation is the stochastic variance process (that is, the squared volatility) of $U$, and has the form, \begin{prop} It holds $$ \sigma_{\mathcal L}^2(t)=\vert\mathcal L(Y(t))\vert^2\vert Q_B^{1/2}Z(t)\vert_H^2 $$ \end{prop} \begin{proof} For $\mathcal T\in H^*$, we have $$ \vert\mathcal T^*1\vert_H^2=\langle\mathcal T^*1,\mathcal T^*1\rangle=\mathcal T\mathcal T^*1. $$ Hence, $$ \sigma_{\mathcal L}^2(t)=\vert Q_B^{1/2}\Sigma(t)^*\mathcal L^*1\vert_H^2 $$ By definition, $$ \Sigma^*(t)(f)=(Y(t)\otimes Z(t))(f)=\langle Y(t),f\rangle Z(t) $$ Hence, $$ Q_B^{1/2}\Sigma(t)^*(\mathcal L^*1)=\langle Y(t),\mathcal L^*1\rangle Q_B^{1/2}Z(t)=(\mathcal L Y(t))Q_B^{1/2}Z(t) $$ The Proposition follows. \end{proof} One may take $Z(t):=z$, with $\vert z\vert_H=1$. Thus, the stochastic variance is given as $\sigma_{\mathcal L}^2(t)=c\vert\mathcal L Y(t)\vert^2$, where $c$ is a scaling factor given by $c=\vert Q_B^{1/2}z\vert_H^2$. 
Let us now look at the stochastic process $\vert\mathcal LY(t)\vert^2$. From Theorem \ref{thm:FDE} we find that $t\mapsto\mathcal LY(t)$ has paths which are $\rho$-regular for any $\rho<\gamma+\alpha-1$, where we recall that $\gamma\in(0,1)$ is the path regularity of $W$ and $\alpha\in(0,1)$ is the order of the fractional derivative in the Ornstein-Uhlenbeck dynamics of $Y$. Moreover, $\gamma+\alpha>1$. Denoting by $v(t)$ the expected value of $\vert\mathcal L Y(t)\vert ^2$, we find from Corollary \ref{cor:ML-covariance} $$ v(t)=\mathbb E[\vert\mathcal L Y(t)\vert^2]=\mathbb E[\langle Y(t),\mathcal L^*1\rangle_H^2]=\langle Q_Y(t,t)\mathcal L^*1,\mathcal L^*1\rangle_H=\mathcal L Q_Y(t,t)\mathcal L^*1. $$ Thus, the expected moments of $\sigma_{\mathcal L}^2$ are given by $$ \mathbb E[\sigma_{\mathcal L}^{2k}(t)]=c^k\mathbb E [\vert\mathcal LY(t)\vert^{2k}]=c^k\xi_{2k} v(t)^k, $$ where $\xi_{2k}$ is the $2k$th moment of a standard normal random variable, $k\in\mathbb N$. From Corollary \ref{cor:ML-covariance}, we have that $Q_Y\in\mathcal Q_{\eta}$ for $\eta<\beta+\alpha-1$ and $\beta>0$ such that $\beta+\alpha>1$. We recover a fractional behaviour in the moments of the stochastic variance process similar to what has been observed empirically for a number of assets (see \cite{GathJaiRosen2018}) and more recently for commodity forwards \cite{AN} as noted above. In our context, we have the ``roughness'' split into a rough noise $W$ and a fractional derivative of order $\alpha$, which opens for a more flexible modelling of the stochastic volatility. Moreover, we have provided an infinite-dimensional extension of the classical models.
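For concreteness, the constant $\xi_{2k}$ appearing above is the $2k$-th moment of a standard normal, which equals the double factorial $(2k-1)!!=(2k)!/(2^k k!)$, so $\xi_2=1$, $\xi_4=3$, $\xi_6=15$. A short numerical cross-check, purely illustrative and assuming nothing beyond the standard normal density:

```python
import math

# 2k-th moment of a standard normal: xi_{2k} = (2k-1)!! = (2k)!/(2**k * k!).
def xi(two_k: int) -> float:
    k = two_k // 2
    return math.factorial(two_k) / (2 ** k * math.factorial(k))

# Cross-check against trapezoidal integration of x**(2k) * phi(x).
def xi_numeric(two_k: int, n: int = 50_000, L: float = 10.0) -> float:
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        x = -L + i * h
        w = 0.5 if i in (0, n) else 1.0    # trapezoidal end weights
        total += w * x ** two_k * math.exp(-x * x / 2)
    return total * h / math.sqrt(2 * math.pi)

print(xi(2), xi(4), xi(6))  # prints: 1.0 3.0 15.0
```

The truncation at $|x|=10$ is harmless here since the integrand decays like $e^{-x^2/2}$.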
\section{Introduction} One way to arrive at the Bogoliubov axioms of perturbative quantum field theory (pQFT) is by analogy with non-relativistic quantum mechanics \cite{Gl}, \cite{H}; a discussion on this point can also be found in \cite{third order}. We give the main ideas. Suppose that we have a time-dependent interaction potential $V$. Then one goes to the interaction picture and the time evolution is governed by the evolution equation: \begin{equation} {d \over dt}U(t,s) = - i V_{\rm int}(t) U(t,s); \qquad U(s,s) = I. \end{equation} This equation can be solved in some cases by a perturbative method, namely the series \begin{equation} U(t,s) \equiv \sum {(-i)^{n}\over n!} \int_{\mathbb{R}^{n}} dt_{1} \cdots dt_{n} T_{n}(t_{1},\dots,t_{n}) \end{equation} makes sense. The operators $ T_{n}(t_{1},\dots,t_{n}) $ are called {\it chronological products}; $n$ is called the {\it order} of the perturbation theory. They satisfy a number of properties spelled out in detail in the references above. Basically, these are unitarity and causality; the causality property means: \begin{eqnarray} T_{n}(t_{1},\dots,t_{n}) = T_{m}(t_{1},\dots,t_{m})~T_{n-m}(t_{m+1},\dots,t_{n}), \nonumber\\ {\rm for} \quad t_{j} > t_{k}, \quad j = 1,\dots,m; k = m+1,\dots,n. \end{eqnarray} An explicit formula is available (see the references above). The purpose is to generalize this idea to the relativistic context, especially the causality property. Essentially we try to substitute $ t \in \mathbb{R} $ by a Minkowski variable $ x \in \mathbb{R}^{4}. $ The chronological operators will be some operators $ T(x_{1},\dots,x_{n}) $ and all the axioms from the non-relativistic case can be generalized rather naturally. The causality axiom is more subtle. We have to replace the temporal succession $ t_{1} > t_{2} $ by the causal succession $ x_{1} \succ x_{2} $, which means that $ x_{1} $ should not be in the past causal shadow of $ x_{2} $, i.e. $ x_{2} \cap (x_{1} + \bar{V}^{+}) = \emptyset. $ In formulas: if $x_{i} \succ x_{j}, \quad \forall i \leq k, \quad j \geq k+1$ then we have: \begin{equation} T(x_{1},\dots,x_{n}) = T(x_{1},\dots,x_{k})~T(x_{k+1},\dots,x_{n}). 
\label{causality1} \end{equation} From here it follows that the ``initial condition'' $ T(x) $ should satisfy \begin{equation} [ T(x), T(y) ] = 0,\qquad (x - y)^{2} < 0 \end{equation} where for the Minkowski product we use the convention $ 1,-1,-1,-1. $ It is a difficult problem to obtain solutions of the preceding equation. The solutions for pQFT are distribution-valued operators (Wick monomials) acting in some Fock space where we can describe scattering processes with creation and annihilation of particles. According to Epstein and Glaser, we should solve directly the axioms of pQFT in a recursive way. So we start from the Bogoliubov axioms \cite{BS}, \cite{EG} as presented in \cite{DF}, \cite{DB}; to every set of Wick polynomials $ A_{1}(x_{1}),\dots,A_{n}(x_{n}) $ acting in some Fock space $ {\cal H} $ one associates the operator-valued distributions $ T^{A_{1},\dots,A_{n}}(x_{1},\dots,x_{n}) $ called chronological products; it will be convenient to use another notation: $ T(A_{1}(x_{1}),\dots,A_{n}(x_{n})) $. We should require (graded) symmetry in all arguments: for arbitrary $ A_{1}(x_{1}),\dots,A_{n}(x_{n}) $ we should have \begin{equation} T(\dots,A_{i}(x_{i}),A_{i+1}(x_{i+1}),\dots) = (-1)^{f_{i} f_{i+1}} T(\dots,A_{i+1}(x_{i+1}),A_{i}(x_{i}),\dots) \label{sqew} \end{equation} where $f_{i}$ is the number of Fermi fields appearing in the Wick monomial $A_{i}$. 
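The perturbative series for $U(t,s)$ can be illustrated in a finite-dimensional toy model: for a time-independent interaction the chronological products collapse to $V^{n}$ and the series sums to $\exp(-iV(t-s))$. The $2\times 2$ matrix below is an arbitrary toy choice, not taken from the text:

```python
import numpy as np
from scipy.linalg import expm

# Toy time-independent interaction V (arbitrary Hermitian 2x2 matrix);
# time-ordering is trivial here since V(t) commutes with itself.
V = np.array([[0.0, 0.3], [0.3, 0.1]], dtype=complex)
t = 0.5  # elapsed time t - s

U_exact = expm(-1j * V * t)              # exact evolution operator

# Truncated perturbation series sum_n (-i V t)^n / n!
U_series = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)          # order-0 term
for n in range(12):
    U_series += term
    term = term @ (-1j * V * t) / (n + 1)

print(np.max(np.abs(U_series - U_exact)))   # truncation error is tiny
```

Unitarity of the exact evolution, $UU^{\dagger}=I$, can be checked directly in the same setting.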
There are a number of rigorous ways to construct the chronological products: (a) {\it Hepp axioms} \cite{H} (one rewrites the axioms in terms of vacuum averages of chronological products); (b) {\it Polchinski flow equations} \cite{P}, \cite{S} (one considers an ultra-violet cut-off for the Feynman amplitudes and establishes some differential equations in this parameter); (c) {\it The causal approach} due to Epstein and Glaser \cite{EG}, \cite{Gl}: this is a recursive procedure for the basic objects $ T(A_{1}(x_{1}),\dots,A_{n}(x_{n})) $ and reduces the induction procedure to a distribution splitting of some distributions with causal support, or to the process of extension of distributions \cite{PS}. An equivalent point of view uses retarded products \cite{St1}. The causal method is the most elementary one from the point of view of conceptual clarity and is also well suited for practical computations. It is a very good approach for the study of gauge models \cite{Sc1}, \cite{Sc2}. The basic recursive idea of Epstein and Glaser starts from the chronological products $$ T(A_{1}(x_{1}),\dots,A_{m}(x_{m})) \quad m = 1,2,\dots $$ up to order $ n - 1 $ and constructs a causal commutator in order $n$. For instance, for $ n = 2 $ one defines the {\it causal commutator} according to: \begin{equation} D(A(x),B(y)) = A(x)~B(y) - (-1)^{|A||B|}~B(y)~A(x) \label{D2causal} \end{equation} and after the operation of causal splitting one can obtain the second-order chronological products. Generalizations of this formula are available for higher orders of the perturbation theory. In particular, in the third order we have \begin{eqnarray} D(A(x), B(y);C(z)) \equiv - [ \bar{T}(A(x), B(y)), C(z)] \nonumber\\ + (-1)^{|B||C|} [ T(A(x), C(z)), B(y)] + (-1)^{|A|(|B|+|C|)} [ T(B(y), C(z)), A(x)] \label{Dcausal} \end{eqnarray} where all commutators are understood to be graded. 
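The grading in (\ref{D2causal}) can be made concrete in a toy model where the Wick monomials are replaced by plain matrices and the Fermi parities $|A|,|B|\in\{0,1\}$ are mere bookkeeping labels (an illustrative sketch, not the actual operator algebra): the bracket then satisfies the graded antisymmetry $D(A,B)=-(-1)^{|A||B|}D(B,A)$ for every parity assignment.

```python
import numpy as np

def D(A, B, pA, pB):
    """Graded commutator D(A,B) = AB - (-1)^{|A||B|} BA, parities pA, pB in {0,1}."""
    return A @ B - (-1) ** (pA * pB) * (B @ A)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Graded antisymmetry holds for every parity assignment:
for pA in (0, 1):
    for pB in (0, 1):
        assert np.allclose(D(A, B, pA, pB), -(-1) ** (pA * pB) * D(B, A, pB, pA))
print("graded antisymmetry verified")
```

For two Fermi entries ($|A|=|B|=1$) the bracket reduces to the anticommutator $AB+BA$, as the sign rule requires.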
The causal commutators (\ref{D2causal}) and (\ref{Dcausal}) have the generic structure \begin{equation} D = \sum d_{j}(X)~W_{j}(X) \end{equation} where $ d_{j}(X) $ are numerical distributions with causal support and $ W_{j}(X) $ are Wick monomials. The numerical distributions $ d_{j} $ carry various Lorentz indexes, so to compute them we need some sort of procedure which reduces everything to a certain master scalar causal distribution. To obtain the corresponding chronological products one has to causally split only the master distribution. A more popular approach is the so-called functional formalism; here one computes the chronological products by making sense of the Feynman amplitudes. One is then faced with the problem of computing integrals of the type \begin{eqnarray} I_{N} \sim \int {d^{4}l \over (2\pi)^{4}}~ {{\cal N}(l) \over \prod_{j = 1}^{N} [(l + q_{j-1})^{2} - m_{j}]} \end{eqnarray} which are associated to one-loop Feynman graphs \cite{EKMZ}. Here $N$ is the number of external particles and the numerator $ {\cal N}(l) $ collects kinematic factors coming from vector and spinor propagators. Only the cases $ N \leq 4 $ can produce ultra-violet divergences and a regularization is needed (usually the dimensional regularization). In the particular case of a triangle graph one needs to consider the regularized integrals of type $C$ (rel. (2.9) of \cite{EKMZ}). The idea is to use Lorentz covariance and express everything in terms of some scalar integrals. A recursive procedure due to Passarino and Veltman \cite{PV} is used. In this procedure a singular region appears due to the vanishing of a certain Gram determinant. The procedure to circumvent this singularity is to use different variables. For the general case more sophisticated methods are available \cite{EKMZ}. The avoidance of the infra-red singularities is rather complicated in this approach. 
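For two momenta the Gram determinant that controls the Passarino--Veltman reduction is $N=(p\cdot q)^{2}-p^{2}q^{2}$, the invariant used throughout the third-order computations below; it vanishes exactly when $p\parallel q$. A small numerical sketch with the metric convention $(1,-1,-1,-1)$ and arbitrary sample momenta:

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, convention (1,-1,-1,-1)

def mdot(a, b):
    """Minkowski product a.b."""
    return float(a @ ETA @ b)

def gram(p, q):
    """N = (p.q)^2 - p^2 q^2, the two-momentum Gram determinant."""
    return mdot(p, q) ** 2 - mdot(p, p) * mdot(q, q)

p = np.array([2.0, 0.3, 0.1, 0.5])       # arbitrary sample momenta
q = np.array([1.5, -0.2, 0.4, 0.0])
print(gram(p, q))          # generic configuration: nonzero
print(gram(p, 2.0 * p))    # parallel momenta: the determinant vanishes
```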
The purpose of this paper is to present how the computations are done in the framework of the causal approach. The idea is to compute some expressions with causal support properties, called in \cite{EG} causal commutators. We will consider only the second and third order of perturbation theory. These causal commutators are sums of products between numerical distributions with causal support and Wick monomials. The numerical distributions are similar to the type $C$ Feynman amplitudes from \cite{PV}, but no regularization procedure is needed. Also, infra-red divergences do not appear, because the chronological products do not have such divergences: they appear only if we perform the adiabatic limit. Finally, the treatment of the singularity region associated to the Gram determinant seems to be easier. We will present the computation of one-loop contributions in second and third order of perturbation theory in Sections \ref{second} and \ref{third}. \newpage \section{Second Order Distributions with Causal Support\label{second}} In second order we have some typical distributions. We recall that the Pauli-Villars distribution is defined by \begin{equation} D_{m}(x) = D_{m}^{(+)}(x) + D_{m}^{(-)}(x) \end{equation} where \begin{equation} D_{m}^{(\pm)}(x) = \pm {i \over (2\pi)^{3}}~ \int dp e^{i p\cdot x} \theta(\pm p_{0}) \delta(p^{2} - m^{2}) \end{equation} such that \begin{equation} D^{(-)}(x) = - D^{(+)}(- x). \end{equation} This distribution has causal support. In fact, it can be causally split (uniquely) into an advanced and a retarded part: \begin{equation} D = D^{\rm adv} - D^{\rm ret} \end{equation} and then we can define the Feynman propagator and anti-propagator \begin{equation} D^{F} = D^{\rm ret} + D^{(+)}, \qquad \bar{D}^{F} = D^{(+)} - D^{\rm adv}. \end{equation} All these distributions have singularity order $ \omega(D) = -2 $. These distributions do appear in the tree contributions to the chronological products. 
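Two consistency identities follow immediately from the definitions above and provide a quick check of the sign conventions:

```latex
% From D = D^{adv} - D^{ret},  D^F = D^{ret} + D^{(+)},  \bar D^F = D^{(+)} - D^{adv}:
D^{F} - \bar{D}^{F} = D^{\rm ret} + D^{\rm adv},
\qquad
D^{F} + \bar{D}^{F} = 2\,D^{(+)} - D = D^{(+)} - D^{(-)} .
```

The last equality uses $D = D^{(+)} + D^{(-)}$.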
For one-loop contributions in the second order we need the basic distributions \begin{equation} d_{D_{1},D_{2}}(x) \equiv d^{(+)}_{D_{1},D_{2}}(x) + d^{(-)}_{D_{1},D_{2}}(x), \quad d^{(\pm)}_{D_{1},D_{2}}(x) \equiv \pm~{1 \over 2}~D_{1}^{(\pm)}(x)~D_{2}^{(\pm)}(x) \label{d12} \end{equation} (where $ D_{j} \equiv D_{m_{j}} $) which also have causal support. This expression is linear in $ D_{1} $ and $ D_{2} $. We will also use the notation \begin{equation} d_{12} \equiv d(D_{1},D_{2}) \equiv d_{D_{1},D_{2}} \end{equation} and when no confusion about the distributions $ D_{j} = D_{m_{j}} $ can appear, we skip all indexes altogether. The causal split \begin{equation} d_{12} = d_{12}^{adv} - d_{12}^{ret} \end{equation} is not unique because $ \omega(d_{12}) = 0 $, so we can make the redefinitions \begin{equation} d_{12}^{adv(ret)}(x) \rightarrow d_{12}^{adv(ret)}(x) + c~\delta(x) \end{equation} without affecting the support properties and the order of singularity. The corresponding Feynman propagators can be defined as above and will be denoted by $ d_{12}^{F} $. In \cite{loop} one can find the expressions of the dominant one-loop contributions from the chronological products. It is necessary to consider the case $ D_{1} = D_{2} = D_{m} $ and determine its Fourier transform. By direct computation one obtains \begin{equation} \tilde{d}_{m,m}(k) \equiv {1 \over (2\pi)^{2}} \int dx~ e^{i k\cdot x} d_{m,m}(x) = - {1 \over 8 (2\pi)^{3}}~\varepsilon(k_{0})~\theta(k^{2} - 4 m^{2}) \sqrt{1 - {4 m^{2} \over k^{2}}}. \label{d-mm} \end{equation} We can consider associated causal distributions by substituting in (\ref{d12}) $ D_{j} \rightarrow \partial_{\alpha}D_{j} $ etc. It can be proved that such causal distributions reduce to polynomials in partial derivatives applied to $ d_{12}. $ Detailed examples are provided in \cite{sr-gr}. 
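The closed form (\ref{d-mm}) is easy to evaluate numerically; the sketch below (with $\varepsilon(k_{0})$ implemented as a sign function) exhibits the threshold at $k^{2}=4m^{2}$ and the oddness under $k\rightarrow -k$:

```python
import numpy as np

def d_tilde_mm(k, m):
    """Closed form of the Fourier transform of d_{m,m}; metric (1,-1,-1,-1)."""
    k2 = k[0] ** 2 - np.sum(k[1:] ** 2)
    if k2 <= 4.0 * m ** 2:
        return 0.0                      # theta(k^2 - 4m^2) kills the amplitude
    return -np.sign(k[0]) / (8.0 * (2.0 * np.pi) ** 3) * np.sqrt(1.0 - 4.0 * m ** 2 / k2)

m = 1.0
k_above = np.array([3.0, 0.0, 0.0, 0.0])     # k^2 = 9 > 4m^2: above threshold
k_below = np.array([1.5, 0.0, 0.0, 0.0])     # k^2 = 2.25 < 4m^2: below threshold
print(d_tilde_mm(k_above, m), d_tilde_mm(k_below, m))
```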
\newpage \section{Third Order Causal Distributions of Triangle Type\label{third}} First, we take $ D_{j} = D_{m_{j}}, j = 1,2,3 $ and define \begin{eqnarray} d_{D_{1},D_{2},D_{3}}(x,y,z) \equiv \bar{D}^{F}_{3}(x - y) [ D^{(-)}_{2}(z - x) D^{(+)}_{1}(y - z) - D^{(+)}_{2}(z - x) D^{(-)}_{1}(y - z) ] \nonumber \\ + D^{F}_{1}(y - z) [ D^{(-)}_{3}(x - y) D^{(+)}_{2}(z - x) - D^{(+)}_{3}(x - y) D^{(-)}_{2}(z - x) ] \nonumber \\ + D^{F}_{2}(z - x) [ D^{(-)}_{1}(y - z) D^{(+)}_{3}(x - y) - D^{(+)}_{1}(y - z) D^{(-)}_{3}(x - y) ] \label{d123} \end{eqnarray} which have causal support \cite{third order}. These distributions have the singularity order $ \omega(d_{D_{1},D_{2},D_{3}}) = - 2 $. As in the previous Section we use the alternative notation \begin{equation} d_{123} \equiv d(D_{1},D_{2},D_{3}) \equiv d_{D_{1},D_{2},D_{3}} \end{equation} and when there is no ambiguity about the distributions $ D_{j} $ we simply denote $ d = d_{123} $. There are some associated distributions obtained from $ d_{D_{1},D_{2},D_{3}}(x,y,z) $ by applying derivatives to the factors $ D_{j} = D_{m_{j}}, j = 1,2,3 $, for instance \begin{eqnarray} {\cal D}_{1}^{\mu}d_{D_{1},D_{2},D_{3}} \equiv d_{\partial^{\mu}D_{1},D_{2},D_{3}},\quad {\cal D}_{2}^{\mu}d_{D_{1},D_{2},D_{3}} \equiv d_{D_{1},\partial^{\mu}D_{2},D_{3}},\quad {\cal D}_{3}^{\mu}d_{D_{1},D_{2},D_{3}} \equiv d_{D_{1},D_{2},\partial^{\mu}D_{3}}, \end{eqnarray} and so on for more derivatives $ \partial_{\alpha} $ distributed on the factors $ D_{j} = D_{m_{j}}, j = 1,2,3 $. It is known that these distributions can be causally split in such a way that the order of singularity, translation invariance and Lorentz covariance are preserved. The same will be true for the corresponding Feynman distributions. Because $ \omega(d_{123}) = - 2 $ and $ \omega({\cal D}_{i}^{\mu}d_{123}) = - 1 $, the corresponding advanced, retarded and Feynman distributions are unique. For more derivatives we have some freedom of redefinition. 
As in the previous Section, let us consider the case $ D_{1} = D_{2} = D_{3} = D_{m},~m > 0 $ and study the corresponding distribution $ d_{m,m,m}. $ We consider it as distribution in two variables $ X \equiv x - z,\quad Y \equiv y - z $ and we will need its Fourier transform which we define by \begin{equation} \tilde{d}(p,q) \equiv {1 \over (2\pi)^{4}}~\int e^{i (p\cdot X + q \cdot Y)}~d(X,Y). \end{equation} We will also need the distributions with causal support \begin{eqnarray} f_{1}(x,y,z) = \delta(y - z)~d_{m,m}(x - y) \nonumber \\ f_{2}(x,y,z) = \delta(z - x)~d_{m,m}(y - z) \nonumber \\ f_{3}(x,y,z) = \delta(x - y)~d_{m,m}(y - z) \end{eqnarray} with \begin{equation} \omega(f_{j}) = 0 \end{equation} and the Fourier transforms are: \begin{equation} \tilde{f}_{1}(p,q) = {1 \over (2\pi)^{2}}~\tilde{d}_{m,m}(p),\quad \tilde{f}_{2}(p,q) = {1 \over (2\pi)^{2}}~\tilde{d}_{m,m}(q),\quad \tilde{f}_{3}(p,q) = {1 \over (2\pi)^{2}}~\tilde{d}_{m,m}(P) \label{f} \end{equation} with $ P = p + q. $ \newpage \begin{thm} The following formula is valid: \begin{equation} \tilde{d}_{m,m,m}(p,q) = {1\over 8 (2\pi)^{5}} {1 \over \sqrt{N}}~ [\epsilon(p_{0}) \theta(p^{2} - 4 m^{2})~ln_{1} + \epsilon(q_{0}) \theta(q^{2} - 4 m^{2})~ln_{2} + \epsilon(P_{0}) \theta(P^{2} - 4 m^{2})~ln_{3} ] \label{d-mmm} \end{equation} where \begin{eqnarray} ln_{1} \equiv ln\left({P\cdot q + \sqrt{N (1 - 4 m^{2}/p^{2})} \over P\cdot q - \sqrt{N (1 - 4 m^{2}/p^{2})}}\right) \nonumber \\ ln_{2} \equiv ln\left({P\cdot p + \sqrt{N (1 - 4 m^{2}/q^{2})} \over P\cdot p - \sqrt{N (1 - 4 m^{2}/q^{2})}}\right) \nonumber \\ ln_{3} \equiv ln\left({- p\cdot q + \sqrt{N (1 - 4 m^{2}/P^{2})} \over - p\cdot q - \sqrt{N (1 - 4 m^{2}/P^{2})}}\right) \end{eqnarray} with the notations $ P = p + q $ and $ N \equiv (p\cdot q)^{2} - p^{2} q^{2}. 
$ The previous expression is continuous in the limit $ N \rightarrow 0 ~(\Leftrightarrow p \parallel q) $ and it is \begin{equation} \tilde{d}_{m,m,m}(p,q) = 2(F_{1} + F_{2} + F_{3}) \end{equation} where \begin{equation} F_{1} \equiv {1 \over P\cdot q}~\tilde{f}_{1}, \quad F_{2} \equiv {1 \over P\cdot p}~\tilde{f}_{2}, \quad F_{3} \equiv {1 \over p\cdot q}~\tilde{f}_{3}. \end{equation} \end{thm} {\bf Proof:} (i) From the definition (\ref{d123}) it follows that we have six contributions: \begin{equation} d(X,Y) = \sum_{j=1}^{6}~d^{(j)}(X,Y) \end{equation} of the form \begin{equation} d^{(j)}(X,Y) = d^{(j)}_{3}(X - Y)~d^{(j)}_{2}(- X)~d^{(j)}_{1}(Y),~j = 1,\dots,6 \end{equation} If we substitute \begin{equation} d^{(j)}(X) = {1 \over (2\pi)^{2}}~\int e^{- i k\cdot X}~\tilde{d}^{(j)}(k) \end{equation} we get \begin{equation} \tilde{d}^{(j)}(p,q) = {1 \over (2\pi)^{2}}~\int dk~\tilde{d}^{(j)}_{3}(k)~\tilde{d}^{(j)}_{2}(k - p)~\tilde{d}^{(j)}_{1}(k + q) \end{equation} We consider for illustration the case $ j = 1 $ for which \begin{eqnarray} \tilde{d}^{(1)}_{3}(k) = {1 \over (2\pi)^{2}}~{1 \over k^{2} - m^{2} - i~0}, \nonumber\\ \tilde{d}^{(1)}_{2}(k) = - {i \over 2\pi}~\theta( - k_{0})~\delta(k^{2} - m^{2}),\quad \tilde{d}^{(1)}_{1}(k) = {i \over 2\pi}~\theta(k_{0})~\delta(k^{2} - m^{2}). \end{eqnarray} We substitute in the previous formula and obtain \begin{equation} \tilde{d}^{(1)}(p,q) = {1 \over (2\pi)^{6}}~\int dk {1 \over k^{2} - m^{2} - i~0}~ \theta(p_{0} - k_{0})~\delta((p - k)^{2} - m^{2})~\theta(k_{0} + q_{0})~\delta((k + q)^{2} - m^{2}) \label{d1} \end{equation} We make the change of variables $ k \rightarrow k + p $ leading to \begin{equation} \tilde{d}^{(1)}(p,q) = {1 \over (2\pi)^{6}}~\int dk {1 \over (k + p)^{2} - m^{2} - i~0}~ \theta(- k_{0})~\delta(k^{2} - m^{2})~\theta(k_{0} + P_{0})~\delta((k + P)^{2} - m^{2}) \label{d1a} \end{equation} and afterwards we use the distribution $ \delta(k^{2} - m^{2}) $ to integrate over $ k_{0}. 
$ The result is \begin{equation} \tilde{d}^{(1)}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq P_{0}} {d{\bf k} \over 2 \omega_{\bf k}}~ \delta(P^{2} - 2 P_{0}\omega_{\bf k} - 2 {\bf P}\cdot {\bf k})~ (p^{2} - 2 p_{0}\omega_{\bf k} - 2 {\bf p}\cdot {\bf k} - i~0)^{-1} \label{d1b} \end{equation} where we have defined $ \omega_{\bf k} \equiv \sqrt{{\bf k}^{2} + m^{2}}. $ This expression is Lorentz invariant. We can use this fact to prove that the integral is zero in the cases $ P^{2} \leq 0 $ and $ P^{2} > 0, P_{0} < 0. $ We are left with the case $ P^{2} = M^{2}~ (M > 0), P_{0} \geq 0 $ so we can evaluate it in a frame where $ P = (M,{\bf 0}). $ In this frame we get \begin{equation} \tilde{d}^{(1)}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq M} {d{\bf k} \over 2 M^{2}}~ \delta\Bigl(\omega_{\bf k} - {M\over 2}\Bigl)~ (p^{2} - M p_{0} - 2 {\bf p}\cdot {\bf k} - i~0)^{-1} \label{d1c} \end{equation} It is obvious that we must consider two cases: $ {\bf p} \not= {\bf 0} $ and $ {\bf p} = {\bf 0}. $ (ii) We first consider the case $ {\bf p} \not= {\bf 0}. $ We perform the integration in spherical coordinates $ (r, \theta,\phi) $ with the third axis $ {\bf e}_{3} \parallel {\bf p}. $ The integrals over $ \phi $ and $r$ are elementary. In particular we find out that the integral is non-zero only if $ M \geq 2m $ and we are left with \begin{equation} \tilde{d}^{(1)}(p,q) = \theta(P_{0})~\theta(P^{2} - 4m^{2})~{r_{0} \over 4(2\pi)^{5}M}~ \int d\theta sin\theta~(p^{2} - M p_{0} - 2 |{\bf p}|r_{0} cos\theta - i~0)^{-1} \label{d1d} \end{equation} where $ r_{0} \equiv \sqrt{{M^{2}\over 4} - m^{2}} $. With the new variable $ z = cos\theta $ we get \begin{equation} \tilde{d}^{(1)}(p,q) = \theta(P_{0})~\theta(P^{2} - 4m^{2})~{r_{0} \over 4(2\pi)^{5}M}~I_{0}(A,B) \label{d1e} \end{equation} where \begin{equation} I_{0}(A,B) \equiv \int_{-1}^{1} {dz \over A - B z} \end{equation} and \begin{equation} A = p^{2} - M p_{0} - i~0,\quad B = 2 |{\bf p}|r_{0}. 
\end{equation} The integral is elementary \begin{equation} I_{0}(A,B) = {1 \over B}~ln\Bigl({A + B\over A - B}\Bigl). \end{equation} Now we want to rewrite the expression $ \tilde{d}^{(1)}(p,q) $ in covariant coordinates. We will use the invariant $N$ defined in the statement of the theorem and also $ I = P\cdot p. $ In the particular frame we have used we have $ I = M~p_{0}, \quad N = M^{2} {\bf p}^{2} $ so it follows that we also have in this frame $ A = - p\cdot q, r_{0} = \sqrt{{P^{2}\over 4} - m^{2}}, {r_{0} \over B} = \sqrt{P^{2}\over N}. $ So, the formula \begin{equation} \tilde{d}^{(1)}(p,q) = \theta(P_{0})~\theta(P^{2} - 4m^{2})~{1 \over 8(2\pi)^{5}}~{1 \over \sqrt{N}} ln_{3} \label{d1f} \end{equation} is valid in the particular frame and, because of Lorentz invariance, it is valid in general. Next we use the relation \begin{equation} \tilde{d}^{(2)}(p,q) = -\tilde{d}^{(1)}(- q,- p) \label{d2} \end{equation} and obtain the other piece proportional to $ ln_{3}. $ In a similar way we obtain \begin{equation} \tilde{d}^{(3)}(p,q) = - \tilde{d}^{(1)}(q,- P)^{*} \label{d3} \end{equation} \begin{equation} \tilde{d}^{(4)}(p,q) = - \tilde{d}^{(3)}(- p,- q) \label{d4} \end{equation} and these relations lead to the $ ln_{1} $ contribution. Finally \begin{equation} \tilde{d}^{(5)}(p,q) = \tilde{d}^{(3)}(q,p) \label{d5} \end{equation} \begin{equation} \tilde{d}^{(6)}(p,q) = \tilde{d}^{(4)}(q,p) \label{d6} \end{equation} and these relations lead to the $ ln_{2} $ contribution. (iii) We consider now the case $ {\bf p} = {\bf 0}. $ We return to (\ref{d1c}) which is in this case \begin{equation} \tilde{d}^{(1)}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq M} {d{\bf k} \over 2 M^{2}}~ \delta\Bigl(\omega_{\bf k} - {M\over 2}\Bigl)~(p^{2} - M p_{0} - i~0)^{-1} \label{d1g} \end{equation} We also perform the integration in spherical coordinates, but now we can choose the axis $ {\bf e}_{3} $ at will. 
The result is similar to (\ref{d1e}): \begin{equation} \tilde{d}^{(1)}(p,q) = \theta(P_{0})~\theta(P^{2} - 4m^{2})~{r_{0} \over 2(2\pi)^{5}M A}. \label{d1h} \end{equation} (iv) We prove now that the expression (\ref{d1e}) is continuous in the limit $ {\bf p} \rightarrow {\bf 0} $ and gives us the preceding formula. This is, in fact, equivalent to \begin{equation} \lim_{B \rightarrow 0} I_{0}(A,B) = {2 \over A} \end{equation} and this is elementary. Lastly, we give the covariant form of (\ref{d1h}). As in the previous case we have: \begin{equation} \tilde{d}^{(1)}(p,q) = {2 \over (2\pi)^{2}}~{1 \over p\cdot q}~\tilde{d}_{m,m}^{(+)}(P) \label{d1h)} \end{equation} where the expression $ \tilde{d}_{m,m} $ was defined in the previous section. We obtain the formula from the statement. $\blacksquare$ \newpage We proceed in the same way for the distributions \begin{equation} d_{j}^{\mu} \equiv {\cal D}_{j}^{\mu} d \end{equation} and we have \begin{equation} \omega(d_{j}^{\mu}) = - 1 \label{deg1} \end{equation} and the result is \begin{thm} For $ N \not= 0 $ the following formula is true: \begin{equation} \tilde{d}_{3}^{\mu}(p,q) = i~({\cal A}^{\mu}_{1}~\tilde{d} + {\cal A}^{\mu}_{2}~\tilde{f}_{3} + {\cal A}^{\mu}_{3}~\tilde{f}_{1} + {\cal A}^{\mu}_{4}~\tilde{f}_{2} ) \label{d3mu} \end{equation} where \begin{equation} {\cal A}^{\mu}_{j}(p,q) = p^{\mu}~a_{j} + q^{\mu}~b_{j}, \quad j = 1,\dots,4 \end{equation} and \begin{eqnarray} a_{1} = {q^{2} (p\cdot P)\over 2 N}, \quad b_{1} = - {p^{2} (q\cdot P)\over 2 N} \nonumber\\ a_{2} = - {q\cdot P\over N}, \quad b_{2} = {p\cdot P\over N} \nonumber\\ a_{3} = {p\cdot q\over N}, \quad b_{3} = - {p^{2} \over N} \nonumber\\ a_{4} = {q^{2} \over N}, \quad b_{4} = - {p\cdot q\over N}. \end{eqnarray} In the limit $ N \rightarrow 0 $ the previous expression is continuous and we have \begin{equation} \tilde{d}_{3}^{\mu}(p,q) = - i~(p - q)^{\mu}~F_{3} + i~P^{\mu}~(F_{1} + F_{2}). 
\end{equation} \end{thm} {\bf Proof:} As in the previous Theorem, we obtain the first of the six contributions: \begin{equation} \tilde{d}^{\mu(1)}_{3}(p,q) = - {i \over (2\pi)^{6}}~\int dk {k^{\mu} \over k^{2} - m^{2} - i~0}~ \theta(p_{0} - k_{0})~\delta((p - k)^{2} - m^{2})~\theta(k_{0} + q_{0})~\delta((k + q)^{2} - m^{2}). \label{d1mu} \end{equation} If we make the change of variables $ k \rightarrow k + p $ we obtain \begin{equation} \tilde{d}^{\mu(1)}(p,q) = - i~[ p^{\mu}~\tilde{d}^{(1)}(p,q) + e^{\mu}(p,q) ] \label{d1mua} \end{equation} where \begin{equation} e^{\mu}(p,q) = {1 \over (2\pi)^{6}}~\int dk {k^{\mu} \over (k + p)^{2} - m^{2} - i~0}~ \theta(- k_{0})~\delta(k^{2} - m^{2})~\theta(k_{0} + P_{0})~\delta((k + P)^{2} - m^{2}). \label{e} \end{equation} We proceed as in the previous theorem and obtain as in (\ref{d1b}) \begin{equation} e^{\mu}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq P_{0}} {d{\bf k} \over 2 \omega_{\bf k}}~\tau^{\mu}({\bf k})~ \delta(P^{2} - 2 P_{0}\omega_{\bf k} - 2 {\bf P}\cdot {\bf k})~ (p^{2} - 2 p_{0}\omega_{\bf k} - 2 {\bf p}\cdot {\bf k} - i~0)^{-1} \label{e1} \end{equation} where $ \tau^{\mu}({\bf k}) = (- \omega_{\bf k}, {\bf k}). $ Next, we use Lorentz covariance and do the computations in the particular frame we have used above; the result is (for $ P^{2} > 0,~P^{0} \geq 0 $): \begin{equation} e^{\mu}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq M} {d{\bf k} \over 2 M^{2}}~~\tau^{\mu}({\bf k})~ \delta\Bigl(\omega_{\bf k} - {M\over 2}\Bigl)~ (p^{2} - M p_{0} - 2 {\bf p}\cdot {\bf k} - i~0)^{-1} \label{e2} \end{equation} We consider the case $ {\bf p} \not= {\bf 0} $ and treat separately the cases $ \mu = 0 $ and $ \mu \not= 0. $ The first case is easy: \begin{equation} e^{0}(p,q) = - {1 \over 2}~M~\tilde{d}^{(1)}(p,q). \end{equation} We also have \begin{equation} e^{1} = e^{2} = 0. 
\end{equation} The remaining case can be treated as in the preceding theorem; \begin{equation} e^{3}(p,q) = \theta(P_{0})~\theta(P^{2} - 4m^{2})~{r_{0}^{2} \over 4(2\pi)^{5}M}~I_{1}(A,B) \label{e3} \end{equation} where \begin{equation} I_{1}(A,B) \equiv \int_{-1}^{1} {dz z\over A - B z} \end{equation} and $A$ and $B$ have the same values as before: $ A = p^{2} - M p_{0} - i~0,\quad B = 2 |{\bf p}|r_{0}. $ The integral is elementary: \begin{equation} I_{1}(A,B) = {1 \over B}~\Bigl[ - 2 + {A \over B}~ln\Bigl({A + B\over A - B}\Bigl)\Bigl]. \end{equation} In the case $ |{\bf p}| = 0 $ we easily obtain \begin{equation} e^{3}(p,q) = 0. \end{equation} Again, as in the previous theorem, we obtain that the limit $ |{\bf p}| \rightarrow 0 $ of (\ref{e3}) exists and is $0$. It remains to go to an arbitrary frame. After a tedious computation we obtain for $ N \not= 0 $ \begin{equation} \tilde{d}_{3}^{\mu(1)}(p,q) = i~({\cal A}^{\mu}_{1}~\tilde{d}^{(1)} + {\cal A}^{\mu}_{2}~\tilde{f}^{(+)}_{3}) \end{equation} where the expressions $ {\cal A}_{j},~j = 1,2 $ are those from the statement. For $ N = 0 $ we get \begin{equation} \tilde{d}_{3}^{\mu(1)}(p,q) = - {i \over p\cdot q}~(p - q)^{\mu}~\tilde{f}^{(+)}_{3} \end{equation} If we now use relations similar to (\ref{d2}) - (\ref{d6}) we get the other five contributions and the relation from the statement follows. $\blacksquare$ The expressions $ \tilde{d}^{\mu}_{1}, \tilde{d}^{\mu}_{2} $ can be obtained from $ \tilde{d}^{\mu}_{3} $ by clever changes of variables, as in \cite{loop}. We note that for $ N \not= 0 $ the expressions $ \tilde{d}^{\mu}_{j} $ obtained above are identical to those from \cite{loop} where the derivation was made by another method. \newpage Finally we define \begin{equation} d_{jk}^{\mu\nu} \equiv {\cal D}_{j}^{\mu} {\cal D}_{k}^{\nu}d \label{dij} \end{equation} and we have the following orders of singularity: \begin{equation} \omega(d_{jk}^{\mu\nu}) = 0. 
\label{deg2} \end{equation} We will first consider the case $ d_{33}. $ The result is \begin{thm} For $ N \not= 0 $ the following formula is true: \begin{equation} \tilde{d}_{33}^{\mu\nu}(p,q) = {\cal A}^{\mu\nu}_{1}~\tilde{d} + {\cal A}^{\mu\nu}_{2}~\tilde{f}_{3} + {\cal A}^{\mu\nu}_{3}~\tilde{f}_{1} + {\cal A}^{\mu\nu}_{4}~\tilde{f}_{2} \label{d3munu} \end{equation} where \begin{equation} {\cal A}^{\mu\nu}_{j}(p,q) = - [ p^{\mu} p^{\nu}~\alpha_{j} + q^{\mu} q^{\nu}~\beta_{j} + (p^{\mu} q^{\nu} + p^{\nu} q^{\mu})~\gamma_{j} + \eta^{\mu\nu}~\delta_{j}], \quad j = 1,\dots,4 \end{equation} and \begin{eqnarray} \alpha_{1} = {3 P^{2} p^{2} (q^{2})^{2} \over 8 N^{2}} + {(q^{2})^{2} \over 4 N} + {m^{2} q^{2} \over 2 N}, \quad \beta_{1} = {3 P^{2} q^{2} (p^{2})^{2} \over 8 N^{2}} + {(p^{2})^{2} \over 4 N} + {m^{2} p^{2} \over 2 N} \nonumber\\ \gamma_{1} = - {3 P^{2} p^{2} q^{2} (p\cdot q) \over 8 N^{2}} + {p^{2} q^{2} \over 4 N} - {m^{2} (p\cdot q) \over 2 N}, \quad \delta_{1} = {P^{2} p^{2} q^{2} \over 8 N} + {m^{2} \over 2} \nonumber\\ \alpha_{2} = - {3 (P\cdot q)^{2} (p\cdot q) \over 4 N^{2}} + {4 P\cdot q + p\cdot q \over 4 N}, \quad \beta_{2} = - {3 (P\cdot p)^{2} (p\cdot q) \over 4 N^{2}} + {4 P\cdot p + p\cdot q \over 4 N} \nonumber\\ \gamma_{2} = {3 (P\cdot p) (P\cdot q) (p\cdot q) \over 4 N^{2}} - {2 P^{2} - p\cdot q \over 4 N}, \quad \delta_{2} = - {P^{2} (p\cdot q) \over 4 N} \nonumber\\ \alpha_{3} = {3 (p\cdot q)^{2} (P\cdot q) \over 4 N^{2}} - {4 p\cdot q + P\cdot q \over 4 N}, \quad \beta_{3} = {3 (P\cdot q) (p^{2})^{2} \over 4 N^{2}} \nonumber\\ \gamma_{3} = - {3 (P\cdot q) (p\cdot q) p^{2} \over 4 N^{2}} + {p^{2} \over 2 N}, \quad \delta_{3} = {p^{2} (P\cdot q) \over 4 N} \end{eqnarray} and the expressions $ \alpha_{4},\dots,\delta_{4} $ are obtained from $ \alpha_{3},\dots,\delta_{3} $ making $ p \leftrightarrow q. 
$ In the limit $ N \rightarrow 0 $ the previous expression is continuous and we have \begin{eqnarray} \tilde{d}_{33}^{\mu\nu}(p,q) = - [ \alpha_{33}(p,q) P^{\mu} P^{\nu} + \eta^{\mu\nu}~\beta_{33}(p,q) ]F_{3} \nonumber\\ - [ \alpha_{33}(q,- P) p^{\mu} p^{\nu} + \eta^{\mu\nu}~\beta_{33}(q, - P) ]F_{1} - [ \alpha_{33}(- p,P) q^{\mu} q^{\nu} + \eta^{\mu\nu}~\beta_{33}(- p,P) ]F_{2} \end{eqnarray} where \begin{eqnarray} \alpha_{33}(p,q) = {1 \over 6}~\Bigl[ 4 - {m^{2} \over 4 P^{2}} - 12 { (P\cdot p) (P\cdot q) \over (P^{2})^{2}} \Bigl] \nonumber\\ \beta_{33}(p,q) = {P^{2} \over 6}~\Bigl( 1 - {m^{2} \over 4 P^{2}} \Bigl) \end{eqnarray} \end{thm} {\bf Proof:} As in the previous Theorems, we obtain the first of the six contributions: \begin{equation} \tilde{d}^{\mu\nu(1)}(p,q) = - {1 \over (2\pi)^{6}}~\int dk {k^{\mu} k^{\nu} \over k^{2} - m^{2} - i~0}~ \theta(p_{0} - k_{0})~\delta((p - k)^{2} - m^{2})~\theta(k_{0} + q_{0})~\delta((k + q)^{2} - m^{2}). \label{d1mnuu} \end{equation} If we make the change of variables $ k \rightarrow k + p $ we obtain \begin{equation} \tilde{d}^{\mu\nu(1)}(p,q) = - p^{\mu} p^{\nu}~\tilde{d}^{(1)}(p,q) - [ p^{\mu} e^{\nu}(p,q) + p^{\nu} e^{\mu}(p,q)] - e^{\mu\nu}(p,q) \label{d1munua} \end{equation} where the expressions $ e^{\mu}(p,q) $ have been defined before - rel (\ref{e}) and \begin{equation} e^{\mu\nu}(p,q) = {1 \over (2\pi)^{6}}~\int dk {k^{\mu}k^{\nu} \over (k + p)^{2} - m^{2} - i~0}~ \theta(- k_{0})~\delta(k^{2} - m^{2})~\theta(k_{0} + P_{0})~\delta((k + P)^{2} - m^{2}). 
\label{ee} \end{equation} We proceed as in the previous theorem and obtain as in (\ref{d1b}) \begin{equation} e^{\mu\nu}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq P_{0}} {d{\bf k} \over 2 \omega_{\bf k}}~ \tau^{\mu}({\bf k})~\tau^{\nu}({\bf k})~ \delta(P^{2} - 2 P_{0}\omega_{\bf k} - 2 {\bf P}\cdot {\bf k})~ (p^{2} - 2 p_{0}\omega_{\bf k} - 2 {\bf p}\cdot {\bf k} - i~0)^{-1} \label{ee1} \end{equation} where $ \tau^{\mu}({\bf k}) = (- \omega_{\bf k}, {\bf k}). $ Next, we use Lorentz covariance and do the computations in the particular frame we have used above; the result is: \begin{equation} e^{\mu\nu}(p,q) = {1 \over (2\pi)^{6}}~\int_{\omega_{\bf k} \leq M} {d{\bf k} \over 2 M^{2}}~\tau^{\mu}({\bf k})~\tau^{\nu}({\bf k})~ \delta\Bigl(\omega_{\bf k} - {M\over 2}\Bigl)~ (p^{2} - M p_{0} - 2 {\bf p}\cdot {\bf k} - i~0)^{-1} \label{ee2} \end{equation} We consider the case $ {\bf p} \not= {\bf 0}. $ We easily obtain \begin{equation} e^{00}(p,q) = {1 \over 4}~M^{2}~\tilde{d}^{(1)}(p,q), \quad e^{\mu 0}(p,q) = - {1 \over 2}~M~e^{\mu}(p,q) \end{equation} We also have \begin{equation} e^{jk} = 0,~j, k = 1,2,3,~j \not= k. \end{equation} Next \begin{equation} e^{33}(p,q) = \theta(P_{0})~\theta(P^{2} - 4m^{2})~{r_{0}^{3} \over 4(2\pi)^{5}M}~I_{2}(A,B) \end{equation} where \begin{equation} I_{2}(A,B) \equiv \int_{-1}^{1} {dz z^{2}\over A - B z} \end{equation} and $A$ and $B$ have the same values as before: $ A = p^{2} - M p_{0} - i~0,\quad B = 2 |{\bf p}|r_{0}. $ The integral is elementary: \begin{equation} I_{2}(A,B) = {A \over B}~I_{1}(A,B) \end{equation} In the case $ |{\bf p}| = 0 $ the expression $ e^{33}(p,q) $ is the limit $ |{\bf p}| \rightarrow 0 $ of the previous expression. The expressions $ e^{11} $ and $ e^{22} $ can be obtained similarly. It remains to go to an arbitrary frame. 
After a tedious computation we obtain for $ N \not= 0 $ \begin{equation} \tilde{d}_{33}^{\mu\nu(1)}(p,q) = {\cal A}^{\mu\nu}_{1}~\tilde{d}^{(1)} + {\cal A}^{\mu\nu}_{2}~\tilde{f}^{(+)}_{3} \end{equation} where the expressions $ {\cal A}_{j},~j = 1,2 $ are those from the statement. For $ N = 0 $ we get \begin{equation} \tilde{d}_{33}^{\mu\nu(1)}(p,q) = - (P^{\mu} P^{\nu} \alpha + \eta^{\mu\nu} \beta) \end{equation} where \begin{eqnarray} \alpha = - {1 \over 6 p\cdot q}~\Bigl[ 4 - {m^{2} \over 4 P^{2}} - 12 { (P\cdot p) (P\cdot q) \over (P^{2})^{2}} \Bigl]\tilde{f}^{(+)}_{3} \nonumber\\ \beta = - {P^{2} \over 6 p\cdot q}~\Bigl( 1 - {m^{2} \over 4 P^{2}} \Bigl)\tilde{f}^{(+)}_{3} \end{eqnarray} If we now use relations similar to (\ref{d2}) - (\ref{d6}) we get the other five contributions and the relation from the statement follows. $\blacksquare$ The expressions $ \tilde{d}^{\mu\nu}_{11}, \tilde{d}^{\mu\nu}_{22} $ can be obtained from $ \tilde{d}^{\mu\nu}_{33} $ by clever changes of variables, as in \cite{loop}. We note that for $ N \not= 0 $ the expressions $ \tilde{d}^{\mu\nu}_{jj} $ obtained above are identical to those from \cite{loop} where the derivation was made by another method. We still have to consider the case $ d_{12}^{\mu\nu}. 
$ The result is \begin{thm} For $ N \not= 0 $ the following formula is true: \begin{equation} \tilde{d}_{12}^{\mu\nu}(p,q) = {\cal B}^{\mu\nu}_{1}~\tilde{d} + {\cal B}^{\mu\nu}_{2}~\tilde{f}_{3} + {\cal B}^{\mu\nu}_{3}~\tilde{f}_{1} + {\cal B}^{\mu\nu}_{4}~\tilde{f}_{2} \end{equation} where \begin{equation} {\cal B}^{\mu\nu}_{j}(p,q) = p^{\mu} p^{\nu}~A_{j} + q^{\mu} q^{\nu}~B_{j} + p^{\mu} q^{\nu} C_{j}^{(1)} + p^{\nu} q^{\mu}~C_{j}^{(2)} + \eta^{\mu\nu}~D_{j}, \quad j = 1,\dots,4 \end{equation} and \begin{eqnarray} A_{1} = {3 P^{2} p^{2} (q^{2})^{2} \over 8 N^{2}} + {(q^{2})^{2} \over 4 N} + {q^{2} (P\cdot p) \over 4 N} + {m^{2} q^{2} \over 2 N}, \nonumber\\ B_{1} = {3 P^{2} q^{2} (p^{2})^{2} \over 8 N^{2}} + {(p^{2})^{2} \over 4 N} + {p^{2} (P\cdot q) \over 4 N} + {m^{2} p^{2} \over 2 N}, \nonumber\\ C_{1}^{(1)} = - {3 P^{2} p^{2} q^{2} (p\cdot q) \over 8 N^{2}} + {p^{2} q^{2} \over 4 N} - {m^{2} p\cdot q \over 2 N}, \nonumber\\ C_{1}^{(2)} = - {3 P^{2} p^{2} q^{2} (p\cdot q) \over 8 N^{2}} + {p^{2} q^{2} \over 4 N} - {P^{2} (p\cdot q) \over 2 N} - {m^{2} (p\cdot q) \over 2 N},\quad D_{1} = {P^{2} p^{2} q^{2} \over 8 N} + {m^{2} \over 2} \nonumber\\ A_{2} = - {3 (P\cdot q)^{2} (p\cdot q) \over 4 N^{2}} + {p\cdot q \over N}, \quad B_{2} = - {3 (P\cdot p)^{2} (p\cdot q) \over 4 N^{2}} + {p\cdot q \over N} \nonumber\\ C_{2}^{(1)} = {3 (P\cdot p) (P\cdot q) (p\cdot q) \over 4 N^{2}} - {P^{2} \over 2 N} + {p\cdot q \over 4 N},\quad \nonumber\\ C_{2}^{(2)} = {3 (P\cdot p) (P\cdot q) (p\cdot q) \over 4 N^{2}} + {P^{2} \over 2 N} + {p\cdot q \over 4 N}, \quad D_{2} = - {P^{2} (p\cdot q) \over 4 N} \nonumber\\ A_{3} = {3 (p\cdot q)^{2} (P\cdot q) \over 4 N^{2}} - {P\cdot q \over 4 N}, \quad B_{3} = {3 (p^{2})^{2} (P\cdot q) \over 4 N^{2}} + {p^{2} \over N} \nonumber\\ C_{3}^{(1)} = - {3 (p\cdot q) (P\cdot q) p^{2} \over 4 N^{2}} + {p^{2} \over 2 N},\quad \nonumber\\ C_{3}^{(2)} = - {3 (p\cdot q) (P\cdot q) p^{2} \over 4 N^{2}} - {p^{2} \over 2 N} - {p\cdot q \over 
N},\quad D_{3} = {p^{2} (P\cdot q) \over 4 N} \nonumber\\ A_{4} = {3 (q^{2})^{2} (P\cdot p) \over 4 N^{2}} + {q^{2} \over N}, \quad B_{4} = {3 (p\cdot q)^{2} (P\cdot p) \over 4 N^{2}} - {p\cdot P \over 4 N} \nonumber\\ C_{4}^{(1)} = - {3 (p\cdot q) (P\cdot p) q^{2} \over 4 N^{2}} + {q^{2} \over 2 N},\quad \nonumber\\ C_{4}^{(2)} = - {3 (p\cdot q) (P\cdot p) q^{2} \over 4 N^{2}} - {q^{2} \over 2 N} - {p\cdot q \over N},\quad D_{4} = {q^{2} (P\cdot p) \over 4 N} \end{eqnarray} In the limit $ N \rightarrow 0 $ the previous expression is continuous and we have \begin{eqnarray} \tilde{d}_{12}^{\mu\nu}(p,q) = [ \alpha_{12}(p,q) P^{\mu} P^{\nu} + \eta^{\mu\nu}~\beta_{12}(p,q) ]F_{3} \nonumber\\ + [ \alpha_{12}(q,- P) p^{\mu} p^{\nu} + \eta^{\mu\nu}~\beta_{12}(q, - P) ]F_{1} + [ \alpha_{12}(- p,P) q^{\mu} q^{\nu} + \eta^{\mu\nu}~\beta_{12}(- p,P) ]F_{2} \end{eqnarray} where \begin{eqnarray} \alpha_{12}(p,q) = - {1 \over 6}~\Bigl( 2 + {m^{2} \over 4 P^{2}} \Bigr), \quad \beta_{12}(p,q) = {P^{2} \over 6}~\Bigl( 1 - {m^{2} \over 4 P^{2}} \Bigr). \end{eqnarray} \end{thm} {\bf Proof:} As in the previous Theorems, we obtain the first of the six contributions: \begin{equation} \tilde{d}^{\mu\nu(1)}_{12}(p,q) = - {1 \over (2\pi)^{6}}~\int dk {(k + q)^{\mu} (k - p)^{\nu} \over k^{2} - m^{2} - i~0}~ \theta(p_{0} - k_{0})~\delta((p - k)^{2} - m^{2})~\theta(k_{0} + q_{0})~\delta((k + q)^{2} - m^{2}). \label{d12mnuu} \end{equation} If we make the change of variables $ k \rightarrow k + p $ we obtain \begin{equation} \tilde{d}^{\mu\nu(1)}_{12}(p,q) = P^{\mu} e^{\nu}(p,q) + e^{\mu\nu}(p,q) \label{d12munua} \end{equation} where the expressions $ e^{\mu}(p,q) $ and $ e^{\mu\nu} $ have been defined before -- relations (\ref{e}) and (\ref{ee}). Proceeding as before we get the formulas from the statement. $\blacksquare$ The expressions $ \tilde{d}^{\mu}_{23}, \tilde{d}^{\mu}_{31} $ can be obtained from $ \tilde{d}^{\mu}_{12} $ by clever changes of variables, as in \cite{loop}.
We note that for $ N \not= 0 $ the expressions $ \tilde{d}^{\mu}_{jk},~j \not= k $ obtained above are identical to those from \cite{loop} where the derivation was made by another method. One can obtain in the same way the expressions \begin{equation} d_{jkl}^{\mu\nu\rho} \equiv {\cal D}_{j}^{\mu} {\cal D}_{k}^{\nu} {\cal D}_{l}^{\rho}d. \label{dijk} \end{equation} In closing, we emphasize the main idea: the chronological products can be obtained from the preceding theorems by a simple operation, namely the causal splitting of a master distribution $d$ given by (\ref{d123}). An explicit procedure to do this causal splitting can be found in \cite{Sc1} and \cite{Sc2}. In fact, if we want to split (\ref{d3mu}) causally, it is better to multiply it by $N$: going over to coordinate space, both sides then involve polynomials in the partial derivatives acting on distributions with causal support. The same idea is valid for (\ref{d3munu}), but there we have to multiply by $ N^{2}. $ \newpage
\section{Supplemental Materials for ``Topological second-order spin-$\frac{3}{2}$ liquids with hinge Fermi arcs''} \section{Symmetric and traceless quadratic forms} The five independent symmetric
and traceless $3\times 3$ quadratic forms are explicitly given by \begin{equation} Q^1=\frac{1}{\sqrt{3}}\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix},~~Q^2=\frac{1}{\sqrt{3}}\begin{pmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{pmatrix},~~Q^3=\frac{1}{\sqrt{3}}\begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}, \end{equation} and \begin{equation} Q^4=\frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end{pmatrix},~~Q^5=\frac{1}{3}\begin{pmatrix} -1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 2 \end{pmatrix}. \end{equation} \section{The ground-state flux configuration and static vortex mass} We numerically verify the ground-state flux configuration on the graphite lattice of size $(L \mathbf{n}_1, L \mathbf{n}_2+\mathbf{n}_1, L_z \mathbf{n}_3 )$ with periodic boundary condition, where the basis vectors read $\mathbf{n}_1 = \{\frac{1}{2}, \frac{\sqrt{3}}{2}, 0\}$, $\mathbf{n}_2 = \{-\frac{1}{2}, \frac{\sqrt{3}}{2}, 0\}$, and $\mathbf{n}_3 = \{0, 0, 1\}$. By setting $J_a=1$ for $a=1$, 2, 3 and $J_4=J_5=0$, we recover Kitaev's honeycomb model with ground state energy extrapolating to $E_0 \approx -1.5746$ per unit cell (u.c.). The single vortex energy $\Delta E_\mathrm{vortex} = E_\mathrm{vortex}(L)-E_0(L)$ extrapolates to $\Delta E_\mathrm{vortex} \approx 0.1536$. This is in agreement with Kitaev's results. In the following, we set $J_4=J_5=1$, which renders the two configurations in Fig.~1 equivalent. We note that other values of $J_4$ and $J_5$ are also tested, and qualitatively the same results are found. We further fix $L=32$, which gives relatively well-converged results for both the ground-state and single-vortex energies in the 2D case. Figure~\ref{fig:0vsPi}(a) and (b) show the ground-state energy per u.c. for flux configurations with $0$ flux in each hexagonal plaquette and $\pi$ flux or $0$ flux in each square plaquette, respectively. 
The latter can be realized by setting the negative interlayer coupling terms $-J_{4/5}$ to $J_{4/5}$. With increasing $L_z$, the former configuration converges to $E_0 \approx -4.2699$ and the latter to $-3.6546$. Now we can further test the validity of the ground state by creating vortices in the $\pi$-flux phase. In the 2D case, string operators can be defined to create isolated vortices. Here, however, for a given unit cell, reversing either one of its in-plane or out-of-plane bonds changes the flux of its two square plaquettes at the same time. Therefore, it is not possible to create isolated vortices (one flux change in a u.c.) as in the 2D case. For simplicity, we create three edge-sharing vortices on the square plaquettes by reversing one single vertical bond. Figure~\ref{fig:3vortices} shows the three-vortex energy as a function of $L_z$ (with fixed $L=32$), which converges towards $E_\mathrm{3vort} \approx 0.3972$. \section{Surface States} In the Dirac semimetal phase, helical Fermi arcs appear on the surfaces parallel to the zigzag direction, as shown in Fig.~\ref{Surface-States}(a). The Fermi arcs connect the projections of the two Dirac points crossing the boundary of the Brillouin zone. In the nodal-line phase, drumhead states appear on the surface normal to the $z$-direction [Fig.~\ref{Surface-States}(b)], and are bounded by the projected images of the nodal lines in the surface Brillouin zone. The drumhead states come from the $1$D topological charges of the nodal lines. In the nodal-line semimetal phase, the zigzag-$z$ surface spectrum has a finite energy gap as shown in Fig.~\ref{Surface-States}(c).
\begin{figure} \includegraphics[scale=0.6]{GS_0vsPi.pdf} \caption{\label{fig:0vsPi} Ground-state energy per unit cell for flux configurations with $0$ flux in each hexagonal plaquette and (a) $0$ flux or (b) $\pi$ flux in each square plaquette.} \end{figure} \begin{figure} \includegraphics[scale=0.6]{3vortices.pdf} \caption{\label{fig:3vortices} The energy of three edge-sharing vortices on the square plaquettes.} \end{figure} \begin{figure} \includegraphics[scale=1]{Surface_States.pdf} \caption{Surface states. (a) The Fermi arcs on the zigzag-$z$ surface in the real Dirac semimetal phase. (b) The drumhead states on the $xy$-surface in the nodal-line semimetal phase when $J_{-}\ne 0$. (c) The gapped spectrum for the zigzag-$z$ surface in the nodal-line semimetal phase. \label{Surface-States}} \end{figure} \section{Space symmetries of the model in the main text} In this section we discuss the representation of space symmetries in our model in detail. The translational distance for the unit cells along the $z$-direction is $2c$. Hence, the Hamiltonian in the main text is not periodic in the first Brillouin zone for $k_z\in[-\pi/2c,\pi/2c)$. For the non-periodic Hamiltonian, the twofold screw rotation symmetry $\mathcal{S}_{2z} $ and mirror reflection symmetry $M_z$ are represented by \begin{equation} \begin{split} \hat{\mathcal{S}}_{2z} &=\sigma_2\otimes\tau_0 \hat{I}_{xy}=\mathrm{i}\Gamma^4\Gamma^2\hat{I}_{xy},\\ \hat{M}_z &=\sigma_1\otimes\tau_2\hat{I}_z=\mathrm{i}\Gamma^4\Gamma^5\hat{I}_z. \end{split} \end{equation} As we see from Fig.1(a) in the main text, the twofold screw rotation symmetry $\mathcal{S}_{2z}$ is nonsymmorphic. In other words, it is the twofold rotation followed by a half translation along the $z$-direction. The combination of $\mathcal{S}_{2z}$ and $M_z$ is the off-centered spatial inversion symmetry $\mathcal{P}$. 
$\mathcal{P}$ and time reversal symmetry $T$ are represented by \begin{equation} \begin{split} \hat{\mathcal{P}} &=\hat{\mathcal{S}}_{2z}\hat{M}_z=-\mathrm{i}\sigma_3\otimes\tau_2\hat{I}=\Gamma^2\Gamma^5\hat{I},\\ \hat{T} &=\sigma_0\otimes\tau_3\hat{\mathcal{K}}\hat{I}=\Gamma^5\hat{\mathcal{K}}\hat{I}. \end{split} \end{equation} The combination of time-reversal and the off-centered inversion is the off-centered spacetime inversion symmetry $\mathcal{P}T$, which is represented as \begin{equation} \begin{split} \hat{\mathcal{P}}\hat{T}=\sigma_3\otimes\tau_1=\Gamma^2\hat{I}. \end{split} \end{equation} To see the effects of the half translation in $\mathcal{S}_{2z}$ on the representations of the relevant symmetry operators, we need to restore the periodicity of the Hamiltonian. This is implemented by the unitary transformation \begin{equation} V(k_z)=\begin{pmatrix} e^{\mathrm{i} k_z/4} & 0 \\ 0 & e^{-\mathrm{i} k_z/4} \end{pmatrix}\otimes\tau_0. \end{equation} Then, the interlayer terms are transformed as \begin{equation} \begin{split} & V(k_z)\left(2J_+\cos \frac{k_z}{2} \sigma_2\otimes\tau_2+2J_-\sin \frac{k_z}{2}\sigma_2\otimes\tau_2\right) V^\dagger(k_z)\\ =& J_+(1+\cos k_z)\sigma_2\otimes\tau_1+J_+\sin k_z\sigma_1\otimes\tau_1 + J_{-}(1-\cos k_z)\sigma_1\otimes\tau_2+J_{-}\sin k_z \sigma_2\otimes\tau_2\\ =& J_{+}[(1+\cos k_z)\Gamma^3+\sin k_z \Gamma^4]+J_{-}[(1-\cos k_z)\mathrm{i}\Gamma^4\Gamma^5+\sin k_z\mathrm{i}\Gamma^3\Gamma^5], \end{split} \end{equation} where \begin{equation} J_{\pm}=\frac{1}{2}(J_5\pm J_4). \end{equation} It is manifest that the periodicity along $k_z$ is satisfied.
The twofold screw rotation operator $\hat{\mathcal{S}}_{2z}$ is accordingly transformed as \begin{equation} \begin{split} \hat{\mathcal{S}}'_{2z}=V(k_z)\hat{\mathcal{S}}V^\dagger(k_z) &=\begin{pmatrix} 0 & -\mathrm{i} e^{\mathrm{i} k_z/2} \\ \mathrm{i} e^{-\mathrm{i} k_z/2} & 0 \end{pmatrix}\otimes\tau_0 \hat{I}_{xy}\\ &=-\mathrm{i} \mathcal{G}\begin{pmatrix} 0 & e^{\mathrm{i} k_z/2}\\ e^{-\mathrm{i} k_z/2} & 0 \end{pmatrix}\otimes\tau_0 \hat{I}_{xy} \end{split} \end{equation} with \begin{equation}\label{Gauge} \mathcal{G}=\sigma_3\otimes\tau_0. \end{equation} Now it is manifest that the momentum dependence of the operators comes from the half translation in the screw rotation. The subsequent factor $\mathcal{G}$ is the $\mathbb{Z}_2$ gauge transformation performed to restore the $\mathbb{Z}_2$ phases in the original hopping pattern in Fig.~1(a) in the main text. $M_z$ and $T$ are symmorphic, and therefore their representations are invariant under the transformation $V(k_z)$. Hence, the operators $\hat{\mathcal{P}}$ and $\hat{\mathcal{P}}\hat{T}$ are transformed as \begin{equation} \hat{\mathcal{P}}'=V(k_z)\hat{\mathcal{P}}V^\dagger(k_z)=\begin{pmatrix} -\mathrm{i} e^{\mathrm{i} k_z/2} & 0\\ 0 & \mathrm{i} e^{-\mathrm{i} k_z/2} \end{pmatrix}\otimes\tau_2\hat{I}, \end{equation} and \begin{equation} \hat{\mathcal{P}}'\hat{T}=V(k_z)\hat{\mathcal{P}}\hat{T}V^\dagger(k_z)=\begin{pmatrix} e^{\mathrm{i} k_z/2} & 0\\ 0 & -e^{-\mathrm{i} k_z/2} \end{pmatrix}\otimes\tau_1 \hat{\mathcal{K}}. \end{equation} The momentum dependence of these operators comes from the fact that they are off-centered.
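As a quick consistency check of the operator algebra above, one can multiply out the $\sigma\otimes\tau$ representations numerically. This is a minimal sketch: the spatial parts $\hat{I}_{xy}$, $\hat{I}_z$, $\hat{I}$ and the complex conjugation $\hat{\mathcal{K}}$ are stripped off, since only the matrix structure is being verified here.

```python
import numpy as np

# Pauli matrices for the sigma and tau factors
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

S2z = np.kron(s2, s0)        # unitary part of the screw rotation S_2z
Mz = np.kron(s1, s2)         # unitary part of the mirror reflection M_z
P = S2z @ Mz                 # off-centered inversion P = S_2z M_z
assert np.allclose(P, -1j * np.kron(s3, s2))

T_unitary = np.kron(s0, s3)  # unitary part of T = (sigma_0 x tau_3) K
PT = P @ T_unitary           # unitary part of PT (K dropped, matrices real/checked directly)
assert np.allclose(PT, np.kron(s3, s1))
print("operator algebra consistent")
```

Both stated products, $\hat{\mathcal{P}}=-\mathrm{i}\sigma_3\otimes\tau_2$ and the unitary part $\sigma_3\otimes\tau_1$ of $\hat{\mathcal{P}}\hat{T}$, follow directly from $\sigma_2\sigma_1=-\mathrm{i}\sigma_3$ and $\tau_2\tau_3=\mathrm{i}\tau_1$.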
\section{Another exactly solvable model and its symmetries} \begin{figure} \includegraphics[scale=1]{Lattice-2.pdf} \caption{\label{fig:Lattice-2} The coloring of the exactly solvable model and the Dirac points related by mirror symmetry on the zigzag-$z$ surface in the crystalline topological superconductor phase.} \end{figure} The second exactly solvable model is illustrated in Fig.~\ref{fig:Lattice-2}. The tight-binding model for the Majorana spinors is given by \begin{equation}\label{Graphite-Model} \mathcal{H}^c(\bm{k})=\sum_{a=1}^{3}f_a(\bm{k})\Gamma^a+g(k_z)\Gamma^4. \end{equation} The last term violates the twofold screw rotation symmetry $\mathcal{S}_{2z}$, but preserves $\hat{M}_z$ and $\hat{T}$. This can also be seen clearly from the hopping pattern in Fig.~\ref{fig:Lattice-2}. Since $PT$ symmetry is broken, the Dirac points lose their protecting symmetry, and the spectrum is fully gapped. Actually, the Majorana spinors are in a crystalline topological superconductor phase in class BDI. Two Dirac points, related by the mirror symmetry $M_x$, reside on any surface parallel to the zigzag direction. We now discuss the symmetries of the model. It has the symmorphic twofold rotation symmetry $C_{2z}$, which is represented as \begin{equation} \hat{C}_{2z}=\mathcal{G}\,\sigma_0\otimes \tau_2\hat{I}_{xy}=\mathrm{i}\Gamma^2\Gamma^5 \hat{I}_{xy}, \end{equation} where the spatial rotation $\sigma_0\otimes\tau_2$ is followed by the gauge transformation $\mathcal{G}$, Eq.~\eqref{Gauge}, to restore the phases in the hopping pattern. The mirror symmetry $M_z$ is represented by \begin{equation} \hat{M}_z=\sigma_1\otimes\tau_2\hat{I}_z=\mathrm{i}\Gamma^4\Gamma^5\hat{I}_z. \end{equation} We observe that the gauge transformation $\mathcal{G}$ anti-commutes with $\hat{M}_z$, \begin{equation}\label{Modified-commu} \{\mathcal{G},\hat{M}_z\}=0.
\end{equation} Because of this, the commutation relation of $M_z$ and $C_{2z}$ is projectively modified by the $\mathbb{Z}_2$ coefficient as \begin{equation} \{\hat{M}_z,\hat{C}_{2z}\}=0. \end{equation} The (centered) spatial inversion symmetry is represented by the combination, \begin{equation} \hat{P}=\hat{C}_{2z}\hat{M}_z\hat{I}=\Gamma^2\Gamma^4\hat{I}. \end{equation} Because of Eq.~\eqref{Modified-commu}, we find \begin{equation} \hat{P}^2=-1. \end{equation} The time-reversal symmetry $T$ is still represented by $\hat{T}=\Gamma^5\hat{\mathcal{K}}\hat{I}$, which commutes with $\hat{P}$, \begin{equation} [\hat{P},\hat{T}]=0. \end{equation} The spacetime inversion symmetry $PT$ is represented by \begin{equation} \hat{P}\hat{T}=\mathrm{i}\sigma_2\otimes\tau_3\hat{\mathcal{K}}=\Gamma^3\Gamma^2\hat{\mathcal{K}}. \end{equation} It satisfies \begin{equation} (\hat{P}\hat{T})^2=-1 \end{equation} as claimed in the main text. Hence, the (centered) spacetime inversion is consistent with that of the spin-$\frac{3}{2}$, whose square is also equal to $-1$. \end{document}
\section{Introduction} Evaluating the performance of a power delivery network (PDN) has become a critical issue in very large-scale integration (VLSI) designs. The power supply from the package down to on-chip integrated circuits is distributed through metal layers and vias, which can be modeled as a linear network consisting of resistors, capacitors and inductors \cite{nassif2008power}. The on-chip circuit modules are simplified as time-varying current sources in PDN analysis. Due to the shrinking feature size and increasing design complexity, the network can easily consist of millions to billions of elements, resulting in an extremely large system. Moreover, the values of elements in a system-level PDN may vary greatly and the transient responses involve time constants of many different scales, which makes the whole differential system very stiff. In order to characterize the long-term dynamic behavior, an extended time span at small-scaled time steps is necessary and extra computational effort is required. At the same time, the stiffness of the system is increased, which degrades the performance of traditional simulation methods. All these challenges put a fast and accurate simulator in high demand. Let $x(t)\in \IR^N$ be the solution to a system of stiff differential equations \cite{chen2018transient, AWESOME}, \beqq\label{sys0} \frac{dq(t)}{dt}+f(x(t))=u(t), \; x(0)=x_0, \eeqq where $u(t)$ is the input signal to the circuit system, $x(t)\in \IR^N$ of large dimension $N$ denotes nodal voltages and branch currents at time $t$, and $q,f\in \IR^N$ are the charge (or flux) and current (or voltage) terms, respectively. The system is governed by Kirchhoff's current law and voltage law. With linearization, we have \beqq\label{sys} C \frac{dx}{dt}+G x=u(t), \; x(0)=x_0,\eeqq where $C$ and $G$ both are $N\times N$ matrices, which are the Jacobian matrices of $q$ and $f$ with respect to $x$, respectively. In the study, we assume that $C,G$ are constant matrices and \begin{equation}\label{assGC} \begin{cases} &\textrm{ $G$ is positive definite, but not necessarily symmetric}; \\ & \textrm{ $C$ is positive semi-definite and symmetric.
} \end{cases} \end{equation} Every node is supposed to connect to power or ground via a path of resistors, which makes $G$ nonsingular. For a stiff system, the solution can exhibit multiple timescales, i.e., the attractive solution is surrounded by fast-changing nearby solutions. When $C$ is nonsingular, the solution can be formulated in terms of exponentials of the matrix $A:=C^{-1} G$. There are various ways to implement the computation\cite{Moler78},\cite{Moler03} depending on the state companion matrix $A$. When $A$ is a small matrix, the most effective algorithm is a scaling-and-squaring method based on Pad{\'e} approximation\cite{Tref12}. When $A$ is sparse and large, one general and well-established technique is approximating the action of the matrix exponential in Krylov subspaces. One essential ingredient is the evaluation or approximation of the product of the exponential of the Jacobian $A$ with a vector $v$. The application of Krylov subspace techniques has been actively investigated in the literature\cite{Fri_1989, Saad92,Moler03,Hoch09,Wright_phi_2012,JIMENEZ2020112758}. In general, the nonlinear form in (\ref{sys0}) can be handled numerically by various exponential Runge-Kutta schemes with the aid of exponential integrators; see \cite{Hoch09,hochbruck_ostermann_2010} and references therein. It is well known that Krylov subspace methods for matrix functions exhibit super-linear convergence behavior for sufficiently large Krylov dimension (larger than the norm of the operator)\cite{Saad92}\cite{Hoch}. Recently, researchers have observed the superiority of rational Krylov subspace methods over standard Krylov subspace methods, in particular when the spectrum of the operator lies in a half-plane, e.g., for the Laplacian operators in PDEs\cite{Drus98},\cite{Grim08}.
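For concreteness, the standard (polynomial) Krylov approximation of $\exp(-A)v$ can be sketched in a few lines. This is an illustrative toy on an arbitrary dense test matrix, not the production algorithm discussed later:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    """Build an orthonormal basis V of K_m(A, v) and the Hessenberg projection H."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# toy problem: mildly nonsymmetric, spectrum roughly inside [0.1, 10]
rng = np.random.default_rng(0)
n, m = 200, 40
A = np.diag(np.linspace(0.1, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
v = rng.standard_normal(n)

V, H = arnoldi(A, v, m)
approx = np.linalg.norm(v) * (V @ expm(-H)[:, 0])   # exp(-A)v ~ ||v|| V exp(-H) e_1
print(np.linalg.norm(approx - expm(-A) @ v))        # small once m exceeds ~||A||
```

The printed error illustrates the super-linear convergence regime mentioned above: the Krylov dimension $m=40$ exceeds the operator norm $\|A\|\approx 10$.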
The convergence of computing exponential integrators of evolution equations in the resolvent Krylov subspace is independent of the operator norm of $A$ arising from a numerical discretization, when $A$ in $\exp(-A)$ has numerical range (also called field of values) in the right half plane~\cite{Grim12}\cite{Tanj17}. Exponential integrator based methods have been introduced for PDN transient simulations~\cite{zhuang2016simulation,chen2018transient}. Compared to the traditional linear multi-step methods, the matrix exponential based method is not bounded by the Dahlquist stability barrier, so larger step sizes can be employed~\cite{wanner2006dahlquist, zhuang2016simulation}. The stability of matrix exponential based methods when applied to ODEs has been well established in previous work \cite{Weng12_TCAD, zhuang2016simulation}. For general circuit simulation with DAEs, the stability remains an interesting topic \cite{freund2000krylov, ilchmann2014surveys, winkler2003stochastic, takamatsu2010index}. Numerical stability issues are reported in \cite{chen2018transient, AWESOME} and reveal the limitation of matrix exponential computations with Krylov subspaces. Similar problems occur in eigenvalue computation \cite{IRA, nour1987implement} and model order reduction for interconnect simulation \cite{rommes2009exploiting, IRA2}, where Krylov subspace methods are widely used. As one shift-and-invert method, a modified Arnoldi algorithm has been proposed to provide stable computations of matrix exponentials, where the Arnoldi vectors are orthogonal with respect to the system operator $C$\cite{chen2018transient, AWESOME}. In this paper, we shall examine the modified shift-and-invert Arnoldi algorithm from the perspective of numerical ranges, which provides one theoretical foundation for the Arnoldi algorithm described in~\cite{chen2018transient, AWESOME}.
Since the matrix $C$ could be singular in PDN transient simulation, we introduce the $C$ semi-inner product as well as its induced semi-norm, \[ \langle x, y\rangle_C:=\Re(x^* C y), \; \|x\|_C:=\sqrt{\Re(x^* C x)}, \] to derive the error analysis, instead of the ordinary inner product $\langle x,y\rangle:=\Re(x^* y)$. Likewise, the $C$-norm $\|x\|_C:=\sqrt{x^* Cx}$ is used to define the so-called $C$-numerical ranges in (\ref{FC}). The advantage of the $C$ semi-inner product introduced in the modified Arnoldi algorithm is two-fold: the null-space component is removed in the Arnoldi iterations, and the $C$-numerical range of the operator in the matrix exponentials lies in the right half plane. The numerical range of the upper Hessenberg matrix is properly restricted within a disk with center at $1/2$ and radius $1/2$. The $C$ semi-inner product as well as the associated Arnoldi algorithm have been employed for different purposes, e.g., solving generalized eigenvalue problems\cite{ERICSSON, IRA} and generating stable and passive Arnoldi-based model order reduction\cite{IRA2}. The main contributions are listed as follows. With the aid of eigenvectors of $C$ as a basis, solutions $x(t)$ to PDNs can be decomposed into a sum of $x_\cR(t)$ and $x_\cN(t)$. The shift-and-invert Krylov method in \cite{AWESOME} computes $x_\cR(t)$, which actually captures the dominant transient dynamical behaviors. The orthonormal basis of the Krylov subspace is generated with a quadratic norm involving the system matrix to preserve the passivity property of the system, which yields stable transient simulations. The positive definite matrix $G$ guarantees that the $C$-numerical range of $G^{-1} C$ lies in the right half plane, which establishes the convergence to $x_\cR(t)$ as the Krylov dimension tends to infinity, including posterior error bounds and prior error bounds. The shift parameter $\gamma$ in the shift-and-invert method provides the flexibility to confine the spectrum of ill-conditioned systems \cite{ericsson1980spectral}.
The error with $\varphi_k$-functions tends to $0$ as the dimension increases. In the case of the $\varphi_0$ computation with $\gamma$ proportional to the time step size, the error curve with respect to $\log \gamma$ is a $\cap$-shaped curve. The stagnation for small $\gamma$ can be significantly improved when the $\varphi_1$ or $\varphi_2$ computation is introduced, which is consistent with the empirical studies reported in \cite{AWESOME}. The rest of this paper is organized as follows. The differential-algebraic equations (DAEs) framework is introduced in Sec.~\ref{Sec_back}. The explicit formulations of the solutions in the basis of eigenvectors of $G^{-1} C$ and in the basis of eigenvectors of $C$ are given in sections~\ref{Exact_sol_1} and~\ref{Exact_sol_2}, respectively. In the paper, we focus on the computation of the projected solution $x_\cR(t)$. In section~\ref{Krylov}, we introduce the Krylov subspace corresponding to the shift-and-invert method, which is used to approximate the solution. In section~\ref{Posterior}, we give a posterior error bound based on the residual errors and a prior error bound. In section~\ref{simulations}, we provide simulations on RLC networks with $G$ only positive semidefinite to validate the effectiveness of the modified shift-and-invert Arnoldi algorithm and examine the error behaviors in computing matrix exponentials. \subsection{Solutions of nonsingular systems }\label{Sec_back} Suppose that $C$ is nonsingular with $A=C^{-1}G$. The variation-of-constants formula yields the solution $x(t)$ described by \beqq\label{1sol} x(t)=\exp(-tA) x_0+\int_0^t \exp(-(t-s)A) C^{-1} u(s)\, ds. \eeqq Introducing the so-called phi-functions, \beqq\label{phix_5} \varphi_0(z):=\exp(z),\;\varphi_{k+1}(z):=(\varphi_k(z)-(k!)^{-1})/z \textrm{ for $k\ge 0$ }, \eeqq we can approximate (\ref{1sol}) under linearization of the source term $C^{-1} u(s)\approx b+b's$ as a sum of $\varphi_0$, $\varphi_1$ and $\varphi_2$ terms.
\beqq\label{phi_x} x(t+h)\approx \varphi_0(-hA ) x(t)+h \varphi_1(-hA) b+h^2 \varphi_2(-hA) b', \eeqq where $\varphi_0(z)=\exp(z)$ and $\varphi_1(z)=z^{-1}(\exp(z)-1)$. One can employ the shift-and-invert Arnoldi transform to solve a nonsingular differential system as in \cite{Botchev}. Briefly, let $A=C^{-1}G$ and construct the Krylov subspace with respect to $(I+\gamma A)^{-1}$ with a parameter $\gamma>0$, i.e., \[ (I+\gamma A)^{-1} V_m =V_{m+1} \widetilde H_m, \] where an orthogonal basis matrix $V_m\in \IR^{N\times m}$ and an upper Hessenberg matrix $\widetilde H_m\in \IR^{(m+1)\times m}$ are generated. Let $H_m$ be the sub-matrix of $\widetilde H_m$ without the last row. Then the terms $\varphi_0, \varphi_1$ in (\ref{1sol}) can be approximated by the exponential function of $H_m$, e.g., \beqq\label{solap} \exp(-tA) x_0\approx \|x_0\|~V_m \exp(-(t/\gamma) (H_m^{-1}-I_m)) e_1. \eeqq \subsection{Solutions of singular systems } \label{Exact_sol_1} A nonsingular matrix $C$ cannot always be achieved in general power delivery networks. For instance, the nodes without nodal capacitance or inductance would contribute to the algebraic equations and the corresponding matrix $C$ is not invertible. One major impact of the singularity is that the system in (\ref{sys}) is in fact a combination of differential equations and algebraic equations, i.e., $x(t)$ must satisfy the range condition: $x(t)-G^{-1}u(t)$ in the range of $G^{-1} C$. In addition, since the projection $H_m$ is constructed from an initial vector, without careful and proper handling, the matrix $H_m$ could become a nearly degenerate matrix, and (\ref{solap}) boils down to an erroneous approximation. Hence, it is natural to perform some proper decomposition of $x(t+h)$ based on nonzero and zero eigenvalues, so that $H_m$ is not contaminated by null vectors and the solution $x(t)$ can be computed accurately. We discuss two decompositions to express the solutions.
Start with the standard approach in differential equations. (This approach is listed as Method 16 in~\cite{Moler03}.) Let $G^{-1}C=V \Lambda V^{-1}$ be the Jordan canonical form decomposition of $G^{-1}C$, where \[ \Lambda=\left( \begin{array}{cc} J_\cR & 0 \\ 0 & J_\cZ \end{array} \right)\in \IC^{N\times N} \] is in Jordan normal form. The submatrix $J_\cR\in \IC^{r\times r}$ consists of a few Jordan blocks corresponding to nonzero eigenvalues of $G^{-1}C$ and $J_\cZ\in \IC^{(N-r)\times (N-r)}$ is a nilpotent matrix corresponding to the eigenvalue zero of $G^{-1} C$. Since the null space of $G^{-1}C$ has dimension $N-n$, the algebraic multiplicity of the eigenvalue zero is not less than $N-n$. Write $V=[V_\cR, V_\cZ],\; V_\cZ:=[V_\cG, V_\cN ]$, where the columns of $V_\cR$ and $V_\cZ$ are the (generalized) eigenvectors of the nonzero and zero eigenvalues, respectively. Columns of $V_\cG $ and $ V_\cN$ are the generalized eigenvectors and the eigenvectors of eigenvalue $0$. That is, columns of $V_\cN$ are the null vectors of $G^{-1} C$. Let $U:=(V^{-1})^*=[U_\cR, U_\cZ]$, where $A^*$ is the Hermitian transpose of a matrix $A$. Consider the solution decomposition \beqq \label{apx} x(t)=x_\cR(t)+x_\cZ(t)= V_\cR x_1 (t)+V_\cZ x_2 (t)\eeqq with some vector functions $x_1 (t), x_2(t)$. By construction, \beqq\label{GC1} U^* G^{-1} C V= \left( \begin{array}{cc} J_\cR & 0 \\ 0 & J_\cZ \end{array} \right). \eeqq Multiplying (\ref{sys}) with $U^* G^{-1}$ yields the decoupled equations \beqq\label{eq15} J_\cR \frac{dx_1}{dt}+ x_1=U_\cR^* G^{-1}u(t) \eeqq and \beqq\label{eq15''} J_\cZ \frac{dx_2}{dt}+ x_2=U_\cZ^* G^{-1}u(t). \eeqq Focus on (\ref{eq15''}) first. For simplicity, assume that $G^{-1}u(t)$ is a linear function in $t$, i.e., for some constant vectors $w_0, w_1$, \[ U_\cZ^* G^{-1}u(t)=w_0+w_1 t.\] The solution $x_2(t)$ is also linear and can be expressed as \[ x_\cZ(t)= V_\cZ x_2(t)=V_\cZ(w_1 t+w_0-J_\cZ w_1)=V_\cZ( U_\cZ^* G^{-1}u(t)-J_\cZ U_\cZ^* G^{-1}\frac{d u(t)}{dt}).
\] Return to (\ref{eq15}). Let $\widetilde u(t)=J_\cR^{-1} U_\cR ^* G^{-1} u(t)$. The solution $x_1(t)$ in (\ref{eq15}) can be expressed as \beqq\label{sol2new} x_{\cR}(t):=V_\cR x_1(t)= V_\cR \left\{\exp(-t J_\cR ^{-1}) U_\cR ^* x(0)+\exp(-t J_\cR^{-1}) \int_0^t \exp(sJ_\cR ^{-1}) \widetilde u(s) \, ds\right\}. \eeqq \subsection{Solution decomposition under eigenvectors of $C$}\label{Exact_sol_2} The matrices $V_\cR, U_\cR, J_\cR$ in (\ref{sol2new}) are generally complex-valued, which makes the computation for large PDN systems very challenging. Next, we introduce one set of \textit{real} basis vectors to express the solution of (\ref{sys}): the eigenvectors of $C$. Let $C=V_C C_1V_C^\top $ be the (thin) eigenvector decomposition of $C$, where $V_C\in\IR^{N\times n}$ has orthonormal columns spanning the range of $C$ and $C_1$ is diagonal and nonsingular, with $n$ the rank of $C$. Let $P_C=V_C V_C^\top$ be the orthogonal projection matrix on the range of $C$. Also introduce the orthogonal subspaces $\cR$ and $\cN$, \begin{eqnarray} &&\cR:=\{ P_C x : x\in \IR^N\}, \; \\ &&\cN:=\{ x\in \IR^N: P_C x=0 \}. \end{eqnarray} We employ \beqq\label{V_def} V:=[V_\cR, V_\cN], \; V_\cR=V_C, \; U:=[U_\cR, U_\cN]=(V^{-1})^\top, \eeqq to decouple the system in (\ref{sys}), where the columns of $V_\cR \in \IR^{N\times n}$, $V_\cN \in \IR^{N\times (N- n)}$ are basis vectors in $\cR$ and $\cN$, respectively. Write $G, C$ in block forms, \beqq\label{GC} U^\top GV= \left( \begin{array}{cc} G_1 & G_2 \\ G_3 & G_4 \end{array} \right),\;U^\top CV= \left( \begin{array}{cc} C_1 & 0 \\ 0 & 0 \end{array} \right), \eeqq where $C_1\in \IR^{n\times n}$ is non-singular, a positive definite and symmetric sub-matrix. Consider the following solution decomposition, \beqq \label{apx1} x(t)=x_\cR(t)+x_\cN(t)= V_C x_1 (t)+V_\cN x_2 (t)\eeqq with some vector functions $x_1 (t), x_2(t)$. Applying $G^{-1}$ to (\ref{sys}) yields \textit{ one range consistency constraint on $x(t)$ }: $x(t)-G^{-1}u(t)$ must lie in the range of $G^{-1}C$, including the initial vector $x(0)$.
Actually, from (\ref{GC}), the system in (\ref{sys}) is a combination of one differential system and one algebraic system, i.e., \begin{eqnarray} &&C_1 \frac{dx_1}{dt}=-G_1 x_1-G_2 x_2+(u)_1\label{eq1}\\ &&G_3 x_1+G_4 x_2=(u)_2. \label{eq2} \end{eqnarray} Suppose $G_4$ is invertible. With (\ref{eq2}), we can eliminate $x_2$ in (\ref{eq1}) and reach one \textit{nonsingular} differential system for $x_1$, i.e., \begin{eqnarray} C_1 \frac{dx_1}{dt}&=&-(G_1-G_2 G_4^{-1} G_3) x_1+G_2G_4^{-1}(u)_2+(u)_1\\ &=&-(G^{-1})_{1,1}^{-1} x_1+G_2G_4^{-1}(u)_2+(u)_1\label{eq61}. \end{eqnarray} Such a system of differential-algebraic equations can also occur in the simulation of mechanical multi-body systems, e.g.,~\cite{Drazin}. Finally, we can determine $x_\cN$, i.e., $x_2(t)$, from (\ref{eq2}), if $G_4$ is invertible. Hereafter we shall focus on the computation of $x_1(t)$. Keep in mind that the block form in (\ref{GC}) is only of theoretical interest, since the explicit formulation requires the eigenvectors of $C$. In practical applications of large dimension, the explicit formulation in (\ref{eq1},\ref{eq2}) is unlikely to be known in advance. Next, we introduce one sufficient condition: assume the \textit{positive definite} property of $G$, which ensures the invertibility of $ G_4:=V_\cN^\top G V_\cN$. \begin{prop}\label{prop1.2} Assume that $C, G$ satisfy (\ref{assGC}) with $v^\top Gv\ge \epsilon \|v\|^2$ for some positive scalar $\epsilon$. Let \beqq \label{Bdef} B=G^{-1}C,\; B_{1,1}=V_C^\top B V_C.\eeqq Then the matrix $B_{1,1}$ is invertible. In addition, every eigenvalue $\lambda$ of $B_{1,1}$ has positive real part. \end{prop} \begin{proof} We show the invertibility of $G_4$ first. Let $v_2$ be a null vector of $G_4$. Take $v=V_\cN v_2\in \IR^N $. Then $v^\top G v=v_2^\top G_4 v_2=0$, while $v^\top G v\ge \epsilon \|v\|^2=\epsilon \|v_2\|^2$; hence $v_2=0$, i.e., the invertibility. 
Second, multiplying (\ref{sys}) by $V_C^\top G^{-1}$ yields one differential equation for $x_1$, \beqq\label{eq15'} B_{1,1} \frac{dx_1}{dt}+ x_1=V_C^\top G^{-1} V_C C_1 \frac{dx_1}{dt}+x_1=V_C^\top G^{-1} u. \eeqq Let $H=G^{-1}$. With $V=[V_C, V_\cN]$, write $V^\top H V=\left( \begin{array}{cc} H_1 & H_2 \\ H_3 & H_4 \end{array} \right).$ Since $B_{1,1}=V_C^\top B V_C=V_C^\top G^{-1} V_C C_1=H_1 C_1$, we can calculate one explicit form for $H_1^{-1}$. Indeed, $GH=I$ gives $H_3=G_4^{-1} G_3 H_1$ and $(G_1-G_2 G_4^{-1} G_3)H_1=$ the identity matrix. Likewise, $HG=I$ gives $H_1(G_1-G_2 G_4^{-1} G_3)=$ the identity matrix. Therefore, $G_1-G_2 G_4^{-1} G_3$ is $H_1^{-1}$, and thus the invertibility of $B_{1,1}$ is verified, \[ B_{1,1}^{-1}=C_1^{-1} (V_C^\top G^{-1} V_C)^{-1}= C_1^{-1} (G_1-G_2 G_4^{-1} G_3). \] Lastly, let $v$ be one eigenvector of $B_{1,1}$ corresponding to the eigenvalue $\lambda$. Then \[ \lambda v= V_C^\top B V_C v=V_C^\top G^{-1} C V_C v=V_C^\top G^{-1} V_C C_1 v \] implies \[ \lambda v^*C_1 v= (V_C C_1 v)^* G^{-1} V_C C_1 v \] and thus the positive real part is verified by \[ \Re(\lambda) v^*C_1 v=\frac{1}{2} (G^{-1}V_C C_1 v)^* (G^\top +G) (G^{-1} V_CC_1 v). \] \end{proof} With the above proposition, we can derive the solution to (\ref{eq15'}) as stated below. \begin{prop}\label{case2} Assume that $C, G$ satisfy (\ref{assGC}). Let $V:=[V_C, V_\cN]$ be as in (\ref{V_def}). Let $B_{1,1}:= V_C^\top G^{-1} C V_C$. Let $\widetilde u:=(B_{1,1})^{-1} V_C^\top G^{-1} u$. Then the projected solution $x_\cR(t)$ is given by \beqq\label{sol2} x_{\cR}(t):=V_C x_1(t)= V_C \left\{\exp(-t B_{1,1}^{-1}) V_C ^\top x(0)+\exp(-t B_{1,1}^{-1}) \int_0^t \exp(s B_{1,1}^{-1}) \widetilde u(s) \, ds\right\}. \eeqq In addition, the projected solution $x_\cN(t)$ is given by \beqq\label{sol2'} x_{\cN}(t):=V_\cN x_2(t)=V_\cN (U_\cN^\top G V_\cN)^{-1}U_\cN^\top (u(t)-G V_C x_1(t)). \eeqq \end{prop} \begin{rem} Suppose $G_4$ is invertible. 
Suppose $\widetilde u(s)$ is linear, i.e., with some vectors $\widetilde u(0), \widetilde u'(0)=\frac{d\widetilde u}{ds}(0)$, we have \[ \widetilde u(s)=\widetilde u(0)+s \widetilde u'(0).\] Then the second term in (\ref{sol2}) can be further simplified, i.e., \begin{eqnarray} &&\exp(-t B_{1,1}^{-1}) \int_0^t \exp(s B_{1,1}^{-1}) \widetilde u(s) \, ds\\ &=&B_{1,1}\{ \widetilde u(t)-\exp(-t B_{1,1}^{-1}) \widetilde u(0)\}-B_{1,1}^2 (I-\exp(-t B_{1,1}^{-1})) \widetilde u'(0) \\ &=&(-B_{1,1})\{ -I+\exp(-t B_{1,1}^{-1}) \} \widetilde u(0)+B_{1,1}^2 (-I+B_{1,1}^{-1} t+\exp(-t B_{1,1}^{-1})) \widetilde u'(0)\\ &=&t \varphi_1 (-t B_{1,1}^{-1}) \widetilde u(0)+t^2\varphi_2(-t B_{1,1}^{-1}) \widetilde u'(0). \end{eqnarray} Recall $\widetilde u(t)=(B_{1,1})^{-1} V_C ^\top G^{-1} u(t)$. Thus, the projected solution $V_C V_C ^\top x(t)$ is given by \beqq\label{solx3} x_\cR (t)= \left\{ V_C \exp(-t B_{1,1}^{-1}) V_C ^\top x(0)+ t V_C B_{1,1}^{-1} \varphi_1 (-t B_{1,1}^{-1}) V_C ^\top G^{-1} u(0)+t^2 V_C B_{1,1}^{-1} \varphi_2(-t B_{1,1}^{-1}) V_C ^\top G^{-1} u'(0)\right\}. \eeqq \end{rem} \begin{rem} What happens if $G_4$ is not invertible? This is one limitation of the decomposition described in Section~\ref{Exact_sol_2}: when $G_4$ is not invertible, $B_{1,1}$ has rank less than $n$ and $B$ can have generalized eigenvectors (in addition to null vectors) corresponding to the eigenvalue $0$. Non-invertibility of $G_4$ leads to a dimension drop, $\textrm{rank} (P_C G^{-1}C)< \textrm{rank} (C)$, i.e., the columns of $V_\cR$ and $V_\cN$ together no longer span $\IR^N$. Actually, when $G_4$ is not invertible, i.e., $G_4y=0$ for some nonzero vector $y$, the algebraic multiplicity of the zero eigenvalue of $G^{-1} C$ is greater than its geometric multiplicity. The Jordan normal form of $B=G^{-1} C$ can then have a Jordan block of order $2$ for the eigenvalue $0$. More discussions can be found in Theorem 1 in \cite{IRA} and Theorem 2.7 in \cite{ERICSSON}. Further analysis of this issue is beyond the scope of the current paper. 
\end{rem} \subsection{Krylov subspace approximation}\label{Krylov} Since $G^{-1} C$ is well-defined, it is intuitive to apply the shift-and-invert Arnoldi iterations to compute the requisite matrix exponentials in solving (\ref{sys}) with \textit{singular} $C$. To compute $x_\cR(t)$ from (\ref{sol2}) or (\ref{solx3}) for a large singular system in (\ref{sys}), we shall design one $m$-dimensional Arnoldi algorithm to construct a low-dimensional rational Krylov subspace approximation of the matrix exponential of $B_{1,1}:=V_C^\top G^{-1} C V_C$. Rational Krylov algorithms were originally developed for computing eigenvalues and eigenvectors of large matrices~\cite{RUHE84}. Unlike polynomial approximants, rational best approximants of $\exp(-x)$ can converge geometrically on the domain $[0, \infty)$~\cite{CODY69}. The rational Krylov subspace method is thus a promising approach for computing the matrix exponentials $\varphi_k(-t A)$ acting on a vector $v$, when the numerical range of $A$ is located in the right half complex plane. Typically, the numerical range of the matrix $B_{1,1}$ does not completely lie in the right half plane. In~\cite{AWESOME}, a new Arnoldi scheme with structured orthogonalization is introduced to generate one stable Krylov subspace and to compute matrix exponentials. The orthogonalization is based on the positive semi-definite matrix $C$. The orthogonality induced by the $C$ semi-inner product actually plays a fundamental role in keeping the numerical range of the operator in the right half plane under the assumption in (\ref{assGC}). \subsubsection{Shift-and-invert methods} \begin{rem} Fix some parameter $\gamma>0$. The shift-and-invert method approximates $\varphi_k(-t A) v$ in the resolvent Krylov subspace, \[ span\{v, (\gamma I+A)^{-1} v, \ldots, (\gamma I+A)^{-(m-1)} v\}. \] As one reference, we list the result for the nonsingular case. Let $A=C^{-1} G$. 
The standard Arnoldi iterations are used to construct $(V_m, H_m)$ from \[ (C+\gamma G)^{-1} C V_m=V_m H_m +h_{m+1,m} v_{m+1} e_m^\top, \] where the columns of $V_m$ form a set of orthogonal basis vectors of the $m$-dimensional Krylov subspace induced by $(C+\gamma G)^{-1} C$ and $H_m$ satisfies \[ H_m =V_m^\top (C+\gamma G)^{-1} C V_m. \] When $h_{m+1,m}=0$, $(C+\gamma G)^{-1} C$ can be approximated by $V_m H_m V_m^\top$ and then the matrix exponential can be approximated by \[ \exp(-tA) v\approx \|v\| V_m \exp(t (I-H_m^{-1})/\gamma) e_1. \] \end{rem} \begin{definition} To estimate the eigen-structure of $G^{-1}C$ restricted to $\cR$, we introduce a few matrices $S, \widetilde S, S_{1,1}$ associated with $ B$, \begin{eqnarray} && S:= P_C (C+\gamma G)^{-1} C,\; \widetilde S:= (C+\gamma G)^{-1} C,\; \label{S_def} \\ && S_{1,1}:=V_C ^\top SV_C=V_C ^\top \widetilde SV_C, \label{eq27} \; \gamma>0. \end{eqnarray} \end{definition} Let $W_m:=[ w_1, w_2, \ldots, w_m]$, whose columns span one low-dimensional subspace of the range of $P_C G^{-1} C$, and let $H_m$ be the upper Hessenberg matrix corresponding to the projection of $P_C G^{-1} C$ onto this subspace, where the $C$-orthonormal vectors $\{ w_1, w_2, \ldots, w_m\} \subset \IR^{N}$ span the Krylov subspace generated by the operator $S$, \[ span\{w_1, S w_1, S^2 w_1, \ldots, S^{m-1} w_1\}=span\{w_1, w_2, \ldots, w_m\}. \] The algorithm to generate $(W_m, H_m)$ is stated in Algorithm~\ref{algo_ortho_arnoldi_singular}. Empirically, we use the Arnoldi iterations in (\ref{eq29}) to compute $\widetilde W_m$ and $H_m$ instead. Prop.~\ref{prop1.6} shows that the computation of the approximation $x_a(t)$ in (\ref{eq52}) involves only one single application of $P_C$. Since $W_m$ is the projection of $\widetilde W_m$ under $P_C$, the upper Hessenberg matrices are identical. Then the matrix exponentials can be approximated by (\ref{eq52}), where only one $P_C$ projection is applied. 
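A minimal dense sketch of the iteration just described may help fix ideas. The NumPy code below is illustrative only (random toy matrices of hypothetical sizes; the dense `np.linalg.inv` stands in for the sparse factorization used in practice): it performs the $C$-orthogonal Arnoldi iteration for $S=P_C(C+\gamma G)^{-1}C$ and checks the $C$-orthonormality of $W_m$, the Arnoldi relation $SW_m=W_mH_m+h_{m+1,m}w_{m+1}e_m^\top$, and the passivity of $H_m$.

```python
import numpy as np

def c_orthogonal_arnoldi(C, G, w0, m, gamma):
    """C-orthogonal Arnoldi sketch for S = P_C (C + gamma*G)^{-1} C.

    Returns W (N x (m+1), C-orthonormal columns), the (m+1) x m Hessenberg
    matrix H, the projector P onto range(C), and K = (C+gamma*G)^{-1} C.
    Dense linear algebra, for illustration only.
    """
    N = C.shape[0]
    lam, V = np.linalg.eigh(C)
    VC = V[:, lam > 1e-12 * lam.max()]
    P = VC @ VC.T                              # orthogonal projector onto range(C)
    K = np.linalg.inv(C + gamma * G) @ C
    W = np.zeros((N, m + 1))
    H = np.zeros((m + 1, m))
    w = P @ w0
    W[:, 0] = w / np.sqrt(w @ C @ w)
    for j in range(m):
        w = P @ (K @ W[:, j])                  # apply S
        for i in range(j + 1):                 # modified Gram-Schmidt in <.,C.>
            H[i, j] = w @ C @ W[:, i]
            w = w - H[i, j] * W[:, i]
        H[j + 1, j] = np.sqrt(w @ C @ w)
        W[:, j + 1] = w / H[j + 1, j]
    return W, H, P, K

rng = np.random.default_rng(1)
N, n, m, gamma = 10, 6, 4, 0.5
M = rng.standard_normal((N, n)); C = M @ M.T   # rank-n PSD C
A = rng.standard_normal((N, N))
G = A @ A.T + N * np.eye(N) + 0.2 * (A - A.T)  # positive definite symmetric part

W, H, P, K = c_orthogonal_arnoldi(C, G, rng.standard_normal(N), m, gamma)
Wm, Hm = W[:, :m], H[:m, :m]

ortho_err = np.abs(Wm.T @ C @ Wm - np.eye(m)).max()
R = (P @ K) @ Wm - Wm @ Hm                     # S W_m - W_m H_m
R[:, m - 1] -= H[m, m - 1] * W[:, m]           # minus h_{m+1,m} w_{m+1} e_m^T
arnoldi_err = np.abs(R).max()
min_real_eig = np.linalg.eigvals(Hm).real.min()
```

The last quantity illustrates the passivity property discussed below: with the $C$-inner product, the eigenvalues of $H_m$ stay in the closed right half plane.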
Observe that when $h_{m+1,m}=0$ in (\ref{eq28}), we have $S W_m=W_m H_m$, which suggests the approximation $W_m H_m W_m^\top C$ of $S$. The proof is straightforward and thus omitted. \begin{prop}\label{prop1.6} Consider the following two $C$-orthogonal Arnoldi iterations to generate $(W_m, H_m)$ and $(\widetilde W_m, \widetilde H_m)$ from $S$ and $\widetilde S$, respectively: \begin{eqnarray}\label{cV} && S W_m=W_m H_m+h_{m+1,m} w_{m+1} e_m^\top,\label{eq28}\\ && \widetilde S \widetilde W_m=\widetilde W_m \widetilde H_m+\widetilde h_{m+1,m} \widetilde w_{m+1} e_m^\top,\label{eq29} \end{eqnarray} where the columns of $W_m$ and $\widetilde W_m$ form two sets of $C$-orthonormal vectors \[ W_m=[w_1, w_2,\ldots, w_m], \widetilde W_m=[\widetilde w_1, \widetilde w_2,\ldots, \widetilde w_m], \, W_m^\top C W_m=\widetilde W_m^\top C \widetilde W_m=I.\] \begin{itemize} \item Suppose the first column of $W_m$ lies in the range of $P_C G^{-1} C$. Then all columns of $W_m$ lie in the range of $P_C G^{-1} C$. \item Suppose the first column of $\widetilde W_m$ lies in the range of $G^{-1} C$. Then all columns of $\widetilde W_m$ lie in the range of $G^{-1} C$. \item Suppose $(\widetilde W_m, \widetilde H_m)$ satisfies (\ref{eq29}). Let $W_m=P_C \widetilde W_m$ and $H_m=\widetilde H_m$. Then $(W_m, H_m)$ satisfies (\ref{eq28}). \end{itemize} \end{prop} The $C$-orthogonality, together with the positive definiteness assumption on $G$, implies the passivity property and the invertibility of $H_m$. This is also known as the stability condition~\cite{IRA2}. 
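The third bullet of Prop.~\ref{prop1.6} can also be probed numerically: running the $\widetilde S$-iteration (no projection inside the loop) and the $S$-iteration from the projected start vector produces the same Hessenberg matrix, and the two bases are related by $P_C$. The NumPy sketch below uses small random matrices (illustrative sizes and data only, dense solves standing in for sparse ones).

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, m, gamma = 10, 6, 4, 0.5
M = rng.standard_normal((N, n)); C = M @ M.T
A = rng.standard_normal((N, N))
G = A @ A.T + N * np.eye(N) + 0.2 * (A - A.T)

lam, V = np.linalg.eigh(C)
VC = V[:, lam > 1e-12 * lam.max()]
P = VC @ VC.T                                  # projector onto range(C)
St = np.linalg.inv(C + gamma * G) @ C          # tilde-S (no projection)

def c_arnoldi(op, w, steps):
    """C-orthogonal Arnoldi for the dense operator `op`."""
    W = np.zeros((N, steps + 1)); H = np.zeros((steps + 1, steps))
    W[:, 0] = w / np.sqrt(w @ C @ w)
    for j in range(steps):
        w = op @ W[:, j]
        for i in range(j + 1):
            H[i, j] = w @ C @ W[:, i]
            w = w - H[i, j] * W[:, i]
        H[j + 1, j] = np.sqrt(w @ C @ w)
        W[:, j + 1] = w / H[j + 1, j]
    return W, H

z = St @ rng.standard_normal(N)                # start vector in the range of tilde-S
Wt, Ht = c_arnoldi(St, z, m)                   # iteration (eq29) with tilde-S
Ws, Hs = c_arnoldi(P @ St, P @ z, m)           # iteration (eq28) with S = P_C tilde-S

hess_gap = np.abs(Ht - Hs).max()               # identical Hessenberg matrices
basis_gap = np.abs(P @ Wt - Ws).max()          # W_m = P_C tilde-W_m
```

The agreement rests on $CP_C=C$, so the $C$-inner products and $C$-norms are unaffected by the projection.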
\begin{algorithm} [hbt] \caption{ {\bf An Arnoldi algorithm with explicit structured orthogonalization and implicit regularization}\cite{AWESOME} \label{algo_ortho_arnoldi_singular} \small \KwIn { $C, G, k, \gamma, w, m$ } \KwOut {$H_m, W_m$} { Set $w =P_C w$\; $ w_1 = \frac{w }{\lVert w\rVert_{C}}$ where $\lVert w\rVert_{C} = \sqrt{w^\top {C}w}$ and $w_1^\top C w_1 = 1$ \; \For {$j=1:m$} { Solve $(\gamma G+C) w = C w_j$ and obtain $w$\; \label{algo_construct_line} Set $w = P_C w$\; \For {$i = 1:j$} { $h_{i,j} = w^\top C w_{i}$\; $ w = w - h_{i,j} w_{i}$\; } $h_{j+1,j} = \lVert w \rVert _C$\; $w_{j+1} = \frac{ w }{h_{j+1,j}} $\; \If{residual $<$ tolerance} { Results converge at dimension $m$\; } } } \end{algorithm} \begin{rem}[Passivity property] Assume $G,C$ are given as in (\ref{assGC}). The advantage of the $C$-orthogonal iterations in (\ref{cV}) lies in the preservation of the passivity property of $H_m$, i.e., all eigenvalues of $H_m$ have non-negative real parts. In particular, with $G$ positive definite, we have the invertibility of $H_m$, which is crucial to the algorithm as well as to the error analysis. Indeed, observe that (\ref{cV}) implies \beqq\label{H_def1} W_m^\top C S W_m= W_m^\top C P_C (C+\gamma G)^{-1}C W_m=W_m^\top C (C+\gamma G)^{-1}C W_m=H_m. \eeqq Then for each nonzero vector $x\in \IR^m$, with $y:=(C+\gamma G)^{-1} (CW_m x)\in \IR^N$, we have \[ \langle x, H_m x\rangle = (CW_m x)^\top (C+\gamma G)^{-1} (CW_m x)= y^\top (C+\gamma G) y\ge 0. \] \end{rem} The following shows the relation between $B_{1,1}$ and $S_{1,1}$. \begin{prop} \label{prop1.8} Suppose $G$ is positive definite. Let $\gamma>0$, and introduce the function $g:\IC\to \IC$ and its inverse $g_1$, \[ \lambda=g(\mu)=(1+\gamma\mu^{-1})^{-1},\quad \mu=g_1(\lambda):=g^{-1}(\lambda)=((\lambda^{-1}-1)/\gamma)^{-1}.\] Then \beqq\label{BS} B_{1,1}=g^{-1}( S_{1,1}),\; S_{1,1}=g(B_{1,1}).\eeqq \end{prop} \begin{proof} By Prop.~\ref{prop1.2}, $B_{1,1}$ is invertible. 
Let $T:=V_C^\top G^{-1} V_C$ and $C_1=V_C^\top C V_C$. Then \beqq B_{1,1}=V_C^\top G^{-1}C V_C = T C_1, \eeqq \begin{eqnarray} S_{1,1}&=& V_C ^\top (G^{-1}(C+\gamma G))^{-1} G^{-1}C V_C = V_C ^\top (G^{-1}C+\gamma I)^{-1} G^{-1}C V_C\\ & =& V_C ^\top \{I-\gamma (G^{-1}C+\gamma I)^{-1}\} V_C= I-\gamma (T C_1 +\gamma I)^{-1}\\ &=& (T C_1 +\gamma I)^{-1} TC_1 =g( B_{1,1}), \end{eqnarray} where the second line uses the identity $V_C ^\top (G^{-1}C+\gamma I)^{-1} V_C=(V_C^\top G^{-1}C V_C+\gamma I)^{-1}$, a consequence of the block lower-triangular form of $G^{-1}C$ with respect to the orthogonal basis $[V_C, V_\cN]$ (note that $G^{-1}C V_\cN=0$). \end{proof} Introduce a few notations. Let $g,g_1$ be given as in Prop.~\ref{prop1.8}, let \beqq \label{def_f} f(\lambda):=\varphi_0(-t (g^{-1}(\lambda))^{-1})=\varphi_0(-t g_1(\lambda)^{-1} ) \eeqq and let \beqq\label{f1} f_k(\lambda):= g_1(\lambda)^{-1} \varphi_k(-t g_1(\lambda)^{-1}) \textrm{ for $k=1,2$.} \eeqq Now, we are ready to state one approximation $x_a(t)$ of $x_\cR (t)$ in (\ref{solx3}). The error analysis will be given in the next section. \begin{thm} Let $(\widetilde W_m, H_m)$ and $( W_m, H_m)$ be generated from the Arnoldi iterations with respect to $\widetilde S$ and $S$ in Prop.~\ref{prop1.6}. Let \begin{eqnarray}\label{x_apr} && x_a(t):=W_m \left\{ f( H_m) W_m^\top C x(0)+ t f_1 (H_m) W_m^\top C G^{-1} u(0)+t^2 f_2( H_m) W_m^\top C G^{-1} u'(0) \right\}\\ &=& P_C \widetilde W_m \left\{ f( H_m) \widetilde W_m^\top C x(0)+ t f_1 (H_m) \widetilde W_m^\top C G^{-1} u(0)+t^2 f_2( H_m) \widetilde W_m^\top C G^{-1} u'(0) \right\}\label{x_apr_1}\label{eq52} \end{eqnarray} Suppose $x(0)$, $G^{-1}u(0)$ and $G^{-1}u'(0)$ all lie in the range of $W_m$ and $h_{m+1,m}=0$. Then $x_a(t)=x_\cR(t)$. \end{thm} \begin{proof} Write $x_\cR (t)$ in (\ref{solx3}) as follows, \[ x_\cR (t):=z_1(t)+z_2(t)+z_3(t).\] The first term in (\ref{solx3}) gives \begin{eqnarray} &&z_1(t)= V_C \exp(-t B_{1,1}^{-1}) V_C ^\top x(0)\label{eq20} \\ &=& V_C \exp(-t \{ g^{-1}(S_{1,1})\}^{-1}) V_C ^\top x(0)=V_C f(S_{1,1})V_C ^\top x(0).\label{eq_21} \end{eqnarray} The approximation of (\ref{eq_21}) is computed as follows. 
From \[ S_{1,1}\approx V_C ^\top W_m H_m W_m^\top C V_C,\] and $V_C V_C^\top W_m=W_m$, we have \beqq\label{eq40} (S_{1,1})^k \approx V_C ^\top W_m H_m^k W_m^\top C V_C .\eeqq Since the columns of $W_m$ lie in the range of $V_C$, the $C$-orthogonality and (\ref{eq_21}) yield \begin{eqnarray}\label{eq45} z_1(t)\approx W_m f( H_m) W_m^\top CV_C V_C^\top x(0)= W_m f( H_m) W_m^\top C x(0). \label{eq24}\end{eqnarray} \footnote{In the case of $h_{m+1,m}=0$, the equalities in (\ref{eq40}) hold and thus the equalities in (\ref{eq45}) hold.} For the remaining terms $z_2(t),z_3(t)$ of (\ref{solx3}), we have \[ V_C \varphi_0(-t B_{1,1}^{-1}) V_C ^\top \widetilde u(0)=V_C \exp(-t B_{1,1}^{-1}) V_C ^\top \widetilde u(0) \approx W_m f( H_m) W_m^\top C \widetilde u(0). \] Likewise, since $(B_{1,1})^{-1}=(g^{-1}(S_{1,1}))^{-1}=g_1(S_{1,1})^{-1}$, we have \[ V_C B_{1,1}^{-1} \varphi_k(-t B_{1,1}^{-1}) V_C ^\top \widetilde u'(0)= V_C g_1(S_{1,1})^{-1} \varphi_k(-t g_1(S_{1,1})^{-1}) V_C ^\top \widetilde u'(0) \approx W_m f_k( H_m) W_m^\top C \widetilde u'(0). \] In summary, we have (\ref{x_apr}) and (\ref{x_apr_1}) by Prop.~\ref{prop1.6}. \end{proof} \begin{rem}[Complete solutions $x(t)$] With (\ref{x_apr}), we can compute the complete solution $x_\cR(t)+x_\cN(t)$. From (\ref{sys}), \beqq \label{x_complete}x(t)=x_\cR(t)+x_\cN(t)=G^{-1}u(t)-G^{-1}C \frac{dx_\cR(t)}{dt},\eeqq where \begin{eqnarray} && \frac{dx_\cR(t)}{dt}= W_m \{ -g_1(H_m)^{-1} \exp(-t g_1(H_m)^{-1}) W_m^\top C x(0) \\&& + g_1(H_m)^{-1} \exp(-t g_1(H_m)^{-1}) W_m^\top C G^{-1}u(0) + (I- \exp(-t g_1(H_m)^{-1})) W_m^\top C G^{-1}u'(0) \}\\ &=&W_m \{ g_1(H_m)^{-1} \exp(-t g_1(H_m)^{-1}) W_m^\top C (-x(0)+G^{-1}u(0)) \\&& + (I- \exp(-t g_1(H_m)^{-1})) W_m^\top C G^{-1} u'(0) \}. \end{eqnarray} \end{rem} \begin{rem} How to choose the initial vectors for the Arnoldi iterations? Suppose $x(0)$, $G^{-1}u(0)$ and $G^{-1}u'(0)$ lie in $\cR$. 
Then it is typical to choose them as the initial vectors of the corresponding Arnoldi iterations, with a proper $C$-normalization, i.e., the first column of $\widetilde W_m$ is the normalized vector $w/\langle w, Cw\rangle^{1/2}$. Note that when $(\widetilde W_m^{(0)}, H_m^{(0)})$ is generated from the $C$-orthogonal Arnoldi iterations with the initial vector $x_0$ in $\cR$, the first term of $x_a(t)$ in (\ref{x_apr}) becomes $ \beta_0 P_C \widetilde W_m^{(0)} f( H_m ^{(0)}) e_1 $, where $ \beta_0=\|x_0\|_C$. Empirically, one can collect all the exponential terms into one matrix-exponential-and-vector product (either $\varphi_0$, $\varphi_1$ or $\varphi_2$) and construct only one pair of $(W, H)$ to conduct the computation, as considered in~\cite{AWESOME}. \end{rem} \section{Error analysis} \subsection{$C$-numerical range} The numerical range (also called the field of values)~\cite{ Charles, Crou07, Bern09}, which is the range of the Rayleigh quotient, is one fundamental quantity in the error analysis of matrix exponential computations. To establish the convergence, for a square matrix $A\in \IC^{N\times N } $ of the form $A=KC$ with some matrix $K\in \IR^{N\times N}$, we introduce the $C$-numerical range \beqq\label{FC} \cF_C (A)=\{x^* CA x: x\in \IC^N, \|x\|_C:=\sqrt{x^*Cx}\le 1 \}, \eeqq which is one generalization of the standard numerical range \[ \cF(A)=\{x^* A x: x\in \IC^N, \|x\|\le 1\}. \] Here $A$ could be the matrix $B$ in (\ref{Bdef}) or $S$ in (\ref{S_def}). Clearly, the set $\cF_C(A)$ in (\ref{FC}) depends only on the components of $x$ in the range of $C$. \begin{definition} The disk with center $c_1\in \IC$ and radius $\rho_1>0$ is denoted by $\cD(c_1, \rho_1)\subset \IC$. \end{definition} Due to the possibly non-symmetric structure of $G$, the numerical range $\cF_C (B)$ need not be a line segment on the real axis. The smallest disk covering $\cF_C(B)$ is introduced to localize the spectrum of $B=G^{-1} C$. 
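Before turning to the formal statements, the claim that $\cF_C(B)$ stays in the right half plane can be probed numerically by sampling the quotient $x^* C B x / x^* C x$ over random complex vectors. The NumPy sketch below (random toy matrices; the sizes and the sample count are arbitrary choices) is only an illustration, not a proof.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 9, 5
M = rng.standard_normal((N, n)); C = M @ M.T   # symmetric PSD, rank n
A = rng.standard_normal((N, N))
G = A @ A.T + N * np.eye(N) + 0.3 * (A - A.T)  # positive definite symmetric part
B = np.linalg.inv(G) @ C

# Sample F_C(B) = { x^* C B x : x^* C x <= 1 } via normalized Rayleigh quotients.
CB = C @ B
samples = []
for _ in range(2000):
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    cnorm2 = np.real(x.conj() @ C @ x)
    if cnorm2 > 1e-10:
        samples.append((x.conj() @ CB @ x) / cnorm2)
samples = np.array(samples)
min_real_part = samples.real.min()             # positive when sym(G) is PD
```

With $y=Cx$, the real part of each sample is $y^*\,\mathrm{sym}(G^{-1})\,y / x^*Cx$, which is positive whenever the symmetric part of $G$ is positive definite.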
For $G,C$ in (\ref{assGC}), let $C=V_C C_1V_C^\top$ be the eigenvector decomposition. Note that the eigenvalues of $B_{1,1}$ all lying in the right half plane (Prop.~\ref{prop1.2}) does not imply that $\cF(B_{1,1}) $ lies in the right half plane. By contrast, the $C$-numerical range $\cF_C(B)$ always lies in the right half plane. \begin{prop} Let $A$ be of the form $A=K C$ for some matrix $K\in \IR^{N\times N}$. Then both $\cF(A)$ and $\cF_C( A)$ contain all nonzero eigenvalues of $A$. In addition, if the symmetric part of $K$ is positive semi-definite, then $\cF_C(A)$ lies in the right half plane. \end{prop} \begin{proof} Let $x$ be a nonzero eigenvector of $A$ corresponding to a nonzero eigenvalue $\lambda$. Then $Ax=\lambda x$ and the first statement is given by \[ \lambda=\frac{x^* Ax }{x^* x}=\frac{x^* C Ax }{x^* C x}. \] In addition, if $K+K^\top\succeq 0$, then for all $x$ with $Cx\neq 0$, \[ \Re\left(\frac{x^* C Ax }{x^* C x}\right)= \Re\left(\frac{x^* C KC x }{x^* C x}\right)= \frac{x^* C (K+K^\top) Cx }{2 x^* C x} \ge 0. \] \end{proof} Here are a few properties of $\cF_C(B)$ when $G$ is positive definite. \begin{prop}\label{FCB} Suppose that (\ref{assGC}) holds for $G,C$. Let $H=G^{-1}$. Assume in addition that $(H+H^\top)/2$ is positive definite with eigenvalues in $[\xi_1, \xi_2]$, $\xi_1>0$, that $(H-H^\top)/2$ has eigenvalues in $[-i\xi_3, i\xi_3]$, and that $C$ is positive semi-definite with eigenvalues in $\{0\}\cup [\xi_4, \xi_5]$, $\xi_4>0$. Then $\cF_C(B)$ lies in $\cD(c_1, \rho_1)$ with $c_1>\rho_1$. Here $c_1,\rho_1$ only depend on the parameters $\xi_1,\xi_2,\xi_3,\xi_4,\xi_5$ of $C,G$. \end{prop} \begin{proof} Note that the $C$-numerical range of $B$ can be expressed as \beqq\label{FB} \cF_C(B)=\{ \frac{x^* C G^{-1} C x}{x^* C x}: x\in \IC^N, Cx\neq 0 \}=\{ z^* C_1^{1/2}V_C^\top G^{-1} V_C C_1^{1/2} z: \|z\|\le 1, z\in \IC^n\}. 
\eeqq Since $G^{-1}=(H+H^\top)/2+(H-H^\top)/2$, $\cF_C(B)$ lies within a box region in the right half plane, \[ 0<\xi_1\xi_4 \le \Re(\cF_C(B))\le \xi_2 \xi_5,\; -\xi_3\xi_5 \le \Im(\cF_C(B))\le \xi_3 \xi_5, \] where equalities can hold only if $z$ is a purely real or purely imaginary vector. Thus, we can find some $c_1>0, \rho_1>0$ with $c_1-\rho_1>0$, such that $\cF_C(B)\subset \cD(c_1, \rho_1)$. Choose \[\rho_1:=\sqrt{ (\max( c_1-\xi_1\xi_4, \xi_2\xi_5 -c_1))^2+(\xi_3\xi_5)^2 }. \] Note that $c_1^2\ge \rho_1^2$ holds if and only if \[ c_1\ge \max\{ (2\xi_1\xi_4)^{-1} \{\xi_1^2\xi_4^2+\xi_3^2\xi_5^2\}, (2\xi_2\xi_5)^{-1} \{\xi_2^2\xi_5^2+\xi_3^2\xi_5^2\}\}. \] Hence, with a sufficiently large value of $c_1$, the disk $\cD(c_1,\rho_1)$ containing $\cF_C(B)$ lies in the right half plane. \end{proof} In general, $B$ is not normal. The following proposition and remark exhibit the dependence of $\cF_C(S)$ and $\cF(H_m)$ on $\cF_C(B)$. As long as $\cF_C(B)$ lies in the right half plane, $\cF(H_m)$ does as well. The function $g$ below is one M{\"o}bius transformation and hence maps generalized circles to generalized circles; the image of $\cF_C(B)$ actually lies within $\cD(1/2,1/2)$. \begin{prop}\label{Bound1} Let $\gamma>0$ and $\lambda=g(\mu)=(1+\gamma\mu^{-1})^{-1}$, which maps $\mu \in \cF_C(B)$ to $\lambda\in \cF_C(S)$ by (\ref{BS}). Suppose (\ref{assGC}) holds. Then Prop.~\ref{FCB} indicates that $\cF_C(B)$ lies in the right half plane, \beqq \label{ass1}\cF_C(B)\subset \cD(c_1,\rho_1) \textrm{ with some } c_1, \rho_1 \in \IR. \eeqq Let $\mu_1: =c_1-\rho_1>0 \textrm{ and } \mu_2:=c_1+\rho_1$. Then $\cF_C(S)\subset \cD(c_0, \rho_0)$, where $c_0=(g(\mu_1)+g(\mu_2))/2$, $\rho_0=(g(\mu_2)-g(\mu_1))/2$. Note that since $\mu_1\ge 0$ and $\mu_2\ge 0$, we have $g(\mu_2)\le 1$ and $g(\mu_1)\ge 0$. Thus, $\cF_C(S)\subset \cD(1/2,1/2)$. \end{prop} \begin{proof} We use the mapping theorem of Berger--Stampfli (1967)~\cite{Berger}. 
Let \beqq\label{TB} T=(B-c_1I)/\rho_1, \eeqq i.e., $B=c_1 I+\rho_1 T$, where $c_1=(\mu_1+\mu_2)/2$, $\rho_1=(\mu_2-\mu_1)/2$. Then by (\ref{ass1}), $|\cF_C (T)|\le 1$. Choose the analytic function $f:\cD(0,1)\to \cD(0,1)$, \[ f(z)=\frac{g(\rho_1 z+c_1)-c_0}{\rho_0}. \] Since $g$ maps a circle with center on the real axis to another circle with center on the real axis, by the definitions of $c_0, c_1, \rho_1$, we have $|f(z)|\le 1$ for all $|z|\le 1$. Clearly, $f(z)$ is analytic in $|z|<1$ and continuous on the boundary. By the theorem in~\cite{Berger}, $\cF_C(f(T))$ also lies in $\cD(0,1)$. Thus with (\ref{TB}), \[\cF_C(S)=\cF_C(g(B))= c_0+\rho_0\cF_C \left(\frac{g(\rho_1 T+c_1I)-c_0I }{\rho_0}\right) \] lies in the disk $\cD(c_0, \rho_0)$, i.e., the disk with center $c_0$ and radius $\rho_0$. \end{proof} \begin{rem} The passivity property of the system implies that $\cF(H_m)$ lies in $\cD(c_0,\rho_0)$. Indeed, from (\ref{H_def1}) and $W_m^\top CW_m=I$, the numerical range of $H_m$ lies inside the $C$-numerical range, \beqq\label{HS} \cF(H_m)\subset \cF_C(S)\eeqq by the definitions of $\cF$ and $\cF_C$. \end{rem} To establish the convergence, we need the following results. The next result relates the $C$-operator norm to the radius of the $C$-numerical range. \begin{prop}\label{bound2} Let $A\in \IR^{N\times N}$ be of the form $KC$ with $K\in \IR^{N\times N}$, and let $\cF_C(A)$ lie in $\cD(0, \rho)$. Then \[ \|A\|_C:=\sup_v\{ \|A v\|_C/\|v\|_C\}\le 2\rho.\] \end{prop} \begin{proof} Since $C$ is unitarily diagonalizable, $C=V_C C_1V_C^\top $, the matrix square root $C^{1/2}$ is given by $C^{1/2}=V_C C_1^{1/2} V_C^\top $. 
For any $v\in \IR^N$ with $Cv\neq 0$, we have \begin{eqnarray} &&\frac{\|(K + K^\top )Cv \|_C}{\|v\|_C}=\frac{\| C^{1/2}(K+K^\top)C^{1/2}C^{1/2} v \|}{\|C^{1/2} v\|}\le \| C^{1/2}(K+K^\top)C^{1/2} \|\\ &=&\max_{\|x\|_C\le 1} |x^*(CK C+C K^\top C) x|=2\max_{\|x\|_C\le 1} |\Re(x^* C K C x)|\le 2\rho.\end{eqnarray} Likewise, $\|v\|_C^{-1}\|(K-K^\top )Cv \|_C\le \|C^{1/2}(K-K^\top )C^{1/2} \| =\max_{\|x\|_C\le 1} |x^*(CK C-CK^\top C) x|=2\max_{\|x\|_C\le 1}|\Im(x^* C K C x)|\le 2\rho.$ Combining the two bounds with $KC=\frac12 (K+K^\top)C+\frac12 (K-K^\top)C$ and the triangle inequality gives $ \|K Cv \|_C/\|v\|_C \le 2\rho.$ \end{proof} The following inequality brings the numerical range $\cF_C(A)$ into the error bounds for (\ref{eq69}). \begin{prop}\label{bound3} Let $\Gamma$ be a set in $\IC$ and let $d(\Gamma, \cF_C(A))$ be the shortest distance between $\Gamma$ and $\cF_C(A)$. Then \[\max_{\lambda\in \Gamma} \|(\lambda I-A)^{-1}\|_C \le d(\Gamma, \cF_C(A))^{-1}.\] \end{prop} \begin{proof} Fix $\lambda\in \Gamma$ and let $u=(\lambda I-A)^{-1} v$. Then \[ d(\Gamma, \cF_C(A))\le \frac{|\langle u, C(\lambda I-A )u \rangle |}{\|u\|_C^2}=\|u\|_C^{-2}|\langle u, v\rangle_C |\le \|u\|_C^{-1}\cdot \|v\|_C. \] Hence, for each vector $v$, we have \[ \frac{\| (\lambda I -A)^{-1} v\|_C}{\|v\|_C}=\frac{\|u\|_C}{\|v\|_C}\le d(\Gamma, \cF_C(A))^{-1}, \] which completes the proof. \end{proof} \subsection{A posteriori error bounds (residual)}\label{Posterior} A posteriori error estimates are crucial in practical computations, e.g., for determining the dimension $m$ of the Krylov space for (\ref{x_apr}) or the time span used in the matrix exponential. In the following, we apply the residual arguments in~\cite{Botchev} to estimate the errors of (\ref{x_apr}) in the case of $h_{m+1,m}\neq 0$. Here we focus on the term involving $\varphi_0$ in (\ref{x_apr}) for the sake of simplicity. 
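As one concrete sanity check for the $\varphi_0$-term (a NumPy/SciPy sketch with toy data; the sizes, seed, $\gamma$ and $t$ are arbitrary choices): running the $C$-orthogonal Arnoldi to the full dimension $m=\textrm{rank}(C)$ makes $h_{m+1,m}$ vanish, the residual scalar with it, and the Krylov value then reproduces the exact $y(t)=V_C \exp(-tB_{1,1}^{-1})V_C^\top x(0)$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
N, n, gamma, t = 8, 3, 0.5, 1.0
M = rng.standard_normal((N, n)); C = M @ M.T
A = rng.standard_normal((N, N))
G = A @ A.T + N * np.eye(N) + 0.2 * (A - A.T)

lam, V = np.linalg.eigh(C)
VC = V[:, lam > 1e-12 * lam.max()]
P = VC @ VC.T
K = np.linalg.inv(C + gamma * G) @ C
x0 = P @ rng.standard_normal(N)                # initial state in the range of C

m = n                                          # full dimension: expect breakdown
W = np.zeros((N, m + 1)); H = np.zeros((m + 1, m))
W[:, 0] = x0 / np.sqrt(x0 @ C @ x0)
for j in range(m):
    w = P @ (K @ W[:, j])
    for i in range(j + 1):
        H[i, j] = w @ C @ W[:, i]
        w = w - H[i, j] * W[:, i]
    H[j + 1, j] = np.sqrt(max(w @ C @ w, 0.0))
    if H[j + 1, j] > 1e-10:                    # guard against the breakdown step
        W[:, j + 1] = w / H[j + 1, j]
Wm, Hm = W[:, :m], H[:m, :m]

# Krylov value of the phi_0-term, with g1(H_m)^{-1} = (H_m^{-1} - I)/gamma.
D = (np.linalg.inv(Hm) - np.eye(m)) / gamma
y_m = Wm @ (expm(-t * D) @ (Wm.T @ C @ x0))

# Exact value V_C exp(-t B_{1,1}^{-1}) V_C^T x(0).
B11 = VC.T @ np.linalg.inv(G) @ C @ VC
y_exact = VC @ (expm(-t * np.linalg.inv(B11)) @ (VC.T @ x0))

exact_gap = np.abs(y_m - y_exact).max()
breakdown = H[m, m - 1]                        # ~0, so the residual scalar ~0 too
```

In the non-breakdown regime $h_{m+1,m}\neq 0$, the same quantities feed the residual-based a posteriori estimate.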
\begin{prop} Let $y_m(t)$ be the first term of the approximation of $x_\cR(t)$ in (\ref{solx3}), i.e., the first term in (\ref{x_apr}), \[y_m(t):=W_m \varphi_0(-tg_1(H_m)^{-1} )W_m^\top C x(0)\in \cR.\] Denote the residual function by $r_m(t)$, \[ r_m(t):=P_C G^{-1}C \frac{dy_m}{dt}+ y_m.\] Then \beqq\label{residual} r_m(t)=- \beta(t) P_C \{G^{-1}( C+\gamma G) w_{m+1}\}, \eeqq where $\beta(t)$ is a scalar, which takes the same value whether $W_m$ or $\widetilde W_m$ is used, \begin{eqnarray} &&\label{beta_m} \beta(t):=h_{m+1,m}\gamma^{-1} e_m^\top H_m^{-1} \varphi_0(-tg_1(H_m)^{-1} )W_m^\top C x(0)\\ &=&h_{m+1,m}\gamma^{-1} e_m^\top H_m^{-1} \varphi_0(-tg_1(H_m)^{-1} )\widetilde W_m^\top C x(0) \in \IR. \end{eqnarray} \end{prop} \begin{proof} Let $y(t)$ denote the corresponding term of the exact solution in (\ref{solx3}), $y(t):=V_C \exp(-tB_{1,1}^{-1}) V_C ^\top x(0)$. Note that $y(t)$ satisfies \beqq\label{xt} P_CG^{-1} C \frac{dy}{dt}+ y=V_C B_{1,1} V_C ^\top \frac{dy}{dt}+ y=(-P_C+V_C V_C^\top ) G^{-1} C V_C B_{1,1}^{-1} \exp(-tB_{1,1}^{-1}) V_C ^\top x(0)=0. \eeqq Since $V_C B_{1,1}V_C^\top W_m=V_C V_C^\top G^{-1} C V_C V_C^\top W_m=P_C G^{-1} C W_m$, and, from the definition of $g_1$, $(W_m, H_m)$ satisfies (\ref{eq28}), we have \begin{eqnarray} &&\label{eq47} V_C B_{1,1} V_C ^\top W_m g_1(H_m)^{-1}=P_CG^{-1} C W_m (H_m^{-1}-I)\gamma^{-1} \\ &=&P_CG^{-1}\{ G W_m +\gamma^{-1}(C+\gamma G) h_{m+1,m} w_{m+1} e_m^\top H_m^{-1} \}. \end{eqnarray} Then \begin{eqnarray}\label{xtm} r_m(t)&=&P_C G^{-1}C \frac{dy_m}{dt}+ y_m=V_C B_{1,1} V_C ^\top \frac{dy_m}{dt}+ y_m\\ &=& \left\{-V_C B_{1,1} V_C ^\top W_m g_1(H_m)^{-1} + W_m \right\} \varphi_0(-tg_1(H_m)^{-1} )W_m^\top C x(0)\\ &=&- \beta(t) P_C \{ G^{-1}( C+\gamma G) w_{m+1}\},\label{rtm} \end{eqnarray} where we used (\ref{eq47}) to obtain the last equality. \end{proof} The following computation provides one connection from the residual estimate given in (\ref{residual}) to the error estimate under the assumption in (\ref{omega}). 
One major tool is the following: by the eigenvector decomposition \[C_1^{1/2}B_{1,1}^{-1}C_1^{-1/2}=C_1^{-1/2} (V_C^\top G^{-1} V_C)^{-1} C_1^{-1/2}=XDX^{-1},\] there exist $K>0$ and $\omega> 0$, depending on $B_{1,1}$, such that \beqq\label{omega} \|\exp(-tB_{1,1}^{-1})\|_{C_1} \le K \exp(-t \omega). \eeqq Here we introduce the $C_1$-norms \[ \|x\|_{C_1}=(x^* C_1 x)^{1/2}=\|C_1^{1/2} x\|,\; \|T \|_{C_1}:=\max_{x\neq 0}\frac{\| Tx\|_{C_1}}{\| x\|_{C_1}} \] for vectors $x\in \IC^n$ and matrices $T\in \IR^{n\times n}$. For instance, one can choose $K=\|X\| \|X^{-1}\|$ and choose $\omega$ to be the smallest eigenvalue of the symmetric part \[ \frac12\left\{C_1^{1/2} B_{1,1}^{-1}C_1^{-1/2}+ (C_1^{1/2} B_{1,1}^{-1}C_1^{-1/2})^\top\right\}=C_1^{-1/2}\frac{ (V_C^\top G^{-1} V_C)^{-1}+(V_C^\top (G^{-1})^\top V_C)^{-1}}{2} C_1^{-1/2}.\] \begin{thm} Suppose $C, G$ satisfy (\ref{assGC}). Let $r_m(t)$ and $\beta$ be defined in (\ref{rtm}) and (\ref{beta_m}). Let \[\epsilon_m(t)=y_m(t)-y(t)= W_m \varphi_0(-tg_1(H_m)^{-1} )W_m^\top C x(0)-V_C \exp(-tB_{1,1}^{-1}) V_C ^\top x(0). \] Then (\ref{omega}) holds for some constants $\omega, K$ depending on $B_{1,1}^{-1}$, and \begin{eqnarray}\label{bound_phi} && \| P_C \epsilon_m(t)\|_C \le K t \varphi_1(-t \omega) \max_{0\le s\le t}\| B_{1,1}^{-1} V_C ^\top r_m(s)\|_{C_1} \\ &\le& K t \varphi_1(-t \omega) \cdot \| (I+\gamma B_{1,1}^{-1}) V_C ^\top w_{m+1}\|_{C_1} \cdot \sup_{\theta\in [0,1]} |\beta(t\theta )|.\label{eq56} \end{eqnarray} \end{thm} \begin{proof} By (\ref{assGC}) and Prop.~\ref{FCB}, $\cF_C(B)$ lies in the right half plane. This establishes the existence of $K$ and $\omega$ in (\ref{omega}). From (\ref{xt}) and (\ref{xtm}), we can establish one equation relating the error vector $\epsilon_m (t)= y_m(t)-y(t)$ and the residual vector $r_m(t)$, \[ V_C^\top G^{-1} C V_C V_C^\top \frac{d\epsilon_m(t)}{dt}+ V_C^\top \epsilon_m(t)=V_C^\top r_m(t). 
\] Thus the variation-of-constants formula gives \begin{eqnarray} && V_C^\top \epsilon_m(t)=\int_0^t \exp(-(t-s) B_{1,1}^{-1} ) B_{1,1}^{-1} V_C ^\top r_m(s) ds \\ &=& t\int_0^1 \exp(-t(1-\theta) B_{1,1}^{-1} ) B_{1,1}^{-1} V_C ^\top r_m(t\theta) d\theta. \end{eqnarray} Recall the integral representation \[ \varphi_k(-t B_{1,1}^{-1})=\int_0^1 \exp(-(1-\theta)t B_{1,1}^{-1}) \frac{\theta^{k-1}}{(k-1)!} d\theta,\quad k\ge 1, \textrm{ which yields } \|\varphi_1(-t B_{1,1}^{-1})\|_{C_1}\le K \varphi_1(-t \omega). \] Hence, we have the upper bound for the error vector, \begin{eqnarray} &&\| V_C^\top \epsilon_m(t)\|_{C_1}\le t\int_0^1\| \exp(-t(1-\theta) B_{1,1}^{-1} ) \|_{C_1} d\theta\, \{ \sup_{\theta\in [0,1]} \| B_{1,1}^{-1} V_C ^\top r_m(t\theta)\|_{C_1}\} \\ &\le& K t \int_0^1 \exp(-t(1-\theta) \omega ) d\theta\, \{ \sup_{\theta\in [0,1]} \| B_{1,1}^{-1} V_C ^\top r_m(t\theta)\|_{C_1}\}\\ &=& K t \varphi_1(-t \omega ) \, \{ \sup_{\theta\in [0,1]} \| B_{1,1}^{-1} V_C ^\top r_m(t\theta)\|_{C_1}\}. \end{eqnarray} The proof is completed by using (\ref{residual}), \begin{eqnarray} && \| B_{1,1}^{-1} V_C ^\top r_m(t\theta)\|_{C_1}\le \| B_{1,1}^{-1} V_C ^\top P_C (G^{-1} C V_C V_C^\top +\gamma I) w_{m+1}\|_{C_1} \sup_{\theta\in [0,1]} |\beta(t\theta )| \\ & =& \| (I+\gamma B_{1,1}^{-1}) V_C ^\top w_{m+1}\|_{C_1} \cdot \sup_{\theta\in [0,1]} |\beta(t\theta )|. \end{eqnarray} \end{proof} \subsection{Error bound inequality} The previous residual analysis does not explicitly reveal the error convergence behavior as the Krylov dimension increases. In the following, we shall establish one upper bound, depending on the time span $t$, the dimension $m$ and $\gamma$, to show the convergence in computing the matrix exponentials. The literature~\cite{Saad92,Hoch} shows that the error of $m$-dimensional Krylov approximations of matrix exponentials can decay at least linearly (often super-linearly) as the Krylov dimension increases. 
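This decay can be illustrated in the present $C$-orthogonal setting. The following NumPy/SciPy sketch (dense toy problem; all sizes and data are arbitrary) sweeps the Krylov dimension $m$ and records the error of the $\varphi_0$-term against the exact value; the error drops to round-off once $m$ reaches $\textrm{rank}(C)$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
N, n, gamma, t = 14, 8, 0.5, 1.0
M = rng.standard_normal((N, n)); C = M @ M.T
A = rng.standard_normal((N, N))
G = A @ A.T + N * np.eye(N)

lam, V = np.linalg.eigh(C)
VC = V[:, lam > 1e-12 * lam.max()]
P = VC @ VC.T
K = np.linalg.inv(C + gamma * G) @ C
x0 = P @ rng.standard_normal(N)

B11 = VC.T @ np.linalg.inv(G) @ C @ VC
y_exact = VC @ (expm(-t * np.linalg.inv(B11)) @ (VC.T @ x0))

def krylov_phi0_term(m):
    """m-step C-orthogonal Arnoldi approximation of the phi_0-term."""
    W = np.zeros((N, m + 1)); H = np.zeros((m + 1, m))
    W[:, 0] = x0 / np.sqrt(x0 @ C @ x0)
    for j in range(m):
        w = P @ (K @ W[:, j])
        for i in range(j + 1):
            H[i, j] = w @ C @ W[:, i]
            w = w - H[i, j] * W[:, i]
        H[j + 1, j] = np.sqrt(max(w @ C @ w, 0.0))
        if H[j + 1, j] > 1e-10:                # skip normalization at breakdown
            W[:, j + 1] = w / H[j + 1, j]
    Wm, Hm = W[:, :m], H[:m, :m]
    D = (np.linalg.inv(Hm) - np.eye(m)) / gamma   # g1(H_m)^{-1}
    return Wm @ (expm(-t * D) @ (Wm.T @ C @ x0))

errors = [np.abs(krylov_phi0_term(m) - y_exact).max() for m in range(1, n + 1)]
```

The sweep is only qualitative; the quantitative rates are the subject of the bounds below, which this sketch does not attempt to verify.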
We shall examine the case where the $C$-orthogonal Arnoldi iterations are employed to implement the shift-and-invert method. The following error bound shows the effectiveness of the $C$-orthogonal Arnoldi algorithms in solving for $x_\cR(t)$ of (\ref{sys}) under (\ref{assGC}). From (\ref{BS}), (\ref{solx3}) and (\ref{x_apr}), the quality of $x_a$ in (\ref{x_apr}) can be analyzed via the following inequality, \begin{eqnarray} \label{x_bd0} &&\| x_\cR (t)-x_a(t)\|_C \le \|\{V_C f(S_{1,1}) V_C ^\top -W_m^{(0)} f(H_m^{(0)}) {W_m^{(0)} }^\top C \} x(0) \|_C \\ &+& t\|\{V_C f_1(S_{1,1}) V_C ^\top -W_m^{(1)} f_1(H_m^{(1)} ){W_m^{(1)} }^\top C \}G^{-1}u(0) \|_C \label{eq103'0}\\ &+& t^2\|\{V_C f_2(S_{1,1}) V_C ^\top -W_m^{(2)} f_2(H_m^{(2)} ){W_m^{(2)} }^\top C \}G^{-1}u'(0) \|_C. \label{eq104'0}\end{eqnarray} \subsubsection{Convergence} Suppose $G$ is only \textit{positive semi-definite}. The following Theorem~\ref{Thm3.1} is one error bound for the $\varphi_l$ functions with $l\ge 1$ (from Theorem 5.9 of~\cite{Tanja14}). Since the analysis cannot be used in the $\varphi_0$-case, we consider $\varphi_1$ for the $x(0)$ term; i.e., with $\varphi_0(-x)=(-x)\varphi_1(-x)+1$ and the definitions (\ref{def_f}), (\ref{f1}), we have \[ f(S_{1,1})= (-t) f_1(S_{1,1})+I, \] which gives the $\varphi_1$-computation for $f(S_{1,1})$, \begin{eqnarray} && V_C f(S_{1,1}) V_C^\top x(0)= (-t) V_C f_1(S_{1,1}) V_C^\top x(0)+P_C x(0) \\ &\approx & (-t) ( W_m f_1(H_m) W_m^\top C) x(0)+P_C x(0). \end{eqnarray} Hence, \begin{eqnarray} \label{x_bd} &&\| x_\cR (t)-x_a(t)\|_C \le t\|\{ V_C f_1(S_{1,1}) V_C ^\top - W_m^{(0)} f_1(H_m^{(0)}) {W_m^{(0)} }^\top C \} x(0) \|_C \\ &+& t\|\{V_C f_1(S_{1,1}) V_C ^\top -W_m^{(1)} f_1(H_m^{(1)} ){W_m^{(1)} }^\top C \}G^{-1}u(0) \|_C \label{eq103'}\\ &+& t^2\|\{V_C f_2(S_{1,1}) V_C ^\top -W_m^{(2)} f_2(H_m^{(2)} ){W_m^{(2)} }^\top C \}G^{-1}u'(0) \|_C. 
\label{eq104'}\end{eqnarray} Applying Theorem~\ref{Thm3.1} below to~(\ref{x_bd}) gives Theorem~\ref{Thm3}, which describes the convergence to $x_\cR(t)$ in the positive semi-definite case. The convergence in $m$ is at least sub-linear. \begin{thm}\label{Thm3.1} Let $A$ satisfy $\cF(A)\subseteq \IC_0^-$ and let $P_m=V_mV_m^\top$ be the orthogonal projection onto the shift-and-invert Krylov subspace $Q_m(A,v)$. For the restriction $A_m=P_m A P_m$ of $A$ to $Q_m(A,v)$, we have the error bound \[ \|\varphi_l(A) v-\varphi_l(A_m) v\|\le \frac{C(l,\gamma)}{m^{l/2}} \|v\|, \; l\ge 1. \] \end{thm} \begin{thm}\label{Thm3} Suppose that $C,G$ are positive semi-definite and $C$ is symmetric. Then $ \cF_C(G^{-1} C)$ lies in the right half complex plane. Replace the $x(0)$-term $W_m^{(0)} f(H_m^{(0)}) {W_m^{(0)} }^\top C\, x(0)$ of $x_a(t)$ in (\ref{x_apr}) with $ - S( W_m f_1(H_m) W_m^\top C) x(0)+x(0)$. Then \beqq\label{error0} \| x_\cR (t)-x_a(t)\|_C \le \frac{C(1,\gamma )}{m^{1/2}} \|S\|_C \|x(0)\|_C+ \frac{C(1,\gamma )}{m^{1/2}} \|u(0)\|_C+ \frac{C(2,\gamma )}{m^{2/2}} \|u'(0)\|_C. \eeqq \end{thm} \begin{proof} We shall verify the conditions stated in Theorem~\ref{Thm3.1}. Let \beqq \label{Adef} A:=-C_1^{1/2} B_{1,1}^{-1} C_1^{-1/2}. \eeqq Since $V_C^\top G^{-1} V_C$ is positive semi-definite, the calculation \[ -A= C_1^{1/2} B_{1,1}^{-1} C_1^{-1/2}=C_1^{1/2} (V_C^\top G^{-1} C V_C)^{-1} C_1^{-1/2}=C_1^{-1/2} (V_C^\top G^{-1} V_C)^{-1} C_1^{-1/2} \] implies that the numerical range $\cF(-A)$ lies in the right half complex plane. Let $Q_m(A,v)$ be the shift-and-invert Krylov subspace \[ Q_m(A,v)=span\{v, ( I-\gamma A)^{-1} v, \ldots, ( I-\gamma A)^{-(m-1)} v\}. \] Note that the definition of $S_{1,1}$ gives \beqq\label{SBeq} S_{1,1}=V_C^\top (C+\gamma G)^{-1} CV_C =(I+\gamma B_{1,1}^{-1})^{-1}, \eeqq and \[ A=-\gamma^{-1} C_1^{1/2} (S_{1,1}^{-1}-I)C_1^{-1/2}.
\] From (\ref{Adef}), we have \[ (I-\gamma A)^{-1}=C_1^{1/2} ( I+ \gamma B_{1,1}^{-1})^{-1} C_1^{-1/2}=C_1^{1/2} S_{1,1} C_1^{-1/2}. \] Thus, the subspace $Q_m(A,v)$ is actually the Krylov subspace $K_m(C_1^{1/2} S_{1,1} C_1^{-1/2} , v)$, i.e., \[ Q_m(A,v)=span\{v, C_1^{1/2} S_{1,1} C_1^{-1/2} v, \ldots, C_1^{1/2} S_{1,1}^{m-1} C_1^{-1/2} v\}. \] Let $V_m$ consist of orthonormal basis vectors of $K_m(C_1^{1/2} S_{1,1} C_1^{-1/2} , v)$. Then the Gram--Schmidt process yields an Arnoldi decomposition with an upper Hessenberg matrix $H_m$, \beqq\label{eq108} (I-\gamma A)^{-1}V_m= C_1^{1/2} S_{1,1} C_1^{-1/2}V_m=V_m H_m. \eeqq The orthogonality $V_m^\top V_m=I$ gives \[ H_m=V_m^\top C_1^{1/2} S_{1,1} C_1^{-1/2}V_m. \] Simplifying (\ref{eq108}) yields \[ V_m (I-H_m^{-1})=\gamma A V_m. \] Let $P_m:=V_mV_m^\top $ be the orthogonal projection onto $Q_m(A,v)$, and $A_m$ be the restriction of $A$ on $Q_m(A,v)$, \[ A_m=P_m A P_m=P_m \gamma^{-1} (I-H_m^{-1}) P_m. \] Let $v=C^{1/2} u(0)$ and $V_m =C^{1/2} W_m$. The construction of $W_m$ ensures that its columns lie in the range of $V_C$. Theorem~\ref{Thm3.1} indicates \begin{eqnarray} &&\| \{V_C f_l(S_{1,1}) V_C^\top-W_m f_l(H_m) W_m^\top C\} u(0) \|_C\\ &=& \| C^{1/2}\{V_C f_l(S_{1,1}) V_C^\top-W_m f_l(H_m) W_m^\top C^{1/2}\} v \| \\ &=& \| \{V_C^\top C^{1/2} V_C f_l(S_{1,1}) V_C^\top-V_C^\top V_m f_l(H_m) W_m^\top C^{1/2}\} v \| \\ &=& \left\|C_1^{1/2} \varphi_l( \gamma^{-1}(I-S_{1,1}^{-1})) C_1^{-1/2}v - P_m\varphi_l( \gamma^{-1} (I-H_m^{-1}) ) P_m v \right \|\\ &=& \|\varphi_l(A) v-\varphi_l(A_m) v \| \le \frac{C(l,\gamma)}{m^{l/2}} \|u(0)\|_C,\label{u0term} \end{eqnarray} where $C(l,\gamma)$ is a constant depending on $l,\gamma$, but independent of $m$ or $A$. Take $l=1$ for the $u(0)$-term. Similar arguments apply to the $u'(0)$-term. Lastly, for the first term involving $x(0)$, since $V_CS_{1,1} V_C^\top =S$, the difference of $\varphi_1$ tends to $0$ as $m\to \infty$.
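The shift-and-invert construction used in this proof can be sketched numerically. The toy version below (hypothetical dense matrices, Euclidean orthogonality in place of the $C$-inner product) runs Arnoldi on $S=(I-\gamma A)^{-1}$, recovers $A_m=\gamma^{-1}(I-H_m^{-1})$, and checks that the error of $\varphi_1(A)v$ decreases with $m$:

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    # phi_1(M) = M^{-1}(exp(M) - I)
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def si_arnoldi_phi1(A, v, m, gamma):
    # Approximate phi_1(A) v from Q_m(A, v) = K_m((I - gamma A)^{-1}, v)
    n = A.shape[0]
    S = np.linalg.inv(np.eye(n) - gamma * A)
    V = np.zeros((n, m)); H = np.zeros((m, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m - 1):
        w = S @ V[:, j]
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = S @ V[:, m - 1]                          # last column of H_m
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    Am = (np.eye(m) - np.linalg.inv(H)) / gamma  # A_m = gamma^{-1}(I - H_m^{-1})
    e1 = np.zeros(m); e1[0] = beta
    return V @ (phi1(Am) @ e1)

rng = np.random.default_rng(0)
n = 60
M = rng.standard_normal((n, n))
A = -(M @ M.T) / n - np.eye(n)   # symmetric, field of values in the left half plane
v = rng.standard_normal(n)
exact = phi1(A) @ v
errs = [np.linalg.norm(si_arnoldi_phi1(A, v, m, gamma=0.1) - exact)
        for m in (4, 8, 16)]
```

The errors drop rapidly with $m$, consistent with the bound of Theorem~\ref{Thm3.1}.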
\end{proof} \subsubsection{Linear convergence} When $G$ is positive definite, we can derive (\ref{error1}) under the framework in~\cite{Hoch}. We estimate the error $\{V_C f(S_{1,1})- W_m^{(0)} f(H_m^{(0)}) {W_m^{(0)} }^\top C V_C \} v$ in (\ref{x_bd}) for any nonzero vector $w=V_C v$ as follows. Since $f$ in (\ref{def_f}) is an analytic function on $\IC-\{0\}$, $f(S_{1,1}) v$ and its Krylov space approximation have the Cauchy integral expressions (Definition 1.11~\cite{Higham}) \begin{eqnarray} &&V_C f(S_{1,1}) v=\frac{1}{2\pi i}\int_\Gamma f(\lambda) V_C (\lambda I- V_C^\top S V_C )^{-1} v d\lambda=\frac{1}{2\pi i}\int_\Gamma f(\lambda) (\lambda I- S )^{-1} w d\lambda,\label{eq69}\\ && W_m f(H_m) W_m^\top C V_C v=\frac{1}{2\pi i}\int_\Gamma f(\lambda) W_m (\lambda I-H_m)^{-1} W_m^\top C w d\lambda,\label{eq70} \end{eqnarray} where $\Gamma$ can be a closed contour enclosing all the eigenvalues of $S_{1,1}:=V_C^\top S V_C$, but not enclosing $0$. The following theorem shows the effectiveness of $C$-orthogonality Arnoldi algorithms in solving $x_\cR(t)$ of (\ref{sys}) under (\ref{assGC}). Since $\rho_0/r<1$, the error tends to $0$ as $m\to \infty$. The proof is listed in the appendix. \begin{theorem}\label{thm4} Suppose $C,G$ satisfy (\ref{assGC}). Then $\cF_C(B)$ is bounded by a disk $\cD(c_1,\rho_1)$ with a real number $c_1>\rho_1$, i.e., $0$ is not inside $\cF_C(B)$, and thus Prop.~\ref{Bound1} indicates that $\cF_C(S)$ is bounded by a disk $\cD(c_0, \rho_0)$ with $c_0>\rho_0$. Take $\Gamma$ to be the circle with centre $c_0$ and radius $r\in (\rho_0, c_0)$. Then \beqq\label{error1} \| x_\cR (t)-x_a(t)\|_C \le \max_{\lambda\in \Gamma}( |f(\lambda)|\|x(0)\|_C+|f_1(\lambda)|\|u(0)\|_C+|f_2(\lambda)|\|u'(0)\|_C ) \cdot \frac{4 }{ (r-\rho_0)} (\frac{\rho_0}{r})^m.
\eeqq \end{theorem} \subsection{Upper bounds $E(\gamma)$ with $h/\gamma$ fixed} From Prop.~\ref{FCB}, $\cF_C(B)$ lies in the right half plane with $c_1>\rho_1>0$, \[ \cF_C(B)\subset \cD(c_1,\rho_1).\] Let $\mu_1:=c_1-\rho_1, \mu_2:=c_1+\rho_1$ be lower and upper bounds for $\Re(\cF_C(B))$, respectively. Since M{\"o}bius transformations map generalized circles to generalized circles, the function $g$ maps $\cD(c_1,\rho_1)$ in the $\mu$-plane to $\cD(c_0, \rho)$ in the $\lambda$-plane, where $c_0, \rho$ are functions of $\gamma$, \beqq\label{rho_fun} c_0=\frac{1}{2}\left((1+\gamma/\mu_2)^{-1}+(1+\gamma/\mu_1)^{-1}\right),\; \rho=\frac{1}{2}\left((1+\gamma/\mu_2)^{-1}-(1+\gamma/\mu_1)^{-1}\right).\eeqq Consider the $\varphi_0$ case, with $f$ defined in (\ref{def_f}) and $t=h$, \[ f(\lambda)=\exp(-(h/\gamma) (\lambda^{-1}-1)). \] One upper bound for the right hand side of (\ref{error1}) is given by \beqq\label{eq82} |f(c_0+r)|\cdot \frac{4}{(r-\rho)}\cdot (\frac{\rho}{r})^m. \eeqq To simplify the computation, choose $\Gamma$ to be the circle tangent to the imaginary axis at $0$ and sharing its centre with $\cD(c_0, \rho)$, i.e., $r=c_0$. Here we are interested in asymptotic results, i.e., $m\to \infty$; thus, for simplicity, we omit the absolute constant $4$ in (\ref{eq82}), \beqq\label{Efun} E(\gamma):=\exp((h/\gamma)(1-(2c_0)^{-1}))\left(\frac{\rho}{c_0}\right)^m \frac{1}{(c_0-\rho)}. \eeqq \subsubsection{$\varphi_0$ functions}\label{case1} Suppose the eigenvalue information on $B_{1,1}$ is not available. It is natural to choose $\gamma$ proportional to $h$, as in \cite{AWESOME}. The following computation gives a qualitative analysis of $E$ with respect to $\gamma$. Here we focus on the $\varphi_0$ case. The arguments apply to the other $\varphi_k$ functions after proper modifications. The proofs are tedious and are placed in the appendix.
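The quantities in (\ref{rho_fun}) and (\ref{Efun}) are easy to check numerically. The sketch below (with hypothetical values of $\mu_1,\mu_2,\delta,m$) verifies the closed form of the ratio $\rho/c_0$ appearing in Prop.~\ref{num} and the eventual decay of $\log E(\gamma)$ once $\gamma$ is well above $\mu_2$:

```python
import numpy as np

def c0_rho(gamma, mu1, mu2):
    # (rho_fun): images of mu2, mu1 under g(mu) = (1 + gamma/mu)^{-1}
    a = 1.0 / (1.0 + gamma / mu2)
    b = 1.0 / (1.0 + gamma / mu1)
    return (a + b) / 2.0, (a - b) / 2.0

def logE(gamma, delta, mu1, mu2, m):
    # log of (Efun), with r = c0 (circle tangent to the imaginary axis)
    c0, rho = c0_rho(gamma, mu1, mu2)
    return delta * (1.0 - 1.0 / (2.0 * c0)) + m * np.log(rho / c0) - np.log(c0 - rho)

mu1, mu2, m, delta = 1e-2, 1.0, 10, 2.0       # hypothetical parameters
gammas = np.logspace(-4, 3, 400)
for g in gammas:
    c0, rho = c0_rho(g, mu1, mu2)
    closed = (mu2 - mu1) * g / (mu1 * (mu2 + g) + mu2 * (mu1 + g))
    assert abs(rho / c0 - closed) < 1e-12     # closed-form radius ratio
vals = np.array([logE(g, delta, mu1, mu2, m) for g in gammas])
tail = vals[gammas > 10.0 * mu2]
assert np.all(np.diff(tail) < 0.0)            # log E decays for large gamma
```

Working with $\log E$ avoids the underflow of the $\exp$ factor for very large $\gamma$.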
Introduce $\rho_*, \gamma_*$ as follows, where $c_0(\gamma_*)=1/2$: \[ \gamma_*=\sqrt{\mu_1\mu_2},\; \rho(\gamma_*)=\rho_*:=\frac{1}{2}\frac{\sqrt{\mu_2}-\sqrt{\mu_1}}{\sqrt{\mu_2}+\sqrt{\mu_1}}. \] The following shows that the base $\rho/c_0$ of $(\rho/c_0)^m$ in $E$ gets smaller as $\gamma$ approaches $0$. In particular, at $\gamma=\gamma_*$, \[ \frac{\rho}{c_0}=\frac{\sqrt{\mu_2}-\sqrt{\mu_1}}{\sqrt{\mu_2}+\sqrt{\mu_1}}. \] \begin{prop}\label{num} As $\gamma$ increases in $[0,\infty)$, the radius ratio \[ \frac{\rho}{c_0}=\frac{(\mu_2-\mu_1)\gamma}{\mu_1(\mu_2+\gamma)+\mu_2(\mu_1+\gamma)} \] increases. \end{prop} Prop.~\ref{prop3.5} indicates that when $\delta=h/\gamma$ is kept fixed, the slope of $E(\gamma)$ decreases as $\gamma$ increases from $0$ to $\infty$. The graph of $E(\gamma)$ asymptotically looks like a $\cap$-shaped curve. In particular, $E(\gamma)$ can decay rapidly when $\gamma$ is sufficiently larger than $\mu_2$. \begin{prop}\label{prop3.5} Let $\delta=h/\gamma$ be fixed. Let $\omega=\mu_1/\mu_2$ and \[ \epsilon(\gamma)=\delta-\frac{2m\omega}{1+3\omega}(1+\sqrt{\omega})^2-\frac{(1+\sqrt{\omega})^2}{1+\omega}. \] For $\omega$ close to $0$ with $\epsilon>0$, we have \[ - \frac{d}{d\gamma}\log E(\gamma)\ge (\sqrt{\mu_1}+\sqrt{\mu_2})^{-2} \epsilon. \] Then $E(\gamma)$ decays exponentially for $\gamma>\mu_2$, \[ E(\gamma)\le E(\mu_2)\exp(-\epsilon (\gamma-\mu_2)(\sqrt{\mu_1}+\sqrt{\mu_2})^{-2}). \] \end{prop} \begin{rem} With a similar calculus computation, the error upper bound function $E(\gamma)$ behaves as a ``flat'' function for small $\gamma$. In particular, when $\gamma\le \gamma_*$, (\ref{eq102}) gives \beqq\label{eq103} \frac{m}{\gamma} (\frac{2\mu_1\mu_2}{(\mu_1+\mu_2) \gamma+2\mu_1\mu_2})\xi =2\mu_1\mu_2\frac{m}{\gamma} \cdot \frac{ ( (\mu_1+\mu_2)\gamma+2\mu_1\mu_2 )}{ \mu_1(\mu_2+\gamma)^2+ \mu_2(\mu_1+\gamma)^2 } \ge \frac{m}{\gamma} \left(\frac{(\mu_1+\mu_2)\gamma+2\mu_1\mu_2}{(\sqrt{\mu_1}+\sqrt{\mu_2})^2}\right).
\eeqq Hence, with $\gamma$ sufficiently close to $0$, the lower bound in (\ref{eq103}) will eventually exceed $\delta$, which indicates that $E(\gamma)$ increases, i.e., $\frac{d}{d\gamma}\log E(\gamma)>0$ in (\ref{eq102}). However, as $\mu_1$ is very close to $0$, the increase of $\log E$ can be very slow, $O(\log \gamma)$. For instance, at $\gamma=\mu_1$, \[ \frac{d}{d\log \gamma}\log E(\gamma)=2\mu_1\mu_2 m\frac{\mu_1+3\mu_2}{(\mu_1+\mu_2)^2+4\mu_1\mu_2}\le 6\mu_1m.\] \end{rem} \subsubsection{Higher order functions $\varphi_k$ } The phi-functions \[ \varphi_0(z)=\exp(z),\; \varphi_k(z)=z^{-1} (\varphi_{k-1}(z)-1/((k-1)!)) \] were initially proposed to serve as error bounds for the matrix exponential function, e.g., Theorem 5.1 in \cite{Saad92}. In applications, one can use any function $\varphi_k$, $k>0$, to compute $\exp(-B_{1,1}^{-1} h) v$. Researchers~\cite{AWESOME} observe dissimilar error behaviours, even though two mathematically equivalent phi-function formulations are computed based on Krylov subspace approximations, \begin{eqnarray} &&\varphi_0(-h B_{1,1}^{-1})B_{1,1}v,\\ &&-h\varphi_1(-h B_{1,1}^{-1})v+B_{1,1}v.\label{eq122} \end{eqnarray} Here we focus on the computation framework in (\ref{eq122}). With small Krylov dimensions, the error mainly originates from the Krylov approximation error of $h\varphi_1(-h B_{1,1}^{-1})$. To estimate the error, we can choose $f$ in (\ref{error1}) to be \beqq\label{f1_eq} f(\lambda):=h\varphi_1( (h/\gamma) (1-\lambda^{-1})) =h\{(h/\gamma) (1-\lambda^{-1})\}^{-1} \{\exp((h/\gamma) (1-\lambda^{-1}))-1\}. \eeqq For general $k\ge 1$, choose \beqq\label{phik_f} f(\lambda)= f(g(\mu))=h^k \varphi_k((h/\gamma) (1-\lambda^{-1})) =h^k \varphi_k(- h/\mu) \eeqq in estimating the error of the $\varphi_k$ case, \beqq\label{phi2} \exp(-h B_{1,1}^{-1})u=u+\sum_{j=1}^{k-1}\frac{(-h B_{1,1}^{-1})^j}{j!} u+ (-h)^k \varphi_k(-h B_{1,1}^{-1})\cdot ( B_{1,1}^{-1})^k u.
\eeqq Prop.~\ref{h_bd} shows that $1/k!$ is an upper bound for $|\varphi_k|$ on the negative real axis for $k\ge 1$, and thus $f$ has the upper bound $h^k/k!$. This new upper bound mainly brings two adjustments to the original $\cap$-shaped error bound. First, the fast exponential drop for large $h$ disappears, since the upper bound for this function $f$ is lifted to the increasing function $h^k/k!$. Second, polynomial decay for small $\gamma$ can be obtained, in contrast to the original stagnation in the $\varphi_0$-case. \begin{prop} \label{h_bd} Consider an integer $k>0$. Let $f$ be given in (\ref{phik_f}). Then with $g(\mu)=(1+\mu^{-1}\gamma)^{-1}$, $|f(g(\mu))|$ can be bounded by $h^k/(k!)$. \end{prop} \begin{proof} Let $\lambda=g(\mu)$. Claim: for each positive integer $k$, we have \[ |\varphi_k (-h\mu^{-1})|\le {(k!)}^{-1}. \] By Taylor's theorem, if $z<0$, then with some $\xi$ between $0$ and $z$, \[ \varphi_k(z)=z^{-k} \left(\exp(z)-1-\sum_{j=1}^{k-1}\frac{z^j}{j!}\right )= \frac{\exp(\xi) z^k/k!}{z^k}=\frac{\exp(\xi)}{k!}. \] Since $\xi\in [-h\mu^{-1},0 ]$, we have \beqq\label{phi_1} |f(g(\mu))|= h^k| \varphi_k(-h\mu^{-1})|\le h^k( k!)^{-1} \max_{\xi} |\exp(\xi)|={(k!)}^{-1} h^k. \eeqq \end{proof} \begin{prop} Consider $h$ proportional to $\gamma$, $\delta=h/\gamma$. Error bounds corresponding to the $\varphi_k$ case can be described by \beqq\label{E_phi_k} E(\gamma):=h^k ( \frac{\rho}{c_0})^m\frac{1}{c_0-\rho}. \eeqq Then \[ \frac{d\log E}{d\log \gamma}\ge k+1,\; \forall \gamma>0.\] \end{prop} \begin{proof} From (\ref{E_phi_k}), we have \[ \log E=k\log (\delta \gamma)+m\log \frac{\rho}{c_0}-\log(c_0-\rho). \] To explore the dependence on $\gamma$, taking the derivative with respect to $\gamma$ yields \begin{eqnarray} && \frac{d}{d\gamma}\log E(\gamma)=\frac{d}{d\gamma} \{ k\log \gamma+m\log \frac{\rho}{c_0}-\log (c_0-\rho)\}\\ &=&\frac{k}{\gamma} +2m \left((\frac{1}{\mu_1}+\frac{1}{\mu_2}) \gamma^2+2\gamma\right)^{-1}+(\mu_1+\gamma)^{-1}.
\label{eq97} \end{eqnarray} Hence, for all $\gamma>0$, we have \[ \frac{d\log E}{d\log \gamma}=k+2m( (\frac{1}{\mu_1}+\frac{1}{\mu_2}) \gamma+2 )^{-1} +1-\frac{\mu_1}{\mu_1+\gamma}>k+1. \] \end{proof} \section{Simulations}\label{simulations} \label{sec_problem} Previous work in~\cite{chen2018transient} and~\cite{AWESOME} is recalled to illustrate the stability issue in solving semi-explicit DAEs by the ordinary Arnoldi method. \subsection{Stability Problems of DAEs} \label{sec:sing} \input{./DAC18_RLC_Fig/section_3_Stability_1} \label{sec:sens} \input{./DAC18_RLC_Fig/section_3_Stability_3_rev} This simple example illustrates that whether or not the numerical range of $B$ is located in the right half plane affects the sensitivity of numerical integration methods. Indeed, since the matrix $P_C G^{-1} C$ is \[ \left( \begin{array}{cccc} 0 & 0 &0 & 0 \\ 0 & 0 &0 & 0 \\ 0 & 0 & 5\times 10^{-14} & 5\times 10^{-10} \\ 0 & 0 & -5\times 10^{-10} &0 \end{array} \right), \] $\cF_C(B)$ is the ellipse with centre $(2.5\times 10^{-14},0)$, with semi-axis $2.5\times 10^{-14}$ along the real direction and semi-axis approximately $5\times 10^{-10}$ along the imaginary direction. By (\ref{HS}) and Prop.~\ref{Bound1}, the Rayleigh quotients of the matrix $H_m$ always lie in the image of the ellipse under the function $g$. Thus, $\cF_C(H_m)$ lies in the disk $\cD(1/2,1/2)$. In contrast, the ordinary Arnoldi iterations generate an upper Hessenberg matrix $H_m$ whose numerical range $\cF(H_m)$ does not necessarily lie in $\cD(1/2,1/2)$, since part of $\cF(B)$ even lies in the left half plane. \begin{figure} \begin{center} \includegraphics[width=0.35\textwidth]{FB_figr.pdf} \includegraphics[width=0.35\textwidth]{FCB_figr.pdf} \caption{ Illustration of $\cF(G^{-1}C)$ (left) and $\cF_C(G^{-1}C)$ (right) under $5\times 10^5$ Rayleigh quotient realizations from $\IC^4$.
} \label{FB} \end{center} \end{figure} \subsection{RLC networks} To illustrate the performance of the proposed Arnoldi algorithm on the case with $G$ only positive semi-definite, we use one PDN, consisting of $260$ resistors, $160$ capacitors and $160$ inductors. The system matrix $C$ is positive semi-definite and symmetric (actually diagonal). The matrix $G$ is positive semi-definite, but not symmetric. The eigenvalues of $ B_{1,1}=V_C^\top G^{-1} CV_C$ are in the range of $[10^{-17}, 10^{-8}]$. The distribution of the eigenvalues is plotted in Fig.~\ref{Fig_RLCmesh_Eigens}. The transient response of the RLC mesh circuit is calculated with a single step integration. Assume the slope of the input current source is unchanged within the current step. Starting from the zero initial state $x(0)$, the response $x(h)$ of the circuit at time $h$ is derived. The exact solution is computed by directly solving the differential equations and algebraic equations in (\ref{eq1},\ref{eq2}). The shift parameter $\gamma$ is set as $h/2$ empirically. The matrix exponentials in the solution are evaluated at different time step sizes $h$ with increasing dimension $m$ of the Krylov subspace. For simplicity, we consider $x(0)=0=u(0)$, and the solution is given by $x(h)=h^2V_C \varphi_2(-h B_{1,1}^{-1}) C_1^{-1} V_C^\top u'(0)$. Since \[ \varphi_0(t)v=t^2\varphi_2(t)v+v+tv=t\varphi_1(t)v+v,\] the matrix exponential $\varphi_2(-h B_{1,1}^{-1}) v$ appearing in the solution can be computed with a Krylov subspace approximation of either the $\varphi_0$, $\varphi_1$ or $\varphi_2$ function. Consider the following three approaches to compute the Krylov subspace approximation. \begin{enumerate} \item[(a)] The original Arnoldi method with implicit regularization. \item[(b)] The original Arnoldi method with implicit regularization + numerical pruning of spurious eigenvalues. \item[(c)] The Arnoldi method with structured orthogonality + numerical pruning of spurious eigenvalues.
\end{enumerate} The left to right columns in Fig.~\ref{Fig_RC_error_orig} show the distribution of the absolute error after applying approaches (a), (b) and (c), respectively. Here the absolute errors are focused on the matrix exponentials; thus, the subfigures from the top row to the bottom row show the absolute errors of the following matrix exponentials. \begin{enumerate} \item[(i)] $\varphi_0$ function: $ V_C^\top \varphi_0(h B_{1,1}^{-1}) V_C G^{-1} V_C^\top C_1 V_C G^{-1} u'(0)$. \item[(ii)] $\varphi_1$ function: $h V_C^\top \varphi_1(h B_{1,1}^{-1}) V_C G^{-1} u'(0)$. \item[(iii)] $\varphi_2$ function: $h^2 V_C^\top \varphi_2(h B_{1,1}^{-1}) C_1^{-1} V_C G^{-1} u'(0)$. \end{enumerate} Experiments in Fig.~\ref{Fig_RC_error_orig} show that the upper Hessenberg matrix can contain many spurious eigenvalues. From (\ref{HS}) and (\ref{rho_fun}), $\cF_C(S)\subseteq \cD(1/2,1/2)$ and thus $\cF(H_m)\subseteq \cD(1/2,1/2)$. The region with spurious eigenvalues is plotted in red color. When the original Arnoldi iterations are used, the upper Hessenberg matrix can lose the positive definite property and the absolute error can grow extremely large. Clearly, the issue is resolved with approach (c); see the right column. Notice that for $\gamma$ close to 0, the set $\cF(H_m)$ is very close to $1$ from (\ref{rho_fun}), and rounding errors could easily contaminate the computations of $H_m$, such that $\cF(H_m)$ fails to lie in $\cD(1/2,1/2)$. Hence, proper numerical pruning is required. Observe that the error reduces quickly with all $\varphi$ functions as the dimension of the rational Krylov subspace increases, which is consistent with Theorem~\ref{Thm3}. When $h$ is larger than $\mu_2$ (the upper bound for the real parts of the eigenvalues of $B_{1,1}$), the calculation with the $\varphi_0$ function gives the best accuracy.
On the other hand, if $h$ is smaller than the spectrum, the errors with $\varphi_1$ and $\varphi_2$ decrease proportionally to $\gamma$ in the log scale, which alleviates the error stagnation observed in the solution with the $\varphi_0$ function. \begin{figure}[hbt] \centering \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.3cm, clip=true, keepaspectratio, width = 0.38\textwidth ]{eig_dist.pdf} \caption{RLC network: eigenvalues of $B = G^{-1}C$ in log-scale.} \label{Fig_RLCmesh_Eigens} \vspace{-0.2cm} \end{figure} \begin{figure}[hbt] \centering \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_Orig_phi0_NoExcludeSpurious.eps} \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_Orig_phi0_Prune.eps} \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_AWESOME_phi0.eps}\\ \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_Orig_phi1_NoExcludeSpurious.eps} \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_Orig_phi1_Prune.eps} \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_AWESOME_phi1.eps} \\ \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_Orig_phi2_NoExcludeSpurious.eps} \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_Orig_phi2_Prune.eps} \includegraphics[trim = 0.1cm 0.1cm 0.1cm 0.4cm, clip=true, keepaspectratio, width = 0.3\textwidth ] {pdn_Error_AWESOME_phi2.eps} \caption{RLC network with $N=507$: Left to right columns show the absolute error versus $h$ and $m$ with (a) original Arnoldi process, (b) original Arnoldi process+numerical pruning and (c) Arnoldi
process with explicit structured orthogonalization +numerical pruning. } \label{Fig_RC_error_orig} \vspace{-0.35cm} \end{figure}
\section*{Robot HSR Hardware Description} \section*{Appendix 1: Competition results} \begin{table}[t] \begin{center} \caption{Results of recent competitions. [DSPL, domestic standard-platform lea
gue; JSAI, Japanese Society for Artificial Intelligence; METI, Ministry of Economy, Trade and Industry (Japan); OPL, open-platform league; RSJ, Robotics Society of Japan]} \label{tab:result} \begin{tabular}{l|l|l} \hline \multicolumn{1}{c|}{Country} & \multicolumn{1}{c|}{Competition} & \multicolumn{1}{c}{Result} \\ \hline \hline Japan & RoboCup 2017 Nagoya & {\bf @Home DSPL 1st} \\ && @Home OPL 5th \\ \hline Japan & RoboCup Japan Open 2018 Ogaki & @Home DSPL 2nd \\ && @Home OPL 1st \\ && JSAI Award \\ \hline Canada & RoboCup 2018 Montreal & {\bf @Home DSPL 1st} \\ && P\&G Dishwasher Challenge Award \\ \hline Japan & World Robot Challenge 2018 & {\bf Service Robotics Category} \\ && {\bf Partner Robot Challenge Real Space 1st} \\ && METI Minister's Award, RSJ Special Award \\ \hline Australia & RoboCup 2019 Sydney & @Home DSPL 3rd \\ \hline Japan & RoboCup Japan Open 2019 Nagaoka & @Home DSPL 1st \\ && @Home OPL 1st \\ \hline \end{tabular} \end{center} \end{table} Table \ref{tab:result} shows the results achieved by our team in recent competitions. We have participated in the RoboCup and World Robot Challenge for several years, and as a result, our team has won prizes and academic awards. \par Notably, we participated in the RoboCup 2019 Sydney using the system described herein. We were able to demonstrate the performance of HSR and our technologies. Thanks to these results, we were awarded the third prize in that competition. 
\section*{Appendix 2: Links to Team Video and Team Website} \begin{itemize} \item Team Video \\ \url{https://www.youtube.com/watch?v=0loNuukvOec} \\ \item Team Website \\ \url{http://www.brain.kyutech.ac.jp/~hma/wordpress/} \\ \item GitHub \\ \url{https://github.com/hibikino-musashi-athome} \\ \item Facebook \\ \url{https://www.facebook.com/HibikinoMusashiAthome/} \\ \item YouTube \\ \url{https://www.youtube.com/channel/UCJEeZZiDXijz6PidLiOtvwQ} \end{itemize} \section*{Appendix 3: Robot's Software Description} For our robot we are using the following software: \begin{itemize} \item OS: Ubuntu 16.04. \item Middleware: ROS Kinetic. \item State management: SMACH (ROS). \item Speech recognition (English): \begin{itemize} \item rospeex \cite{rospeex}. \item Web Speech API. \item Kaldi. \end{itemize} \item Morphological analysis and dependency structure analysis (English): SyntaxNet. \item Speech synthesis (English): Web Speech API. \item Speech recognition (Japanese): Julius. \item Morphological analysis (Japanese): MeCab. \item Dependency structure analysis (Japanese): CaboCha. \item Speech synthesis (Japanese): Open JTalk. \item Sound source localization: HARK. \item Object detection: point cloud library (PCL) and you only look once (YOLO) \cite{redmon2016you}. \item Object recognition: YOLO. \item Human detection / tracking: \begin{itemize} \item Depth image + particle filter. \item OpenPose \cite{cao2017realtime}. \end{itemize} \item Face detection: Convolutional Neural Network. \item SLAM: hector\_slam (ROS). \item Path planning: move\_base (ROS). \end{itemize} \section{Introduction} Our team, Hibikino-Musashi@Home (HMA), was founded in 2010, and we have been competing annually in the RoboCup@Home Japan Open competition in the open platform league (OPL). Our team is developing a home-service robot, and we intend to demonstrate our robot in this event in 2020 to present the outcome of our latest research.
In RoboCup 2017 Nagoya, we participated in both the OPL and the domestic standard platform league (DSPL), and in RoboCup 2018 Montreal and RoboCup 2019 Sydney we participated in the DSPL. Additionally, in the World Robot Challenge (WRC) 2018, we participated in the service-robotics category of the partner-robot challenge (real space). In the RoboCup 2017, 2018 and 2019 competitions and in the WRC 2018, we used a Toyota human support robot (HSR) \cite{toyota_hsr}. We were awarded the first prize at the WRC 2018 and the third prize at RoboCup 2019. In this paper, we describe the technologies used in our robot. In particular, this paper outlines our object recognition system based on deep learning \cite{hinton2006fast}, our work on improving the speed of the HSR, and a brain-inspired artificial intelligence model, originally proposed by us, that is installed in our HSR. \section{System overview} Figure \ref{fig:softOverview} presents an overview of our HSR system. We have used an HSR since 2016. In this section, we introduce the specifications of our HSR. \subsection{Hardware overview} We participated in RoboCup 2018 Montreal and 2019 Sydney with this HSR. The computational resources built into the HSR are inadequate to support our intelligent systems and cannot extract the maximum performance from them. To overcome this limitation, using an Official Standard Laptop for the DSPL that can fulfill the computational requirements of our intelligent systems has been permitted since RoboCup 2018 Montreal. We use an ALIENWARE (Intel Core i7-8700K CPU, 32GB RAM, and GTX-1080 GPU) as the Official Standard Laptop for the DSPL. Consequently, the computer equipped inside the HSR can be used to run basic HSR software, such as sensor drivers, motion planning, and actuator drivers. This has increased the operational stability of the HSR.
\subsection{Software overview} \begin{figure}[bt] \begin{center} \includegraphics[scale=0.5]{systemblock.png} \caption{Block diagram overview of our HSR system. [HSR, human-support robot; ROS, robot operating system]} \label{fig:softOverview} \end{center} \end{figure} In this section, we introduce the software installed in our HSR. Figure \ref{fig:softOverview} shows the system installed in our HSR. The system is based on the Robot Operating System \cite{ros}. In our HSR system, a laptop computer and, if a network connection is available, a cloud service are used for system processing. The laptop is connected to the built-in computer through the Hsrb interface. The built-in computer specializes in low-layer systems, such as HSR sensor drivers, motion planning, and actuator drivers, as shown in Fig. \ref{fig:softOverview} (c) and (d). Furthermore, the built-in computer runs a sound localization system that uses HARK \cite{hark}, as shown in Fig. \ref{fig:softOverview} (e). \section{Object recognition} In this section, we explain the object recognition system (shown in Fig. \ref{fig:softOverview} (a)), which is based on you only look once (YOLO) \cite{redmon2016you}. To train YOLO, a complex annotation phase is required for labeling objects and their bounding boxes. In the RoboCup@Home competition, predefined objects are typically announced during the setup days right before the start of the competition days. Thus, we have limited time to train YOLO during the competition, and the annotation phase impedes the use of a trained YOLO during the competition days. We therefore utilize an autonomous annotation system for YOLO that uses a three-dimensional (3D) scanner. Figure \ref{fig:annotation1} shows an overview of the proposed system.
\begin{figure}[b] \begin{center} \includegraphics[scale=0.5]{annotation1.png} \caption{Overview of proposed autonomous annotation system for YOLO.} \label{fig:annotation1} \end{center} \end{figure} In this system, QLONE \cite{qlone}, a smartphone application capable of 3D scanning, is used. QLONE makes it easy to create 3D models by placing objects on a dedicated marker and photographing them. We placed the marker and object on a turntable and created a 3D model. In this method, the bottom surface of the object cannot be captured; thus, two 3D models are created for each object by also scanning the object flipped upside down. Figure \ref{fig:annotation2} shows the processing flow to generate training images for YOLO. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.5]{annotation2.png} \caption{Processing flow for generating training images for YOLO.} \label{fig:annotation2} \end{center} \end{figure} Multi-viewpoint images are automatically generated from the two created 3D models (Fig. \ref{fig:annotation2} (a)). Then, we remove the image backgrounds (Fig. \ref{fig:annotation2} (b)). For the backgrounds of the training images, we capture background images, for example, of a table, a shelf, and other items. To adapt to various lighting conditions, we apply the automatic color equalization algorithm \cite{RIZZI20031663} to the background images (Fig. \ref{fig:annotation2} (c)). To incorporate the object images into the background images, we define 20--25 object locations on each background image (the number of object locations depends on the background image). Then, by placing the object images on the defined object locations autonomously, the training images for YOLO are generated (Fig. \ref{fig:annotation2} (d)). For example, with 15 object classes and 306 background images, 400,000 training images are generated. Additionally, annotation data for the training images are generated autonomously because the object labels and positions are known.
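The core compositing step of this pipeline can be sketched as follows (a hypothetical minimal version with synthetic arrays; the function and variable names are illustrative, not our actual code). An RGBA object crop is alpha-blended onto a background at a predefined location, and the corresponding normalized YOLO-style box is emitted as the annotation:

```python
import numpy as np

def paste_object(background, obj, x, y):
    """Paste an RGBA object crop onto an RGB background at (x, y);
    return the image and a normalized YOLO-style (cx, cy, w, h) box."""
    bh, bw, _ = background.shape
    oh, ow, _ = obj.shape
    out = background.copy()
    alpha = obj[:, :, 3:4] / 255.0            # alpha channel as blend mask
    region = out[y:y + oh, x:x + ow, :]
    out[y:y + oh, x:x + ow, :] = (alpha * obj[:, :, :3]
                                  + (1.0 - alpha) * region).astype(np.uint8)
    box = ((x + ow / 2) / bw, (y + oh / 2) / bh, ow / bw, oh / bh)
    return out, box

bg = np.full((240, 320, 3), 120, dtype=np.uint8)   # synthetic "table" background
obj = np.zeros((40, 60, 4), dtype=np.uint8)
obj[..., 0] = 200                                  # a red object crop
obj[..., 3] = 255                                  # fully opaque
img, box = paste_object(bg, obj, 100, 80)
# one YOLO annotation line: "<class> cx cy w h"
label = "0 {:.4f} {:.4f} {:.4f} {:.4f}".format(*box)
```

Because the paste location and object class are chosen by the generator, the label file is produced without any manual annotation.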
Image generation requires approximately 15 min (using six CPU cores in parallel), and training of YOLO requires approximately 6 h when using the GTX1080 GPU on a standard laptop. Even though the generated training data are artificial, the trained YOLO works in actual environments. The accuracy after training for 10,000 epochs is 60.72\% in a mean average precision (mAP) evaluation. \section{High-speed behavioral synthesis} We are working to improve the speed of the HSR from two viewpoints: behavioral synthesis and software processing speed. Regarding behavioral synthesis, we reduce wasted motion by combining and synthesizing several behaviors for the robot. For instance, by moving each joint of the arm during navigation, the robot can proceed to the next action, such as grasping, without wasting any time as soon as it reaches an interim target location. Regarding the processing speed, we aim to operate all software at 30 Hz or higher. To reduce the waiting time for software processing, which causes the robot to stop, the essential functions of the home service robot, such as object recognition and object grasping-point estimation, need to be executed in real time. We optimized these functions for the Tidy Up Here task. We used two optimized methods for that task in the WRC 2018 (Fig. \ref{fig:synthesis}). In the WRC 2018 results, for which we won first place, our achieved speedup was approximately 2.6 times the prior record. Our robot can tidy up within 34 s per object; thus, we expect that it can tidy up approximately 20 objects in 15 min.
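The throughput estimate above follows from simple arithmetic (a back-of-the-envelope sketch; the estimate of approximately 20 objects leaves margin below the raw quotient for travel and perception overhead):

```python
# raw throughput from the reported 34 s-per-object cycle time
seconds_per_object = 34
slot_seconds = 15 * 60                             # a 15-minute task slot
raw_objects = slot_seconds // seconds_per_object   # best-case object count
```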
\begin{figure}[bt] \begin{center} \includegraphics[scale=0.65]{synthesis.jpg} \caption{Speed comparison between a conventional system and the proposed high-speed system.} \label{fig:synthesis} \end{center} \end{figure} \section{Brain-inspired artificial intelligence model} In this section, we explain a brain-inspired artificial intelligence model that consists of a visual cortex model, an amygdala model, a hippocampus model, and a prefrontal cortex model \cite{tanaka2019biai}. Home service robots are expected to have local knowledge that is based on their own experiences. Such local knowledge must be learned during the daily life of the robot. Thus, applying only deep learning \cite{hinton2006fast} to acquire local knowledge is not effective, because the robot cannot collect big data for this local knowledge. To acquire local knowledge, we propose an artificial intelligence model that is inspired by the structure of the brain, because our brain can acquire local knowledge from only a few data. We mainly focus on the amygdala, the hippocampus, and the prefrontal cortex, and the proposed model integrates the functions of these parts of the brain. In addition, we integrated a deep neural network as a visual cortex model into the proposed model. Figure \ref{fig:biai1} shows the proposed model. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.4]{biai1.png} \caption{A brain-inspired artificial intelligence model that consists of a visual cortex model, an amygdala model, a hippocampus model, and a prefrontal cortex model.} \label{fig:biai1} \end{center} \end{figure} The visual cortex model consists of YOLO and the Point Cloud Library (PCL) \cite{rusu2011pcl}. The visual cortex model recognizes the environment and outputs the label of a detected object and its position. The object label is input into the amygdala model. We use the amygdala model for value judgments.
The amygdala model consists of a lateral nucleus (LA) model and a central nucleus (CE) model \cite{tanaka2019amygdala}. The LA and the CE judge the value of the object. The object label and the object position are input into the hippocampus model only if the value of the object is high enough. We use the hippocampus model for event coding. The hippocampus model consists of cue cells, time cells, and social place cells as an internal model of detected events. The cue cells and time cells represent what and when events happen, respectively. The social place cells represent where events happen. The cue cells receive the object label and the social place cells receive the object position. Then, the hippocampus model integrates the outputs of these cells and computes an event vector. The event vector is input into the prefrontal cortex model. We use the prefrontal cortex model for event predictions. The prefrontal cortex model is an echo state network (ESN) \cite{jaeger2001echostate}. The ESN trains on the time-series event vector to predict future events. After training, the ESN predicts time-series events without input from the environment. We evaluated the proposed model using the following experiment. A person walked across in front of a robot, as shown in Fig. \ref{fig:biai2}. The robot detected the person and learned the person's trajectory, which is a type of local knowledge. Subsequently, the robot predicted the trajectory. In addition, the robot added the predicted trajectory as an imaginary potential on a map for SLAM. Figure \ref{fig:biai3} shows the imaginary potential of the predicted trajectory. By using the imaginary potential, the robot was able to avoid the person who walked across in front of it. Therefore, the proposed model successfully acquired local knowledge.
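The prediction step of the prefrontal cortex model can be illustrated with a minimal echo state network in the spirit of Jaeger's formulation. The sketch below is not our implementation: a sine wave stands in for the time-series event vector, and the reservoir size, scaling, and all variable names are illustrative assumptions. Only the linear readout is trained; the reservoir weights stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: fixed random input and recurrent weights (never trained)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (echo state property)

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D input sequence; return the state at every step."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Stand-in "event" signal: teach one-step-ahead prediction of a sine wave
t = np.arange(600)
signal = np.sin(0.1 * t)
X = run_reservoir(signal[:-1])   # reservoir states driven by u(t)
Y = signal[1:]                   # target: u(t+1)

# Train the linear readout by ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out

rmse = np.sqrt(np.mean((pred - Y) ** 2))
print(f"one-step prediction RMSE: {rmse:.4f}")
```

After training, feeding the network's own prediction back as the next input lets it generate the sequence autonomously, which is how the model can anticipate a trajectory without further sensory input.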
\begin{figure}[tb] \begin{center} \includegraphics[scale=0.7]{biai2.png} \caption{Trajectory of a person.} \label{fig:biai2} \end{center} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[scale=0.7]{biai3.png} \caption{Imaginary potential of the predicted trajectory.} \label{fig:biai3} \end{center} \end{figure} \vspace{-0.3cm} \section{Conclusions} In this paper, we summarized the available information about our HSR, which we entered into RoboCup 2019 Sydney. The object recognition and speed improvements that we built into the robot were also described. Currently, we are developing many different pieces of software for an HSR that will be entered into RoboCup 2020 Bordeaux. \vspace{-0.3cm} \section*{Acknowledgment} This work was supported by the Ministry of Education, Culture, Sports, Science and Technology, Joint Graduate School Intelligent Car \& Robotics course (2012-2017), Kitakyushu Foundation for the Advancement of Industry Science and Technology (2013-2015), Kyushu Institute of Technology 100th anniversary commemoration project: student project (2015, 2018-2019) and YASKAWA electric corporation project (2016-2017), JSPS KAKENHI grant numbers 17H01798 and 19J11524, and the New Energy and Industrial Technology Development Organization (NEDO). \newpage
\section{Introduction} The present COVID-19 pandemic is a dynamic and volatile process, with often unpredictable ups and downs in the infected population that make it difficult to predict its future
course. In the absence of any vaccine or definitive drug in the immediate future,\cite{Chen2020} the fight against COVID-19 is a hard and long-drawn bitter battle, with two strategies being put forward. The first is the widely enforced lockdown, quarantine, and social distancing, where the spread of the disease is contained at its inception and only a limited fraction of the population is allowed to be infected.\cite{Prem2020} This model appears to be successful in South Korea, China, and some other Asian countries.\cite{Shen2020} The other model is to allow the virus a relatively unconstrained transmission so that a large fraction of the people develops immunity.\cite{Kamikubo2020} This is called herd immunity (HI); it is favoured by Sweden, and was initially discussed by Germany and England, but largely discarded later. HI can be achieved in two ways: (i) by vaccination, and (ii) by infection. The HI approach is based on the understanding that herd immunity is obtained in a society if 60-70\% of the population gets immunized. Needless to say, herd immunity is preferable through vaccination, as happened with smallpox and measles. Implementation of both models has difficulties. Implementation of lockdown and social distancing requires enormous effort, backed up by resources. On the other hand, the HI model could have adverse consequences for the vulnerable citizens, a subject not adequately discussed. In fact, experiences in Italy and Spain show that the demography can be altered in some regions if HI is given an unconstrained run. Herd immunity ensures an indirect protection from COVID-19 (or any other infectious disease) when a large fraction of the population becomes immunized to the disease.\cite{Fine1993,Anderson1985,John2000} Herd immunity drastically decreases the probability of the presence of an uninfected individual in the vicinity of a presently infected individual.
That is, the infected person is effectively quarantined by the surrounding immunized population. Hence, the chain of propagation breaks. In Fig. \ref{fig1} we pictorially explain the phenomenon of herd immunity. \begin{figure}[H] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig1.jpg} \caption{A pictorial representation of the herd immunity phenomenon. On the left we have a region with the susceptible population and one infected person. The total susceptibles are further divided into vulnerables and resilients. The infection propagates in an unconstrained manner and after a certain period the region possesses a large fraction of immunized population (right). After this immunization any further infection cannot propagate, which indirectly protects the remaining susceptibles. In addition, multiple infected persons cannot do further harm. The colour codes are maintained throughout this paper.} \label{fig1} \end{figure} This can happen by providing the population with a vaccine or by getting cured after being infected. In the case of the COVID-19 pandemic, as of now, we are unsure regarding the success of a vaccine and the latter is the only option to attain HI. However, the herd immunity threshold (HIT), that is, the minimum fraction of the population that needs to get immunized in order to eradicate the disease, is different for different infectious diseases.\cite{Georgette2009,McBryde2009} For example, the HIT for measles is $\sim$92-95\% and for SARS-1 it is in the range of 50-80\%. Researchers around the world are exploring mainly two aspects of this disease: (i) the microscopic and clinical aspects, which would eventually lead to drug discovery and vaccine preparation,\cite{Chen2020,Wrapp2020} and (ii) the demographic aspects, which lead to policy making and timeline prediction.\cite{Prem2020,Shen2020,Singh2020,Mukherjee2020} The latter requires effective mathematical modelling and crowd simulations.
However, these models often fail to predict the real scenario because of some inherent assumptions and limitations. Although a lot of interesting new studies are emerging in both categories in the context of the recent coronavirus pandemic, the issue of herd immunity and its associated fatality has not been studied. There are several mathematical models which have been employed in the context of epidemic modelling, for example, the famous Kermack-McKendrick (KM) model, which has been used extensively to study the spread of infectious diseases like measles, smallpox, etc.\cite{Daley2001,Kermack1927} At the core of this model lies a system of three coupled differential equations for the susceptible (S), infected (I) and removed (R) (cured and dead) populations, that is, the famous SIR model (Eq. \ref{eq1}).\cite{Skvortsov2007,Jones2009,Anderson1979} At the onset of an epidemic S becomes I and I eventually becomes R, but R can never become S or I because of acquired immunity. \begin{equation} \begin{split} \frac{dS}{dt}& =-k_{S\rightarrow I}SI\\ \frac{dI}{dt}& =k_{S\rightarrow I}SI-k_{I\rightarrow R}I\\ \frac{dR}{dt}& =k_{I\rightarrow R}I \end{split} \label{eq1} \end{equation} Eq. \ref{eq1} presents the three coupled non-linear differential equations of the KM model, where $k_{S\rightarrow I}$ is the rate of infection and $k_{I\rightarrow R}$ is the rate of removal (recovery and death). In the conventional SIR model $k_{S\rightarrow I}$ and $k_{I\rightarrow R}$ are written as $\alpha$ and $\beta$ respectively. In principle the rate constants should be time and space dependent, that is, non-local in nature. But it is difficult to predict the functional form of the rate constants with time; it could be periodic, decaying, or stochastic in nature.
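As an illustration, the KM/SIR equations (Eq. \ref{eq1}) can be propagated with a simple forward-Euler scheme. The sketch below uses constant, purely illustrative rates of our own choosing (not fitted to any data); note that the scheme conserves $S+I+R$ exactly because the three right-hand sides sum to zero.

```python
def sir_step(S, I, R, k_SI, k_IR, dt):
    """One forward-Euler step of the SIR equations (Eq. 1)."""
    dS = -k_SI * S * I
    dI = k_SI * S * I - k_IR * I
    dR = k_IR * I
    return S + dt * dS, I + dt * dI, R + dt * dR

# Illustrative rates (day^-1), not fitted to any region: R0 = k_SI / k_IR = 3
k_SI, k_IR = 0.30, 0.10
S, I, R = 0.999, 0.001, 0.0
dt, days = 0.01, 200

for _ in range(int(days / dt)):
    S, I, R = sir_step(S, I, R, k_SI, k_IR, dt)

print(f"after {days} days: S = {S:.3f}, I = {I:.5f}, R = {R:.3f}")
```

With these rates the epidemic burns out on its own, leaving only a small uninfected fraction, which is exactly the kind of outcome the herd-immunity discussion below quantifies.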
The applicability of this model is for a homogeneous population distribution and mass transmission at a large scale.\cite{Daley2001} An important quantity is the basic reproduction number ($R_0$), which is an estimate of the number of secondary infections from one primary infection.\cite{Dietz1993} The value of $R_0$ is intimately connected with the herd immunity threshold ($H_t$) discussed above (Eq. \ref{eq2}).\cite{McBryde2009,Diekmann1995} Hence a correct determination of the basic reproduction parameter, $R_0$, is important. \begin{equation} H_t=\left(1-\frac{1}{R_0}\right)\times 100 \% \label{eq2} \end{equation} It is clear from Eq. \ref{eq2} that a higher value of $R_0$ increases the herd immunity threshold. For SARS-Cov2 the value of $R_0$ shows a large dispersion and as a consequence we cannot predict the value of $H_t$. For COVID-19 the average value of $R_0$ is estimated to be in the range of $\sim$2.0-3.0, but it can possess spatial heterogeneity and time dependence in reality.\cite{Zhang2020,Tang2020} If one considers $R_0$ to be in the range of 2.0-3.0, the value of $H_t$ would be in between 50\%-66\%. In the light of the SIR model [Eq. \ref{eq1}], $R_0$ can be defined as \begin{equation} R_0=\frac{k_{S\rightarrow I}}{k_{I\rightarrow R}} S \label{eq3} \end{equation} Eq. \ref{eq3} provides a different definition of $R_0$ and can be understood as follows. If we assume that $S$ (the fraction of susceptible population) is near 1.0 at the beginning (as there are very few infections compared to a huge population), then $R_0$ could be equal to unity if the two rate constants are equal. This means that the numbers of infections and recoveries are the same at any time. In this situation the disease remains under control. $R_0 > 1$ causes an epidemic as it challenges the capacity of the healthcare facilities. However, for different regions the value of $R_0$ could be different depending on the intensity of region-wise preventive and healthcare measures.
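A quick worked example of Eq. \ref{eq2}: for the quoted $R_0$ range of 2.0-3.0, the threshold evaluates as follows (the function name is ours):

```python
def herd_immunity_threshold(R0):
    """Eq. 2: minimum immunized percentage needed to stop the spread."""
    return (1.0 - 1.0 / R0) * 100.0

for R0 in (2.0, 2.5, 3.0):
    print(f"R0 = {R0}: Ht = {herd_immunity_threshold(R0):.1f}%")
# R0 = 2.0 gives 50.0% and R0 = 3.0 gives 66.7%, matching the 50%-66% window in the text
```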
In this work we ask the following questions: (i) What are the relative magnitudes of the fatality to the vulnerable and resilient populations if we attempt to achieve HI without a vaccine? (ii) What is the dependence of the fraction of survival on the rate of the attainment of HI? These two issues are widely discussed all over the world. Here we seek answers to these two important questions by employing a modified Susceptible-Infected-Removed (SIR) model and cellular automata (CA) simulations. The rest of the paper is organised as follows. In section \ref{sec2} we describe the mathematical model and the CA simulation protocols. Section \ref{sec3} consists of the results from numerical solutions of the modified SIR model and simulations, accompanied by detailed discussions. This section is further divided into several subsections. In section IV we summarize and conclude our study. \section{Theoretical Formalism} \label{sec2} \subsection{Mathematical Modelling} We modify the celebrated SIR (Susceptible-Infected-Removed) model by dividing the entire susceptible population into two parts, namely vulnerable (Vul) and resilient (Res). In the context of the coronavirus disease, the vulnerable category consists of persons who are above 60 years of age or have pre-existing medical conditions like diabetes, heart and kidney disease, and lung conditions.\cite{Yang2020} The rest of the population is termed resilient; they have a greater chance of getting cured. We achieve such a classification by employing different rate constants associated with their recovery. This is based on the available data on the coronavirus disease. The scheme of this classification is described in Fig. \ref{fig2}. \begin{figure}[ht] \centering \includegraphics[width=3in,keepaspectratio=true]{Figures/fig2.jpg} \caption{Schematic representation of the modified SIR network model. Here the susceptible (S) population is divided into $S_V$ and $S_R$ that represent elderly and younger people respectively.
A part of the fraction $S_V$ gets infected and creates an $I_V$ fraction of infected population. A part of the remaining fraction of the population, that is, $S_R$, gets infected and creates an $I_R$ fraction of the infected population. Both $I_V$ and $I_R$ get either cured (C) or dead (D). Naturally the rate of recovery for the younger fraction of the population is higher than that of the older infected population. On the other hand, the rate of death for the older population is higher than that of the younger infectives.} \label{fig2} \end{figure} We follow the scheme described in Fig. \ref{fig2} and formulate a system of eight coupled non-linear differential equations [Eqs. \ref{eq4} - \ref{eq11}]. \begin{equation} \frac{dS_{Vul}(t)}{dt} = -k_{S_{Vul}\rightarrow I_{Vul}}(t)S_{Vul}(t)I(t) \label{eq4} \end{equation} \begin{equation} \frac{dS_{Res}(t)}{dt} = -k_{S_{Res}\rightarrow I_{Res}}(t)S_{Res}(t)I(t) \label{eq5} \end{equation} \begin{equation} \begin{split} \frac{dI_{Vul}(t)}{dt} = k_{S_{Vul}\rightarrow I_{Vul}}(t)S_{Vul}(t)I(t)\\ -(k_{I_{Vul}\rightarrow C_{Vul}}(t)+k_{I_{Vul}\rightarrow D_{Vul}}(t))I_{Vul}(t) \end{split} \label{eq6} \end{equation} \begin{equation} \begin{split} \frac{dI_{Res}(t)}{dt} = k_{S_{Res}\rightarrow I_{Res}}(t)S_{Res}(t)I(t)\\ -(k_{I_{Res}\rightarrow C_{Res}}(t)+k_{I_{Res}\rightarrow D_{Res}}(t))I_{Res}(t) \end{split} \label{eq7} \end{equation} \begin{equation} \frac{dC_{Vul}(t)}{dt} = k_{I_{Vul}\rightarrow C_{Vul}}(t)I_{Vul}(t) \label{eq8} \end{equation} \begin{equation} \frac{dC_{Res}(t)}{dt} = k_{I_{Res}\rightarrow C_{Res}}(t)I_{Res}(t) \label{eq9} \end{equation} \begin{equation} \frac{dD_{Vul}(t)}{dt} = k_{I_{Vul}\rightarrow D_{Vul}}(t)I_{Vul}(t) \label{eq10} \end{equation} \begin{equation} \frac{dD_{Res}(t)}{dt} = k_{I_{Res}\rightarrow D_{Res}}(t)I_{Res}(t) \label{eq11} \end{equation} In the following, we explain this complex set of equations. Here $I(t)$ is the total number of infectives at any time $t$, that is, $I(t)=I_{Vul}(t)+I_{Res}(t)$.
This is the variable that couples the two population sub-categories. The $k(t)$ are the rate constants associated with the processes described in the subscripts with arrows. We would like to point out that the rates in the above equations of motion are all assumed to be time dependent. These rate constants contain all the basic information and are also connected with $R_0$. In our earlier study, we employed a time dependent rate to reproduce certain features observed in the time dependence of new cases, such as a double-peaked population structure.\cite{Mukherjee2020} The time dependence of the rate can be employed to include certain dynamical features like the crossover from local contact to community transmission. It is worth stressing that the modelling of these time dependent rate constants plays a pivotal role in the SIR scheme. We propagate these equations numerically to obtain the respective temporal evolution of each kind of population fraction. From the temporal profiles we can extract several important quantities after a long time (that is, the end of the spread), for example, (i) the peak height of the active infected cases, (ii) the fraction of cured population, (iii) the fraction of dead population, (iv) the fraction of uninfected population, (v) the time required to reach the immunity threshold, etc. We can regard these equations as describing a system of reacting species, as in a system of chemical reactions. We solve these equations with two different sets of rate constant values and aim to understand the relative damages to the vulnerable and resilient populations. The values of the rate constants are provided in Table \ref{tab1}. We keep $k_{S_{Vul}\rightarrow I_{Vul}}$ and $k_{S_{Res}\rightarrow I_{Res}}$ the same, which depicts the same probability of getting infected for both sub-categories. However, the rate constants associated with recovery and death differ by up to an order of magnitude between Vul and Res.
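A minimal numerical sketch of Eqs. \ref{eq4}-\ref{eq11} with the set-1 constants of Table \ref{tab1} and constant (time-independent) rates. The 20\% vulnerable fraction is the case discussed in the results; the 1:4 split of the initial 0.001 infected fraction between the two groups, and all variable names, are our illustrative assumptions.

```python
import numpy as np

def rhs(y, k):
    """Right-hand side of Eqs. 4-11.
    y = [S_vul, S_res, I_vul, I_res, C_vul, C_res, D_vul, D_res]."""
    S_v, S_r, I_v, I_r = y[0], y[1], y[2], y[3]
    I = I_v + I_r                     # total infectives couple the two sub-categories
    return np.array([
        -k["SI_v"] * S_v * I,
        -k["SI_r"] * S_r * I,
         k["SI_v"] * S_v * I - (k["IC_v"] + k["ID_v"]) * I_v,
         k["SI_r"] * S_r * I - (k["IC_r"] + k["ID_r"]) * I_r,
         k["IC_v"] * I_v,
         k["IC_r"] * I_r,
         k["ID_v"] * I_v,
         k["ID_r"] * I_r,
    ])

# Set-1 constants of Table 1 (day^-1); 20% vulnerable; 0.001 infected, split 1:4
k = dict(SI_v=0.50, SI_r=0.50, IC_v=0.05, ID_v=0.10, IC_r=0.50, ID_r=0.05)
y = np.array([0.2 - 0.0002, 0.8 - 0.0008, 0.0002, 0.0008, 0.0, 0.0, 0.0, 0.0])

dt = 0.02
for _ in range(int(1500 / dt)):       # propagate until the profiles saturate
    y = y + dt * rhs(y, k)            # forward Euler

S_v, S_r, I_v, I_r, C_v, C_r, D_v, D_r = y
print(f"immunized = {C_v + C_r:.3f}, dead (vul) = {D_v:.3f}, dead (res) = {D_r:.3f}")
```

Because the eight right-hand sides sum to zero, the total population is conserved along the trajectory, and the relative fatality of the vulnerable group ($D_V$ normalised by its own 0.2 share) comes out far above that of the resilient group, in line with the trends reported below.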
We now discuss the procedure we follow to assign different rate constants to the vulnerables and resilients. In a previous study we estimated the values of $k_{S\rightarrow I}$ and $k_{I\rightarrow R}$ by fitting the infected/cured/death vs. time data for India (source: www.covid19india.org).\cite{Mukherjee2020} We plot the rate of change of the cured ($dC/dt$) and dead ($dD/dt$) populations against the infected population to find the slope that gives the rate. This procedure provides us with the required estimates of $k_{I\rightarrow C}$ and $k_{I\rightarrow D}$. For India, till $27^{th}$ May, the estimated values are $k_{I\rightarrow C} = 0.026 \:day^{-1}$ and $k_{I\rightarrow D} = 0.0013 \:day^{-1}$. That is, $k_{I\rightarrow C}$ is approximately 20 times $k_{I\rightarrow D}$. However, for countries like Italy, Spain, and the USA $k_{I\rightarrow D}$ was significantly higher. This comparison, however, takes no cognizance of the relative time scales, and therefore should be taken with care. These values are mean field in nature and conceal enormous spatial heterogeneity. If we see the state-wise (or district-wise) statistics we find a large dispersion. On the other hand, the determination of $k_{S\rightarrow I}$ is not that straightforward as the equations containing $k_{S\rightarrow I}$ are non-linear in nature in the SIR model. Hence one needs to obtain a good estimate of $R_0$ and calculate $k_{S\rightarrow I}$ from Eq. \ref{eq3}. As mentioned above, $R_0$ also exhibits spatiotemporal heterogeneity, which makes the problem of estimating the rate constants even more challenging.
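The slope-based estimation of $k_{I\rightarrow C}$ and $k_{I\rightarrow D}$ can be sketched as follows. Since the reported series are not reproduced here, synthetic data generated with the India-like values quoted above stand in for them; the noise levels and the shape of $I(t)$ are arbitrary assumptions, chosen only so that the recovered slopes can be checked against the known inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a reported time series, generated with known rates
# (the India-like values quoted above), so the fit can be checked against them.
k_IC_true, k_ID_true = 0.026, 0.0013        # day^-1
days = np.arange(200)
I = 1.0e4 * np.exp(0.03 * days)             # assumed infective pool (arbitrary shape)
dC = k_IC_true * I + rng.normal(0.0, 5.0, days.size)   # noisy daily recoveries dC/dt
dD = k_ID_true * I + rng.normal(0.0, 1.0, days.size)   # noisy daily deaths dD/dt

# The slope of dC/dt (or dD/dt) against I is the rate constant
k_IC = np.polyfit(I, dC, 1)[0]
k_ID = np.polyfit(I, dD, 1)[0]
print(f"fitted k_IC = {k_IC:.4f}, k_ID = {k_ID:.4f}")
```

The same linear fit applied separately to each state or district would expose the spatial dispersion in the rates mentioned above.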
For example, in Italy $R_0$ has been estimated to be $\sim$3.0-6.0 and in the Hunan province of China it is $\sim$1.73-5.25.\cite{Wangping2020} In a recent study on Wuhan, the transmission rate ($k_{S\rightarrow I}$) is assumed to vary from 0.59 to $1.68 \:day^{-1}$.\cite{Lin2020} However, the data required to extract the rate constants associated with the two individual sub-categories, namely, vulnerable and resilient, are not available separately. As the values of the rate constants are connected to the basic reproduction number ($R_0$), we choose the inputs, by preserving the basic features, such that the average value of $R_0$ yields an acceptable number, in light of acquired information. Next we tune the parameters such that the maximum of the active cases falls in the range of $\sim$60-90 days, as observed for most countries. We note that we consider these values only to study the trends; they do not strictly correspond to any particular region in reality. \begin{table}[ht] \caption{The values of rate constants used to solve the system of coupled differential equations [Eq.\ref{eq4} - \ref{eq11}]. The unit of the rate constants is $day^{-1}$.} \centering \begin{tabular}{|c|c|c|} \hline Rate Const. & Set-1 & Set-2\\ \hline \hline $k_{S_{Vul}\rightarrow I_{Vul}}$ & 0.50 & 0.78\\ \hline $k_{I_{Vul}\rightarrow C_{Vul}}$ & 0.05 & 0.05\\ \hline $k_{I_{Vul}\rightarrow D_{Vul}}$ & 0.10 & 0.10\\ \hline $k_{S_{Res}\rightarrow I_{Res}}$ & 0.50 & 0.78\\ \hline $k_{I_{Res}\rightarrow C_{Res}}$ & 0.50 & 0.50\\ \hline $k_{I_{Res}\rightarrow D_{Res}}$ & 0.05 & 0.05\\ \hline \end{tabular} \label{tab1} \end{table} We invoke two different values of $R_0$ for the two different sub-categories. For set-1, $R_0^{Vul} = 3.33$ and $R_0^{Res} = 0.91$. The larger value of $R_0$ for the vulnerables arises from their slower rate of recovery, Eq. \ref{eq3}. On the other hand, for set-2, $R_0^{Vul} = 5.20$ and $R_0^{Res} = 1.42$.
We obtain these values by considering each of the populations to be individually normalised (that is, $\sim$100\%). In such a situation the effective $R_0$ can be calculated as follows (Eq. \ref{eq12}). \begin{equation} R_0^{eff} = \frac{R_0^{Vul}N_{Vul}+R_0^{Res}N_{Res}}{N_{Vul}+N_{Res}} \label{eq12} \end{equation} Here $N_{Vul}$ and $N_{Res}$ represent the number of people in the vulnerable and resilient category respectively. In all our calculations we start with a total infected fraction of 0.001 and vary the percentage of the vulnerable population from 5\%-40\%. By using Eq. \ref{eq12} we calculate the effective $R_0$ values for different ratios of vulnerable to resilient population. We find $R_0$ varies from 1.03 to 1.88 for set-1 and from 1.61 to 2.93 for set-2. In a way, set-1 represents a more controlled situation compared to set-2 (Table \ref{tab2}). \begin{table}[ht] \caption{The basic reproduction number ($R_0$) for the parameters described in Table \ref{tab1} (set-1 and set-2) for various ratios of vulnerable to resilient population.} \centering \begin{tabular}{|c|c|c|c|} \hline \% & \% & \multicolumn{2}{c|}{$R_0$} \\ \cline{3-4} vulnerable & resilient & Set-1 & Set-2 \\ \hline \hline 5 & 95 & 1.031 & 1.609 \\ \hline 10 & 90 & 1.152 & 1.798 \\ \hline 15 & 85 & 1.273 & 1.987 \\ \hline 20 & 80 & 1.394 & 2.176 \\ \hline 25 & 75 & 1.515 & 2.365 \\ \hline 30 & 70 & 1.636 & 2.554 \\ \hline 35 & 65 & 1.757 & 2.743 \\ \hline 40 & 60 & 1.878 & 2.932 \\ \hline \end{tabular} \label{tab2} \end{table} \subsection{Stochastic Cellular Automata Simulation} Stochastic cellular automata (CA) simulations give a microscopic and nonlocal picture of the problem at hand. Such simulations are often used to model several physical phenomena.\cite{Hollingsworth2004, Seybold1998, Wolfram1983, Bartolozzi2004, Soares-Filho2002, Goltsev2010, Almeida2011} Unlike the mathematical model, CA simulations can directly establish a physical map of the disease-spread.
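Before moving to the simulations, Eq. \ref{eq12} can serve as a quick consistency check on Table \ref{tab2}, using the per-category values $R_0^{Vul}=3.33$, $R_0^{Res}=0.91$ for set-1 and $R_0^{Vul}=5.20$, $R_0^{Res}=1.42$ for set-2 implied by Table \ref{tab1} via Eq. \ref{eq3} (the function name is ours):

```python
def effective_R0(R0_vul, R0_res, frac_vul):
    """Eq. 12 with both sub-populations individually normalised."""
    return frac_vul * R0_vul + (1.0 - frac_vul) * R0_res

for pct in (5, 10, 20, 40):
    f = pct / 100.0
    print(f"{pct}% vulnerable: set-1 R0 = {effective_R0(3.33, 0.91, f):.3f}, "
          f"set-2 R0 = {effective_R0(5.20, 1.42, f):.3f}")
# reproduces the Table 2 entries: 1.031/1.609, 1.152/1.798, 1.394/2.176, 1.878/2.932
```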
Moreover, we incorporate several region-specific and disease-specific parameters in our CA simulations, which gives a general outlook to our investigations. A detailed list of the parameters and associated symbols can be found in our previous work.\cite{Mukherjee2020} The spread of COVID-19 is strongly inhomogeneous, so a homogeneous model fails to capture many aspects. In a real-world scenario, the non-local description may often become important in determining the fate of a pandemic in a given geographical region. In such a case, the population parameters are space-dependent. Moreover, the rate constants also have a spatial distribution. Hence, solutions of these equations are highly non-trivial and a large-scale cellular automata simulation may capture these inherent spatiotemporal heterogeneities. In this work, we neglect the effects of social distancing and quarantine, since we aim at establishing a relation between the percentage of mortality and immunization by an unhindered transmission of the disease within the whole population. Calculation of the rates of transmission and recovery/death can often be difficult for several reasons, such as unavailability of data and political or demographic complications. This becomes particularly nontrivial when we consider the process with respect to a given population distribution of vulnerable and resilient individuals. The probabilistic approach employed in our simulations makes it easier to study the process, since obtaining an average probability for each of the processes is much more practical. We use the Moore definition \cite{Fu2003, White2007, Sirakoulis2000} to denote the neighbourhood of a given person. The salient features of our simulation are detailed in our previous work.\cite{Mukherjee2020} Here, we summarize our CA simulation methodology. We start with a land randomly occupied by susceptibles and infectives.
The population distribution is such that 5\% and 0.05\% of the total available land is covered by susceptibles and infectives respectively. We divide the population into vulnerable and resilient individuals with respect to their probabilities of recovery ($P_R^{Vul}$ and $P_R^{Res}$). Vulnerables primarily include people above the age of 60. This also includes people with serious health issues, who are more likely to die if infected.\cite{Remuzzi2020, Ruan2020, Wu2020} The resilients, on the other hand, are the young fraction of the society with no severe health conditions. When an infective comes into the neighbourhood of a susceptible, the latter is converted to an infective with a given probability of transmission, which is considered to be equal and time independent (constant) for both vulnerables and resilients. The time period of infection is determined by the probability of recovery and the probability of remaining infected in a given simulation step. In this work, we consider the latter to be 0.99.\cite{Mukherjee2020} An individual, once cured from infection, becomes immune to the disease. We run our simulations for a given number of steps ($N$). It should be noted that the time unit is not well-defined for these simulations. To get an estimate of time, the results need to be compared with our theoretical model. \section{Results and Discussion} \label{sec3} \subsection{Numerical solutions of the SIR model} \label{sec3A} \begin{figure}[ht] \centering \includegraphics[width=3.4in,keepaspectratio=true]{Figures/fig3.jpg} \caption{Population disease progression as obtained from the solution of the system of eight coupled non-linear differential equations presented in Eqs. \ref{eq4} - \ref{eq11} as a function of time for two different situations described in Table \ref{tab1}. Plots show the increase in the total immunity (blue) with the decrease in the vulnerable (maroon) and resilient (green) populations for (a) Set-1 and (b) Set-2.
In these two calculations we start with $V:R=1:4$. In both cases the percentage demise in the vulnerable population is significantly higher.} \label{fig3} \end{figure} Here we present the results from the numerical solutions of Eqs. \ref{eq4} - \ref{eq11} in Fig. \ref{fig3}. We choose two sets of rate constants, set-1 (Fig. \ref{fig3}a) and set-2 (Fig. \ref{fig3}b), and obtain the changes in the populations of vulnerables and resilients. With our choice of parameters (Table \ref{tab1}) for set-1 we observe a 40.8\% increase in the immunized population. In order to achieve the 40.8\% immunity a region loses 4.7\% of its resilient population and 34.3\% of its vulnerable population. On the other hand, for set-2 a region loses 7.9\% of its resilient population and 57.1\% of its vulnerable population in order to achieve $\sim$68\% immunity (which could be the HIT for COVID-19). Hence, it is clear that in both cases the vulnerables are significantly affected. We note that with an increased infection rate the timescale of the saturation of the temporal profiles is drastically reduced. The graphs presented in Fig. \ref{fig3} are obtained for 20\% initial vulnerable population. In Fig. \ref{fig4}a, we show the time evolution of the total immunity percentage. In order to study the effect of fast (early) vs slow (late) achievement of the immunity saturation, we plot the percentage survival of the total population against the time required to attain the immunity threshold ($t_{Im}$) for different values of $k_{S\rightarrow I}$ (Fig. \ref{fig4}b). We find that the percentage of survival increases linearly with increasing $t_{Im}$. This indicates that a quick achievement of immunity saturation could lead to fatal consequences. \textit{If a society opts for herd immunity, it has to be a slow process}.
\begin{figure}[h] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig4.jpg} \caption{The effect of different rates of attaining herd immunity on the total population. (a) Plot of the time evolution of the percentage of total immunized population for different values of the susceptible-to-infected rate constant. With increasing $k_{S\rightarrow I}$ we see an increase in the percentage immunity and a decrease in the time required to reach saturation ($t_{Im}$). (b) Percentage survival (uninfected and cured population) of the total population against $t_{Im}$. The two quantities show a linear dependence. That is, the percentage survival increases as we take more time to reach immunity saturation. Note that both the X and the Y axes are the outcome of the numerical solution and not provided as inputs. The calculations are done using a fixed Vul:Res=1:4 and the rate constants associated with recovery/death are also kept the same as given in Table \ref{tab1}.} \label{fig4} \end{figure} To make the immunity gaining process slow (which leads to relatively fewer casualties), the rate of infection ($k_{S\rightarrow I}$) needs to be brought down. On the other hand, the rate of removal (recovery and death), $k_{I\rightarrow R}$, depends primarily on the disease and partly on the presently available healthcare facilities. $k_{S\rightarrow I}$ can be controlled by employing effective strategies like lockdown, quarantine, and social distancing. \begin{figure}[H] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig5.jpg} \caption{The effect of the change in the initial percentage of the vulnerable population on the relative infection and recovery for the sub-categories, namely, vulnerables and resilients.
Plots show the dependence of the infection peak and the percentages of cured and dead population for the vulnerable (maroon) and resilient (green) populations on the initial fraction of vulnerable population, as obtained from the solution of the modified SIR model described in Eqs. \ref{eq4} - \ref{eq11}. Figures (a)-(c) are for set-1 and (d)-(f) for set-2. The quantities show a non-linear dependence and enhanced fatality for the vulnerables.} \label{fig5} \end{figure} Next we vary the \% of initial vulnerable population from 5\% to 40\% and obtain the \% of highest active cases (that is, the maxima in the temporal variation of $I_V(t)$ or $I_R(t)$), the \% of cured population, and the \% of deaths. The range is chosen in order to represent different regions/countries. For example, in India only $\sim8\%$ of the entire population is above 60 years of age whereas in countries like Italy and Germany the number is over 20\%. We obtain Fig. \ref{fig5}a - \ref{fig5}c for set-1 and Fig. \ref{fig5}d - \ref{fig5}f for set-2. In both cases the variation of the infected peak maxima with \% vulnerable shows a nearly linear increase with a higher slope for the vulnerables (Fig. \ref{fig5}a and \ref{fig5}d). Interestingly, the \% cured (Fig. \ref{fig5}b and \ref{fig5}e) and \% dead (Fig. \ref{fig5}c and \ref{fig5}f) show a nonlinear dependence on \% vulnerable. It clearly shows that the damage to the vulnerable population is huge when the \% of vulnerables increases. We plot (Fig. \ref{fig6}) the percentage of deaths for both subcategories against the herd immunity threshold for a given Vul:Res composition (1:4). This is to show the increasing damage with respect to $H_t$. We find that the trend is linear for both sets of parameters and the relative fatality is substantially higher for the vulnerables.
\begin{figure}[H] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig6.jpg} \caption{Percentage outcome of different herd immunity thresholds ($H_t$) on the vulnerable and resilient populations. Plot of percentage deaths against $H_t$ calculated from Eq. \ref{eq2} for (a) set-1, and (b) set-2. In both cases the dependence is linear with substantially more damage to the vulnerable population. The values on the Y axes are individually normalised.} \label{fig6} \end{figure} \subsection{Stochastic Cellular Automata Simulations} \label{sec3B} \subsubsection{Dependence on the initial population distribution} Here, we keep the probability of disease transmission time-independent and equal for both resilients and vulnerables. We change the initial fraction of the vulnerable section of the total population from 5\% to 40\%. In Fig. \ref{fig7} we plot the \% of cured individuals (resilients and vulnerables) against the \% of total immunization when the temporal progression of the population reaches saturation. As discussed earlier, herd immunity is obtained when a major section of the population becomes immune, post infection. However, apart from gaining immunity, this process involves the death of many infected individuals according to their survival probability. The probability of recovery of the resilients is higher than that of the vulnerables. Here, these two probabilities are taken as 0.95 and 0.8 respectively.\cite{Verity2020,Ruan2020} In Fig. \ref{fig7} the abscissa is the percentage of the total population that becomes immune after recovering from the infection. The ordinate quantifies the percentage of cured resilients and vulnerables with respect to the total initial population. \textit{With increase in the immunity attained in the society, a significant decrease in the percentage of cured vulnerable individuals is observed}.
This implies that the higher the percentage of immunization in the total population, the greater is the probability of death of the vulnerable section. Hence herd immunity results in the death of a major fraction of the vulnerable population. This stratum of the society mainly includes old people (age greater than $\sim$60 years) and people with serious health conditions or comorbidity.\cite{Fang2020,Yang2020} The geographical regions with demographic distributions having a higher fraction of people of age above $\sim$60 years are among the worst affected. For example, Italy suffered the loss of many aged people as a result of the COVID-19 pandemic.\cite{Livingston2020,Onder2020} \begin{figure}[H] \centering \includegraphics[width=2.5in,keepaspectratio=true]{Figures/fig7.jpg} \caption{Percentage of cured resilients and vulnerables in the population on the course of attaining herd immunity. The percentage of cured individuals is shown as a function of the percentage of the total population immunized after getting infected. This is obtained by averaging over 100 CA simulations. Green shows the result for the resilient fraction of the society and maroon denotes the same for the vulnerable people.} \label{fig7} \end{figure} In Fig. \ref{fig8}a, we show the time evolution of the fraction of vulnerables and resilients in the total population for different \% of initial number of vulnerables. The fractions are calculated with respect to the total initial population. We see that with increase in the initial \% of vulnerables, the number of resilients dying shows a slight decrease, whereas the number of dead vulnerables increases significantly. This observation is clarified in Fig. \ref{fig8}b. Here we plot the absolute change in the fraction of resilients and vulnerables as functions of the initial \% of vulnerables. Both show linear dependence. The gradient (slope) is negative for resilients and positive for vulnerables.
However, we find that the absolute value of the slope for the latter is $\sim$5 times higher than that of the former. This indicates that countries with a higher population of elderly and vulnerable people incur a greater loss in the number of vulnerable individuals. \begin{figure}[H] \centering \includegraphics[width=3.4in,keepaspectratio=true]{Figures/fig8.jpg} \caption{(a) Population dynamics, represented as the temporal evolution of the fraction of the resilient and vulnerable sections of the population, shown for varying initial distributions of resilients and vulnerables. The colour bar on the right hand side shows the initial \% of vulnerables in the total population. (b) The absolute decrease in the resilient (green) and vulnerable (maroon) fractions of the total population as functions of the initial percentage of vulnerables.} \label{fig8} \end{figure} \subsubsection{Dependence on the probability of recovery} Now, we keep the initial population distribution fixed at 20\% vulnerable and 80\% resilient individuals. We change the probabilities of recovery of these two categories ($P_R^{Vul}$ and $P_R^{Res}$) with the constraint $P_R^{Vul} \leq P_R^{Res}$. Accordingly, we change these two probabilities from 0.6 to 0.8 and from 0.8 to 0.95 respectively. We choose these values according to reported case fatality ratios for the SARS-CoV-2 pandemic.\cite{Verity2020, Ruan2020} \begin{figure}[ht] \centering \includegraphics[width=3in,keepaspectratio=true]{Figures/fig9.png} \caption{Interdependence of different fractions of the population as the immunity evolves. Percentage of immunized (colour coded) represented as a function of the percentage of survival for vulnerables and resilients. The proportions are with respect to the total initial population. The primary variables are the probabilities of recovery of the vulnerables and the resilients.
The results are obtained after averaging over 100 simulations.} \label{fig9} \end{figure} For every pair of $P_R^{Vul}$ and $P_R^{Res}$ we get a value of the percentage of vulnerables and resilients who survive and a fraction of the population that gets immunized. In Fig. \ref{fig9} we plot the survival \% of vulnerables and resilients on the two perpendicular axes and represent the \% immunized as colour codes according to the colour gradation bar on the right hand side. In this contour representation, red denotes low immunity and blue denotes higher immunity. The survival \% of the vulnerables is lower than that of the resilients. The percentage of immunized population is higher (blue) for maximum survival of the resilients as compared to that of the vulnerables. This means that to attain higher immunity in the population, a greater number of old and vulnerable people suffer death as compared to resilients. Hence, attainment of herd immunity comes at the cost of a higher mortality of the vulnerable section of the society. \section{Summary and Conclusion} Any epidemic is a dynamic process where time dependence plays a crucial role in the control of the spread and the damage, that is, the outcome. COVID-19 is a pandemic which is currently under intense scrutiny by all and sundry, and many aspects are yet to be understood. Every move by the government, and the population in general, is of crucial importance. Each pandemic comes with unique characteristics that deserve special treatments, not just medical and clinical but also sociological. In each such epidemic, immunity plays a critical role. The Spanish Flu mainly attacked the age group between 20 and 30 years of age. This is the age group with maximum immunity. In the case of COVID-19, again we face the sad reality that a certain section of the society is substantially more vulnerable than others. The vulnerable section consists of people above 60-65 years of age, and people with comorbidity.
Further classification is yet to be made, although it is conceivable that as we understand the disease better and more precisely, a better perception of the danger will emerge. An epidemic often starts by a process of nucleation, which is an important phenomenon often studied in physics and chemistry. The process of nucleation is initiated by a sudden appearance of a group of infected individuals in a region. This may be triggered by a laboratory accident, by infection from eating wild animals like bats, pangolins, etc., or by the arrival of infected tourists, and so on. The process may depend on the nature of the geography and demography of the country or region. The initial period of the process is often slow. After the initial nucleation, the disease spreads by a diffusion process into the susceptible population. Hence, it is a percolation with a temporal evolution. In order to address the issue of vulnerability of the population and the outcome with the progression of the epidemic, we carry out a theoretical analysis with the objective of analyzing the consequences of aiming for herd immunity without a vaccine, or a good drug, in the context of the present COVID-19 pandemic. We develop and solve a modified SIR model numerically and by employing cellular automata simulations. We particularly probed the following question: what is the dependence of mortality on the rate of attaining herd immunity? One of the key results of the present study is the dependence of the percentage survival on the rate of attainment of the immunity threshold. We find that a late attainment of the immunity saturation leads to relatively lower fatality. We show that approximately 50-60\% of the vulnerables might lose their lives in order to attain $\sim$70\% total immunized population. On the contrary, the mortality of the resilient fraction of the population is relatively low, maybe just about 10\%. We find a non-linear trend in the dependence of the cured and dead population on the initial population of the vulnerables.
This is because, as the number of vulnerables increases, attaining immunity by infection requires a larger fraction of the population to get infected, which cannot protect the vulnerables unless deliberate efforts are made that require intervention. While we discuss herd immunity by infection in this work, the other, more sustainable option is herd immunity by vaccination. For example, diseases like smallpox, polio etc. have been wiped off the face of the earth by vaccination. This is particularly crucial for diseases with high mortality rates. However, for any novel disease, preparation of a vaccine can take years. In the case of the present COVID-19 pandemic, for instance, extensive research is going on globally in search of a vaccine.\cite{Chen2020} However, no promising result has been obtained in almost five months, and researchers believe it may take more than a year to prepare the vaccine. \begin{acknowledgments} We thank Prof. Sarika Bhattacharyya (NCL, Pune) and Prof. Suman Chakrabarty (SNCNCBS, Kolkata) for several fruitful discussions and comments. The authors thank the Department of Science and Technology (DST), India for financial support. BB thanks the Sir J. C. Bose fellowship for partial financial support. S. Mo. thanks the University Grants Commission (UGC), India for a research fellowship. S. Mu. thanks the DST-INSPIRE programme for providing a research fellowship. \end{acknowledgments} \section{Introduction} The present COVID-19 pandemic is a dynamic and volatile process with often unpredictable ups and downs in the infected populations that make it difficult to predict its future course. In the absence of any vaccine or definitive drug in the immediate future,\cite{Chen2020} the fight against COVID-19 is a hard and long drawn bitter battle, with two strategies being put forward.
The first is the widely enforced lockdown, quarantine, and social distancing, where the spread of the disease is contained at its inception and only a limited fraction of the population is allowed to be infected.\cite{Prem2020} This model appears to be successful in South Korea and China, and some other Asian countries.\cite{Shen2020} The other model is to allow the virus to have a relatively unconstrained transmission so that a large fraction of the people develops immunity.\cite{Kamikubo2020} This is called herd immunity (HI); it is favoured by Sweden, and was initially discussed by Germany and England, but largely discarded later. HI can be achieved in two ways: (i) by vaccination, and (ii) by infection. The HI approach is based on the understanding that one can obtain herd immunity in the society if 60-70\% of the population gets immunized. Needless to say, herd immunity is preferable through vaccination, as happened for smallpox and measles. Implementation of both models has difficulties. Implementation of lockdown and social distancing requires enormous effort, backed up by resources. On the other hand, the HI model could have adverse consequences for the vulnerable citizens, a subject not adequately discussed. In fact, experiences in Italy and Spain show that the demography can be altered in some regions if HI is given an unconstrained run. Herd immunity ensures an indirect protection from COVID-19 (or any other infectious disease) when a large fraction of the population becomes immunized to the disease.\cite{Fine1993,Anderson1985,John2000} Herd immunity drastically decreases the probability of the presence of an uninfected individual in the vicinity of a presently infected individual. That is, the infected person is effectively quarantined by the surrounding immunized population. Hence, the chain of propagation breaks. In Fig. \ref{fig1} we pictorially explain the phenomenon of herd immunity.
\begin{figure}[H] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig1.jpg} \caption{A pictorial representation of the herd immunity phenomenon. On the left we have a region with the susceptible population and one infected person. The total susceptibles are further divided into vulnerables and resilients. The infection propagates in an unconstrained manner and after a certain period the region possesses a large fraction of immunized population (right). After this immunisation any further infection cannot propagate, which indirectly protects the susceptibles. In addition, multiple infected persons cannot do further harm. The colour codes are maintained throughout this paper.} \label{fig1} \end{figure} This can happen by providing the population with a vaccine or by getting cured after being infected. In the case of the COVID-19 pandemic, as of now, we are unsure regarding the success of a vaccine and the latter is the only option to attain HI. However, the herd immunity threshold (HIT), that is, the minimum fraction of the population that needs to get immunized in order to eradicate the disease, is different for different infectious diseases.\cite{Georgette2009,McBryde2009} For example, the HIT for measles is $\sim$92-95\% and for SARS-1 it is in the range of 50-80\%. Researchers around the world are exploring mainly two aspects of this disease: (i) the microscopic and clinical aspects which would eventually lead to drug discovery and vaccine preparation,\cite{Chen2020,Wrapp2020} and (ii) the demographic aspects which lead to policy making and timeline prediction.\cite{Prem2020,Shen2020,Singh2020,Mukherjee2020} The latter requires effective mathematical modelling and crowd simulations. However, these models often fail to predict the real scenario because of some inherent assumptions and limitations.
Although a lot of interesting new studies are emerging in both categories in the context of the recent coronavirus pandemic, the issue of herd immunity and its fatality has not been studied. There are several mathematical models which have been employed in the context of epidemic modelling, for example, the famous Kermack-McKendrick (KM) model, which has been used extensively to study the spread of infectious diseases like measles, smallpox etc.\cite{Daley2001,Kermack1927} At the core of this model lies a system of three coupled differential equations for the susceptible (S), infected (I) and removed (R) (cured and dead) populations, that is, the famous SIR model (Eq. \ref{eq1}).\cite{Skvortsov2007,Jones2009,Anderson1979} At the onset of an epidemic S becomes I and I eventually becomes R, but R can never become S or I because of acquired immunity. \begin{equation} \begin{split} \frac{dS}{dt}& =-k_{S\rightarrow I}SI\\ \frac{dI}{dt}& =k_{S\rightarrow I}SI-k_{I\rightarrow R}I\\ \frac{dR}{dt}& =k_{I\rightarrow R}I \end{split} \label{eq1} \end{equation} Eq. \ref{eq1} describes the three coupled non-linear differential equations of the KM model, where $k_{S\rightarrow I}$ is the rate of infection and $k_{I\rightarrow R}$ is the rate of removal (recovery and death). In the conventional SIR model $k_{S\rightarrow I}$ and $k_{I\rightarrow R}$ are written as $\alpha$ and $\beta$ respectively. In principle the rate constants should be time and space dependent, that is, non-local in nature. But it is difficult to predict the functional form of the rate constants with time: it could be periodic, decaying or stochastic in nature.
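As an illustration, the SIR equations (Eq. \ref{eq1}) with constant rate constants can be integrated with a simple forward-Euler scheme. This is a minimal sketch: the parameter values below (corresponding to $R_0 = 2.5$), the step size, and the run length are illustrative assumptions, not fitted to any data.

```python
# Forward-Euler integration of the SIR model, Eq. (1).
# k_si and k_ir stand for k_{S->I} and k_{I->R}; values are illustrative.
def sir(k_si, k_ir, i0=0.001, days=200, dt=0.01):
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        ds = -k_si * s * i            # dS/dt = -k_SI * S * I
        di = k_si * s * i - k_ir * i  # dI/dt =  k_SI * S * I - k_IR * I
        dr = k_ir * i                 # dR/dt =  k_IR * I
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# R_0 = 0.5/0.2 = 2.5: the epidemic burns through most of the population,
# leaving only a small uninfected fraction at long times.
s, i, r = sir(k_si=0.5, k_ir=0.2)
print(round(s, 3), round(r, 3))
```

Note that the Euler step conserves $S + I + R$ exactly, since the three increments sum to zero at every step.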
The applicability of this model is for a homogeneous population distribution and mass transmission at a large scale.\cite{Daley2001} An important quantity is the basic reproduction number ($R_0$), which is an estimate of the number of secondary infections from one primary infection.\cite{Dietz1993} The value of $R_0$ is intimately connected with the herd immunity threshold ($H_t$) discussed above (Eq. \ref{eq2}).\cite{McBryde2009,Diekmann1995} Hence a correct determination of the basic reproduction number, $R_0$, is important. \begin{equation} H_t=\left(1-\frac{1}{R_0}\right)\times 100 \% \label{eq2} \end{equation} It is clear from Eq. \ref{eq2} that a higher value of $R_0$ increases the herd immunity threshold. For SARS-CoV-2 the value of $R_0$ shows a large dispersion, and as a consequence we cannot predict the value of $H_t$. For COVID-19 the average value of $R_0$ is estimated to be in the range of $\sim$2.0-3.0, but it can possess spatial heterogeneity and time dependence in reality.\cite{Zhang2020,Tang2020} If one considers $R_0$ to be in the range of 2.0-3.0, the value of $H_t$ would be in between 50\% and 66\%. In the light of the SIR model [Eq. \ref{eq1}], $R_0$ can be defined as \begin{equation} R_0=\frac{k_{S\rightarrow I}}{k_{I\rightarrow R}} S \label{eq3} \end{equation} Eq. \ref{eq3} provides a different definition of $R_0$ and can be understood as follows. If we assume that S (the fraction of susceptible population) is near 1.0 at the beginning (as there are very few infections compared to a huge population), then $R_0$ could be equal to unity if the two rate constants are equal. This means that the numbers of infections and recoveries are the same at any time. In this situation the disease remains under control. $R_0 > 1$ causes an epidemic, as it challenges the capacity of the healthcare facilities. However, for different regions the value of $R_0$ could be different depending on the intensity of region-wise preventive and healthcare measures.
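Eq. \ref{eq2} is straightforward to evaluate. The short sketch below reproduces the quoted range of $H_t$ for $R_0$ between 2.0 and 3.0; the convention of returning 0 for $R_0 \leq 1$ (no epidemic, hence no threshold) is an assumption of this sketch.

```python
# Herd immunity threshold from the basic reproduction number, Eq. (2).
def herd_immunity_threshold(r0):
    """H_t = (1 - 1/R_0) * 100, in percent; 0 when R_0 <= 1 (no epidemic)."""
    return 0.0 if r0 <= 1.0 else (1.0 - 1.0 / r0) * 100.0

print(herd_immunity_threshold(2.0))            # 50.0
print(round(herd_immunity_threshold(3.0), 1))  # 66.7
```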
In this work we ask the following questions: (i) what are the relative magnitudes of the fatality to the vulnerable and resilient populations if we attempt to achieve HI without a vaccine? (ii) What is the dependence of the fraction of survival on the rate of attainment of HI? These two issues are widely discussed all over the world. Here we seek answers to these two important questions by employing a modified Susceptible-Infected-Removed (SIR) model and cellular automata (CA) simulations. The rest of the paper is organised as follows. In section \ref{sec2} we describe the mathematical model and the CA simulation protocols. Section \ref{sec3} consists of the results from numerical solutions of the modified SIR model and simulations, accompanied by detailed discussions. This section is further divided into several subsections. In the final section we summarize and conclude our study. \section{Theoretical Formalism} \label{sec2} \subsection{Mathematical Modelling} We modify the celebrated SIR (Susceptible-Infected-Removed) model by dividing the entire susceptible population into two parts, namely vulnerable (Vul) and resilient (Res). In the context of the coronavirus disease, the vulnerable category consists of persons who are above 60 years of age or have pre-existing medical conditions like diabetes, heart and kidney disease, and lung conditions.\cite{Yang2020} The rest of the population, which has a greater chance of getting cured, is termed resilient. We achieve such classification by employing different rate constants associated with their recovery. This is based on the available data on the coronavirus disease. The scheme of this classification is described in Fig. \ref{fig2}. \begin{figure}[ht] \centering \includegraphics[width=3in,keepaspectratio=true]{Figures/fig2.jpg} \caption{Schematic representation of the modified SIR network model. Here the susceptible (S) population is divided into $S_V$ and $S_R$ that represent elderly and younger people respectively.
A part of the fraction $S_V$ gets infected and creates the $I_V$ fraction of the infected population. A part of the remaining fraction of the population, that is, $S_R$, gets infected and creates the $I_R$ fraction of the infected population. Both $I_V$ and $I_R$ get either cured (C) or dead (D). Naturally the rate of recovery for the younger fraction of the population is higher than that of the older infected population. On the other hand, the rate of death for the older population is higher than that of the younger infectives.} \label{fig2} \end{figure} We follow the scheme described in Fig. \ref{fig2} and formulate a system of eight coupled non-linear differential equations [Eqs. \ref{eq4} - \ref{eq11}]. \begin{equation} \frac{dS_{Vul}(t)}{dt} = -k_{S_{Vul}\rightarrow I_{Vul}}(t)S_{Vul}(t)I(t) \label{eq4} \end{equation} \begin{equation} \frac{dS_{Res}(t)}{dt} = -k_{S_{Res}\rightarrow I_{Res}}(t)S_{Res}(t)I(t) \label{eq5} \end{equation} \begin{equation} \begin{split} \frac{dI_{Vul}(t)}{dt} = k_{S_{Vul}\rightarrow I_{Vul}}(t)S_{Vul}(t)I(t)\\ -(k_{I_{Vul}\rightarrow C_{Vul}}(t)+k_{I_{Vul}\rightarrow D_{Vul}}(t))I_{Vul}(t) \end{split} \label{eq6} \end{equation} \begin{equation} \begin{split} \frac{dI_{Res}(t)}{dt} = k_{S_{Res}\rightarrow I_{Res}}(t)S_{Res}(t)I(t)\\ -(k_{I_{Res}\rightarrow C_{Res}}(t)+k_{I_{Res}\rightarrow D_{Res}}(t))I_{Res}(t) \end{split} \label{eq7} \end{equation} \begin{equation} \frac{dC_{Vul}(t)}{dt} = k_{I_{Vul}\rightarrow C_{Vul}}(t)I_{Vul}(t) \label{eq8} \end{equation} \begin{equation} \frac{dC_{Res}(t)}{dt} = k_{I_{Res}\rightarrow C_{Res}}(t)I_{Res}(t) \label{eq9} \end{equation} \begin{equation} \frac{dD_{Vul}(t)}{dt} = k_{I_{Vul}\rightarrow D_{Vul}}(t)I_{Vul}(t) \label{eq10} \end{equation} \begin{equation} \frac{dD_{Res}(t)}{dt} = k_{I_{Res}\rightarrow D_{Res}}(t)I_{Res}(t) \label{eq11} \end{equation} In the following, we explain this set of coupled equations. Here $I(t)$ is the total number of infectives at any time $t$, that is, $I(t)=I_{Vul}(t)+I_{Res}(t)$.
This is the variable that couples the two population sub-categories. The $k(t)$ are the rate constants associated with the processes described in the subscripts with an arrow. We would like to point out that the rates in the above equations of motion are all assumed to be time dependent. These rate constants contain all the basic information and are also connected with $R_0$. In our earlier study, we employed a time dependent rate to produce certain features observed in the time dependence of new cases, such as a double-peaked population structure.\cite{Mukherjee2020} The time dependence of the rate can be employed to include certain dynamical features like the crossover from local contact to community transmission. It is worth stressing that the modelling of these time dependent rate constants plays a pivotal role in the SIR scheme. We propagate these equations numerically to obtain the respective temporal evolution of each kind of population fraction. From the temporal profiles we can extract several important quantities after a long time (that is, the end of the spread), for example, (i) the peak height of the active infected cases, (ii) the fraction of cured population, (iii) the fraction of dead population, (iv) the fraction of uninfected population, (v) the time required to reach the immunity threshold, etc. We can regard these equations together as a system of reacting species, as in a system of chemical reactions. We solve these equations with two different sets of the rate constant values and aim to understand the relative damages to the vulnerable and resilient populations. The values of the rate constants are provided in Table \ref{tab1}. We keep $k_{S_{Vul}\rightarrow I_{Vul}}$ and $k_{S_{Res}\rightarrow I_{Res}}$ the same, which depicts the same probability of getting infected for both sub-categories. However, the rate constants associated with recovery and death differ by orders of magnitude between Vul and Res.
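A minimal forward-Euler sketch of Eqs. \ref{eq4} - \ref{eq11} with constant (time-independent) rate constants is given below, using the set-1 values of Table \ref{tab1} and 20\% initial vulnerables. The step size, run length, and the proportional split of the initial infected fraction between the two groups are illustrative assumptions of this sketch.

```python
# Forward-Euler integration of the modified SIR model, Eqs. (4)-(11),
# with the set-1 rate constants of Table 1 and 20% initial vulnerables.
def modified_sir(frac_vul=0.20, i_tot0=0.001, days=500, dt=0.01):
    k_si_v, k_ic_v, k_id_v = 0.50, 0.05, 0.10   # vulnerable: infect/cure/die
    k_si_r, k_ic_r, k_id_r = 0.50, 0.50, 0.05   # resilient: infect/cure/die
    sv = frac_vul * (1.0 - i_tot0); sr = (1.0 - frac_vul) * (1.0 - i_tot0)
    iv = frac_vul * i_tot0;         ir = (1.0 - frac_vul) * i_tot0
    cv = cr = dv = dr = 0.0
    for _ in range(int(days / dt)):
        i = iv + ir                  # I(t) = I_Vul + I_Res couples the groups
        dsv = -k_si_v * sv * i
        dsr = -k_si_r * sr * i
        div = -dsv - (k_ic_v + k_id_v) * iv
        dir_ = -dsr - (k_ic_r + k_id_r) * ir
        dcv, dcr = k_ic_v * iv, k_ic_r * ir
        ddv, ddr = k_id_v * iv, k_id_r * ir
        sv += dsv * dt; sr += dsr * dt; iv += div * dt; ir += dir_ * dt
        cv += dcv * dt; cr += dcr * dt; dv += ddv * dt; dr += ddr * dt
    return dict(S_V=sv, S_R=sr, I_V=iv, I_R=ir, C_V=cv, C_R=cr, D_V=dv, D_R=dr)

out = modified_sir()
# Per-capita deaths: the vulnerables are hit far harder than the resilients.
print(round(out["D_V"] / 0.20, 3), round(out["D_R"] / 0.80, 3))
```

With these inputs the sketch reproduces the qualitative outcome reported later for set-1: roughly a third of the vulnerables die, against only a few percent of the resilients.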
We now discuss the procedure we follow to assign different rate constants to the vulnerables and resilients. In a previous study we estimated the values of $k_{S\rightarrow I}$ and $k_{I\rightarrow R}$ by fitting the infected/cured/death vs. time data for India (source: www.covid19india.org).\cite{Mukherjee2020} We plot the rate of change of the cured ($dC/dt$) and dead ($dD/dt$) population against the infected population; the slope gives the rate. This procedure provides us with the required estimates of $k_{I\rightarrow C}$ and $k_{I\rightarrow D}$. For India, till $27^{th}$ May, the estimated values are $k_{I\rightarrow C} = 0.026 \:day^{-1}$ and $k_{I\rightarrow D} = 0.0013 \:day^{-1}$. That is, $k_{I\rightarrow C}$ is approximately 20 times $k_{I\rightarrow D}$. However, for countries like Italy, Spain, and the USA, $k_{I\rightarrow D}$ was significantly higher. This comparison, however, takes no cognizance of the relative time scales, and therefore should be taken with care. These values are mean field in nature and average over enormous spatial heterogeneity. If we look at the state-wise (or district-wise) statistics we find a large dispersion. On the other hand, the determination of $k_{S\rightarrow I}$ is not that straightforward, as the equations containing $k_{S\rightarrow I}$ are non-linear in nature in the SIR model. Hence one needs to obtain a good estimate of $R_0$ and calculate $k_{S\rightarrow I}$ from Eq. \ref{eq3}. As mentioned above, $R_0$ also exhibits spatiotemporal heterogeneity, which makes the problem of estimating the rate constants even more challenging.
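The slope estimate of $k_{I\rightarrow C}$ (and likewise $k_{I\rightarrow D}$) described above amounts to a least-squares fit of $dC/dt$ against $I$ constrained through the origin. The sketch below recovers a known rate from synthetic, noise-free data; the $I(t)$ samples are illustrative, with the slope set to the India estimate quoted above.

```python
# Rate-constant extraction as used in the text: since dC/dt = k_{I->C} I,
# the slope of dC/dt vs. I gives k_{I->C}. Synthetic, noise-free data here.
def slope_through_origin(x, y):
    """Least-squares slope k of y = k*x (fit constrained through origin)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

k_true = 0.026                                    # day^-1, illustrative
infected = [100.0, 250.0, 600.0, 1200.0, 2500.0]  # I(t) samples
dc_dt = [k_true * i for i in infected]            # corresponding dC/dt
print(slope_through_origin(infected, dc_dt))      # recovers 0.026
```

With real, noisy daily data one would instead difference the cumulative cured counts to estimate $dC/dt$ before fitting.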
For example, in Italy $R_0$ has been estimated to be $\sim$3.0-6.0 and in the Hunan province of China it is $\sim$1.73-5.25.\cite{Wangping2020} In a recent study on Wuhan, the transmission rate ($k_{S\rightarrow I}$) is assumed to vary from 0.59 to $1.68 \:day^{-1}$.\cite{Lin2020} However, the data required to extract the rate constants associated with the two individual sub-categories, namely, vulnerable and resilient, are not available separately. As the values of the rate constants are connected to the basic reproduction number ($R_0$), we choose the inputs, by preserving the basic features, such that the average value of $R_0$ yields an acceptable number, in light of acquired information. Next we tune the parameters such that the maximum of the active cases falls in the range of $\sim$60-90 days, as observed for most countries. We note that we consider these values only to study the trends; they do not strictly correspond to any particular region in reality. \begin{table}[ht] \caption{The values of rate constants used to solve the system of coupled differential equations [Eq.\ref{eq4} - \ref{eq11}]. The unit of the rate constants is $day^{-1}$.} \centering \begin{tabular}{|c|c|c|} \hline Rate Const. & Set-1 & Set-2\\ \hline \hline $k_{S_{Vul}\rightarrow I_{Vul}}$ & 0.50 & 0.78\\ \hline $k_{I_{Vul}\rightarrow C_{Vul}}$ & 0.05 & 0.05\\ \hline $k_{I_{Vul}\rightarrow D_{Vul}}$ & 0.10 & 0.10\\ \hline $k_{S_{Res}\rightarrow I_{Res}}$ & 0.50 & 0.78\\ \hline $k_{I_{Res}\rightarrow C_{Res}}$ & 0.50 & 0.50\\ \hline $k_{I_{Res}\rightarrow D_{Res}}$ & 0.05 & 0.05\\ \hline \end{tabular} \label{tab1} \end{table} We invoke two different values of $R_0$ for the two different sub-categories. For set-1, $R_0^{Vul} = 3.33$ and $R_0^{Res} = 0.91$. The larger value of $R_0$ for the vulnerables arises from their slower rate of recovery, Eq. \ref{eq3}. On the other hand, for set-2, $R_0^{Vul} = 5.20$ and $R_0^{Res} = 1.42$.
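These per-group values follow directly from Eq. \ref{eq3} (with $S \approx 1$) and the Table \ref{tab1} rates; applying the population weighting introduced below (Eq. \ref{eq12}) to the rounded values reproduces the entries of Table \ref{tab2}. A short check:

```python
# Per-group R_0 from Eq. (3) with S ~ 1, using the Table 1 rate constants:
# R_0 = k_{S->I} / (k_{I->C} + k_{I->D}).
def r0_group(k_si, k_ic, k_id):
    return k_si / (k_ic + k_id)

print(round(r0_group(0.50, 0.05, 0.10), 2))  # 3.33 (set-1, Vul)
print(round(r0_group(0.50, 0.50, 0.05), 2))  # 0.91 (set-1, Res)
print(round(r0_group(0.78, 0.05, 0.10), 2))  # 5.2  (set-2, Vul)
print(round(r0_group(0.78, 0.50, 0.05), 2))  # 1.42 (set-2, Res)

# Population-weighted effective R_0 (Eq. 12); reproduces Table 2 rows.
def r0_eff(r0_vul, r0_res, frac_vul):
    return frac_vul * r0_vul + (1.0 - frac_vul) * r0_res

print(round(r0_eff(3.33, 0.91, 0.20), 3))  # 1.394 (set-1, 20% vulnerable)
print(round(r0_eff(5.20, 1.42, 0.20), 3))  # 2.176 (set-2, 20% vulnerable)
```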
We obtain these values by considering each of the populations to be individually normalised (that is, $\sim$100\%). In such a situation the effective $R_0$ can be calculated as follows (Eq. \ref{eq12}). \begin{equation} R_0^{eff} = \frac{R_0^{Vul}N_{Vul}+R_0^{Res}N_{Res}}{N_{Vul}+N_{Res}} \label{eq12} \end{equation} Here $N_{Vul}$ and $N_{Res}$ represent the number of people in the vulnerable and resilient category respectively. In all our calculations we start with a total infected fraction of 0.001 and vary the percentage of vulnerable population from 5\%-40\%. By using Eq. \ref{eq12} we calculate the effective $R_0$ values for different ratios of vulnerable to resilient population. We find that $R_0$ varies from 1.03 to 1.88 for set-1 and from 1.61 to 2.93 for set-2. In a way, set-1 represents a more controlled situation compared to set-2 (Table \ref{tab2}). \begin{table}[ht] \caption{The basic reproduction number ($R_0$) for the parameters described in Table \ref{tab1} (set-1 and set-2) for various ratios of vulnerable to resilient population.} \centering \begin{tabular}{|c|c|c|c|} \hline \% & \% & \multicolumn{2}{c|}{$R_0$} \\ \cline{3-4} vulnerable & resilient & Set-1 & Set-2 \\ \hline \hline 5 & 95 & 1.031 & 1.609 \\ \hline 10 & 90 & 1.152 & 1.798 \\ \hline 15 & 85 & 1.273 & 1.987 \\ \hline 20 & 80 & 1.394 & 2.176 \\ \hline 25 & 75 & 1.515 & 2.365 \\ \hline 30 & 70 & 1.636 & 2.554 \\ \hline 35 & 65 & 1.757 & 2.743 \\ \hline 40 & 60 & 1.878 & 2.932 \\ \hline \end{tabular} \label{tab2} \end{table} \subsection{Stochastic Cellular Automata Simulation} Stochastic cellular automata (CA) simulations give a microscopic and nonlocal picture of the problem at hand. Such simulations are often used to model several physical phenomena.\cite{Hollingsworth2004, Seybold1998, Wolfram1983, Bartolozzi2004, Soares-Filho2002, Goltsev2010, Almeida2011} Unlike the mathematical model, CA simulations can directly establish a physical map of the disease spread.
Moreover, we incorporate several region-specific and disease-specific parameters in our CA simulations, which gives a general outlook to our investigations. A detailed list of the parameters and associated symbols can be found in our previous work.\cite{Mukherjee2020} The spread of COVID-19 is strongly inhomogeneous, so a homogeneous model fails to capture many aspects. In a real-world scenario, the non-local description may often become important in determining the fate of a pandemic in a given geographical region. In such a case, the population parameters are space-dependent. Moreover, the rate constants also have a spatial distribution. Hence, solutions of these equations are highly non-trivial, and a large scale cellular automata simulation may capture these inherent spatiotemporal heterogeneities. In this work, we neglect the effects of social distancing and quarantine, since we aim at establishing a relation between the percentage of mortality and immunization by an unhindered transmission of the disease within the whole population. Calculation of the rates of transmission and recovery/death can often be difficult due to several reasons, like unavailability of data, political or demographic complications, etc. This becomes particularly nontrivial when we consider the process with respect to a given population distribution of vulnerable and resilient individuals. The probabilistic approach employed in our simulations makes it easier to study the process, since obtaining an average probability for each of the processes is much more practical. We use the Moore definition \cite{Fu2003, White2007, Sirakoulis2000} to denote the neighbourhood of a given person. The salient features of our simulation are detailed in our previous work.\cite{Mukherjee2020} Here, we summarize our CA simulation methodology. We start with a land randomly occupied by susceptibles and infectives.
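A minimal sketch of the probabilistic update rule summarized here (Moore neighbourhood, constant transmission probability, probabilistic recovery or death) is given below. The grid size, the parameter values, the periodic boundaries, and the single-seed initial condition are illustrative assumptions of this sketch, not the paper's calibrated setup.

```python
import random

# States of a lattice cell in the stochastic CA sketch.
EMPTY, SUS, INF, CURED, DEAD = 0, 1, 2, 3, 4

def step(grid, p_transmit=0.3, p_die=0.01, p_recover=0.05, rng=random):
    """One synchronous CA update on a periodic square lattice."""
    n = len(grid)
    new = [row[:] for row in grid]
    for x in range(n):
        for y in range(n):
            if grid[x][y] == SUS:
                # Moore (8-cell) neighbourhood with periodic boundaries.
                nbrs = [grid[(x + dx) % n][(y + dy) % n]
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0)]
                if INF in nbrs and rng.random() < p_transmit:
                    new[x][y] = INF
            elif grid[x][y] == INF:
                u = rng.random()
                if u < p_die:
                    new[x][y] = DEAD
                elif u < p_die + p_recover:
                    new[x][y] = CURED    # cured individuals stay immune
    return new

# Seed one infective at the centre of a 9x9 susceptible patch and evolve.
grid = [[SUS] * 9 for _ in range(9)]
grid[4][4] = INF
for _ in range(20):
    grid = step(grid)
```

In the full simulation each individual additionally carries a Vul/Res tag with its own recovery probability; extending the sketch amounts to storing that tag per cell.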
The population distribution is such that 5\% and 0.05\% of the total available land is covered by susceptibles and infectives respectively. We divide the population into vulnerable and resilient individuals with respect to their probabilities of recovery ($P_R^{Vul}$ and $P_R^{Res}$). Vulnerables primarily include people above the age of 60. This also includes people with serious health issues, who are more likely to succumb if infected.\cite{Remuzzi2020, Ruan2020, Wu2020} The resilients, on the other hand, are the young fraction of the society with no severe health conditions. When an infective comes into the neighbourhood of a susceptible, the latter is converted to an infective with a given probability of transmission, which is considered to be equal and time independent (constant) for both vulnerables and resilients. The time period of infection is determined by the probability of recovery and the probability of remaining infected in a given simulation step. In this work, we consider the latter to be 0.99.\cite{Mukherjee2020} An individual, once cured of the infection, becomes immune to the disease. We run our simulations for a given number of steps ($N$). It should be noted that the time unit is not well-defined for these simulations. To get an estimate of time, the results need to be compared with our theoretical model. \section{Results and Discussion} \label{sec3} \subsection{Numerical solutions of the SIR model} \label{sec3A} \begin{figure}[ht] \centering \includegraphics[width=3.4in,keepaspectratio=true]{Figures/fig3.jpg} \caption{Population disease progression as obtained from the solution of the system of eight coupled non-linear differential equations presented in Eqs. \ref{eq4} - \ref{eq11}, as a function of time, for the two different situations described in Table \ref{tab1}. Plots show the increase in the total immunity (blue) with the decrease in the vulnerable population (maroon) and resilient population (green) for (a) Set-1 and (b) Set-2.
In these two calculations we start with $V:R=1:4$. In both cases the percentage demise in the vulnerable population is significantly higher.} \label{fig3} \end{figure} Here we present the results from the numerical solutions of Eqs. \ref{eq4} - \ref{eq11} in Fig. \ref{fig3}. We choose two sets of rate constants, set-1 (Fig. \ref{fig3}a) and set-2 (Fig. \ref{fig3}b), and obtain the changes in the population of vulnerables and resilients. With our choice of parameters (Table \ref{tab1}) for set-1, we observe a 40.8\% increase in the immunized population. In order to achieve the 40.8\% immunity, a region loses 4.7\% of its resilient population and 34.3\% of its vulnerable population. On the other hand, for set-2 a region loses 7.9\% of its resilient population and 57.1\% of its vulnerable population in order to achieve $\sim$68\% immunity (that could be the HIT for COVID-19). Hence, it is clear that in both cases the vulnerables are significantly affected. We note that with an increased infection rate the timescale of the saturation of the temporal profiles is drastically reduced. The graphs presented in Fig. \ref{fig3} are obtained for 20\% initial vulnerable population. In Fig. \ref{fig4}a, we show the time evolution of the total immunity percentage. In order to study the effect of fast (early) vs slow (late) achievement of the immunity saturation, we plot the percentage survival of the total population against the time required to attain the immunity threshold ($t_{Im}$) for different values of $k_{S\rightarrow I}$ (Fig. \ref{fig4}b). We find that the percentage of survival increases linearly with increasing $t_{Im}$. This indicates that a quick achievement of immunity saturation could lead to fatal consequences. \textit{If a society opts for herd immunity, it has to be a slow process}.
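For orientation, the structure of such a two-group model can be sketched with a simple forward-Euler integration in Python. This is an illustrative sketch only: the rate constants below are placeholders (not the set-1/set-2 values of Table \ref{tab1}), and the equations merely mimic the generic form of Eqs. \ref{eq4} - \ref{eq11}, with one shared infection channel and separate cure/death channels per group.

```python
# Forward-Euler sketch of an eight-variable SIR-type model with vulnerable (V)
# and resilient (R) sub-populations.  All rate constants are illustrative
# placeholders, NOT the values of Table 1; both groups share one infection
# channel, and each group has its own cure and death channels.

def run_sir(k_SI=0.5,                        # susceptible -> infected
            k_IC_V=0.096, k_ID_V=0.024,     # vulnerable: cure / death
            k_IC_R=0.114, k_ID_R=0.006,     # resilient: cure / death
            f_V=0.2,                        # initial vulnerable fraction (1:4)
            dt=0.01, t_max=200.0):
    S_V, I_V, C_V, D_V = 0.999 * f_V, 0.001 * f_V, 0.0, 0.0
    S_R, I_R, C_R, D_R = 0.999 * (1 - f_V), 0.001 * (1 - f_V), 0.0, 0.0
    for _ in range(int(t_max / dt)):
        I_tot = I_V + I_R                   # both groups infect both groups
        dS_V = -k_SI * S_V * I_tot
        dS_R = -k_SI * S_R * I_tot
        dI_V = -dS_V - (k_IC_V + k_ID_V) * I_V
        dI_R = -dS_R - (k_IC_R + k_ID_R) * I_R
        C_V += k_IC_V * I_V * dt; D_V += k_ID_V * I_V * dt
        C_R += k_IC_R * I_R * dt; D_R += k_ID_R * I_R * dt
        S_V += dS_V * dt; I_V += dI_V * dt
        S_R += dS_R * dt; I_R += dI_R * dt
    return dict(S_V=S_V, I_V=I_V, C_V=C_V, D_V=D_V,
                S_R=S_R, I_R=I_R, C_R=C_R, D_R=D_R)

final = run_sir()
immunized = final["C_V"] + final["C_R"]     # cured individuals are immune
```

Even with these placeholder rates, the vulnerable group loses a markedly larger fraction of its members than the resilient group, reproducing the qualitative trend of Fig. \ref{fig3}.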
\begin{figure}[h] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig4.jpg} \caption{The effect of different rates of attaining herd immunity on the total population. (a) Plot of the time evolution of the percentage of total immunized population for different values of the susceptible-to-infected rate constants. With increasing $k_{S\rightarrow I}$ we see an increase in the percentage immunity and a decrease in the time required to reach saturation ($t_{Im}$). (b) Percentage survival (uninfected and cured population) of the total population against $t_{Im}$. The two quantities show linear dependence. That is, the percentage survival increases as we take more time to reach immunity saturation. Note that both the X and the Y axes are the outcome of the numerical solution and not provided as inputs. The calculations are done using a fixed Vul:Res=1:4 and the rate constants associated with recovery/death are also kept the same as given in Table \ref{tab1}.} \label{fig4} \end{figure} To make the immunity-gaining process slow (which leads to relatively fewer casualties), the rate of infection ($k_{S\rightarrow I}$) needs to be brought down. On the other hand, the rate of removal (recovery and death), $k_{I\rightarrow R}$, depends primarily on the disease and partly on the presently available healthcare facilities. $k_{S\rightarrow I}$ can be controlled by employing effective strategies like lockdown, quarantine, and social distancing. \begin{figure}[H] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig5.jpg} \caption{The effect of the change in the initial percentage of the vulnerable population on the relative infection and recovery for the sub-categories, namely, vulnerables and resilients.
Plots show the dependence of the infection peak and the percentage of cured and dead population for the vulnerable (maroon) and resilient (green) populations on the initial fraction of the vulnerable population, as obtained from the solution of the modified SIR model described in Eqs. \ref{eq4} - \ref{eq11}. Figures (a)-(c) are for set-1 and (d)-(f) for set-2. The quantities show a non-linear dependence and enhanced fatality for the vulnerables.} \label{fig5} \end{figure} Next, we vary the \% of initial vulnerable population from 5\% to 40\% and obtain the \% of highest active cases (that is, the maxima in the temporal variation of $I_V(t)$ or $I_R(t)$), the \% of cured population and the \% of death. The range is chosen in order to represent different regions/countries. For example, in India only $\sim8\%$ of the entire population is above 60 years whereas, in countries like Italy and Germany, the number is over 20\%. We obtain Fig. \ref{fig5}a - \ref{fig5}c for set-1 and Fig. \ref{fig5}d - \ref{fig5}f for set-2. In both cases the variation of the infected peak maxima with \% vulnerable shows a nearly linear increase with a higher slope for the vulnerables (Fig. \ref{fig5}a and \ref{fig5}d). Interestingly, the \% cured (Fig. \ref{fig5}b and \ref{fig5}e) and \% dead (Fig. \ref{fig5}c and \ref{fig5}f) show a nonlinear dependence on \% vulnerable. It clearly shows that the damage to the vulnerable population is huge when the \% of vulnerables increases. We plot (Fig. \ref{fig6}) the percentage of deaths for both the subcategories against the herd immunity threshold for a given Vul:Res composition (1:4). This is to show the increasing damage with respect to $H_t$. We find that the trend is linear for both the sets of parameters and the relative fatality is substantially higher for the vulnerables.
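Before turning to the CA results, the stochastic update summarized earlier (Moore neighbourhood, probabilistic transmission, probabilistic recovery with $P_R^{Vul} \leq P_R^{Res}$) can be sketched as follows. This is a simplified illustration: the grid size, the occupancy values and the omission of person movement are assumptions of the sketch, not the exact settings of our simulations.

```python
import random

# One synchronous CA step on a toroidal grid.  States and parameters are a
# simplified illustration of the update rule described in the text; person
# movement (present in the full simulations) is omitted here.
EMPTY, SUS, INF, CURED, DEAD = range(5)

def ca_step(grid, vul, rng, p_trans=0.3, p_stay=0.99,
            p_rec_vul=0.8, p_rec_res=0.95):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == SUS:
                # Moore neighbourhood: the eight surrounding cells
                nbrs = [grid[(i + di) % n][(j + dj) % n]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)]
                if INF in nbrs and rng.random() < p_trans:
                    new[i][j] = INF
            elif grid[i][j] == INF and rng.random() >= p_stay:
                # infection period ends: recover or die, depending on category
                p_rec = p_rec_vul if vul[i][j] else p_rec_res
                new[i][j] = CURED if rng.random() < p_rec else DEAD
    return new

# populate 5% susceptibles (20% of them vulnerable) and 0.05% infectives
rng, n = random.Random(42), 60
grid = [[EMPTY] * n for _ in range(n)]
vul = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        r = rng.random()
        if r < 0.0005:
            grid[i][j] = INF
        elif r < 0.0505:
            grid[i][j] = SUS
            vul[i][j] = rng.random() < 0.2
persons0 = sum(1 for row in grid for c in row if c != EMPTY)
for _ in range(500):
    grid = ca_step(grid, vul, rng)
counts = {s: sum(row.count(s) for row in grid) for s in (SUS, INF, CURED, DEAD)}
```

Averaging such runs over many independent seeds (the results below use 100 realizations) yields cured and dead fractions analogous to those plotted in Figs. \ref{fig7} and \ref{fig8}.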
\begin{figure}[H] \centering \includegraphics[width=3.2in,keepaspectratio=true]{Figures/fig6.jpg} \caption{Percentage outcome of different herd immunity thresholds ($H_t$) on the vulnerable and resilient population. Plot of percentage deaths against $H_t$ calculated from Eq. \ref{eq2} for (a) set-1, and (b) set-2. In both cases the dependence is linear, with substantially more damage to the vulnerable population. The values on the Y axes are individually normalised.} \label{fig6} \end{figure} \subsection{Stochastic Cellular Automata Simulations} \label{sec3B} \subsubsection{Dependence on the initial population distribution} Here, we keep the probability of transmission of the disease time-independent and equal for both resilients and vulnerables. We change the initial fraction of the vulnerable section of the total population from 5\% to 40\%. In Fig. \ref{fig7} we plot the \% of cured individuals (resilients and vulnerables) against the \% of total immunization when the temporal progression of the population reaches saturation. As discussed earlier, herd immunity is obtained when a major section of the population becomes immune, post infection. However, apart from gaining immunity, this process involves the death of many infected individuals according to their survival probability. The probability of recovery of the resilients is higher than that of the vulnerables. Here, these two probabilities are taken as 0.95 and 0.8 respectively.\cite{Verity2020,Ruan2020} In Fig. \ref{fig7} the abscissa is the percentage of the total population that becomes immune after recovering from the infection. The ordinate quantifies the percentage of cured resilients and vulnerables with respect to the total initial population. \textit{With increase in the immunity attained in the society, a significant decrease in the percentage of cured vulnerable individuals is observed}.
This implies that the higher the percentage of immunization in the total population, the greater is the probability of death of the vulnerable section. Hence herd immunity results in the death of a major fraction of the vulnerable population. This stratum of the society mainly includes old people (age greater than 60 years) and people with serious health conditions or comorbidity.\cite{Fang2020,Yang2020} The geographical regions with demographic distributions having a higher fraction of people of age above $\sim$60 years are among the worst affected. For example, Italy suffered the loss of many aged people as a result of the COVID-19 pandemic.\cite{Livingston2020,Onder2020} \begin{figure}[H] \centering \includegraphics[width=2.5in,keepaspectratio=true]{Figures/fig7.jpg} \caption{Percentage of cured resilients and vulnerables in the population in the course of attaining herd immunity. The percentage of cured individuals is shown as a function of the percentage of total population immunized after getting infected. This is obtained by averaging over 100 CA simulations. Green shows the percentage of cured individuals for the resilient fraction of the society and maroon denotes the same for the vulnerable people.} \label{fig7} \end{figure} In Fig. \ref{fig8}a, we show the time evolution of the fraction of vulnerables and resilients in the total population for different initial \% of vulnerables. The fractions are calculated with respect to the total initial population. We see that with increase in the initial \% of vulnerables, the number of resilients dying shows a slight decrease, whereas the number of dead vulnerables increases significantly. This observation is clarified in Fig. \ref{fig8}b. Here we plot the absolute change in the fraction of resilients and vulnerables as functions of the initial \% of vulnerables. Both show linear dependence. The gradient (slope) is negative for resilients and positive for vulnerables.
However, we find that the absolute value of the slope for the latter is $\sim$5 times higher than that of the former. This denotes that countries with a higher population of elderly and vulnerable people in the society incur a greater loss in the number of vulnerable individuals. \begin{figure}[H] \centering \includegraphics[width=3.4in,keepaspectratio=true]{Figures/fig8.jpg} \caption{(a) Population dynamics represented as the temporal evolution of the fractions of the resilient and vulnerable sections of the population, shown for varying initial distributions of resilients and vulnerables. The colour bar on the right hand side shows the initial \% of vulnerables in the total population. (b) The absolute decrease in the resilient (green) and vulnerable (maroon) fractions of the total population as functions of the initial percentage of vulnerables.} \label{fig8} \end{figure} \subsubsection{Dependence on the probability of recovery} Now, we keep the initial population distribution fixed at 20\% vulnerable and 80\% resilient individuals. We change the probability of recovery of these two categories ($P_R^{Vul}$ and $P_R^{Res}$) with the constraint $P_R^{Vul} \leq P_R^{Res}$. Accordingly, we change these two probabilities from 0.6 to 0.8 and 0.8 to 0.95, respectively. We choose these values according to reported case fatality ratios for the SARS-CoV-2 pandemic.\cite{Verity2020, Ruan2020} \begin{figure}[ht] \centering \includegraphics[width=3in,keepaspectratio=true]{Figures/fig9.png} \caption{Interdependence of different fractions of the population as the immunity evolves. Percentage of immunized (colour coded) represented as a function of the percentage of survival for vulnerables and resilients. The proportions are with respect to the total initial population. The primary variables are the probabilities of recovery of the vulnerables and the resilients.
The results are obtained after averaging over 100 simulations.} \label{fig9} \end{figure} For every pair of $P_R^{Vul}$ and $P_R^{Res}$ we get a value of the percentage of vulnerables and resilients who survive and a fraction of the population that gets immunized. In Fig. \ref{fig9} we plot the survival \% of vulnerables and resilients on the two perpendicular axes and represent the \% immunized as colour codes according to the colour gradation bar on the right hand side. In this contour representation, red denotes low immunity and blue denotes higher immunity. The survival \% of the vulnerables is lower than that of the resilients. The percentage of immunized population is higher (blue) for maximum survival of the resilients as compared to that of the vulnerables. This means that, to attain higher immunity in the population, a greater number of old and vulnerable people suffer death as compared to resilients. Hence, attainment of herd immunity comes with the cost of a higher mortality of the vulnerable section of the society. \section{Summary and Conclusion} Any epidemic is a dynamic process where time dependence plays a crucial role in the control of the spread and the damage, that is, the outcome. COVID-19 is a pandemic which is currently under intense scrutiny by all and sundry, and many aspects are yet to be understood. Every move by the government, and the population in general, is of crucial importance. Each pandemic comes with unique characteristics that deserve special treatments, not just medical and clinical but also sociological. In each such epidemic, immunity plays a critical role. The Spanish Flu mainly attacked the age group between 20 and 30 years. This is the age group with maximum immunity. In the case of COVID-19, we again face the sad reality that a certain section of the society is substantially more vulnerable than other sections. The vulnerable section consists of age groups above 60-65 years of age, and people with comorbidity.
There is as yet no further classification, although it is conceivable that, as we understand the disease better and more precisely, a better perception of the danger will emerge. An epidemic often starts by a process of nucleation, which is an important phenomenon often studied in physics and chemistry. The process of nucleation is initiated by a sudden appearance of a group of infected individuals in a region. This may be triggered by a laboratory accident, by infection from eating wild animals like bats, pangolins, etc., or by the arrival of infected tourists, and so on. The process may depend on the nature of the geography and demography of the country or region. The initial period of the process is often slow. After the initial nucleation, the disease spreads by a diffusion process into the susceptible population. Hence, it is a percolation with a temporal evolution. In order to address the issue of vulnerability of the population and the outcome with the progression of the epidemic, we carry out a theoretical analysis with the objective of analyzing the consequences of aiming for herd immunity without a vaccine, or a good drug, in the context of the present COVID-19 pandemic. We develop and solve a modified SIR model numerically and by employing cellular automata simulations. We particularly probed the following question: what is the dependence of mortality on the rate of attaining herd immunity? One of the key results of the present study is the dependence of the percentage survival on the rate of attainment of the immunity threshold. We find that a late attainment of the immunity saturation leads to relatively less fatality. We show that approximately 50-60\% of the vulnerables might lose their lives in order to attain $\sim$70\% total immunized population. On the contrary, the mortality of the resilient fraction of the population is relatively low, maybe just about 10\%. We find a non-linear trend in the dependence of the cured and dead population on the initial population of the vulnerables.
This is because, as the number of vulnerables increases, immunity acquired through infection of a larger fraction of the population cannot protect the vulnerables unless deliberate interventions are made. While we discuss herd immunity by infection in this work, the other, more sustainable option is herd immunity by vaccination. For example, diseases like smallpox, polio, etc. have been wiped off the face of the earth by vaccination. This is particularly crucial for diseases with high mortality rates. However, for any novel disease, preparation of a vaccine can take years. In the case of the present COVID-19 pandemic, for instance, extensive research is going on globally in search of a vaccine.\cite{Chen2020} However, no promising result has been obtained in almost five months, and researchers believe it may take more than a year to prepare the vaccine. \begin{acknowledgments} We thank Prof. Sarika Bhattacharyya (NCL, Pune) and Prof. Suman Chakrabarty (SNCNCBS, Kolkata) for several fruitful discussions and comments. The authors thank the Department of Science and Technology (DST), India for financial support. BB thanks the Sir J. C. Bose fellowship for partial financial support. S. Mo. thanks the University Grants Commission (UGC), India for a research fellowship. S. Mu. thanks the DST-INSPIRE programme for providing a research fellowship. \end{acknowledgments}
\section{Introduction}\label{secintroduction} Einstein's general theory of relativity may not be the final word on gravity. As beautiful and successful as it is, it seems to have serious problems both on very small and on very large length scales. A problem on small length scales is signaled by $90$ years of unwavering resistance of general relativity to quantization. A problem on the largest length scales is indicated by the present search for ``dark energy'' to explain the accelerated expansion of the universe within general relativity. These problems provide the main motivation for continued research on alternative theories of gravity (see, for example, the broad review \cite{CapozzielloDeLau11} of extended theories of gravity). A composite higher derivative theory of gravity has recently been proposed in \cite{hco231}. The general idea of a composite theory is to specify the variables of a ``workhorse theory'' in terms of more fundamental variables and their time derivatives \cite{hco235,hco237}. The occurrence of time derivatives in the ``composition rule'' leads to a higher derivative theory, which is naturally tamed by the constraints resulting from the composition rule. For the composite theory of gravity proposed in \cite{hco231}, the underlying workhorse theory is the Yang-Mills theory \cite{YangMills54} based on the Lorentz group and the composition rule expresses the corresponding gauge vector fields in terms of the \emph{tetrad} or \emph{vierbein} variables providing a Cholesky-type decomposition of a metric. As a consequence of the composite structure of the proposed theory, it differs significantly from contentious previous attempts \cite{Utiyama56,Yang74,BlagojevicHehl} to turn the Yang-Mills theory based on the Lorentz group into a theory of gravity. Whereas the original formulation of composite gravity in \cite{hco231} was based on the Lagrangian framework, we here switch to the Hamiltonian approach. As a Hamiltonian formulation separates time from space, it certainly cannot provide the most elegant formulation of relativistic theories.
However, the Hamiltonian framework has clear advantages by offering a natural formulation of constraints and a straightforward canonical quantization procedure. For bringing constraints and quantization together, we here establish the constraints resulting from the composition rule as second class constraints that can be treated via Dirac brackets \cite{Dirac50,Dirac58a,Dirac58b}, whereas gauge constraints can be handled separately by BRST quantization (the acronym derives from the names of the authors of the original papers \cite{BecchiRouetStora76,Tyutin75}; see also \cite{Nemeschanskyetal86,hco229}). Moreover, the Hamiltonian approach provides the natural starting point for a generalization to dissipative systems. In particular, this approach allows us to formulate quantum master equations \cite{BreuerPetru,Weiss,hco199,hco221} and to make gravity accessible to the robust framework of dissipative quantum field theory \cite{hcoqft}. The number of fields involved in the composite theory of gravity is enormously large. Each Yang-Mills vector field has four components satisfying second-order differential equations so that, in the Hamiltonian approach, four additional conjugate momenta are required. For the Lorentz group, the six Yang-Mills vector fields associated with six infinitesimal generators (three rotations, three boosts) thus result in $6 \times 8 = 48$ fields. Gauge constraints eventually reduce this number of degrees of freedom by a factor of two (simply speaking, among the four components of a vector field, only the two transverse components carry physical information). In addition, we consider $16$ tetrad or vierbein variables, again coming with conjugate momenta, so that we deal with a total of $48+32=80$ fields in our canonical Hamiltonian approach. Actually, this is not even the end of the story as additional ghost fields would be introduced in the BRST approach for handling the gauge constraints. 
Our approach differs from the traditional Hamiltonian formulation of general higher derivative theories developed by Ostrogradsky \cite{Ostrogradsky1850,Woodard15,Gitmanetal83}. The Ostrogradsky framework would involve only $4 \times 16 = 64$ fields, but would possess much less structure and less natural constraints \cite{hco235}. A key task of the present paper is to elaborate in detail in the context of the linearized theory that the constraints from the composition rule, together with the gauge constraints, reduce this enormous number of fields to just a few degrees of freedom, as expected for a theory of gravity. Another important task of the present discussion of the weak-field approximation is to provide guidance for the discussion of the fully nonlinear composite theory of gravity. Understanding the structure of the constraints is helpful also for proper coupling of the gravitational field to matter. Whereas the coupling of the Yang-Mills fields to matter was considered previously \cite{hco231}, we here introduce a properly matched additional coupling of the tetrad fields to matter. The structure of the paper is as follows. In a first step, we introduce the space of $80$ fields for our canonical Hamiltonian formulation of linearized composite gravity, with special emphasis on gauge transformations and the implications of the composition rule (Section~\ref{secarena}). For the pure field theory in the absence of matter, we elaborate all evolution equations and constraints in detail and we readily find the solutions for gravitational waves and static isotropic systems (Section~\ref{secpurefields}). We subsequently introduce a double coupling mechanism for Yang-Mills and tetrad fields to matter into composite gravity. The modifications resulting from the inclusion of matter are elaborated to obtain a complete theory of gravity that can be compared to linearized general relativity (Section~\ref{secwithmatter}). 
We finally summarize our results and draw a number of conclusions (Section~\ref{secconclusions}). The relation between the Lagrangian and Hamiltonian approaches and some intermediate and additional results are provided in three appendices. \section{Arena for composite theory}\label{secarena} For developing the composite theory of gravity, we consider a fixed background Minkowski space where $x^0=ct$ is the product of the speed of light and time, $x^1,x^2,x^3$ are the spatial coordinates, and $\eta_{\mu\nu} = \eta^{\mu\nu}$ denotes the Minkowski metric [with signature $(-,+,+,+)$]. Greek indices go from $0$ to $3$. The Minkowski metric, which is its own inverse, is always used for raising or lowering space-time indices. Throughout this paper we set the speed of light equal to unity ($c=1$). Assuming a background Minkowski space comes with the advantage of offering a clear understanding of energy, momentum and their conservation laws. \subsection{Tetrad variables and gauge vector fields} Standard \emph{tetrad} or \emph{vierbein} variables ${b^\kappa}_\mu$ result from a Cholesky-type decomposition of a metric $ g_{\mu\nu} $, \begin{equation}\label{Choleskyg} g_{\mu\nu} = \eta_{\kappa\lambda} \, {b^\kappa}_\mu {b^\lambda}_\nu , \end{equation} which may also be interpreted as a coordinate transformation associated with a local set of base vectors. The non-uniqueness of this decomposition is the source of the gauge transformation behavior discussed in the next subsection. In the weak-field approximation, we write \begin{equation}\label{glinb} {b^\kappa}_\mu = {\delta^\kappa}_\mu + \eta^{\kappa\lambda} \hat{h}_{\lambda\mu} , \end{equation} where $\hat{h}_{\lambda\mu}$ is assumed to be small so that we need to keep only the lowest-order terms. 
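To make the first-order statement explicit (a short check using only Eqs.~(\ref{Choleskyg}) and (\ref{glinb})), inserting the expansion into the decomposition and dropping terms quadratic in $\hat{h}$ gives
\[
\eta_{\kappa\lambda} \, {b^\kappa}_\mu {b^\lambda}_\nu
= \eta_{\kappa\lambda} \left( {\delta^\kappa}_\mu + \eta^{\kappa\rho} \hat{h}_{\rho\mu} \right)
\left( {\delta^\lambda}_\nu + \eta^{\lambda\sigma} \hat{h}_{\sigma\nu} \right)
= \eta_{\mu\nu} + \hat{h}_{\mu\nu} + \hat{h}_{\nu\mu} + O(\hat{h}^2) ,
\]
so that only the symmetric combination $\hat{h}_{\mu\nu} + \hat{h}_{\nu\mu}$ enters the metric at first order.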
It is convenient to define the symmetric and antisymmetric parts of $\hat{h}_{\mu\nu}$, \begin{equation}\label{hhatsyms} h_{\mu\nu} = \hat{h}_{\mu\nu} + \hat{h}_{\nu\mu} , \qquad \omega_{\mu\nu} = \hat{h}_{\mu\nu} - \hat{h}_{\nu\mu} . \end{equation} In the weak-field approximation, we obtain the following first-order expression for the metric (\ref{Choleskyg}), \begin{equation}\label{gling} g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu} . \end{equation} We denote the conjugate momenta associated with ${b^\kappa}_\mu$ by ${p_\kappa}^\mu$. Again, it is useful to introduce the symmetric and antisymmetric parts, \begin{equation}\label{psyms} \tilde{h}^{\mu\nu} = p^{\mu\nu} + p^{\nu\mu} , \qquad \tilde{\omega}^{\mu\nu} = p^{\mu\nu} - p^{\nu\mu} . \end{equation} The $32$ fields ${b^\kappa}_\mu$ and ${p_\kappa}^\mu$ represent the canonical space associated with the tetrad variables, which in turn characterize a metric. The fields $\tilde{h}^{\mu\nu}$ and $\tilde{\omega}^{\mu\nu}$ may essentially be regarded as the conjugate momenta associated with $h_{\mu\nu}$ and $\omega_{\mu\nu}$, respectively (after properly accounting for normalization and symmetrization effects). The Hamiltonian description of a Yang-Mills theory is based on the four-vector fields $A_{a \mu}$ and their conjugates $E^{a \mu}$, which are the generalizations of the vector potentials and the electric fields of electrodynamics, respectively. Whereas $\mu$ is the usual space-time index, $a$ labels the base vectors of the Lie algebra associated with the underlying Lie group. For the Lorentz group, which consists of the real $4 \times 4$ matrices that leave the Minkowski metric invariant, the Lie algebra is six-dimensional. We here choose six natural base vectors of the Lie algebra, three of which generate the Lorentz boosts in the coordinate directions and the other three generate rotations around the coordinate axes. 
It is convenient to switch back and forth between the labels $a=1, \ldots 6$ for all six generators and the pairs $(0,1)$, $(0,2)$, $(0,3)$ for the boosts in the respective directions (involving also time) and $(2,3)$, $(3,1)$, $(1,2)$ for the rotations in the respective planes according to Table~\ref{tabindexmatch}. In particular, we can now write our base vectors of the Lie algebra as \begin{equation}\label{Lorentzgenerators} T^a_{\kappa\lambda} = {\delta^{\tilde{\kappa}}}_\lambda \, {\delta^{\tilde{\lambda}}}_\kappa - {\delta^{\tilde{\kappa}}}_\kappa \, {\delta^{\tilde{\lambda}}}_\lambda . \end{equation} \begin{table} \begin{tabular}{c|c c c c c c} $a$ \, & \, $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ \hline $(\tilde{\kappa},\tilde{\lambda})$ \, & \, $(0,1)$ & $(0,2)$ & $(0,3)$ & $(2,3)$ & $(3,1)$ & $(1,2)$ \\ \end{tabular} \caption{Correspondence between label $a$ for the base vectors of the six-dimensional Lie algebra ${\rm so}(1,3)$ and ordered pairs $(\tilde{\kappa},\tilde{\lambda})$ of space-time indices.} \label{tabindexmatch} \end{table} We finally need to specify the composition rule for expressing the four-vector fields $A_{a \mu}$ in terms of the tetrad fields ${b^\kappa}_\mu$ or, in view of Eq.~(\ref{glinb}) equivalently, the symmetric and antisymmetric parts $h_{\mu\nu}$ and $\omega_{\mu\nu}$, respectively, of $\hat{h}_{\mu\nu}$. For $a=(\tilde{\kappa},\tilde{\lambda})$ according to Table~\ref{tabindexmatch}, we postulate the simple composition rule \begin{equation}\label{glinA} A_{a \mu} = \frac{1}{2} \left( \frac{\partial h_{\tilde{\lambda}\mu}}{\partial x^{\tilde{\kappa}}} - \frac{\partial h_{\tilde{\kappa}\mu}}{\partial x^{\tilde{\lambda}}} \right) + \frac{1}{2\tilde{g}} \, \frac{\partial \omega_{\tilde{\kappa}\tilde{\lambda}}}{\partial x^\mu} , \end{equation} where $\tilde{g}$ is a dimensionless coupling constant that controls the relative weight of the symmetric and antisymmetric contributions to ${b^\kappa}_\mu$. 
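As a quick machine check (not part of the original development), one can build the six generators of Eq.~(\ref{Lorentzgenerators}) for the ordered index pairs of Table~\ref{tabindexmatch} and verify that they are antisymmetric with both indices down and, with the first index raised, satisfy the Lorentz condition $T^{\mathrm{T}}\eta + \eta T = 0$; a pure-Python sketch:

```python
# Build the generators with both indices down for each ordered pair of the
# table and verify (i) antisymmetry and (ii) the Lorentz condition for the
# mixed-index form (so they leave the Minkowski metric invariant).
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
pairs = [(0, 1), (0, 2), (0, 3), (2, 3), (3, 1), (1, 2)]  # boosts, rotations

def generator_down(kt, lt):
    # T^a_{kappa lambda} = delta^{kt}_lambda delta^{lt}_kappa
    #                      - delta^{kt}_kappa delta^{lt}_lambda
    return [[(kt == l and lt == k) - (kt == k and lt == l)
             for l in range(4)] for k in range(4)]

def mixed(T):
    # raise the first index with the (diagonal) Minkowski metric
    return [[eta[k][k] * T[k][l] for l in range(4)] for k in range(4)]

for kt, lt in pairs:
    T = generator_down(kt, lt)
    # (i) indices-down generators are antisymmetric in (kappa, lambda)
    assert all(T[k][l] == -T[l][k] for k in range(4) for l in range(4))
    # (ii) the mixed generator M satisfies M^T eta + eta M = 0
    M = mixed(T)
    for k in range(4):
        for l in range(4):
            assert sum(M[m][k] * eta[m][l] + eta[k][m] * M[m][l]
                       for m in range(4)) == 0
```

The antisymmetry verified here is what makes the gauge transformations of the next subsection act only on the antisymmetric part of the tetrad fields.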
Only for $\tilde{g}=1$, the four-vector variables (\ref{glinA}) can be interpreted as a connection field \cite{hco231}. Such an interpretation would be essential for a closer relation to general relativity. \subsection{Gauge transformations} As the Minkowski metric in the decomposition (\ref{Choleskyg}) is invariant under Lorentz transformations, the corresponding transformation matrices can be applied to the factors ${b^\kappa}_\mu$ without changing the metric. For infinitesimal Lorentz transformations, this implies the lowest-order gauge transformation \begin{equation}\label{gaugeb} \delta b_{\kappa\lambda} = - \tilde{g} \, \Lambda_a T^a_{\kappa\lambda} , \end{equation} in terms of six additional fields $\Lambda_a$. As the base vectors $T^a_{\kappa\lambda}$ of the Lie algebra defined in Eq.~(\ref{Lorentzgenerators}) are antisymmetric, only the antisymmetric part of $b_{\kappa\lambda}$ is affected by gauge transformations, that is, \begin{equation}\label{gaugeh} \delta h_{\kappa\lambda} = 0 , \end{equation} whereas \begin{equation}\label{gaugeom} \delta \omega_{\kappa\lambda} = - 2 \tilde{g} \, \Lambda_a T^a_{\kappa\lambda} . \end{equation} The latter equation suggests that the six fields $\Lambda_a$ can be chosen to make the six components of $\omega_{\kappa\lambda}$ equal to zero. We refer to this particular choice as the symmetric gauge. From Eqs.~(\ref{glinA}), (\ref{gaugeh}) and (\ref{gaugeom}), we further obtain \begin{equation}\label{gaugeA} \delta A_{a \mu} = \frac{\partial \Lambda_a}{\partial x^\mu} , \end{equation} which is the proper gauge transformation behavior for the gauge-vector fields of the linearized theory. 
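The derivation of Eq.~(\ref{gaugeA}) can be made explicit. For $a = (\tilde{\kappa},\tilde{\lambda})$, the gauge variation of the composition rule (\ref{glinA}) only receives a contribution from the antisymmetric part, because $\delta h_{\kappa\lambda} = 0$:
\[
\delta A_{a \mu} = \frac{1}{2\tilde{g}} \, \frac{\partial \, \delta\omega_{\tilde{\kappa}\tilde{\lambda}}}{\partial x^\mu}
= - \frac{\partial}{\partial x^\mu} \left( \Lambda_b \, T^b_{\tilde{\kappa}\tilde{\lambda}} \right)
= \frac{\partial \Lambda_a}{\partial x^\mu} ,
\]
where the last equality follows from $T^b_{\tilde{\kappa}\tilde{\lambda}} = - {\delta^b}_a$ for the ordered index pairs of Table~\ref{tabindexmatch}.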
This transformation rule implies gauge invariance of the combination \begin{equation}\label{gaugeinvcomb1} 2 \tilde{g} A_{(\tilde{\kappa}\tilde{\lambda}) \mu} - \frac{\partial \omega_{\tilde{\kappa}\tilde{\lambda}}}{\partial x^\mu} , \end{equation} which is obvious from the definition (\ref{glinA}) and the gauge invariance of $h_{\kappa\lambda}$. We moreover assume that all conjugate momenta are gauge invariant (cf.\ Eq.~(49) of \cite{hco229}), \begin{equation}\label{gaugeE} \delta E^{a \mu} = 0 , \end{equation} and \begin{equation}\label{gaugep} \delta p^{\kappa\lambda} = 0 . \end{equation} It turns out below that the assumption (\ref{gaugeE}) requires that $\partial A_{a \mu}/\partial x_\mu$ must be a gauge invariant quantity. In view of Eq.~(\ref{gaugeA}), this implies \begin{equation}\label{Lamevoleq} \frac{\partial^2 \Lambda_a}{\partial x_\mu \partial x^\mu} = 0 . \end{equation} In other words, the fields $\Lambda_a$ generating gauge transformations must become dynamic players and satisfy free field equations. This idea is the basis of the BRST approach for handling gauge constraints. Moreover, we conclude \begin{equation}\label{tetradom} \frac{\partial^2 \omega_{\kappa\lambda}}{\partial x_\mu \partial x^\mu} = 0 , \end{equation} which follows from the symmetric gauge $\omega_{\kappa\lambda}=0$ and the gauge invariance of the left-hand side. \subsection{Implications of composition rule} The composition rule (\ref{glinA}) contains two types of equations. If $\tilde{\kappa}$ or $\mu$ is equal to zero, it contains time derivatives and hence implies time evolution equations for the tetrad variables. Otherwise, the composition rule provides constraints that must be satisfied at any time. 
The expressions for $A_{(0k)l}+A_{(0l)k}$ and $A_{(kl)0}$ lead to the unambiguous evolution equations \begin{eqnarray} \frac{\partial h_{kl}}{\partial t} &=& \frac{1}{2} \left( \frac{\partial h_{0l}}{\partial x^k} + \frac{\partial h_{0k}}{\partial x^l} \right) \nonumber \\ &+& A_{(0k)l} + A_{(0l)k} - \frac{1}{2\tilde{g}} \left( \frac{\partial \omega_{0l}}{\partial x^k} + \frac{\partial \omega_{0k}}{\partial x^l} \right) , \qquad \label{hklevol} \end{eqnarray} and \begin{equation}\label{omklevol} \frac{\partial \omega_{kl}}{\partial t} = 2 \tilde{g} A_{(kl)0} - \tilde{g} \left( \frac{\partial h_{0l}}{\partial x^k} - \frac{\partial h_{0k}}{\partial x^l} \right) . \end{equation} The expression for $A_{(0l)0}$ provides only the time derivative of $\tilde{g} h_{0l} + \omega_{0l}$, and there is no evolution equation for $h_{00}$ whatsoever. Once we have made a decision about the evolution of $h_{0\mu}$, all evolution equations are fixed uniquely. Choosing four conditions for $h_{0\mu}$ is superficially reminiscent of imposing coordinate conditions for obtaining unique solutions in general relativity, but the logical status is entirely different. Whereas the coordinate conditions of general relativity have no influence on the physical predictions of general relativity, in the canonical formulation of composite gravity suitable conditions for $h_{0\mu}$ are used to characterize ``good'' or ``valid'' Minkowskian coordinate systems. If these conditions are Lorentz covariant, we have no possibility of switching between different types of conditions corresponding to different physical predictions. As obvious as these remarks may be, the proper appreciation of coordinate conditions in general relativity was a slow process, in which even Einstein could not easily detach himself from the idea of physically preferred coordinate systems \cite{Giovanelli20}. 
An appealing set of Lorentz covariant conditions is given by \begin{equation}\label{harmoniclin} \frac{\partial h_{\mu\nu}}{\partial x_\nu} = K \, \frac{\partial {h^\nu}_\nu}{\partial x^\mu} , \end{equation} where $K=1/2$ corresponds to particularly convenient harmonic coordinates (in the linear approximation). We here adopt the conditions (\ref{harmoniclin}) as the tentative criteria for physically meaningful coordinates. They can be rewritten as explicit time evolution equations, namely \begin{equation}\label{h0kevol} \frac{\partial h_{0l}}{\partial t} = \frac{\partial h_{ln}}{\partial x_n} - K \, \frac{\partial {h^\nu}_\nu}{\partial x^l} , \end{equation} and \begin{equation}\label{h00evol} \frac{\partial h_{00}}{\partial t} = \frac{\partial h_{0l}}{\partial x_l} - \frac{K}{1-K} \left[ 2 A_{(0l)l} - \frac{1}{\tilde{g}} \frac{\partial \omega_{0l}}{\partial x_l} \right] . \end{equation} From the expression for $A_{(0l)0}$, we finally obtain \begin{equation}\label{om0kevol} \frac{\partial \omega_{0l}}{\partial t} = 2 \tilde{g} A_{(0l)0} + \tilde{g} \left[ (1-K) \frac{\partial h_{00}}{\partial x^l} + K \frac{\partial h_{nn}}{\partial x^l} - \frac{\partial h_{ln}}{\partial x_n} \right] . \end{equation} All the above evolution equations are gauge invariant. These evolution equations suggest that also $K=0$ could be an appealing choice. We now turn from the evolution equations to the constraints implied by the composition rule. The obvious constraints are obtained by choosing only spatial indices in Eq.~(\ref{glinA}), \begin{equation}\label{glinconstraints1a} A^{(kl)}_j = \frac{1}{2} \left( \frac{\partial h_{jl}}{\partial x^k} - \frac{\partial h_{jk}}{\partial x^l} \right) + \frac{1}{2\tilde{g}} \, \frac{\partial \omega_{kl}}{\partial x^j} . 
\end{equation} Further constraints are obtained by considering $A_{(0k)l}-A_{(0l)k}$, \begin{equation}\label{glinconstraints1b} A^{(0k)}_l - A^{(0l)}_k = \frac{1}{2} \left( \frac{\partial h_{0l}}{\partial x^k} - \frac{\partial h_{0k}}{\partial x^l} \right) + \frac{1}{2\tilde{g}} \left( \frac{\partial \omega_{0l}}{\partial x^k} - \frac{\partial \omega_{0k}}{\partial x^l} \right) . \end{equation} In total, we have turned the composition rule for the $24$ components of the gauge vector fields and the $4$ coordinate conditions (\ref{harmoniclin}) into the $16$ evolution equations (\ref{hklevol}), (\ref{omklevol}) and (\ref{h0kevol})--(\ref{om0kevol}) for the tetrad variables and the $9+3=12$ constraints (\ref{glinconstraints1a}), (\ref{glinconstraints1b}). We refer to these constraints resulting directly from the composition rule as the primary constraints of the composite theory. These primary constraints are not affected by coupling the gravitational field to matter. However, the evolution equations for the tetrad variables should be expected to be changed by coupling terms in the Hamiltonian. \section{Pure field theory}\label{secpurefields} We are now ready to define the canonical Hamiltonian version of the composite theory of gravity in the weak-field approximation on the combined space of Yang-Mills and tetrad fields, following the general ideas developed in \cite{hco237}. We first provide the Hamiltonian and then elaborate a number of its implications. \subsection{Hamiltonian} The Hamiltonian for the composite theory of pure gravity, \begin{equation}\label{Hampurefields} H = H_{\rm YM} + H_{\rm YM/t} , \end{equation} consists of two contributions describing the workhorse theory and reproducing the evolution equations obtained from the composition rule, respectively. Our workhorse theory is the linearized version of the Yang-Mills theory based on the Lorentz group on the space $(A_{a \mu}, E^{a \mu})$. 
The proper Hamiltonian is given by (see, e.g., Section~15.2 of \cite{PeskinSchroeder}, Chap.~15 of \cite{WeinbergQFT2}, or \cite{hco229}; a derivation from the Yang-Mills Lagrangian is given in Appendix~\ref{appL2H}), \begin{eqnarray} H_{\rm YM} &=& \int \bigg[ \frac{1}{2} \left( E^{a \mu} E_{a \mu} + \frac{\partial A_{a i}}{\partial x^j} \frac{\partial A^{a i}}{\partial x_j} - \frac{\partial A_{a i}}{\partial x^j} \frac{\partial A^{a j}}{\partial x_i} \right) \nonumber \\ &-& E^{a 0} \frac{\partial A_{a j}}{\partial x_j} - E^{a j} \frac{\partial A_{a 0}}{\partial x^j} \bigg] d^3 x . \label{pH2LHamf} \end{eqnarray} The Hamiltonian for coupling the Yang-Mills and tetrad variables, \begin{eqnarray} H_{\rm YM/t} &=& \int \dot{b}_{\kappa\lambda} \, p^{\kappa\lambda} \, d^3 x \nonumber\\ &=& \frac{1}{4} \int \left( \frac{\partial h_{\kappa\lambda}}{\partial t} \, \tilde{h}^{\kappa\lambda} + \frac{\partial \omega_{\kappa\lambda}}{\partial t} \, \tilde{\omega}^{\kappa\lambda} \right) \, d^3 x , \qquad \label{pglinHcoupl} \end{eqnarray} is chosen such that the canonical evolution equations \begin{equation}\label{Hamtetradeqs} \frac{\partial b_{\kappa\lambda}}{\partial t} = \frac{\delta H}{\delta p^{\kappa\lambda}} , \qquad \frac{\partial p^{\kappa\lambda}}{\partial t} = - \frac{\delta H}{\delta b_{\kappa\lambda}} , \end{equation} reproduce the evolution equations (\ref{hklevol}), (\ref{omklevol}) and (\ref{h0kevol})--(\ref{om0kevol}) for the tetrad variables. These evolution equations implied by the composition rule and the coordinate conditions (\ref{harmoniclin}) are of crucial importance for finding the Hamiltonian $H_{\rm YM/t}$, that is, for obtaining the complete canonical Hamiltonian formulation of the composite theory. We have introduced the variables $p^{\kappa\lambda}$ in a purely formal manner as the conjugate momenta of the tetrad variables. At this point, we can offer a physical interpretation. 
Note that, in view of the evolution equations of the tetrad variables, the Hamiltonian $H_{\rm YM/t}$ contains a contribution that is bilinear in the variables $p^{\kappa\lambda}$ and the gauge vector fields $A_{a\mu}$. This contribution can be written in the form $-J^{a\mu} A_{a\mu}$ with the identifications \begin{equation}\label{conjtetradinterpret0l} -J^{(0l)0} = \tilde{g} \tilde{\omega}^{0l} , \quad -J^{(0l)j} = \frac{1}{2} \tilde{h}^{lj} - \frac{1}{2} \frac{K}{1-K} \tilde{h}^{00} \eta^{lj} , \end{equation} and \begin{equation}\label{conjtetradinterpretkl} -J^{(kl)0} = \tilde{g} \tilde{\omega}^{kl} , \quad -J^{(kl)j} = 0 . \end{equation} The symmetric and antisymmetric parts of the variables $p^{\kappa\lambda}$ play the role of external Yang-Mills fluxes. By requiring Lorentz covariant fluxes, Eq.~(\ref{conjtetradinterpretkl}) immediately leads to the conclusion \begin{equation}\label{tetradomtilkl} \tilde{\omega}^{kl} = 0 . \end{equation} Only the external fluxes $J^{(0l)\mu}$ can be nonvanishing in our composite Yang-Mills theory of gravity. 
\subsection{Field equations}\label{secpurefieldeqs} For the evolution of the conjugate momenta of the tetrad variables, we find the following results by means of Eq.~(\ref{Hamtetradeqs}): \begin{eqnarray} \frac{\partial \tilde{h}^{kl}}{\partial t} &=& \frac{\partial (\tilde{h}^{0k} - \tilde{g} \tilde{\omega}^{0k})}{\partial x_l} + \frac{\partial (\tilde{h}^{0l} - \tilde{g} \tilde{\omega}^{0l})}{\partial x_k} \nonumber \\ &-& 2 K \delta_{kl} \, \frac{\partial (\tilde{h}^{0n} - \tilde{g} \tilde{\omega}^{0n})}{\partial x^n} , \label{htilevolkl} \end{eqnarray} \begin{equation}\label{htilevol00} \frac{\partial \tilde{h}^{00}}{\partial t} = 2 K \, \frac{\partial \tilde{h}^{0l}}{\partial x^l} + 2 \tilde{g} \, (1-K) \, \frac{\partial \tilde{\omega}^{0l}}{\partial x^l} , \end{equation} \begin{equation}\label{htilevol0l} \frac{\partial \tilde{h}^{0l}}{\partial t} = \frac{1}{2} \frac{\partial \tilde{h}^{ln}}{\partial x^n} + \frac{1}{2} \frac{\partial \tilde{h}^{00}}{\partial x_l} , \end{equation} \begin{equation}\label{omtilevol0l} \frac{\partial \tilde{\omega}^{0l}}{\partial t} = - \frac{1}{2 \tilde{g}} \frac{\partial \tilde{h}^{ln}}{\partial x^n} + \frac{1}{2 \tilde{g}} \frac{K}{1-K} \, \frac{\partial \tilde{h}^{00}}{\partial x_l} , \end{equation} and \begin{equation}\label{omtilevolkl} \frac{\partial \tilde{\omega}^{kl}}{\partial t} = 0 . \end{equation} Note that these equations for the conjugate momenta of the tetrad variables are independent of any other variables. The last of these evolution equations is consistent with our previous conclusion (\ref{tetradomtilkl}). According to the definition (\ref{conjtetradinterpret0l}), Eq.~(\ref{omtilevol0l}) can be rewritten as \begin{equation}\label{Jconservation} \frac{\partial J^{(0l)\mu}}{\partial x^\mu} = 0 , \end{equation} which supports our interpretation of conjugate tetrad variables in terms of conserved fluxes. 
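The conservation law (\ref{Jconservation}) can be checked symbolically: inserting the evolution equation (\ref{omtilevol0l}) into the time derivative of $J^{(0l)0}$ and adding the divergence of the spatial components from Eq.~(\ref{conjtetradinterpret0l}) gives zero identically. The following sympy sketch does this; the function names are placeholders, and spatial upper and lower indices are identified (mostly-plus signature assumed).

```python
import sympy as sp

# Sketch: verify Eq. (Jconservation) from the flux identification
# (conjtetradinterpret0l) and the evolution equation (omtilevol0l).
# h00 and h[i][j] stand for the conjugate momenta \tilde{h}^{00} and
# \tilde{h}^{lj}; all names are placeholders.
t = sp.symbols('t')
x = sp.symbols('x1:4')
K = sp.symbols('K')
h00 = sp.Function('h00')(t, *x)
h = [[sp.Function(f'h{min(i, j) + 1}{max(i, j) + 1}')(t, *x) for j in range(3)]
     for i in range(3)]
half = sp.Rational(1, 2)

for l in range(3):
    # dJ^{(0l)0}/dt = -g d(omega~^{0l})/dt, with (omtilevol0l) inserted:
    dJ0_dt = half * sum(sp.diff(h[l][n], x[n]) for n in range(3)) \
             - half * K / (1 - K) * sp.diff(h00, x[l])
    # spatial flux components J^{(0l)j} from (conjtetradinterpret0l):
    J = [-half * h[l][j] + (half * K / (1 - K) * h00 if j == l else 0)
         for j in range(3)]
    divJ = sum(sp.diff(J[j], x[j]) for j in range(3))
    assert sp.simplify(dJ0_dt + divJ) == 0
```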
The evolution equations for the Yang-Mills fields are obtained from (note the sign conventions) \begin{equation}\label{HamYMeqs} \frac{\partial A_{a \mu}}{\partial t} = - \frac{\delta H}{\delta E^{a \mu}} , \qquad \frac{\partial E^{a \mu}}{\partial t} = \frac{\delta H}{\delta A_{a \mu}} . \end{equation} The resulting equations can be written in the following form: \begin{equation}\label{Aevola0} \frac{\partial A^a_0}{\partial t} = - E^a_0 + \frac{\partial A^a_n}{\partial x_n} , \end{equation} and \begin{equation}\label{Aevolaj} \frac{\partial A^a_j}{\partial t} = - E^a_j + \frac{\partial A^a_0}{\partial x^j} , \end{equation} for the gauge vector fields, whereas their conjugate partners are governed by \begin{equation}\label{Eevol0k0} \frac{\partial E^{(0l)}_0}{\partial t} = - \frac{\partial E^{(0l)}_n}{\partial x_n} - J^{(0l)}_0 , \end{equation} \begin{equation}\label{Eevolkl0} \frac{\partial E^{(kl)}_0}{\partial t} = - \frac{\partial E^{(kl)}_n}{\partial x_n} , \end{equation} \begin{equation}\label{Eevol0kj} \frac{\partial E^{(0l)}_j}{\partial t} = - \frac{\partial E^{(0l)}_0}{\partial x^j} - \frac{\partial^2 A^{(0l)}_j}{\partial x^n \partial x_n} + \frac{\partial^2 A^{(0l)}_n}{\partial x^j \partial x_n} - J^{(0l)}_j , \end{equation} and \begin{equation}\label{Eevolklj} \frac{\partial E^{(kl)}_j}{\partial t} = -\frac{\partial E^{(kl)}_0}{\partial x^j} - \frac{\partial^2 A^{(kl)}_j}{\partial x^n \partial x_n} + \frac{\partial^2 A^{(kl)}_n}{\partial x^j \partial x_n} . \end{equation} Note that these evolution equations are gauge invariant, provided that Eq.~(\ref{Lamevoleq}) for $\Lambda_a$ holds. These are the linearized standard field equations for Yang-Mills fields, which are strongly reminiscent of Maxwell's equations of electrodynamics. Equation (\ref{Aevolaj}), together with the representation (\ref{glinA}), implies the useful identity \begin{equation}\label{EklnJacobi} E^{(ln)}_k + E^{(nk)}_l + E^{(kl)}_n = 0 . 
\end{equation} This identity remains valid when we later include matter [that is, it can more generally be derived from Eq.~(\ref{representEklj})]. \subsection{Constraints} The primary constraints (\ref{glinconstraints1a}), (\ref{glinconstraints1b}) must be valid at all times. From the time derivative of the primary constraints we obtain secondary constraints, a further time derivative yields tertiary constraints, and so on. This iterative process, in which the required time derivatives are evaluated by means of the evolution equations, is continued until no further constraints arise. The crucial question is whether the iterative process stops before all degrees of freedom are fixed by constraints. As the introduction revealed that, in the canonical Hamiltonian formulation of composite gravity, we are dealing with $80$ fields, we need around $75$ constraints to obtain an appropriate number of degrees of freedom for a theory of gravity. The secondary constraints obtained as the time derivatives of the primary constraints can be formulated nicely in terms of Yang-Mills variables, \begin{equation}\label{glinconstraints2a} E^{(kl)}_j = \frac{\partial A^{(0j)}_l}{\partial x^k} - \frac{\partial A^{(0j)}_k}{\partial x^l} , \end{equation} \begin{equation}\label{glinconstraints2b} E^{(0l)}_k = E^{(0k)}_l , \end{equation} and the tertiary constraints are subsequently obtained as \begin{equation}\label{glinconstraints3a} \frac{\partial E^{(kl)}_0}{\partial x^j} - \frac{\partial E^{(0j)}_l}{\partial x^k} + \frac{\partial E^{(0j)}_k}{\partial x^l} = \frac{\partial}{\partial x_n} \left( \frac{\partial A^{(kl)}_n}{\partial x^j} - \frac{\partial A^{(kl)}_j}{\partial x^n} \right) , \end{equation} \begin{equation}\label{glinconstraints3b} \frac{\partial E^{(0l)}_0}{\partial x_k} - \frac{\partial E^{(0k)}_0}{\partial x_l} = \frac{\partial E^{(kl)}_n}{\partial x_n} . \end{equation} The latter constraint has been simplified by means of the identity (\ref{EklnJacobi}). 
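The Maxwell-like character of the source-free field equations noted above can be made explicit: eliminating the conjugate fields from Eqs.~(\ref{Aevola0}), (\ref{Aevolaj}) and (\ref{Eevol0kj}) (with vanishing fluxes) leaves a wave equation for each spatial component of the gauge vector fields. A sympy sketch, with the internal Lorentz index suppressed and spatial index positions identified (mostly-plus signature assumed):

```python
import sympy as sp

# Sketch: eliminate E from the source-free evolution equations
# (Aevola0), (Aevolaj), (Eevol0kj) and confirm that each spatial
# component A_j obeys a wave equation, as for Maxwell's equations.
# All function names are placeholders; the internal index is suppressed.
t = sp.symbols('t')
x = sp.symbols('x1:4')
A0 = sp.Function('A0')(t, *x)
E0 = sp.Function('E0')(t, *x)
Aj = [sp.Function(f'A{j}')(t, *x) for j in (1, 2, 3)]
Ej = [sp.Function(f'E{j}')(t, *x) for j in (1, 2, 3)]

div_A = sum(sp.diff(Aj[n], x[n]) for n in range(3))
lap = lambda f: sum(sp.diff(f, x[n], 2) for n in range(3))

dA0_dt = -E0 + div_A                                    # Eq. (Aevola0)
dEj_dt = [-sp.diff(E0, x[j]) - lap(Aj[j]) + sp.diff(div_A, x[j])
          for j in range(3)]                            # Eq. (Eevol0kj), J = 0

for j in range(3):
    # d^2 A_j/dt^2 via Eq. (Aevolaj): -dE_j/dt + d(dA_0/dt)/dx^j
    d2Aj_dt2 = -dEj_dt[j] + sp.diff(dA0_dt, x[j])
    assert sp.simplify(d2Aj_dt2 - lap(Aj[j])) == 0      # wave equation
```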
Note that these tertiary constraints can be used to rewrite the evolution equations (\ref{Eevolkl0}) and (\ref{Eevolklj}) as \begin{equation}\label{Eevolkl0cx} \frac{\partial E^{(kl)}_0}{\partial t} = \frac{\partial E^{(0k)}_0}{\partial x^l} - \frac{\partial E^{(0l)}_0}{\partial x^k} , \end{equation} and \begin{equation}\label{Eevolkljcx} \frac{\partial E^{(kl)}_j}{\partial t} = \frac{\partial E^{(0j)}_k}{\partial x^l} - \frac{\partial E^{(0j)}_l}{\partial x^k} . \end{equation} Up to this point, the variables $p^{\kappa\lambda}$ do not appear in the constraints. From now on, only the variables $p^{\kappa\lambda}$ occur in the constraints. In the next round, we find \begin{equation}\label{glinconstraints4a} \frac{\partial J^{(0l)j}}{\partial x_k} - \frac{\partial J^{(0k)j}}{\partial x_l} = 0 , \end{equation} \begin{equation}\label{glinconstraints4b} \frac{\partial J^{(0l)0}}{\partial x_k} - \frac{\partial J^{(0k)0}}{\partial x_l} = 0 . \end{equation} As we assume that, in the absence of matter, the external fluxes (\ref{conjtetradinterpret0l}) vanish, these last conditions are satisfied trivially so that the hierarchy of constraints ends at this point. We have arrived at a total of $4 \times 12 = 48$ constraints resulting from the composition rule, supplemented by the three constraints (\ref{tetradomtilkl}) so that the total is $51$. All these constraints are gauge invariant. This is a consequence of the fact that the composition rule is designed such that the four-vector fields $A_{a \nu}$ possess the proper gauge transformation behavior (\ref{gaugeA}) and all the evolution equations are gauge invariant. We eventually argue in favor of the $16$ constraints $p^{\kappa\lambda}=0$ (or $\tilde{h}^{\kappa\lambda} = \tilde{\omega}^{\kappa\lambda} = 0$), which would replace the $15$ constraints (\ref{tetradomtilkl}), (\ref{glinconstraints4a}), (\ref{glinconstraints4b}) and actually increase the count by one. 
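The counting above can be summarized in a small bookkeeping sketch (the numbers are those quoted in the text):

```python
# Bookkeeping sketch of the constraint count derived above.
primary = 9 + 3                  # Eqs. (glinconstraints1a), (glinconstraints1b)
hierarchy = 4 * primary          # primary through quaternary constraints
with_omega = hierarchy + 3       # adding the three constraints (tetradomtilkl)
swapped = with_omega - 15 + 16   # p^{kappa lambda} = 0 replacing the last 15
assert (hierarchy, with_omega, swapped) == (48, 51, 52)
```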
In a Yang-Mills theory, half of the degrees of freedom can be eliminated by gauge constraints (roughly speaking, the four-vector potentials have only transverse components, no longitudinal or temporal ones). In our case, we have $24$ gauge constraints, which brings us to a total of $75$ (or $76$) constraints for our $80$ fields. It is quite remarkable that just a few of the $80$ degrees of freedom survive, as we would expect for a theory of gravity. \subsection{Compact form of theory}\label{seccompactformpure} The goal of this subsection is to find a closed set of differential equations for the tetrad variables. To reach this goal it is important to express all the Yang-Mills variables in terms of the tetrad variables. For the vector fields $A_{a \mu}$, the desired expression is given by the composition rule (\ref{glinA}). Their conjugates $E^{a \mu}$ can then be extracted from the evolution equations (\ref{Aevola0}), (\ref{Aevolaj}) (see Appendix \ref{appYMtetrad} for a summary of the resulting expressions). As we have already recognized $J^{(0l)\mu} = 0 = \tilde{\omega}^{kl}$, Eqs.~(\ref{htilevolkl})--(\ref{omtilevolkl}) imply that all conjugate tetrad variables must be constant and can be assumed to be zero, \begin{equation}\label{tetradallmom} p^{\kappa\lambda} = 0 . \end{equation} This is a very desirable condition for the natural canonical Hamiltonian approach to composite theories. As the conjugate momenta $p^{\kappa\lambda}$ appear linearly in the Hamiltonian (\ref{pglinHcoupl}), they lead to an unbounded Hamiltonian and consequently to the famous risk of instabilities in higher derivative theories \cite{Ostrogradsky1850,Woodard15}. Avoiding such instabilities is an important topic, in particular, in alternative theories of gravity \cite{Chenetal13,RaidalVeermae17,Stelle77,Stelle78,Krasnikov87,GrosseKnetter94,Beckeretal17,Salvio19}. 
The constraints (\ref{tetradallmom}) provide the most obvious way of eliminating instabilities in the canonical Hamiltonian approach to composite higher derivative theories, which differs from the usual Ostrogradsky approach \cite{hco235,hco237}. Moreover, these constraints imply that we have to solve the original Yang-Mills equations without any modification. In other words, the composite theory simply selects solutions of the Yang-Mills theory based on the Lorentz group to obtain the composite theory of gravity. This insight provides a more direct argument for the stability of solutions. Note that the large number of constraints and the small number of remaining degrees of freedom indicate that the composite theory is highly selective. From Eq.~(\ref{Eevolklj}) we obtain \begin{equation}\label{tetradhkl} \frac{\partial^2 h_{kl}}{\partial x_\mu \partial x^\mu} = \frac{\partial^2 f}{\partial x^k \partial x^l} , \end{equation} where the unknown function $f$ results from integration, and similarly Eq.~(\ref{Eevolkl0}) gives \begin{equation}\label{tetradh0l} \frac{\partial^2 h_{0l}}{\partial x_n \partial x^n} = \frac{\partial f_0}{\partial x^l} . \end{equation} Equations (\ref{Eevol0k0}) and (\ref{Eevol0kj}) provide further integrability conditions that can be exploited in a similar manner. Consolidating all the results, we get the following compact formula summarizing the linear version of the composite theory of pure gravity on the level of tetrad variables, \begin{equation}\label{tetradhall} \frac{\partial^2 h_{\mu\nu}}{\partial x_\lambda \partial x^\lambda} = \frac{\partial^2 f}{\partial x^\mu \partial x^\nu} , \end{equation} possibly after a minor redefinition of $f$. An interesting feature of these field equations is that the function $f$, which results from integration, needs to be determined simultaneously with the solutions $h_{\mu\nu}$.
We arrive at a set of second-order differential equations because higher derivative equations play the role of integrability conditions. The coupling constant $\tilde{g}$ does not occur in these equations. Possible antisymmetric contributions to the tetrad variables are governed by the wave equations (\ref{tetradom}), and all conjugate momenta of the tetrad variables must vanish according to Eq.~(\ref{tetradallmom}). \subsection{Comparison to general relativity} Einstein's field equation for pure gravity in the weak-field approximation to general relativity is given by a vanishing curvature tensor [see Eq.~(\ref{linRt})], \begin{equation}\label{GRgeneq} \frac{\partial^2 h_{\mu\nu}}{\partial x_\lambda \partial x^\lambda} - \frac{\partial^2 {h^\lambda}_\mu}{\partial x^\lambda \partial x^\nu} - \frac{\partial^2 {h^\lambda}_\nu}{\partial x^\mu \partial x^\lambda} + \frac{\partial^2 {h^\lambda}_\lambda}{\partial x^\mu \partial x^\nu} = 0 . \end{equation} It is important to note that the coordinates $x^\mu$ in general relativity are not associated with an underlying Minkowski space so that these field equations can be simplified by suitable general coordinate transformations. If we impose the same coordinate conditions (\ref{harmoniclin}) as used in composite gravity, the field equations (\ref{GRgeneq}) of linearized general relativity simplify to \begin{equation}\label{GRgeneq1} \frac{\partial^2 h_{\mu\nu}}{\partial x_\lambda \partial x^\lambda} = (2K-1) \frac{\partial^2 {h^\lambda}_\lambda}{\partial x^\mu \partial x^\nu} . \end{equation} This equation coincides with Eq.~(\ref{tetradhall}) for composite gravity for $f=(2K-1) {h^\lambda}_\lambda$. It becomes particularly simple for harmonic coordinates with $K=1/2$, which may be pictured as nearly Minkowskian (see, e.g., pp.\,163 and 254 of \cite{Weinberg}). 
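The reduction from Eq.~(\ref{GRgeneq}) to Eq.~(\ref{GRgeneq1}) can be verified on a plane-wave mode $h_{\mu\nu}(x) = H_{\mu\nu} e^{ik\cdot x}$. The sympy sketch below assumes the signature $\eta = {\rm diag}(-1,1,1,1)$ and imposes the Fourier-space form of the coordinate condition (\ref{harmoniclin}):

```python
import sympy as sp

# Plane-wave sketch: with the coordinate condition k^nu H_{mu nu} = K k_mu H
# imposed, the Fourier transform of Eq. (GRgeneq) coincides with that of
# Eq. (GRgeneq1).  The signature diag(-1,1,1,1) is an assumption.
K = sp.symbols('K')
eta = sp.diag(-1, 1, 1, 1)
k = sp.Matrix(sp.symbols('k0:4'))                     # covariant wave vector k_mu
H = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'H{min(i, j)}{max(i, j)}'))

kup = eta * k                                         # k^mu (eta is its own inverse)
k2 = (k.T * kup)[0, 0]                                # k_mu k^mu
tr = (eta * H).trace()                                # trace H^mu_mu
div = [sum(kup[n] * H[m, n] for n in range(4)) for m in range(4)]

# coordinate condition (harmoniclin) in Fourier space:
cond = [sp.Eq(div[m], K * k[m] * tr) for m in range(4)]
sol = sp.solve(cond, [H[0, 0], H[0, 1], H[0, 2], H[0, 3]], dict=True)[0]

ok = all(
    sp.simplify(
        ((k2 * H[m, n] - k[n] * div[m] - k[m] * div[n] + k[m] * k[n] * tr)  # (GRgeneq)
         - (k2 * H[m, n] - (2 * K - 1) * k[m] * k[n] * tr)).subs(sol)       # (GRgeneq1)
    ) == 0
    for m in range(4) for n in range(4)
)
assert ok
```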
As in general relativity, the solutions for the deviatoric metric $h_{\mu\nu}$ in harmonic coordinates can assume all kinds of polarization states, including longitudinal and temporal components. The actual polarization of gravitational waves depends on the nature of their source (typically binary systems of two black holes, two neutron stars, or a black hole and a neutron star during their in-spiral or merger phases). \subsection{Static isotropic solution}\label{secisostatsol} To find the static isotropic solution for the weak-field approximation to composite gravity for the coordinate conditions (\ref{harmoniclin}), we start from the general ansatz \begin{equation}\label{linisostath} h_{00} = \bar{\beta}(r) , \quad h_{kl} = \bar{\alpha}(r) \delta_{kl} + \bar{\xi}(r) \frac{x_k x_l}{r^2} , \quad h_{0k} = h_{k0} = 0 , \end{equation} with $r=(x_1^2+x_2^2+x_3^2)^{1/2}$. The coordinate conditions (\ref{harmoniclin}) become \begin{equation}\label{harmonicaltisolin} r \Big[ (3K-1) \bar{\alpha}'- K \bar{\beta}' + (K-1) \bar{\xi}' \Big] = 2 \bar{\xi} . \end{equation} A prime on a function of $r$ indicates the derivative with respect to $r$. We assume that also $f$ in Eq.~(\ref{tetradhall}) is static and isotropic. With $f=f(r)$, Eq.~(\ref{tetradhall}) leads to two equations, \begin{equation}\label{linbetadiffeq} r^2 \bar{\beta}'' + 2 r \bar{\beta}' = 0 , \end{equation} and \begin{eqnarray} \frac{x_k x_l}{r^2} \left( r^2 \bar{\xi}'' + 2r \bar{\xi}' - 6 \bar{\xi} -r^2 f'' +r f' \right) &=& \nonumber \\ && \hspace{-10em} \delta_{kl} \left( r f' - 2 \bar{\xi} -r^2 \bar{\alpha}'' -2 r \bar{\alpha}' \right) , \qquad \label{sepcompaz} \end{eqnarray} where each side of the latter equation must vanish separately. All these equations are of the equidimensional type, that is, in each term there are as many factors of $r$ as there are derivatives with respect to $r$, suggesting simple power-law solutions. 
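Both observations can be checked symbolically: the Euler-type equation (\ref{linbetadiffeq}) admits the profile $1/r$, and for $1/r$ profiles the coordinate condition (\ref{harmonicaltisolin}) fixes one prefactor in terms of the others. In the following sympy sketch the amplitudes $a$, $b$, $c$ are placeholder symbols:

```python
import sympy as sp

# Sketch: power-law solutions of the equidimensional equations.
# a, b, c are placeholder amplitudes for alpha-bar, beta-bar, xi-bar.
r = sp.symbols('r', positive=True)
K, a, b, c = sp.symbols('K a b c')
beta = sp.Function('beta')

# beta ~ 1/r solves Eq. (linbetadiffeq)
ode = sp.Eq(r**2 * beta(r).diff(r, 2) + 2 * r * beta(r).diff(r), 0)
assert sp.checkodesol(ode, sp.Eq(beta(r), b / r))[0] is True

# for 1/r profiles, the condition (harmonicaltisolin) ties the prefactors:
alpha_, beta_, xi_ = a / r, b / r, c / r
condition = sp.Eq(r * ((3 * K - 1) * sp.diff(alpha_, r) - K * sp.diff(beta_, r)
                       + (K - 1) * sp.diff(xi_, r)), 2 * xi_)
c_sol = sp.solve(condition, c)[0]
assert sp.simplify(c_sol - (K * b - (3 * K - 1) * a) / (K + 1)) == 0
```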
Equation~(\ref{linbetadiffeq}) implies $\bar{\beta}' \propto r^{-2}$ and we hence write \begin{equation}\label{linbetasol} \bar{\beta}(r) = 2 \, \frac{r_0}{r} , \end{equation} where $r_0$ is a constant length scale and a possible additive constant has been omitted to obtain asymptotic Minkowskian behavior. Equation (\ref{harmonicaltisolin}) suggests that $\bar{\alpha}$ has the same power-law decay, so that the right-hand side of Eq.~(\ref{sepcompaz}) implies $r f' = 2 \bar{\xi}$. Equation (\ref{harmonicaltisolin}) provides a relation among prefactors, so that we can write \begin{equation}\label{linalphaxisol} \bar{\alpha}(r) = \frac{\bar{c}}{1-K} \, \frac{r_0}{r} , \qquad \bar{\xi}(r) = \frac{\bar{c}-(3\bar{c}-2)K-2K^2}{1-K^2} \, \frac{r_0}{r} . \end{equation} Consistency with general relativity, which implies a vanishing curvature tensor, requires $\bar{c}=1$. The condition $\bar{c}=1$ is not predicted by the weak-field approximation of pure composite gravity, but it arises naturally in the full, nonlinear theory or from a suitable coupling to matter (see Section~\ref{seccompactmat} below). \section{Coupling of field to matter}\label{secwithmatter} Of course, we cannot really appreciate a theory of the gravitational field without coupling it to matter. On the one hand, we want to understand the gravitational field generated by matter, say for calculating the parameters $\bar{c}$ and $r_0$ in the solution given in Eq.~(\ref{linalphaxisol}). On the other hand, we want to understand the motion of matter in a gravitational field. The most convenient options for describing matter are given by point particle mechanics or hydrodynamics. We here consider a single point particle either generating a gravitational field or moving in a gravitational field. 
\subsection{Particle in a gravitational field} As a starting point for discussing particle motion in a weak gravitational field, we use the first-order expansion of the standard Hamiltonian, \begin{equation}\label{weakfieldparticleH} H_{\rm m} = \gamma m - \frac{1}{2 \gamma m} \, p_\mu p_\nu h^{\mu\nu} , \end{equation} where $m$ is the rest mass of the particle, $h^{\mu\nu}$ depends on the particle position $x^j$, the particle momentum is given by $p_j$, and we define $-p_0=p^0=m \gamma$, where \begin{equation}\label{sprelgammap} \gamma = \left[ 1 + \left( \frac{\bm{p}}{m} \right)^2 \right]^{\frac{1}{2}} , \end{equation} is a function of the spatial components $p_j$ of the particle momentum. When a higher order definition of $p^0$ is required, one should use $p^0=H_{\rm m}$, where Eq.~(\ref{weakfieldparticleH}) provides the first-order result in $h^{\mu\nu}$. The lowest-order energy-momentum tensor is given by (see, e.g., Eq.~(2.8.4) of \cite{Weinberg}) \begin{eqnarray} T_{\mu\nu} &=& -2 \frac{\delta H_{\rm m}}{\delta h^{\mu\nu}} = \frac{p_\mu p_\nu}{\gamma m} \, \delta^3(\bm{x}-\bm{x}(t)) \nonumber \\ &=& \gamma m \frac{d x_\mu}{d t} \frac{d x_\nu}{d t} \, \delta^3(\bm{x}-\bm{x}(t)) , \label{weakfieldparticleT} \end{eqnarray} where the lowest-order result $p_j=m \gamma \, dx_j/dt$ has been used. The evolution equation $dp_j/dt=0$ for a free particle in the absence of gravity leads to the result \begin{equation}\label{energymomtimeder} \frac{\partial T_{\mu\nu}}{\partial t} = - \frac{\partial T_{\mu\nu}}{\partial x^j} \, \frac{d x^j}{d t} = \frac{p_j}{p_0} \, \frac{\partial T_{\mu\nu}}{\partial x^j} , \end{equation} from which, for $\nu=0$, we obtain energy-momentum conservation in the form \begin{equation}\label{energymomconsmattT} \frac{\partial T^{\mu\nu}}{\partial x^\nu} = 0 . \end{equation} By construction, the Hamiltonian $(\ref{weakfieldparticleH})$ leads to geodesic motion in a weak field. 
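As a consistency check, the definition (\ref{sprelgammap}) together with $p^0 = m\gamma$ keeps the four-momentum on the mass shell $p_\mu p^\mu = -m^2$; a sympy sketch (mostly-plus signature assumed):

```python
import sympy as sp

# Sketch: mass-shell condition for gamma from Eq. (sprelgammap),
# with p^0 = m*gamma and signature (-,+,+,+) assumed.
m = sp.symbols('m', positive=True)
p = sp.Matrix(sp.symbols('p1:4', real=True))
gamma = sp.sqrt(1 + (p.T * p)[0, 0] / m**2)
p0 = m * gamma                         # p^0 = -p_0
mass_shell = -p0**2 + (p.T * p)[0, 0]  # p_mu p^mu
assert sp.simplify(mass_shell + m**2) == 0
```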
The potential distortion of geodesic motion by further couplings between matter and field is explored in Section \ref{secmodpartmot} below. \subsection{Hamiltonian for coupling field and matter} The occurrence of $h^{\mu\nu}$ in the Hamiltonian (\ref{weakfieldparticleH}) already implies a coupling of field and matter. It leads to geodesic motion in the given field $h^{\mu\nu}$, but it does not provide meaningful field equations for determining gravitational fields. For that purpose we need to couple the Yang-Mills field to the energy-momentum tensor of matter. In Appendix~\ref{appL2H}, the details of the coupling are discussed in a Lagrangian setting, and the following Hamiltonian for the coupling is obtained, \begin{equation}\label{Hcoupling} H_{\rm YM/m} = \int \left( F^{(\lambda n)}_{jn} {C^j}_\lambda - E^{(\lambda j)}_j {C^0}_\lambda - E^{(0 l)}_j {C^j}_l \right) d^3x , \end{equation} with \begin{equation}\label{Ctensorchoice0} C_{\mu\nu} = G_1 \, \mathring{T}_{\mu\nu} + G_2 \, \eta_{\mu\nu} {T^\lambda}_\lambda , \end{equation} where $\mathring{T}_{\mu\nu}$ is the traceless part of the energy-momentum tensor of matter defined in Eq.~(\ref{Tringdef}) and the coefficients $G_1$, $G_2$ must have the same dimensions as Newton's constant $G$ (cf.\ Table~\ref{tabledimensions}). The concrete values of $G_1$, $G_2$ can only be chosen once we have elaborated all the equations for gravitational fields coupled to matter. 
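Since Eq.~(\ref{Tringdef}) is not reproduced here, we assume the standard definition $\mathring{T}_{\mu\nu} = T_{\mu\nu} - \frac{1}{4} \eta_{\mu\nu} {T^\lambda}_\lambda$ of the traceless part; under this assumption, tracelessness can be confirmed in a one-line sympy sketch:

```python
import sympy as sp

# Sketch: traceless part of the energy-momentum tensor.  The definition
# Tring = T - (1/4) eta tr(T) is an assumption (Eq. (Tringdef) is not
# reproduced in this section); the check confirms eta^{mu nu} Tring_{mu nu} = 0.
eta = sp.diag(-1, 1, 1, 1)
T = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'T{min(i, j)}{max(i, j)}'))
trT = (eta * T).trace()
Tring = T - sp.Rational(1, 4) * trT * eta
assert sp.simplify((eta * Tring).trace()) == 0
```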
\begin{table} \begin{tabular}{cc} Quantities & Dimensions \\ \hline $g_{\mu\nu}$, ${b^\kappa}_\mu$, $h_{\mu\nu}$, $\omega_{\mu\nu}$, $\Lambda_{\rm E}$ & --- \\ $A^a_\nu$, $H$ & $L^{-1}$ \\ $E^a_\nu$, $B^a_\nu$, $F^a_{\mu\nu}$, $R^{\mu\nu}$ & $L^{-2}$ \\ $p^{\kappa\lambda}$, $\tilde{h}^{\kappa\lambda}$, $\tilde{\omega}^{\kappa\lambda}$ & $L^{-3}$ \\ $V_{\kappa\lambda}$ & $M$ \\ $T_{\mu\nu}$ & $L^{-3} M$ \\ $G$, $G_1$, $G_2$ & $L M^{-1}$ \\ \end{tabular} \caption{Dimensions of various quantities in terms of length ($L$) and mass ($M$) for $c=1$.} \label{tabledimensions} \end{table} The Lagrangian associated with the Hamiltonian (\ref{Hcoupling}) for the coupling of the Yang-Mills field to matter has previously been proposed in Eqs.~(51) and (52) of \cite{hco231}. We here introduce an additional coupling of the tetrad field to matter, \begin{equation}\label{Htetradmatter} H_{\rm t/m} = \int V_{\kappa\lambda} \, p^{\kappa\lambda} \, d^3x , \end{equation} where, for the linearized theory, $V_{\kappa\lambda}$ can be assumed to be a symmetric tensor to be constructed from the energy-momentum tensor of matter (more precisely, the time derivative of $V_{\kappa\lambda}$ turns out to be a tensor; the defining equations and more insight into the role of the indices are provided in Section~\ref{secmodifiedconstr}). The idea behind this additional coupling is as follows. In the absence of matter, the composite theory selects solutions from a pure Yang-Mills theory, which is a consequence of the vanishing conjugate momenta $p^{\kappa\lambda}$ of the tetrad variables established in Eq.~(\ref{tetradallmom}). In the presence of matter, it is more natural to select solutions of the Yang-Mills theory with suitable external fluxes, so that the conjugate momenta $p^{\kappa\lambda}$ should no longer be expected to vanish. 
Use of the separate Hamiltonian (\ref{Htetradmatter}) in addition to the previously suggested coupling mechanism (\ref{Hcoupling}) allows us to find a consistently tuned coupling of both Yang-Mills and tetrad fields to matter. Note that the ``general wisdom'' about the possibilities of coupling gravity to matter \cite{Feynmanetal,Deser70,Straumann00} is not beyond all doubt \cite{Padmanabhan08} and, in the context of composite theories, this coupling can be even richer. For obtaining the composite theory of gravity in the presence of matter, we would like to add the Hamiltonians $H_{\rm YM/m}$, $H_{\rm t/m}$ and $H_{\rm m}$ introducing the coupling of field and matter to the Hamiltonian (\ref{Hampurefields}) of pure gravity. However, there is a problem. With the help of Table~\ref{tabledimensions}, we realize that the Hamiltonian (\ref{Hampurefields}) has dimension of ${\rm length}^{-1}$, and so does the Hamiltonian $H_{\rm YM/m}$ defined in Eq.~(\ref{Hcoupling}). The dimensions of the Hamiltonian $H_{\rm t/m}$ defined in Eq.~(\ref{Htetradmatter}) can still be adjusted by the definition of $V_{\kappa\lambda}$. However, the Hamiltonian $H_{\rm m}$ defined in Eq.~(\ref{weakfieldparticleH}) has dimensions of mass, which is what we actually expect for a Hamiltonian when using the speed of light as the unit for velocities ($c=1$). As the mismatch in dimensions can be regarded as an action factor, it seems natural to multiply $H_{\rm YM} + H_{\rm YM/t} + H_{\rm YM/m}$ by Planck's constant $\hbar$. We do that implicitly by using $\hbar$ as the unit of action ($\hbar=1$), thus eliminating the dimensional mismatch. However, this choice of units implies that, in $H_{\rm YM} + H_{\rm YM/t}$, we actually deal with the energy of gravitational field quanta, which is clearly not the most appropriate energy scale when we usually consider problems involving gravity. 
We hence introduce a very small dimensionless parameter $\Lambda_{\rm E}$ to scale down the typical energy associated with gravitationally interacting masses to the level of graviton energies, \begin{equation}\label{Hamfullsystem} H = H_{\rm YM} + H_{\rm YM/t} + H_{\rm YM/m} + \Lambda_{\rm E} (H_{\rm t/m} + H_{\rm m}) . \end{equation} In the Lagrangian formulation in Eq.~(52) of \cite{hco231}, it can be recognized that $\Lambda_{\rm E}$ plays the role of a dimensionless cosmological constant in general relativity. We hence write \begin{equation}\label{LambdaEdef} \Lambda_{\rm E} = \left( \frac{\ell_{\rm p}}{D} \right)^2 , \end{equation} where $\ell_{\rm p} = \sqrt{\hbar G/c^3} = \sqrt{G}$ is the Planck length and $D$ is the diameter of the observable universe. This parameter $\Lambda_{\rm E}$ can be estimated to be of the order of $10^{-124}$. It is interesting to note that even our formulation of classical gravity requires an action constant. A similar situation arises in formulating the entropy of a classical ideal gas, indicating that a deeper understanding of an ideal gas requires quantum theory. The same conclusion may be true for a deeper understanding of gravity. \subsection{Modified field equations} In the presence of matter, the dynamic aspects of the composition rule (\ref{glinA}) are affected by the Hamiltonian $H_{\rm t/m}$, but not its static aspects. In other words, the primary constraints are unchanged whereas the evolution equations for the tetrad variables get modified. 
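The order-of-magnitude estimate for $\Lambda_{\rm E}$ quoted above can be confirmed numerically; the values for $\ell_{\rm p}$ and $D$ in this sketch are standard estimates, not taken from the text:

```python
# Order-of-magnitude sketch for Lambda_E = (l_p / D)^2.
# The numerical inputs (Planck length, diameter of the observable
# universe) are assumed standard values.
l_p = 1.616e-35          # Planck length in meters
D = 8.8e26               # diameter of the observable universe in meters
Lambda_E = (l_p / D) ** 2
assert 1e-125 < Lambda_E < 1e-123   # of the order of 10^{-124}
```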
As $V_{\kappa\lambda}$ is assumed to be symmetric, only the evolution equations (\ref{hklevol}), (\ref{h0kevol}) and (\ref{h00evol}) for $h_{\kappa\lambda}$ get changed, \begin{eqnarray} \frac{\partial h_{kl}}{\partial t} &=& \frac{1}{2} \left( \frac{\partial h_{0l}}{\partial x^k} + \frac{\partial h_{0k}}{\partial x^l} \right) + A_{(0k)l} + A_{(0l)k} \nonumber \\ &-& \frac{1}{2\tilde{g}} \left( \frac{\partial \omega_{0l}}{\partial x^k} + \frac{\partial \omega_{0k}}{\partial x^l} \right) + 2 \Lambda_{\rm E} V_{kl} , \qquad \label{hklevolmat} \end{eqnarray} \begin{equation}\label{h0kevolmat} \frac{\partial h_{0l}}{\partial t} = \frac{\partial h_{ln}}{\partial x_n} - K \, \frac{\partial {h^\nu}_\nu}{\partial x^l} + 2 \Lambda_{\rm E} V_{0l} , \end{equation} and \begin{equation}\label{h00evolmat} \frac{\partial h_{00}}{\partial t} = \frac{\partial h_{0l}}{\partial x_l} - \frac{K}{1-K} \left[ 2 A_{(0l)l} - \frac{1}{\tilde{g}} \frac{\partial \omega_{0l}}{\partial x_l} \right] + 2 \Lambda_{\rm E} V_{00} . \end{equation} Equations~(\ref{h0kevolmat}) and (\ref{h00evolmat}) imply a tiny modification of the coordinate conditions (\ref{harmoniclin}). The Hamiltonian $H_{\rm YM/m}$ given in Eq.~(\ref{Hcoupling}) depends only on the spatial components of the Yang-Mills fields, so that the evolution equations (\ref{Aevola0}), (\ref{Eevol0k0}) and (\ref{Eevolkl0}) for the temporal components of the Yang-Mills fields remain unaffected. Equation (\ref{Aevolaj}) gets modified to \begin{equation}\label{Aevol0kjmod} \frac{\partial A^{(0l)}_j}{\partial t} = - E^{(0l)}_j + \frac{\partial A^{(0l)}_0}{\partial x^j} - C_{jl} + \delta_{jl} \, C_{00} , \end{equation} and \begin{equation}\label{Aevolkljmod} \frac{\partial A^{(kl)}_j}{\partial t} = - E^{(kl)}_j + \frac{\partial A^{(kl)}_0}{\partial x^j} + \delta_{jk} C_{0l} - \delta_{jl} C_{0k} . 
\end{equation} whereas Eqs.~(\ref{Eevol0kj}) and (\ref{Eevolklj}) become \begin{eqnarray} \frac{\partial E^{(0l)}_j}{\partial t} &=& - \frac{\partial E^{(0l)}_0}{\partial x^j} - \frac{\partial^2 A^{(0l)}_j}{\partial x^n \partial x_n} + \frac{\partial^2 A^{(0l)}_n}{\partial x^j \partial x_n} - J^{(0l)}_j \qquad \nonumber \\ &-& \frac{\partial C_{j0}}{\partial x_l} + \delta_{jl} \, \frac{\partial C_{n0}}{\partial x_n} , \label{Eevol0kjmod} \end{eqnarray} and \begin{eqnarray} \frac{\partial E^{(kl)}_j}{\partial t} &=& -\frac{\partial E^{(kl)}_0}{\partial x^j} - \frac{\partial^2 A^{(kl)}_j}{\partial x^n \partial x_n} + \frac{\partial^2 A^{(kl)}_n}{\partial x^j \partial x_n} \nonumber \\ &+& \frac{\partial C_{jk}}{\partial x^l} - \frac{\partial C_{jl}}{\partial x^k} + \delta_{jk} \, \frac{\partial C_{nl}}{\partial x_n} - \delta_{jl} \, \frac{\partial C_{nk}}{\partial x_n} . \qquad \label{Eevolkljmod} \end{eqnarray} The fact that $C_{\mu\nu}$ occurs in Eqs.~(\ref{Aevol0kjmod}) and (\ref{Aevolkljmod}) for the gauge vector fields underlines that the coupling of the stress tensor to the workhorse theory of composite gravity does not happen via the usual flux mechanism for Yang-Mills theories. The occurrence of $h^{\mu\nu}$ in Eq.~(\ref{weakfieldparticleH}) implies that the evolution equations (\ref{htilevolkl})--(\ref{htilevol0l}) for the symmetrized conjugate momenta $\tilde{h}^{\kappa\lambda}$ get modified, too. 
We find \begin{eqnarray} \frac{\partial \tilde{h}^{kl}}{\partial t} &=& \frac{\partial (\tilde{h}^{0k} - \tilde{g} \tilde{\omega}^{0k})}{\partial x_l} + \frac{\partial (\tilde{h}^{0l} - \tilde{g} \tilde{\omega}^{0l})}{\partial x_k} \nonumber \\ &-& 2 K \delta_{kl} \, \frac{\partial (\tilde{h}^{0n} - \tilde{g} \tilde{\omega}^{0n})}{\partial x^n} + 2 \Lambda_{\rm E} \, T^{kl} , \label{htilevolklm} \end{eqnarray} \begin{equation}\label{htilevol00m} \frac{\partial \tilde{h}^{00}}{\partial t} = 2 K \, \frac{\partial \tilde{h}^{0l}}{\partial x^l} + 2 \tilde{g} \, (1-K) \, \frac{\partial \tilde{\omega}^{0l}}{\partial x^l} + 2 \Lambda_{\rm E} \, T^{00} , \end{equation} and \begin{equation}\label{htilevol0km} \frac{\partial \tilde{h}^{0l}}{\partial t} = \frac{1}{2} \frac{\partial \tilde{h}^{ln}}{\partial x^n} + \frac{1}{2} \frac{\partial \tilde{h}^{00}}{\partial x_l} + 2 \Lambda_{\rm E} \, T^{0l} . \end{equation} The occurrence of the energy-momentum tensor in Eqs.~(\ref{htilevolklm})--(\ref{htilevol00m}) is a very important qualitative modification. As anticipated, the conjugate momenta of the tetrad variables do not vanish in the presence of matter. Remember, however, that the dimensionless parameter $\Lambda_{\rm E}$ is extremely small. \subsection{Modified constraints}\label{secmodifiedconstr} In the presence of matter, the primary constraints (\ref{glinconstraints1a}) and (\ref{glinconstraints1b}) remain unchanged. 
The secondary constraints (\ref{glinconstraints2a}) change to \begin{eqnarray} E^{(kl)}_j &=& \frac{\partial A^{(0j)}_l}{\partial x^k} - \frac{\partial A^{(0j)}_k}{\partial x^l} - \Lambda_{\rm E} \left( \frac{\partial V_{jl}}{\partial x^k} - \frac{\partial V_{jk}}{\partial x^l} \right) \nonumber \\ &+& \delta_{jk} \, C_{0l} - \delta_{jl} \, C_{0k} , \quad \label{glinconstraints2amod} \end{eqnarray} whereas Eq.~(\ref{glinconstraints2b}) becomes \begin{equation}\label{glinconstraints2bmod} E^{(0l)}_k - E^{(0k)}_l = \Lambda_{\rm E} \left( \frac{\partial V_{0l}}{\partial x^k} - \frac{\partial V_{0k}}{\partial x^l} \right) . \end{equation} The tertiary constraints (\ref{glinconstraints3a}) become \begin{eqnarray} \frac{\partial E^{(kl)}_0}{\partial x^j} - \frac{\partial E^{(0j)}_l}{\partial x^k} + \frac{\partial E^{(0j)}_k}{\partial x^l} &=& \frac{\partial}{\partial x_n} \left( \frac{\partial A^{(kl)}_n}{\partial x^j} - \frac{\partial A^{(kl)}_j}{\partial x^n} \right) \nonumber \\ && \hspace{-12em} + \Lambda_{\rm E} \frac{\partial}{\partial t} \left( \frac{\partial V_{jl}}{\partial x^k} - \frac{\partial V_{jk}}{\partial x^l} \right) + \delta_{jk} \, G_1 \frac{\partial T_{00} }{\partial x^l} - \delta_{jl} \, G_1 \frac{\partial T_{00} }{\partial x^k} , \nonumber \\ \label{glinconstraints3amod} \end{eqnarray} and Eq.~(\ref{glinconstraints3b}) changes to \begin{equation}\label{glinconstraints3bmod} \frac{\partial E^{(0l)}_0}{\partial x_k} - \frac{\partial E^{(0k)}_0}{\partial x_l} = \frac{\partial E^{(kl)}_n}{\partial x_n} + \Lambda_{\rm E} \frac{\partial}{\partial x^\mu} \left( \frac{\partial V^{\mu l}}{\partial x_k} - \frac{\partial V^{\mu k}}{\partial x_l} \right) . 
\end{equation} Finally, the quaternary constraints (\ref{glinconstraints4a}) and (\ref{glinconstraints4b}) become \begin{equation}\label{glinconstraints4amod} \frac{\partial J^{(0l)j}}{\partial x_k} - \frac{\partial J^{(0k)j}}{\partial x_l} = - \Lambda_{\rm E} \frac{\partial^2}{\partial x_\mu \partial x^\mu} \left( \frac{\partial V^{jl}}{\partial x_k} - \frac{\partial V^{jk}}{\partial x_l} \right) , \end{equation} and \begin{equation}\label{glinconstraints4bmod} \frac{\partial J^{(0l)0}}{\partial x_k} - \frac{\partial J^{(0k)0}}{\partial x_l} = - \Lambda_{\rm E} \frac{\partial^2}{\partial x_\mu \partial x^\mu} \left( \frac{\partial V^{0l}}{\partial x_k} - \frac{\partial V^{0k}}{\partial x_l} \right) . \end{equation} At this stage we have to make a proper choice of the functions $V^{\kappa\lambda}$ in the Hamiltonian in order to avoid further constraints that would quickly make it impossible to find any solutions to the entire set of constraints. For this purpose we added a coupling of matter to the tetrad variables in addition to the more obvious coupling to the Yang-Mills variables. As a first step, we want to identify further vanishing conjugate tetrad variables because, according to Eq.~(\ref{conjtetradinterpret0l}), only the variables $\tilde{\omega}^{0l}$ and $\tilde{h}^{kl}$ carry essential information. Careful inspection of the structure of the evolution equations suggests the following choices of vanishing variables in addition to those given in Eq.~(\ref{tetradomtilkl}), \begin{equation}\label{tetradconjsum0} \tilde{h}^{00} = 0 , \qquad \tilde{h}^{0l} - \tilde{g} \tilde{\omega}^{0l} = 0 . 
\end{equation} The evolution equations for the conjugate tetrad variables then reduce to the much simpler form \begin{equation}\label{htilevolklmX} \frac{\partial \tilde{h}^{kl}}{\partial t} = 2 \Lambda_{\rm E} \, T^{kl} , \qquad \frac{\partial \tilde{h}^{kl}}{\partial x^l} = - 2 \Lambda_{\rm E} \, T^{k0} , \end{equation} and \begin{equation}\label{omtilevol0lX} \tilde{g} \, \frac{\partial \tilde{\omega}^{0l}}{\partial t} = \Lambda_{\rm E} \, T^{0l} , \qquad \tilde{g} \, \frac{\partial \tilde{\omega}^{0l}}{\partial x^l} = - \Lambda_{\rm E} \, T^{00} . \end{equation} Note that the consistency between the two members of each equation is guaranteed by energy-momentum conservation. The quaternary constraints can now be satisfied if we construct $V^{\kappa\lambda}$ by solving the Poisson equations \begin{equation}\label{tetradhomtilrepsm1} J^{(0l)\nu} = - \Lambda_{\rm E} \frac{\partial^2 V^{l\nu}}{\partial x_\mu \partial x^\mu} , \end{equation} where suitable initial and boundary conditions need to be imposed to find $V^{l\nu}$. There is no need to choose any particular form of $V^{00}$ because, according to Eq.~(\ref{tetradhomtilrepsm1}), there is no flux component associated with it. We hence assume $V^{00}=0$, unless there is any particular need to modify Eq.~(\ref{h00evolmat}). Note that $\nu$ is a four-vector index whereas $l$ is related to the labels of the Lie algebra (more precisely, $l$ is the label for the Lorentz boosts). Equations (\ref{htilevolklmX}) and (\ref{omtilevol0lX}) can now be written as \begin{equation}\label{VTrels} \frac{\partial}{\partial t} \frac{\partial^2 V^{l\nu}}{\partial x_\mu \partial x^\mu} = T^{l\nu} , \qquad \frac{\partial}{\partial x^l} \frac{\partial^2 V^{l\nu}}{\partial x_\mu \partial x^\mu} = - T^{0\nu} , \end{equation} implying that $V^{l\nu}$ and $H_{\rm t/m}$ have dimensions of mass or energy ($c=1$). As announced, $V^{l\nu}$ is determined by the energy-momentum tensor and vanishes in the absence of matter.
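As a cross-check, the compatibility of the two members of Eq.~(\ref{VTrels}) can be made explicit: taking the spatial divergence of the first member and the time derivative of the second, the equality of mixed partial derivatives of $V^{l\nu}$ requires

```latex
% Equality of mixed derivatives of V^{l nu} in Eq. (VTrels) is
% equivalent to energy-momentum conservation.
\begin{equation*}
\frac{\partial T^{l\nu}}{\partial x^l}
= \frac{\partial}{\partial x^l} \frac{\partial}{\partial t}
  \frac{\partial^2 V^{l\nu}}{\partial x_\mu \partial x^\mu}
= \frac{\partial}{\partial t} \frac{\partial}{\partial x^l}
  \frac{\partial^2 V^{l\nu}}{\partial x_\mu \partial x^\mu}
= - \frac{\partial T^{0\nu}}{\partial t} ,
\end{equation*}
```

that is, $\partial T^{0\nu}/\partial t + \partial T^{l\nu}/\partial x^l = 0$, which is precisely energy-momentum conservation.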
Note that the three derivatives in Eq.~(\ref{VTrels}) are required to go from the level of lowest derivatives (tetrad variables) to the level of highest derivatives (conjugate tetrad variables), with the gauge vector fields and their conjugates in between (compare, for example, Eqs.~(\ref{hklevolmat}) and (\ref{htilevolklm})). Note that the different numbers of derivatives occurring in the various fields are also reflected in the different powers of $L^{-1}$ in Table~\ref{tabledimensions}. In the presence of matter, the procedure for selecting among the solutions of the Yang-Mills theory with external fluxes extends the idea of composite theories. This selection criterion should provide stability instead of the vanishing conjugate momenta associated with the tetrad variables for the composite theory of pure gravity. Again, the selection is very restrictive so that the composite theory of gravity possesses only few degrees of freedom. \subsection{Compact form of theory}\label{seccompactmat} As in Section~\ref{seccompactformpure}, we would like to find a closed set of differential equations for the tetrad variables, but now in the presence of matter. Again we need to express all the Yang-Mills variables in terms of the tetrad variables. Expressions for the vector fields $A_{a \mu}$ can be obtained from the evolution equations (\ref{omklevol}), (\ref{om0kevol}), (\ref{hklevolmat}) and the primary constraints (\ref{glinconstraints1a}). Their conjugates $E^{a \mu}$ can then be extracted from the original evolution equation (\ref{Aevola0}) for the temporal components and the modified equations (\ref{Aevol0kjmod}), (\ref{Aevolkljmod}) for the spatial components of the gauge vector fields. For the convenience of the reader, the explicit representations are listed in Appendix~\ref{appYMtetrad}. By construction, these expressions satisfy the primary constraints identically. 
We only need to consider the evolution equations for the conjugate Yang-Mills fields $E^a_\mu$ (the higher constraints can be verified in a straightforward manner). From Eq.~(\ref{Eevolkljmod}) we obtain \begin{eqnarray} \frac{\partial}{\partial x^l} \left( \frac{1}{2} \frac{\partial^2 h_{jk}}{\partial x_\mu \partial x^\mu} + C_{jk} \right) + \delta_{jk} \frac{\partial C_{l\mu}}{\partial x_\mu} &=& \nonumber\\ && \hspace{-12em} \frac{\partial}{\partial x^k} \left( \frac{1}{2} \frac{\partial^2 h_{jl}}{\partial x_\mu \partial x^\mu} + C_{jl} \right) + \delta_{jl} \frac{\partial C_{k\mu}}{\partial x_\mu} . \qquad \label{tetradsummatkl1} \end{eqnarray} By using that the tensor $T_{\mu\nu}$ in Eq.~(\ref{Ctensorchoice0}) satisfies the energy-momentum conservation (\ref{energymomconsmattT}), we obtain the following generalization of Eq.~(\ref{tetradhkl}), \begin{eqnarray} \frac{1}{2} \frac{\partial^2 h_{kl}}{\partial x_\mu \partial x^\mu} + G_1 \bigg( T_{kl} - \frac{1}{2} {T^\lambda}_\lambda \, \eta_{kl} \bigg) + 2 G_2 {T^\lambda}_\lambda \, \eta_{kl} &=& \qquad \nonumber \\ && \hspace{-6em} \frac{1}{2} \frac{\partial^2 f}{\partial x^k \partial x^l} , \label{tetradsummatkl2} \end{eqnarray} where the function $f$ results from integration of the third-order equations. From Eq.~(\ref{Eevolkl0}) we obtain another integrability condition, \begin{equation}\label{tetradsummat0l1} \frac{\partial}{\partial x^l} \bigg( \frac{1}{2} \frac{\partial^2 h_{0k}}{\partial x_\mu \partial x^\mu} + G_1 T_{0k} \bigg) = \frac{\partial}{\partial x^k} \bigg( \frac{1}{2} \frac{\partial^2 h_{0l}}{\partial x_\mu \partial x^\mu} + G_1 T_{0l} \bigg) .
\end{equation} From Eqs.~(\ref{Eevol0k0}) and (\ref{Eevol0kjmod}) we obtain after using Eqs.~(\ref{energymomconsmattT}) and (\ref{tetradhomtilrepsm1}), \begin{eqnarray} \frac{\partial}{\partial x^l} \bigg[ \frac{1}{2} \frac{\partial^2 h_{00}}{\partial x_\mu \partial x^\mu} + G_1 \bigg( T_{00} - \frac{1}{2} {T^\lambda}_\lambda \, \eta_{00} \bigg) && \nonumber\\ && \hspace{-16em} + 2 G_2 {T^\lambda}_\lambda \, \eta_{00} \bigg] = \frac{\partial}{\partial t} \left( \frac{1}{2} \frac{\partial^2 h_{0l}}{\partial x_\mu \partial x^\mu} + G_1 T_{0l} \right) , \qquad \label{tetradsummatjl00} \end{eqnarray} and \begin{eqnarray} \frac{\partial}{\partial t} \bigg[ \frac{1}{2} \frac{\partial^2 h_{jl}}{\partial x_\mu \partial x^\mu} + G_1 \bigg( T_{jl} - \frac{1}{2} {T^\lambda}_\lambda \, \eta_{jl} \bigg) + 2 G_2 {T^\lambda}_\lambda \, \eta_{jl} \bigg] &=& \nonumber\\ && \hspace{-20em} \frac{\partial}{\partial x^l} \left( \frac{1}{2} \frac{\partial^2 h_{0j}}{\partial x_\mu \partial x^\mu} + G_1 T_{0j} \right) , \label{tetradsummatjl0l} \end{eqnarray} respectively. Again, the choice (\ref{tetradhomtilrepsm1}) of $V^{l\nu}$ is of crucial importance because it leads to further integrability conditions. Equations (\ref{tetradsummat0l1})--(\ref{tetradsummatjl0l}) allow us to extend the differential equation (\ref{tetradsummatkl2}) to all components, \begin{eqnarray} \frac{1}{2} \frac{\partial^2 h_{\mu\nu}}{\partial x_\lambda \partial x^\lambda} + G_1 \bigg( T_{\mu\nu} - \frac{1}{2} {T^\lambda}_\lambda \, \eta_{\mu\nu}\bigg) + 2 G_2 {T^\lambda}_\lambda \, \eta_{\mu\nu} &=& \qquad \nonumber \\ && \hspace{-6em} \frac{1}{2} \frac{\partial^2 f}{\partial x^\mu \partial x^\nu} , \label{tetradsummatall} \end{eqnarray} possibly after a minor modification of $f$.
The compact equation (\ref{tetradsummatall}) has a remarkable similarity with the linearized version of Einstein's field equation (\ref{linGRfieldeq2}) with the curvature tensor (\ref{linRt}) in a harmonic coordinate system, provided that we choose \begin{equation}\label{G12choices} G_1 = 8 \pi G , \qquad G_2 = 0 , \end{equation} and $f=0$. The freedom of choosing the function $f$ is the only leftover from the higher derivative nature of the theory. It gives us the remarkable possibility to mimic the local gauge degree of freedom associated with the general coordinate transformations employed to achieve the one-parameter family of coordinate conditions (\ref{harmoniclin}), although the composite theory is defined in Minkowski space. \subsection{Isotropic solution revisited} As an application of our compact equations, we consider a mass $M$ resting at the origin, which is represented by an energy-momentum tensor $T_{\mu\nu}$ with only one nonvanishing component, $T_{00} = M \delta^3(\bm{x})$. Equation~(\ref{VTrels}) requires nonzero components $V^{l0}$. A simple solution of this equation is found to be \begin{equation}\label{isoVl0sol} V^{l0} = \frac{M}{8 \pi} \frac{x^l}{r} , \end{equation} which describes a purely orientational effect. The complete list of conjugate tetrad variables is given by \begin{equation}\label{isoconjtetrads} \tilde{h}^{0l} = \tilde{g} \tilde{\omega}^{0l} = - \frac{\Lambda_{\rm E} M}{4 \pi} \frac{x^l}{r^3} , \quad \tilde{\omega}^{kl} = \tilde{h}^{kl} = \tilde{h}^{00} = 0 . \end{equation} Note that the modification of the coordinate condition (\ref{h0kevolmat}) is extremely tiny, but independent of the distance from the central mass. We now focus on the field equations (\ref{tetradsummatall}) with the parameter choices (\ref{G12choices}). Away from the origin, these equations have already been solved in Section \ref{secisostatsol}. 
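A quick numerical check of the radial profile in Eq.~(\ref{isoVl0sol}): away from the origin, $\nabla^2 (x^l/r) = -2\, x^l/r^3$, which is behind the $x^l/r^3$ dependence of the conjugate tetrad variables in Eq.~(\ref{isoconjtetrads}). The sketch below (an illustrative pure-Python calculation; the evaluation point and step size are arbitrary) verifies this identity by central finite differences.

```python
import math

# Check that f_l = x^l / r satisfies Laplacian(f_l) = -2 x^l / r^3
# away from the origin (the radial profile of V^{l0} in the text).

def f(x, y, z):
    return x / math.sqrt(x * x + y * y + z * z)

def laplacian(g, x, y, z, h=1e-4):
    # second-order central differences in the three coordinate directions
    lap = 0.0
    for dx, dy, dz in ((h, 0, 0), (0, h, 0), (0, 0, h)):
        lap += g(x + dx, y + dy, z + dz) - 2 * g(x, y, z) + g(x - dx, y - dy, z - dz)
    return lap / h ** 2

x, y, z = 0.7, -0.4, 1.1                 # arbitrary point away from the origin
r = math.sqrt(x * x + y * y + z * z)
assert abs(laplacian(f, x, y, z) - (-2 * x / r ** 3)) < 1e-4
```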
By integrating the simplified field equations \begin{equation}\label{tetradh00kls} \frac{\partial^2 h_{00}}{\partial x_n \partial x_n} + G_1 T_{00} = 0 , \quad \frac{\partial^2 (h_{ll}-f)}{\partial x_n \partial x_n} + 3 G_1 T_{00} = 0 , \end{equation} over a sphere around the origin and using $h_{ll}-f=3(\bar{\alpha}+\bar{\xi})$, we find $r_0=MG$ and $\bar{c}=1$ for the coefficients in the solution (\ref{linbetasol}), (\ref{linalphaxisol}). More details about isotropic solutions can be found in Appendix \ref{appharmonic}. \subsection{Modified particle motion}\label{secmodpartmot} For obtaining the motion of a particle with mass $m$ in a gravitational field, it is convenient to divide the Hamiltonian (\ref{Hamfullsystem}) by $\Lambda_{\rm E}$ because the resulting equations then look more familiar. Whereas the variational problem of the Lagrangian approach is clearly unaffected by such a constant factor, it corresponds to a rescaling of the particle momentum variables in the Hamiltonian formulation. However, the particle trajectories remain unchanged. We assume that the influence of the Hamiltonian $H_{\rm t/m}$ is negligibly small and only $H_{\rm m}$ and $H_{\rm YM/m}$ contribute to the particle motion. This assumption is justified by the extremely small factor $\Lambda_{\rm E}$ in $\tilde{h}^{\kappa\lambda}$ (see, for example, Eq.~(\ref{isoconjtetrads}) for static isotropic fields). The Hamiltonian $H_{\rm t/m}$ might have an influence on the motion of mass only on cosmological length and time scales.
The resulting evolution equation for the particle momentum is given by \begin{equation}\label{fullparticleevolp} \frac{d p_j}{d t} = \frac{p_\mu p_\nu}{2 \gamma m} \, \frac{\partial}{\partial x^j} \left[ h^{\mu\nu} - \frac{2}{\Lambda_{\rm E}} \left( G_1 \mathring{\cal R}^{\mu\nu} + G_2 \eta^{\mu\nu} {{\cal R}^\lambda}_\lambda \right) \right] , \end{equation} where we have used the expression (\ref{YML2HR}) for $H_{\rm YM/m}$, and the evolution of the particle position is governed by \begin{eqnarray} \left( 1 + \frac{1}{2} h^{00} - \frac{p_k p_l h^{kl}}{2 \gamma^2 m^2} \right) \frac{d x^j}{d t} &=& \nonumber \\ && \hspace{-10em} \label{fullparticleevolx} \left( \delta^{j\mu} - h^{j\mu} + \frac{2 G_1}{\Lambda_{\rm E}} \mathring{\cal R}^{j \mu} \right) \frac{p_\mu}{\gamma m} \\ && \hspace{-10em} + \, \frac{1}{\Lambda_{\rm E}} \left[ G_1 \left( \mathring{\cal R}^{00} - \frac{p_k p_l \mathring{\cal R}^{kl}}{\gamma^2 m^2} \right) + G_2 \frac{{{\cal R}^\lambda}_\lambda}{\gamma^2} \right] \frac{p_j}{\gamma m} . \nonumber \end{eqnarray} The factor in parentheses on the left-hand side of Eq.~(\ref{fullparticleevolx}) simply changes $d x^j/d t$ into $d x^j/d \tau$, where $\tau$ is the proper time of the particle moving in a gravitational field. For the static isotropic solution in the weak-field approximation, the curvature tensor vanishes. Equations (\ref{fullparticleevolp}) and (\ref{fullparticleevolx}) then describe geodesic motion. However, this should not be taken for granted. For the fully nonlinear composite theory of gravity, it has been shown in Appendix~A of \cite{hco231} that only ${{\cal R}^\lambda}_\lambda$ and ${\cal R}^{00}$ vanish (however, that result was found in a standard quasi-Minkowskian coordinate system that does not satisfy the coordinate conditions (\ref{harmoniclin})). If one still wants to achieve geodesic motion then one would have to choose the scalar coupling of fields and matter through $G_2$ rather than the tensorial coupling through $G_1$. 
A more appealing option is to search for coordinate conditions characterizing a background Minkowski system that leads to a vanishing curvature tensor in matter-free space. \section{Summary and conclusions}\label{secconclusions} The main insight from this paper is this: A lot of things could go wrong with composite gravity, but they don't. The canonical Hamiltonian formulation of the composite theory of gravity obtained by expressing the gauge vector fields of the Yang-Mills theory based on the Lorentz group in terms of tetrad or \emph{vierbein} variables requires $80$ fields, not counting any ghost fields for handling gauge conditions. A large number of constraints should arise, so that gravity has only a few degrees of freedom, but not so many that the theory would not admit any solutions. In addition to constraints associated with gauge degrees of freedom, there are constraints resulting from the composition rule. Quite miraculously, we obtain exactly the right total number of constraints. In the presence of matter, securing solutions by avoiding too many constraints requires a consistently matched double coupling of matter to both Yang-Mills and tetrad fields. The possibility of finding a proper number of natural constraints relieves the pressure to use smaller Lie groups like SU(2), which is behind the Ashtekar variables proposed for a canonical approach to gravity in the context of \emph{dreibein} variables \cite{Ashtekar86,Ashtekar87}. Composite theories involve higher derivatives and are hence prone to instability. For composite gravity, one would expect fourth-order differential equations. However, the constraints lead to a very special feature of composite higher derivative theories: they select solutions from a workhorse theory. For composite gravity this means that we deal with selected solutions of the Yang-Mills theory based on the Lorentz group. In the presence of matter, the Yang-Mills theory includes suitable external fluxes. 
This selection effect guarantees the elimination of instabilities. As the selection is very restrictive, we hope that it also helps to eliminate potential problems associated with the non-compact nature of the Lorentz group (Yang-Mills theories are usually based for good reasons on compact Lie groups). As composite gravity provides selected solutions of a Yang-Mills theory, it is much closer to the standard treatment of electroweak and strong interactions than general relativity. As a consequence of the equivalence principle, gravity is all about geometry. However, this remark does not imply that gravity must necessarily be interpreted as curvature in space-time \cite{Jimenezetal19}. The composite theory of gravity expresses the Yang-Mills fields associated with the Lorentz group in terms of the tetrad fields associated with a space-time metric. This metric is only used for expressing momenta in terms of velocities and may hence be interpreted as an anisotropy of mass. The metric has no effect on the measure used for the integrations in the Hamiltonian or Lagrangian, which are performed in an underlying Minkowski space. Nevertheless, the particle motion in the field around a central mass turns out to be geodesic. And nevertheless, the field equations for a tensorial coupling of the gravitational field to matter are remarkably similar to general relativity in the weak-field approximation. In the nonlinear regime, however, it might turn out to be necessary to use the scalar coupling to guarantee the geodesic motion of particles. The canonical Hamiltonian formulation of the evolution equations of composite gravity in a large space is clearly advantageous for quantization. The constraints resulting from the composition rule are found to be gauge invariant, second class constraints. This suggests that, in the quantization process, they can be treated via Dirac brackets, and the gauge constraints can be treated independently with the BRST procedure. 
Therefore, quantization of linearized composite gravity in the context of dissipative quantum field theory \cite{hcoqft} seems to be straightforward. A compact formulation of the equations for the metric is advantageous for solving practical problems, even though these second-order differential equations have some special features: a free function appears as a result of eliminating higher derivatives by integration; this function is reminiscent of gauge degrees of freedom in general relativity. The steps carried out here in great detail for the weak-field approximation should provide guidance for the proper canonical treatment of the fully nonlinear composite theory of gravity proposed in \cite{hco231}. Whereas many of the steps are straightforward and may actually be more transparent in the nonlinear setting (for example, true vector indices can be recognized more easily), special attention must be paid to the coordinate conditions that we want to use for characterizing appropriate Minkowskian coordinate systems (see Appendix \ref{appharmonic}). It would be desirable to find coordinate conditions for which the curvature tensor vanishes in empty space. Moreover, one needs to make a choice between coordinate conditions that are more in the spirit of general relativity or better matched to the assumption of a background Minkowski metric.
\section{Introduction} \label{section:into} Compelling evidence for the existence of non-baryonic dark matter particles is provided by the temperature structure of the cosmic microwave background radiation \citep{planck2016} and supported by observations of gravitational lensing \citep[see][for a review]{massey2010}. Measurements of the cosmic large-scale structure set constraints on the properties of the particles. Thus, the observed large-scale distribution of galaxies rules out hot dark matter, that is, particles with large primordial thermal velocities, as the main form of dark matter \citep{frenk1983,white1983,white1984}. On the other hand, the data are in excellent agreement with the cold dark matter (CDM) model, in which the particles have negligible primordial thermal velocities \citep{davis1985,springel2005, rodriguez2016}. Warm dark matter (WDM) models represent the current upper bound on the primordial velocity distribution of the dark matter particle. Testing these models serves to constrain the properties of dark matter in the early Universe and also to guide searches for the fundamental particle nature of dark matter. The main distinguishing difference between the CDM and WDM models is the predicted abundance of structures on the scale of dwarf galaxies and below \citep{colin2000, bode2001, lovell2012, schneider2012, kennedy2014}. Current WDM models of interest, for example a 7 keV sterile neutrino\footnote{Such models are motivated by the observation of a 3.5~keV emission line in the X-ray spectra of galaxies and clusters \citep{bulbul2014, boyarsky2014}.}, predict an exponential reduction in the abundance of structure below a mass of approximately $10^8$~M$_{\odot}${} \citep{lovell2012,schneider2013,hellwing2016, bose2017, lovell2017}; by contrast, in the CDM model the halo mass function continues to increase towards low masses \citep{diemand2007, springel2008}. Precise measurements of the abundance of such low mass haloes would constrain WDM models and, if they were shown to be absent, would conclusively rule out the CDM model.
Galaxies cannot form in halos of mass $\lesssim 10^8$M$_{\odot}${} \citep{sawala2013,sawala2016,benitez-llambay2020} so these can only be detected through gravitational lensing effects, particularly the distortions they cause to the images of strong lensing arcs produced by much more massive lenses such as groups and clusters of galaxies \citep{koopmans2005}. This method has already been used successfully to detect a $1.9\pm0.1\times10^8$~M$_{\odot}${} dark satellite and the detection sensitivity is expected to reach $\sim 2\times 10^7$~M$_{\odot}${} \citep{vegetti2012}\footnote{The definition of mass in these papers assumes a truncated pseudo-Jaffe model and differs from the definition used in more recent gravitational lensing studies which is based on the NFW model.}. \citet{li2017} estimate that analysis of about 100 strong lensing systems could conclusively distinguish CDM from the 7~keV sterile neutrino WDM models, while samples of quadruply imaged quasars have already been used to infer that the halo mass function continues down to masses $\lesssim 10^{7.8}$~M$_{\odot}${} \citep{2020MNRAS.491.6077G}. \citet{li2017} and \citet{despali2018} based their predictions of the subhalo and field halo contributions to the lensing signal on dark-matter-only (DMO) simulations. It is now well established that the inclusion of baryons in the simulations has important effects on the population of small-mass subhalos orbiting in Milky Way mass haloes \citep{donghia2010,sawala2017, garrison2017, richings2018}, leading to a reduction in the abundance of subhaloes near the centre of the host of at least 50\%. The size of these effects in general depends on the size and shape of the galaxy at the centre. Haloes that produce visible lens arcs are typically ten times more massive than the Milky Way halo \citep{bolton2008} and the galaxies that form at their centres are different in size and morphology to the Milky Way. 
Simulating $10^{13}$~M$_{\odot}${} haloes with a small enough particle mass to resolve the population of $10^7$~M$_{\odot}${} subhaloes necessary for strong lensing tests, whilst also including the effects of baryons at sufficient resolution, is computationally prohibitive with conventional techniques. Here we describe and implement a new technique for setting up the initial conditions of a cosmological simulation, so that dark matter particles outnumber gas particles by 7:1. This approach allows us to resolve $10^7$~M$_{\odot}${} substructures within a $10^{13}$~M$_{\odot}${} halo, whilst following the gas dynamics at the full resolution of the high-resolution \textsc{Eagle}{} simulation, ${\sim}10^5$M$_{\odot}${} \citep{schaye2015}. In the simulation described here, the masses of dark matter and gas particles are approximately equal. This approach has the added benefit of avoiding the spurious growth in the sizes of galaxies described by \citet{ludlow2019}, caused by gravitational two-body scattering of unequal-mass particles imparting velocity kicks to the lighter particles. This paper is arranged as follows: in \S\ref{section:simulations} we describe the creation and testing of the initial conditions of our simulation, as well as some key diagnostics of the completed simulation. In \S\ref{section:halos} we examine the effect of both baryons and environment on the abundance and properties of field halos. This section also includes a discussion of the definition of the mass of a halo. In \S\ref{section:subhalos} we study the abundance and concentration of subhalos in the central halo of the simulation. We also consider the variation in the observed abundance of structure due to projection effects. We conclude in \S\ref{section:conclusions}. 
\section{Simulations} \label{section:simulations} The simulation was performed using the \textsc{Eagle}{} Reference model \citep{schaye2015, crain2015} with one exception: in addition to the fiducial star formation rate calculation, any gas particle reaching a density $n_\mathrm{H} > 10^4 \, \rm{cm}^{-3}$ was directly converted into a star particle. \subsection{Candidate selection} It is important that the halo and associated central galaxy selected for resimulation be representative of those that produce observed lenses. \citet{despali2017} identified a sample of halos in the \textsc{Eagle}{} 100 Mpc simulation \citep{schaye2015} which have similar properties to lenses detected in the SLACS Survey \citep{bolton2006}. This was designed to detect bright, early-type lens galaxies, the most suitable for detailed lensing and photometric studies, at $z\sim 0.2$. The following criteria were used: \begin{itemize} \item The halo is at a redshift of approximately $z=0.2$. \item The halo must be relaxed (according to the criteria of \citealt{neto2007}). \item The halo has a virial mass between $10^{12}$--$10^{14.5}$~M$_{\odot}${}. (Less massive halos will not produce visible Einstein rings.) \item The halo has a velocity dispersion of between 160--400~km/s inside the half-mass radius\footnote{The half mass radius is calculated in projection, averaging over three orthogonal directions.}. \item The central galaxy is an elliptical. Specifically, at least 25\% of all star particles inside 20 kpc must be counter rotating, where the direction of rotation is given by the total angular momentum of all the star particles in this region. \end{itemize} From the sample of halos we select one object for resimulation. In the \textsc{Eagle}{} 100 Mpc volume run with the \textsc{Reference} subgrid model, the halo has a FOF ID of 129, a mass of $M_{200}=10^{13.1}$~M$_{\odot}${}, and is located at [89.742, 42.189, 94.507] Mpc.
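The counter-rotation criterion in the last item above can be made concrete; the sketch below (illustrative code with hypothetical function names, not taken from the \textsc{Eagle}{} pipeline) computes the counter-rotating fraction for a set of star particles.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def counter_rotating_fraction(masses, positions, velocities):
    """Fraction of star particles whose angular momentum is anti-aligned
    with the total angular momentum of the selection (e.g. all star
    particles inside 20 kpc of the galaxy centre)."""
    L = [0.0, 0.0, 0.0]
    for m, r, v in zip(masses, positions, velocities):
        j = cross(r, v)
        for k in range(3):
            L[k] += m * j[k]
    n_counter = sum(1 for r, v in zip(positions, velocities)
                    if dot(cross(r, v), L) < 0.0)
    return n_counter / len(masses)

# three prograde orbits and one retrograde one -> fraction 0.25,
# which just meets the 25% ellipticity criterion
pos = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (1, 0, 0)]
vel = [(0, 1, 0), (-1, 0, 0), (0, -1, 0), (0, -1, 0)]
assert counter_rotating_fraction([1, 1, 1, 1], pos, vel) == 0.25
```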
\subsection{Construction of initial conditions} \label{section:zoom_ics} We use a zoom simulation \citep{frenk1996} to study the selected halo. This allows us to resolve the low-mass substructures relevant for tests of the CDM model whilst minimising the computational burden. We find all particles which are less than 5.5 Mpc from the potential minimum of the halo at redshift $z=0.2$. We then identify these particles in the \textsc{Eagle}{} simulation initial conditions and trace them back to their comoving coordinates at the Big Bang using the Zel'dovich approximation \citep{zeldovich1970}. This defines the region of space known as the Lagrangian region, which is the patch of the universe from which our target halo will form. To perform a zoom simulation, the Lagrangian region is populated with particles which have smaller masses than the particles of the parent simulation. The rest of the volume is populated with more massive particles, present only to reproduce the correct large-scale tidal forces without significantly increasing the overall computational cost. The particles which populate the Lagrangian region must be arranged such that {\em (i)}~the whole region has the mean density of the universe, and {\em (ii)}~the configuration of particles is very close to being gravitationally stable. Any instabilities in the initial conditions which are not due to physical effects will lead to the rapid growth of artificial structure. For DMO zoom simulations, the Lagrangian region can simply be populated with a uniformly-spaced grid. A common approach for simulations using the smoothed particle hydrodynamics (SPH) technique is to take the uniform grid of DMO particles and split each particle into a gas particle and a dark matter particle. The total mass of each pair is kept the same as the DMO particle, and the particles are placed such that their centre of mass is the same as the position of the DMO particle.
In this setup there is one dark matter particle per gas particle, and the ratio of the particle masses is determined by the cosmological parameters of the simulation, i.e. $m_\mathrm{DM}/m_\mathrm{gas}\equiv\Omega_\mathrm{DM}/\Omega_\mathrm{b}$. In the Planck cosmology adopted here (Table~\ref{tab:cosmo}), this means that each dark matter particle is 5.36 times heavier than a gas particle. Our approach differs from the method outlined above in that the initial conditions are created with 7 dark matter particles per gas particle. This means that the ratio of the particle masses is given by $m_\mathrm{DM}/m_\mathrm{gas}\equiv\Omega_\mathrm{DM}/(7\,\Omega_\mathrm{b})\sim0.77$. To ensure uniform matter density, and to avoid gravitational instabilities (especially at the boundary of the Lagrangian region), we tessellate the Lagrangian region with a template as shown in Fig.~\ref{fig:ic_cell}. \begin{figure} \includegraphics[width=\linewidth,trim=3.5cm 3.cm 3.5cm 0.4cm]{Figures/ic_cell.pdf} \caption{Template set of particles used to populate the Lagrangian region of the initial conditions. Dark matter particles are blue, and the gas particle is orange. The area of each particle in the diagram is directly proportional to its mass.} \label{fig:ic_cell} \end{figure} Each template contains one gas particle, which sits at the centre of the cell. The template also contains 26 ``fractional'' dark matter particles, positioned symmetrically on the faces, edges and vertices of the cell. When two templates are placed next to each other, some particles from each template will occupy the same position as particles from the template next door. These coincident fractional particles are combined into one whole particle, with a mass equal to the combined mass of the original particles. In the interior of the Lagrangian region, each face particle will overlap with one other face particle, each edge particle will overlap with three other edge particles, and each vertex particle will overlap with seven other vertex particles.
Therefore, in order for all the dark matter particles in the interior of the Lagrangian region to have the same target mass, the mass of each face particle in the template is one half of the target mass. Similarly, the edge and vertex particles in the template have masses of one quarter and one eighth of the target particle mass respectively. The total number of dark matter particles per template in the interior of the Lagrangian region is thus given by $6/2 + 12/4 + 8/8 = 7$. Once the Lagrangian region has been populated with copies of the template, almost all dark matter particles will have the same mass, except for dark matter particles at the boundary, which will have some fraction of the target dark matter mass. These fractional masses at the edge of the Lagrangian region are necessary to ensure uniform density and gravitational stability. As the gas particle is placed at the centre of the template, all the gas particle masses in the Lagrangian region will be the same. Outside of the high-resolution region, the tidal particles were placed using the method adopted for the Aquarius simulations \citep{springel2008}. Because the tiling method for the high-resolution region is new, as a precaution, we performed an additional test on the particle load. We created a full set of initial conditions with no cosmological perturbations and ran a simulation from our intended start redshift of 127 to redshift zero. No structures formed within the high-resolution region. Not unexpectedly, some clustering occurred at the interface between the high-resolution region and the lightest tidal particles. This structure formation, which is numerical in origin, was limited to a thin surface only. The velocities remained small except close to this surface. This indicates that the high-resolution region in the particle load is at precisely the mean density of the universe, as intended.
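As a sanity check on the bookkeeping above, the template arithmetic and the implied particle mass ratios can be reproduced in a few lines (a sketch using the densities from Table~\ref{tab:cosmo}, not the actual IC-generation code):

```python
# Face, edge and vertex dark matter particles are shared between 2, 4 and
# 8 neighbouring templates respectively, so each template carries 1/2, 1/4
# and 1/8 of the target DM particle mass at those sites.
n_face, n_edge, n_vertex = 6, 12, 8
dm_per_template = n_face / 2 + n_edge / 4 + n_vertex / 8  # 7 DM particles

# Mass ratios implied by the cosmological densities (Planck values from
# the text; Omega_DM = Omega_m - Omega_b):
OMEGA_M, OMEGA_B = 0.307, 0.04825
OMEGA_DM = OMEGA_M - OMEGA_B
standard_ratio = OMEGA_DM / OMEGA_B       # ~5.36 for the usual 1:1 split
our_ratio = OMEGA_DM / (7 * OMEGA_B)      # ~0.77 with 7 DM per gas particle
```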
The dark matter particles in this boundary region have masses that differ from those in the interior of the Lagrangian region. We excluded these particles, as well as all other tidal particles from the analysis. The cosmological parameters used for the simulation are taken from the Planck 2013 results \citep{planck2014} and are listed in Table~\ref{tab:cosmo}. The table also lists the gravitational softening length used in the high-resolution region of our simulation and the masses of the dark matter and gas particles in the initial conditions. The initial conditions contain about 198 million gas particles and 1.393 billion dark matter particles in the high resolution region. In addition there are about 76 million more massive `tidal' dark matter particles which surround the high resolution region and fill the entire computational volume. \begin{table} \centering \begin{tabular}{@{}ll@{}} \toprule Cosmological parameter & Value \\ \midrule $\Omega_m$ & 0.307 \\ $\Omega_{\Lambda}$ & 0.693 \\ $\Omega_b$ & 0.04825 \\ $h\equiv H_0$/(100 km s$^{-1}$ Mpc$^{-1}$) & 0.6777 \\ $\sigma_8$ & 0.8288 \\ $n_s$ & 0.9611 \\ $Y$ & 0.248 \\ $l_{\mathrm{box}}$ [cMpc] & 100 \\ $\epsilon_0$ [kpc] & 0.05 \\ $m_{\mathrm{DM}}$ {[}$10^4$ M$_{\odot}${]} & 8.27 \\ $m_{\mathrm{gas}}$ {[}$10^4$ M$_{\odot}${]} & 10.74 \\ \bottomrule \end{tabular}% \caption{Cosmological and numerical parameters used in the simulation. 
$\Omega_m$, $\Omega_{\Lambda}$ and $\Omega_{b}$ are the mean density of matter, dark energy and baryons in units of the critical density at redshift $z=0$; $H_0$ is the value of the Hubble parameter at redshift $z=0$; $\sigma_8$ is the standard deviation of the linear matter distribution smoothed with a top-hat filter of radius 8 $h^{-1}$ cMpc; $n_s$ is the index of the power law which describes the power spectrum of primordial fluctuations; $Y$ is the primordial abundance of helium; $l_{\mathrm{box}}$ is the comoving side length of the simulation box; $\epsilon_0$ is the softening length used in the force calculations for high-resolution dark matter and gas particles at redshift $z=0$. $m_{\mathrm{DM}}$ is the mass of a dark matter particle in the high-resolution region of the hydrodynamical version of the simulation. Edge effects in the construction of the initial conditions mean that a tiny fraction of the high-resolution dark matter particles (approximately 1.5\%) have masses which are a fraction of this value. } \label{tab:cosmo} \end{table} \subsection{Testing the initial conditions} Changing the number of dark matter particles per gas particle can potentially affect important observables in the final simulation. For example, gravitational two-body scattering between species of different masses influences observables like the size of small galaxies \citep{ludlow2019}. To study the effects of increasing the dark matter resolution for a fixed gas mass, we ran a 25~Mpc cosmological volume with $376^3$ gas particles and the same initial phases as the L0025N0376 volume described in \citet{schaye2015}, but with seven times as many dark matter particles. We refer to the original run as the standard-resolution (SR) simulation, and our new volume as the DMx7 simulation.
The mass of gas particles in these two simulations is the same, but our version has seven times as many dark matter particles; that is, our simulation has the standard \textsc{Eagle}{} gas resolution but a dark matter resolution similar to that of the \textsc{Eagle}{} high-resolution (HR) run (L0025N0752). \begin{figure} \includegraphics[width=\linewidth]{Figures/testing_mf.pdf} \caption{The mass function of halos (solid lines, $M=M_{200}$) and galaxies (dashed lines, $M=M_\star(<30 \, \mathrm{kpc})$) in three realisations of the \textsc{Eagle}{} 25~Mpc simulation. The blue lines show the halo and galaxy mass functions at standard \textsc{Eagle}{} resolution. The orange lines show the effect of increasing the resolution of both gas and dark matter in the simulation, while the green lines show the effect of only increasing the resolution of dark matter whilst holding the gas resolution constant, as described in \S\ref{section:zoom_ics}. Dotted lines show the mass of 100 DM particles in each simulation.} \label{fig:testing_mf} \end{figure} We checked several key properties, the first of which is the mass function of halos and galaxies. Here we take the mass of a galaxy to be the mass of all star particles within 30 kpc of the potential minimum of the host halo. These properties are shown in Fig.~\ref{fig:testing_mf}. The mass function of galaxies is almost unchanged between the versions of the simulation which have the same number of gas particles but different numbers of dark matter particles. Increasing the resolution of the gas particles has a much more significant impact on the abundance of both smaller and larger galaxies. The DMx7 simulation also does an excellent job of reproducing the halo mass function at masses below the resolution limit of the SR simulation.
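The aperture mass used for galaxies here (all star particles within 30 kpc of the potential minimum) is straightforward to sketch; the particle data below are made up purely for illustration:

```python
import numpy as np

def aperture_stellar_mass(star_pos, star_mass, centre, r_ap=30.0):
    """Sum the mass of star particles within r_ap (kpc) of `centre`."""
    r = np.linalg.norm(star_pos - centre, axis=1)
    return star_mass[r < r_ap].sum()

# Toy example: two star particles inside the 30 kpc aperture, one outside.
pos = np.array([[5.0, 0.0, 0.0], [0.0, 20.0, 0.0], [0.0, 0.0, 50.0]])
mass = np.array([1.0, 2.0, 4.0])
m_star = aperture_stellar_mass(pos, mass, centre=np.zeros(3))  # -> 3.0
```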
In general, if the difference between the blue and orange lines is bigger than the difference between either blue--green or orange--green, we conclude that the effect of increasing the gas resolution is more significant than the effect of changing the dark matter-to-gas mass ratio. \begin{figure*} \includegraphics[width=\linewidth]{Figures/testing_den.pdf} \caption{The ratio of the density of dark matter and stars in the HR (orange lines) and DMx7 (green lines) simulations to the density of dark matter and stars in the \textsc{Eagle}{} SR 25~Mpc simulation. The sample contains 100 halos bijectively matched between simulations. Solid lines show the median ratio as a function of radius for each species. Shaded regions indicate the interquartile range. The light grey shaded region shows the approximate value of the \citealt{power2003} radius for the SR simulation, whilst the dark grey region shows the corresponding radius for the HR and DMx7 simulations.} \label{fig:testing_den} \end{figure*} We also tested the effect of differing species resolution on the internal structure of halos. We matched halos between simulations by mass and position. Specifically, the masses of a potential matched pair must be within a factor of two,\footnote{Typically the masses of matched halos agree to better than 10\%.} and the first halo must lie within the virial radius of the second halo and vice versa. This procedure produces a unique match for each of the 100 most massive halos in the SR simulation. Each halo in the SR simulation has a corresponding matched halo in the HR and DMx7 simulations. We calculated the density of dark matter and stars as a function of radius in each halo. For each species, we then calculated the ratio of the density in the HR and DMx7 simulations to the density in the SR simulation. We performed this calculation for the 100 most massive halos in each simulation, which span a mass range of approximately $10^{11.5}$--$10^{13.5}$~M$_{\odot}${}.
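The bijective matching criteria described above (masses within a factor of two, each halo lying inside the other's virial radius) amount to a symmetric pair test; a minimal sketch with illustrative numbers:

```python
import numpy as np

def is_match(m1, pos1, r200_1, m2, pos2, r200_2):
    """True if two halos satisfy the matching criteria in the text."""
    mass_ok = 0.5 <= m1 / m2 <= 2.0
    d = np.linalg.norm(np.asarray(pos1) - np.asarray(pos2))
    return bool(mass_ok and d < r200_1 and d < r200_2)

# Masses in Msun, positions and radii in kpc (illustrative values).
matched = is_match(1e12, [0, 0, 0], 200.0, 1.1e12, [50, 0, 0], 210.0)   # True
rejected = is_match(1e12, [0, 0, 0], 200.0, 5e12, [50, 0, 0], 400.0)    # False
```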
The results are shown in Fig.~\ref{fig:testing_den}. Outside the \citet{power2003} radius, all three versions of the simulations display excellent agreement in the measured dark matter density profiles. At distances of less than 5 kpc from the centre of the halo, the density of dark matter in the DMx7 simulation is significantly lower than in the simulations which have a standard gas to dark matter particle mass ratio. This result is not unexpected. \citet{ludlow2019} have shown that the equipartition of energy between multiple species of different-mass particles causes the heavier species to sink artificially towards the centre of the halo. In the case of the SR simulation, the dark matter particles are around five times heavier than the star particles, which causes an artificial increase in the density of dark matter at the centre of the halo. The second panel of Fig.~\ref{fig:testing_den} shows that beyond the \citet{power2003} radius, inside which energy equipartition can affect the distribution of particles, the density of stars is generally well reproduced in the DMx7 simulation, albeit with considerable scatter. The same cannot be said for the HR simulation, where increasing the gas resolution has a pronounced effect on the distribution of stars in galaxies. The key takeaway is that the uncertainties in the modelling of baryonic effects are significantly larger than the variations introduced by altering the ratio of the masses of dark matter and gas particles. \subsection{The simulation} \begin{figure*} \includegraphics[width=\linewidth]{Figures/simpic_zoom.png} \caption{Projected density of matter in a cube of side length 10 Mpc, centred on the most massive halo in the high-resolution region of the simulation. The brightness of each pixel is proportional to the logarithm of the density of matter, and the hue encodes the density of gas.
The orange inset shows a zoom into the largest halo, with a side length of 1 Mpc, and the pink inset shows a zoom into the subhalo with the greatest baryonic mass in the main halo, with a side length of 100 kpc. The main image contains approximately 500 million particles, whilst the image in the pink inset is based on approximately 1.8 million particles.} \label{fig:simpic} \end{figure*} A visualisation of the high-resolution region of the simulation is shown in Fig.~\ref{fig:simpic}. The brightness of each pixel in the image is proportional to the logarithm of the projected density of matter, in a cube of side length 10 Mpc. The projected density of gas in the simulation is encoded in the hue of each pixel. Fig.~\ref{fig:simpic} shows that the main halo in our simulation sits at the centre of three large filaments. The inset panels demonstrate the large dynamic range of the simulation, with the volume of the cube shown in the pink square being a millionth of the volume shown in the main figure. In addition to the excellent resolution of the central halo, our simulation also resolves the internal structure of the filaments of the cosmic web, including strands of filaments that are almost entirely devoid of baryonic matter. The region simulated at high resolution is unusually large for a zoom simulation. The region is approximately spherical, with a radius of around 7 Mpc at redshift $z=0$, approximately 14 times the virial radius of the main halo. For comparison, the high-resolution region extends to 10 times the virial radius in the Hydrangea cluster simulations \citep{2017MNRAS.470.4186B} and to 4--5 times the virial radius (around 1 Mpc in absolute terms) in the \textsc{Auriga} suite of galactic zoom simulations \citep{grand2016}. The largest halo in the high-resolution region (to which we will hereafter refer as the main halo) has a mass of $M_{200}=10^{13.14}$~M$_{\odot}${} and a radius of $r_{200}=506$~kpc at redshift $z=0$.
This halo contains 200 million particles (as identified using the standard friends-of-friends algorithm; \citealt{davis1985}). Running the hydrodynamical version of this simulation required around 1.5 million core-hours on 512 cores. \section{The halo population} \label{section:halos} In this section we examine the field halos in our simulation. In particular, we focus on the halo mass function in the mass range $10^{6.5}$--$10^{10.5}$~M$_{\odot}${}, which is critical for studies of strong gravitational lensing by massive elliptical galaxies designed to test the $\Lambda$CDM model and to distinguish CDM from viable alternatives such as WDM in the form of 7~keV sterile neutrinos. We discuss the effects of baryons on the halo mass function, and compare the measured halo mass function to predictions of the widely used Sheth-Tormen model \citep{2002MNRAS.329...61S}. We also study the relationship between halo properties and their environment, specifically the abundance of halos in different environments and the relationship between halo environment and internal halo structure. \subsection{The mass of a halo} There is no unique way to define the mass of halos in cosmological simulations. A number of definitions are widely used in the analysis of simulations, and here we adopt $M_{200}$ -- the total mass contained inside a sphere within which the mean density of matter is 200 times the critical density of the universe -- as our definition. For each halo, this sphere is centred on the particle in the corresponding friends-of-friends (FOF) group \citep{davis1985} that has the lowest gravitational potential. This means that there is one halo per FOF group. Several previous studies of the halo mass function have used the total mass within each FOF group as the definition of halo mass \citep{jenkins2001, springel2005, hellwing2016} and, when the FOF mass is used, the Sheth-Tormen prediction for the halo mass function agrees well with simulations.
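The spherical-overdensity mass adopted here can be sketched directly from its definition: sort particles by distance from the potential minimum and find the outermost radius at which the mean enclosed density still exceeds $200\rho_\mathrm{crit}$. This is a simplified sketch; real halo finders also handle centring, unbinding, and related subtleties:

```python
import numpy as np

def m200(radii, masses, rho_crit):
    """M200 from particle radii (from the potential minimum) and masses."""
    order = np.argsort(radii)
    r = radii[order]
    m_enc = np.cumsum(masses[order])                 # enclosed mass profile
    rho_mean = m_enc / (4.0 / 3.0 * np.pi * r**3)    # mean enclosed density
    inside = rho_mean >= 200.0 * rho_crit
    if not inside.any():
        return 0.0  # no radius meets the 200*rho_crit threshold
    return m_enc[np.nonzero(inside)[0].max()]
```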
Studies predicting the contribution to strong-lensing perturbations from halos along the line-of-sight have used the Sheth-Tormen mass function \citep{li2017, despali2018}. However, \citet{tinker2008} argue strongly in favour of using a spherical overdensity method for measuring the mass of a halo, as observable properties are more strongly correlated with spherical overdensity masses than FOF masses. We calculated both FOF and $M_{200}$ masses for the halos in the high-resolution region of our simulation and found that $M_{200}$ is typically lower than $M_\mathrm{FOF}$. For halo masses above $10^9$~M$_{\odot}${}, the median ratio is approximately 0.9 \citep{jiang2014}, but at lower masses the discrepancy grows. This implies that the mass function has a slightly shallower slope when considering $M_{200}$ rather than $M_\mathrm{FOF}$, which leads to the Sheth-Tormen mass function overpredicting the number of halos with low $M_{200}$. \iffalse \begin{figure} \includegraphics[width=\linewidth]{Figures/m200_vs_mfof.pdf} \caption{Top panel: the relationship between the FOF mass and the $M_{200}${} mass of halos in two DMO simulations with different dark matter particle masses. Blue lines represent halos taken from the \textsc{Eagle}{} 100 Mpc simulation, whilst orange lines represent halos taken from the high resolution region of the simulation presented earlier in this chapter. Solid lines show the median relation between the two mass measures, and shaded regions indicate 68\% scatter. We only consider halos with a nonzero value of $M_{200}${}. Bottom panel: the fraction of halos which have a reported $M_{200}${} mass of zero, as a function of FOF mass.} \label{fig:m200_vs_mfof} \end{figure} The relationship between $M_{\mathrm{FOF}}${} and $M_{200}${} for halos in two DMO simulations with different particle masses is shown in the upper panel of Fig.~\ref{fig:m200_vs_mfof}.
The simulation with the larger particle mass is the \textsc{Eagle}{} 100~Mpc DMO simulation, and the simulation with the smaller particle mass is the DMO version of our new simulation. The difference between the two mass definitions clearly depends not just on the mass of the halo being resolved, but also on the mass of the simulation particle. In the simulation with the smaller particle mass, the mass of a halo calculated using the FOF algorithm is around 10\% greater than $M_{200}${}. In both simulations considered, a significant fraction of halos identified by the FOF algorithm have $M_{200}${}$=0$. This occurs when there exists no radius inside which the mean enclosed density of a set of FOF particles meets the density threshold required to define the spherical overdensity mass. Such halos are ubiquitous in mass ranges where halos are resolved with fewer than 1000 particles, and can account for almost 30\% of halos in simulations. Furthermore, the frequency of such occurrences is a function of the simulation particle mass. The lower panel of Fig.~\ref{fig:m200_vs_mfof} shows the fraction of halos which our halo-finder reports as having $M_{200}${}$=0$, as a function of the FOF mass. In the simulation with a smaller particle mass, the proportion is greater for both the total number of halos and the proportion of halos at a fixed particle number. For example, the probabilities that a FOF group with 100 particles will be assigned a $M_{200}${} mass of zero are around 10\% and 17\% in simulations with particle masses of $10^7$ and $10^5$~M$_{\odot}${} respectively. The abundance of these seemingly spurious halos is explained by the high resolution of our simulation. These unusual objects are preferentially located around the periphery of high density objects, for example the central $10^{13}$~M$_{\odot}${} object in our simulation. A typical example of a FOF group with $M_{200}${}$=0$ is shown in Fig.~\ref{fig:bad_halo}.
When compared to a halo with a similar number of particles and a nonzero $M_{200}${}, these objects are far more diffuse and aspherical, explaining why the spherical overdensity criterion is not fulfilled. The high density of nearby particles, either unbound or belonging to other spurious FOF groups, suggests that the FOF algorithm is identifying the filamentary structure of the cosmic web, e.g. sheets. The virial ratio of these sets of particles is often very large, indicating that they are not bound, self-gravitating structures. These groups are preferentially located around the outskirts of large, genuine, halos. Furthermore, whereas genuine halos may be linked between snapshots by their common set of particles, these spurious FOF groups do not exist for more than one snapshot at a time. These FOF groups do not correspond to halos in the real Universe, and this further motivates us to use a spherical overdensity criterion when identifying halos in the simulation. \begin{figure*} \includegraphics[width=\linewidth]{Figures/bad_halos.png} \caption{Each row shows three orthogonal projections of a group identified by the FOF algorithm. Each image has a dimension of $100\times100\times10$ kpc. Both groups contain approximately 1000 particles. The top row shows a group where the $M_{200}${} mass and the $M_{\mathrm{FOF}}${} mass are approximately equal, whilst the bottom row shows a group with $M_{200}${}$=0$. Orange points show particles belonging to the target halo, blue points show particles belonging to different FOF groups, and black points show unbound particles. The coordinate system used in each image is centred on the potential minimum of the group.} \label{fig:bad_halo} \end{figure*} \fi \subsection{The effect of baryons on the halo mass function} \label{section:baryon_mf} A significant fraction of the distortions of strong lensing arcs is expected to come from halos along the line-of-sight, as opposed to subhalos around the main lensing galaxy \citep{li2017}.
It is computationally difficult to simulate cosmological volumes on the scale of hundreds of megaparsecs with sufficient resolution to characterise the distribution of the low-mass halos of interest for tests of the CDM model. As such, it is necessary to use an analytic prescription for the abundance of field halos when calculating the expected lensing signal. Both \citet{li2017} and \citet{despali2018} used the analytic Sheth-Tormen mass function to predict the number of halos lying between the source galaxy and the observer (so-called interlopers).\footnote{\citet{despali2018} used updated values for some of the numerical parameters related to the Sheth-Tormen mass function. These updated parameters provide a better match to the mass function in simulations with a \emph{Planck} cosmology \citep{2016MNRAS.456.2486D}.} \begin{figure} \includegraphics[width=\linewidth]{Figures/baryon_hmf.pdf} \caption{Top panel: the differential mass function of field halos in the hydrodynamical and DMO versions of our simulation, shown in blue and orange respectively. The mass function is calculated in a sphere of radius 5 Mpc centred on the potential minimum of the most massive halo in the high-resolution region of the simulation. Circles show the measured halo mass function in each mass bin. The error bars show the Poisson error. Solid lines show power-law fits to the halo mass function. Points shown with empty circles were not used when calculating the power-law fit. Bottom panel: the ratio of the calculated halo mass function to the analytic Sheth-Tormen mass function.} \label{fig:baryon_hmf} \end{figure} Our simulation contains a large enough field volume to allow us to study the abundance of the low-mass halos important for lensing. Fig.~\ref{fig:baryon_hmf} shows the measured halo mass function in both the hydrodynamical and DMO versions of our simulation at redshift $z=0$.
We find that the mass functions in both versions of the simulation are well fit by a power law of the form \begin{equation} \frac{\mathrm{d}n}{\mathrm{d}\log_{10}(M/M_\odot)} = b \, (M/M_\odot)^{-a} \;, \end{equation} in the range $3\times 10^6$--$3\times 10^{11}$~M$_{\odot}${}. The best-fit parameters are listed in Table~\ref{tab:mf_fits}. We find no significant difference between the slopes of the halo mass functions in the hydrodynamical and DMO versions of our simulation. Across all halo masses considered, the amplitude of the DMO mass function is greater than the amplitude of the hydrodynamical mass function by around 25\%. Given that the mass function is a power law with a slope of approximately $-1$, this difference is equivalent to all halos in the DMO simulation having their mass reduced by approximately 25\%, consistent with the reduction in halo mass (at low halo masses) going from DMO to \textsc{Eagle}{} shown in \citet{2015MNRAS.451.1247S}.\footnote{Different hydrodynamical simulations disagree slightly on the effects of baryons on the halo mass function. For example, GIMIC was similar to \textsc{Eagle}{}, with a roughly 30\% reduction in the mass of $\sim 10^{10} \, M_\odot$ haloes \citep{sawala2013}, while \citet{2018MNRAS.481.1950L} showed that the IllustrisTNG simulations show only a 20\% reduction in mass for similar-mass haloes.} The reduction in halo mass is caused by two processes operating at early times. Firstly, after the primordial gas is reionized, photo-heating evaporates gas from low-mass halos or prevents it from cooling into them. Secondly, in halos where gas does cool and form stars, supernovae expel the remaining gas \citep[][and references therein]{benson2002,benitez-llambay2020}. Of course, these processes are not modelled in DMO simulations and DMO halos become around 15\% more massive (the value of $\Omega_\mathrm{b}/\Omega_\mathrm{m}$) than an otherwise equivalent halo in a hydrodynamical simulation.
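Fitting the power law above reduces to linear least squares in $\log$--$\log$ space; a sketch on synthetic data (the values in Table~\ref{tab:mf_fits} come from the simulation itself, not from this toy):

```python
import numpy as np

def fit_power_law(m, dn_dlogm):
    """Return (a, b) for dn/dlog10(M) = b * M**(-a), fit in log-log space."""
    slope, intercept = np.polyfit(np.log10(m), np.log10(dn_dlogm), 1)
    return -slope, 10.0**intercept

# Synthetic mass function with a = 0.9, b = 2.2e8 (recovered exactly).
m = np.logspace(6.5, 11.5, 20)
a, b = fit_power_law(m, 2.2e8 * m**-0.9)
```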
The loss of mass from these processes reduces the rate at which halos grow in the hydrodynamical simulation and the 15\% difference at the redshift of reionisation increases to the 25\% difference in halo mass at the present day \citep{sawala2016}. The measured slope of the halo mass function is shallower than the slope of the Sheth-Tormen mass function --- 0.90 in the simulation and 0.92 in the Sheth-Tormen model. We can see in the lower panel of Fig.~\ref{fig:baryon_hmf} that the Sheth-Tormen model overpredicts the abundance of halos less massive than $10^{10}$~M$_{\odot}${} in our high-resolution volume. While the difference in abundance between the Sheth-Tormen prediction and the DMO simulation could be affected by the special nature of the volume we have simulated, the difference in slope seems to be robust, as is the difference between the DMO and hydrodynamical simulation. We therefore conclude that previous studies which used the Sheth-Tormen model, e.g. \citet{li2017}, may have overpredicted the expected lensing signal originating from halos in the $10^7$--$10^8$~M$_{\odot}${} range by around 20--30\%. Whilst we are unable to check whether the same overprediction applies to the calculation of the lensing signal in a WDM cosmology, this difference in the expected abundance of halos in a CDM universe is important from an observational standpoint. \begin{table} \centering \begin{tabular}{@{}lcc@{}} \toprule & $a$ & $b$ [Mpc$^{-3}$] \\ \midrule Hydro & $0.897\pm0.005$ & $(2.2\pm0.2)\times10^8$ \\ DMO & $0.898\pm0.009$ & $(2.8\pm0.5)\times10^8$ \\ \bottomrule \end{tabular}% \caption{Slope and amplitude of power-law fits to the halo mass function in the high-resolution region of our simulation at redshift $z=0$.} \label{tab:mf_fits} \end{table} \subsection{The effect of environment on the halo mass function} We also study the effect of environment on the abundance and properties of field halos.
We use the \textsc{Nexus}{} code \citep{cautun2013} to classify halo environments. \textsc{Nexus}{} divides space into a cubic grid, and classifies each cell as belonging to either a void, a sheet, a filament, or a node. The method is scale-free, analysing the density field smoothed on a number of different scales in order to detect structure of all sizes. \iffalse The algorithm for classifying environments is as follows: \begin{itemize} \item Construct a 3D density field, $f$, from a simulation snapshot. \item Apply a Gaussian filter of RMS width $R_n$ to $f$. \item Compute the eigenvalues of the Hessian matrix of the smoothed field. \item Use the eigenvalues to assign each point a void/filament/sheet/node signature. \item Repeat the previous steps over a range of smoothing scales ($R_0,R_1$,...) to construct the scale space representation of the field. \item Combine the results from all smoothing scales to obtain scale-free environment signatures. \item The detection threshold for nodes is set by requiring that half of the identified objects have an average density of at least $\Delta=370~\rho_{\mathrm{crit}}$ (effectively measuring whether a cluster is virialised). \item The detection thresholds for filaments and sheets are set by finding the signature value $S$ which maximises the function $\left|\mathrm{d}M^2/\mathrm{d log}S\right|$, where $M$ is the mass in filaments or sheets. \end{itemize} \fi The mass function of halos in voids, sheets and filaments is shown in the left-hand panel of Fig.~\ref{fig:environment_hmf}. The slope of the mass function does not depend strongly on halo environment, but the amplitude of the mass function in different environments is strongly correlated with the average density of those environments; the amplitude of the halo mass function in filaments is an order of magnitude greater than in voids. It is natural to wonder whether the difference in amplitude results solely from the difference in the density of matter in each region. 
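The Hessian-based idea behind this classification can be illustrated with a single-scale sketch in the spirit of T-web classifiers: label each grid cell by the number of collapsing axes, i.e. eigenvalues of the density Hessian below a threshold. The real \textsc{Nexus}{} algorithm is multiscale with calibrated thresholds; this toy version is for illustration only:

```python
import numpy as np

def classify(density, threshold=0.0):
    """Label each grid cell by the number of collapsing axes:
    3 -> node, 2 -> filament, 1 -> sheet, 0 -> void."""
    grads = np.gradient(density)
    hessian = np.empty(density.shape + (3, 3))
    for i in range(3):
        for j, g in enumerate(np.gradient(grads[i])):
            hessian[..., i, j] = g
    eigvals = np.linalg.eigvalsh(hessian)
    return (eigvals < threshold).sum(axis=-1)

# Toy field: an isolated Gaussian overdensity, whose centre has three
# negative Hessian eigenvalues and is therefore labelled a node.
x = np.linspace(-2.0, 2.0, 21)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
labels = classify(np.exp(-(X**2 + Y**2 + Z**2)))
```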
To account for the differing densities in each environment type, we also calculate the halo mass function per unit Lagrangian volume\footnote{The Lagrangian volume represents the comoving volume which would have been occupied by a region at the Big Bang, and can be calculated by dividing the total mass of matter in a region by the mean matter density of the Universe.}. The results are shown in the right-hand panel of Fig.~\ref{fig:environment_hmf}. We see here that relative to the density of matter in each region, halos in the mass range considered here are less abundant in filaments than in voids. The halo masses we consider all lie comfortably below the characteristic clustering mass scale, $M^{\star}(z)$, which at redshift $z=0$ is around $6\times10^{12}$~M$_{\odot}${} \citep{white1993, schneider_m_2012}. The abundance of halos of a fixed mass below $M^{\star}(z)$ eventually decreases in time, as these smaller halos merge and accrete material to become larger halos. The higher density filament regions are effectively in a more advanced state of cosmic evolution relative to the lower density void regions, so the abundance of halos less massive than $M^{\star}(z)$ ends up lower in the filaments. \begin{figure*} \includegraphics[width=\linewidth]{Figures/environment_hmf.pdf} \caption{The differential halo mass function for halos in voids (blue), filaments (orange), sheets (green), and the entire volume (red) in the high-resolution region of the hydrodynamical version of our simulation at redshift $z=0$. The environment of a halo is determined using the \textsc{Nexus}{} algorithm \citep{cautun2013}. Circles show the measured mass function, whilst lines show power-law fits. In the left-hand panel the amplitude of the mass function is normalised to the physical volume of each environment type. Empty circles show points not used when calculating power-law fits. 
In the right-hand panel the amplitude is normalised to the Lagrangian volume of each environment type, i.e.\ the mass contained in each environment type.} \label{fig:environment_hmf} \end{figure*} \subsection{The effect of environment on the internal structure of halos} We also consider the relationship between halo environment and the internal structure of the halo. Specifically, we compare the concentrations of halos in voids and filaments, for halo masses between $10^{7.5}$--$10^{9.5}$~M$_{\odot}${}. If the halo has an NFW density profile \citep{navarro1996,navarro1997}, with scale radius, $r_\mathrm{s}$, the concentration, $c$, is given by $r_{200}/r_\mathrm{s}$. We only consider halos which satisfy the three relaxation criteria of \citet{neto2007}, and where $r_\mathrm{s}$ is greater than the convergence radius of the halo, as defined using the criterion of \citet{power2003}. The distribution of concentrations for halos in the mass range $10^{7.5}$--$10^{9.5}$~M$_{\odot}${} at redshift $z=0$ is shown in Fig.~\ref{fig:conc_dist}. \begin{figure} \includegraphics[width=\linewidth]{Figures/halo_conc_dist.pdf} \caption{The distribution of concentration for halos in filaments and voids in the hydrodynamical version of our simulation at redshift $z=0$. Halos are selected to have masses between $10^{7.5}$--$10^{9.5}$~M$_{\odot}${}. All halos in our sample satisfy the three relaxation criteria of \citet{neto2007}, and the concentrations are calculated by fitting NFW profiles.} \label{fig:conc_dist} \end{figure} Whilst the width and skew of the distribution are similar in both filaments and voids, we see that halos in filaments tend to have slightly higher concentrations, and halos with a concentration greater than 25 reside exclusively in filaments. The concentration of a halo reflects the density of the universe at its formation time \citep{navarro1997}. For a fixed mass, halos tend to form earlier in filaments than in voids \citep{hahn2007}, when the universe was denser. 
This explains the higher average concentration observed for halos in filaments. \section{The subhalo population} \label{section:subhalos} The small dark matter particle mass of our simulation allows us to study the abundance and properties of subhalos as small as $10^7$~M$_{\odot}${}. This is the first time that such small substructures have been studied in a hydrodynamic simulation of a $10^{13}$~M$_{\odot}${} halo. In this section we focus on how the inclusion of baryons in the simulation changes the abundance and properties of this subhalo population. For low-redshift halos of mass ${\sim}10^{13}$~M$_{\odot}${}, a significant fraction of the distortions to strong lensing arcs is due to substructure within the lensing halo. For example, for a typical SLACS lens (at $z=0.2$, with a source at $z=1$), CDM substructure produces around 30\% of the lensing distortions, whilst in WDM the contribution of substructures is comparable to that from field halos along the line-of-sight \citep{li2017,despali2018}. \subsection{The subhalo mass function} \begin{figure*} \includegraphics[width=\linewidth]{Figures/subhalo_mf_hline.pdf} \caption{Large panels: cumulative subhalo mass functions in concentric spherical shells centred on the potential minimum of the central halo. Thin lines show the abundance of subhalos at six individual snapshots, approximately evenly spaced in time between redshift $z=0.5$ and the present day. Thick lines show the abundance of subhaloes averaged over these six snapshots. Small panels: the ratio of the cumulative subhalo mass functions in the hydrodynamical and DMO versions of the simulation at each snapshot (thin black lines). The thick black lines show the average reduction in subhalo abundance as a function of mass over a 5~Gyr period. 
The dashed red lines show the reduction in subhalo abundance when the masses of the objects in the DMO simulation are multiplied by 0.75 to approximate the reduced-growth effect described in \S\ref{section:baryon_mf}.} \label{fig:subhalo_mf} \end{figure*} Fig.~\ref{fig:subhalo_mf} shows the cumulative subhalo mass function in four concentric spherical shells centred on the potential minimum of the halo. We see that the inclusion of baryons in the simulation leads to a reduction in subhalo abundance as a function of subhalo mass. As discussed in \S\ref{section:baryon_mf}, halos in cosmological hydrodynamical simulations are systematically less massive than their DMO counterparts because the loss of baryons at early times reduces their subsequent growth rate. To distinguish this ``reduced-growth'' effect from environmental effects, such as tidal stripping and disruption, we apply a correction to the subhalo abundance in the DMO simulation by reducing the masses by 25\%, which is the typical size of the reduced-growth effect. The corresponding reduction in subhalo abundance is shown by the dashed red line in each panel. In the innermost radial bin, the total number of subhalos in the mass range $(3\times 10^{6}-3\times 10^{7})$~M$_{\odot}${} is reduced by around 50\%, although there is considerable scatter in the different snapshots. Approximately half the measured reduction is due to dynamical processes -- tidal stripping and destruction -- and half to the reduced-growth effect. The average reduction in subhalo abundance in this region is comparable to that in Milky Way-mass halos found in the \textsc{Apostle}{} simulations\footnote{In general, the galaxy mass--halo mass relation peaks at a mass of $10^{12}$~M$_{\odot}${}; however, the galaxies in the \textsc{Apostle}{} simulations are unusually small for their halo size.}, which also used the \textsc{Eagle}{} model \citep{richings2018}. 
This is not surprising as the ratio of galaxy to halo mass is similar in all these simulations. There is a clear radial trend in the reduction of subhalo abundance in the hydrodynamical simulation. The effect of the central galaxy on the subhalo population is negligible at distances greater than 100~kpc (which is also the case in the \textsc{Apostle}{} simulations). Here, the reduction is essentially independent of subhalo mass and is explained entirely by the reduced-growth effect in the hydrodynamical simulation. In the inner shells, where the effect of the central galaxy is important, there seems to be some dependence of the reduction on subhalo mass but the numbers are too small to reach a firm conclusion. \subsection{Subhalo concentrations} Since the sizes of subhalos are not well defined, it is better to characterise their concentrations in terms of their mean overdensity, $\delta_V${}, within the radius, $r_{\mathrm{max}}${}, at which the circular velocity peaks, in units of the critical density, \begin{equation} \delta_V = 2\left(\frac{V_{\mathrm{max}}}{H_0 r_{\mathrm{max}}}\right)^2 \;, \end{equation} where $V_{\mathrm{max}}${} is the maximum circular velocity of the halo\footnote{$V_{\mathrm{max}}=\mathrm{max}\left(\sqrt{\frac{GM(<r)}{r}}\right)$} \citep{springel2008}. For an NFW halo, the concentration, $c$, is related to $\delta_V${} by \begin{equation} \delta_V = 7.213\left(\frac{200}{3}\right)\frac{c^3}{\ln(1+c) - c/(1+c)} \;. \end{equation} Whilst this equation cannot be inverted analytically, we find that an approximate relation that holds well for concentrations between 5 and 50 is \begin{equation} c = 0.3\delta_V^{0.4} \;. \end{equation} \begin{figure} \includegraphics[width=\linewidth]{Figures/sub_conc_dist.pdf} \caption{The distribution of subhalo characteristic overdensity, $\delta_V${} (which we use to characterise subhalo concentration) in the hydrodynamical and DMO versions of our simulation at redshift $z=0$. 
Subhalos are selected to have maximum circular velocities between 3 and 20 km/s. The dotted red line corresponds to the case when the DMO $V_{\mathrm{max}}${} values are multiplied by 0.85 to account for the systematic mass difference between hydrodynamical and DMO halos due to the reduced-growth effect discussed in \S\ref{section:baryon_mf}.} \label{fig:subhalo_conc_dist} \end{figure} The distribution of $\delta_V${} for subhalos with $V_{\mathrm{max}}${} between 3 and 20 km/s lying within 500~kpc of the centre of the main halo at $z=0$ is shown in Fig.~\ref{fig:subhalo_conc_dist}. We only consider well-resolved subhalos by requiring that $r_{\mathrm{max}}${} be greater than the gravitational softening length, 0.5 kpc. Subhalos in the hydrodynamical version of our simulation are systematically less concentrated than subhalos in the DMO version, although the difference is small. The peak of the DMO distribution occurs at a value of $\delta_V${} which is 23\% higher than in the hydrodynamical simulation. The difference in $\delta_V${} is equivalent to a difference of approximately 8\% in concentration for NFW halos. Fig.~\ref{fig:subhalo_conc_dist} also shows the distribution of $\delta_V${} for subhalos in the DMO simulation when the values of $V_{\mathrm{max}}${} are reduced by 15\% to mimic the reduced-growth effect discussed in \S\ref{section:baryon_mf}, as found by \citet{sawala2016} for field halos. This slight shift in $V_{\mathrm{max}}${} largely explains the difference between the hydrodynamical and DMO distributions. We conclude that the inclusion of baryons in the simulations does not have a significant impact on the concentration of subhalos in the mass range considered, beyond a small shift. 
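For reference, the quantities defined above are straightforward to evaluate numerically; since the NFW relation cannot be inverted analytically, a simple bisection recovers $c$ from $\delta_V$. The sketch below is illustrative only (the value of $H_0$ is an assumption, not a number taken from this paper):

```python
import numpy as np

def delta_v(vmax, rmax, h0=67.77):
    """Characteristic overdensity delta_V = 2 (Vmax / (H0 rmax))^2.

    vmax in km/s, rmax in kpc, h0 in km/s/Mpc (assumed value).
    """
    h0_per_kpc = h0 / 1000.0               # convert to km/s per kpc
    return 2.0 * (vmax / (h0_per_kpc * rmax)) ** 2

def delta_v_nfw(c):
    """Exact delta_V for an NFW halo of concentration c."""
    return 7.213 * (200.0 / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))

def concentration(dv, lo=1.0, hi=100.0, tol=1e-8):
    """Invert delta_v_nfw(c) = dv by bisection (delta_V is monotonic in c)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta_v_nfw(mid) < dv:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

A round trip such as `concentration(delta_v_nfw(10.0))` recovers the input concentration to the bisection tolerance.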
\subsection{Projection effects} The projected mass distribution is responsible for gravitational lensing and, since the spatial distribution of mass around a large halo is strongly anisotropic, the observed lensing effect will depend on the direction along which the lens is observed. The central halo in our simulation sits at the intersection of three filaments (see Fig.~\ref{fig:simpic}). The number density of substructures along these filaments is greater than the average around the halo, so a lens observed along a filament will be affected by substructure much more strongly than a lens observed along an average direction. A visual representation of the dependence of the observed abundance of substructure on viewing angle is presented in Fig.~\ref{fig:nsub_proj}. To construct this image we distributed $10^6$ lines-of-sight uniformly on the surface of a sphere\footnote{Technically, an exactly uniform spacing of points on the surface of a sphere is impossible for all but a set of special numbers of points \citep{saff1997}. Here we used the Python package \textsc{Seagen} \citep{kegerreis2019} to distribute points on the surface of a sphere such that the density of points over the sphere is very close to uniform, including at the poles.} centred on the potential minimum of the main halo. Along each line-of-sight, we calculate the number of halos and subhalos with a \textsc{Subfind}{} mass\footnote{That is, the mass found by the \textsc{Subfind}{} algorithm \citep{springel2001} which, for subhalos, corresponds closely to the mass enclosed by the tidal radius \citep{springel2008}} between $10^{6.5}$--$10^{8.5}$~M$_{\odot}${}, in a cylinder of radius 10~kpc and length 10~Mpc centred on the main halo. This includes the subhalos of the main halo, and also other halos and their subhalos which fall along the line-of-sight. 
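The counting procedure just described can be sketched in a few lines. Below, a Fibonacci lattice serves as a stand-in for the Seagen construction of near-uniform directions, together with a simple point-in-cylinder test; all names and the toy geometry are illustrative:

```python
import numpy as np

def fibonacci_sphere(n):
    """n nearly uniform unit vectors on the sphere (Fibonacci lattice;
    a stand-in for the Seagen construction used in the paper)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle longitudes
    z = 1.0 - 2.0 * (i + 0.5) / n                 # uniform in cos(theta)
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def count_in_cylinder(positions, axis, radius=10.0, half_length=5000.0):
    """Count points inside a cylinder through the origin along `axis`.

    positions : (N, 3) object positions relative to the halo centre (kpc);
    radius and half_length in kpc (10 kpc x 10 Mpc, as in the text).
    """
    s = positions @ axis                   # coordinate along the axis
    perp = positions - np.outer(s, axis)   # perpendicular offset
    d = np.linalg.norm(perp, axis=1)
    return int(np.count_nonzero((np.abs(s) <= half_length) & (d <= radius)))
```

Looping `count_in_cylinder` over the directions returned by `fibonacci_sphere` and smoothing the resulting counts on the sphere reproduces the kind of map shown in Fig.~\ref{fig:nsub_proj}.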
The map of the number of objects along each line-of-sight in Fig.~\ref{fig:nsub_proj} is smoothed on a scale of one degree and is for the cluster at redshift $z=0.1$ since this is typical of low-redshift lenses \citep[e.g.][]{bolton2006} and is the value used in the analysis of \citet{li2017}. \begin{figure*} \includegraphics[width=\linewidth]{Figures/proj_skymap.pdf} \caption{The number of halos and subhalos of mass in the range $(10^{6.5}$--$10^{8.5})$~M$_{\odot}${} along lines-of-sight to the main cluster in the hydrodynamical simulation at redshift $z=0.1$. Each line-of-sight is a cylinder of 10~Mpc length and 10~kpc radius. The map is an equal-area Mollweide projection, smoothed on a scale of one degree, made from $10^6$ lines-of-sight spread almost uniformly across the surface of a sphere of radius 5~Mpc centred on the main halo.} \label{fig:nsub_proj} \end{figure*} It is clear that the number of objects varies strongly with viewing angle. Highly populated viewing angles are closely aligned with filaments and often contain 2--3 times as many objects as viewing angles that do not overlap a filament. The dominant contribution to the signal originates from subhalos, not from nearby field halos, although the distinction between halos and subhalos is ambiguous, as the shape of the halo, and thus the number of subhalos along a particular line-of-sight, is strongly correlated with the direction of the filaments. From an observational perspective, the distinction is artificial. \begin{figure} \includegraphics[width=\linewidth]{Figures/proj_counts.pdf} \caption{The distribution of the number of halos and subhalos of mass between $10^{6.5}$--$10^{8.5}$~M$_{\odot}${} along lines-of-sight projected through the centre of the main halo at redshift $z=0.1$. Each projection is of a cylinder of 10~Mpc length and 10~kpc radius. 
The dotted red line shows the distribution in the case where the masses of all objects in the DMO version of our simulation are multiplied by 0.75 to account for the effect discussed in \S\ref{section:baryon_mf}.} \label{fig:nsub_dist} \end{figure} We compare the distribution of the number of objects along different lines-of-sight in the hydrodynamical and DMO versions of our simulation in Fig.~\ref{fig:nsub_dist}. The median number of objects along a line-of-sight, and the interquartile ranges, are listed in Table~\ref{tab:proj_counts}. The number of objects along a given line-of-sight in the hydrodynamical simulation is around 30\% smaller on average. This is a combination of the reduced-growth effect together with the destruction and tidal stripping of subhalos in the hydrodynamical simulation. Comparison of the abundance in the hydrodynamical simulation to that in the DMO simulation with the masses of objects reduced by 25\% (dotted red line) shows that the reduced-growth effect accounts for approximately half of the measured difference between the hydrodynamical and DMO simulations. In the hydrodynamical simulation, the median number of objects along the line of sight is 26, but there are lines-of-sight that intercept more than twice this number. \begin{table} \centering \begin{tabular}{@{}lc@{}} \toprule Simulation & N \\ \midrule Hydro & $26^{+5}_{-4}$ \\ DMO & $38^{+5}_{-5}$ \\ DMO - corrected & $32^{+5}_{-5}$ \\ \bottomrule \end{tabular} \caption{The median number of objects in the mass range $(10^{6.5}$--$10^{8.5})$~M$_{\odot}${} along a 10~Mpc long cylindrical line-of-sight of radius 10~kpc centred on the potential minimum of the main halo. Subscripts and superscripts give the interquartile range. 
The numbers quoted include subhalos of the main halo as well as field halos.} \label{tab:proj_counts} \end{table} \section{Conclusions} \label{section:conclusions} We have developed a new technique to generate initial conditions for cosmological smoothed particle hydrodynamics simulations in which the number of dark matter particles can be much larger than the number of gas particles. Our main motivation is to simulate a massive elliptical galaxy with realistic galaxy formation astrophysics -- which requires good gas resolution -- while, at the same time, resolving the $\sim 10^6$~M$_{\odot}${} halos and subhalos relevant to strong gravitational lensing tests of the identity of the dark matter -- which requires very high dark matter resolution. An added benefit of our new technique is that it avoids the 2-body scattering processes inherent in the traditional cosmological SPH setup in which the dark matter and the gas are followed with the same number of particles which, consequently, have very different masses \citep{ludlow2019}. We have simulated a $10^{13}$~M$_{\odot}${} galaxy cluster and its surrounding large-scale environment, a volume of over 500 Mpc$^3$, using the \textsc{Eagle}{} \textsc{Reference} model of galaxy formation. Our conclusions may be summarized as follows: \noindent $\bullet$ The field halo mass function in the mass range ($5\times 10^6 - 3 \times 10^{11}$)~M$_{\odot}${} closely follows a power law of slope $-0.9$ in both the DMO and hydrodynamic simulations (see Table~\ref{tab:mf_fits}). However, the amplitude of the halo mass function in the hydrodynamics case is about 25\% lower than in the DMO case (Fig.~\ref{fig:baryon_hmf}). The difference originates at early times when halos in the hydrodynamics simulation lose gas, either as a result of reionization or of supernova feedback and, as a result, experience less growth than their DMO counterparts, as first discussed by \citet{sawala2016}. 
\noindent $\bullet$ The halo mass functions are not well described by the commonly used Sheth-Tormen formula, which is based on a fit to DMO simulations and has a steeper slope than we measure. As a result, previous lensing studies using the Sheth-Tormen model have overpredicted the expected lensing signal originating from halos in the ($10^7$--$10^8$)~M$_{\odot}${} range by around 20--30\%. \noindent $\bullet$ The abundance of field halos depends sensitively on environment. In our hydrodynamical simulation we find that the number of halos per unit mass in the range of halo masses considered here is largest in the sheets and voids of the cosmic web, where it exceeds the number per unit mass in filaments by a factor of four to five (although the volume-weighted number is largest in filaments; Fig.~\ref{fig:environment_hmf}). \noindent $\bullet$ The mass function of {\em subhalos} in the cluster also has lower amplitude in the hydrodynamical simulation than in the DMO simulation (Fig.~\ref{fig:subhalo_mf}). In addition to the same reduced growth experienced by field halos, the subhalo abundance is further reduced in the hydrodynamical simulation by the enhanced destruction of subhalos caused by the stronger tidal interactions in the presence of a massive galaxy at the centre of the cluster. The extent of this destruction depends sensitively on radius. For example, within 50~kpc in projection, the number of substructures in the ($10^{6.5}$--$10^{8.5}$)~M$_{\odot}${} mass range in the hydrodynamics simulation is only about half the number in the DMO simulation (with considerable halo-to-halo scatter). Approximately 50\% of this difference is accounted for by the reduced growth effect in the hydrodynamical simulation and the remaining 50\% by tidal disruption. Beyond 100~kpc from the centre, the effect of the central galaxy is small and the reduction is due almost entirely to the reduced-growth effect. 
\noindent $\bullet$ Subhalos in the hydrodynamical simulation are less concentrated than their DMO counterparts but the difference is only about 10\%. It arises from the reduced-growth effect, which effectively shifts the formation time of halos in the hydrodynamical simulation to slightly later times. \noindent $\bullet$ The matter distribution around the cluster is highly anisotropic and, as a result, the projected number of halos and subhalos -- the quantity of interest in strong gravitational lensing studies -- is also highly anisotropic. For example, the projected number of objects in the mass range ($10^{6.5}$--$10^{8.5}$)~M$_{\odot}${} along a cylinder of radius 10~kpc and length 10~Mpc centred on the cluster can be 2--3 times larger if aligned with a filament than if not. The analysis of the perturbations on strong gravitational lenses offers a real prospect of testing the $\Lambda$CDM model in the regime of small-mass halos where it makes robust predictions that distinguish it from viable alternatives such as WDM \citep{li2017}. The prime targets for this kind of lensing study are $10^{13}$~M$_{\odot}${} halos like the one we have simulated here. Understanding the abundance, structure and distribution of subhalos in these halos, and of field halos around them, is an important prerequisite for the successful application of lensing techniques to the problem of the identity of the dark matter. \section*{Acknowledgements} We thank an anonymous referee for a positive and constructive review. We acknowledge support from the European Research Council through ERC Advanced Investigator grant, DMIDAS [GA 786910] to CSF. This work was also supported by STFC Consolidated Grants for Astronomy at Durham ST/P000541/1 and ST/T000244/1. AR is supported by the European Research Council's Horizon2020 project `EWC’ (award AMD-776247-6). 
It used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (\url{www.dirac.ac.uk}). This equipment was funded by BIS National E-infrastructure capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. \section*{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} In the last decade, automated or semi-automated segmentation approaches have been wid
ely developed to identify and parse organs, bones, tumors, and other regions-of-interest (ROI). However, most of the segmentation approaches tend to have difficulty in producing high-quality predictions at the foreground boundary areas, where the appearance contrast of medical images is intrinsically fuzzy; this is usually caused by the scanner settings, respiration, or body motions during the image acquisition procedure. Recently, neural network-based methods have been deployed for segmentation tasks~\cite{cciccek20163d,milletari2016v,liu20183d,myronenko20183d}, and have achieved state-of-the-art performance on various datasets with different image modalities. These model architectures follow a U-shape fashion using convolutional encoders and decoders, which take images as direct input and output segmentation masks. In addition, these models are trained end-to-end using gradient-based optimization, with the objective of minimizing well-established loss functions, such as multi-class weighted cross-entropy and soft Dice loss~\cite{milletari2016v}. Although such loss functions are capable of handling the class-imbalance issues that are often present in medical image segmentation tasks, the boundary issue is not well addressed, because these functions treat all pixels/voxels equally. In order to further improve the segmentation performance, we introduce a new loss function, called boundary enhancement loss, to explicitly focus on the boundary areas during training. Our proposed approach shares a similar motivation with previous work that tries to improve the boundary segmentation of deep neural networks, such as~\cite{chen2016dcan, oda2018besnet, karimi2019reducing, kervadec2018boundary}. 
Unlike the previous work, our approach is lightweight, adding little computational burden, and it does not require any pre- or post-processing such as in \cite{karimi2019reducing, kervadec2018boundary}, or any special network architecture such as in \cite{chen2016dcan, oda2018besnet} in order to compute the loss function. Furthermore, our proposed loss function is very effective for various segmentation applications, and can be easily implemented and plugged into any 3D backbone network. \section{Methodology} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{main_figure/filtering_combined.png} \caption{(a) The green curve indicates a 1D cross-section of a binary mask, and the red dashed curve represents the result after filtering; (b) a 2D cross-section of a 3D binary mask; (c) a 2D cross-section of the 3D output after filtering; (d) visual comparison of spleen segmentation. The green contour is the ground truth label, the blue contour is the result of applying~\cite{myronenko20183d}, and the yellow contour is from the proposed work.} \label{fig:filtering} \end{figure} In order to emphasize the boundary regions, we apply the Laplacian filter $\mathcal{L}\left ( \cdot \right)$, which generates strong responses around the boundary areas and zero response elsewhere, to a 3D binary segmentation mask $S$ in Eq.~\ref{eq:lap}. \begin{equation} \mathcal{L}\left ( x,y,z \right ) = \frac{\partial^2 S}{\partial x^2} + \frac{\partial^2 S}{\partial y^2} + \frac{\partial^2 S}{\partial z^2} \label{eq:lap} \end{equation} The discrete Laplacian filtering can be achieved through standard 3D convolution operations. As a result, we can readily compute the difference between the filtered output of the ground truth labels and the filtered output of the predictions of a deep neural network. Minimizing the difference between the two filtered outputs would implicitly close the gap between ground truth labels and predictions. 
Following the analysis above, the boundary enhancement loss is defined as an $l_2$-norm, shown in Eq.~\ref{eq:be}. Meanwhile, $l_{BE}$ effectively suppresses false positives and remote outliers, which are far away from the boundary regions. \begin{equation} l_{BE}=\left \| \mathcal{L}\left ( \mathcal{F}\left ( X \right ) \right )-\mathcal{L}\left ( Y \right ) \right \|_2=\left \| \frac{\partial^2 \left ( \mathcal{F}\left ( X \right ) - Y\right )}{\partial x^2} + \frac{\partial^2 \left ( \mathcal{F}\left ( X \right ) - Y\right )}{\partial y^2} + \frac{\partial^2 \left ( \mathcal{F}\left ( X \right ) - Y\right )}{\partial z^2} \right \|_2 \label{eq:be} \end{equation} In practice, the boundary enhancement loss is implemented as a series of single-channel $3 \times 3 \times 3$ convolutional operations without bias terms. The kernels of the first three consecutive convolution layers have an identical constant value of $1/27$ for smoothing purposes, and the last convolution kernel has fixed values from a standard 3D discrete Laplacian kernel. All parameters of the convolution kernels in $l_{BE}$ are non-trainable. The entire operation is similar to the Laplacian of Gaussian (LoG) filtering used for edge detection. An example of Laplacian filtering with a ground truth label is shown in Fig.~\ref{fig:filtering}. The overall loss function $l_{overall}$ in our approach is the combination of the soft Dice loss~\cite{milletari2016v} and the boundary enhancement (BE) loss: $l_{overall} = \lambda_1 \cdot l_{dice}+ \lambda_2 \cdot l_{BE}$, where $\lambda_1$ and $\lambda_2$ are positive weights balancing the two losses. The boundary enhancement loss cannot be applied alone without the soft Dice loss, because it cannot differentiate between the interior and exterior. Take the regions where the label values are constant (0 or 1) for example: everywhere except the boundary is zero after filtering, as shown in Fig.~\ref{fig:filtering}. 
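A minimal NumPy sketch of $l_{BE}$ follows, mirroring the fixed-kernel convolutions described above: three passes of the $1/27$ averaging kernel followed by the 7-point discrete Laplacian, with zero padding. This is an illustration of the filtering pipeline, not the authors' implementation (which uses fixed conv3d layers):

```python
import numpy as np

def _shift(a, ax, s):
    """Shift array `a` by `s` along axis `ax`, padding with zeros
    (mimics a convolution with padding=1)."""
    out = np.zeros_like(a)
    src = [slice(None)] * a.ndim
    dst = [slice(None)] * a.ndim
    if s > 0:
        src[ax], dst[ax] = slice(0, -s), slice(s, None)
    else:
        src[ax], dst[ax] = slice(-s, None), slice(0, s)
    out[tuple(dst)] = a[tuple(src)]
    return out

def mean27(a):
    """One pass of the 3x3x3 averaging kernel with constant weight 1/27."""
    acc = np.zeros_like(a, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                b = a.astype(float)
                for ax, s in ((0, dx), (1, dy), (2, dz)):
                    if s:
                        b = _shift(b, ax, s)
                acc += b
    return acc / 27.0

def laplacian3d(a):
    """Standard 7-point discrete Laplacian (center -6, face neighbours +1)."""
    out = -6.0 * a
    for ax in range(3):
        out += _shift(a, ax, 1) + _shift(a, ax, -1)
    return out

def be_loss(pred, target):
    """l_BE: l2 norm of the Laplacian of the smoothed prediction minus
    that of the smoothed label (three smoothing passes, then Laplacian)."""
    def filt(v):
        return laplacian3d(mean27(mean27(mean27(v.astype(float)))))
    return float(np.linalg.norm(filt(pred) - filt(target)))
```

As the text notes, the loss vanishes wherever the fields are constant, so it only compares the boundary neighbourhoods of prediction and label.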
\section{Experiments and Discussion} \textbf{Datasets}~To cover various objects and image modalities, the medical decathlon challenge (MSD)~\cite{msd2018} datasets of task 01 (brain tumor MRI segmentation) and task 09 (spleen CT segmentation) are adopted for experiments, with our own data split for training/validation. For task 01, we use 388 multi-channel MRI volumes for training and 96 for validation; for task 09, 32 CT volumes for training and 9 for validation. Both datasets are re-sampled to an isotropic resolution of 1.0 $mm$. For task 01, the voxel intensities are normalized to a standard normal distribution (zero mean, unit variance). For task 09, the voxel intensities of the images are normalized to the range $\left[0,1\right]$ according to the 5th and 95th percentiles of the overall foreground intensities. \noindent\textbf{Implementation}~Our baseline neural network is from~\cite{myronenko20183d}, which has a convolutional encoder-decoder structure using 3D residual blocks. During training, the inputs to the network are patches of size $224\times 224\times 128$ (task 01) and $96\times 96\times 96$ (task 09), respectively, randomly cropped from the images. $\lambda_1$ and $\lambda_2$ are set to 1 and 1000, respectively, for all experiments. All training jobs use the Adam optimizer. Necessary data augmentation techniques, including random axis flipping and random intensity shift, are used for training. Moreover, the validation follows the scanning-window scheme with small overlaps between neighboring windows. The validation accuracy is measured with the Dice score after scanning-window inference. The final results are shown in Table~\ref{tab:results}. Our experimental results show that our proposed approach works effectively on both structural objects (e.g., organs) and non-structural objects (e.g., tumors). Also, it works well for different modalities of medical images (CT, MRI, etc.). Moreover, our proposed boundary enhancement loss can be easily plugged into any 3D segmentation backbone network. 
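For completeness, the soft Dice term and the weighted combination $l_{overall} = \lambda_1 \cdot l_{dice} + \lambda_2 \cdot l_{BE}$ with the weights used in these experiments ($\lambda_1=1$, $\lambda_2=1000$) can be sketched as follows; the `be_term` argument stands in for a precomputed boundary enhancement loss value:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss (Milletari et al. 2016):
    1 - 2*sum(p*g) / (sum(p^2) + sum(g^2))."""
    p = pred.astype(float).ravel()
    g = target.astype(float).ravel()
    return 1.0 - 2.0 * np.sum(p * g) / (np.sum(p * p) + np.sum(g * g) + eps)

def overall_loss(pred, target, be_term, lam1=1.0, lam2=1000.0):
    """l_overall = lam1 * l_dice + lam2 * l_BE, with the paper's weights
    as defaults. `be_term` is the boundary enhancement loss value."""
    return lam1 * soft_dice_loss(pred, target) + lam2 * be_term
```

A perfect prediction gives a Dice loss near zero, while fully disjoint masks give a loss near one, so the two terms remain on comparable scales once weighted.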
\begin{table} \centering \caption{Validation Dice comparison with baseline approaches and proposed approach. } \begin{tabular}{*3c} \toprule Method & Task01 & Task09\\ \midrule U-Net~\cite{cciccek20163d} & 0.72 & 0.94\\ AH-Net~\cite{liu20183d} & 0.81 & 0.95\\ SegResNet~\cite{myronenko20183d} & 0.83 & 0.95\\ \cite{myronenko20183d}$+$Boundary Loss~\cite{kervadec2018boundary} & \textbf{0.85} & 0.94\\ \cite{myronenko20183d}$+$Focal Loss~\cite{zhu2019anatomynet} & 0.85 & 0.95\\ \midrule \cite{myronenko20183d}$+$Proposed BE Loss & \textbf{0.85} & \textbf{0.96}\\ \bottomrule \end{tabular} \label{tab:results} \end{table}
\section{Introduction} Recently, deployments of \glspl{mav} are gaining increasing attention, especially in the mining industry~\cite{mansouri2020deploying}. The autonomous navigation of the \gls{ma
v}, equipped with a sensor suite, has the ability to collect information, such as images, gas levels and dust levels, monitor personnel, explore unknown and production areas, and minimize service times. At the same time, the deployments of \glspl{mav} increase the production and the overall safety in the mining industry, while reducing the overall operation costs, aligned with the envisioned mine of \gls{zepa}~\cite{NIKOLAKOPOULOS201566}. Furthermore, harsh mining environments are characterized by a lack of illumination, narrow passages, wind gusts, dust and, in general, conditions that directly affect the performance of the platforms and, in the worst case, may cause failures in system components or even result in collisions and crashes. Moreover, the commercially available \glspl{mav} rely on \gls{gps} or visual sensors, or both, for position estimation; however, underground mines are \gls{gps}-denied environments and, due to the lack of natural illumination and prominent visual and geometric features, vision-based positioning methods are not reliable. Additionally, commercially available platforms provide manual or semi-autonomous flights, which require a pilot with a direct line of sight to the platform, a case which cannot be guaranteed in dark tunnels with multiple turns and crosses. The main objective of this article is to propose a low-cost and modular \gls{mav} platform, as depicted in Figure~\ref{fig:quadcopter}, for autonomous navigation in dark tunnels. The platform is equipped with a 2D lidar, a single-beam laser range finder, LED light bars, a PX4 optical flow sensor, a forward-looking camera, a flight controller and an on-board computer, while the software architecture is developed based on the equipped sensor suite to establish fully autonomous navigation. 
The proposed configuration of the \gls{mav} has been specifically designed for direct deployment in underground mines without natural illumination, thereby demonstrating the capability of fully autonomous \gls{mav} navigation in such environments. Finally, this article discusses all the components needed to enable further hardware developments towards autonomous navigation in dark tunnels. Although this work showcases the platform in tunnel navigation, the system can be deployed in similar missions, including subterranean exploration~\cite{rogers2017distributed} or Mars canyon exploration~\cite{matthaei2013swarm}. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{quad_angle_axis_new.png} \caption{The proposed low-cost \gls{mav} platform with attached body-fixed frame $\mathbb{B}$.} \label{fig:quadcopter} \end{figure} The rest of the article is structured as follows. Initially, the state of the art in \gls{mav} development is presented in Section~\ref{sec:relatedworks}, followed by the platform architecture in Section~\ref{sec:platformartichecture}, which discusses the corresponding hardware and software components. In Section~\ref{sec:platformperformance} the performance of the proposed platform is evaluated in an underground mine tunnel, and finally in Section~\ref{Conclusions} a summary of the findings is provided. \section{Related Works} \label{sec:relatedworks} Nowadays, both industry and academia develop \glspl{mav} for different applications. The majority of the platforms developed by commercial companies do not provide access to raw sensor measurements or to the tuning parameters of the controllers. Additionally, these platforms do not allow hardware modifications, and their performance is limited to semi-autonomous navigation, especially in indoor environments.
These factors limit their usage and functionality in challenging environments, such as underground mine tunnels, where the platform should be modified based on the application requirements. As an example, the commercially available quad-copter Parrot Bebop 2~\cite{parrot2016parrot} is equipped with a forward looking camera, an optical flow sensor and a downward facing sonar sensor, weighs $\unit[0.5]{kg}$ and is able to provide $\unit[25]{min}$ of flight time. The platform is \gls{ros} compatible and can be used for research or teaching purposes. However, it does not have an onboard computer and only provides a WiFi link, so all computations must run on a ground station. Moreover, the user cannot modify the system or add extra sensors, and there is no access to the low-level control architecture or to the sensor measurements. Another commercial product is the DJI Matrice 100~\cite{DJI}, a fully customizable and programmable flight platform that can be equipped with the sensors required for autonomous underground navigation; however, it does not allow access to the low-level controllers and the raw sensor data, thus increasing the overall complexity of fusing new sensor measurements. Moreover, the basic price of the platform, without the sensor suite and computing unit, starts from $\unit[3300]{USD}$. Table~\ref{table:industrymav} compares commercial \glspl{mav} with the proposed platform, emphasizing important factors such as cost, \gls{ros} compatibility, sensor measurement accessibility, etc. {\renewcommand{\arraystretch}{1.3} \begin{table}[htbp!]
\centering \caption{The comparison of the existing \glspl{mav} with the proposed platform.} \label{table:industrymav} \resizebox{\linewidth}{!}{ \begin{tabular}{cccccccc} \hline Platform & \rotatebox{90}{\parbox{1.5cm}{Cost}} & \rotatebox{90}{\parbox{2cm}{Sensors for autonomy}} & \rotatebox{90}{\parbox{2cm}{Computer unit}} & \rotatebox{90}{\parbox{2cm}{ROS}} & \rotatebox{90}{\parbox{2cm}{Data accessibility}} & \rotatebox{90}{\parbox{2cm}{Hardware Modification}} & \rotatebox{90}{\parbox{2cm}{Spare part availability}} \\ \hline \textbf{Proposed Platform} & Low & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & High \\ \hline Intel Aero & Low & Moderate & \checkmark & Moderate & \checkmark & \checkmark & Low \\ \hline DJI Matrice 100 & High & \xmark & \xmark & \checkmark & Moderate & \checkmark & High \\ \hline Yuneec H520 & High & Moderate & \xmark & \xmark & \xmark & \xmark & Moderate \\ \hline AscTec Neo & Very high & \xmark & \checkmark & \checkmark & \checkmark & \checkmark & Moderate \\ \hline \end{tabular}} \end{table} } There are multiple open-source platforms developed for teaching purposes, such as the CrazyFlie~\cite{giernacki2017crazyflie}, the Parrot Minidrone~\cite{MDrone} and the PiDrone~\cite{brand2018pidrone}. The CrazyFlie is a small quad-copter which provides $\unit[4]{min}$ of flight time without payload and, due to its size, cannot compensate for wind gusts. The Parrot Minidrone has the same drawbacks as the CrazyFlie, while additionally it is not \gls{ros} compatible, a factor that drastically limits its usage. The PiDrone is a \gls{ros} compatible quad-copter with an onboard Raspberry Pi that runs Python and provides a $\unit[7]{min}$ flight time. Additionally, the PiDrone provides an accessible and inexpensive platform for introducing students to robotics.
The drawbacks of this platform are the lack of a proper sensor suite for dark tunnels and the limited computational power for advanced algorithms and methods, such as \gls{vio}. Furthermore, within the related literature on \glspl{mav} in underground mining operations, few research efforts have been reported that try to address challenging tasks within the mine. In~\cite{schmid2014autonomous} a visual-inertial navigation framework has been proposed, and the system was experimentally evaluated in a real-scale tunnel environment simulating a coal mine, where the illumination challenge was assumed solved. The platform is based on the Ascending Technologies Pelican quad-copter, and the authors performed a low-level adaptation of a commercial platform, which is a complex task. In~\cite{gohl2014towards}, a more realistic approach, compared to~\cite{schmid2014autonomous}, regarding underground localization has been pursued. The FireFly hexacopter from Ascending Technologies, equipped with a \gls{vi} and a Hokuyo URG-04LX 2D laser scanner, was manually guided across a vertical mine shaft to collect data for post-processing. In~\cite{ozaslan2017autonomous}, the authors addressed the problem of estimation, control, navigation and mapping for autonomous inspection of tunnels using a DJI F550 platform equipped with a Velodyne PuckLITE lidar, four Chameleon3 cameras, a PixHawk optical flow sensor and an Intel Core i7 NUC PC. The overall approach was validated through field trials; however, in this case a high-end and expensive sensor suite was utilized while flying. \section{Platform Architecture} \label{sec:platformartichecture} In this article, the proposed quad-copter is designed to be inexpensive, modular and autonomous, while providing access to all the onboard raw sensor measurements. In the sequel, the hardware and software components of the overall architecture are discussed.
\subsection{Hardware Components} The Enzo330 V2 330mm wheelbase frame was selected due to its wide market availability, low cost, durability and customizability. Figure~\ref{fig:frame} presents the corresponding frame structure. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{frame_s.png} \caption{The quad-copter frame.} \label{fig:frame} \end{figure} However, a few modifications were performed to improve the frame functionality and durability, as depicted in Figure~\ref{fig:top_frame}. The top part of the frame has been redesigned to allow the installation of the computing unit, power modules and additional sensors. For durability reasons, the top part has been made out of carbon fiber. Additionally, extra damping for the flight controller was introduced to reduce the significant vibrations generated by the motors. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{dampingWithComponents.png} \caption{Modified top part of the frame with components installed.} \label{fig:top_frame} \end{figure} The landing gear has been designed and 3D printed to provide sufficient space for the sensors located on the bottom, such as the optical flow sensor and the front facing camera. Figure~\ref{fig:LTUplatfrom} depicts the modified frame, equipped with the sensor suite, while highlighting the dimensions of the platform. Moreover, the Multistar Elite 2308-1400 motors, carbon fiber T-Style 8x2.7 propellers and the Turnigy Multistar BLheli 32 ARM 4-in-1 32bit 31A \glspl{esc} have been selected based on the frame dimensions, the estimated weight of the complete platform and the power required by the motors. \begin{figure*}[htbp!]
\centering \includegraphics[width=1\linewidth]{pixy_new.png} \caption{The developed quad-copter equipped with a forward looking camera, LED lights, an optical flow sensor, a 2D lidar and a single beam laser range finder.} \label{fig:LTUplatfrom} \end{figure*} Furthermore, to establish autonomous flight, the proposed quad-copter is equipped with the hardware modules depicted in Figure~\ref{fig:hwmodule}, while Table~\ref{table:parts} provides the cost of each component and the total cost of the platform. In the following, the core hardware components are discussed. \begin{figure}[htbp!] \centering \includegraphics[width=0.9\linewidth]{diagram4_new.png} \caption{The proposed quad-copter hardware components.} \label{fig:hwmodule} \end{figure} \begin{table}[htbp!] \caption{List of the components.} \label{table:parts} \begin{tabular}{ccc} \hline \textbf{Subsystem} & \textbf{Item} & \textbf{Cost} \\ \hline Computation & Aaeon UP-Board & \$190.00 \\ Avionics & NAZE 32 REV 6 FCU & \$35.00 \\ Avionics & 4x Multistar Elite 2308-1400 Motors & \$85.00 \\ Avionics & Turnigy Multistar 4-in-1 ESC & \$45.00 \\ Avionics & 4x 8x2.7 carbon fiber propellers & \$10.00 \\ Avionics & Enzo330 Frame with upgraded top plate & \$30.00 \\ Sensors & RPLidar A2M8 & \$280.00 \\ Sensors & LIDAR-Lite 3 & \$130.00 \\ Sensors & PX4FLOW & \$135.00 \\ Sensors & PS3 Eye Camera & \$30.00 \\ Power & Battery and DC-DC converter & \$50.00 \\ Light & LED bars and current drivers & \$75.00 \\ \hline & \textbf{Total cost:} & \textbf{\$1095.00} \\ \hline \end{tabular} \end{table} \subsubsection{Flight Controller and On-board Computer} ROSFlight~\cite{jackson2016rosflight} is an embedded autopilot system that provides high-rate sensor data to \gls{ros} and a simple \gls{api} for sending commands. Thus, the AfroFlight Naze32 Rev6 has been used as the \gls{fcu} in the proposed platform.
Moreover, the selection of the on-board computer is based on a trade-off between cost, performance and weight. The on-board computer should provide enough computational power to execute autonomous navigation, state estimation and the required vision algorithms. Based on these requirements, the \gls{sbc} Up-Board UP-CHT01, manufactured by Aaeon, was selected. The board is equipped with a Quad Core Intel Atom x5-z8350 processor and 4GB DDR3L-1600 memory, provides six USB 2.0 ports and one USB 3.0 \gls{otg} port, and weighs $\unit[195]{g}$. \subsubsection{Sensor Suite} The main component and cost of the platform is the sensor suite. The 2D lidar scanner, the PX4 optical flow, the PlayStation 3 Eye camera and the single beam laser range finder have been selected to provide the information required by the algorithms to establish autonomous navigation. The RPLidar A2M8 360 Degree Laser Scanner is the sensor that provides distance measurements of the surroundings. It uses a laser with a wavelength from $\unit[775]{nm}$ to $\unit[795]{nm}$ and provides a scan frequency of up to $\unit[15]{Hz}$. The distance range is from $\unit[0.15]{m}$ to $\unit[12]{m}$, but measurements above $\unit[8]{m}$ are not reliable. Thus, measurements above $\unit[7]{m}$ are not considered. In order to provide accurate altitude measurements, the LIDAR-Lite 3 Laser Rangefinder has been used as the main altitude sensor. It provides a range of up to $\unit[40]{m}$ with an accuracy of $\unit[2.5]{cm}$. The sampling rate is set to $\unit[300]{Hz}$. The rangefinder utilizes a laser of $\unit[905]{nm}$ wavelength with a power of $\unit[1.3]{W}$. It should be highlighted that, due to the low temperature of the mine tunnels and the vibrations of the frame, the PX4FLOW sonar sensor measurements are not reliable; thus the single beam range finder is used instead, although it has a higher cost.
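The range-gating policy described above (discarding RPLidar returns beyond $\unit[7]{m}$ and below the sensor minimum) can be sketched as a small preprocessing step. The sketch below is illustrative; the NaN convention for invalid returns is an assumption, not part of the platform's published software.

```python
import math

MAX_RELIABLE_RANGE = 7.0  # metres; RPLidar readings above this are discarded
MIN_RANGE = 0.15          # metres; the sensor's minimum distance

def filter_scan(ranges):
    """Replace out-of-range 2D lidar returns with NaN so that
    downstream algorithms (mapping, collision avoidance) ignore them."""
    filtered = []
    for r in ranges:
        if MIN_RANGE <= r <= MAX_RELIABLE_RANGE:
            filtered.append(r)
        else:
            filtered.append(float('nan'))
    return filtered
```

In practice such a step would run on every incoming scan message before mapping or obstacle-avoidance use the data.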
Autonomous navigation in an unknown environment requires a pose estimate; however, in low-illumination environments the pose estimation may not be reliable. Thus, a velocity estimate is provided by the PX4FLOW optical flow sensor, which includes a synchronized MT9V034 machine vision CMOS sensor with a global shutter, together with an L3GD20 3D gyroscope. The sensor processes images at $\unit[250]{Hz}$ on its on-board 168MHz Cortex M4F micro-controller. Finally, for collecting visual data, the PlayStation 3 Eye camera has been used, which captures a video stream with a resolution of $\unit[640 \times 480]{pixels}$ at $\unit[60]{fps}$ or $\unit[320 \times 240]{pixels}$ at $\unit[120]{fps}$. The camera has a horizontal field of view of 56--75 degrees, and the image stream can be used for \gls{ar} for the human operator or for vision based algorithms. \subsubsection{Additional light sources} Several extra light sources have been installed on the aerial platform to provide illumination for the visual sensors. Two $\unit[10]{W}$ LED bars have been installed on the front part of the quad-copter's arms, with constant current provided by dedicated power LED drivers Recom RCD-24-0.70/W/X3. The drivers allow modifying the constant current, and thus the light luminosity, through \gls{pwm} and analogue input signals. The measured illumination at a $\unit[1]{m}$ distance from the \gls{mav} at maximum power was $\unit[2200]{lux}$. Furthermore, the power LED bars and the drivers are placed under the propellers to utilize the airflow during the flight as forced cooling. Additionally, 4 low power $\unit[10]{mm}$ LEDs have been installed on the bottom side of the \gls{mav} to provide illumination for the optical flow sensor. \subsubsection{Battery} After multiple tests, the optimal battery has been selected based on parameters such as size, weight, voltage and stored energy.
The selected battery, a ZIPPY Compact 3300mAh 14.8V 40C 4S1P with a weight of $\unit[360]{g}$, provides a flight time of $\unit[12]{min}$ in no-wind conditions. \subsection{Software Architecture} \label{ref:softwareartichecture} The general scheme of the proposed software architecture is presented in Figure~\ref{fig:schematic}. The software architecture of the developed platform consists of the navigation, control, state estimation and visual feedback components. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{software_arch_new.png} \caption{The overall proposed software architecture for the navigation in the mine; for simplicity the sub-components are not shown.} \label{fig:schematic} \end{figure} The navigation component, based on the authors' previous works~\cite{mansouri2020deploying, mansouri2018dl_tunnel} and~\cite{raad2018_tunnel}, incorporates data from the on-board front facing camera or the 2D lidar for following the tunnel axis. The output of the navigation component includes the heading rate command to correct the \gls{mav} heading towards the tunnel axis, as well as the velocity and altitude references for the controller components to hover at a fixed altitude and to avoid collisions when the platform flies close to obstacles. Furthermore, a state estimation module provides estimates of the altitude $z$, the velocities along the $x,y,z$ axes and the attitude (roll and pitch) of the platform from the IMU, the optical flow sensor and the downward looking laser range finder measurements. It should be highlighted that an accurate estimate of the heading angle is not possible, as the magnetometer is not reliable in an underground mine and \gls{gnss} is not available in underground areas. Moreover, the navigation commands, as well as the state estimation outcome, are sent to the \gls{nmpc} controller~\cite{small_panoc_2018} component, which generates the control commands (thrust, roll, pitch, yaw-rate) for the flight controller.
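One way a heading-rate command of the kind produced by the navigation component can be derived from 2D lidar ranges is a potential-field-style repulsion away from nearby walls. The sketch below is a hypothetical minimal version; the influence distance, the gain, and the exact formulation used in the cited works are assumptions for illustration.

```python
import math

def heading_correction(ranges, angles, influence=2.0, gain=0.5):
    """Map 2D lidar returns (range in m, bearing in rad, positive = left)
    to a heading-rate command (rad/s): each return closer than `influence`
    contributes a repulsive term that grows as the obstacle gets closer."""
    repulsion = 0.0
    for r, a in zip(ranges, angles):
        if 0.0 < r < influence:
            # sin(a) selects the lateral component of the obstacle bearing;
            # the command steers away from the side where walls are close
            repulsion -= math.sin(a) * (1.0 / r - 1.0 / influence)
    return gain * repulsion
```

With yaw positive counter-clockwise, a close wall on the left yields a negative (rightward) heading-rate command, and distant returns contribute nothing.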
Finally, the visual feedback component consists of the sequential stream of the on-board camera images, which can be used for \gls{ar}. \section{Platform Performance} \label{sec:platformperformance} This section describes the platform performance and the results from each component, while the platform performs autonomous navigation in a real-scale underground mine in Sweden~\cite{mansouri2019autonomous,mansouri2019vision}. A video summary of the obtained results is available at \url{https://youtu.be/dxMUx49a_uo}. The location of the field trials was $\unit[790]{m}$ deep, without any natural illumination sources and with a tunnel width of $\unit[6]{m}$ and a height of $\unit[4]{m}$. The area does not have strong corrupting magnetic fields, which could affect the platform sensors, while Figure~\ref{fig:mineboliden} depicts one part of the visited underground mine. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{tunnel1.png} \caption{Photo of a visited underground mine in Sweden.} \label{fig:mineboliden} \end{figure} During the performed experiment, the desired altitude was set to $\unit[1]{m}$ with a constant velocity of $\unit[0.1]{m/s}$ along the $x$-axis, while the platform was equipped with the PlayStation 3 Eye camera and the LED light bars providing a $\unit[460]{lux}$ illumination at a $\unit[1]{m}$ distance. Moreover, a potential field method based on the 2D lidar was utilized for avoiding collisions with the tunnel walls, while measurements from the 2D lidar or the camera were used to correct the heading of the platform towards the open spaces or the tunnel axis, respectively. The downward looking single beam laser range finder can be directly used for altitude regulation, and Figure~\ref{fig:altitude} depicts the achieved controlled altitude over time for the proposed quad-copter during the field trials in an underground tunnel.
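Since the single beam range finder measures the slant range along the body $z$-axis, using it directly for altitude regulation implicitly assumes near-level flight. A common refinement, sketched here under a flat-floor assumption (this is a generic technique, not a claim about the platform's exact implementation), is to tilt-compensate the measurement with the estimated roll and pitch.

```python
import math

def altitude_from_rangefinder(d, roll, pitch):
    """Tilt-compensate a downward single-beam slant range d (m) using the
    roll/pitch estimate (rad). Valid for a flat floor and moderate angles."""
    return d * math.cos(roll) * math.cos(pitch)
```

At level hover the correction vanishes; at 30 degrees of roll the slant range overestimates the true altitude by about 15 percent, which the cosine factors remove.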
In this experimental case, there were no accurate height references available in the mine to evaluate the range finder measurements. However, from the overall performance of the platform, a constant altitude during the mission was successfully maintained. \begin{figure}[htbp!] \centering \includegraphics[width=1\linewidth]{rangefinder.png} \caption{Altitude measurements from the downward facing range finder over time during a test flight in an underground tunnel.} \label{fig:altitude} \end{figure} The 2D laser scanner was used by the algorithms for obstacle avoidance and heading correction. The range measurements collected during the flight can also be processed to provide a 2D map of the area. The laser scans are processed by a lidar SLAM method~\cite{hess2016real}, available in the ROS framework, to generate a 2D occupancy map. The map characterizes the occupied and free space of the visited underground tunnel and is generated online with a 1Hz update rate and a resolution of $\unit[0.05]{m/pixel}$, while the platform covered an approximate distance of (x,y)=($\unit[70]{m}$,$\unit[85]{m}$), as provided by the 2D lidar processing. Figure~\ref{fig:map} depicts the 2D map of the area. The platform successfully avoided collisions with the tunnel walls and corrected its heading towards the open spaces in multiple field tests in the mine\footnote{\url{https://youtu.be/dxMUx49a_uo}}. \begin{figure}[htbp!] \centering \includegraphics[width=1\linewidth]{lidar_cartographer.png} \caption{Obtained 2D map from laser scans while flying.} \label{fig:map} \end{figure} The velocity estimate for the developed system is provided by the optical flow system; however, due to the lack of features and illumination, it is corrupted by measurement noise, and thus the raw measurements were passed through a low-pass filter.
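A minimal first-order IIR filter of the kind that can produce such filtered velocity traces is sketched below; the smoothing factor is an illustrative assumption, not the value used on the platform.

```python
class LowPass:
    """First-order IIR low-pass filter:
    y[k] = y[k-1] + alpha * (x[k] - y[k-1]),
    with alpha in (0, 1]; smaller alpha gives stronger smoothing."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.y = None  # filter state, initialized on the first sample

    def update(self, x):
        if self.y is None:
            self.y = x
        else:
            self.y += self.alpha * (x - self.y)
        return self.y
```

In use, one filter instance per velocity axis ($v_x$, $v_y$) is updated with each raw optical-flow sample, and the filtered output is fed to the state estimator.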
Figure~\ref{fig:flow} demonstrates the velocity state estimate during the field trials in the underground tunnel. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{rawfilter.png} \caption{Time evolution of the $v_x$ and $v_y$ raw (red) and filtered (blue) velocity measurements from the downward facing optical flow sensor during a test flight in an underground tunnel.} \label{fig:flow} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{fig11.png} \caption{Example images collected from the optical flow sensor.} \label{fig:optimg} \end{figure} The online information from the on-board sensors, such as the current altitude, the \gls{mav} status, the 2D top-down lidar map and the collision prediction, can be overlaid on the image stream and displayed to the operator, as depicted in the attached video. \section{Conclusions\label{Conclusions}} In this work, a low-cost high-performance quad-copter has been proposed. The developed solution is ready to fly in underground tunnels while accomplishing inspection tasks. The operator of the platform has access to all sensor measurements, as well as to the \gls{mav} status information, assisting the algorithm development, the software implementation and the overall operation, both in the preparation of the mission and during the mission. Finally, the developed \gls{mav} has been deployed in an unknown dark underground tunnel and successfully performed fully autonomous navigation.
\section{Introduction and main results} Consider the following semilinear elliptic equation, which has been extensively studied: $$ \left\{ \begin{array}{ll} -\Delta u=\lambda f(u)\ \ \ \
\ \ \ & \mbox{ in } \Omega \, ,\\ u>0 & \mbox{ in } \Omega \, ,\\ u=0 & \mbox{ on } \partial\Omega \, ,\\ \end{array} \right. \eqno{(P_\lambda)} $$ \ \noindent where $\Omega\subset\real^N$ is a smooth bounded domain, $N\geq 1$, $\lambda\geq 0$ is a real parameter and the nonlinearity $f:[0,\infty)\rightarrow \real$ satisfies \begin{equation}\label{convexa} f \mbox{ is } C^1, \mbox{ nondecreasing and convex, }f(0)>0,\mbox{ and }\lim_{t\to +\infty}\frac{f(t)}{t}=+\infty. \end{equation} It is well known that there exists a finite positive extremal parameter $\lambda^\ast$ such that ($P_\lambda$) has a minimal classical solution $u_\lambda\in C^0(\overline{\Omega})\cap C^2(\Omega)$ if $0< \lambda <\lambda^\ast$, while no solution exists, even in the weak sense, for $\lambda>\lambda^\ast$. The set $\{u_\lambda:\, 0< \lambda < \lambda^\ast\}$ forms a branch of classical solutions increasing in $\lambda$. Its increasing pointwise limit $u^\ast(x):=\lim_{\lambda\uparrow\lambda^\ast}u_\lambda(x)$ is a weak solution of ($P_\lambda$) for $\lambda=\lambda^\ast$, which is called the extremal solution of ($P_\lambda$) (see \cite{Bre,BV,Dup}). The regularity and properties of extremal solutions depend strongly on the dimension $N$, the domain $\Omega$ and the nonlinearity $f$. When $f(u)=e^u$, it was proven that $u^\ast\in L^\infty (\Omega)$ if $N<10$ (for every $\Omega$) (see \cite{CrR,MP}), while $u^\ast (x)=-2\log \vert x\vert$ and $\lambda^\ast=2(N-2)$ if $N\geq 10$ and $\Omega=B_1$ (see \cite{JL}). There is an analogous result for $f(u)=(1+u)^p$ with $p>1$ (see \cite{BV}). Brezis and V\'azquez \cite{BV} raised the question of determining the boundedness of $u^\ast$, depending only on the dimension $N$, for general smooth bounded domains $\Omega\subset\real^N$ and nonlinearities $f$ satisfying (\ref{convexa}).
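The explicit example above can be checked directly: for $f(u)=e^u$, the radial function $u^\ast(r)=-2\log r$ with $\lambda^\ast=2(N-2)$ satisfies $-u''-\frac{N-1}{r}\,u'=\lambda^\ast e^{u}$, and $r^2 f'(u^\ast(r))\equiv 1$. A quick finite-difference verification (for illustration only):

```python
import math

def pde_residual(r, N, h=1e-4):
    """Residual of -u'' - (N-1)/r * u' - 2(N-2) e^u at radius r for
    u(r) = -2 log r, using central finite differences; should be ~0."""
    u = lambda s: -2.0 * math.log(s)
    up = (u(r + h) - u(r - h)) / (2.0 * h)             # u'(r)
    upp = (u(r + h) - 2.0 * u(r) + u(r - h)) / (h * h)  # u''(r)
    return -(upp + (N - 1.0) / r * up) - 2.0 * (N - 2.0) * math.exp(u(r))

def r2_fprime(r):
    """r^2 f'(u*(r)) with f'(u) = e^u; identically 1 in this example."""
    return r * r * math.exp(-2.0 * math.log(r))
```

The residual vanishes up to discretization error for every $N\geq 10$, and $r^2 f'(u^\ast(r))$ is the constant $1$, so $\lambda^\ast r^2 f'(u^\ast(r)) = 2(N-2)$, consistent with the $C/r^2$ behavior discussed below.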
This was proven by Nedev \cite{Ne} when $N\leq3$; by Cabr\'e and Capella \cite{cc} when $\Omega=B_1$ and $N\leq 9$; by Cabr\'e \cite{ca4} when $N=4$ and $\Omega$ is convex; by the author \cite{yo4} when $N=4$; by Cabr\'e and Ros-Oton \cite{caro} when $N\leq 7$ and $\Omega$ is a convex domain ``of double revolution''; by Cabr\'e, Sanch\'on, and Spruck \cite{css} when $N=5$ and $\limsup_{t\to\infty}f'(t)/f(t)^{1+\varepsilon}<+\infty$ for every $\varepsilon>0$. Finally, in a recent paper Cabr\'e, Figalli, Ros-Oton and Serra \cite{cfros} completely solved this question by proving that $u^\ast$ is bounded if $N\leq 9$. Another question posed by Brezis and V\'azquez \cite[Open problem 5]{BV} for singular extremal solutions is the following: What is the behavior of $f'(u^\ast)$ near the singularities? Does it look like $C/r^2$? This question is motivated by the fact that in the explicit examples $\Omega=B_1$ and $f(u)=(1+u)^p$, $p>1$ or $f(u)=e^u$, one always has $f'(u^\ast (r))=C/r^2$ for a certain positive constant $C$, when the extremal solution $u^\ast$ is singular. In this paper we give a negative answer to this question by showing that, in the case in which $\Omega=B_1$ and $u^\ast$ is singular, we always have $\limsup_{r\to 0}r^2f'(u^\ast(r))\in (0,+\infty)$. However, it is possible to give examples of $f\in C^\infty ([0,+\infty ))$ satisfying (\ref{convexa}) for which $u^\ast$ is singular and $\liminf_{r\to 0}r^2f'(u^\ast(r))=0$. In fact, we exhibit a large family of functions $f\in C^\infty ([0,+\infty ))$ satisfying (\ref{convexa}) for which $u^\ast$ is singular and $f'(u^\ast)$ has a highly oscillatory behavior. \begin{theorem}\label{limsup} Assume that $\Omega=B_1$, $N\geq 10$, and that $f$ satisfies (\ref{convexa}). Suppose that the extremal solution $u^\ast$ of $(P_\lambda)$ is unbounded. Then $\limsup_{r\to 0} r^2 f'(u^\ast (r))\in (0,\infty)$.
Moreover $$\frac{2(N-2)}{\lambda^\ast}\leq \limsup_{r\to 0} r^2 f'(u^\ast (r))\leq \frac{\lambda_1}{\lambda^\ast} ,$$ \noindent where $\lambda_1$ denotes the first eigenvalue of the linear problem $-\Delta v=\lambda v$ in $B_1\subset {\mathbb R}^N$ with Dirichlet conditions $v=0$ on $\partial B_1$. \end{theorem} \begin{theorem}\label{liminf} Assume that $\Omega=B_1$, $N\geq 10$, and that $\varphi :(0,1)\rightarrow {\mathbb R^+}$ satisfies $\lim_{r\to 0} \varphi (r)=+\infty$. Then there exists $f\in C^\infty([0,+\infty))$ satisfying (\ref{convexa}) such that the extremal solution $u^\ast$ of $(P_\lambda)$ is unbounded and $$\liminf_{r\to 0} \frac{f'(u^\ast (r))}{\varphi (r)}=0.$$ \end{theorem} Note that in the case $\varphi(r)=1/r^2$, we would obtain $\liminf_{r\to 0} r^2 f'(u^\ast (r))=0$. This answers \cite[Open problem 5]{BV} in the negative. In fact $r^2 f'(u^\ast (r))$ can be highly oscillatory, as the next result shows. \begin{theorem}\label{oscillation} Assume that $\Omega=B_1$, $N\geq 10$, and let $0\leq C_1\leq C_2$, where $C_2\in[2(N-2),(N-2)^2/4]$. Then there exists $f\in C^\infty([0,+\infty))$ satisfying (\ref{convexa}) such that the extremal solution $u^\ast$ of $(P_\lambda)$ is unbounded, $\lambda^\ast=1$ and $$\liminf_{r\to 0} r^2 f'(u^\ast (r))=C_1, $$ $$\limsup_{r\to 0} r^2 f'(u^\ast (r))=C_2. $$ \end{theorem} Note that if $C_1=C_2$, then the interval $[2(N-2),(N-2)^2/4]$ is optimal: $C_2\geq 2(N-2)$ by Theorem \ref{limsup}, while $C_1\leq (N-2)^2/4$ by Hardy's inequality.
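The dimension restriction $N\geq 10$ in these statements is consistent with the interval $[2(N-2),(N-2)^2/4]$ being nonempty: $2(N-2)\leq (N-2)^2/4$ holds exactly when $N-2\geq 8$, i.e. $N\geq 10$. A one-line numerical check (for illustration only):

```python
def theorem1_lower(N):
    """Lower bound 2(N-2) for limsup r^2 f'(u*(r)) when lambda* = 1."""
    return 2 * (N - 2)

def hardy_upper(N):
    """Hardy constant (N-2)^2/4, an upper bound for liminf r^2 f'(u*(r))."""
    return (N - 2) ** 2 / 4

# dimensions where the interval [2(N-2), (N-2)^2/4] is nonempty
nonempty_dims = [N for N in range(3, 16) if theorem1_lower(N) <= hardy_upper(N)]
```

For $N=10$ the interval degenerates to the single point $2(N-2)=(N-2)^2/4=16$, and it has positive length precisely for $N\geq 11$, matching the hypothesis of the next theorem.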
\begin{theorem}\label{cualquiera} Assume that $\Omega=B_1$, $N\geq 11$, and that $\Psi\in C(\overline{B_1}\setminus \{ 0\} )$ is a radially symmetric decreasing function satisfying $$\frac{2(N-2)}{r^2}\leq \Psi(r) \leq \frac{(N-2)^2}{4 r^2}, \ \ \mbox{ for every } 0<r\leq 1.$$ Then there exists $f\in C^1([0,+\infty))$ satisfying (\ref{convexa}) such that $\lambda^\ast =1$ and $$f'(u^\ast (x))=\Psi (x), \ \ \mbox{ for every } x\in \overline{B_1}\setminus\{ 0\}.$$ Moreover, this function $f$ is unique up to a multiplicative constant. That is, if $g$ is a function with the above properties, then there exists $\alpha>0$ such that $g=\alpha \, f(\cdot /\alpha)$ (whose extremal solution is $\alpha u^\ast$). \end{theorem} \section{Proof of the main results} First of all, if $\Omega=B_1$ and $f$ satisfies (\ref{convexa}), it is easily seen by the Gidas--Ni--Nirenberg symmetry result that $u_\lambda$, the minimal solution of $(P_\lambda)$, is radially decreasing for $0<\lambda<\lambda^\ast$. Hence, its limit $u^\ast$ is also radially decreasing. In fact $u_r^\ast(r)<0$ for all $r\in (0,1]$, where $u_r$ denotes the radial derivative of a radial function $u$. Moreover, it is immediate that the minimality of $u_\lambda$ implies its stability. Clearly, we can pass to the limit and obtain that $u^\ast$ is also stable, which means \begin{equation}\label{inequa} \int_{B_1} \vert \nabla \xi\vert^2 \, dx\geq\int_{B_1} \lambda^\ast f'(u^\ast)\xi^2 \, dx \end{equation} \noindent for every $\xi\in C^\infty (B_1)$ with compact support in $B_1$. On the other hand, differentiating $-\Delta u^\ast =\lambda^\ast f(u^\ast)$ with respect to $r$, we have \begin{equation}\label{ahiledao} -\Delta u_r^\ast=\left(\lambda^\ast f'(u^\ast) -\frac{N-1}{r^2}\right) u_r^\ast, \ \ \mbox{ for all }r\in (0,1].
\end{equation} \begin{proposition}\label{key} Let $N\geq 3$ and $\Psi:\overline{B_1}\setminus\{ 0\} \rightarrow {\mathbb R}$ be a radially symmetric function satisfying that there exists $C>0$ such that $\vert \Psi (r)\vert /r^2 \leq C$, for every $0<r\leq 1$, and \begin{equation}\label{ineq} \int_{B_1} \vert \nabla \xi\vert^2 \, dx\geq\int_{B_1} \Psi\, \xi^2 \, dx \end{equation} \noindent for every $\xi\in C^\infty (B_1)$ with compact support in $B_1$. Then \begin{enumerate} \item[i)] The problem $$ \left\{ \begin{array}{rll} -\Delta \omega(x)&={\displaystyle \left( \Psi (x)-\frac{N-1}{\vert x\vert^2}\right) \omega (x)} \ \ \ \ \ \ & \mbox{ in } B_1 \, ,\\ \omega (x)&= 1 & \mbox{ on } \partial B_1 \, ,\\ \end{array} \right. \eqno{(P_\Psi)} $$ \noindent has a unique solution $\omega\in W^{1,2}(B_1)$. Moreover $\omega$ is radial and strictly positive in $B_1\setminus \{ 0\}$. \ \item[ii)] If $\Psi_1 \leq \Psi_2$ in $\overline{B_1}\setminus \{ 0\} $ satisfy the above hypotheses and $\omega_i$ $(i=1,2)$ are the solutions of the problems $(P_{\Psi_i})$, then $\omega_1 \leq \omega_2$ in $\overline{B_1}\setminus \{ 0\}$. \end{enumerate} \end{proposition} \begin{proof} i) By Hardy's inequality $$\int_{B_1} \vert \nabla \xi\vert^2 \, dx\geq \frac{(N-2)^2}{4}\int_{B_1}\frac{\xi^2}{\vert x\vert^2} \, dx,$$ \noindent valid for every $\xi\in C^\infty (B_1)$ with compact support in $B_1$, we can define the functional $I:X\rightarrow {\mathbb R}$ by $$I(\omega):=\frac{1}{2}\int_{B_1} \vert \nabla \omega \vert^2 dx-\frac{1}{2}\int_{B_1} \left( \Psi-\frac{N-1}{\vert x\vert^2}\right) \omega^2 dx,$$ \noindent for every $\omega\in X$, where $X=\left\{ \omega:B_1\rightarrow {\mathbb R} \mbox{ such that } \omega-1\in W_0^{1,2}(B_1)\right\} $.
It is immediate that $$I'(\omega)(v)=\int_{B_1}\nabla \omega \nabla v \, dx-\int_{B_1} \left( \Psi-\frac{N-1}{\vert x\vert^2}\right) \omega v\, dx\, ; \ \ \ \omega\in X,v\in W_0^{1,2}(B_1).$$ Therefore, to prove the existence of a solution of (P$_\Psi$) it is sufficient to show that $I$ has a global minimum in $X$. To do this, we first prove that $I$ is bounded from below in $X$. Taking $v=\omega -1$ in (\ref{ineq}) and applying the Cauchy--Schwarz inequality we obtain $$I(\omega)\geq \frac{1}{2}\int_{B_1}\Psi (\omega-1)^2 dx-\frac{1}{2}\int_{B_1} \left( \Psi-\frac{N-1}{\vert x\vert^2}\right) \omega^2 dx=$$ $$=\frac{1}{2}\int_{B_1} \Psi (-2\omega +1) dx+\frac{1}{2}\int_{B_1} \frac{N-1}{\vert x\vert^2} \omega^2 dx$$ $$\geq\frac{1}{2}\int_{B_1} \frac{-C (2\vert\omega\vert+1)+(N-1)\omega^2}{\vert x\vert^2}dx\geq\frac{1}{2}\int_{B_1}\frac{-C-C^2}{\vert x\vert^2}\, dx.$$ Hence $I$ is bounded from below in $X$. Take $\{ \omega_n\}\subset X$ such that $ I(\omega_n)\rightarrow \inf I$. Let us show that $\{ \omega_n\}$ is bounded in $W^{1,2}$. To this end, taking into account the above inequalities and that $-C(2\vert s\vert+1)+(N-1)s^2\geq -C(2\vert s\vert+1)+2s^2\geq s^2-C-C^2$ for every $N\geq 3$ and $s\in {\mathbb R}$, we have $$I(\omega_n)\geq \frac{1}{2}\int_{B_1} \frac{-C(2 \vert\omega_n\vert+1)+(N-1)\omega_n^2}{\vert x\vert^2}dx\geq\frac{1}{2}\int_{B_1}\frac{\omega_n^2-C-C^2}{\vert x\vert^2}\, dx.$$ From this, $\int_{B_1}\omega_n^2/\vert x\vert^2$ is bounded. Therefore $\int_{B_1}\Psi\omega_n^2$ is also bounded. From the definition of $I$ we conclude that $\int_{B_1}\vert \nabla \omega_n\vert^2$ is bounded, which clearly implies that $\{ \omega_n\}$ is bounded in $W^{1,2}$. Since $X$ is a weakly closed subset of $W^{1,2}$, we have that, up to a subsequence, $\omega_n \rightharpoonup \omega_0\in X$.
Taking $\xi=\omega_n-\omega_0$ in (\ref{ineq}) we deduce $I(\omega_n)-I(\omega_0)$ $$=\frac{1}{2}\int_{B_1} \vert \nabla (\omega_n-\omega_0) \vert^2 dx-\frac{1}{2}\int_{B_1} \Psi (\omega_n-\omega_0)^2 dx+\frac{1}{2}\int_{B_1} \frac{(N-1)(\omega_n-\omega_0)^2}{\vert x\vert^2} dx$$ $$+\int_{B_1}\nabla \omega_0\nabla (\omega_n-\omega_0) dx-\int_{B_1}\Psi \omega_0 (\omega_n-\omega_0) dx+\int_{B_1}\frac{(N-1)\omega_0 (\omega_n-\omega_0)}{\vert x\vert^2}dx$$ $$\geq \int_{B_1}\nabla \omega_0\nabla (\omega_n-\omega_0) dx-\int_{B_1}\Psi \omega_0 (\omega_n-\omega_0) dx+\int_{B_1}\frac{(N-1)\omega_0 (\omega_n-\omega_0)}{\vert x\vert^2}dx.$$ Since $\omega_n-\omega_0 \rightharpoonup 0$, taking the limit as $n$ tends to infinity in the above inequality we conclude $$(\inf I)-I(\omega_0)\geq 0,$$ \noindent which implies that $I$ attains its minimum at $\omega_0$. The existence of a solution of (P$_\Psi$) is proven. To show the uniqueness of the solution, suppose that there exist two solutions $\omega_1$ and $\omega_2$ of the same problem (P$_\Psi$). Then $\omega_2-\omega_1\in W_0^{1,2}$. By (\ref{ineq}) we have $$0=I'(\omega_2)(\omega_2-\omega_1)-I'(\omega_1)(\omega_2-\omega_1)$$ $$=\int_{B_1} \vert \nabla (\omega_2-\omega_1) \vert^2 dx-\int_{B_1} \Psi (\omega_2-\omega_1)^2 dx+\int_{B_1} \frac{(N-1)(\omega_2-\omega_1)^2}{\vert x\vert^2} dx$$ $$\geq \int_{B_1} \frac{(N-1)(\omega_2-\omega_1)^2}{\vert x\vert^2} dx,$$ \noindent which implies that $\omega_1=\omega_2$. The uniqueness is proven. The radial symmetry of the solution of (P$_\Psi$) follows easily from the uniqueness of the solution, the radiality of the function $\Psi(x)-(N-1)/\vert x\vert^2$, and the boundary condition of the problem. Finally, to prove that the solution $\omega$ of (P$_\Psi$) is strictly positive in $B_1\setminus \{ 0\}$ suppose, contrary to our claim, that there exists $r_0\in (0,1)$ such that $\omega(r_0)=0$ (with radial notation).
Thus the function $v$ defined by $v=\omega$ in $B_{r_0}$ and $v=0$ in $B_1\setminus \overline{B_{r_0}}$ is in $W_0^{1,2}(B_1)$. By (\ref{ineq}) we have $$0=I'(\omega)(v)=\int_{B_{r_0}}\vert \nabla \omega\vert^2 dx-\int_{B_{r_0}}\Psi \omega^2 dx+\int_{B_{r_0}}\frac{(N-1)\omega^2}{\vert x\vert^2}dx$$ $$\geq \int_{B_{r_0}}\frac{(N-1)\omega^2}{\vert x\vert^2}dx.$$ Therefore $\omega=0$ in $B_{r_0}$. In particular $\omega(r_0)=\omega'(r_0)=0$ (with radial notation), which implies, by the uniqueness of the corresponding Cauchy problem, that $\omega=0$ in $(0,1]$. This contradicts $\omega(1)=1$. \ ii) Consider the function $v=(\omega_1-\omega_2)^+=\max\{0,\omega_1-\omega_2\}\in W_0^{1,2}(B_1)$ in the weak formulation of problem (P$_{\Psi_1}$). We have $$0=\int_{B_1}\left(\nabla \omega_1 \nabla (\omega_1-\omega_2)^+ -\Psi_1 \omega_1 (\omega_1-\omega_2)^+ +\frac{(N-1)\omega_1 (\omega_1-\omega_2)^+ }{\vert x\vert^2}\right) dx.$$ Consider the same function $v=(\omega_1-\omega_2)^+$ in the weak formulation of problem (P$_{\Psi_2}$). Taking into account that $\Psi_1\leq \Psi_2$ and $\omega_2\geq 0$ we obtain $$ 0=\int_{B_1}\left(\nabla \omega_2 \nabla (\omega_1-\omega_2)^+ -\Psi_2 \omega_2 (\omega_1-\omega_2)^+ +\frac{(N-1)\omega_2 (\omega_1-\omega_2)^+ }{\vert x\vert^2}\right) dx$$ $$\leq \int_{B_1}\left(\nabla \omega_2 \nabla (\omega_1-\omega_2)^+ -\Psi_1 \omega_2 (\omega_1-\omega_2)^+ +\frac{(N-1)\omega_2 (\omega_1-\omega_2)^+ }{\vert x\vert^2}\right) dx.$$ Subtracting the above two expressions, it follows that $$0\geq\int_{B_1} \vert \nabla (\omega_1-\omega_2)^+ \vert^2 dx-\int_{B_1} \Psi_1 (\omega_1-\omega_2)^{+\, 2} dx+\int_{B_1} \frac{(N-1)(\omega_1-\omega_2)^{+\, 2}}{\vert x\vert^2} dx$$ $$\geq \int_{B_1} \frac{(N-1)(\omega_1-\omega_2)^{+\, 2}}{\vert x\vert^{2}} dx.$$ This implies $(\omega_1-\omega_2)^+=0$. Hence $\omega_1\leq \omega_2$, which is our claim.
\end{proof} \noindent\textbf{Proof of Theorem \ref{limsup}.} We first prove that $\lambda^\ast f'(u^\ast(r))\leq \lambda_1/r^2$ for every $r\in (0,1]$. To see this, let $\varphi_1>0$ be the first eigenfunction of the linear problem $-\Delta v=\lambda v$ in $B_1\subset {\mathbb R}^N$ with Dirichlet conditions $v=0$ on $\partial B_1$. Then $\int_{B_1} \vert\nabla \varphi_1 \vert^2=\lambda_1 \int_{B_1}\varphi_1^2$. By density, for arbitrary $0<r\leq 1$, we can take in (\ref{inequa}) the radial function $\xi=\varphi_1 (\cdot/r)$ in $B_r$ and $\xi=0$ in $B_1 \setminus \overline{B_r}$. Since $f'$ is nondecreasing and $u^\ast$ is radially decreasing, $f'(u^\ast)$ is radially nonincreasing. An easy computation shows that $$ \int_{B_1} \vert\nabla \xi \vert^2=\int_{B_r} \vert\nabla \xi \vert^2=r^{N-2} \int_{B_1} \vert\nabla \varphi_1 \vert^2=\lambda_1 r^{N-2} \int_{B_1}\varphi_1^2\ ,$$ $$\int_{B_1}\lambda^\ast f'(u^\ast) \xi^2=\int_{B_r}\lambda^\ast f'(u^\ast) \xi^2\geq\lambda^\ast f'(u^\ast(r)) \int_{B_r} \xi^2=\lambda^\ast f'(u^\ast(r)) r^N \int_{B_1}\varphi_1^2\ .$$ Combining this with (\ref{inequa}) we obtain the desired conclusion. Consequently $\limsup_{r\to 0} r^2 f'(u^\ast (r))\leq \lambda_1/\lambda^\ast$. We now prove that $\limsup_{r\to 0} r^2 f'(u^\ast (r))\geq 2(N-2)/\lambda^\ast$. To obtain a contradiction, suppose that there exist $r_0\in (0,1]$ and $\varepsilon>0$ such that \begin{equation}\label{ves} \lambda^\ast f'(u^\ast (r))\leq \frac{2(N-2)-\varepsilon}{r^2}, \end{equation} \noindent for every $r\in (0,r_0]$. Consider now the radial function $\omega (r):=u_r^\ast (r_0\, r)/u_r^\ast (r_0)$, defined in $\overline{B_1}\setminus\{ 0\}$.
Applying (\ref{ahiledao}), an easy computation shows that $\omega(1)=1$ and $$-\Delta \omega (r)=\frac{1}{u_r^\ast (r_0)}r_0^2\left( -\Delta (u_r^\ast (r_0\, r))\right)$$ $$=\frac{1}{u_r^\ast (r_0)}r_0^2\left( \lambda^\ast f'(u^\ast (r_0\, r))-\frac{N-1}{(r_0\, r)^2}\right) u_r^\ast (r_0\, r)=\left( \Psi (r)-\frac{N-1}{r^2}\right)\omega(r),$$ \ \noindent for every $r\in (0,1)$, where $\Psi(r):=r_0^2\lambda^\ast f'(u^\ast (r_0\, r))$. From (\ref{ves}) we obtain $\Psi(r)\leq \Psi_2(r):=(2(N-2)-\varepsilon)/r^2$ for every $r\in (0,1]$. It is easy to check that the solution $\omega_2$ of the problem $(P_{\Psi_2})$ is given by $\omega_2(r)=r^\alpha$ ($0<r\leq 1$), where $$\alpha=\frac{2-N+\sqrt{(N-4)^2+4\varepsilon}}{2}$$ \noindent is the larger root of the equation $\alpha^2+(N-2)\alpha+N-3-\varepsilon=0$. Therefore, applying Proposition \ref{key}, we can assert that $0<\omega (r)\leq r^\alpha$ for every $r\in (0,1]$. It is clear that $\alpha>-1$. Hence $\omega\in L^1(0,1)$. This gives $u_r^\ast \in L^1(0,r_0)$, which contradicts the unboundedness of $u^\ast$. \qed \begin{lemma}\label{AB} Let $N\geq 10$ and $0<A<B\leq 1$. Define the radial function $\Psi_{A,B}:\overline{B_1}\setminus\{ 0\} \rightarrow {\mathbb R}$ by $$\Psi_{A,B}(r):=\left\{ \begin{array}{ll} 0 & \mbox{ if } 0< r <A \, \\ \\ \displaystyle{\frac{2(N-2)}{r^2}} & \mbox{ if } A\leq r\leq B \, ,\\ \\ 0 & \mbox{ if } B<r\leq 1. \end{array} \right. $$ Let $\omega[A,B]$ be the unique radial solution of $(P_{\Psi_{A,B}})$. Then $$\lim_{s \to 0}\int_0^1\omega[s e^{-1/s^3},s](r)dr=+\infty.$$ \end{lemma} \begin{proof} We first observe that since $N\geq 10$ we have $2(N-2)\leq (N-2)^2/4$. Hence $0\leq \Psi_{A,B}\leq (N-2)^2/(4r^2)$ for every $0<r\leq 1$. Thus, by Hardy's inequality, $\Psi_{A,B}$ satisfies (\ref{ineq}) and we can apply Proposition \ref{key}.
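Let us make explicit the elementary computation behind the observation $2(N-2)\leq (N-2)^2/4$: since $N-2>0$, $$\frac{(N-2)^2}{4}-2(N-2)=\frac{(N-2)(N-10)}{4},$$ \noindent which is nonnegative precisely when $N\geq 10$.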
We check at once that $$\omega[A,B](r)=\left\{ \begin{array}{ll}\frac{N(N-4)B^{N-2}A^{-2}\ r}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})} & \mbox{ if } 0\leq r <A ,\\ \\ \frac{N(N-2)B^{N-2}\ r^{-1}\ -\ 2NA^{N-4}B^{N-2}\ r^{3-N}}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})} & \mbox{ if } A\leq r\leq B , \\ \\ \frac{\left( (N-2)^2B^{N-4}-4A^{N-4}\right) \ r\ +\ 2(N-2)B^N(B^{N-4}-A^{N-4})\ r^{1-N}}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})} & \mbox{ if } B<r\leq 1. \end{array} \right. $$ \ To see that $\omega[A,B]$ is the solution of (P$_{\Psi_{A,B}}$) it suffices to observe that $\omega[A,B]\in C^1(\overline{B_1}\setminus\{ 0\})\cap W^{1,2}(B_1)$ satisfies pointwise (P$_{\Psi_{A,B}}$) if $\vert x\vert \neq A,B$. On the other hand, taking into account that $r^{3-N}\leq A^{4-N}r^{-1}$ if $A\leq r\leq B$, we have that $$\omega[A,B](r)\geq \frac{N(N-2)B^{N-2}\ r^{-1}\ -\ 2NA^{N-4}B^{N-2}A^{4-N}\ r^{-1}}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})}$$ $$\geq \frac{N(N-2)B^{N-2}\ r^{-1}\ -\ 2NA^{N-4}B^{N-2}A^{4-N}\ r^{-1}}{(N-2)^2B^{N-4}+2(N-2)B^N B^{N-4}}$$ $$=\frac{N(N-4)B^2 \ r^{-1}}{(N-2)^2+2(N-2)B^N} \, ,\ \ \mbox{ if } A\leq r\leq B.$$ \ From this and the positivity of $\omega[A,B]$ it follows that $$\int_0^1\omega[A,B](r)\, dr\geq\int_A^B\omega[A,B](r)\, dr\geq \int_A^B \frac{N(N-4)B^2 \ r^{-1}}{(N-2)^2+2(N-2)B^N}dr$$ $$=\frac{N(N-4)B^2 \ \log (B/A)}{(N-2)^2+2(N-2)B^N}.$$ \ Taking in this inequality $A=s e^{-1/s^3}$, $B=s$ (for arbitrary $0<s\leq 1$), it may be concluded that $$\int_0^1\omega[s e^{-1/s^3},s](r)dr\geq\frac{N(N-4)}{s\left( (N-2)^2+2(N-2)s^N\right)}$$ \ \noindent and the lemma follows. \end{proof} \begin{proposition}\label{peasofuncion} Let $N\geq 10$ and $\varphi :(0,1)\rightarrow {\mathbb R}^+$ such that $\lim_{r\to 0} \varphi (r)=+\infty$.
Then there exists an unbounded, radially symmetric, decreasing function $\Psi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ satisfying \begin{enumerate} \item[i)] $\displaystyle{0<\Psi(r)\leq\ \frac{2(N-2)}{r^2}}$ and $\Psi'(r)<0$ for every $0<r\leq 1$. \item[ii)] $\displaystyle{\liminf_{r\to 0} \frac{\Psi(r)}{\varphi (r)}=0}$, $\displaystyle{\limsup_{r\to 0} r^2 \Psi (r)=2(N-2)}$. \item[iii)] $\displaystyle{\int_0^1 \omega(r)dr=+\infty}$, where $\omega$ is the radial solution of (P$_\Psi$). \end{enumerate} \end{proposition} \begin{proof} Without loss of generality we can assume that $\varphi (r)\leq 2(N-2)/r^2$ for $r\in (0,1]$, since otherwise we can replace $\varphi$ with $\overline{\varphi}=\min\left\{ \varphi, 2(N-2)/r^2\right\} $. It is immediate that $\lim_{r\to 0} \varphi (r)=+\infty$ implies $\lim_{r\to 0} \overline{\varphi}(r)=+\infty$ and that $0\leq \liminf_{r\to 0}\Psi(r)/\varphi (r)\leq\liminf_{r\to 0} \Psi(r)/\overline{\varphi } (r)$. We begin by constructing by induction two sequences $\{x_n\}$, $\{y_n\}\subset (0,1]$ in the following way: $x_1=1$ and, knowing the value of $x_n$ $(n\geq 1)$, take $y_n$ and $x_{n+1}$ such that $$x_{n+1}<y_n<x_n e^{-1/x_n^3}<x_n,$$ \ \noindent where $y_n\in (0, x_n e^{-1/x_n^3})$ is chosen such that $$\varphi(y_n)>(n+1)\frac{2(N-2)}{\left(x_n e^{-1/x_n^3}\right)^2},$$ \noindent which is possible since $\lim_{r\to 0} \varphi (r)=+\infty$. The inequality $x_{n+1}<x_n e^{-1/x_n^3}$ for every integer $n\geq 1$ implies that $\{ x_n \}$ is a decreasing sequence tending to zero as $n$ goes to infinity. For this reason, to construct the radial function $\Psi$ in $B_1\setminus \{ 0\}$, it suffices to define $\Psi$ in every interval $[x_{n+1},x_n)=[x_{n+1},y_n)\cup [y_n, x_n e^{-1/x_n^3}]\cup (x_n e^{-1/x_n^3},x_n)$.
First, we define $$\Psi(r):=\frac{2(N-2)}{r^2}, \ \ \ \mbox{ if } \ \ x_n e^{-1/x_n^3}<r<x_n,$$ $$\Psi (y_n):=\frac{\varphi (y_n)}{n+1}.$$ By the definition of $y_n$ we have that $$\Psi (y_n)=\frac{\varphi (y_n)}{n+1}>\frac{2(N-2)}{\left(x_n e^{-1/x_n^3}\right)^2}\ \mbox{ and }\ \Psi (y_n)<\varphi(y_n)\leq\frac{2(N-2)}{y_n^2}.$$ Thus, it is a simple matter to see that it is possible to take a decreasing function $\Psi$ in $(y_n, x_n e^{-1/x_n^3}]$ such that $\Psi(r)<2(N-2)/r^2$ and $\Psi'(r)<0$ for $r\in(y_n, x_n e^{-1/x_n^3}]$ and $\Psi \in C^\infty ([y_n,x_n))$. Finally, we define $\Psi$ similarly in $[x_{n+1},y_n)$. Taking into account that $$\Psi (y_n)<\varphi(y_n)\leq\frac{2(N-2)}{y_n^2}<\frac{2(N-2)}{x_{n+1}^2},$$ \noindent we see at once that it is possible to take a decreasing function $\Psi$ in $[x_{n+1}, y_n)$ such that $$\Psi (x_{n+1})=\frac{2(N-2)}{x_{n+1}^2},$$ $$\partial_r^{(k)} \Psi (x_{n+1})=\partial_r^{(k)} \left(2(N-2)/r^2\right)(x_{n+1}), \ \ \mbox{for every } k\geq 1,$$ $$\Psi(r)<2(N-2)/r^2 \ \mbox{ and } \ \Psi'(r)<0 \ \ \ \mbox{for }r\in(x_{n+1},y_n),$$ $$\Psi \in C^\infty ([x_{n+1},x_n)).$$ Once we have constructed the radial function $\Psi$, it is evident that $\Psi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ is an unbounded radially symmetric decreasing function satisfying i). To prove ii) it is sufficient to observe that the sequences $\{x_n \}$, $\{ y_n \}$ tend to zero and satisfy $x_n^2 \Psi(x_n)=2(N-2)$ and $\Psi (y_n)/\varphi(y_n)=1/(n+1)$ for every integer $n\geq 1$. It remains to prove iii). To this end consider an arbitrary $K>0$. Since $\{x_n \}$ tends to zero, applying Lemma \ref{AB} we can assert that there exists a natural number $m$ such that $$\int_0^1\omega[x_m e^{-1/{x_m}^3},x_m](r)dr\geq K.$$ \ Observe that $\Psi\geq \Psi_{x_m e^{-1/{x_m}^3},x_m}$. By Proposition \ref{key} it follows that $\omega\geq\omega[x_m e^{-1/{x_m}^3},x_m]$.
Thus $$\int_0^1\omega (r) dr\geq \int_0^1\omega[x_m e^{-1/{x_m}^3},x_m](r)dr\geq K.$$ Since $K>0$ is arbitrary we conclude $\int_0^1\omega (r) dr=+\infty$. \end{proof} \ \noindent\textbf{Proof of Theorem \ref{liminf}.} Consider the function $\Psi$ of Proposition \ref{peasofuncion} and let $\omega$ be the radial solution of $(P_\Psi)$. Since $\Psi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ we obtain $\omega\in C^\infty (\overline{B_1}\setminus \{ 0\})\cap W^{1,2}(B_1)$. Define the radial function $u$ by $$u(r):=\int_r^1 \omega (t)dt, \ \ 0<r\leq 1.$$ It is obvious that $u\in C^\infty (\overline{B_1}\setminus \{ 0\})$. Since $u'=-\omega$ (with radial notation), we have $u\in W^{2,2}(B_1)\subset W^{1,2}(B_1)$. Moreover, from $\int_0^1 \omega(r)dr=+\infty$ we see that $u$ is unbounded. On the other hand, since $u'=-\omega<0$ in $(0,1]$ (by Proposition \ref{key}), it follows that $u$ is a decreasing $C^\infty$ diffeomorphism between $(0,1]$ and $[0,+\infty)$. Therefore we can define $f\in C^\infty ([0,+\infty))$ by $$f:=(-\Delta u)\circ u^{-1}.$$ \ We conclude that $u\in W_0^{1,2} (B_1)$ is an unbounded solution of (P$_\lambda$) for $\lambda=1$. Now, substituting $-\omega$ for $u_r$ in (\ref{ahiledao}), it follows that $$-\Delta (-\omega)=\left( f'(u)-\frac{N-1}{r^2}\right) (-\omega) \ \ \mbox{ for } 0<r\leq 1.$$ Hence, since $\omega$ is a solution of (P$_\Psi$) we obtain $f'(u)\omega=\Psi \omega$ in $(0,1]$. From $\omega>0$ in $(0,1]$ we conclude that $$f'(u(x))=\Psi(x)\ \ \ \mbox{ for every } x\in \overline{B_1}\setminus \{ 0\}.$$ We now prove that $f$ satisfies (\ref{convexa}). To do this, we first claim that $\omega'(1)\geq -1$. Since $\Psi\leq 2(N-2)/r^2$, applying Proposition \ref{key} with $\Psi_1=\Psi$ and $\Psi_2=2(N-2)/r^2$, we deduce $\omega_1\leq \omega_2$, where $\omega_1=\omega$ and $\omega_2=r^{-1}$, as is easy to check. Since $\omega_1(1)=\omega_2(1)$ it follows $\omega_1'(1)\geq\omega_2'(1)=-1$, as claimed.
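For the reader's convenience, let us verify that $\omega_2(r)=r^{-1}$ solves $(P_{\Psi_2})$ for $\Psi_2(r)=2(N-2)/r^2$: with radial notation, $$-\omega_2''(r)-\frac{N-1}{r}\,\omega_2'(r)=-\frac{2}{r^3}+\frac{N-1}{r^3}=\frac{N-3}{r^3}=\left( \frac{2(N-2)}{r^2}-\frac{N-1}{r^2}\right) r^{-1},$$ \noindent and $\omega_2(1)=1$.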
Thus $$f(0)=f(u(1))=-\Delta u(1)=-u''(1)-(N-1)u'(1)=\omega'(1)+(N-1)\omega(1)$$ $$\geq (-1)+(N-1)>0.$$ On the other hand, since $f'(u(r))=\Psi(r)>0$ for every $r\in (0,1]$ it follows $f'>0$ in $[0,+\infty)$. Moreover $\lim_{s\to+\infty}f'(s)=\lim_{r\to 0}f'(u(r))=\lim_{r\to 0}\Psi(r)=+\infty$, and the superlinearity of $f$ is proven. Finally, to show the convexity of $f$, it suffices to differentiate the expression $f'(u)=\Psi$ with respect to $r$ (with radial notation), obtaining $u'(r)f''(u(r))=\Psi'(r)$ in $(0,1]$. Since $u'<0$ and $\Psi'<0$ we obtain $f''(u(r))>0$ in $(0,1]$, which gives the convexity of $f$ in $[0,+\infty)$. It remains to show that $u$ is a stable solution of $(P_\lambda)$ for $\lambda=1$. Since $N\geq 10$, we have $2(N-2)\leq (N-2)^2/4$; hence $$f'(u(r))=\Psi(r)\leq \frac{2(N-2)}{r^2}\leq \frac{(N-2)^2}{4r^2}\ \ \mbox{ for every } 0<r\leq 1.$$ Thus, by Hardy's inequality, we conclude that $u$ is a stable solution of $(P_\lambda)$ for $\lambda=1$. On the other hand, in \cite[Th. 3.1]{BV} it is proved that if $f$ satisfies (\ref{convexa}) and $u\in W_0^{1,2}(\Omega)$ is an unbounded stable weak solution of ($P_\lambda$) for some $\lambda>0$, then $u=u^\ast$ and $\lambda=\lambda^\ast$. Therefore we conclude that $\lambda^\ast=1$, $u^\ast=u$ and $$\liminf_{r\to 0}\frac{f'(u^\ast(r))}{\varphi (r)}=\liminf_{r\to 0}\frac{\Psi(r)}{\varphi(r)}=0.$$ \qed \noindent\textbf{Proof of Theorem \ref{oscillation}.} Take $\varphi(r)=1/r^2$, $0<r\leq1$, and consider the function $\Psi$ of Proposition \ref{peasofuncion}. Define $$\Phi(r):=\frac{C_2-C_1}{2(N-2)}\Psi(r)+\frac{C_1}{r^2},$$ \noindent for every $0<r\leq 1$. Then it follows easily that $\Phi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ is an unbounded radially symmetric decreasing function satisfying \begin{enumerate} \item[i)] $\displaystyle{\Psi(r)\leq\Phi(r)\leq \frac{(N-2)^2}{4r^2}}$ and $\Phi'(r)<0$ for every $0<r\leq 1$.
\item[ii)] $\displaystyle{\liminf_{r\to 0} r^2 \Phi(r)=C_1}$, $\displaystyle{\limsup_{r\to 0} r^2 \Phi (r)=C_2}$. \item[iii)] $\displaystyle{\int_0^1 \varpi(r)dr=+\infty}$, where $\varpi$ is the radial solution of (P$_\Phi$). \end{enumerate} Note that iii) follows from Proposition \ref{key}, Proposition \ref{peasofuncion} and the fact that $\varpi\geq\omega$, where $\omega$ is the radial solution of $(P_\Psi)$. The rest of the proof is very similar to that of Theorem \ref{liminf}. Since $\Phi\in C^\infty(\overline{B_1}\setminus \{ 0\})$ we obtain $\varpi\in C^\infty (\overline{B_1}\setminus \{ 0\})\cap W^{1,2}(B_1)$. Define the radial function $u$ by $$u(r):=\int_r^1 \varpi (t)dt, \ \ 0<r\leq 1.$$ Analysis similar to that in the proof of Theorem \ref{liminf} shows that $u\in W^{2,2}(B_1)$ and that $u$ is a decreasing $C^\infty$ diffeomorphism between $(0,1]$ and $[0,+\infty)$. Defining again $f:=(-\Delta u)\circ u^{-1}$, we obtain $f\in C^\infty ([0,+\infty))$. Thus $u\in W_0^{1,2} (B_1)$ is an unbounded solution of $(P_\lambda )$ for $\lambda=1$. It remains to prove that $f$ satisfies (\ref{convexa}). At this point, the only difference with respect to the proof of Theorem \ref{liminf} is that $\Phi(r)\leq\Psi_2(r):=(N-2)^2/(4r^2)$ implies that $\varpi\leq\omega_2$, where $\omega_2(r)=r^{-N/2+\sqrt{N-1}+1}$ is the solution of the problem $(P_{\Psi_2})$ (indeed, for $\omega_2=r^\alpha$ the equation in $(P_{\Psi_2})$ reduces to $\left( \alpha+\frac{N-2}{2}\right)^2=N-1$). Hence $\varpi'(1)\geq\omega_2'(1)=-N/2+\sqrt{N-1}+1$. Therefore $$f(0)=f(u(1))=-\Delta u(1)=-u''(1)-(N-1)u'(1)=\varpi'(1)+(N-1)\varpi(1)$$ $$\geq (-N/2+\sqrt{N-1}+1)+(N-1)>0.$$ The rest of the proof runs as before. \qed \ \noindent\textbf{Proof of Theorem \ref{cualquiera}.} Since $0<\Psi\leq (N-2)^2/(4r^2)$ we have that $\Psi$ satisfies the hypotheses of Proposition \ref{key}. Thus we can consider the solution $\omega$ of the problem $(P_\Psi)$. From $\Psi\in C(\overline{B_1}\setminus \{ 0\})$ it follows that $\omega\in C^2(\overline{B_1}\setminus \{ 0\})\cap W^{1,2} (B_1)$.
On the other hand, since $\Psi(r)\geq \Psi_1(r):=2(N-2)/r^2$ for $0<r\leq 1$, we have that $\omega(r)\geq\omega_1(r):=r^{-1}$ for $0<r\leq 1$, where we have used that $\omega_1$ is the solution of $(P_{\Psi_1})$ and we have applied Proposition \ref{key}. Define the radial function $u$ by $$u(r):=\int_r^1 \omega (t)dt, \ \ 0<r\leq 1.$$ Therefore $u(r)\geq\vert\log r\vert$ for $0<r\leq 1$. In particular, $u$ is unbounded. From what has been proved, it follows that $u\in C^3(\overline{B_1}\setminus \{ 0\})\cap W^{2,2} (B_1)$. Hence (with radial notation) we have that $u$ is a decreasing $C^3$ diffeomorphism between $(0,1]$ and $[0,+\infty)$. Thus we can define $f\in C^1 ([0,+\infty))$ by $$f:=(-\Delta u)\circ u^{-1}.$$ Analysis similar to that in the proof of Theorems \ref{liminf} and \ref{oscillation} shows that $f$ satisfies (\ref{convexa}), $\lambda^\ast=1$ and $u=u^\ast$. Finally, to prove that $f$ is unique up to a multiplicative constant, suppose that $g$ is a function satisfying (\ref{convexa}), $\lambda^\ast=1$ and $g'(v^\ast(x))=\Psi (x)$, for every $x\in \overline{B_1}\setminus \{ 0\}$, where $v^\ast$ is the extremal solution associated to $g$. From (\ref{ahiledao}) we see that $$ -\Delta v_r^\ast=\left(g'(v^\ast) -\frac{N-1}{r^2}\right) v_r^\ast, \ \ \mbox{ for all }r\in (0,1].$$ It follows immediately that $v_r^\ast (r)/v_r^\ast (1)$ is the solution of the problem $(P_\Psi)$. Since this problem has a unique solution we deduce that $v_r^\ast (r)/v_r^\ast (1)=\omega(r)=-u_r^\ast(r)$, for every $r\in (0,1]$. Thus $v_r^\ast =\alpha u_r^\ast$ for some $\alpha >0$, which implies, since $v^\ast(1)=u^\ast(1)=0$, that $v^\ast =\alpha u^\ast$. The proof is completed by showing that $$g(v^\ast (x))=-\Delta v^\ast (x)=\alpha (-\Delta u^\ast (x))=\alpha f(u^\ast(x))=\alpha f(v^\ast (x)/\alpha),$$ \noindent for every $x\in\overline{B_1}\setminus \{ 0\}$ and taking into account that $v^\ast\left(\overline{B_1}\setminus \{ 0\}\right)=[0,+\infty)$. \qed
\section{Introduction} Consider the system constituted by a hollow rigid body $\mathcal B_1$ whose cavity contains a homogeneous rigid ball $\mathcal B_2$. Let the gap between $\mathcal B_1$ and $\mathcal B_2$ be entirely filled by a viscous incompressible fluid $\mathscr{L}$ (simply called {\em liquid}). Let $G$ be the center of mass of the system $\mathscr S_C$ constituted by the outer rigid body $\mathcal{B}_1$ and the liquid. Suppose that $G$ is a fixed point in space and time with respect to an inertial frame of reference $\mathcal I$, and it coincides with the (geometrical) center of the ball $\mathcal B_2$.\footnote{ The geometrical center of the ball is also its center of mass due to the homogeneity and geometrical symmetry of $\mathcal B_2$. } We are interested in the {\em free rotations} of the whole system of {\em rigid bodies with a liquid-filled gap}. This type of motion occurs when no external forces or torques are applied, and the system is constrained to rotate (without friction) around $G$ driven by only its inertia once an initial angular momentum is imparted, see Figure \ref{fig:liquid_gap}. \begin{figure}[h] \centering \psfrag{1}{$\mathcal B_2$} \psfrag{2}{$\mathscr{L}$} \psfrag{3}{$\mathcal B_1$} \psfrag{G}{\footnotesize $G$} \psfrag{o}{$\V \omega_{20}$} \psfrag{a}{$\V \omega_{10}$} \includegraphics[width=.5\textwidth]{liquid_gap} \caption{Initial configuration for a system of rigid bodies with a liquid-filled gap. In this pictorial situation, the motion of the whole system is driven by the initial velocity imparted on the liquid and the initial angular velocities $\V \omega_{10}$ and $\V \omega_{20}$ of $\mathcal B_1$ and $\mathcal B_2$, respectively. }\label{fig:liquid_gap} \end{figure} This type of fluid-solid interaction problem has been widely studied in connection with some geophysical problems related to the motion of the Earth's inner (solid and liquid) core and its influence on the geodynamo (i.e., the mechanism responsible for the generation of Earth's magnetic field and its maintenance against the Ohmic dissipation), see \cite{Ka,Proud,Bu1,Bu4,ScCDV,Va2004}.
From the mathematical point of view, there have been several contributions aimed at proving the existence of solutions to the relevant equations of motion and analyzing their stability properties. In the case where no rigid core is within the liquid-filled cavity, it was conjectured by Zhukovskii (\cite{Zh}) and rigorously proved by the present author and collaborators that the liquid has a {\em stabilizing effect} on the motion of the solid (see \cite{Ma,Ma2,DiGaMaZu,MaPrSi19,MaPrSi}). In fact, there exists a finite interval of time (whose length depends on the liquid viscosity) where the motion of the system has a “chaotic” nature (as shown numerically in \cite{DiGaMaZu} and experimentally in \cite{Ma2}). After this interval of time, the system reaches (at an exponentially fast rate) a more orderly configuration, corresponding to a steady state in which the system moves as a whole rigid body with a constant angular velocity (see \cite{MaPrSi19,MaPrSi} for a rigorous mathematical proof of this phenomenon when the liquid is subject to no-slip and partial slip boundary conditions, respectively). Concerning the motion of solids with fluid-filled gaps, known results mainly focus on the translational and rotational motions of rigid bodies in a liquid occupying a bounded domain with a {\em prescribed motion of the liquid outer boundary}. The works \cite{sather,FuSa,Sau} provide the first results of existence of weak solutions \`a la Leray-Hopf to the Navier-Stokes equations in bounded regions with moving boundaries. For the fluid-solid interaction problems with a finite number of rigid bodies within a liquid, existence of weak solutions {\em up to collisions} is proved in \cite{DeEs,CoMaTu}. The work \cite{GrMa} deals with local strong solutions, whereas \cite{Tuc,gunzburger,Fe2003,Fei2003} provide the first results of existence of global weak solutions for both the incompressible and compressible cases.
We refer also to \cite{Hillairet,Hillairet2009,Hillairet2015,Chemetov2017} where different boundary conditions and regularity of the boundary are considered for the global existence theory, and to \cite{Sueur2015} where uniqueness of Leray-Hopf solutions in the 2D case is proved. In this paper, we show that the problem of free rotations of rigid bodies with a liquid-filled gap admits global weak solutions \`a la Leray-Hopf. In addition, we determine the largest space of initial data for which the equations of motion are well-posed in the setting of maximal $L^p-L^q$ regularity and time-weighted $L^p$ spaces. It is worth emphasizing that for the problem at hand, possible translations of the solids are disregarded. This simplifying assumption has to be contrasted with the existing (cited above) literature in which the motion of the outer solid is instead prescribed. The novelty of the paper lies in considering the full moving boundary problem\footnote{Note that different portions ($\mathcal C$ and $\mathcal S$) of the liquid boundary move with different (unknown) motion. } and proving the existence of weak solutions (Theorem \ref{th:weak}) together with important properties like a Serrin-type result for weak-strong uniqueness (Theorem \ref{th:continuous_dependence}). One of the main objectives of this work is to show that, similarly to the case when no solid is within the liquid-filled cavity, the fluid has a {\em stabilizing effect on the motion of both solids}. In fact, it will be shown that the long-time dynamics\footnote{The long-time behavior of solutions to the governing equations in the Leray-Hopf class for any initial data with finite kinetic energy.} of the whole system is completely characterized by the rest state for the liquid and solid cores relative to the outer solid, and the system moving as a whole rigid body (see \eqref{eq:decay0}). Such a stabilization property is obtained for a large class of fluid-solid configurations.
In particular, no restriction will be imposed on the initial data or on physical properties like the Reynolds number, the mass distribution in the outer solid or the size of the inner core. As an example, take the situation depicted in Figure \ref{fig:liquid_gap} as initial configuration. Note that the initial angular velocities of the solids are in opposite directions (in fact, they could be around any axis). If there were no liquid in the gap between the solids, the motion of the thick crust and inner core would be completely decoupled. When a viscous incompressible fluid fills that gap, the eventual motion of the whole system will be a rigid body motion with crust and inner core at relative rest. From the mathematical point of view, this effect is captured by introducing the new variable $\V \omega$ (see \eqref{eq:new_variables}), the equivalent formulation \eqref{eq:Motion} and by proving the decay \eqref{eq:decay0}. Finally, we prove a local well-posedness result (Theorem \ref{th:strong}) in the functional setting of maximal $L^p-L^q$ regularity in time-weighted $L^p$ spaces. This result is the first of its kind for this class of fluid-solid interactions. Here is the plan of the paper. After presenting the basic notation and recalling a well-known Gr\"onwall-type lemma, we proceed with Section \ref{sec:preliminary_formulation} containing the mathematical formulation of the problem as the coupled system of differential equations \eqref{eq:motion}, given by the Navier-Stokes equations and the balances of angular momentum of $\mathcal B_1$ and $\mathcal B_2$, respectively. In Section \ref{sec:functional}, we introduce our functional setting.
As equations \eqref{eq:motion} involve both differential and integral terms, in Section \ref{sec:equivalent_formulation}, we provide an equivalent formulation of the problem by replacing the (physical) equations of motion with those governing the motion of a rigid body with a cavity completely filled by a viscous incompressible fluid with varying density. In Section \ref{sec:weak}, we prove the existence of weak solutions and related properties. In Section \ref{sec:strong}, we demonstrate the existence of strong solutions in the $L^p-L^q$ setting. The notation used throughout this paper is quite standard. $\ensuremath{\mathbb{N}}$ denotes the set of natural numbers. $\ensuremath{\mathbb{R}}$ indicates the set of real numbers, and $\ensuremath{\mathbb{R}}^n$ the Euclidean $n$-dimensional space equipped with the canonical basis $\{\V e_1, \V e_2,\dots, \V e_n\}$. The components of a vector $\V{\mathsf v}$ with respect to the canonical basis are indicated by $(\mathsf v_1,\mathsf v_2,\dots,\mathsf v_n)$, whereas $|\V{\mathsf v}|$ represents the magnitude of $\V{\mathsf v}$. We will use the Einstein convention for the summation of dummy indexes, and ``$:$'' will denote the tensor contraction. Moreover, $B_R(G)$ denotes the ball in $\ensuremath{\mathbb{R}}^3$ with center at a point $G\in \ensuremath{\mathbb{R}}^3$ and radius $R$. The ball centered at the origin of a coordinate system $\{\V e_1, \V e_2, \V e_3\}$ will be simply denoted by $B_R$. If $A$ is an open set of $\ensuremath{\mathbb{R}}^n$, $s\in \ensuremath{\mathbb{R}}$ and $p\in [1,\infty]$, then $L^p(A)$, $W^{s,p}(A)$, $W^{s,p}_0(A)$ denote the Lebesgue and (generalized) Sobolev spaces, with norms $\norm{\cdot}_{L^p(A)}$ and $\norm{\cdot}_{W^{s,p}(A)}$, respectively\footnote{Unless confusion arises, we shall use the same symbol for spaces of scalar, vector and tensor functions. }.
For a bounded, Lipschitz domain $A$, with outward unit normal $\V n$, we will often use the following well-known Helmholtz-Weyl decomposition (e.g., \cite[Section III.1]{Ga}): \begin{equation}\label{eq:HW} L^q(A) = H_q(A) \oplus G_q(A), \end{equation} where $q\in (1,\infty)$, $H_q(A):=\{\V u\in L^q(A):\; \mathop{\mathrm{div}} \V u=0\text{ in }A\text{, and } \V u\cdot \V n=0 \text{ on }\partial A\}$ ($\mathop{\mathrm{div}} \V u$ and $\V u \cdot \V n$ have to be understood in the sense of distributions), and $G_q(A):=\{w \in L^q(A):\; w=\nabla \pi, \text{ for some }\pi\in W^{1,q}(A)\}$. In the case of $q=2$, we will simply write $H(A)$ and $G(A)$, respectively. If $(X, \norm{\cdot}_X)$ is a Banach space, for an interval $I$ in $\ensuremath{\mathbb{R}}$ and $1\le p<\infty$, $L^p(I;X)$ (resp. $W^{k,p}(I;X)$, $k\in \ensuremath{\mathbb{N}}$) will denote the space of functions $f$ from $I$ to $X$ for which $\left(\int_I \norm{f(t)}^p_X\; \d t\right)^{1/p}<\infty$ (resp. $\sum^k_{\ell=0}\left(\int_I \norm{\partial^\ell_t f (t)}^p_X\; \d t\right)^{1/p}<\infty$). Similarly, $C^k(I;X)$ indicates the space of functions which are $k$-times differentiable with values in $X$, and satisfy $\max_{t\in I}\norm{\partial^\ell_t \cdot}_X < \infty$, for all $\ell = 0,1,...,k$. Finally, $C_w(I;X)$ is the space of functions $f$ from $I$ to $X$ such that the map $t \in I \mapsto \phi(f(t))\in \ensuremath{\mathbb{R}}$ is continuous for all bounded linear functionals $\phi$ defined on $X$. We conclude this section by recalling the following Gr\"onwall-type lemma that will be used in the paper. For its proof, we refer the interested reader to \cite{Ma2}. \begin{lemma}\label{lem:gronwall1} Suppose that a function $y\in L^\infty(0,\infty)$, $y\ge 0$, satisfies the following inequality for a.~a. $s\ge 0$ and all $t\ge s$: \begin{equation*} y(t)\le y(s)-k\int_s^ty(\tau)\,d\tau+\int_s^tF(\tau)\,d\tau\,.
\end{equation*} Here, $k>0$, and $F\in L^q(a,\infty)\cap L^1_{{\rm loc}}(0,\infty)$, for some $a>0$ and $q\in [1,\infty)$, satisfies $F(t)\ge 0$ for a.~a. $t\ge 0$. Then $$ \lim_{t\to\infty}y(t)=0\,. $$ If $F\equiv 0$, then $$ y(t)\le y(s)\,{\rm e}^{-k(t-s)} \,,\ \ \mbox{for all $t\ge s$}\,. $$ \end{lemma} We are now ready to introduce the equations governing the motion of the system of rigid bodies with a liquid-filled gap. \section{A preliminary mathematical formulation of the problem}\label{sec:preliminary_formulation} Consider $\mathcal B_1:=\mathcal V_1\setminus \overline{\mathcal V}$, with $\mathcal V_1$ and $\mathcal V$ bounded domains in $\ensuremath{\mathbb{R}}^3$, $\overline{\mathcal V}\subset \mathcal V_1$, $\overline{B_R(G)}\subset \mathcal V$, and $\mathcal B_2:=B_R(G)$. Let us denote $\mathcal C:=\partial \mathcal V$, $\mathcal S:=\partial \mathcal B_2$~\footnote{$\mathcal S$ is the sphere in $\ensuremath{\mathbb{R}}^3$ centered at $G$ with radius $R$.}, and ${\mathscr{L}}:=\mathcal V\setminus \overline{B_R(G)}$ the volume occupied by the liquid at each time. Throughout the paper, we will assume that ${\mathscr{L}}$ is of class $C^2$. Let $\mathcal{F}\equiv\{G, \V e_1,\V e_2,\V e_3\}$ be the {\em non-inertial} reference frame with origin at $G$, and axes coinciding with the {\em central axes of inertia} of the coupled system $\mathscr S_C$; these axes are directed along the eigenvectors of the inertia tensor $\T I_{C}$ of $\mathscr S_C$ with respect to $G$, and with corresponding (positive and time-independent) eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ (also called {\em central moments of inertia}). Let us denote by $\T I_{\mathcal B}$ the inertia tensor of the rigid body $\mathcal B_1$ with respect to $G$. Since $\mathcal B_2$ is a homogeneous rigid ball with center at $G$, any axis passing through its center is also a central axis of inertia. 
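For the reader's convenience, we recall the elementary computation behind this fact. If $\rho_b$ denotes the (constant) density of $\mathcal B_2$ and $m=\frac 43 \pi R^3 \rho_b$ its mass, then, by symmetry, the inertia tensor of $\mathcal B_2$ with respect to $G$ is diagonal with equal entries
\[
\rho_b\int_{B_R(G)}\left(|\V x|^2-x_i^2\right)\d V
=\frac 23\, \rho_b\int_{B_R(G)}|\V x|^2\; \d V
=\frac 23\, \rho_b\, \frac{4\pi}{5}R^5
=\frac 25\, mR^2,\qquad i=1,2,3.
\]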
Thus, the inertia tensor of $\mathcal B_2$ with respect to $G$ is simply $\lambda (\V e_1\otimes \V e_1+\V e_2\otimes \V e_2+\V e_3\otimes \V e_3)$ with $\lambda=(2/5)\, mR^2$, and $m$ the mass of the rigid ball. With respect to the reference frame $\mathcal F$, all the volumes considered above are time-independent. The following system of differential equations describes the dynamics of the given system in the reference frame $\mathcal F$ \footnote{We refer to \cite{Ma} and \cite{Ma2} for more details about this kind of formulation obtained for similar problems in liquid-solid interactions. }. \begin{equation}\label{eq:motion} \begin{aligned} &\left.\begin{split} &\rho\left(\frac{\partial \V u}{\partial t}+\V v\cdot \nabla \V u+\V \omega_1\times \V u\right) =\mathop{\mathrm{div}} \T T(\V u,p) \\ &\mathop{\mathrm{div}} \V u=0 \end{split}\right\}&&\text{ on }\mathscr{L}\times (0,\infty), \\ &\T I_{\mathcal B}\cdot\dot{\V \omega}_1 +\V \omega_1\times \T I_{\mathcal B}\cdot \V \omega_1 =-\int_{ \mathcal C}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma \qquad &&\text{ in }(0,\infty), \\ &\lambda(\dot{\V \omega}_2 +\V \omega_1\times \V \omega_2) =-\int_{\mathcal S}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma \qquad &&\text{ in }(0,\infty), \\ &\V u=\V \omega_1\times \V x\qquad &&\text{ on }\mathcal C, \\ &\V u=\V \omega_2\times \V x\qquad &&\text{ on }\mathcal S. \end{aligned} \end{equation} Here, $\V u,p, \mu$ and $\rho$ denote the Eulerian absolute velocity and pressure of the liquid, its shear viscosity and (constant) density, respectively. In addition, $\V v$ indicates the Eulerian velocity of the liquid relative to $\mathcal B_1$ \begin{equation}\label{eq:relative_velocity} \V v:=\V u-\V \omega_1\times \V x. \end{equation} We notice that $\mathop{\mathrm{div}} \V v=0$, and that $\V v$ satisfies the following boundary conditions \begin{equation}\label{eq:bc_v} \V v=\V 0\qquad \text{on }\mathcal C,\quad \text{and }\quad \V v\cdot \V n=0\qquad \text{on }\mathcal S. 
\end{equation} Moreover, $\T T(\V u,p)$ denotes the Cauchy stress tensor for a viscous incompressible fluid \begin{equation}\label{eq:Cauchy_stress} \T T(\V u,p):=-p\T 1+2\mu \T D(\V u), \qquad \text{where } \; \T D(\V u):=\frac 12 (\nabla \V u+(\nabla \V u)^T). \end{equation} Finally, $\V \omega_1$ and $\V \omega_2$ are the angular velocities of $\mathcal B_1$ and $\mathcal B_2$, respectively. Equations \eqref{eq:motion}$_{1,2}$ with \eqref{eq:relative_velocity} and \eqref{eq:Cauchy_stress} are the {\em Navier-Stokes equations} in the non-inertial reference frame $\mathcal F$. These equations describe the dynamics of the liquid. Equations \eqref{eq:motion}$_{3,4}$ are the {\em balances of angular momentum} (with respect to $G$) of $\mathcal B_1$ and $\mathcal B_2$, respectively. In particular, the surface integrals in \eqref{eq:motion}$_{3,4}$ represent the total torque exerted by the liquid on the cavity surface $\mathcal C$ and on the sphere $\mathcal S$, respectively. The equations of motion are augmented with the {\em no-slip} boundary conditions \eqref{eq:motion}$_{5,6}$ at $\mathcal C$ and $\mathcal S$, respectively. Equations \eqref{eq:motion} feature a combination of {\em dissipative} and {\em conservative} components. The {\em dissipative} role is played by the liquid variable through equations \eqref{eq:motion}$_{1,2,5,6}$. The {\em conservative} feature, instead, comes from the coupling with the equations \eqref{eq:motion}$_{3,4}$ describing the dynamics of the solids. As a matter of fact, the energy dissipates only in the liquid variable (see equation \eqref{eq:energy} below), and the total angular momentum (with respect to $G$) of the whole system is conserved at all times (see equation \eqref{eq:conservation0} below). These properties are satisfied by ``sufficiently regular'' solutions. 
\begin{lemma}[Energy Balance] \label{lem:energy} Consider $t_0\ge 0$, and assume that the quadruple $(\V u, p, \V \omega_1,\V \omega_2)$ satisfies the following regularity properties for all $T>0$: \begin{equation}\label{eq:regularity}\begin{split} &\V u\in C^0([t_0, t_0+T];W^{1,2}(\mathscr{L})\cap H(\mathscr{L})) \cap L^2(t_0,t_0+T;W^{2,2}(\mathscr{L})), \\ &\quad\frac{\partial \V u}{\partial t}\in L^2(t_0,t_0+T;L^2(\mathscr{L})),\; \quad p\in L^2(t_0,t_0+T;W^{1,2}(\mathscr{L})),\; \\ &\qquad \qquad\qquad\qquad\V \omega_1,\V \omega_2 \in W^{1,\infty}(t_0,t_0+T). \end{split}\end{equation} If $(\V u,p,\V \omega_1,\V \omega_2)$ satisfies \eqref{eq:motion} a.e. in $(t_0,\infty)$, then the following {\em energy balance} holds: \begin{equation}\label{eq:energy} \frac 12 \frac{\d }{\d t}\left[\rho \norm{\V u}_{L^2(\mathscr{L})}^2+\V \omega_1\cdot \T I_{\mathcal B}\cdot \V \omega_1 +\lambda |\V \omega_2|^2\right]+2\mu\norm{\T D(\V u)}^2_{L^2(\mathscr{L})}=0. \end{equation} \end{lemma} \begin{proof} Taking the $L^2$-inner product of \eqref{eq:motion}$_1$ with $\V u$, we find that \[ \frac \rho2 \frac{\d }{\d t}\norm{\V u}_{L^2(\mathscr{L})}^2+\int_{\mathscr{L}}(\V v\cdot \nabla \V u)\cdot \V u\; \d V -\int_{\mathscr{L}}\V u\cdot \mathop{\mathrm{div}}\T T\; \d V=0. \] Since $\mathop{\mathrm{div}} \V v=\mathop{\mathrm{div}} \V u=0$ by \eqref{eq:motion}$_2$, using \eqref{eq:bc_v} and Gauss' Theorem, we can infer the following \[ \int_{\mathscr{L}}(\V v\cdot \nabla \V u)\cdot \V u\; \d V=0. \] By \eqref{eq:motion}$_{5,6}$ and \eqref{eq:Cauchy_stress}, and again by Gauss' Theorem, we get \[ \frac \rho2 \frac{\d }{\d t}\norm{\V u}_{L^2(\mathscr{L})}^2 -\V \omega_1\cdot\int_{\mathcal C}\V x\times \T T\cdot \V n\; \d \sigma -\V \omega_2\cdot\int_{\mathcal S}\V x\times \T T\cdot \V n\; \d \sigma +2\mu\norm{\T D(\V u)}^2_{L^2(\mathscr{L})}=0. 
\] From the latter displayed equation, \eqref{eq:energy} immediately follows by using \eqref{eq:motion}$_{3,4}$ dot-multiplied by $\V \omega_1$ and $\V \omega_2$, respectively. \end{proof} Under the same hypotheses as in the previous lemma, we can show the following. \begin{lemma}[Conservation of total angular momentum]\label{lem:conservation} If the quadruple $(\V u,p,\V \omega_1,\V \omega_2)$ satisfies \eqref{eq:regularity} for some $t_0\ge 0$, and \eqref{eq:motion} a.e. in $(t_0,\infty)$, then \begin{equation}\label{eq:balance_angular_momentum} \V{\dot A}+\V \omega_1\times \V A=\V 0, \end{equation} where \begin{equation}\label{eq:anugular_momentum} \V A:=\rho \int_{\mathscr{L}} \V x\times \V u\; \d V+\T I_{\mathcal B}\cdot \V \omega_1+\lambda \V \omega_2 \end{equation} is the {\em total angular momentum} of the whole system with respect to $G$. In particular, equation \eqref{eq:balance_angular_momentum} implies that \begin{equation}\label{eq:conservation0} |\V A(t)|=|\V A(t_0)|,\qquad \text{for all }t\ge t_0. \end{equation} \end{lemma} \begin{proof} From \eqref{eq:motion}$_{1,3,4}$, we find that \begin{equation}\label{eq:dotA} \begin{split} \V{\dot A}&=\rho \int_{\mathscr{L}} \V x\times \frac{\partial \V u}{\partial t}\; \d V +\T I_{\mathcal B}\cdot \dot{\V \omega}_1+\lambda \dot{\V \omega}_2 \\ &=\int_{\mathscr{L}} \V x\times\left(\mathop{\mathrm{div}} \T T(\V u,p)-\rho\V v\cdot \nabla \V u -\rho\V \omega_1\times \V u\right)\; \d V -\V \omega_1\times \T I_{\mathcal B}\cdot \V \omega_1 \\ &\qquad -\int_{ \mathcal C}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma -\lambda \V \omega_1\times \V \omega_2 -\int_{\mathcal S}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma. \end{split} \end{equation} Since the Cauchy stress tensor is symmetric, by Gauss' Theorem we get that \[ \int_{\mathscr{L}} \V x\times\mathop{\mathrm{div}} \T T(\V u,p)\; \d V -\int_{ \mathcal C}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma -\int_{\mathcal S}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma=\V 0. 
\] Using again Gauss' Theorem together with \eqref{eq:bc_v}, we also find that \[\begin{split} -\rho\int_{\mathscr{L}} \V x\times\left(\V v\cdot \nabla \V u+\V \omega_1\times \V u\right)\d V &=\rho\int_{\mathscr{L}}\left[\V u\times (\V \omega_1\times \V x) + \V x\times\left(\V u\times \V \omega_1\right)\right]\d V \\ &=-\rho\int_{\mathscr{L}}\V \omega_1\times (\V x\times \V u)\d V. \end{split}\] In the last equality, we have used the following property of the cross product in $\ensuremath{\mathbb{R}}^3$: \[ \V a\times (\V b\times \V c)+\V b\times (\V c\times \V a)=-\V c\times (\V a\times \V b),\quad \text{for all } \V a,\V b,\V c\in \ensuremath{\mathbb{R}}^3. \] Therefore, \eqref{eq:dotA} becomes \[ \V{\dot A}=-\V \omega_1\times \left(\rho\int_{\mathscr{L}}\V x\times \V u\;\d V +\T I_{\mathcal B}\cdot \V \omega_1+\lambda \V \omega_2\right)=-\V \omega_1\times \V A. \] This shows \eqref{eq:balance_angular_momentum}, from which \eqref{eq:conservation0} immediately follows by taking the dot-product of \eqref{eq:balance_angular_momentum} with $\V A$. \end{proof} In the next section, we will provide the functional setting in which we will study the existence of solutions to the equations of motion. \section{Functional spaces}\label{sec:functional} Consider the spaces \[\begin{aligned} \mathcal R(\mathcal V)&:=\{\V u\in C^\infty(\mathcal V):\; \V u=\V \omega_u \times \V x\text{ on }\mathcal V, \text{ for some }\V \omega_u \in \ensuremath{\mathbb{R}}^3\}, \\ C^\infty_R(\mathcal V)&:=\left\{\V u\in C^\infty(\mathcal V):\; \V u=\V \omega_u \times \V x\text{ in a neighborhood of }\mathcal B_2, \text{ for some }\V \omega_u \in \ensuremath{\mathbb{R}}^3\right\}. \end{aligned}\] For every $1\le q<\infty$, let us consider the norm \begin{equation}\label{eq:norm_w} \norm{\V u}_q:=\left(\int_{\mathcal V}\tilde \rho|\V u|^q\right)^{1/q} =\left(\rho\norm{\V u}^q_{L^q(\mathscr{L})}+\lambda|\V \omega_u|^q\right)^{1/q},\qquad\text{for all }\V u\in C^\infty_R(\mathcal V). 
\end{equation} In the above equation, \begin{equation}\label{eq:density} \tilde \rho:=\left\{\begin{split} \rho\qquad\quad &\text{on }\mathscr{L} \\ \frac{15\lambda}{8\pi R^5}\qquad &\text{on }\mathcal B_2. \end{split}\right. \end{equation} $L^q_R(\mathcal V)$ indicates the completion of $C^\infty_R(\mathcal V)$ in the norm $\norm{\cdot}_q$. In the particular case $q=2$, $L^2_R(\mathcal V)$ is a Hilbert space endowed with the inner product \begin{equation}\label{eq:inner_product_w} (\V u,\V v):=\int_{\mathcal V}\tilde \rho\V u\cdot \V v=\int_\mathscr{L} \rho\; \V u\cdot \V v+\lambda \V \omega_u\cdot \V \omega_v. \end{equation} One can show that the following characterization holds for every $1\le q<\infty$ (see e.g. \cite[Chapter 1, Section 1]{temam1985}) \[ L^q_R(\mathcal V)=\{\V u\in L^q(\mathcal V):\; \V u=\V \omega_u\times \V x\;\text{ on }\mathcal B_2\;\text{ for some }\V \omega_u\in \ensuremath{\mathbb{R}}^3\}. \] Consider the spaces \[ \mathcal D_R(\mathcal V):= \{\V u\in C^\infty_R(\mathcal V)\cap C^\infty_0(\mathcal V):\; \mathop{\mathrm{div}} \V u=0\; \text{ on }\mathcal V\}, \] and for $T>0$ \begin{multline*} \mathcal D_R(\mathcal V_T):= \{\V u\in C^\infty_0(\mathcal V\times[0,T)):\; \mathop{\mathrm{div}} \V u=0\; \text{ on }\mathcal V\times[0,T), \\ \qquad\qquad\quad\V u=\V \omega_u \times \V x\text{ in a neighborhood of }\mathcal B_2, \text{ for some }\V \omega_u \in C^\infty_0([0,T))\}. \end{multline*} In addition, $\mathcal H_q(\mathcal V)$ denotes the completion of $\mathcal D_R(\mathcal V)$ with respect to the norm $\norm{\cdot }_q$. In a similar fashion to the classical spaces of hydrodynamics (see e.g. \cite[Section III.2]{Ga}), one can show that the space $\mathcal H_q(\mathcal V)$ has the following representation \[\begin{split} \mathcal H_q(\mathcal V)=\{\V u\in L^q_R(\mathcal V):\;& \mathop{\mathrm{div}} \V u=0\;\text{ on }\mathcal V,\; \V u\cdot \V n=0\;\text{ on }\mathcal C\}. 
\end{split}\] Moreover, we can consider the projection operator $\mathcal P_q$ of $L^q_R(\mathcal V)$ onto $\mathcal H_q(\mathcal V)$ (cf. \cite[Remark III.1.1 \& Theorem III.1.2]{Ga}). Let $1<q<\infty$. The space $\mathcal H^1_q(\mathcal V)$ denotes the completion of $\mathcal D_R(\mathcal V)$ with respect to the norm \begin{equation}\label{eq:norm_1q} \norm{\cdot}_{1,q}:=\left(\norm{\cdot}_q^q+2\mu\norm{\T D(\cdot)}^q_{L^q(\mathcal V)}\right)^{1/q}. \end{equation} The right-hand side of the latter displayed equation indeed defines a norm, due to the following {\em Korn inequality} (\cite[Theorem 1]{Geymonat}). \begin{lemma}\label{lem:korn_q} For $1<q<\infty$, the space $U_q:=\{\V u\in L^q(\mathcal V):\; \T D(\V u)\in L^q(\mathcal V)\}$ is equal to $W^{1,q}(\mathcal V)$. Moreover, there exist two constants $0<c_1<c_2$ such that \[ c_1\norm{\V u}_{W^{1,q}(\mathcal V)}\le\left(\norm{\V u}_{L^q(\mathcal V)}^q+\norm{\T D(\V u)}^q_{L^q(\mathcal V)}+\norm{\mathop{\mathrm{div}}(\V u)}^q_{L^q(\mathcal V)}\right)^{1/q} \le c_2\norm{\V u}_{W^{1,q}(\mathcal V)}, \] for all $\V u\in U_q$. \end{lemma} The following characterization holds \begin{equation*} \mathcal H^1_q(\mathcal V)=\{\V u\in W^{1,q}_0(\mathcal V):\; \mathop{\mathrm{div}} \V u=0\;\text{ on }\mathcal V,\; \V u=\V \omega_u\times \V x\;\text{ on }\mathcal B_2\;\text{ for some }\V \omega_u\in \ensuremath{\mathbb{R}}^3\}. \end{equation*} We notice that $\mathcal D_R(\mathcal V)\subset \mathcal H^1_q(\mathcal V)$, so $\mathcal H^1_q(\mathcal V)$ is dense in $\mathcal H_q(\mathcal V)$. Moreover, since $W^{1,q}(\mathcal V)$ is compactly embedded in $L^q(\mathcal V)$ for all $\displaystyle 1\le q<\infty$ (\cite[Theorem 6.3]{Adams}), we have the following lemma. \begin{lemma}\label{lem:embedding1} If $\displaystyle 1\le q<\infty$, then the embedding of $\mathcal H^1_q(\mathcal V)$ in $\mathcal H_q(\mathcal V)$ is compact. \end{lemma} We are now in a position to state some inequalities that will be used in the next sections. 
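Before doing so, we note for the reader's convenience that the value of $\tilde \rho$ on $\mathcal B_2$ in \eqref{eq:density} is simply the density of the homogeneous ball: this is a consistency check, not needed in the sequel. Indeed, using $\lambda=(2/5)mR^2$,
\[
\frac{15\lambda}{8\pi R^5}=\frac{15}{8\pi R^5}\cdot \frac 25\, mR^2=\frac{3m}{4\pi R^3}=\frac{m}{\frac 43 \pi R^3}\,.
\]
In particular, since
\[
\int_{\mathcal B_2}|\V \omega\times \V x|^2\; \d V
=\frac 23\, |\V \omega|^2\int_{\mathcal B_2}|\V x|^2\; \d V
=\frac{8\pi}{15}R^5|\V \omega|^2,
\]
the weight $\tilde \rho$ is precisely the one for which $\int_{\mathcal B_2}\tilde \rho\,|\V \omega\times \V x|^2\; \d V=\lambda|\V \omega|^2$, in accordance with \eqref{eq:norm_w} in the case $q=2$.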
The proofs are standard and will be omitted. We start with the following {\em Korn-type equalities}. \begin{lemma}[Korn's equality in $\mathcal H^1_2$] For all $\V v, \V w\in \mathcal H^1_2(\mathcal V)$ the following equality holds \[ 2\int_{\mathscr{L}}\T D(\V v):\T D(\V w)\; \d V =\int_{\mathcal V}\nabla \V v:\nabla \V w\; \d V. \] In particular, \begin{equation}\label{eq:korn_2} \norm{\nabla \V v}_{L^2(\mathcal V)}=\sqrt 2 \norm{\T D( \V v)}_{L^2(\mathscr{L})}. \end{equation} \end{lemma} In a similar fashion as in \cite[Proposition 3]{Geymonat}, and applying Lemma \ref{lem:korn_q} to \eqref{eq:norm_1q}, one can easily show the following Poincar\'e-Korn inequality. \begin{lemma}[Poincar\'e-Korn inequality in $\mathcal H^1_q$] Let $1<q<\infty$. There exist two positive constants $k_1<k_2$ such that \begin{equation}\label{eq:korn_q} k_1\norm{\V v}_{W^{1,q}(\mathcal V)}\le \norm{\V v}_{1,q}\le k_2\norm{\T D(\V v)}_{L^q(\mathscr{L})},\qquad \text{for all }\;\V v\in \mathcal H^1_q(\mathcal V). \end{equation} \end{lemma} Recall that $\mathcal V=\mathscr{L}\cup \overline{\mathcal B_2}$. The next lemma follows directly from \eqref{eq:korn_2} and the Poincar\'e inequality. \begin{lemma}\label{le:estimates_2} The following estimates hold for all $\V v\in \mathcal H^1_2(\mathcal V)$. \begin{enumerate} \item Let $\V \omega_v \in \ensuremath{\mathbb{R}}^3$ be such that $\V v=\V \omega_v \times \V x$ on $\mathcal B_2$, then \begin{equation}\label{eq:korn1} \norm{\nabla \V{ v}}_{L^2(\mathscr{L})}^2+\frac 83 \pi R^3 |\V \omega_v|^2=2\norm{\T D (\V{v})}_{L^2(\mathscr{L})}^2 \end{equation} \item There exists a positive constant $C_1$ depending only on $\mathscr{L}$ (and independent of $\V v$) such that \begin{equation}\label{eq:poincare_korn} \norm{\V v}_{L^2(\mathscr{L})}\le C_1\norm{\T D(\V v)}_{L^2(\mathscr{L})}. 
\end{equation} \end{enumerate} \end{lemma} Since $\mathcal D_R(\mathcal V)$ is dense in $\mathcal H^1_q(\mathcal V)$, by the Sobolev inequality together with \eqref{eq:korn_q}, we can prove the following lemma. \begin{lemma}\label{le:sobolev_korn} For all $s<3$, there exists a positive constant $k$ depending only on $\mathscr{L}$ (and independent of $\V v$) such that \begin{equation}\label{eq:sobolev_korn} \norm{\V v}_{q}\le k\norm{\T D(\V v)}_{L^s(\mathscr{L})}, \qquad \text{for all }\; \V v\in \mathcal H^1_q(\mathcal V) \end{equation} if and only if $q=6/(3-s)$. \end{lemma} We conclude this section by introducing the space $\mathcal H^k_q(\mathcal V)$ as the completion of $\mathcal D_R(\mathcal V)$ with respect to the norm $\norm{\cdot }_{W^{k,q}(\mathcal V)}$ for all $1\le q<\infty$ and $k\in \ensuremath{\mathbb{N}}$, $k\ge 2$. In particular, $\mathcal H^2_q(\mathcal V)$ is a Banach space endowed with the norm \begin{equation}\label{eq:norm_2q} \norm{\cdot}_{2,q}:=\left(\norm{\cdot}^q_{L^q(\mathcal V)}+\norm{\T D(\cdot)}^q_{L^q(\mathcal V)}+\norm{\T H(\cdot)}^q_{L^q(\mathcal V)}\right)^{1/q}, \end{equation} where $\T H$ denotes the third order tensor of second order derivatives. Similarly to Lemma \ref{lem:embedding1}, the following embedding result also holds. \begin{lemma}\label{lem:embeddingk} If $\displaystyle 1\le q<\infty$ and $k\ge 1$, then the embedding of $\mathcal H^k_q(\mathcal V)$ in $\mathcal H_q(\mathcal V)$ is compact. \end{lemma} The previous results together with Lemma \ref{lem:energy} and Lemma \ref{lem:conservation} allow us to present a new mathematical formulation of the problem. This new formulation is equivalent to \eqref{eq:motion}, and will reveal more features of the dynamics of our physical system. \section{An equivalent formulation}\label{sec:equivalent_formulation} Let us introduce the new variable \begin{equation}\label{eq:new_variables} \V \omega:=\V \omega_2-\V \omega_1. 
\end{equation} The definition of the variable $\V \omega$ comes from the following heuristic reasoning. Due to the liquid viscosity (since also $\T D(\V u)=\T D(\V v)$), we expect the velocity of the liquid relative to $\mathcal B_1$ (and also the one relative to $\mathcal B_2$) to decay to zero as time approaches infinity. If this happens, from the boundary conditions \eqref{eq:bc_v}, $\V \omega$ is also expected to decay, and the system would then move as a whole rigid body. Let $\T I:=\T I_{C}+\lambda \T 1=(\lambda_1+\lambda)\V e_1\otimes\V e_1 +(\lambda_2+\lambda)\V e_2\otimes\V e_2+(\lambda_3+\lambda)\V e_3\otimes\V e_3$ be the inertia tensor of the whole system with respect to $G$. Here, $\T 1$ denotes the identity tensor in $\ensuremath{\mathbb{R}}^{3\times 3}$. We note that $\T I_{C}=\T I_{\mathscr{L}}+\T I_{\mathcal B}$, where \[ \V b\cdot \T I_{\mathscr{L}}\cdot \V c=\rho\int_{\mathscr{L}}(\V x\times \V b)\cdot (\V x\times \V c)\; \d V, \qquad \V b, \V c\in \ensuremath{\mathbb{R}}^3. \] The tensor $\T I$ is symmetric and positive definite (thus, invertible). To simplify the notation, let us introduce the vector field \begin{equation}\label{eq:omega_R} \V \omega_R:=-\T I^{-1}\cdot\left[\rho\int_{\mathscr{L}}\V x\times \V v\; \d V+\lambda \V \omega\right]. 
\end{equation} In terms of the variables $(\V v,p,\V \omega_1,\V \omega)$, and taking into account \eqref{eq:Cauchy_stress} together with Lemma \ref{lem:conservation}, the equations of motion \eqref{eq:motion} can be equivalently reformulated as follows: \begin{equation}\label{eq:Motion} \begin{aligned} &\left.\begin{split} &\rho\left(\frac{\partial \V v}{\partial t}+\V{\dot \omega}_1\times \V x+ \V v\cdot \nabla \V v+2\V \omega_1\times \V v\right) \\ &\qquad\qquad\qquad\qquad\quad\qquad =\frac \rho2 \nabla |\V \omega_1\times \V x|^2+\mathop{\mathrm{div}} \T T(\V v,p) \\ &\mathop{\mathrm{div}} \V v=0 \end{split}\right\}&&\text{on }\mathscr{L}\times (0,\infty), \\ &\T I\cdot(\V{\dot \omega}_1-\V{\dot \omega}_R) +\V \omega_1\times \T I\cdot (\V \omega_1-\V \omega_R)=\V 0 \qquad&&\text{in }(0,\infty), \\ &\lambda\left(\V{\dot \omega}+\V{\dot \omega}_1 +\V \omega_1\times \V \omega\right) =-\int_{\mathcal S}\V x\times \T T(\V v,p)\cdot \V n\; \d \sigma \qquad &&\text{in }(0,\infty), \\ &\V v=\V 0\qquad &&\text{on }\mathcal C, \\ &\V v=\V \omega\times \V x\qquad &&\text{on }\mathcal S. \end{aligned} \end{equation} The proof of the equivalence between the formulations \eqref{eq:motion} and \eqref{eq:Motion} follows along the lines of the one provided in the case when no rigid body is present within the cavity of $\mathcal B_1$ (namely, when $R\equiv 0$). We refer the interested reader to \cite[Appendix]{DiGaMaZu} and \cite[Sections 2.1 and 2.2]{Ma2}. The energy balance \eqref{eq:energy} can be rewritten as follows \begin{equation}\label{eq:Energy0} \begin{split} \frac 12 \frac{\d }{\d t}\left[\rho \norm{\V v}^2_{L^2(\mathscr{L})}+\lambda |\V \omega|^2-\V \omega_R\cdot \T I \cdot \V \omega_R+(\V \omega_1-\V \omega_R)\cdot \T I\cdot (\V \omega_1-\V \omega_R)\right] +2\mu\norm{\T D( \V v)}^2_{L^2(\mathscr{L})}=0. 
\end{split}\end{equation} Consider the functionals \begin{equation}\label{eq:a_varphi} \begin{split} b:\;\V w\in\mathcal H_q(\mathcal V)\mapsto b(\V w)&:=-\T I^{-1}\cdot \int_{\mathcal V}\tilde \rho\V x\times \V w \\ &=-\T I^{-1}\cdot \left(\rho\int_{\mathscr{L}}\V x\times \V w+\lambda \V \omega_w\right)\in \ensuremath{\mathbb{R}}^3, \end{split}\end{equation} and taking $q=2$ in the previous definition, we define \begin{equation}\label{eq:lyapunov} \mathcal E:\; \V w\in \mathcal H_2(\mathcal V)\mapsto \mathcal E(\V w):=\norm{\V w}^2_2-b(\V w)\cdot\T I\cdot b(\V w)\in \ensuremath{\mathbb{R}}. \end{equation} In particular, if we consider the field \begin{equation}\label{eq:extension} \V{\tilde v}:=\left\{\begin{split} \V v\quad\quad \text{in }&\mathscr{L}, \\ \V \omega \times \V x\quad \text{in }&\mathcal B_2, \end{split}\right.\end{equation} and use \eqref{eq:norm_w} and \eqref{eq:omega_R}, we find that $b(\V{\tilde v})=\V \omega_R$ and \begin{equation}\label{eq:energy_d} \mathcal E(\V{\tilde v})=\rho \norm{\V v}^2_{L^2(\mathscr{L})}+\lambda |\V \omega|^2-\V \omega_R\cdot \T I \cdot \V \omega_R. \end{equation} The following lemma ensures that $\mathcal E$ is a positive definite functional. Actually, it says a little more. \begin{lemma}\label{le:kokr} There exists a constant $c\in (0,1)$ such that \begin{equation}\label{eq:kokr} c\norm{\V w}^2_2\le \mathcal E(\V w) \le\norm{\V w}^2_2, \end{equation} for all $\V w\in \mathcal H_2(\mathcal V)$. Moreover, for every $\V w\in \mathcal H^1_2(\mathcal V)$, there exists a positive constant $C$ such that \begin{equation}\label{eq:dissipation} \mathcal E(\V w)\le C\norm{\T D( \V w)}^2_{L^2(\mathscr{L})}. \end{equation} \end{lemma} \begin{proof} To prove \eqref{eq:kokr}, we will borrow some ideas from \cite[Section 7.2.3]{KoKr}. 
Consider the linear operator with finite dimensional range \begin{equation}\label{eq:compact} \mathbb B:\; \V w\in \mathcal H_q(\mathcal V)\mapsto (\mathbb B\V w)(\V x):=-b(\V w)\times \V x\in \mathcal R(\mathcal V), \end{equation} where $b(\cdot)$ has been defined in \eqref{eq:a_varphi}. If $q=2$, $\mathbb B$ is a nonnegative self-adjoint operator in $\mathcal H_2(\mathcal V)$ endowed with the inner product $(\cdot,\cdot)$ defined in \eqref{eq:inner_product_w}. In fact, since $\T I$ is symmetric, for all $\V w$ and $\V z\in \mathcal H_2(\mathcal V)$ we have \[\begin{split} (\mathbb B\V w,\V z)&=-\rho\int_{\mathscr{L}}(b(\V w)\times \V x)\cdot\V z\; \d V-\lambda \V \omega_z\cdot b(\V w) \\ &=-b(\V w)\cdot\left(\rho\int_{\mathscr{L}}\V x\times \V z\; \d V+\lambda \V \omega_z\right) =b(\V w)\cdot \T I\cdot b(\V z)=(\V w,\mathbb B\V z). \end{split}\] In particular, since $\T I$ is positive definite, $ (\mathbb B\V w, \V w)=b(\V w)\cdot \T I\cdot b(\V w)\ge 0$. Moreover, \begin{equation}\label{eq:1-B} ((\T 1-\mathbb B)\V w,\V w)=\norm{\V w}^2_2-b(\V w)\cdot \T I\cdot b(\V w) =\mathcal E(\V w). \end{equation} The inequality on the right-hand side of \eqref{eq:kokr} follows immediately from the latter displayed equations. Thus, to complete the proof of \eqref{eq:kokr}, it is enough to show that the operator $\T 1-\mathbb B$ admits a bounded inverse in $( \mathcal H_2(\mathcal V),\norm{\cdot}_2)$. First, we will show that $\T 1-\mathbb B$ is a nonnegative operator on $ \mathcal H_2(\mathcal V)$. 
Using the above calculations, we have the following: \[\begin{split} ((\T 1-\mathbb B)\V w,\V w)&=\norm{\V w}^2_2-b(\V w)\cdot \T I\cdot b(\V w) \\ &=\norm{\V w+b(\V w)\times \V x}^2_2-\rho\norm{b(\V w)\times \V x}^2_{L^2(\mathscr{L})} \\ &\quad-2b(\V w)\cdot \left(\rho\int_{\mathscr{L}}\V x\times \V w\; \d V+\lambda \V \omega_w\right) -\lambda |b(\V w)|^2 -b(\V w)\cdot \T I\cdot b(\V w) \\ &=\norm{\V w+b(\V w)\times \V x}^2_2-b(\V w)\cdot(\T I_{\mathscr{L}}+\lambda\T 1)\cdot b(\V w) +2b(\V w)\cdot \T I\cdot b(\V w) \\ &\quad -b(\V w)\cdot \T I\cdot b(\V w) \\ &=\norm{\V w+b(\V w)\times \V x}^2_2-b(\V w)\cdot(\T I_{\mathscr{L}}+\lambda\T 1)\cdot b(\V w) +b(\V w)\cdot \T I\cdot b(\V w) \\ &=\norm{\V w+b(\V w)\times \V x}^2_2 +b(\V w)\cdot \T I_{\mathcal B}\cdot b(\V w)\ge 0, \end{split}\] since $\T I_{\mathcal B}=\T I-\T I_{\mathscr{L}}-\lambda \T 1$ is also a positive definite tensor. In addition to this, one can also show that $((\T 1-\mathbb B)\V w,\V w)=0$ if and only if $\V w\equiv \V 0$ on $\mathcal V$. We only need to show that $((\T 1-\mathbb B)\V w,\V w)=0$ implies that $\V w\equiv \V 0$ (the converse implication is obvious). If $((\T 1-\mathbb B)\V w,\V w)=0$, then $b(\V w)\cdot \T I_{\mathcal B}\cdot b(\V w)=0$. Since $\T I_{\mathcal B}$ is positive definite, the previous statement implies that $b(\V w)\equiv \V 0$, and also $(\mathbb B\V w,\V w)=0$. Thus, \[ \norm{\V w}_2^2=(\mathbb B\V w,\V w)=0, \] implying that $\V w\equiv \V 0$ in $\mathcal V$. Summarizing, we have shown that $\mathbb B$ is a linear, nonnegative and self-adjoint operator with finite dimensional range, and for which $\gamma=1$ is not an eigenvalue. Necessarily, $\gamma=1$ is in the resolvent set of $\mathbb B$, implying that $\T 1-\mathbb B$ admits a bounded inverse in $\mathcal H_2(\mathcal V)$ endowed with the norm defined in \eqref{eq:norm_w}. This concludes the proof of \eqref{eq:kokr}. 
The estimate \eqref{eq:dissipation} is an immediate consequence of \eqref{eq:kokr} together with \eqref{eq:sobolev_korn}. \end{proof} Using \eqref{eq:energy_d} and \eqref{eq:lyapunov} in \eqref{eq:Energy0}, the balance of energy then reads as follows \begin{equation}\label{eq:Energy} \frac{\d }{\d t}\left[\mathcal E(\V{\tilde v})+(\V \omega_1-\V \omega_R)\cdot \T I\cdot (\V \omega_1-\V \omega_R)\right]+4\mu\norm{\T D( \V v)}^2_{L^2(\mathscr{L})}=0, \end{equation} where $\V{\tilde v}$ has been defined in \eqref{eq:extension}. From the physical viewpoint, $\mathcal E(\V{\tilde v})+(\V \omega_1-\V \omega_R)\cdot \T I\cdot (\V \omega_1-\V \omega_R)$ represents the {\em total kinetic energy} of the whole system of rigid bodies with a liquid-filled gap. Thanks to Lemma \ref{le:kokr}, we can introduce the inner product \begin{equation}\label{eq:inner_product_B} (\V v,\V w)_B:=((\T 1-\mathbb B)\V v,\V w),\qquad \text{for all }\V v,\V w\in \mathcal H_2(\mathcal V), \end{equation} with associated norm $\norm{\cdot}_B:=\sqrt{((\T 1-\mathbb B)\cdot,\cdot)}=\sqrt{\mathcal E(\cdot)}$. In addition to the energy balance, the conservation of the total angular momentum \eqref{eq:conservation0} for the whole system can be rewritten in terms of the new variables \begin{equation}\label{eq:conservation} |\T I\cdot (\V \omega_1(t)-\V \omega_R(t))|=|\T I\cdot (\V \omega_1(0)-\V \omega_R(0))|\qquad \text{ for all }t\ge 0. \end{equation} One can also obtain \eqref{eq:conservation} by taking the dot-product of \eqref{eq:Motion}$_3$ with $\T I\cdot (\V \omega_1-\V \omega_R)$, since the term $[\V \omega_1\times \T I\cdot (\V \omega_1-\V \omega_R)]\cdot \T I\cdot (\V \omega_1-\V \omega_R)$ vanishes. \section{Weak solutions and their properties}\label{sec:weak} Our investigation on the inertial motion about a fixed point of the system of two rigid bodies with a liquid-filled gap is carried out in a considerably large class of solutions to \eqref{eq:Motion} having finite kinetic energy. 
A {\em weak formulation} of problem \eqref{eq:Motion} can be found by dot-multiplying both sides of \eqref{eq:Motion}$_1$ by $\V \varphi\in\mathcal H^1_2(\mathcal V)$, integrating (by parts) the resulting equation over $\mathscr{L}\times (0,t)$, and using \eqref{eq:Motion}$_{3,4}$ together with \eqref{eq:xtimesomegatimesx} and \eqref{eq:omegatimesxtimesomegatimesx}. This leads to the following problem: {\em find a solution $(\V{\tilde v},\V \Omega)$ to the following system of equations} \begin{equation}\label{eq:weak} \begin{aligned} (\V{\tilde v}(t), \V \varphi)_B &+2\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v})\mathbin : \T D(\V \varphi)\;\d V\d \tau + b(\V \varphi)\cdot\int^t_0 [\V \Omega+b(\V{\tilde v})]\times \T I\cdot \V \Omega\;\d \tau \\ &+ \int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}\cdot \nabla \V{\tilde v} +2(\V \Omega+b(\V{\tilde v}))\times \V{\tilde v}]\cdot \V \varphi\;\d V\d \tau=(\V{\tilde v}(0), \V \varphi)_B, \\ & \qquad\qquad\qquad\qquad\qquad \qquad\text{ for all $\V \varphi\in\mathcal H^1_2(\mathcal V)$, }\text{and all $t\in[0,\infty)$. } \\ \T I\cdot\V{\Omega}(t)&+\int^t_0[\V \Omega+b(\V{\tilde v})]\times \T I\cdot \V \Omega\;\d \tau=\T I\cdot\V{\Omega}(0), \qquad\qquad\text{for all }t\in[0,\infty). \end{aligned} \end{equation} \begin{definition}\label{def:weak} The triple $(\V v,\V \omega_1,\V \omega)$ is a {\em weak solution} to \eqref{eq:Motion} if the following requirements are met. \begin{enumerate} \item Consider the field $\V{\tilde v}$ in \eqref{eq:extension}. Then, \[ \V{\tilde v}\in C_w ([0,\infty);\mathcal H_2(\mathcal V))\cap L^\infty(0,\infty;\mathcal H_2(\mathcal V))\cap L^2(0,\infty;\mathcal H^1_2(\mathcal V)). \] \item The vector field $\V \Omega=\V \omega_1-b(\V{\tilde v})\in C^0([0,\infty))\cap C^1((0,\infty))$. \item $(\V{\tilde v},\V \Omega)$ satisfies \eqref{eq:weak}. 
\item The following {\em strong energy inequality} holds: \begin{multline}\label{eq:strong_energy} \mathcal E(\V{\tilde v}(t))+\V \Omega(t)\cdot \T I\cdot \V \Omega(t)+4\mu\int^t_s\norm{\T D( \V{\tilde v}(\tau))}^2_{L^2(\mathscr{L})}\;\d \tau \le\mathcal E(\V{\tilde v}(s))+\V \Omega(s)\cdot \T I\cdot \V \Omega(s), \end{multline} for all $t\ge s$ and a.a. $s\ge 0$ including $s=0$. \end{enumerate} \end{definition} From the previous definition, it immediately follows that the physical velocity fields $(\V v,\V \omega_1,\V \omega)$ enjoy the following properties \begin{equation}\label{eq:regularity_weak} \begin{split} &\V v\in C_w ([0,\infty);H(\mathscr{L}))\cap L^\infty(0,\infty;H(\mathscr{L}))\cap L^2(0,\infty;H(\mathscr{L})\cap W^{1,2}(\mathscr{L})) \\ &\V \omega_1\in C([0,\infty))\cap L^\infty(0,\infty), \\ &\V \omega\in C([0,\infty))\cap L^\infty(0,\infty)\cap L^2(0,\infty), \\ &\V v=\V 0\quad \text{on }\; \mathcal C, \qquad\V v=\V \omega\times \V x\quad\text{on }\;\mathcal S\text{ (in the trace sense).} \end{split} \end{equation} In particular, if $(\V v,\V \omega_1,\V \omega)$ is a weak solution, by \eqref{eq:strong_energy} together with \eqref{eq:kokr} and \eqref{eq:inner_product_w}, it follows that there exists a constant $c_0=c_0(\V v(0),\V \Omega(0),\V\omega(0))$ such that \[ \rho\norm{\V v}_{L^2(\mathscr{L})}^2+\lambda|\V \omega|^2\le c^2_0,\qquad \text{for all }\; t\ge 0. \] Furthermore, up to redefining the above constant $c_0$, we also have \[\begin{aligned} |\V \omega_R(t)|\le |\T I^{-1}|\left(\rho\int_{\mathscr{L}}|\V x\times \V v|\; \d V+\lambda|\V \omega|\right)&\le c_0\;&&\text{ for all }\; t\ge 0, \\ \V \omega_1(t)\cdot \T I\cdot \V \omega_1(t)-2\V \omega_R(t)\cdot \T I \cdot \V \omega_1(t)\le \V \Omega(t)\cdot \T I\cdot \V \Omega(t)&\le c^2_0\;&&\text{ for all }\; t\ge 0. 
\end{aligned}\] Thus, for every $\varepsilon>0$, \[\begin{split} \lambda_{\min}|\V \omega_1(t)|^2\le c^2_0+2\V \omega_R(t)\cdot \T I \cdot \V \omega_1(t) &\le c^2_0+2\lambda_{\max}|\V \omega_R(t)|\;|\V \omega_1(t)| \\ &\le c^2_0+\frac{\lambda_{\max}}{\varepsilon}|\V \omega_R(t)|^2+\lambda_{\max}\varepsilon|\V \omega_1(t)|^2. \end{split}\] Here, $\lambda_{\min}$ and $\lambda_{\max}$ denote the minimum and maximum eigenvalue of $\T I$, respectively. Choosing $\varepsilon:=\lambda_{\min}/(2\lambda_{\max})$, we can conclude that \[ \frac 12 \lambda_{\min}|\V \omega_1(t)|^2\le c^2_0\left(1+2\frac{\lambda^2_{\max}}{\lambda_{\min}}\right)\;\qquad \quad \text{for all }\; t\ge 0. \] \begin{remark} Equations \eqref{eq:weak} together with \eqref{eq:strong_energy} represent the ``classical'' weak formulation ({\em \`a la Leray-Hopf}) for the problem of a rigid body having a cavity $\mathcal V$ completely filled by a viscous liquid with the varying density $\tilde \rho$ defined in \eqref{eq:density}. However, setting $\V \omega_1=\V \Omega+\V \omega_R$ and using \eqref{eq:xtimesomegatimesx} and \eqref{eq:omegatimesxtimesomegatimesx}, one can immediately observe that the system of equations in \eqref{eq:weak} is the appropriate weak formulation obtained by testing \eqref{eq:Motion} with functions $\V \psi\in C^\infty(\mathscr{L})$, $\mathop{\mathrm{div}} \V \psi=0$ on $\mathscr{L}$ and satisfying the boundary conditions $\V \psi=\V 0$ on $\mathcal C$ and $\V \psi=\V \omega_\psi\times \V x$ on $\mathcal S$. In fact, for such test functions\footnote{Due to its regularity, we can extend $\V \psi$ by its boundary value on $\mathcal B_2$ and use it as a test function in \eqref{eq:weak}.
} $\V \psi$ and all $t\in(0,\infty)$, \begin{equation}\label{eq:weak_v} \begin{split} &\int_{\mathscr{L}} \rho[\V v(t)+\V \omega_1(t)\times \V x] \cdot \V \psi\; \d V +\lambda\left[\V \omega(t)+\V \omega_1(t)+\int^t_0\V \omega_1\times \V \omega\;\d \tau\right]\cdot \V \omega_\psi \\ &\quad\qquad +2\mu\int^t_0\int_{\mathscr{L}}\T D(\V v)\mathbin : \T D(\V \psi)\;\d V\d \tau+ \int^t_0\int_{\mathscr{L}}\rho[\V v\cdot \nabla \V v +2\V \omega_1\times \V v]\cdot \V \psi\;\d V\d \tau \\ &\quad=\int_{\mathscr{L}} \rho[\V v(0)+\V \omega_1(0)\times \V x] \cdot \V \psi\; \d V +\lambda[\V \omega(0)+\V \omega_1(0)]\cdot\V \omega_\psi, \\ &\T I\cdot(\V \omega_1(t)-\V \omega_R(t))+\int^t_0\V \omega_1\times \T I\cdot \V \Omega\;\d \tau=\T I\cdot(\V \omega_1(0)-\V \omega_R(0)). \end{split} \end{equation} \end{remark} \begin{remark}\label{re:strong_2} Assume that $\V{\tilde v}$ possesses enough regularity to allow differentiation with respect to time and integration by parts in \eqref{eq:weak}$_1$. Then \[ \V \omega_1=\V \Omega+b(\V{\tilde v})=\V \Omega+\V \omega_R\;\in C^1(0,\infty), \] and \eqref{eq:Motion}$_3$ is satisfied for a.a. $t\in (0,\infty)$. Moreover, the fields $\V v$ and $\V \omega\times \V x$ in \eqref{eq:extension} maintain the same regularity as $\V{\tilde v}$ on $\mathscr{L}$ and $\mathcal B_2$, respectively. By \eqref{eq:weak}, we find that $\V{\tilde v}$ also satisfies \begin{equation}\label{eq:weak_s} \begin{split} &(\frac{\partial \V{\tilde v}}{\partial t}+\V{\dot \omega}_1\times \V x+\V{\tilde v}\cdot \nabla \V{\tilde v} +2\V{\omega}_1\times \V{\tilde v}, \V \varphi) +2\mu\int_{\mathcal V}\T D(\V{\tilde v})\mathbin :\T D(\V \varphi) =0 \end{split} \end{equation} for all $\V \varphi\in\mathcal H^1_2(\mathcal V)$ and all $t\in(0,\infty)$.
In particular, \[\begin{split} \int_{\mathscr{L}}\left[\rho\left(\frac{\partial \V v}{\partial t}+\V{\dot \omega}_1\times \V x+\V{v}\cdot \nabla \V{v} +2\V{\omega}_1\times \V{v}\right)-\mu\Delta \V v\right]\cdot \V \varphi=0 \end{split}\] for every $\V \varphi\in H(\mathscr{L})\cap W^{1,2}_0(\mathscr{L})$. Thus, there exists $\tilde p\in L^2(0,\infty;W^{1,2}(\mathscr{L}))$ such that \[ \rho\left(\frac{\partial \V v}{\partial t}+\V{\dot \omega}_1\times \V x+\V{v}\cdot \nabla \V{v} +2\V{\omega}_1\times \V{v}\right)-\mu\Delta \V v=\nabla \tilde p\qquad \text{a.e. in }\mathscr{L}\times (0,\infty). \] Set \[ p:=\tilde p-\frac \rho2 |\V{\omega}_1\times \V x|^2\qquad \text{in }\mathscr{L}, \] then one immediately notices that equations \eqref{eq:Motion}$_{1,2,5,6}$ are satisfied almost everywhere in space-time. Dot-multiplying \eqref{eq:Motion}$_1$ by $\V\varphi \in \mathcal H^1_2(\mathcal V)$ such that $\V\omega_\varphi=\V e_i$, $i=1,\,2,\,3$, and integrating the resulting equation over $\mathscr{L}$ we find \begin{multline*} \int_{\mathscr{L}}\rho\left[\frac{\partial \V v}{\partial t}+\V{\dot \omega}_1\times \V x+ \V v\cdot \nabla \V v+2\V \omega_1\times \V v\right]\cdot \V \varphi =\int_{\mathcal S}(\V x\times \T T\cdot \V n)\cdot \V e_i-2\mu\int_{\mathscr{L}} \T D(\V v)\mathbin : \T D(\V \varphi). \end{multline*}Using \eqref{eq:xtimesomegatimesx} and \eqref{eq:omegatimesxtimesomegatimesx}, the latter displayed equation is equivalent to the following one: \begin{multline*} (\frac{\partial \V{\tilde v}}{\partial t}+\V{\dot \omega}_1\times \V x+\V{\tilde v}\cdot \nabla \V{\tilde v} +2\V{\omega}_1\times \V{\tilde v}, \V \varphi) +2\mu\int_{\mathcal V}\T D(\V{\tilde v})\mathbin :\T D(\V \varphi) \\ -\lambda(\V{\dot \omega}+\V{\dot \omega}_1+\V \omega_1\times \V \omega)\cdot \V e_i=\int_{\mathcal S}(\V x\times \T T\cdot \V n)\cdot \V e_i. 
\end{multline*} By \eqref{eq:weak_s}, we can then conclude that \[ \lambda(\V{\dot \omega}+\V{\dot \omega}_1+\V \omega_1\times \V \omega)\cdot \V e_i =-\int_{\mathcal S}(\V x\times \T T\cdot \V n)\cdot \V e_i, \] for all $i=1,2,3$, and this proves that also \eqref{eq:Motion}$_4$ is satisfied. \end{remark} The proof of the existence of weak solutions will be accomplished by using the Galerkin method together with a suitable approximation of the liquid velocity in $\mathcal H_2(\mathcal V)$. To this end, we will prove the existence of a special basis of $\mathcal H_2(\mathcal V)$ and of a special basis of $\mathcal H^2_2(\mathcal V)$. We start by noticing that, taking \eqref{eq:norm_1q} with $q=2$, the norm $\norm{\cdot}_{1,2}$ is induced by the following inner product \begin{equation}\label{eq:inner_product_w_1} (\V v,\V w)_1=(\V v,\V w)+2\mu\int_{\mathscr{L}}\T D(\V v)\mathbin : \T D(\V w)\; \d V, \end{equation} and the latter makes $\mathcal H^1_2(\mathcal V)$ a Hilbert space. Consider the bilinear form $a:\; \mathcal H^1_2(\mathcal V)\times \mathcal H^1_2(\mathcal V)\to \ensuremath{\mathbb{R}}$ defined as follows \begin{equation}\label{eq:a} a(\V v,\V w):=2\mu\int_{\mathscr{L}}\T D(\V v)\mathbin :\T D(\V w). \end{equation} By \eqref{eq:norm_1q} and \eqref{eq:korn_q} with $q=2$, $a(\cdot,\cdot)$ is a continuous and coercive bilinear form in $\mathcal H^1_2(\mathcal V)$. Thus, by the Lax-Milgram Theorem, for every $\V f\in \mathcal H_2(\mathcal V)$ there exists a unique solution $\V w\in \mathcal H^1_2(\mathcal V)$ to the variational problem \begin{equation}\label{eq:variational_stokes} a(\V w,\V \varphi)=( \V f,\V \varphi),\qquad \text{ for all }\V \varphi\in \mathcal H^1_2(\mathcal V), \end{equation} where the inner product $(\cdot,\cdot )$ has been defined in \eqref{eq:inner_product_w}.
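As a purely illustrative aside (not part of the argument), the variational problem \eqref{eq:variational_stokes} has a simple finite-dimensional analogue: after any Galerkin discretization, $a(\cdot,\cdot)$ and $(\cdot,\cdot)$ become symmetric positive definite Gram matrices, and the Lax-Milgram solution reduces to a linear solve. The sketch below uses hypothetical random matrices as stand-ins for these Gram matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Hypothetical SPD Gram matrices: K for the coercive form a(.,.), M for the inner product (.,.)
X = rng.standard_normal((n, n)); K = X @ X.T + np.eye(n)
Y = rng.standard_normal((n, n)); M = Y @ Y.T + np.eye(n)
f = rng.standard_normal(n)

# Lax-Milgram in coordinates: a(w, phi) = (f, phi) for all phi  <=>  K w = M f
w = np.linalg.solve(K, M @ f)

phi = rng.standard_normal(n)                       # an arbitrary test vector
assert np.isclose(phi @ (K @ w), phi @ (M @ f))    # the variational identity holds
```

Coercivity of $a(\cdot,\cdot)$ corresponds to positive definiteness of $K$, which guarantees that the linear system is uniquely solvable.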
In other words, $\V w$ is a generalized solution (with respect to the inner product \eqref{eq:inner_product_w}) to the problem \begin{equation}\label{eq:stokes0} \begin{split} &\left.\begin{split} &-\frac{1}{\tilde \rho}\mathop{\mathrm{div}} \T T(\V{\tilde v},p)=\V g \\ &\mathop{\mathrm{div}} \V{\tilde v}=0 \end{split}\quad\right\}\quad \text{in }\mathcal V \\ &\ \ \V{\tilde v}=\V 0\qquad \text{on }\mathcal C, \end{split} \end{equation} where $\V g\in L^2(\mathcal V)$ is such that $\V f=\mathcal P_2 \V g$. \footnote{We recall that $\mathcal P_2$ is the orthogonal projection of $L^2_R(\mathcal V)$ onto $\mathcal H_2(\mathcal V)$ with respect to the inner product $(\cdot,\cdot)$, defined in \eqref{eq:inner_product_w} (see Section \ref{sec:functional}). } With an argument similar to the one that leads to the classical estimates for the Stokes problem (see \cite[Theorem IV.6.1]{Ga}), one can further show that $\V w\in \mathcal H^2_2(\mathcal V)$, and there exists a unique (up to a constant) pressure field $q\in W^{1,2}(\mathcal V)$ such that equations \eqref{eq:stokes0}$_{1,2}$ are satisfied almost everywhere on $\mathcal V$. Moreover, $(\V w, q)$ satisfies the following estimates \begin{equation}\label{eq:stokes_estimate} \norm{\V w}_{2,2}+\norm{q}_{W^{1,2}(\mathcal V)}\le c\norm{\V g}_2, \end{equation} with $c=c(\mu,\rho,\lambda,R,\mathcal V)$ a positive constant. Consider the linear operator \[ A:\; \V u\in \mathcal H^2_2(\mathcal V)\mapsto A\V u:=-\nu \mathcal P(\Delta \V u)\in \mathcal H_2(\mathcal V), \] where $\nu:=\mu/\rho$ is the liquid coefficient of kinematic viscosity. An integration by parts implies that $a(\V u,\V w)=(A\V u,\V w)$ for all $\V u,\V w\in \mathcal H^2_2(\mathcal V)$. Thus, $A$ is a symmetric operator. Moreover, $A$ is invertible and closed. 
In fact, the inverse is defined by the operator \[ A^{-1}:\;\V f\in\mathcal H_2(\mathcal V)\mapsto A^{-1}\V f=\V{\tilde w}\in \mathcal H^1_2(\mathcal V), \] the unique solution to \eqref{eq:variational_stokes}, and $A^{-1}$ is bounded because of \eqref{eq:stokes_estimate}. Therefore, $A$ and $A^{-1}$ are self-adjoint. In addition, thanks to the estimate \eqref{eq:stokes_estimate}, we have the following lemma. \begin{lemma} There exists a positive constant $c$ such that \begin{equation}\label{eq:Pdelta} \norm{\V w}_{2,2}\le c\nu\norm{\mathcal P(\Delta \V w)}_2\qquad \text{for all } \V w\in \mathcal H^2_2(\mathcal V). \end{equation} \end{lemma} Let us consider the following inner product in $\mathcal H^2_2(\mathcal V)$ \begin{equation}\label{eq:inner_product_w_2} (\V u,\V w)_2:=(A\V u,A\V w),\qquad\text{for all }\;\V u,\V w\in \mathcal H^2_2(\mathcal V). \end{equation} By \eqref{eq:Pdelta}, the associated norm is equivalent to $\norm{\cdot}_{2,2}$. We are now ready to prove the existence of a special basis. \begin{theorem}\label{th:basis} The spectral problem \begin{equation}\label{spectral} (\V u,\V \varphi)_2=\lambda (\V u,\V \varphi)_B\qquad \text{for all}\; \V \varphi\in\mathcal H^2_2(\mathcal V) \end{equation} admits a denumerable number of positive eigenvalues $\{\lambda_n\}_{n\in \ensuremath{\mathbb{N}}}$ clustering at $+\infty$. The corresponding eigenfunctions $\{\V w_n\}_{n\in \ensuremath{\mathbb{N}}}$ belong to $\mathcal H^2_2(\mathcal V)$ and form an orthonormal basis in $\mathcal H_2(\mathcal V)$ with respect to the inner product $( \cdot,\cdot)_B$ defined in \eqref{eq:inner_product_B}. Furthermore, $\{\V w_n/\sqrt{\lambda_n}\}_{n\in \ensuremath{\mathbb{N}}}$ forms an orthonormal basis in $\mathcal H^2_2(\mathcal V)$ with respect to the inner product $(\cdot,\cdot)_2$ defined in \eqref{eq:inner_product_w_2}.
\end{theorem} \begin{proof} By Lemma \ref{le:kokr} and the Lax-Milgram Theorem, for every $\V f\in\mathcal H_2(\mathcal V)$ there exists a unique solution to the problem \begin{equation}\label{eq:lax} (\V u,\V \varphi)_2= (\V f,\V \varphi)_B\qquad \text{for all}\; \V \varphi\in\mathcal H^2_2(\mathcal V). \end{equation} Consider the operator $S_0:\V f\in\mathcal H_2(\mathcal V)\mapsto S_0\V f:=\V u\in\mathcal H^2_2(\mathcal V)$, where $\V u$ is the unique solution to \eqref{eq:lax}. By Lemma \ref{lem:embeddingk}, the injection $J:\mathcal H^2_2(\mathcal V)\to \mathcal H_2(\mathcal V)$ is compact. Thus, the operator $S:=J\circ S_0:\V f\in\mathcal H_2(\mathcal V)\mapsto S\V f\in\mathcal H_2(\mathcal V)$ is also compact. Moreover, $S$ is symmetric with respect to the inner product $( \cdot,\cdot)_B$ defined in \eqref{eq:inner_product_B}. In fact, for every $\V f_1$ and $\V f_2\in\mathcal H_2(\mathcal V)$, we know that there exist unique $\V u_1$ and $\V u_2\in\mathcal H^2_2(\mathcal V)$ solutions to \eqref{eq:lax} with $\V f$ replaced by $\V f_1$ and $\V f_2$, respectively. So, $S\V f_1=\V u_1$ and $S\V f_2=\V u_2$, and \begin{multline*} ( S\V f_1,\V f_2)_B=(\V u_1,\V f_2)_B=( \V f_2,\V u_1)_B=(\V u_2,\V u_1)_2 =(\V u_1,\V u_2)_2=( \V f_1,\V u_2)_B=( \V f_1,S\V f_2)_B. \end{multline*} In addition, if $\V f_1=\V f_2\equiv\V f$, then $\V u_1=\V u_2\equiv\V u$ and $( S\V f,\V f)_B=(\V u,\V u)_2$. Thus, $S$ is also a positive definite operator. Finally, $S$ is self-adjoint. To prove the latter, we notice that $I+S$ is a compact perturbation of the identity, and $-1$ is not an eigenvalue of $S$. Thus, $Range(I+S)=\mathcal H_2(\mathcal V)$ (\cite[Theorem 1, Section 5, Chapter X]{Yosida}). Since $Range(I+S)=\mathcal H_2(\mathcal V)$ and $I+S$ is symmetric, then $I+S$ is self-adjoint (\cite[Corollary to Theorem 1, Section 3, Chapter VII]{Yosida}), and so is $S=(I+S)-I$.
By the Hilbert-Schmidt Theorem, $(\mathcal H_2(\mathcal V),(\cdot,\cdot)_B)$ admits an orthonormal basis of eigenfunctions $\{\V w_n\}_{n\in \ensuremath{\mathbb{N}}}$ of $S$ with corresponding positive eigenvalues $\{\nu_n\}_{n\in \ensuremath{\mathbb{N}}}$ converging to $0$ as $n\to\infty$. Let us denote $\lambda_n:=\nu_n^{-1}>0$ for every $n\in \ensuremath{\mathbb{N}}$. So, $\{\lambda_n\}_{n\in \ensuremath{\mathbb{N}}}$ forms a sequence of eigenvalues of the spectral problem \eqref{spectral} clustering at $+\infty$, with corresponding eigenfunctions $\{\V w_n\}_{n\in \ensuremath{\mathbb{N}}}$. Indeed, by the definition of $S$, we find that $\V w_n\in\mathcal H^2_2(\mathcal V)$ and \[ \nu_n(\V w_n,\V \varphi)_2=( S\V w_n,\V \varphi)_2=( \V w_n,\V \varphi)_B,\quad\text{for every }\V \varphi\in\mathcal H^2_2(\mathcal V),\;n\in \ensuremath{\mathbb{N}}. \] Finally, $\{\V w_n/\sqrt{\lambda_n}\}_{n\in \ensuremath{\mathbb{N}}}$ forms an orthonormal basis in $\mathcal H^2_2(\mathcal V)$ with respect to the inner product $(\cdot,\cdot)_2$ defined in \eqref{eq:inner_product_w_2}. To see this, let $\V u\in \mathcal H^2_2(\mathcal V)$ be such that $(\V w_n,\V u)_2=0$ for every $n\in \ensuremath{\mathbb{N}}$. Then, \[ 0=\nu_n\left(\V w_n,\V u\right)_2=(S\V w_n,\V u)_2=( \V w_n,\V u)_B \] for every $n\in \ensuremath{\mathbb{N}}$, and this implies that $\V u=0$ since $\{\V w_n\}_{n\in \ensuremath{\mathbb{N}}}$ forms a basis in $\mathcal H_2(\mathcal V)$ endowed with the inner product $(\cdot,\cdot)_B$. Therefore, $\{\V w_n/\sqrt{\lambda_n}\}_{n\in \ensuremath{\mathbb{N}}}$ is a basis of $\mathcal H^2_2(\mathcal V)$.
Furthermore, \[\begin{split} (\frac{\V w_n}{\sqrt{\lambda_n}},\frac{\V w_m}{\sqrt{\lambda_m}})_2&=\frac{1}{\sqrt{\lambda_n}} \frac{1}{\sqrt{\lambda_m}}\left(\V w_n,\V w_m\right)_2 =\frac{\lambda_n}{\sqrt{\lambda_n}\sqrt{\lambda_m}}\left(S\V w_n,\V w_m\right)_2 \\ &=\frac{\lambda_n}{\sqrt{\lambda_n}\sqrt{\lambda_m}}( \V w_n,\V w_m)_B =\frac{\lambda_n}{\sqrt{\lambda_n}\sqrt{\lambda_m}}\delta_{nm}\qquad\text{for all }n,m\in \ensuremath{\mathbb{N}}. \end{split}\] \end{proof} We are now in position to prove the following result about the existence of weak solutions to \eqref{eq:Motion}. \begin{theorem}\label{th:weak} For every $\V v_0\in H(\mathscr{L})$, $\V \omega_{10},\; \V \omega_{0}\in \ensuremath{\mathbb{R}}^3$ such that $\V v_0=\V \omega_0\times \V x$ on $\mathcal S$, there exists at least one weak solution to \eqref{eq:Motion} such that \begin{enumerate} \item $\lim_{t\to 0^+}\norm{\V{v}(t)-\V{v}_0}_2=\lim_{t\to 0^+}|\V \omega_1(t)-\V \omega_{10}|=\lim_{t\to 0^+}|\V \omega(t)-\V \omega_{0}|=0$. \item The following decays hold \begin{equation}\label{eq:decay0} \lim_{t\to\infty}\norm{\V v}_{L^2(\mathscr{L})}=0\quad \text{and}\quad\lim_{t\to \infty}|\V \omega(t)|=0. \end{equation} In particular, if $\lambda_1=\lambda_2=\lambda_3$, then the rate of the previous decays is exponential. \item Equation \eqref{eq:conservation} holds. \end{enumerate} \end{theorem} \begin{proof} Consider the basis of $\mathcal H_2(\mathcal V)$ constructed in Theorem \ref{th:basis}. We look for ``approximate'' solutions \begin{equation}\label{eq:approximations} \V{\tilde v}_n(\V x,t)=\sum^n_{p=1}c_{np}(t)\V w_p(\V x),\qquad \V\Omega_{n}(t)=\sum^3_{i=1}\hat c_{ni}(t)\V e_i \end{equation} satisfying \eqref{eq:weak}$_1$ with $\V \varphi=\V w_r$, and \eqref{eq:weak}$_2$. Set \[ \V{\tilde v}_0:=\left\{\begin{split} \V v_0\qquad &\text{in }\mathscr{L}, \\ \V \omega_0\times \V x\quad &\text{in }\mathcal B_2. \end{split}\right. \] Then, $\V{\tilde v}_0\in \mathcal H_2(\mathcal V)$. 
Moreover, set $\V \Omega_0:=\V \omega_{10}-b(\V{\tilde v}_0)\in \ensuremath{\mathbb{R}}^3$. Let $\V{\tilde v}_{0n}$ denote the projection of $\V{\tilde v}_0$ onto $span\{\V w_1,\dots,\V w_n\}$. Replacing \eqref{eq:approximations} in \eqref{eq:weak}, we find that $(c_{nr},\hat c_{nk})_{r=1,\dots,n,\;k=1,2,3}$ satisfy the following system of $n+3$ first order initial value problems \begin{equation}\label{eq:coefficients}\begin{aligned} &\left.\begin{aligned} &\dot c_{nr}(t)+2\mu \sum^n_{p=1}a_{pr}c_{np}(t)+\sum^n_{p=1}\sum^n_{q=1}b_{pqr}c_{np}(t)c_{nq}(t) \\ &\qquad\qquad +\sum^3_{i=1}\sum^n_{p=1}d_{ipr}\hat c_{ni}(t)c_{np}(t) +\sum^3_{i=1}\sum^3_{j=1}f_{ijr}\hat c_{ni}(t)\hat c_{nj}(t)=0 \\ &c_{nr}(0)=( \V{\tilde v}_{0n},\V w_r)_B \end{aligned}\right\}&&\text{ for }r=1,\dots,n, \\ &\left.\begin{split} &\ell_k\dot{\hat c}_{nk}(t)+\sum^3_{i=1}\sum^3_{j=1}g_{ijk}\hat c_{ni}(t)\hat c_{nj}(t) +\sum^n_{p=1}\sum^3_{j=1}h_{pjk}c_{np}(t)\hat c_{nj}(t)=0 \\ &\hat c_{nk}(0)=\V\Omega_{0}\cdot \V e_k \end{split}\right\}&&\text{ for }k=1,2,3, \end{aligned}\end{equation} where the (constant) coefficients are: $\ell_k:=\V e_k\cdot\T I\cdot\V e_k>0$, \[\begin{split} &a_{pr}:=\int_{\mathcal V}\T D(\V w_p)\mathbin :\T D(\V w_r)\;\d V, \\ &b_{pqr}:=\int_{\mathcal V}\tilde \rho\left[\V w_p\cdot \nabla\V{\tilde w}_q -2\left(\T I^{-1}\cdot\int_{\mathcal V}\tilde \rho \V x\times \V w_p\;\d V\right)\times \V{\tilde w}_q\right]\cdot \V w_r\;\d V, \\ &d_{ipr}:=2\int_{\mathcal V}\tilde \rho (\V e_i\times\V w_p)\cdot\V w_r\;\d V -\V e_i\cdot \T I\cdot\left[\left(\T I^{-1}\cdot\int_{\mathcal V}\tilde \rho \V x\times \V w_p\;\d V\right)\times\left(\T I^{-1}\cdot\int_{\mathcal V}\tilde \rho \V x\times \V w_r\;\d V\right)\right], \\ &f_{ijr}:=\V e_i\cdot\left[\left(\T I^{-1}\cdot\int_{\mathcal V}\tilde \rho \V x\times \V w_r\;\d V\right)\times \T I\cdot \V e_j\right], \quad g_{ijk}:=\V e_k\cdot (\V e_i\times \T I\cdot \V e_j), \\ &h_{pjk}:=-\V e_k\cdot \left[\left(\T
I^{-1}\cdot\int_{\mathcal V}\tilde \rho \V x\times \V w_p\;\d V\right)\times \T I\cdot \V e_j\right]. \end{split}\] By the classical theory of ordinary differential equations, the initial value problem \eqref{eq:coefficients} admits a unique solution $(c_{nr},\hat c_{nk})_{r=1,\dots,n,k=1,2,3}$ defined in some interval $[0,T_n)$ with $T_n>0$. Actually, $T_n=+\infty$ for all $n\in \ensuremath{\mathbb{N}}$. In fact, the approximate solutions satisfy the following system of equations \begin{equation}\label{eq:weak_n} \begin{aligned} &(\frac{\d \V{\tilde v}_n}{\d t}, \V w_r)_B +2\mu\int_{\mathcal V}\T D(\V{\tilde v}_n)\mathbin : \T D(\V w_r)\;\d V + b(\V w_r)\cdot[(\V \Omega_n+b(\V{\tilde v}_n))\times \T I\cdot \V \Omega_n] \\ &\qquad\qquad\qquad\quad + \int_{\mathcal V}\tilde \rho[\V{\tilde v}_n\cdot \nabla \V{\tilde v}_n +2(\V \Omega_n+b(\V{\tilde v}_n))\times \V{\tilde v}_n]\cdot \V w_r\;\d V=0, &&\text{ for }r=1,\dots,n\text{ and all }n\in \ensuremath{\mathbb{N}}, \\ &\T I\cdot\V{\dot \Omega}_n+[\V \Omega_n+b(\V{\tilde v}_n)]\times \T I\cdot \V \Omega_n=0,&&\text{ for all }n\in \ensuremath{\mathbb{N}}, \end{aligned} \end{equation} and the energy equality \begin{equation}\label{eq:energy_n} \frac 12 \frac{\d }{\d t}\left[\mathcal E(\V{\tilde v}_n)+\V\Omega_{n}\cdot \T I \cdot \V \Omega_{n}\right]+2\mu\norm{\T D(\V{\tilde v}_n)}^2_{L^2(\mathscr{L})}=0\quad \text{ in }(0,T_n),\text{ for all }n\in \ensuremath{\mathbb{N}}. \end{equation} The latter equality is obtained by multiplying \eqref{eq:coefficients}$_1$ by $c_{nr}$ and summing over $r=1,\dots,n$, by multiplying \eqref{eq:coefficients}$_3$ by $\hat c_{nk}$ and summing over $k=1,2,3$, and then adding the resulting equations.
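The structure of the Galerkin system can be illustrated numerically. The following hedged sketch (with made-up random coefficients, not the actual $a_{pr}$, $b_{pqr}$, etc.) integrates a toy system $\dot{\V c}=-\T A\V c-\T B(\V c,\V c)$ in which, as in the system above, the quadratic terms do no work, and checks that the energy is nonincreasing, mirroring \eqref{eq:energy_n} and the uniform bound that yields $T_n=+\infty$.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # SPD stand-in for the dissipative (viscous) part
T = rng.standard_normal((n, n, n))
T = T - np.swapaxes(T, 0, 2)     # antisymmetric in (p, r): the quadratic term does no work

def rhs(t, c):
    # toy analogue of the first equation in (eq:coefficients)
    return -A @ c - np.einsum('pqr,p,q->r', T, c, c)

c0 = rng.standard_normal(n)
sol = solve_ivp(rhs, (0.0, 5.0), c0, rtol=1e-10, atol=1e-12)
E = 0.5 * np.sum(sol.y**2, axis=0)   # discrete energy, analogue of E + Omega.I.Omega
assert np.all(np.diff(E) <= 1e-8)    # nonincreasing energy, as in (eq:energy_n)
```

Because the energy is bounded by its initial value uniformly in time, the trajectory cannot blow up in finite time; this is the finite-dimensional counterpart of the continuation argument giving $T_n=+\infty$.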
Integrating \eqref{eq:energy_n} in $[0,t]$, $t<T_n$, and using \eqref{eq:kokr}, we find that \begin{equation}\label{eq:apriori2} c\norm{\V{\tilde v}_n(t)}_2^2+\V\Omega_{n}(t)\cdot \T I \cdot \V \Omega_{n}(t)+2\mu\int_0^t\norm{\T D(\V{\tilde v}_n)}^2_{L^2(\mathscr{L})}\;\d \tau\le \norm{\V{\tilde v}_0}^2_2+\V\Omega_{0}\cdot \T I \cdot \V \Omega_{0}, \end{equation} for all $t\in[0,T_n)$. Since the right-hand side does not depend on $n$ and $t$, necessarily $T_n=+\infty$ by the standard continuation theorem for ordinary differential equations. Moreover, the sequence $\{(\V{\tilde v}_n,\V \Omega_n)\}_{n\in \ensuremath{\mathbb{N}}}$ enjoys the following properties. \begin{enumerate} \item[(a)] By \eqref{eq:apriori2}, $\{\V{\tilde v}_n\}_{n\in \ensuremath{\mathbb{N}}}$ is uniformly bounded in $L^\infty(0,\infty;\mathcal H_2(\mathcal V))$. \item[(b)] $\{\V{\tilde v}_n\}_{n\in \ensuremath{\mathbb{N}}}$ is uniformly bounded also in $L^2(0,\infty;\mathcal H^1_2(\mathcal V))$ by \eqref{eq:apriori2} and \eqref{eq:sobolev_korn}. \item[(c)] $\{\V \Omega_n\}_{n\in \ensuremath{\mathbb{N}}}$ is uniformly bounded in $C^0([0,\infty))\cap C^1(0,\infty)$, by \eqref{eq:weak_n}$_2$ and \eqref{eq:apriori2}. \item[(d)] $\{\d\V{\tilde v}_n/\d t\}_{n\in \ensuremath{\mathbb{N}}}$ is uniformly bounded in $L^2(0,T;(\mathcal H^2_2(\mathcal V))')$ for every $T>0$. To show this, let $\mathbb P_n$ be the orthogonal projection of $\mathcal H^2_2(\mathcal V)$ onto $span\{\V w_1/\sqrt{\lambda_1},\dots,\V w_n/\sqrt{\lambda_n}\}$. By Theorem \ref{th:basis}, for every $\V w\in\mathcal H^2_2(\mathcal V)$ one has \begin{equation}\label{eq:mathbbPn} \V w=\sum^\infty_{\ell=1}\left(\V w,\frac{\V w_\ell}{\sqrt{\lambda_\ell}}\right)_2\frac{\V w_\ell}{\sqrt{\lambda_\ell}}\qquad \text{and}\qquad\norm{\mathbb P_n\V w}_{2,2}\le \norm{\V w}_{2,2},\quad\text{for all }n\in \ensuremath{\mathbb{N}}.
\end{equation} For every $\V w\in\mathcal H^2_2(\mathcal V)$, \[\begin{split} (\frac{\d \V{\tilde v}_n}{\d t}, \V w)_B&=(\frac{\d \V{\tilde v}_n}{\d t}, \mathbb P_n\V w)_B =-2\mu\int_{\mathcal V}\T D(\V{\tilde v}_n)\mathbin : \T D(\mathbb P_n\V w)\;\d V \\ &\quad - \int_{\mathcal V}\tilde \rho[\V{\tilde v}_n\cdot \nabla \V{\tilde v}_n +2(\V \Omega_n+b(\V{\tilde v}_n))\times \V{\tilde v}_n]\cdot (\mathbb P_n\V w)\;\d V \\ &\quad-b(\mathbb P_n\V w)\cdot[(\V \Omega_n+b(\V{\tilde v}_n))\times \T I\cdot \V \Omega_n]\quad \text{ for all }n\in \ensuremath{\mathbb{N}}. \end{split}\] We recall the following classical estimates that can be obtained using an integration by parts together with the H\"older inequality, \eqref{eq:korn_q} and \eqref{eq:sobolev_korn}. For every $\V u_1, \V u_2\in \mathcal H^1_2(\mathcal V)$ and $\V z\in \mathcal H^2_2(\mathcal V)$ one has \begin{multline}\label{eq:nonlinear} \left|\int_{\mathcal V}\tilde \rho(\V u_1\cdot\nabla \V u_2)\cdot \V z\; \d V\right|= \left|\int_{\mathcal V}\tilde \rho(\V u_1\cdot\nabla\V z)\cdot \V u_2\; \d V\right| \\ \le \norm{\V u_1}_6\norm{\nabla \V z}_3\norm{\V u_2}_2 \le c\norm{\T D(\V u_1)}_{L^2(\mathscr{L})}\norm{\V z}_{2,2}\norm{\V u_2}_2. \end{multline} Using the H\"older inequality again, together with \eqref{eq:nonlinear} and \eqref{eq:apriori2}, we find that \begin{multline*} \left|(\frac{\d \V{\tilde v}_n}{\d t}, \V w)_B\right|=\left|(\frac{\d \V{\tilde v}_n}{\d t}, \mathbb P_n\V w)_B\right| \le c_1\norm{\T D(\V{\tilde v}_n)}_{L^2(\mathscr{L})} \norm{\V w}_{2,2} \\ +c_2\norm{ \T D(\V{\tilde v}_n)}_{L^2(\mathscr{L})}\norm{\V w}_{2,2}\norm{\V{\tilde v}_n}_2 +c_3\norm{\V{\tilde v}_n}_2\norm{\V w}_{2,2} +c_4|\V \Omega_n|\norm{\V w}_{2,2}.
\end{multline*} Since the previous estimates hold for every $\V w\in \mathcal H^2_2(\mathcal V)$ and $\mathcal H_2(\mathcal V)\hookrightarrow (\mathcal H^2_2(\mathcal V))'$, by properties (a), (b) and (c), we can conclude that the sequence $\{\d\V{\tilde v}_n/\d t\}_{n\in \ensuremath{\mathbb{N}}}$ belongs to a bounded set of $L^2(0,T;(\mathcal H^2_2(\mathcal V))')$ for every $T>0$. \end{enumerate} Properties (b) and (d) imply that the sequence $\{\V{\tilde v}_n\}_{n\in \ensuremath{\mathbb{N}}}$ remains in a bounded set of the following space \[ \{\V u\in L^2(0,T;\mathcal H^1_2(\mathcal V)):\; \d \V u/\d t\in L^2(0,T;(\mathcal H^2_2(\mathcal V))') \}. \] Moreover, $\mathcal H^1_2(\mathcal V)\hookrightarrow \mathcal H_2(\mathcal V)\hookrightarrow (\mathcal H^2_2(\mathcal V))'$, with the first embedding being compact (Lemma \ref{lem:embedding1}). Taking into account all these features and properties (a)-(d), we can claim the existence of functions \[\begin{split} &\V{\tilde v}\in L^\infty(0,\infty;\mathcal H_2(\mathcal V))\cap L^2(0,\infty;\mathcal H^1_2(\mathcal V)), \\ &\V \Omega\in C^0([0,\infty))\cap C^1(0,\infty), \end{split}\] and subsequences, again denoted by $\{\V{\tilde v}_n\}_{n\in\ensuremath{\mathbb{N}}}$ and $\{\V \Omega_n\}_{n\in\ensuremath{\mathbb{N}}}$, such that \begin{equation}\label{eq:convergence}\begin{split} &\lim_{n\to \infty}\V{\tilde v}_n=\V{\tilde v}\quad\text{weakly$-*$ in }L^\infty(0,\infty;\mathcal H_2(\mathcal V)), \\ &\lim_{n\to \infty}\V{\tilde v}_n=\V{\tilde v}\quad\text{weakly in }L^2(0,\infty;\mathcal H^1_2(\mathcal V)), \\ &\lim_{n\to \infty}\V \Omega_n=\V \Omega\quad\text{uniformly in every closed interval }J\subset[0,\infty), \\ &\lim_{n\to \infty}\V{\tilde v}_n=\V{\tilde v}\quad\text{strongly in }L^2(0,T;\mathcal H_2(\mathcal V))\quad \text{for every }T>0. \end{split}\end{equation} The latter convergence is a consequence of properties (b) and (d), and of the Aubin-Lions compactness lemma (see \cite[Theorem 2.1, Chapter III]{Temam}).
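The role of the strong convergence \eqref{eq:convergence}$_4$ can be seen in a toy computation: a weakly-null oscillation is harmless in linear (tested) quantities, but it survives in quadratic ones, which is why the Aubin-Lions lemma is invoked before passing to the limit in the nonlinear terms. A hedged one-dimensional sketch (all functions here are illustrative choices, not quantities from the proof):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
u = np.sin(np.pi * x)
phi = np.cos(np.pi * x)                      # a fixed test function

n = 160
u_weak = u + np.sin(2*np.pi*n*x)             # converges to u only weakly as n grows
u_strong = u + np.sin(2*np.pi*n*x) / n       # converges to u strongly

lin_gap = abs(np.sum((u_weak - u) * phi) * dx)        # linear functional: vanishes
quad_gap = abs(np.sum(u_weak**2 - u**2) * dx)         # quadratic term: does not vanish
quad_gap_strong = abs(np.sum(u_strong**2 - u**2) * dx)

assert lin_gap < 1e-2           # weak convergence suffices for linear terms
assert quad_gap > 0.4           # but the quadratic term keeps an O(1) gap (about 1/2)
assert quad_gap_strong < 1e-2   # strong convergence removes it
```

This is the one-dimensional caricature of why the weak limits alone would not identify the limit of $\tilde\rho\,\V{\tilde v}_n\cdot\nabla\V{\tilde v}_n$.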
To conclude the proof of the theorem, we need to show that the couple $(\V{\tilde v},\V \Omega)$ satisfies \eqref{eq:weak}. In other words, we need to pass to the limit as $n\to \infty$ in the following equation obtained from \eqref{eq:weak_n}, after an integration with respect to time: \begin{equation}\label{eq:weak_nr} \begin{split} (\V{\tilde v}_n(t),\V \varphi)_B-(\V{\tilde v}_n(0),\V \varphi)_B &+2\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v}_n)\mathbin : \T D(\V \varphi)\;\d V\d \tau \\ &+ \int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}_n\cdot \nabla \V{\tilde v}_n +2(\V \Omega_n+b(\V{\tilde v}_n))\times \V{\tilde v}_n]\cdot\V \varphi\;\d V\d \tau \\ &+ b(\V \varphi)\cdot\int^t_0 [\V \Omega_n+b(\V{\tilde v}_n)]\times \T I\cdot \V \Omega_n\;\d \tau=0, \\ \T I\cdot\V{\Omega}_n(t)-\T I\cdot\V{\Omega}_n(0)&+\int^t_0[\V \Omega_n+b(\V{\tilde v}_n)]\times \T I\cdot \V \Omega_n\;\d \tau=0, \text{ for all }t\in[0,\infty). \end{split} \end{equation} Thanks to \eqref{eq:convergence}, the convergence of both linear and nonlinear terms in the above equations follows from standard arguments. We have then shown that, for every $T>0$, the couple $(\V{\tilde v},\V \Omega)$ satisfies \eqref{eq:weak} for every $\V \varphi\in \mathcal H^2_2(\mathcal V)$ and all $t\in [0,T)$. Since $\mathcal H^2_2(\mathcal V)$ is dense in $\mathcal H^1_2(\mathcal V)$, \eqref{eq:weak}$_1$ is also satisfied for every $\V \varphi\in \mathcal H^1_2(\mathcal V)$. Moreover, $\V{\tilde v}\in C_w([0,T);\mathcal H_2(\mathcal V))$ since it satisfies \eqref{eq:weak} in $[0,T)$ for every $T>0$. In fact, from the weak formulation, one can easily show that if $t_0\in [0,T)$, then for every $\varepsilon>0$ there exists $\delta=\delta(\varepsilon)>0$ such that for every $t\in (t_0-\delta,t_0+\delta)$: \[ |(\V{\tilde v}(t)-\V{\tilde v}(t_0),\V \varphi)_B|<\varepsilon,\qquad \text{for all }\;\V \varphi\in\mathcal H^1_2(\mathcal V). 
\] By the density of $\mathcal H^1_2(\mathcal V)$ in $\mathcal H_2(\mathcal V)$, the latter property continues to hold for every $\V \varphi\in\mathcal H_2(\mathcal V)$. In addition, taking the limit as $n\to \infty$ in \eqref{eq:apriori2} and using \eqref{eq:convergence}$_{2,3,4}$ with $\V{\tilde v}\in C_w([0,T);\mathcal H_2(\mathcal V))$, we can conclude that $(\V{\tilde v},\V \Omega)$ satisfies the strong energy inequality \eqref{eq:strong_energy}. Let us prove properties {\em 1.} to {\em 3.} in the statement. Let $\V \omega_1:=\V \Omega+b(\V{\tilde v})$ and recall that $\V{\tilde v}$ has the following representation \[ \V{\tilde v}=\left\{\begin{split} \V v\qquad &\text{in }\;\mathscr{L}, \\ \V \omega\times \V x\;\quad &\text{in }\,\mathcal B_2. \end{split}\right. \] Then, $(\V v,\V \omega_1,\V \omega)$ satisfies \eqref{eq:regularity_weak}. Recalling \eqref{eq:energy_d} and \eqref{eq:kokr}, property {\em 1.} immediately follows from the strong energy inequality \eqref{eq:strong_energy} and the lower semicontinuity at zero of the map $t\mapsto \norm{\V v(t)}_2^2$. Concerning the decays stated in property {\em 2.}, by \eqref{eq:strong_energy} and \eqref{eq:dissipation}, for all $t\ge s$ and a.a. $s\ge 0$ including $s=0$, we find that \[ \mathcal E(\V{\tilde v}(t))+C\mu\int^t_s\mathcal E(\V{\tilde v}(\tau))\;\d \tau\le\mathcal E(\V{\tilde v}(s))+G(t,s), \] where $G(t,s):=\V \Omega(t)\cdot \T I\cdot \V \Omega(t)-\V \Omega(s)\cdot \T I\cdot \V \Omega(s)$. By \eqref{eq:weak}$_2$, \eqref{eq:strong_energy} with $s=0$ and the H\"older inequality, we find that \[ G(t,s)= 2\int^t_s\V \Omega\cdot[b(\V{\tilde v})\times \T I\cdot \V \Omega]\;\d \tau\le c_1\int^t_s F(\tau)\; \d \tau \] where $c_1$ is a positive constant (independent of time) and $F(t):=\norm{\V{\tilde v}(t)}_2$. Hence, \eqref{eq:decay0} follows by Lemma \ref{lem:gronwall1}.
In particular, if $\lambda_1=\lambda_2=\lambda_3$, then $\V \Omega\cdot[b(\V{\tilde v})\times \T I\cdot \V \Omega]=0$, and also the exponential decay follows. Finally, we obtain \eqref{eq:conservation} from \eqref{eq:weak}$_2$ by dot-multiplying it by $\T I\cdot\V \Omega$ and recalling that $\V \Omega=\V \omega_1-~b(\V{\tilde v})$. \end{proof} Due to the coupling with the Navier-Stokes equations, also for the problem at hand it is an open problem whether the weak solutions constructed in Theorem \ref{th:weak} depend continuously upon the initial data and, in particular, whether they are unique. Nevertheless, such a property holds for any weak solution possessing further regularity, as in the classical Navier-Stokes case. \begin{theorem}\label{th:continuous_dependence} Consider two weak solutions $(\V v,\V \omega_1,\V \omega)$ and $(\V v^*,\V \omega_1^*,\V \omega^*)$ to \eqref{eq:Motion} corresponding to initial data $(\V v_0,\V \omega_{10},\V \omega_0)$ and $(\V v_0^*,\V \omega_{10}^*,\V \omega_0^*)$, respectively. Suppose that there exists a time $T>0$ such that \begin{equation}\label{eq:serrin} \V v^*\in L^p(0,T;L^q(\mathscr{L})), \qquad \frac 2p+\frac 3q=1,\quad \text{for some }\; q>3. \end{equation} Then, the following properties hold. \begin{itemize} \item[a)] There exists a positive constant $c$ depending only on $\norm{\V v^*}_{L^\infty(0,T;L^2(\mathscr{L}))}$, $\norm{\V v^*}_{L^p(0,T;L^q(\mathscr{L}))}$, $\max_{t\in[0,T]}|\V \omega_1^*(t)|$ and $\max_{t\in[0,T]}|\V \omega^*(t)|$ such that \begin{multline*} \norm{\V v(t)-\V v^*(t)}_{L^2(\mathscr{L})}+|\V \omega_1(t)-\V \omega_1^*(t)|+|\V \omega(t)-\V \omega^*(t)| \\ \le c\left( \norm{\V v_0-\V v^*_0}_{L^2(\mathscr{L})}+|\V \omega_{10}-\V \omega_{10}^*|+|\V \omega_0-\V \omega^*_0|\right), \quad \text{ for all }t\in[0,T]. \end{multline*} \item[b)] If $(\V v_0,\V \omega_{10},\V \omega_0)=(\V v_0^*,\V \omega_{10}^*,\V \omega_0^*)$, then $(\V v,\V \omega_1,\V \omega)=(\V v^*,\V \omega_1^*,\V \omega^*)$ a.e.
in $[0,T]\times \mathscr{L}$. \end{itemize} \end{theorem} To show the previous theorem, we need some preliminary lemmas. Their proofs are standard and similar to the ones provided in \cite[Chapter 3]{Ma}. \begin{lemma}\label{lem:equivalent_weak} Consider a weak solution $(\V v,\V \omega_1,\V \omega)$ of \eqref{eq:Motion} and the extension $\V{\tilde v}$ of $\V v$ defined in \eqref{eq:extension}. Then, $\V{\tilde v}$ can be redefined on a set of zero Lebesgue measure in such a way that $\V{\tilde v}\in L^2_R(\mathcal V)$ for all $t\in [0,T)$ and it satisfies the following equation \begin{equation}\label{eq:weak_v_d}\begin{split} &-\int^t_s\left[(\V{\tilde v},\frac{\partial \V \phi}{\partial t})_B-b\left(\frac{\partial \V \phi}{\partial t}\right)\cdot \T I \cdot \V \Omega\right]\d \tau \\ &\qquad+(\V{\tilde v}(t), \V \phi(t))_B-b(\V \phi(t))\cdot\T I \cdot\V \Omega(t)-(\V{\tilde v}(s), \V \phi(s))_B+b(\V \phi(s))\cdot\T I \cdot\V \Omega(s) \\ &\qquad+2\mu\int^t_s\int_{\mathcal V}\T D(\V{\tilde v})\mathbin : \T D(\V \phi)\;\d V\d \tau + \int^t_s\int_{\mathcal V}\tilde \rho[\V{\tilde v}\cdot \nabla \V{\tilde v}+2(\V \Omega+b(\V{\tilde v}))\times \V{\tilde v}]\cdot \V \phi\;\d V\d \tau =0, \end{split}\end{equation} for all $0\le s\le t$, $t<T$ and $\V \phi\in\mathcal D_R(\mathcal V_T)$. \end{lemma} We will consider the {\em (time-)mollification} $\V w_h$ of a function $\V w\in L^2(0,T;\mathcal H_2^1(\mathcal V))$ as the function defined by \[ \V w_h(\V x,t):=\int^T_0 j_h(t-s)\V w(\V x,s)\; \d s\in C^\infty([0,T];\mathcal H_2^1(\mathcal V)), \] where $\{j_h\in C^\infty_0(-h,h):\, 0<h<T\}$ is a family of mollifiers. Then, the following lemma is an immediate consequence of \cite[Theorem 2.29]{Adams} and \cite[Lemma 1.3.3. \& Remark 1.3.8 (b)]{ArBaHiNe}. \begin{lemma}\label{lem:mollification} Let $H$ be a Hilbert space with the inner product $\langle\cdot,\cdot\rangle$.
If $\V u\in C_w([0,T),H)$, then \[ \lim_{h\to 0}\langle\V u-\V u_h,\V \psi\rangle=0 \] uniformly on every closed interval $J\subset [0,T)$ and for every $\V \psi\in H$. Let $X$ be a Banach space. For every $\V w\in L^p(0,T;X)$, $1\le p<\infty$, \[ \lim_{h\to 0}\norm{\V w-\V w_h}_{L^p(0,T;X)}=0. \] Moreover, let $\{\V w_n\}_{n\in \ensuremath{\mathbb{N}}}$ be a sequence converging to $\V w$ in $L^p(0,T;X)$. Then, \[ \lim_{n\to \infty}\norm{(\V w_n)_h-\V w_h}_{L^p(0,T;X)}=0,\qquad \text{ for all }0<h<T. \] \end{lemma} Moreover, the following result holds. \begin{lemma} For every $\V u, \V w\in C_w([0,T);L^2_R(\mathcal V))\cap L^2(0,T;L^2_R(\mathcal V))$ \begin{equation}\label{eq:d_mollification} \lim_{h\to 0}\int^t_0\left(( \V u,\frac{\partial \V w_h}{\partial \tau})_B+(\frac{\partial \V u_h}{\partial \tau},\V w)_B\right)\; \d \tau=( \V u(t),\V w(t))_B-(\V u(0),\V w(0))_B \end{equation} for all $t\in [0,T)$. \end{lemma} \begin{lemma}\label{lem:approximation} $\mathcal D_R(\mathcal V_T)$ is dense in $L^2(0,T;\mathcal H^1_2(\mathcal V))$. In particular, every $\V w\in L^2(0,T;\mathcal H^1_2(\mathcal V))$ can be approximated in $L^2(0,T;\mathcal H^1_2(\mathcal V))$ by the family $\{\V w_{n,h}:\; n\in \ensuremath{\mathbb{N}},\, 0<h<T\}$ of functions \[ \V w_{n,h}:=\sum^n_{k=1}(\V w_h,\V \Psi_k)_1\V \Psi_k, \] where $\{\V \Psi_k\}_{k\in \ensuremath{\mathbb{N}}}\subset \mathcal D_R(\mathcal V)$ is a basis of $\mathcal H_1(\mathcal V)$. Moreover, the following convergences hold: \begin{equation*} \begin{aligned} &\lim_{n\to \infty}\norm{\V w_{n,h}-\V w_h}_{1,2}=0\qquad &&\text{ for all }t\in [0,T]\,\text{ and }\,h<T, \\ &\lim_{n\to \infty}\norm{\V w_{n,h}-\V w_h}_{L^2(0,T;\mathcal H^1_2(\mathcal V))}=0\qquad &&\text{ for all }h<T, \\ &\lim_{h\to 0}\left(\lim_{n\to \infty}\norm{\V w_{n,h}-\V w}_{L^2(0,T;\mathcal H^1_2(\mathcal V))}\right)=0.
\end{aligned} \end{equation*} \end{lemma} We are now in a position to prove Theorem \ref{th:continuous_dependence}. \begin{proof}[Proof of Theorem \ref{th:continuous_dependence}] Consider the extensions $\V{\tilde v}$ and $\V{\tilde v}^*$ of $\V v$ and $\V v^*$ (together with the corresponding initial conditions), defined in \eqref{eq:extension}, respectively. Set $\V \Omega=\V \omega_1-b(\V{\tilde v})$ and $\V \Omega^*=\V \omega_1^*-b(\V{\tilde v}^*)$. Let $\{\V{\tilde v}_{n,h}:\,n\in \ensuremath{\mathbb{N}},\, 0<h<T\}$ and $\{\V{\tilde v}^*_{n,h}:\,n\in \ensuremath{\mathbb{N}},\, 0<h<T\}$ be the approximating families of $\V{\tilde v}$ and $\V{\tilde v}^*$ in $L^2(0,T;\mathcal H^1_2(\mathcal V))$ given by Lemma \ref{lem:approximation}, respectively. For every $n\in \ensuremath{\mathbb{N}}$ and $h\in (0,T)$, let us substitute $\V{\tilde v}^*_{n,h}$ and $\V{\tilde v}_{n,h}$ for $\V \phi$ in \eqref{eq:weak_v_d} with $s=0$, for $\V{\tilde v}$ and $\V{\tilde v}^*$, respectively. The following equations hold: \begin{equation*}\begin{split} &-\int^t_0\left[(\V{\tilde v},\frac{\partial \V{\tilde v}^*_{n,h}}{\partial \tau})_B-b\left(\frac{\partial \V{\tilde v}^*_{n,h}}{\partial \tau}\right)\cdot\T I\cdot\V \Omega\right]\; \d \tau +(\V{\tilde v}(t),\V{\tilde v}^*_{n,h}(t))_B -(\V{\tilde v}_0,\V{\tilde v}^*_{n,h}(0))_B \\ &\quad -b(\V{\tilde v}^*_{n,h}(t))\cdot\T I \cdot\V \Omega(t)+b(\V{\tilde v}^*_{n,h}(0))\cdot\T I \cdot\V \Omega_0 +2\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v})\mathbin : \T D(\V{\tilde v}^*_{n,h})\;\d V\d \tau \\ &\quad+\int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}\cdot \nabla \V{\tilde v} +2(\V \Omega+b(\V{\tilde v}))\times \V{\tilde v}]\cdot \V{\tilde v}^*_{n,h}\;\d V\d \tau =0, \end{split}\end{equation*} and \begin{equation*}\begin{split} &-\int^t_0\left[(\V{\tilde v}^*,\frac{\partial \V{\tilde v}_{n,h}}{\partial \tau})_B-b\left(\frac{\partial \V{\tilde v}_{n,h}}{\partial \tau}\right)\cdot\T I\cdot\V \Omega^*\right]\; \d \tau +(\V{\tilde
v}^*(t), \V{\tilde v}_{n,h}(t))_B -(\V{\tilde v}^*_0, \V{\tilde v}_{n,h}(0))_B \\ &\quad -b(\V{\tilde v}_{n,h}(t))\cdot\T I \cdot\V \Omega^*(t)+b(\V{\tilde v}_{n,h}(0))\cdot\T I \cdot\V \Omega^*_0 +2\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v}^*)\mathbin : \T D(\V{\tilde v}_{n,h})\;\d V\d \tau \\ &\quad+ \int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}^*\cdot \nabla \V{\tilde v}^* +2(\V \Omega^*+b(\V{\tilde v}^*))\times \V{\tilde v}^*]\cdot \V{\tilde v}_{n,h}\;\d V\d \tau =0. \end{split}\end{equation*} Taking the limit as $n\to \infty$ in the preceding two equations, we find that \begin{equation}\label{eq:approx_1_h} \begin{split} &-\int^t_0\left[(\V{\tilde v},\frac{\partial \V{\tilde v}^*_{h}}{\partial \tau})_B-b\left(\frac{\partial \V{\tilde v}^*_{h}}{\partial \tau}\right)\cdot\T I\cdot\V \Omega\right]\; \d \tau +(\V{\tilde v}(t),\V{\tilde v}^*_{h}(t))_B -(\V{\tilde v}_0,\V{\tilde v}^*_{h}(0))_B \\ &\quad -b(\V{\tilde v}^*_{h}(t))\cdot\T I \cdot\V \Omega(t)+b(\V{\tilde v}^*_{h}(0))\cdot\T I \cdot\V \Omega_0 +2\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v})\mathbin : \T D(\V{\tilde v}^*_{h})\;\d V\d \tau \\ &\quad+\int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}\cdot \nabla \V{\tilde v} +2(\V \Omega+b(\V{\tilde v}))\times \V{\tilde v}]\cdot \V{\tilde v}^*_{h}\;\d V\d \tau =0, \end{split} \end{equation} and \begin{equation}\label{eq:approx_2_h} \begin{split} &-\int^t_0\left[(\V{\tilde v}^*,\frac{\partial \V{\tilde v}_{h}}{\partial \tau})_B-b\left(\frac{\partial \V{\tilde v}_{h}}{\partial \tau}\right)\cdot\T I\cdot\V \Omega^*\right]\; \d \tau +(\V{\tilde v}^*(t), \V{\tilde v}_{h}(t))_B-(\V{\tilde v}^*_0, \V{\tilde v}_{h}(0))_B \\ &\quad -b(\V{\tilde v}_{h}(t))\cdot\T I \cdot\V \Omega^*(t)+b(\V{\tilde v}_{h}(0))\cdot\T I \cdot\V \Omega^*_0 +2\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v}^*)\mathbin : \T D(\V{\tilde v}_{h})\;\d V\d \tau \\ &\quad+ \int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}^*\cdot \nabla \V{\tilde v}^* +2(\V \Omega^*+b(\V{\tilde v}^*))\times 
\V{\tilde v}^*]\cdot \V{\tilde v}_{h}\;\d V\d \tau =0. \end{split} \end{equation} In the previous limits, the convergence of the linear terms is standard thanks to Lemma \ref{lem:approximation}. As for the nonlinear terms, the convergence follows from the estimates below, Lemma \ref{lem:approximation} and the Lebesgue dominated convergence theorem. For every $\V u_1,\V u_2\in L^\infty(0,T;\mathcal H(\mathcal V))\cap L^2(0,T;\mathcal H_2^1(\mathcal V))$: \begin{align*} \int^t_0\int_{\mathcal V}\tilde \rho (\V u_1\cdot \nabla \V u_1)\cdot [(\V u_2)_{n,h}-(\V u_2)_h]\;\d V\d \tau &\le \int^t_0\norm{\V u_1}_{6}\norm{\nabla \V u_1}_{2}\norm{(\V u_2)_{n,h}-(\V u_2)_h}_{3}\;\d \tau \\ &\le c_1\int^t_0\norm{\nabla \V u_1}^2_{2}\norm{(\V u_2)_{n,h}-(\V u_2)_h}_{1,2}\;\d \tau \end{align*} by the H\"older inequality, \eqref{eq:sobolev_korn}, \eqref{eq:korn_q} and the Sobolev embedding theorem. Moreover, for every $\V a\in L^\infty(0,T)$, by the H\"older inequality and \eqref{eq:korn_q}, \begin{equation}\label{eq:nonlinear_estimate0} \begin{split} \int^t_0\int_{\mathcal V}2\tilde \rho\,\{[\V a+b(\V u_1)]\times \V u_1\}\cdot [(\V u_2)_{n,h}-(\V u_2)_h]\;\d V\d \tau &\le \int^t_0\norm{\V a\times \V u_1}_2\norm{(\V u_2)_{n,h}-(\V u_2)_h}_{1,2}\; \d \tau \\ &\le c_2\left(\int^t_0\norm{(\V u_2)_{n,h}-(\V u_2)_h}_{1,2}^2\;\d \tau\right)^{1/2}, \end{split} \end{equation} where $c_2$ is a positive constant depending on $\norm{\V u_1}_{L^2(0,T;\mathcal H^1_2(\mathcal V))}$ and $\max_{t\in [0,T]}|\V a(t)|$.
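For the reader's convenience, we record how the exponents in the first of the two estimates above combine; only the three-dimensional Sobolev embedding $\mathcal H^1_2(\mathcal V)\hookrightarrow L^6(\mathcal V)\hookrightarrow L^3(\mathcal V)$ on the bounded domain $\mathcal V$ is used (a standard computation):
\[
\frac{1}{6}+\frac{1}{2}+\frac{1}{3}=1,
\qquad
\norm{\V u_1}_{6}\le c\,\norm{\V u_1}_{1,2},
\qquad
\norm{(\V u_2)_{n,h}-(\V u_2)_h}_{3}\le c\,\norm{(\V u_2)_{n,h}-(\V u_2)_h}_{1,2}.
\]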
From \eqref{eq:weak}$_2$ for $\V \Omega$ and $\V \Omega^*$, we find that \begin{multline*} \int^t_0 b\left(\frac{\partial \V{\tilde v}^*_{h}}{\partial \tau}\right)\cdot\T I\cdot\V \Omega\; \d \tau -b(\V{\tilde v}^*_{h}(t))\cdot\T I \cdot\V \Omega(t)+b(\V{\tilde v}^*_{h}(0))\cdot\T I \cdot\V \Omega_0 \\ =\int^t_0 b(\V{\tilde v}^*_{h})\cdot [(\V \Omega+b(\V{\tilde v}))\times \T I\cdot \V \Omega]\; \d \tau \end{multline*} and \begin{multline*} \int^t_0 b\left(\frac{\partial \V{\tilde v}_{h}}{\partial \tau}\right)\cdot\T I\cdot\V \Omega^*\; \d \tau -b(\V{\tilde v}_{h}(t))\cdot\T I \cdot\V \Omega^*(t)+b(\V{\tilde v}_{h}(0))\cdot\T I \cdot\V \Omega^*_0 \\ =\int^t_0 b(\V{\tilde v}_{h})\cdot [(\V \Omega^*+b(\V{\tilde v}^*))\times \T I\cdot \V \Omega^*]\; \d \tau. \end{multline*} Hence, adding \eqref{eq:approx_1_h} and \eqref{eq:approx_2_h}, we find that \begin{equation}\label{eq:approx_3_h} \begin{split} &-\int^t_0\left[(\V{\tilde v},\frac{\partial \V{\tilde v}^*_{h}}{\partial \tau})_B+(\V{\tilde v}^*,\frac{\partial \V{\tilde v}_{h}}{\partial \tau})_B\right]\;\d \tau +(\V{\tilde v}(t),\V{\tilde v}^*_{h}(t))_B-(\V{\tilde v}_0,\V{\tilde v}^*_{h}(0))_B \\ &\qquad +(\V{\tilde v}^*(t), \V{\tilde v}_{h}(t))_B-(\V{\tilde v}^*_0, \V{\tilde v}_{h}(0))_B \\ &\qquad+\int^t_0 b(\V{\tilde v}^*_{h})\cdot [(\V \Omega+b(\V{\tilde v}))\times \T I\cdot \V \Omega]\; \d \tau +\int^t_0 b(\V{\tilde v}_{h})\cdot [(\V \Omega^*+b(\V{\tilde v}^*))\times \T I\cdot \V \Omega^*]\; \d \tau \\ &\quad\quad+2\mu\int^t_0\int_{\mathcal V}[\T D(\V{\tilde v})\mathbin : \T D(\V{\tilde v}^*_{h}) +\T D(\V{\tilde v}^*)\mathbin : \T D(\V{\tilde v}_{h})]\;\d V\d \tau \\ &\quad\quad+\int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}\cdot \nabla \V{\tilde v} +2(\V \Omega+b(\V{\tilde v}))\times \V{\tilde v}]\cdot \V{\tilde v}^*_{h}\;\d V\d \tau \\ &\quad\quad+ \int^t_0\int_{\mathcal V}\tilde \rho[\V{\tilde v}^*\cdot \nabla \V{\tilde v}^* +2(\V \Omega^*+b(\V{\tilde v}^*))\times \V{\tilde v}^*]\cdot \V{\tilde v}_{h}\;\d 
V\d \tau =0. \end{split} \end{equation} Next, we take the limit as $h\to 0$ in \eqref{eq:approx_3_h}. Again, the convergence of the linear terms follows easily thanks to \eqref{eq:d_mollification} and Lemma \ref{lem:mollification}. As for the nonlinear terms, we use \eqref{eq:nonlinear_estimate0} and the following classical inequality \begin{multline}\label{eq:nonlinear_estimate_rs} \left|\int^T_0\int_{\mathcal V}\tilde \rho (\V u_1\cdot \nabla \V u_2)\cdot \V u_3\;\d V\d \tau\right| \\ \le c \left(\int^T_0\norm{\nabla \V u_1}_2^2\;\d \tau\right)^{3/(2q)}\left(\int^T_0\norm{\nabla \V u_2}_2^2\;\d \tau\right)^{1/2}\left(\int^T_0\norm{\V u_3}^p_q\norm{\V u_1}^2_2\;\d \tau\right)^{1/p} \end{multline} which holds for every $\V u_1,\V u_2\in L^\infty(0,T;\mathcal H(\mathcal V))\cap L^2(0,T;\mathcal H^1_2(\mathcal V))$ and $\V u_3\in L^p(0,T;L^q(\mathcal V))$ with $p$ and $q$ satisfying \eqref{eq:serrin} (see \cite[Lemma 1]{Serrin}). Moreover, from \eqref{eq:weak}$_2$, we find that \begin{multline*} \V \Omega^*(t)\cdot \T I \cdot \V \Omega(t)- \V \Omega^*_0\cdot \T I \cdot \V \Omega_0 \\ =-\int^t_0\V \Omega^*\cdot [(\V \Omega+b(\V{\tilde v}))\times \T I\cdot \V \Omega]\; \d \tau -\int^t_0\V \Omega\cdot [(\V \Omega^*+b(\V{\tilde v}^*))\times \T I\cdot \V \Omega^*]\; \d \tau.
\end{multline*} Hence, the pairs $(\V{\tilde v},\V \Omega)$ and $(\V{\tilde v}^*,\V \Omega^*)$ satisfy the following equality \begin{equation}\label{eq:2vv} \begin{split} (\V{\tilde v}(t),\V{\tilde v}^*(t))_B&-(\V{\tilde v}_0,\V{\tilde v}^*_0)_B +\int^t_0 [\V \Omega^*+b(\V{\tilde v}^*)]\cdot [(\V \Omega+b(\V{\tilde v}))\times \T I\cdot (\V \Omega-\V \Omega^*)]\; \d \tau \\ &+\V \Omega^*(t)\cdot \T I \cdot \V \Omega(t)- \V \Omega^*_0\cdot \T I \cdot \V \Omega_0+4\mu\int^t_0\int_{\mathcal V}\T D(\V{\tilde v})\mathbin : \T D(\V{\tilde v}^*)\;\d V\d \tau \\ &+\int^t_0\int_{\mathcal V}\tilde \rho[(\V{\tilde v}-\V{\tilde v}^*)\cdot \nabla \V{\tilde v} +2(\V \Omega-\V \Omega^*+b(\V{\tilde v}-\V{\tilde v}^*))\times \V{\tilde v}]\cdot \V{\tilde v}^*\;\d V\d \tau =0. \end{split} \end{equation} We recall that, by Definition \ref{def:weak}, $(\V{\tilde v},\V \Omega)$ and $(\V{\tilde v}^*,\V \Omega^*)$ satisfy the strong energy inequality \eqref{eq:strong_energy} for all $t\in[0,T]$: \begin{equation}\label{eq:strong_energy_v} \mathcal E(\V{\tilde v}(t))+\V \Omega(t)\cdot \T I\cdot \V \Omega(t)+4\mu\int^t_0\norm{\T D( \V{\tilde v}(\tau))}^2_{L^2(\mathscr{L})}\;\d \tau\le\mathcal E(\V{\tilde v}_0)+\V \Omega_0\cdot \T I\cdot \V \Omega_0, \end{equation} and \begin{equation}\label{eq:strong_energy_vstar} \mathcal E(\V{\tilde v}^*(t))+\V \Omega^*(t)\cdot \T I\cdot \V \Omega^*(t)+4\mu\int^t_0\norm{\T D( \V{\tilde v}^*(\tau))}^2_{L^2(\mathscr{L})}\;\d \tau\le\mathcal E(\V{\tilde v}^*_0)+\V \Omega^*_0\cdot \T I\cdot \V \Omega^*_0.
\end{equation} Adding \eqref{eq:strong_energy_v} and \eqref{eq:strong_energy_vstar}, and subtracting twice \eqref{eq:2vv}, we find that the fields $\V w:=\V{\tilde v}-\V {\tilde v}^*$ and $\V \xi:=\V \Omega-\V\Omega^*$ must satisfy the following inequality \begin{equation}\label{eq:subtraction} \begin{split} \mathcal E(\V w(t))+\V \xi(t)\cdot \T I\cdot \V \xi(t)&+4\mu\int^t_0\norm{\T D( \V w(\tau))}^2_{L^2(\mathscr{L})}\;\d \tau \\ &\le \mathcal E(\V w_0)+\V \xi_0\cdot \T I\cdot \V \xi_0-2\int^t_0 [\V \xi+b(\V w)]\cdot [(\V \Omega^*+b(\V{\tilde v}^*))\times \T I\cdot \V \xi]\; \d \tau \\ &\quad+2\int^t_0\int_{\mathcal V}\tilde \rho[\V w\cdot \nabla \V w+2(\V \xi+b(\V w))\times \V w]\cdot \V{\tilde v}^*\;\d V\d \tau, \end{split} \end{equation} where $\V w_0:=\V{\tilde v}_0-\V {\tilde v}^*_0$ and $\V \xi_0:=\V \Omega_0-\V\Omega^*_0$. By the H\"older inequality, \eqref{eq:nonlinear_estimate_rs} and Young's inequality, we get the following estimate \begin{multline*} \mathcal E(\V w(t))+\V \xi(t)\cdot \T I\cdot \V \xi(t)+2\mu\int^t_0\norm{\T D( \V w(\tau))}^2_{L^2(\mathscr{L})}\;\d \tau \le \mathcal E(\V w_0)+\V \xi_0\cdot \T I\cdot \V \xi_0 \\+ c_3\int^t_0[\norm{\V{\tilde v}^*(\tau)}^p_{L^q(\mathcal V)}+\norm{\V w(\tau)}_{L^2(\mathcal V)}+|\V \xi(\tau)|][\mathcal E(\V w(\tau))+\V \xi(\tau)\cdot \T I\cdot \V \xi(\tau)] \;\d \tau. \end{multline*} Recalling \eqref{eq:extension} and \eqref{eq:energy_d} and using Gr\"onwall's Lemma together with \eqref{eq:kokr}, properties {\em (a)} and {\em (b)} of Theorem \ref{th:continuous_dependence} immediately follow. \end{proof} \section{Existence of strong solutions}\label{sec:strong} In this section, we prove the local-in-time existence of strong solutions to \eqref{eq:Motion}, together with their continuous dependence upon the initial data, for a large class of initial conditions.
The approach is the one of maximal $L^p$-$L^q$ regularity in time-weighted $L^p$-spaces (see Appendix \ref{sec:maximal_regularity} for a brief discussion of this approach). Let us introduce some notation. For the remaining part of the paper, the brackets $[\cdot,\cdot]_\theta$ denote the complex interpolation, whereas $(\cdot,\cdot)_{\alpha,\gamma}$ are used for the real interpolation. For $p\in (1,\infty)$, $1/p<\upmu\le 1$ and a Banach space $X$, the {\em time-weighted $L^p$-spaces} are defined as follows \begin{equation}\label{eq:time_weigthed_Lp} \begin{aligned} &\V u\in L^p_{\upmu}((0,T); X) && \Leftrightarrow \quad t^{1-\upmu}\V u\in L^p((0,T);X),\\ &\V u\in H^1_{p,\upmu}((0,T);X) && \Leftrightarrow \quad \V u,\d \V u/\d t\in L^p_{\upmu}((0,T); X). \end{aligned} \end{equation} Consider the operator $(\T A_q,D(\T A_q))$ where \begin{equation}\label{eq:stokes_v} \T A_q:=-\frac{\mu}{\tilde \rho}\;\mathcal P_q\Delta \end{equation} is the Stokes operator with domain $D(\T A_q):=\{\V{\tilde w}\in \mathcal H^2_q(\mathcal V)\cap \mathcal H_q(\mathcal V):\; \V{\tilde w}=\V 0\text{ on }\mathcal C\}$, where $\tilde \rho$ is given in \eqref{eq:density}; we recall that $\mathcal P_q$ is the projection of $L^q_R(\mathcal V)$ onto $\mathcal H_q(\mathcal V)$. Moreover, for $p,\;q\in(1,\infty)$, we consider the spaces $X_0:=\mathcal H_q(\mathcal V)\times \ensuremath{\mathbb{R}}^3$, $X_1:=D(\T A_q)\times\ensuremath{\mathbb{R}}^3$, and the interpolation spaces \[ X_{\gamma,\upmu}:=(X_0,X_1)_{\upmu-1/p,p},\quad X_\alpha=[X_0,X_1]_{\alpha}\quad\text{for }\upmu\in (1/p,1],\;\alpha\in (0,1). \] The previous spaces are endowed with the norms \[ \norm{\V u}_{X_0}:=\sqrt{\norm{\V{\tilde v}}_{L^q(\mathcal V)}^2+|\V \omega_1|^2},\qquad\qquad \norm{\V u}_{X_1}:=\sqrt{\norm{\V{\tilde v}}_{W^{2,q}(\mathcal V)}^2+|\V \omega_1|^2} \] and similarly for the interpolation spaces.
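To illustrate the role of the time weight, consider the model function $\V u(t)=t^{-\gamma}\V x$ with $\V x\in X$ fixed and $\gamma>0$ (a purely illustrative example, not needed in the sequel). Then $t^{1-\upmu}\V u\in L^p((0,T);X)$ if and only if
\[
p(\gamma-1+\upmu)<1,
\qquad\text{i.e.}\qquad
\gamma<1-\upmu+\frac{1}{p},
\]
so that weights $\upmu<1$ allow solutions whose norm has a mild singularity as $t\to 0^+$; this is the mechanism behind rough initial data and the critical weight $\upmu_{\rm crit}$ introduced below.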
We recall the following characterization of Besov spaces $B^s_{qp}(\mathcal V)=(H^{s_0}_q(\mathcal V),H^{s_1}_q(\mathcal V))_{\theta,p}$ as real interpolation of Bessel potential spaces, and of Bessel potential spaces $H^s_q(\mathcal V)=[H^{s_0}_q(\mathcal V),H^{s_1}_q(\mathcal V)]_{\theta}$. These characterizations are valid for $s_0\ne s_1\in \ensuremath{\mathbb{R}}$, $p,q\in [1, \infty)$, $\theta\in (0,1)$ and $s=(1-\theta)s_0+\theta s_1$. We also recall that $B^s_{qq}(\mathcal V) = W^{s,q}(\mathcal V)$ and $B^s_{22}(\mathcal V) = W^{s,2}(\mathcal V) =H^s_2(\mathcal V)$. Before stating our main result about existence and related properties of strong solutions to \eqref{eq:Motion}, we need some preliminary observations. Let us consider the initial boundary value problem which describes the motion of a rigid body having a cavity $\mathcal V$ completely filled by a viscous liquid with a varying density $\tilde \rho$ defined in \eqref{eq:density}. \begin{equation}\label{eq:Motion_e} \begin{aligned} &\left.\begin{split} &\frac{\partial \V{\tilde v}}{\partial t}+\V{\dot \omega}_1\times \V x+ \V{\tilde v}\cdot \nabla \V{\tilde v}+2\V \omega_1\times \V{\tilde v} =\frac{\mu}{\tilde \rho}\Delta \V{\tilde v}-\frac{1}{\tilde \rho}\nabla \pi \\ &\mathop{\mathrm{div}} \V{\tilde v}=0 \end{split}\right\}&&\text{ on }\mathcal V\times (0,\infty), \\ &\ \V{\dot \omega}_1-b\left(\frac{\partial \V{\tilde v}}{\partial t}\right)+\T I^{-1}\cdot\left[\V \omega_1\times \T I\cdot (\V \omega_1-b(\V{\tilde v}))\right]=\V 0 &&\text{ in }(0,\infty), \\ &\ \V{\tilde v}=\V 0&&\text{ on }\mathcal C, \\ &\ \V{\tilde v}|_{t=0}=\V{\tilde v}_0,\qquad \V \omega_1(0)=\V \omega_{10}&& \end{aligned} \end{equation} Assume that for some initial data $(\V{\tilde v}_0, \V \omega_{10})$ satisfying the condition \begin{equation}\label{eq:compatibility} \V{\tilde v}_0=\left\{\begin{aligned} &\V v_0&&\text{ on }\mathscr{L}, \\ &\V \omega_0\times \V x&&\text{ on }\mathcal B_2, \end{aligned}\right.\qquad \qquad 
\text{with }\V v_0=\V \omega_0\times \V x\text{ on }\mathcal S, \end{equation} there exists a strong solution $(\V{\tilde v},\V \omega_1)$ to \eqref{eq:Motion_e} in the class $\mathbb{E}_{1,\upmu}(0,T)$ with $\upmu=1$, defined in \eqref{eq:regularity_c} below. Then there exist $\V v\in H^1_p(0,t_1;H_q(\mathscr{L}))\cap L^p(0,t_1;H^2_q(\mathscr{L}))$ and $\V \omega\in C^1((0,T];\ensuremath{\mathbb{R}}^3)$ such that $\V v=\V 0$ on $\mathcal C$, $\V v=\V \omega\times \V x$ on $\mathcal S$, and \[ \V{\tilde v}= \left\{\begin{aligned} &\V v &&\text{on }\mathscr{L}, \\ &\V \omega\times \V x &&\text{on }\mathcal B_2. \end{aligned}\right. \] Using a duality argument (generalizing that in Remark \ref{re:strong_2}), one finds that the triple $(\V v,\V \omega_1,\V \omega)$ is a strong solution to \eqref{eq:Motion}. Moreover, $(\V v,\V \omega_1,\V \omega)$ satisfies the initial conditions thanks to \eqref{eq:compatibility}. Therefore, the goal of this section is to investigate the existence and related properties of strong solutions to \eqref{eq:Motion_e}. In the following we set \begin{equation*} \mathcal B^{s}_{qp,\sigma}(\mathcal V):= \left\{ \begin{aligned} &\{\V u\in B^{s}_{qp}(\mathcal V)\cap \mathcal H_q(\mathcal V): \V u=\V 0\; \text{on}\; \mathcal C\}, &&s>1/q,\\ &B^{s}_{qp}(\mathcal V)\cap \mathcal H_q(\mathcal V), && s\in [0,1/q).\\ \end{aligned} \right. \end{equation*} In view of the previous observations, the following theorem is the main result of this section. \begin{theorem}\label{th:strong} Suppose \begin{equation} \label{assumptions-pq} p\in(1,\infty),\quad q\in (1,3),\quad 2/p +3/q\le 3, \end{equation} and let (the time-weight) $\upmu$ satisfy \begin{equation} \label{assumptions-mu} \upmu\in (1/p,1],\quad \upmu\ge \upmu_{\rm crit}=\frac{1}{p} + \frac{3}{2q}-\frac{1}{2}.
\end{equation} \begin{enumerate} \setlength\itemsep{1mm} \item[{\bf (a)}] Let $\V u_0=(\V{\tilde v}_0,\V \omega_{10})\in \mathcal B^{2\upmu-2/p}_{qp,\sigma}(\mathcal V)\times \ensuremath{\mathbb{R}}^3=X_{\gamma,\upmu}$ be given such that \eqref{eq:compatibility} is satisfied. Then there are positive constants $T=T(\V u_0)$ and $\eta=\eta(\V u_0)$ such that \eqref{eq:Motion_e} admits a unique solution $\V u(\cdot, \V u_0)=(\V{\tilde v},\V \omega_1)$ in \begin{equation}\label{eq:regularity_c} \mathbb E_{1,\upmu}(0,T)=H^1_{p,\upmu}((0,T); X_0) \cap L^p_{\upmu}((0,T); X_1). \end{equation} \item[{\bf(b)}] Suppose $p_j, q_j$, $\upmu_j$ satisfy \eqref{assumptions-pq}-\eqref{assumptions-mu} and, in addition, $p_1\leq p_2$, $q_1\leq q_2$ as well as \begin{equation}\label{mu-j} \upmu_1- \frac{1}{p_1}- \frac{3}{2q_1} \ge \upmu_2- \frac{1}{p_2}- \frac{3}{2q_2}. \end{equation} Then for each initial value $(\V{\tilde v}_0,\V \omega_{10})\in \mathcal B^{2\upmu_1 -2/p_1}_{q_1 p_1,\sigma}(\mathcal V)\times \ensuremath{\mathbb{R}}^3$ satisfying \eqref{eq:compatibility}, problem \eqref{eq:Motion_e} admits a unique solution $(\V{\tilde v},\V \omega_1)$ in the class \begin{equation*} \begin{split} &H^1_{p_1,\upmu_1}((0,T); \mathcal H_{q_1}(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L_{\upmu_1}^{p_1}((0,T); D(\T A_{q_1})\times\ensuremath{\mathbb{R}}^3) \\ &\cap H^1_{p_2,\upmu_2}((0,T); \mathcal H_{q_2}(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L_{\upmu_2}^{p_2}((0,T);D(\T A_{q_2})\times\ensuremath{\mathbb{R}}^3). \end{split} \end{equation*} \item[{\bf (c)}] Each solution exists on a maximal interval $[0,t_+)=[0,t_+(\V u_0))$, and enjoys the additional regularity property \begin{equation*} \V{\tilde v} \in C([0,t_+); \mathcal B^{2\upmu-2/p}_{qp,\sigma}(\mathcal V))\cap C((0,t_+);\mathcal B^{2-2/p}_{qp,\sigma}(\mathcal V)), \quad \V \omega_1\in C^1([0,t_+),\ensuremath{\mathbb{R}}^3). 
\end{equation*} \item[{\bf (d)}] The solution $\V u=(\V{\tilde v},\V \omega_1)$ exists globally if $\V u([0,t_+))\subset B^{2\upmu-2/p}_{qp}(\mathcal V)\times \ensuremath{\mathbb{R}}^3$ is relatively compact. \end{enumerate} \end{theorem} \begin{proof} The statements in (a), (c) and (d) follow from Theorem \ref{th:MRSP}. We will verify the hypotheses of Theorem \ref{th:MRSP} in the next three steps. \paragraph{Step 1. A semilinear evolution equation} Problem \eqref{eq:Motion_e} can be reformulated as a semilinear evolution equation for the variable $\V u=[\V{\tilde v},\V \omega_1]^T$: \begin{equation}\label{eq:evolution0} \T E\cdot \frac{\d \V u}{\d t}+\T A \V u=\T G(\V u,\V u),\quad\V u(0)=\V u_0, \end{equation} where \begin{equation}\label{eq:E} \begin{split} &\T E:\;\left[\begin{matrix}\V w \\ \V \xi\end{matrix}\right]\in X_0\mapsto\T E(\V w,\V \xi):=\left[\begin{matrix} \V w+\mathcal P_q\left(\V \xi\times \V x\right) \\ \V\xi-b(\V w)\end{matrix}\right]\in X_0, \\ &\T A:=\left[\begin{matrix} \T A_q & \T 0 \\ \T 0 & \T 0 \end{matrix}\right]:\; X_1\to X_0,\quad\text{$\T A_q$ defined in \eqref{eq:stokes_v}, } \\ &\T G(\V u,\V u):=\left[\begin{matrix} \mathcal P_q(-\V{\tilde v}\cdot \nabla \V{\tilde v}-2\V \omega_1\times \V{\tilde v}) \\ -\T I^{-1}\cdot[\V \omega_1\times \T I\cdot(\V \omega_1-b(\V{\tilde v}))] \end{matrix}\right], \end{split}\end{equation} and the functional $b(\cdot)$ has been introduced in \eqref{eq:a_varphi}. The operator $\T E$ is linear, bounded, invertible, and has a bounded inverse. The linearity and boundedness of $\T E$ are obvious from its definition. As for its invertibility, we observe that $\T E=\T 1+\T K$ with \[ \T K:=\left[\begin{matrix} \T 0 & \mathcal P_q(\cdot \times \V x) \\ -b(\cdot) & \T 0 \end{matrix}\right] \] a bounded operator with a finite dimensional range (see \eqref{eq:a_varphi}). A basis for the range of $\T K$ is given by $\{(\V e_i,\mathcal P_q(\V e_i\times \V x)):\; i=1,2,3\}$.
Thus, $\T K$ is a compact operator, and $\T E$ is a Fredholm operator of index zero (by \cite[Theorem 5.26, page 238]{Kato}). The invertibility of $\T E$ then follows once we show that its null space is trivial, $\mathsf{N}[\T E]=\{\V 0\}$. The latter immediately follows from Lemma \ref{le:kokr} (actually, from its proof). In fact, $\T E$ is one-to-one when $q=2$. In addition to the previous properties of $\T E$, we can also infer that $\T E^{-1}\equiv\T 1+\T C$, where $\T C:=-\T K\cdot\T E^{-1}:\; X_0\to (\mathcal R(\mathcal V)\cap \mathcal H_q(\mathcal V))\times\ensuremath{\mathbb{R}}^3$ is a bounded operator with a finite dimensional range, and hence compact. \noindent Let us consider the linear operator $\L:=\T E^{-1}\cdot \T A$ with domain $X_1$, and observe that \begin{equation}\label{eq:op_L} \L=(\T 1+\T C)\cdot \T A=\left[\begin{matrix} \T A_q & 0 \\ 0 & 0 \end{matrix}\right]+\T C\left[\begin{matrix} \T A_q & 0 \\ 0 & 0 \end{matrix}\right], \end{equation} and let us denote $\T N(\V u,\V u):=\T E^{-1}\T G(\V u,\V u)$. Then, equation \eqref{eq:evolution0} (and thus \eqref{eq:Motion_e}) can be equivalently rewritten as \begin{equation}\label{eq:evolution} \frac{\d \V u}{\d t}+\L \V u=\T N(\V u,\V u),\qquad \V u(0)=\V u_0. \end{equation} \paragraph{Step 2. Properties of the linear operator $\L$} \cite[Theorem 2]{Abels2009} implies that $\T A_q\in \mathcal{BIP}(\mathcal H_q(\mathcal V))$ with angle $\theta_{\T A_q}=0$ and $0\in\varrho(\T A_q)$. \noindent Consider the linear operator ${\L}_q:=\T E^{-1}_q\T A_q$ with domain $D({\L}_q)\equiv D(\T A_q)$, where, for every $\V u\in \mathcal H_q(\mathcal V)$, \begin{equation}\label{eq:E_q} \T E_q\V u:=\V u+\mathcal P_q\left(b(\V u)\times \V x\right)=\V u+\mathcal P_q\left(\V x\times \T I^{-1}\cdot\int_{\mathcal V}\tilde \rho \V x\times \V u\; \d V\right) \in \mathcal H_q(\mathcal V).
\end{equation} By an argument similar to the one in {\em Step 1}, it can be shown that \[ \L=\left[\begin{matrix} \L_q & \T 0 \\ \T 0 & \T 0\end{matrix}\right] \] and ${\L}_q=(\T 1+\T C_q)\T A_q$ with $\T C_q:\;\mathcal H_q(\mathcal V)\to \mathcal R(\mathcal V)\cap \mathcal H_q(\mathcal V)$ a bounded operator with a finite dimensional range, and hence compact. Since $\L_q$ is a compact perturbation of $\T A_q$, $\L_q$ has compact resolvent. In addition, its spectrum consists entirely of eigenvalues of finite algebraic multiplicity, and it is independent of $q$. From Lemma \ref{le:kokr}, it follows that $\L_q$ is positive definite on $\mathcal H_q(\mathcal V)$ when $q=2$. Thus, $\sigma(\L_q)\subset (0,\infty)$. In particular, $0\in \varrho(\L_q)$. \noindent The operator $\T B_q:= \T C_q\T A_q$ is bounded from $D(\T A_q)$ to $\mathcal R(\mathcal V)\cap \mathcal H_q(\mathcal V)$. In particular, there exists $s\in (0,1/q)$ such that \[ \T B_q:\; D(\T A_q) \to D(\T A^{s/2}_q)\quad\text{ is bounded,} \] where $D(\T A_q^{s/2})=[\mathcal H_q(\mathcal V), D(\T A_q)]_{s/2}$ (by \cite[Theorem 3.3.7]{PrSi}). \noindent Proposition \ref{prop:perturbation} and Remark \ref{rem:perturbation} imply that $\L_q\in \mathcal{BIP}(\mathcal H_q(\mathcal V))$, and hence $\L\in \mathcal{BIP}(\mathcal H_q(\mathcal V)\times \ensuremath{\mathbb{R}}^3)$ with angle $\theta_{\L_q}<\pi/2$. \paragraph{Step 3. The nonlinear term} For $\beta\in (0,1)$, let $X_\beta:=[X_0,X_1]_\beta$. Then we have $X_\beta={\mathcal H^{2\beta}_{q}(\mathcal V)}\times \ensuremath{\mathbb{R}}^3$, where $\mathcal H^{2\beta}_{q}(\mathcal V)$ is defined by \begin{equation*} \mathcal H^{2\beta}_{q}(\mathcal V):= \left\{ \begin{aligned} &\{\V u\in H^{2\beta}_{q}(\mathcal V)\cap\mathcal H_q(\mathcal V): \V u=\V 0\; \text{on}\; \mathcal C\}, &&2\beta>1/q,\\ &H^{2\beta}_{q}(\mathcal V)\cap \mathcal H_q(\mathcal V), && 2\beta\in [0,1/q).\\ \end{aligned} \right.
\end{equation*} The fact that $\T N:=\T E^{-1}\T G:\; X_\beta\times X_\beta\to X_0$ is bounded for $\beta=\frac{1}{4}\big(1+\frac{3}{q}\big)$ with $q\in(1,3)$ follows from standard estimates (see e.g. \cite[Section 3]{PrWi} and \cite[proof of Theorem 3.4]{MaPrSi}). For this choice of $\beta$, \eqref{assumptions-pq} implies that $\upmu_{\rm crit}=\frac{1}{p}+\frac{3}{2q}-\frac{1}{2}\le 1$, since this inequality is equivalent to $2/p+3/q\le 3$. \newline It remains to prove part (b). We note that, under the stated hypotheses, \[ {B}^{2\upmu_1 -2/p_1}_{q_1 p_1,\sigma}(\mathcal V)\times \ensuremath{\mathbb{R}}^3\hookrightarrow {B}^{2\upmu_2 -2/p_2}_{q_2 p_2,\sigma}(\mathcal V)\times \ensuremath{\mathbb{R}}^3 \] and for each fixed $j=1,2$, solutions $\V u_j\equiv(\tilde{\V v_j},\V \omega_{1,j})$ to \eqref{eq:evolution} in the class \[ \mathbb{E}_{1,\upmu_j}(0,T):=H^1_{p_j,\upmu_j}((0,T); \mathcal H_{q_j}(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L^{p_j}_{\upmu_j}((0,T); D(\T A_{q_j})\times\ensuremath{\mathbb{R}}^3) \] are fixed points of the strict contraction \[ {\sf T}:\; \mathbb{M}_j\to\mathbb{M}_j,\qquad {\sf T}\V u:=e^{-t\L}\V u_0+e^{-t\L}*\T N(\V u,\V u), \] where $\mathbb{M}_j$ is a closed subset of $\mathbb{E}_{1,\upmu_j}(0,T)$. Since ${\sf T}:\; \mathbb{M}_1\cap\mathbb{M}_2\to\mathbb{M}_1\cap\mathbb{M}_2$ is also a strict contraction, it admits a unique fixed point, which is the unique solution $(\V{\tilde v},\V \omega_1)$ in the class \begin{equation*} \begin{split} &H^1_{p_1,\upmu_1}((0,T); \mathcal H_{q_1}(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L_{\upmu_1}^{p_1}((0,T); D(\T A_{q_1})\times\ensuremath{\mathbb{R}}^3) \\ &\cap H^1_{p_2,\upmu_2}((0,T); \mathcal H_{q_2}(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L_{\upmu_2}^{p_2}((0,T);D(\T A_{q_2})\times\ensuremath{\mathbb{R}}^3).
\end{split} \end{equation*} \end{proof} \begin{remark} \begin{itemize} \item[(a)] In the case $p_1=q_1=2$, we obtain $\upmu_{\rm crit}=3/4$ and we find the largest space of initial data $X_{\rm crit}$, \begin{equation} \label{p=q=2} X_{\rm crit}:=(\mathcal H_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3, \mathcal H^2_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3)_{1/4,2} \subset \mathcal {H}^{1/2}_2(\mathcal V)\times \ensuremath{\mathbb{R}}^3, \end{equation} corresponding to which there exists a unique solution to \eqref{eq:evolution} in the class \begin{equation*} \begin{split} &H^1_{2,3/4}((0,T); \mathcal H_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L^{2}_{3/4}((0,T); \mathcal H^2_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3) \\ &\cap H^1_{p,\upmu}((0,T); \mathcal H_q(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L^{p}_{\upmu}((0,T); \mathcal H^2_q(\mathcal V)\times\ensuremath{\mathbb{R}}^3), \end{split} \end{equation*} for any $p\ge 2, q\in [2,3)$, with $\upmu=1/p + 3/(2q) -1/2$. In particular, we can conclude that $\V{\tilde v}\in C((0,t_+); B^{2-2/p}_{qp}(\mathcal V))$ for any $p\ge 2, q\in [2,3)$. \item[(b)] Theorem~\ref{th:strong}(b) asserts that problem \eqref{eq:evolution} admits for each initial value $$(\V{\tilde v}_0,\V{\omega}_{10})\in \mathcal H^1_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3$$ a unique solution in the class \begin{equation*} \begin{split} &W^{1,2}((0,T); \mathcal H_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L^2((0,T); \mathcal H^2_2(\mathcal V)\times\ensuremath{\mathbb{R}}^3) \\ &\cap H^1_{p,\upmu}((0,T); \mathcal H_{q}(\mathcal V)\times\ensuremath{\mathbb{R}}^3)\cap L^{p}_{\upmu}((0,T);\mathcal H^2_{q}(\mathcal V)\times\ensuremath{\mathbb{R}}^3), \end{split} \end{equation*} for any $p\ge 2, q\in [2,3)$, with $\upmu=1/p +3/(2q)-1/4$. In particular, we can conclude that $\V{\tilde v}\in C((0,t_+); B^{2-2/p}_{qp}(\mathcal V))$ for any $p\ge 2, q\in [2,3)$. \end{itemize} \end{remark}
\subsection{Methods} \subsubsection{Sample Fabrication} The plasmonic nanoantennas are fabricated using a top-down approach, based on the method outlined by \citet{horrer2013}. Au(80 nm) films were deposited using electron-beam evaporation onto glass substrates. Later, Al$_{2}$O$_{3}$(3.5 nm)/Tb$_{18}$Co$_{82}$(15 nm)/Al$_{2}$O$_{3}$(2 nm) films were sputter deposited onto these films, with the complete structure being Au(80 nm)/Al$_{2}$O$_{3}$(3.5 nm)/Tb$_{18}$Co$_{82}$(15 nm)/Al$_{2}$O$_{3}$(2 nm). The Tb$_{18}$Co$_{82}$ layer was deposited through co-sputtering. The additional thin Al$_{2}$O$_{3}$ layers were used as capping and isolating layers for the Tb$_{18}$Co$_{82}$. Here, the composition of the film can be varied by adjusting the relative power of the Co and Tb magnetrons. Calibration films were made with different power ratios on the two magnetrons, and their compositions were verified using Rutherford backscattering. Electron-beam lithography was used to define disk-shaped apertures in a MicroChem 496PMMA A4 electron-beam resist. Electron-beam evaporation was used to deposit an Al mask through the resist, followed by removal of the PMMA resist with acetone. The resulting structure was then milled at a 5~deg incidence angle with sample rotation, removing all material unprotected by the Al mask. Any remaining Al mask was then removed with the photoresist developer Microposit 351, which in this case was used as a selective etchant for the Al. A conical profile is induced by the small lateral component of the milling, which depends to some extent on the small milling incidence angle~\cite{fleischer2010}. In our samples, this results in a constant slope profile of approximately 62~deg for all nanoantenna arrays. Therefore, by varying the diameter of the Al mask, the resulting structures can be tuned from truncated to conical profiles. \subsubsection{Magneto-optical characterisation} The experimental values of $\theta_{F}$, $\eta_{F}$ and $\Theta_{F}$ were measured using the photoelastic modulator methodology described in the Supporting Information, with an applied field of 450~mT along the light propagation direction.
A quadratic polynomial was fitted to the raw $\theta_{F}$ data in order to subtract the background contribution arising from the Faraday rotation of the fused-silica substrate, which is strongest at short wavelengths and decreases for longer wavelengths~\cite{qiu1998}. For the measurement of the differential absorption of circularly polarised light, a time-varying light polarisation, alternating between left and right circularly polarised states at 50~kHz, was generated using a photoelastic modulator (PEM) and directed at the sample at normal incidence. This is achieved by passing linearly polarised light orientated at 45$^{\circ}$ to the fast axis of the PEM, with the PEM retardation set to 0.25 wavelengths. Any mechanism in the TNC array which results in a difference in absorption for opposite helicities (including magnetic circular dichroism) will contribute to an oscillating light intensity at the detector at the photoelastic-modulator frequency. It is common to express this measurement as the ratio $C_{\omega}^{q}/C_{\circ}^{q}$, where $C_{\omega}^{q}$ is the amplitude of the $\omega$ = 50~kHz signal for a fixed polar magnetization $q = \pm M_{z}$, and $C_{\circ}^{q}$ is the DC signal intensity, which contains the helicity-independent absorption contribution. Prior to the measurement, a saturating magnetic field was used to initialise the magnetization along the light propagation direction ($q = +M_{z}$) and then removed. For the subsequent measurement, the magnetization was saturated in the opposite polar direction ($q = -M_{z}$) and the measurement repeated. It is important to note that the spectra in Figure~\ref{fig4}a contain additional \emph{spurious} CD contributions, which arise from leaking-in of the large linear dichroism signal as a result of the rectangular array in which the nanostructures are arranged.
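The lock-in extraction behind the $C_{\omega}^{q}/C_{\circ}^{q}$ ratio can be sketched numerically. The snippet below is a minimal illustration and not the actual acquisition code: it synthesises an idealised detector signal with an assumed DC level and an assumed 50~kHz amplitude, then recovers their ratio by projection onto the reference.

```python
import numpy as np

# Idealised detector signal behind a PEM-based CD measurement:
# a DC level C0 (helicity-independent transmission) plus a component
# oscillating at the PEM frequency, whose amplitude C_omega encodes the
# differential absorption of left/right circularly polarised light.
f_pem = 50e3                      # PEM modulation frequency (Hz)
t = np.arange(0, 2e-3, 1e-8)      # 2 ms of signal, 10 ns sampling
C0, C_omega = 1.0, 3.2e-3         # illustrative amplitudes

intensity = C0 + C_omega * np.sin(2 * np.pi * f_pem * t)

# Lock-in style demodulation: project onto the reference at f_pem.
ref = np.sin(2 * np.pi * f_pem * t)
C_omega_meas = 2 * np.mean(intensity * ref)   # amplitude of the 50 kHz term
C0_meas = np.mean(intensity)                  # DC term

ratio = C_omega_meas / C0_meas                # the reported C_omega / C0
```

Averaging over an integer number of modulation periods makes the cross terms vanish, so the projection returns the 50~kHz amplitude directly.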
By observing the difference between the antiparallel magnetization states, namely $\Delta[C_{\omega}/C_{\circ}]$, these effects, which are independent of the magnetization, can be subtracted out, yielding the available magnetic modulation. We define this magnetic modulation of the helicity-dependent transmission as $(C_{\omega}^{-M_{z}}-C_{\omega}^{+M_{z}}) / (C_{\circ}^{-M_{z}}+C_{\circ}^{+M_{z}})$, and this quantity is plotted in Figure~\ref{fig4}b as a function of both $\alpha_{i}$ and wavelength. \begin{acknowledgement} The authors would like to express their gratitude towards Prof. Bengt Lindgren of Uppsala University, Sweden, for fruitful discussions and support with the ellipsometric characterization of TbCo thin film materials. The excellent support and infrastructure of the MyFab facility at the \AA ngstr\"om Laboratory of Uppsala University is also highly appreciated. The authors acknowledge support from the Knut and Alice Wallenberg Foundation project ``{\it Harnessing light and spins through plasmons at the nanoscale}'' (Project No. 2015.0060), the Swedish Research Council (Project No. 2019-03581), the Swedish Foundation for International Cooperation in Research and Higher Education (Project No. KO2016-6889), and the Swedish National Infrastructure for Computing (SNIC). This work is part of a project which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no.\ 737093, ``{\textsc{femtoterabyte}}''. \end{acknowledgement} \newpage \begin{figure*}[t] \centering \includegraphics[angle = -0,width = \textwidth]{figures/fig4.pdf} \caption[]{a) Spectral dependence of the $C_{\omega}^{q}/C_{\circ}$ signals, where $q$ = $+M$ or $-M$ for the solid and dashed curves, respectively. $C_{\omega}^{q}$ is related to the total circular dichroism for a particular magnetization state, containing both magnetic and non-magnetic contributions. b) The amplitude of the magnetic modulation of the helicity-dependent transmission as a function of both wavelength and $\alpha_i$, which relates to the difference between the solid and dashed curves in Figure~\ref{fig4}a. The dashed white line indicates the expected location of the Rayleigh anomaly calculated from Eq.\ (3).} \label{fig4} \end{figure*} \section{Introduction} Nanoscale magnetophotonics merges magnetism with nanophotonics \cite{maccaferri2020}, seamlessly combining magneto-optical (MO) effects with surface plasmons. It is thereby capable of delivering ultra-high-performance biological and chemical sensors \cite{maccaferri2015a, zubritskaya2015}, of providing active tunability in nano-optics by external magnetic fields \cite{maccaferri2020, temnov2010, zhang2017, torrado2010, belotelov2007, belotelov2011, gonzalez-diaz2007, rowan-robinson2019, rollinger2016, lodewijks2014}, and of setting a platform for ultrafast opto-magnetic and spintronic \cite{liu2015a} devices on the nanoscale. Pure ferromagnetic plasmonic systems were earlier considered unfeasible for these purposes due to the high ohmic losses associated with transition-metal ferromagnets. To a large extent, however, these losses can be overcome through nanopatterning \cite{ctistis2009, papaioannou2010}, materials engineering, and the fabrication of hybrid noble-metal--ferromagnet nanostructures \cite{zubritskaya2018, kataja2016, martin-becerra2010, pourjamal2018, banthi2012}. The enhancement of various MO effects in these systems is typically achieved through near-field light concentration at the nanoscale, boosting light-magnetism interactions that relate to the MO Voigt parameter of the ferromagnet \cite{moncada-villa2014, gonzalez-diaz2008}. Importantly, by exploiting magnetic anisotropy control, the magnetization can be stabilized in a desired direction and MO effects can be recorded at zero external magnetic field. Linewidth engineering \cite{kataja2016, kataja2015, maccaferri2016}, wherein high-Q-factor resonances are achieved, can furthermore be employed in ordered arrays of magnetoplasmonic nanoantennas with surface lattice resonances.
The use of rare-earth--transition-metal (RE-TM) alloys is of paramount interest for future nanoscale magnetophotonic and magnetoplasmonic systems for several key reasons. Firstly, they are known to exhibit very large MO effects \cite{buschow1989, atkinson1988}, potentially permitting very high real-time active tunability of light polarization. Secondly, they can exhibit strong perpendicular magnetic anisotropy, yet with an amorphous texture \cite{ciuciulkaite2020, yoshino1984, hebler2016, harris1992, frisk2016a}. For instance, carefully engineered Co/Pt multilayered nanodots, having large interfacial spin-orbit coupling with perpendicular magnetic anisotropy, show tenfold enhancements in MO activity, demonstrating the great potential of out-of-plane magnetic anisotropy materials for magneto\-plasmonics \cite{freire-fernandez2020}. The amorphous texture of RE-TM alloys greatly simplifies the otherwise stringent requirements on material microstructure for obtaining these highly desired magnetic properties. As such, they can be grown on noble metals like Au with minimal residual stresses and with highly smooth interfaces, thereby maintaining much of their original magnetic properties even after patterning \cite{ciuciulkaite2020}. Importantly, with perpendicular magnetic anisotropy the remanent magnetization state of the magnetic nanostructures can be designed to be parallel to the light propagation direction at normal light incidence, greatly simplifying potential practical applications of magnetoplasmonic crystals. This allows one to explore their MO functionality (such as a tunable Faraday effect) directly, i.e., without the need for external magnetic fields to stabilize the magnetization along the out-of-plane axis.
Thirdly, ferrimagnetic alloys such as Tb$_{18}$Co$_{82}$ have recently attracted extensive interest due to the demonstration of enhanced spin-orbit torques \cite{finley2016, ueda2016, alebrand2012} and all-optical switching \cite{alebrand2012, mangin2014,AvilesFelix2020,ciuciulkaite2020}, allowing for zero-field magnetic switching, on picosecond timescales, with the use of pulsed lasers. Demonstrating the compatibility of these materials with nanoantennas is thus essential for the development of nanoscale (i.e., sub-diffraction) all-optical switching technologies \cite{liu2015a}. Here we devise a magnetophotonic crystal composed of nanocone Au plasmonic nanoantenna arrays incorporating an amorphous RE-TM ferrimagnetic alloy, Tb$_{18}$Co$_{82}$, with perpendicular magnetic anisotropy \cite{ciuciulkaite2020}. We show that this hybrid Au/Tb$_{18}$Co$_{82}$ system provides high-Q MO resonances, overcoming the losses associated with ferrimagnetic alloys. By Maxwell-theory modelling we show that this is achieved through the resonant collective excitation of surface lattice modes that exhibit a particularly strong angular dispersion. This is a result of the interference of a Rayleigh anomaly with the individual nanoantennas' plasmons, giving rise to surface lattice resonances with a characteristic Fano-type asymmetric lineshape in both the optical and MO spectra. We demonstrate an exceptionally strong tunability of the spectral position of such resonances by varying the angle of incidence (incidence direction) of the incoming light, exemplifying the potential of magnetophotonic crystals for high-resolution mechanical tilt-angle sensors and, more broadly, for actively controlled optical systems \cite{haghtalab2020,shi2020}. \section{Results and Discussion} Nanocone antennas were previously shown to exhibit a very strong field enhancement \cite{horrer2013}, with the electromagnetic field concentrated at the tip \cite{horrer2013, schafer2013}.
We build large rectangular-lattice arrays of Au/Tb$_{18}$Co$_{82}$ truncated nanocone antennas (Fig.\ \ref{fig1}a) \cite{ciuciulkaite2020} with two selected base diameters (179 $\pm$ 5~nm and 227 $\pm$ 4~nm; see SEM insets in Fig.\ \ref{fig1}f and h, respectively). The light incidence angle ($\alpha_{i}$) is varied with respect to the lattice plane, directed along one or the other of the array periodicity axes (Fig.\ \ref{fig1}b). We first use finite-element Maxwell-theory simulations (COMSOL Multiphysics, see Supporting Information) to pinpoint the emerging resonances' linewidth narrowing and high incidence-angle sensitivity. The magnetophotonic crystal is built of Au(80 nm)/Tb$_{18}$Co$_{82}$(15 nm) truncated nanocones (base diameter $D_{B}$ = 179~nm), arranged in a rectangular array with 340~nm $\times$ 425~nm periodicity (Fig.\ \ref{fig1}c). The light incidence direction (using the optical convention) defines a scattering plane which is parallel to one (340~nm) or the other (425~nm) of the array periodicity axes, with azimuthal angles $\varphi_{i}$ = 0 or $\varphi_{i}$ = 90$^{\circ}$, respectively. \newpage \begin{figure}[t!] \centering \includegraphics[width = 0.7\textwidth]{figures/main-text-figs/fig1.jpg} \caption{ Magnetophotonic crystals composed of arrays of truncated nanocone hybrid antennas, with tunable optical transmission response. (a) Schematic of a single Au-TbCo nanoantenna featuring PMA (left) and scanning electron micrograph view of a magnetophotonic crystal (right). (b) Magnetophotonic crystal illumination with the resulting Faraday rotation ($\theta_{F}$) and ellipticity ($\eta_{F}$) of the transmitted light, and the Rayleigh anomaly associated with the passing-off of the diffraction order.
(c) Magnetophotonic crystal illumination with two azimuthal orientations ($\varphi_{i}$ = 0 and 90$^\circ$) with respect to the incident light polarisation ($E_i$) and scattering plane, with $p_E$ denoting the orientation of the electric dipolar plasmon in the nanoantennas. The reciprocal lattice vectors $[1, 0]$ and $[0, 1]$ are shown to illustrate the 90$^{\circ}$ rotation of the reciprocal lattice vectors with respect to the real-space lattice. (d, e) Calculated transmission spectra for incidence angles $\alpha_{i}$ between 0 and 20 degrees, for the $\varphi_i$ = 0 (d) and the $\varphi_i$ = 90$^\circ$ (e) configurations, respectively. (f, g) Measured transmission spectra for incidence angles $\alpha_{i}$ = 0 - 20 degrees for the magnetophotonic crystal built on $D_B$ = 179~nm nanoantennas, for the $\varphi_{i}$ = 0 (f; inset -- SEM of nanoantennas in this magnetophotonic crystal) and the $\varphi_{i}$ = 90$^\circ$ (g) configurations. (h, i) Same as (f, g) but for the magnetophotonic crystal with $D_B$ = 227~nm nanoantennas (inset in (h) -- nanoantennas SEM). } \label{fig1} \end{figure} Surface lattice resonances are the result of the coupling between a broad lossy resonance, in this case the localised plasmon resonances of individual nanoantennas, and diffracted waves in the plane of the nanoantenna array (a detailed description is provided in the Supporting Information). This condition is generally met close to a Rayleigh anomaly: for a given $\alpha_{i}$ and lattice periodicity, a Rayleigh anomaly occurs where a diffracted wave is directed parallel to the grating \cite{rayleigh1907}. The Rayleigh anomaly represents the passing-off of a diffraction order through a laterally excited beam. There can exist a large number of these diffraction orders, which are labelled by two integers $n$ and $m$.
The allowed waves are obtained by imposing the condition that the component of the light wave-vector normal to the lattice surface is real, through the expression \begin{equation} k_{\perp} = \sqrt{k_s^2 - \left(\bm{k_{\parallel}} + m \bm{G_1} + n \bm{G_2} \right)^2} \in \mathbb{R}. \label{lattice diffraction modes} \end{equation} In the above formula, $k_s = 2\pi n_{sub} / \lambda$ is the light wave-vector in the substrate, where $n_{sub}$ is the refractive index of the fused-silica substrate ($n_{sub}$ = 1.45) and $\lambda$ the light wavelength; $\bm{k_{\parallel}}=k_0\left[ \sin(\alpha_i)\cos(\varphi_i)\,\bm{u_x} + \sin(\alpha_i)\sin(\varphi_i)\,\bm{u_y} \right]$ is the component of the incident wave-vector (in air/vacuum) parallel to the lattice surface; $k_0 = 2\pi/\lambda$ is the light wave-vector in air; and $\bm{G_1} = (2\pi/a)\,\bm{u_x}$, $\bm{G_2} = (2\pi/b)\,\bm{u_y}$ are the reciprocal lattice vectors, with $\bm{u_x}$, $\bm{u_y}$ the reciprocal lattice unit vectors and $a$ = 340~nm, $b$ = 425~nm the lattice parameters. The number of diffracted waves depends on the lattice parameters, the angle of incidence, the refractive index of the substrate and the light wavelength. For wavelengths greater than 600~nm, Equation \eqref{lattice diffraction modes} indicates that only the diffracted waves ($n=0$; $m=-1$) for $\varphi_i=0$ and ($n=-1$; $m=0$) for $\varphi_i= 90^{\circ}$ can be obtained by varying the incidence angle between 0 and 20$^{\circ}$ (see Supporting Information). We use reciprocal-vector notation, such that the Rayleigh anomaly occurs at wavelengths $\lambda_{R}^{[n,m]}$ with wave-vector orientated along the reciprocal lattice vectors $[n, m]$. Fig.~\ref{fig1}c demonstrates how the reciprocal lattice vectors are orientated with respect to the real-space lattice.
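Equation (1) lends itself to a quick numerical check. The sketch below is illustrative Python (not part of the original work): it tests whether a given substrate diffraction order $(n, m)$ is propagating, assuming the values quoted in the text ($a = 340$~nm, $b = 425$~nm, $n_{sub} = 1.45$).

```python
import numpy as np

# Numerical check of the diffraction condition (Eq. 1): a substrate order
# (n, m) propagates when k_perp is real, i.e. k_s^2 >= |k_par + m*G1 + n*G2|^2.
# Lattice and substrate values are taken from the text; all lengths in nm.
a, b = 340.0, 425.0        # lattice periods along x and y
n_sub = 1.45               # fused-silica substrate refractive index

def propagates(lam, alpha_deg, phi_deg, n, m):
    """True if diffraction order (n, m) is a propagating substrate wave."""
    k0 = 2 * np.pi / lam                  # wave-vector in air
    ks = 2 * np.pi * n_sub / lam          # wave-vector in the substrate
    al, ph = np.radians(alpha_deg), np.radians(phi_deg)
    kx = k0 * np.sin(al) * np.cos(ph) + m * 2 * np.pi / a
    ky = k0 * np.sin(al) * np.sin(ph) + n * 2 * np.pi / b
    return ks**2 >= kx**2 + ky**2
```

For example, at $\lambda = 650$~nm and $\alpha_i = 10^{\circ}$ the $[-1, 0]$ order propagates only in the $\varphi_i = 90^{\circ}$ geometry, consistent with the single tunable Rayleigh anomaly discussed above.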
The analytical expressions for the two allowed substrate waves ($[0,-1]$, $[-1,0]$) from Equation \eqref{lattice diffraction modes} are given by \begin{equation} \lambda_{R}^{[0,-1]} = a \left[ n_{sub} + n_{air} \sin \left( \alpha_i \right)\right]~~ \textrm{for} ~\varphi_i = 0, \end{equation} \begin{equation} \lambda_{R}^{[-1,0]} = b \left[ n_{sub} + n_{air} \sin \left( \alpha_i \right)\right]~~ \textrm{for} ~\varphi_i = 90^{\circ}, \end{equation} \noindent where $n_{air} = 1$ is the refractive index of air. We first calculate the spectral transmission through the array for $p$-polarised light (i.e., with the incident electric field in the scattering plane) (Fig.\ \ref{fig1}d, e). Individual nanoantenna dipole-type plasmons are excited in the respective scattering planes at 690~nm at normal incidence ($\alpha_{i}$ = 0). For the $\varphi_{i}$ = 0 configuration (scattering plane along the 340~nm array periodicity, Fig.\ \ref{fig1}d) the Rayleigh anomalies from Eq.\ (2) are at $\lambda_{R}^{[0,-1]}$ = 493~nm, 552~nm, 581~nm and 609~nm for $\alpha_{i}$ = 0, 10, 15, and 20 degrees, respectively, and therefore do not spectrally overlap with the nanoantennas' individual plasmons. For $\varphi_{i}$ = 90$^{\circ}$ (scattering plane along the 425~nm array periodicity, Fig.\ \ref{fig1}e), Eq.\ (3) gives $\lambda_{R}^{[-1,0]}$ = 616~nm, 690~nm, 726~nm and 762~nm, strongly overlapping with the nanoantennas' plasmon and resulting in a very substantial tuning of the spectrally abrupt transmission features by changing $\alpha_{i}$ (see Fig.\ \ref{fig1}e). In the Fano-type resonance description \cite{fano1961, lukyanchuk2010}, the nanoantennas' plasmon represents a continuum of states, whereas the Rayleigh anomaly is a narrow-linewidth diffracted wave which, upon interfering with the continuum, results in the characteristic asymmetric lineshape of the surface lattice resonances.
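The Rayleigh-anomaly wavelengths quoted above follow directly from Eqs.\ (2) and (3); a minimal check, using only the values given in the text:

```python
import math

# Rayleigh-anomaly (passing-off) wavelengths from Eqs. (2) and (3):
# lambda_R = period * (n_sub + n_air * sin(alpha_i)), with n_sub = 1.45.
n_sub, n_air = 1.45, 1.0

def lambda_rayleigh(period_nm, alpha_deg):
    return period_nm * (n_sub + n_air * math.sin(math.radians(alpha_deg)))

angles = [0, 10, 15, 20]
lam_340 = [round(lambda_rayleigh(340, a)) for a in angles]  # phi_i = 0
lam_425 = [round(lambda_rayleigh(425, a)) for a in angles]  # phi_i = 90 deg
# lam_340 -> [493, 552, 581, 609] nm; lam_425 -> [616, 690, 726, 762] nm
```

The rounded values reproduce the wavelengths listed in the text for both azimuthal configurations.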
A similar behaviour has been seen previously with magnetoplasmonic Ni nanoantenna arrays \cite{maccaferri2016, kataja2015}, where the overlap between $\lambda_{R}^{[n,m]}$ and the nanoantenna plasmon was tuned by varying the lattice periodicity of the magnetoplasmonic crystal. However, a much simpler way of tuning the spectral position of the surface lattice resonance is offered by the angular dispersion of $\lambda_{R}^{[n,m]}$. This tuning of the spectral position of the surface lattice resonance opens up applications as mechanical tilt-angle transducers/sensors, and in contrast with the previously observed transmission/reflectance angular dependence in pure plasmonic arrays \cite{vecchi2009}, this magnetoplasmonic crystal allows one to fully explore angular MO tunability. The dipolar radiation field is strongest transverse to the dipolar plasmon oscillation given by $p_{E}$ (Fig.\ \ref{fig1}c). In our simulations we used $p$-polarised light, and hence the electric dipole excitation within the individual nanocone antennas is orientated within the scattering plane and parallel to the diffraction anomaly. This dipole cannot radiate along its oscillation direction, hence there must exist an additional mechanism for light to be scattered along the other periodicity direction for the excitation of the Rayleigh anomaly. We show this to be the result of an out-of-plane component of the electric dipole due to the illumination at oblique incidence (see Supporting Information), which radiates in all directions within the plane of the lattice \cite{huttunen2016}, providing the excitation for all $\lambda_{R}^{[n,m]}$, e.g.\ the [-1,~0], [0,~-1], [-1,~-1] waves for $p$-polarised light. The measured transmission spectra are shown in Fig.\ \ref{fig1}f-i. In agreement with the electromagnetic simulations above, for the $\varphi_{i}$ = 0 configuration (Fig.\ \ref{fig1}f, h) the transmission spectra show very little dependence on $\alpha_{i}$.
The nanoantenna plasmon is, however, red-shifted and spectrally broadened compared to the simulations, which is likely a result of the thin Al$_{2}$O$_{3}$ isolation layer (see Methods), oxidation of the exposed Tb$_{18}$Co$_{82}$ side-walls on the fabricated nanocones, and the size and shape distribution of the nanoantenna ensemble. There is a spectral feature between 500 and 600~nm (Fig.\ \ref{fig1}f, h) that migrates to longer wavelengths as $\alpha_{i}$ increases and is most likely due to $\lambda_{R}^{[0,-1]}$, since it occurs at the same spectral positions for both the $D_{B}$ = 179~nm (Fig.\ \ref{fig1}f) and 227~nm (Fig.\ \ref{fig1}h) nanoantennas, suggesting that its origin relates to the lattice and not the individual nanoantenna plasmon resonance. When rotated into the $\varphi_{i}$ = 90$^{\circ}$ configuration (Fig.\ \ref{fig1}g and i), strong variations in the transmission spectra are observed, in excellent agreement with the simulations in both spectral position and lineshape, albeit with reduced amplitude. For both the $D_B$ = 179~nm (Fig.\ \ref{fig1}g) and 227~nm (Fig.\ \ref{fig1}i) nanoantennas, the $\alpha_{i}$ = 0 spectra show a small blue shift of the plasmon for the $\varphi_{i}$ = 90$^{\circ}$ configuration relative to the $\varphi_{i}$ = 0 configuration. As shown in the inset scanning electron microscopy images, the nanocones are not perfectly circular, and this discrepancy is likely a result of this asymmetry. Markedly, the broader spectral distribution of the $D_B$ = 227~nm nanocone antennas allows for a larger tuning bandwidth, such that there exists a larger range of $\alpha_{i}$ for which $\lambda_{R}^{[-1, 0]}$ overlaps with the nanoantenna plasmon. While we readily achieve high incidence-direction tunability of the optical transmission with the designed magnetophotonic crystals, resonances in MO spectra can yield much larger Q-factors \cite{qin2017}. Maccaferri et~al.
\cite{maccaferri2013} showed that an out-of-plane magnetization in the presence of the electric dipolar plasmon gives rise to an in-plane MO dipolar plasmon ($p_{MO}$), induced in the ferromagnetic layer and orientated orthogonal to $p_{E}$. The magnitude of $p_{MO}$ is proportional to the magnitude of $p_{E}$. Given that a material's optical constants are typically much larger than its MO constants, even lossy, broad localised plasmon resonances can give rise to large enhancements in MO activity compared to ferrimagnets without plasmonic integration. This transverse oscillation is induced via spin-orbit coupling, generating an oscillation of conduction electrons in the plane but orthogonal to $p_{E}$. With the use of $p$-polarised light, the pure optical dipole is orientated along $p_{E}$ and the transverse MO dipole is aligned along $p_{MO}$ (Fig.\ \ref{fig2}a). Hence, the use of $p$-polarised light results in an MO dipole induced in the Tb$_{18}$Co$_{82}$ layer which radiates strongly in the scattering plane, and which is therefore expected to be most sensitive to the angular dispersion of the surface lattice resonances as the crystal is tilted by $\alpha_{i}$. In Fig.\ \ref{fig2}b-j the calculated and experimental Faraday rotation ($\theta_F$), Faraday ellipticity ($\eta_F$) and Faraday angle ($\Theta_F = \sqrt{\theta_{F}^{2} + \eta_{F}^{2}}$) are presented. The calculated Faraday effect, using the experimentally measured permittivity of a Tb$_{18}$Co$_{82}$ thin film, is shown in Fig.\ \ref{fig2}b, e, h for the $D_{B}$ = 179~nm nanocone antenna array (see Supporting Information for details).
The $\varphi_{i}$ = 0 configuration shows no angular dependence of the Faraday effect (see Supporting Information), and by fitting a Lorentzian to the $\alpha_{i}$ = 0 transmission and $\Theta_{F}$ spectra for the $\varphi_{i}$ = 0 configuration we estimate that the MO resonance exhibits a two-fold reduction in linewidth relative to the pure optical resonance. While for $\varphi_{i}$ = 0 (no overlap of the nanoantennas' plasmon with the Rayleigh anomaly) a reasonable spectral-feature narrowing is achieved without angular dependence, in the $\varphi_{i}$ = 90$^\circ$ configuration the experimental Faraday spectra show strong angular dependence and suggest that sizeable Faraday angles of up to 0.3$^\circ$ are readily available. The simulated spectra indicate that extremely sharp features exist that coincide with $\lambda_{R}^{[-1,0]}$ (Fig.\ \ref{fig2}b, e, h). \begin{figure}[t!] \centering \includegraphics[width = 0.85\textwidth]{figures/main-text-figs/fig2.jpg} \caption{Angle-of-incidence spectral dependence of the Faraday effect in the magnetophotonic crystals. (a) Illumination configuration as in Fig.\ \ref{fig1}c, with the added MO plasmon dipole of the nanoantenna ($p_{MO}$, green). (b, e, h) Calculated spectral $\theta_{F}$ (b), $\eta_{F}$ (e) and $\Theta_{F}$ (h) for incidence angles $\alpha_{i}$ between 0 and 20 degrees. Measured $\theta_{F}$ (c, d), $\eta_{F}$ (f, g) and $\Theta_{F}$ (i, j) for the $D_{B}$ = 179~nm and 227~nm nanoantenna arrays, respectively. A quadratic polynomial has been fitted to the $\theta_{F}$ measurements and subtracted to remove the background contribution which arises from the Faraday rotation of the fused-silica substrate, which is strongest for short wavelengths and approaches zero with increasing wavelength.
} \label{fig2} \end{figure} The Rayleigh anomaly is strongest through the substrate, and the observation of strong diffractive effects in the Faraday spectra indicates that the MO dipole induced in the Tb$_{18}$Co$_{82}$ layer is transferred to the rest of the nanoantenna \cite{pourjamal2018}. The experimental MO spectra measured for the nanoantennas with $D_{B}$ = 179~nm (Fig.\ \ref{fig2}c, f, i) compare well to the calculations. The excellent match of the measured spectra with the simulations demonstrates the suitability of combining finite-element methods with experimentally measured thin-film permittivities for the computational design of magnetophotonic devices. For the nanoantennas with $D_{B}$ = 227~nm (Fig.\ \ref{fig2}d, g, j) there is a stronger Faraday effect, but with broader spectral features, demonstrating the trade-off between adding more magnetic material to the nanoantenna and maintaining small dimensions for narrow plasmon resonances. From the above it is clear that it is not possible to measure the MO response of the Au-Tb$_{18}$Co$_{82}$ nanoantennas off-resonance, where $\theta_F$ and $\eta_F$ quickly drop to values comparable to the measurement uncertainty. In effect, the nanoantennas' plasmons strongly amplify minute magnetic signals that would ordinarily not be resolved. It is possible to estimate the amount of Tb$_{18}$Co$_{82}$ in each nanoantenna, corresponding to a nanodisk with 86 $\pm$ 10~nm diameter and 15~nm height for nanoantennas with a base diameter of 179~nm. This yields a Tb$_{18}$Co$_{82}$ effective film thickness (i.e.\ the thickness of a film made with the same amount of material) of approximately 0.6~nm, of the order of an atomic monolayer, demonstrating the MO amplification obtained through the nanoantennas' plasmons.
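The effective-thickness estimate above is simple geometry: the TbCo nanodisk volume per antenna spread over one unit cell of the array. A quick check, using the dimensions given in the text:

```python
import math

# Estimate of the effective TbCo film thickness quoted in the text:
# the TbCo in each antenna approximates a nanodisk (86 nm diameter,
# 15 nm height) spread over one 340 x 425 nm unit cell of the array.
diameter, height = 86.0, 15.0       # nm
cell_area = 340.0 * 425.0           # nm^2, one lattice unit cell

volume = math.pi * (diameter / 2) ** 2 * height   # nm^3 of TbCo per antenna
t_eff = volume / cell_area                        # effective thickness (nm)
# t_eff is approximately 0.60 nm, matching the quoted 0.6 nm
```
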
The experimental $\Theta_F$, $\eta_F$ and $\theta_F$ curves all show abrupt features that onset with the excitation of the surface lattice resonance associated with $\lambda_{R}^{[-1,0]}$ in the $\varphi_{i}$ = 90$^\circ$ configuration. Spectrally just prior to this resonance, however, the MO activity shows its greatest change for the smallest change in wavelength. Since this feature depends on the spectral position of $\lambda_{R}^{[-1,0]}$, it can be effectively tuned by varying $\alpha_{i}$, indicating the potential use of such magnetophotonic crystals as light-incidence-direction/angular sensors. This is explored in Fig.\ \ref{fig3}a, where hysteresis loops are recorded through measurements of the transmitted light ellipticity at a wavelength of 730~nm for the nanoantennas with base diameter 227~nm at different $\alpha_{i}$. The nanoantennas' Tb$_{18}$Co$_{82}$ tops maintain perpendicular magnetic anisotropy even after the lithography process, which is clear from the large remanent magnetization observed in the hysteresis loops in Fig.\ \ref{fig3}a, reducing the magnetic field strength required to saturate the sample along the out-of-plane direction. The dynamic tunability of the MO activity by varying $\alpha_{i}$ is remarkable in this case, resulting in a dramatic change in the magnitude of $\eta_F$, where, extraordinarily, at $\alpha_{i}$ = 15$^\circ$ the loop is even inverted (see a view of $\eta_F$ in the spectral region around the surface lattice resonance in the Supporting Information; it is clear that this sign change in $\eta_F$ is associated with the migration of the surface lattice resonance to the measurement wavelength of 730~nm). This is explored further in Fig.\ \ref{fig3}b, where the change in Faraday ellipticity ($\delta\eta_F$) between successive wavelength increments ($\delta\lambda$ = 5~nm) is plotted.
Since the gradient of this feature is positive when it coincides with $\lambda_{R}^{[-1,0]}$ (see inset of Fig.\ \ref{fig3}a), the $\delta\eta_F < 0$ data have been excluded from the fits. It is evident that $\eta_F$ undergoes a sign change, which in turn is tunable by varying $\alpha_{i}$. This active tuning modality was previously envisioned for refractive-index biochemosensing, where the spectral region of maximum sensitivity can be tuned by varying the angle of incidence, thereby allowing operation in a spectral region where the analyte solution is minimally absorbing \cite{kazuma2013}. Here we foresee that deviations from a set angle, i.e., a mechanical tilt, could be employed in high-precision tilt-control systems and detected with high accuracy, simply as a reduced MO activity in transmittance. The latter feature starkly differentiates this approach from currently employed optical systems, where reflection is captured by a complex system of mirrors/detectors, often with the need for a microelectromechanical (MEMS) array of actuators. Lorentzian functions have been fitted to the $\delta\eta_F$ data in order to estimate the spectral width of the abrupt transition in $\eta_F$. Due to the limited number of data points on this abrupt spectral transition, a reliable estimate of the full width at half maximum (FWHM) is difficult to obtain from these fits. However, all values are within the 5--10~nm range (comparable to the wavelength resolution of the setup), with the exception of $\alpha_{i}$ = 10$^{\circ}$, where a FWHM of 24 $\pm$ 10~nm is obtained due to the anomalously large error on this particular measurement. Crucially, the perpendicular magnetic anisotropy in this magnetophotonic crystal allows for the measurement of the magnetic differential absorption of circularly polarised light, underpinning $\eta_F$, without the need for an out-of-plane magnetic field to stabilize the magnetization along the propagation direction of light.
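The Lorentzian peak model used for the $\delta\eta_F$ fits, and the FWHM it encodes, can be illustrated with a minimal sketch. The parameters below (peak at 730~nm, 8~nm width) are illustrative, not fitted values from the paper, and the FWHM is simply read off a dense synthetic peak rather than extracted by least-squares fitting to sparse data.

```python
import numpy as np

# Lorentzian peak model of the kind used to quantify the abrupt spectral
# transition in the Faraday ellipticity; parameters here are illustrative.
def lorentzian(lam, amp, lam0, fwhm):
    return amp * (fwhm / 2) ** 2 / ((lam - lam0) ** 2 + (fwhm / 2) ** 2)

lam = np.linspace(700.0, 760.0, 6001)             # wavelength grid (nm)
peak = lorentzian(lam, amp=0.05, lam0=730.0, fwhm=8.0)

# Read off the FWHM: width of the spectral region above half the maximum.
above = lam[peak >= peak.max() / 2]
fwhm_est = above[-1] - above[0]                   # close to the 8 nm input
```
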
When a circularly polarized light beam with a time-varying helicity is incident on the sample, we can measure the ratio $C_{\omega}^{q}/C_{\circ}^{q}$, which is proportional to the differential absorption of circularly polarized light (see Methods) for the two opposite polar magnetization states. Here, $C_{\omega}^{q}$ is the amplitude of the $\omega/2\pi$~=~50~kHz signal arising from the modulation of the light's circular polarization (see Methods), for a fixed polar magnetization $q = \pm M_{z}$, while $C_{\circ}^{q}$ is the DC signal intensity, which contains the helicity-independent absorption contribution. Fig.\ \ref{fig4}a shows several spectra for the nanoantennas with base diameter of 227~nm for different values of $\alpha_{i}$, in the $\varphi_{i}$ = 90$^{\circ}$ configuration and in zero external magnetic field. The spectral minima depend strongly on $\alpha_{i}$. If we apply an external field, the amplitude of $C_{\omega}^{q}/C_{\circ}^{q}$ can be further modulated by reversing the magnetization ($q = +M_{z} \rightarrow -M_{z}$, and vice versa), as indicated by the variation between the dashed and solid curves. The magnetophotonic crystal thus exhibits active transmission tunability, whereby the absolute transmission can be enhanced or attenuated with the use of a magnetic field. Similar active magnetic transmission tunability has been devised with magnetoplasmonic chiral nanoantennas \cite{zubritskaya2018}; however, there an external field was required to orient the magnetization out-of-plane throughout the measurement, whereas here the external field is only required to set the magnetic state. An additional tuning knob is implemented through the light incidence direction/angle $\alpha_{i}$, whereby the spectral location of this maximum for magnetic modulation can be tuned with the surface lattice resonance. \begin{figure}[t!] 
\centering \includegraphics[width = 0.85 \textwidth]{figures/main-text-figs/fig3.pdf} \caption{Dynamic Faraday ellipticity in the magnetophotonic crystal. (a) Hysteresis loops under an externally applied magnetic field recorded from the magnetophotonic crystal with $D_B = 227$~nm, at a wavelength of 730~nm, demonstrating how the magnitude and sign of the Faraday ellipticity ($\eta_{F}$) can be controlled through the illumination incidence angle ($\alpha_{i}$), varying between 0 and 20 degrees. (b) The change in Faraday ellipticity ($\delta\eta_{F}$) measured at different wavelengths. Following the onset of the surface lattice resonance, there is an abrupt change in light ellipticity at the various illumination angles (0-20 degrees), which is associated with a maximum in $\delta\eta_{F}$. The peaks at different incidence angles have been fitted with Lorentzians. } \label{fig3} \end{figure} We define a magnetic asymmetry ratio $(C_{\omega}^{-M_{z}}-C_{\omega}^{+M_{z}}) / (C_{\circ}^{-M_{z}}+C_{\circ}^{+M_{z}})$, which represents the available helicity-dependent transmission modulation between the two antiparallel magnetization states (see Methods) and is plotted in Fig.\ \ref{fig4}b. The dispersion of the surface lattice resonance calculated from equation (3) is given by the dashed lines. Here, it is clear that the latter dictates the onset wavelength for the magnetic modulation of the differential circular transmission, meaning that the peak sensitivity can be tuned to an arbitrary wavelength between 650~nm and 800~nm. This tunability range is governed by the FWHM of the magnetophotonic crystal transmission spectra. 
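The two ratios just defined reduce to a few lines of arithmetic. The sketch below is illustrative only: the lock-in amplitudes are hypothetical numbers chosen to reproduce the $\sim$0.5\% scale reported in the text, not measured values.

```python
def chiral_transmission_ratio(c_omega, c_dc):
    """C_w^q / C_0^q: helicity-dependent (lock-in) amplitude over the
    helicity-independent (DC) intensity, for one magnetization state q."""
    return c_omega / c_dc

def magnetic_asymmetry(c_omega_minus, c_omega_plus, c_dc_minus, c_dc_plus):
    """Magnetic modulation of the helicity-dependent transmission between the
    two antiparallel polar magnetization states, as defined in the text:
    (C_w^{-Mz} - C_w^{+Mz}) / (C_0^{-Mz} + C_0^{+Mz})."""
    return (c_omega_minus - c_omega_plus) / (c_dc_minus + c_dc_plus)

# Hypothetical lock-in amplitudes (arbitrary units):
print(magnetic_asymmetry(2.0, 1.0, 100.0, 100.0))  # 0.005, i.e. 0.5%
```

Because the non-magnetic (array-geometry) dichroism enters both magnetization states identically, it cancels in the numerator of the asymmetry ratio, which is the point of the definition.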
A maximum magnetic asymmetry ratio of around 0.5\% is obtained here; however, we believe there is enormous scope for improvement through composition optimisation of the RE-TMs and of the noble-metal thicknesses in the nanoantennas, including exploring new geometries sustaining optically dark plasmon modes, which result in a stronger plasmonic enhancement of the MO activity than is achieved with the strongly scattering dipolar plasmons used here \cite{lopez-ortega2020}. The essential operation of a simple mechanical tilt-control/light-incidence optical sensor can be further envisioned as in Fig.\ \ref{fig4}c. The differential chiral transmission ($C_{\omega}^{q}/C_{\circ}^{q}$) reports the mechanical tilt/change of the light incidence direction angle on the pre-magnetized magnetophotonic crystal through sharp spectral dips at various wavelengths. We can also envision that, by using materials exhibiting all-optical magnetization switching \cite{ciuciulkaite2020}, such as the TbCo family of alloys employed here, the need for an external magnetic field to set up the magnetic state of the magnetophotonic crystal, or for magnetic transmission modulation, can be entirely removed, whereby the transmission would be modulated purely optically on the ultrafast (fs) timescale, allowing for sub-wavelength (nanoscale) miniaturization \cite{alebrand2012,liu2015}. \begin{figure}[t!] \centering \includegraphics[angle = -0,width = \textwidth]{figures/main-text-figs/figure4-directional.jpg} \caption[]{Angle-controlled chiral transmittance and mechanical tilt-angle sensing. (a) Spectral dependence of the $C_{\omega}^{q}/C_{\circ}$ signals, where $q$ = +$M_{z}$ or $-M_{z}$ for the solid and dashed curves, respectively. $C_{\omega}^{q}$ is related to the total circular dichroism for a particular magnetization state, containing both magnetic and non-magnetic contributions. 
(b) The amplitude of the magnetic modulation of the helicity-dependent transmission as a function of both wavelength and $\alpha_{i}$, which relates to the difference between the solid and dashed curves in (a). The dashed white line indicates the expected location of the Rayleigh anomaly calculated from Eq.\ (3). (c) Schematics of the tilt-angle sensing device, where the difference in left- and right-circularly-polarized light, passing through the pre-magnetized magnetophotonic crystal, is detected as a spectrally-resolved differential chiral transmission, having sharp spectral dips at distinct wavelengths, depending on the light's angle of incidence.} \label{fig4} \end{figure} \section{Conclusion} In conclusion, our work demonstrates the seamless integration of a rare-earth--transition-metal alloy into magnetophotonic crystals. A strong angular dispersion is engineered through the interference of the Rayleigh anomaly and the nanoantennas' plasmons, producing a sharp surface lattice resonance in both the optical and MO responses. We showcase dynamic tunability of magnetophotonic crystals using the light's incidence direction angle, which strongly modifies the MO response, as rationalized using finite-element-method electromagnetic simulations. Further, we have shown the magnetic modulation of the differential circular transmission of a magnetophotonic crystal, with measurements performed in zero external magnetic field, exploiting the perpendicular magnetic anisotropy of the magnetic nanoantennas. More generally, the integration of rare-earth--transition-metals within plasmonic nanoantennas offers an exciting platform for highly tunable, ultrafast all-optical-switching active magnetophotonic devices \cite{liu2015,maccaferri2020}. 
Such architectures could also find further application where the optical response of magnetophotonic crystals is tuned by the angle of incidence \cite{haghtalab2020,shi2020} in combination with a reconfigurable magnetic structure \cite{wang2020} steered by all-optical ultrafast magnetization switching or by external magnetic fields. \section{Experimental Section} \threesubsection{Sample Fabrication} The plasmonic nanoantennas are fabricated using a top-down approach, based on the method outlined by Horrer {\it et al}. \cite{horrer2013}. Au(80 nm) films were deposited using electron-beam evaporation onto fused-silica substrates. Subsequently, Al$_{2}$O$_{3}$(3.5 nm)/Tb$_{18}$Co$_{82}$(15 nm)/Al$_{2}$O$_{3}$(2 nm) films were sputter-deposited onto these films, with the Tb$_{18}$Co$_{82}$ layer deposited through co-sputtering, yielding the complete structure Au(80 nm)/Al$_{2}$O$_{3}$(3.5 nm)/Tb$_{18}$Co$_{82}$(15 nm)/Al$_{2}$O$_{3}$(2 nm). The additional thin Al$_{2}$O$_{3}$ layers were used as capping and isolating layers for the Tb$_{18}$Co$_{82}$. Here, the composition of the film can be varied by adjusting the relative power of the Co and Tb magnetrons. Calibration films were made with different power ratios on the two magnetrons, and their compositions were verified using Rutherford backscattering. Electron-beam lithography was used to define disk-shaped apertures in a MicroChem 496PMMA A4 electron-beam resist. Electron-beam evaporation was used to deposit an Al mask through the resist, followed by removal of the PMMA mask with acetone. The resulting structure was then milled at a 5~deg incidence angle with sample rotation, removing all material unprotected by the Al mask. Any remaining Al mask was then removed with the photoresist developer Microdeposit 351, which in this case was used as a selective etchant to target the Al. 
A conical profile is induced by the small lateral component of the milling, which depends to some extent on the small milling incidence angle \cite{fleischer2010}. In our samples, this results in a constant slope profile of approximately 62~degrees for all nanoantenna arrays. Therefore, by varying the diameter of the Al mask, the resulting structures can be tuned from truncated to conical profiles. \threesubsection{Magneto-optical characterisation} The experimental values of $\theta_{F}$, $\eta_{F}$ and $\Theta_{F}$ were measured using the photoelastic-modulator methodology described in the Supporting Information, with an applied field of 450~mT along the light propagation direction. A quadratic polynomial was fitted to the raw $\theta_{F}$ data in order to subtract the background contribution arising from the Faraday rotation of the fused-silica substrate, which is strongest at short wavelengths and decreases for longer wavelengths \cite{qiu1998}. For the measurement of the differential absorption of circularly polarised light, a time-varying light polarisation, alternating between left- and right-circularly-polarised states at 50~kHz, was generated using a photoelastic modulator (PEM) and directed at the sample at normal incidence. This is achieved by passing linearly polarised light oriented at 45$^{\circ}$ to the fast axis of the PEM, with the PEM retardation set to 0.25 wavelengths. Any mechanism in the TNC array that results in a difference in absorption for opposite helicities (including magnetic circular dichroism) will contribute to an oscillating light intensity at the detector at the photoelastic-modulator frequency. 
It is common to express this measurement as the ratio $C_{\omega}^{q}/C_{\circ}^{q}$, where $C_{\omega}^{q}$ is the amplitude of the $\omega$ = 50 kHz signal for a fixed polar magnetization $q = \pm M_{z}$, and $C_{\circ}^{q}$ is the DC signal intensity, which contains the helicity-independent absorption contribution. Prior to the measurement, a saturating magnetic field was used to initialise the magnetization along the light propagation direction ($q = +M_{z}$) and then removed. For the subsequent measurement, the magnetization was saturated in the opposite polar direction ($q = -M_{z}$) and the measurement repeated. It is important to note that the spectra in Fig.~\ref{fig4}a contain additional {\it fake} CD contributions, which arise from leakage of the large linear dichroism signal associated with the rectangular array in which the nanostructures are arranged. By taking the difference between the antiparallel magnetization states, these effects, which are independent of the magnetization orientation, can be subtracted out, yielding the available magnetic modulation. We define this magnetic modulation of the helicity-dependent transmission as $(C_{\omega}^{-M_{z}}-C_{\omega}^{+M_{z}}) / (C_{\circ}^{-M_{z}}+C_{\circ}^{+M_{z}})$, and this quantity is plotted in Fig.~\ref{fig4}b as a function of both $\alpha_{i}$ and wavelength. \medskip \textbf{Supporting Information} \par Supporting Information is available from the Wiley Online Library or from the author. \medskip \textbf{Acknowledgements} \par The authors would like to express their gratitude to Prof.\ Bengt Lindgren of Uppsala University, Sweden, for fruitful discussions and support with the ellipsometric characterization of TbCo thin-film materials. The excellent support and infrastructure of the MyFab facility at the \AA ngstr\"om Laboratory of Uppsala University is also highly appreciated. 
The authors acknowledge support from the Knut and Alice Wallenberg Foundation project ``{\it Harnessing light and spins through plasmons at the nanoscale}'' (Project No.\ 2015.0060), the Swedish Research Council (Project No.\ 2019-03581), the Swedish Foundation for International Cooperation in Research and Higher Education (Project No.\ KO2016-6889), and the Swedish National Infrastructure for Computing (SNIC). This work is part of a project which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no.\ 737093, ``{\textsc{femtoterabyte}}''. P.V. acknowledges funding from the Spanish Ministry of Science and Innovation under the Maria de Maeztu Units of Excellence Programme (MDM-2016-0618) and the project RTI2018-094881-B-I00 (MICINN/FEDER). \medskip \textbf{Author Contributions} \par R.M.R-R.\ and V.K.\ designed the material and nanofabrication processing, with input from A.D.\ concerning the nanocone design approach. R.M.R-R.\ and A.C.\ carried out the thin film deposition and magnetic characterization. R.M.R-R., A.C., I.-A.C.\ and M.P.\ performed all magneto-optical characterization of the nanoarrays. J.H., R.M.R-R., M.Z., P.V.\ and P.M.O.\ did the electromagnetic modelling and simulations. R.M.R-R.\ and V.K.\ wrote the manuscript with input from J.H., P.V., A.D.\ and P.M.O. All authors discussed the results and commented on the manuscript.
\section{Introduction} Understanding galaxy formation is a major goal of current astrophysical research. Detailed studies of the stellar components of the Milky Way galaxy play a fundamental role in
building this understanding because they provide direct and robust tests of, and constraints on, theories of galaxy formation. The latter are presently formulated within the framework of hierarchical formation in a $\Lambda$CDM Universe (e.g., \citealt{1991ApJ...379...52W,1998MNRAS.295..319M}), and the past few decades have seen a myriad of results affirming the role that mergers have had in the evolution of the Milky Way, nowhere more vividly than in the countless extant substructures that have been discovered and mapped throughout the Galactic stellar halo, whose origin appears to be dominated by the debris of these mergers. Meanwhile, explorations of the evolution of the Galactic disk have a long and rich history that reaches back to the kinematical studies of \cite{1915MNRAS..76...37E}, \cite{1915MNRAS..76...70J}, \cite{1920AJ.....33..113C}, \cite{1922ApJ....55..302K}, \cite{1926PhDT.........1O}, and \cite{1927ApJ....65..108S}, who explored the local stellar velocity distribution to gain information about large-scale Galactic structure. The field advanced significantly when \cite{1950ApJ...112..554R} and \cite{1950AZh....27...41P} discovered a strong correlation between spectral type and UV excess (i.e., metallicity), on the one hand, and the kinematical properties of stars near the Sun, on the other. One of the explanations of this phenomenon, which led to the seminal paper by \cite{1962ApJ...136..748E}, is that the oldest and least chemically enriched stars we observe today were born kinematically hot, and these turbulent birth velocities declined steadily as the gas in the Galaxy dissipated energy and settled into a dynamically cold disk --- i.e., older populations were dynamically hotter \emph{ab initio} \citep[e.g.,][]{2004ApJ...612..894B,2013A&A...558A...9M}. 
However, it was later suggested that, within some fraction of the thin disk, stars are born with random velocities that are small at birth but increase with time (e.g., \citealt{1951ApJ...114..385S,1987ASIC..207..375F,2000MNRAS.318..658B,2004A&A...418..989N,2009MNRAS.397.1286A}). The velocity distribution function (DF) of stars in the Galaxy can be a valuable aid in uncovering the relationships between kinematics, metallicity, and age for disk and halo stars, and lends insights into the dynamical history of stellar populations \citep{1999A&A...341...86B,2004MNRAS.350..627D,2005A&A...430..165F,2010ApJ...717..617B}. In the Galactic stellar halo, where the timescales for dynamical mixing are long, the history of minor mergers imprints long-lived, dynamically cold substructures that are quite legible fossils from which Galactic archaeology can readily synthesize a sequence of events (e.g., \citealt{1994Natur.370..194I,2001ApJ...548L.165O,2003ApJ...599.1082M,2006ApJ...642L.137B,2006ApJ...643L..17G}). However, a multitude of processes can affect the velocity DF in the Galactic disk, including the potential mixture of the aforementioned {\it ab initio} ``born hot'' and ``born cold'' components, with the latter affected by numerous \emph{disk heating} mechanisms. The sum of these effects that increase velocity dispersions in disk stars complicates the simplest model of a spiral galaxy, where stars in the disk librate about circular orbits in the Galactic plane. From state-of-the-art magnetohydrodynamical cosmological simulations, \cite{2016MNRAS.459..199G} found that, in the secular evolution of the disk, bar instabilities are a dominant heating mechanism. The authors also reported that heating by spiral arms \citep[e.g.,][]{2004MNRAS.350..627D,2006MNRAS.368..623M}, radial migration \citep[e.g.,][]{2002MNRAS.336..785S}, and adiabatic heating from mid-plane density growth \citep[e.g.,][]{1992MNRAS.257..620J} are all subdominant to bar heating. 
Another fundamental astrophysical process that scatters stars onto more eccentric orbits or onto orbits that are inclined to the disk's equatorial plane is the accretion of lower mass systems (see, e.g., \citealt{2008ApJ...688..254K,2014MNRAS.443.2452M}, and references therein). This is particularly relevant given that recent studies suggest that the Milky Way likely underwent such a merger $\sim$ 8 Gyr ago (e.g., \citealt{2010A&A...511L..10N, 2014ApJ...796...38N, 2017ApJ...842...49L, 2018MNRAS.478..611B, 2018MNRAS.tmp.1537K}). Moreover, \cite{2018MNRAS.481..286L}, using N-body simulations of the interaction of the Milky Way with a Sagittarius-like dSph galaxy, showed that the patterns in the large-scale velocity field reported from observations, like vertical waves (e.g., \citealt{2012ApJ...750L..41W}), can be described by tightly wound spirals and vertical corrugations excited by Sagittarius (Sgr) impacts. Studies of the density distribution of stars above and below the Galactic plane have shown that the Galactic disk is in fact composed of two distinct components \citep{1982PASJ...34..365Y,1983MNRAS.202.1025G}: a thin disk with a vertical scaleheight of 300 $\pm$ 50 pc, and a thick component with a scaleheight of 900 $\pm$ 180 pc at the distance of the Sun from the Galactic Center (see \citealt{2016ARA&A..54..529B} for a review of the spread in these quantities found by various authors using different methods). On the other hand, detailed stellar-abundance patterns for disk stars show a clear bimodal distribution in the [$\alpha$/Fe] versus [Fe/H] plane. This bimodality is associated with --- and firm evidence for --- a distinctly separated thin and thick disk \citep{1993A&A...275..101E,2003A&A...410..527B,2011MNRAS.414.2893F,2013A&A...554A..44A}. The high--[$\alpha$/Fe] sequence, corresponding to the thick disk, exists over a large radial and vertical range of the Galactic disk (e.g., \citealt{2014ApJ...796...38N,2015ApJ...808..132H,2018MNRAS.476.5216D}). 
The existence of two chemically-distinguished disk components points to a different chemical evolution and, hence, a distinct disk-formation mechanism and epoch (e.g., \citealt{1997ApJ...477..765C,2015MNRAS.453.1855M,2019MNRAS.tmp..141C}) for the two disk components. Kinematically, the velocity dispersion for the chemically selected thick-disk component is, on average, larger than the dispersion reported for the low--[$\alpha$/Fe] thin disk \citep{2008A&A...480...91S,2011ApJ...738..187L,2018MNRAS.474..854A}. However, there is substantial overlap in the thin- and thick-disk velocity DFs, which makes kinematics a less-reliable diagnostic for discriminating populations. From the standpoint of isolating and studying the chemodynamical properties of these populations free of cross-contamination, this is unfortunate because, despite many large, dedicated spectroscopic surveys of Galactic stars from which detailed chemical-abundance patterns can now be measured for millions of stars (see below), there are several orders of magnitude more stars with {\it kinematical} data available, thanks to ESA's {\it Gaia} mission. \emph{Gaia} is an all-sky astrometric satellite from which we can now obtain accurate sky position, parallax, and proper motion, along with the estimated uncertainties and correlations of these properties, for $\sim$ 1.3 billion sources. Moreover, the existence of a \emph{Gaia} sub-sample containing line-of-sight velocities provides unprecedented accuracy for individual space velocities for more than 7 million stellar objects \citep{2018arXiv180409365G}. Unfortunately, there are no $\alpha$-element abundance measurements for the vast majority of this \emph{Gaia} sub-sample. This prevents an unbiased and comprehensive study of the velocity DF for the thin disk, thick disk, and halo treated separately. 
Fortunately, the astronomical community has invested great effort into building and developing massive multi-object spectroscopic surveys, where the measurement of abundances beyond the simple overall metallicity is possible. Projects like SEGUE \citep{2009AJ....137.4377Y}, RAVE \citep{2008AJ....136..421Z}, LAMOST \citep{2012RAA....12..723Z}, Gaia-ESO \citep{2012Msngr.147...25G}, GALAH \citep{2015MNRAS.449.2604D}, and APOGEE \citep{2017AJ....154...94M} are transforming our understanding of the Milky Way galaxy through their generation of vast chemical abundance databases on Milky Way stars. The future is even more promising, with even larger spectroscopic surveys planned, such as SDSS V \citep{2017arXiv171103234K}, WEAVE \citep{2012SPIE.8446E..0PD}, MOONS \citep{2016ASPC..507..109C}, DESI \citep{2016arXiv161100036D}, 4MOST \citep{2018IAUS..334..225F}, and MSE \citep{2016SPIE.9908E..1PZ}, all aiming to provide individual abundances for millions more Galactic stars in both hemispheres, together with line-of-sight velocities, $v_{\rm los}$, with a precision of a few hundred m s$^{-1}$. While the sum total of observed stars in these spectroscopic surveys will span only $\sim$1\% of even the present \emph{Gaia} sub-sample containing line-of-sight velocities, the chemical data in the former surveys can be used to begin characterizing and interpreting the vastly larger number of sources having kinematical data in the $\emph{Gaia}$ database. One of the main goals of the present study is to perform, for the first time, a detailed and unbiased study of the Galactic velocity DFs --- derived from $\emph{Gaia}$ data --- for the individual, chemically separated stellar populations, and to explore how these distributions change for different Galactocentric radii and distances from the Galactic mid-plane. 
For this study we use the individual stellar abundances from the APOGEE survey, specifically [Mg/Fe] and [Fe/H], to relatively cleanly discriminate the thin and thick disks, associated with the low and high $\alpha$-sequences, respectively, in the [$\alpha$/Fe]-[Fe/H] plane, as well as the halo stars, which predominantly inhabit other regions of the same plane. Using the \emph{kinematical} properties of these \emph{chemically} defined sub-samples, we build a data-driven kinematical model, which we then apply to the full \emph{Gaia} database to ascertain the contribution of the different Galactic structural components to the velocity-space DF as a function of Galactic cylindrical coordinates, $R$ and $z$. We also create two-dimensional maps in the $R$-$z$ plane, where we explore the behavior of the thick-to-thin-disk density normalization and the halo-to-disk density normalization. This paper is organized as follows. In Section~\ref{APO_Gaia} we describe the APOGEE and \emph{Gaia} data-sets we employ in the analysis. Section~\ref{DF_disk} examines the velocity DF of the Galactic disk and halo, and describes the building of the data-driven model. We discuss the thick-disk normalization and the halo-to-disk density in Section~\ref{thick_norma}, and the most relevant results of the present study are summarized and discussed in Section~\ref{Conclusion}. \section{The APOGEE-2 DR16 and \emph{Gaia} DR2 Data-sets} \label{APO_Gaia} Our study makes use of the data products from Data Release 16 (DR16) of the Apache Point Observatory Galactic Evolution Experiment (APOGEE, \citealt{2017AJ....154...94M}). 
Part of both Sloan Digital Sky Survey III (SDSS-III, \citealt{2011AJ....142...72E}) and SDSS-IV \citep{2017AJ....154...28B} via APOGEE and APOGEE-2, respectively, the combined APOGEE enterprise has been in operation for nearly a decade, and, through the installation of spectrographs on both the Sloan \citep{2006AJ....131.2332G} and du Pont 2.5-m telescopes \citep{1973ApOpt..12.1430B,2019arXiv190200928W}, has procured high-resolution, $H$-band spectra for more than a half million stars across both hemispheres. The survey provides $v_{\rm los}$, stellar atmospheric parameters, and individual abundances for on the order of fifteen chemical species \citep{2015AJ....150..148H,2015AJ....149..181Z,2016AJ....151..144G}. A description of the latest APOGEE data products from DR14 and DR16 can be found in \cite{2018AJ....156..125H} and J\"onsson et al. (in preparation), respectively. In this study we also use data from the second data release (DR2) of the \emph{Gaia} mission \citep{2018arXiv180409365G}. This catalog provides full \emph{6-dimensional} space coordinates for 7,224,631 stars: positions ($\alpha$, $\delta$), parallaxes ($\varpi$), proper motions ($\mu^{*}_{\alpha}$, $\mu_{\delta}$), and radial line-of-sight velocities ($v_{\rm los}$) for stars as faint as $G$ = 13 \citep{2018arXiv180409369C}. The stars are distributed across the full celestial sphere. \emph{Gaia} DR2 contains $v_{\rm los}$ for stars with effective temperatures in the range $\sim$ 3550 - 6900 K. The median uncertainty for bright sources ($G < 14$) is 0.03 mas for the parallax and 0.07 mas yr$^{-1}$ for the proper motions \citep{2018arXiv180409366L,2018arXiv180409375A}. The precision of the \emph{Gaia} DR2 $v_{\rm los}$ at the bright end is on the order of 0.2 to 0.3 km s$^{-1}$, while at the faint end it is on the order of 1.4 km s$^{-1}$ for $T_{\rm eff}$ = 5000 K stars and $\sim$3.7 km s$^{-1}$ for $T_{\rm eff}$ = 6500 K. 
For further details about the \emph{Gaia} DR2 sub-sample containing $v_{\rm los}$ measurements, we refer the reader to \cite{2018arXiv180409371S} and \cite{2018arXiv180409372K}. We follow the recommendations from \cite{2019MNRAS.tmp..280B} for studies in Galactic dynamics using \emph{Gaia} to remove stars for which the color photometry is suspect, as well as stars for which the $v_{\rm los}$ measurement is based on fewer than four transits of the instrument. Individual space velocities in a Cartesian Galactic system were obtained by following the equations in \citet{1987AJ.....93..864J}. That is, from \emph{Gaia} DR2 line-of-sight velocities, proper motions, and parallaxes, we derive the space velocity components ($U$, $V$, $W$). In the case of the APOGEE targets, we use the APOGEE-2 DR16 catalog $v_{\rm los}$ \citep{2019arXiv191202905A}, for which the internal precision is better than 0.1 km s$^{-1}$ \citep{2015AJ....150..173N}. When the relative uncertainty in the parallax becomes large, the inverse of a measured parallax is a biased estimate of the distance to a star \citep{1927ApJ....65..108S,2018arXiv180409376L}. For this reason, we select stars with positive parallaxes and a relative parallax uncertainty smaller than 20$\%$ ($\sigma_{\varpi}/\varpi$ $\leq$ 0.2). We also set the \emph{Gaia} flag astrometric-excess-noise = 0 to drop stars with poor astrometric fits. In addition, the flag rv-nb-transits > 4 is set to secure enough \emph{Gaia} transits that robust $v_{\rm los}$ measurements are in hand. Following \cite{2019MNRAS.tmp..280B}, the selected \emph{Gaia} stars for this study must have reported $G_{\rm BP}$ and $G_{\rm RP}$ magnitudes. That leaves 4,774,723 \emph{Gaia} targets for this exercise. 
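The astrometric and spectroscopic quality cuts just listed amount to a simple boolean mask over the catalog columns. A minimal numpy sketch (column names follow the Gaia archive conventions; the thresholds are those stated in the text):

```python
import numpy as np

def gaia_quality_mask(parallax, parallax_error, astrometric_excess_noise,
                      rv_nb_transits):
    """Boolean mask implementing the selection described in the text:
    positive parallax, relative parallax uncertainty <= 20%, zero
    astrometric excess noise, and more than 4 radial-velocity transits."""
    parallax = np.asarray(parallax, float)
    # sigma_pi / pi <= 0.2 rewritten as sigma_pi <= 0.2 * pi (valid for pi > 0),
    # which avoids dividing by non-positive parallaxes.
    good_pi = (parallax > 0) & (np.asarray(parallax_error, float) <= 0.2 * parallax)
    return (good_pi
            & (np.asarray(astrometric_excess_noise) == 0)
            & (np.asarray(rv_nb_transits) > 4))
```

Applied column-wise to the catalog, the mask selects the clean sub-sample in a single vectorized pass.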
The remaining targets have high-precision \emph{Gaia} parallaxes (the vast majority of the stars have $\sigma_{\varpi}/\varpi$ $\leq$ 0.05), so that their distances can be determined by simple parallax inversion without significantly biasing the error derivation (e.g., \citealt{2018AJ....156...58B} and references therein). We adopt a right-handed Galactic system, where $U$ points towards the Galactic center, $V$ in the direction of rotation, and $W$ towards the North Galactic Pole (NGP). For the peculiar motion of the Sun, we adopt the values $U_{\odot}$ = 11.1 km s$^{-1}$, $V_{\odot}$ = 12.2 km s$^{-1}$, and $W_{\odot}$ = 7.2 km s$^{-1}$ \citep{2010MNRAS.403.1829S}. We also transform the velocities from a Cartesian to a cylindrical Galactic system: $\upsilon_{R}$, $\upsilon_{\phi}$, $\upsilon_{z}$. The Sun is assumed to be located at $X = 8.34$ $\pm$ 0.16 kpc, and the circular rotation speed at the location of the Sun is taken to be 240 $\pm$ 8 km s$^{-1}$ \citep{2014ApJ...783..130R}. We define $R = (X^{2} + Y^{2})^{1/2}$ as the distance from the Galactic center (GC), projected onto the Galactic plane. In the end, the typical uncertainty in the velocities used here is $\Delta\upsilon \sim$ 1.5 km s$^{-1}$ per dimension. To remove outlier velocities that can yield unrealistic velocity dispersions, we select stars with $(\upsilon_{R}^{2} + (\upsilon_{\phi} - 240)^{2} + \upsilon_{z}^{2})^{1/2}$ $<$ 600 km s$^{-1}$, which removes a total of 503 stars. In the next section we explore the kinematical properties of the Milky Way using the data-set just described. \section{The velocity distribution functions of the Galactic disk and halo populations} \label{DF_disk} Stars of different mass synthesize and, upon death, eject into the interstellar medium different chemical elements, and on different timescales. 
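The Cartesian-to-cylindrical velocity step of the preceding section can be sketched in numpy. The constants are those adopted in the text (the \citealt{2010MNRAS.403.1829S} solar motion, and $R_0 = 8.34$ kpc with $\upsilon_c = 240$ km s$^{-1}$ from \citealt{2014ApJ...783..130R}); the full $(\alpha, \delta, \varpi, \mu, v_{\rm los}) \rightarrow (U, V, W)$ conversion of \citet{1987AJ.....93..864J} is assumed to have been done already, and the function name is ours.

```python
import numpy as np

# Constants adopted in the text
U_SUN, V_SUN, W_SUN = 11.1, 12.2, 7.2  # solar peculiar motion, km/s
V_CIRC = 240.0                         # circular speed at the Sun, km/s
R_SUN = 8.34                           # Sun-GC distance, kpc

def cylindrical_velocities(X, Y, Z, U, V, W):
    """Heliocentric Cartesian (X toward the GC, Y along rotation, Z toward the
    NGP; kpc and km/s) -> Galactocentric (R, v_R, v_phi, v_z), with v_phi > 0
    in the direction of rotation."""
    # Galactocentric Cartesian frame with the Sun at (-R_SUN, 0, 0):
    x, y = np.asarray(X, float) - R_SUN, np.asarray(Y, float)
    vx = np.asarray(U, float) + U_SUN
    vy = np.asarray(V, float) + V_SUN + V_CIRC
    vz = np.asarray(W, float) + W_SUN
    R = np.hypot(x, y)
    v_R = (x * vx + y * vy) / R
    v_phi = (y * vx - x * vy) / R  # positive for prograde rotation
    return R, v_R, v_phi, vz

def keep_inliers(v_R, v_phi, v_z):
    """Outlier cut from the text: |v - v_LSR| < 600 km/s."""
    return np.sqrt(v_R**2 + (v_phi - 240.0)**2 + v_z**2) < 600.0
```

A star at the Sun's position on a circular orbit comes out with $\upsilon_R = \upsilon_z = 0$ and $\upsilon_\phi = 240$ km s$^{-1}$, which is a convenient sanity check on the sign conventions.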
The overall metallicity measured in a star's atmosphere, for the most part, represents an integral over star formation and chemical enrichment prior to that star's birth, while the abundances of individual elements can be used to track the ratio of recent to past star-formation rates. Thus, stellar chemical-abundance {\it patterns} offer a means to discriminate stellar populations that have experienced differing star-formation and chemical-enrichment histories. To procure unbiased velocity DFs for individual stellar populations in the nearby disk, we exploit the [Fe/H] and [Mg/Fe] abundances measured from APOGEE spectra, the combination of which has been shown to give very good discrimination of the thin disk, thick disk, and halo (e.g., \citealt{2003A&A...410..527B,2018ApJ...852...49H,2019ApJ...874..102W}). We select from the APOGEE database those stars for which S/N $>$ 70 and no aspcapbad flag is set \citep{2015AJ....150..148H}. The APOGEE survey has a number of focused science programs that target specific objects, such as the Sgr dSph galaxy \citep{2013ApJ...777L..13M,2017ApJ...845..162H}, the Large and Small Magellanic Clouds \citep{2019arXiv190103448N}, and numerous star clusters \citep{2020MNRAS.492.1641M}, as well as many other stellar-astrophysics programs \citep{2013AJ....146...81Z,2017AJ....154..198Z} that are not germane to this study of {\it normal field stars}. To remove these specialized targets from our database, we identified the fields associated with the special programs and removed all targets within those fields. \begin{figure}[ht] \begin{center} \includegraphics[width=1.06\hsize,angle=0]{fe_mg_APO_box.pdf} \end{center} \caption{The distribution of [Mg/Fe] with [Fe/H] abundances for our curated sample of stars from the APOGEE survey. We use these abundances to select stars with low-$\alpha$ abundances as thin disk, and those with high-$\alpha$ abundances as stars associated with the thick disk, as indicated by the selection boxes. 
For the halo we select all the stars with [Fe/H] $\leq$ $-1.0$. The color code indicates stellar density, where red shows the highest density of stars and blue the lowest. The dashed lines show the solar values for reference.} \label{abund_APOGEE} \end{figure} We summarize our chemistry-based discrimination and selection of our three primary stellar populations in Figure~\ref{abund_APOGEE}. The higher [Fe/H] population with low-$\alpha$ abundances (--0.7 $<$ [Fe/H] $<$ +0.5, --0.1 $<$ [Mg/Fe] $<$ +0.17) is associated with the Galactic \emph{thin disk}, and our selection contains a total of 211,820 of these stars. The [Fe/H] $<$ 0 stars with high-$\alpha$ abundances ($+0.18$ $<$ [Mg/Fe] $<$ $+0.4$ within the metallicity range $-1.0$ $<$ [Fe/H] $<$ 0.0) we associate with the Galactic \emph{thick disk}, and the number of stars we have in that sample is 52,709. Finally, the \emph{Galactic halo} population is identified with stars having [Fe/H] $<$ --1.0 (e.g., \citealt{2018ApJ...852...49H}). The number of APOGEE stars in this population is 5,795. The magnitude-limited \emph{Gaia} data-set used in this study contains different stellar populations associated with different structures in the Galaxy. This is evident in Figure~\ref{vphi_APOGEE}, where we show the distribution of $\upsilon_{R}$, $\upsilon_{\phi}$, and $\upsilon_{z}$ for the population-integrated \emph{Gaia} sample and for the population-separated APOGEE-2 data. From a comparison of the upper and lower panels it becomes obvious that an assignment of population membership to a particular star based exclusively on kinematical properties is not generally possible. Moreover, it is not possible on the basis of kinematical data alone to determine with reliability even the relative contributions of the different populations to the net velocity DF on a statistical basis. 
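The abundance-box selection summarized in Figure~\ref{abund_APOGEE} amounts to a set of simple cuts; a minimal sketch follows, in which the array and function names are hypothetical stand-ins for the APOGEE [Fe/H] and [Mg/Fe] columns:

```python
import numpy as np

def classify_population(fe_h, mg_fe):
    """Label each star 'thin', 'thick', 'halo', or 'other' using the text's boxes."""
    fe_h = np.asarray(fe_h, dtype=float)
    mg_fe = np.asarray(mg_fe, dtype=float)
    label = np.full(fe_h.shape, 'other', dtype=object)
    # thin disk: low-alpha box
    thin = (-0.7 < fe_h) & (fe_h < 0.5) & (-0.1 < mg_fe) & (mg_fe < 0.17)
    # thick disk: high-alpha box
    thick = (-1.0 < fe_h) & (fe_h < 0.0) & (0.18 < mg_fe) & (mg_fe < 0.4)
    label[thin] = 'thin'
    label[thick] = 'thick'
    # halo: simple metallicity cut, regardless of [Mg/Fe]
    label[fe_h <= -1.0] = 'halo'
    return label
```

Because the three boxes are disjoint in the [Fe/H]--[Mg/Fe] plane, the order of the assignments does not affect the result.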
Figure~\ref{vphi_APOGEE} shows that the velocity DFs of the different Galactic components clearly overlap, but also that individual abundances from high-resolution spectroscopy surveys are a useful tool for apportioning stars to their relative stellar populations \citep{2011MNRAS.412.1203N,2014ApJ...796...38N}. \begin{figure*}[ht] \begin{center} \includegraphics[width=1\hsize,angle=0]{VELO_distribution_Gaia.pdf} \includegraphics[width=1\hsize,angle=0]{VELO_distribution_APOGEE.pdf} \end{center} \caption{The relative distribution functions (normalized to a peak value of unity) for individual velocity components on a logarithmic scale using the \emph{Gaia} data-set (upper panels) and the APOGEE survey sample (lower panels). For the APOGEE sample we can assign stars to their respective stellar populations (indicated by different colors) using their chemistry, to reveal the kinematical properties of each population with little cross-contamination from the others. } \label{vphi_APOGEE} \end{figure*} While severe overlap is expected in the $\upsilon_{R}$ and $\upsilon_{z}$ dimensions, where all three stellar populations share mean values around 0 km s$^{-1}$, we expect more separation of populations in $\upsilon_{\phi}$ due to variations in asymmetric drift. Nevertheless, stars with $\upsilon_{\phi}$ $>$ 70 km s$^{-1}$ dominate the distribution, and yet have contributions from all three populations, though, naturally, most strongly from the thin and thick disk. We also observe that the thin-disk population shows a very small number of slow-rotating stars; these could represent a small contaminating portion of stars from the accreted halo, which can have metallicities as high as [Fe/H] $\sim$ $-0.5$ \citep{2018ApJ...852...49H}.
Interestingly, for the population with [Fe/H] $<$ $-1.0$, we observe an extended velocity DF with a peak around $\upsilon_{\phi}$ $\sim$ 0 km s$^{-1}$, together with a second peak around 120 km s$^{-1}$ (see the middle panel in Figure~\ref{vphi_APOGEE}). We discuss this metal-poor population in more detail in Section~\ref{halo_pop}. While there is no precedent for the accuracy and precision of the \emph{Gaia} astrometry, and for such an enormous number of objects, there is still no detailed chemistry available for the vast majority of the stars in the \emph{Gaia} data-set. Thus, one cannot yet leverage the huge statistical power of \emph{Gaia} to explore the kinematical properties of individual stellar populations in an unbiased manner, nor use these kinematics to sort stars reliably into populations to help define other gross properties of the populations. However, we will now show how, with a \emph{data-driven model} trained with the velocity DFs defined for the combined APOGEE+\emph{Gaia} data-set for relatively nearby stars, we can harness the power of the greater \emph{Gaia} database without chemical data to sort stars into populations based on their kinematics, and use these statistical memberships to ascertain other bulk properties of the populations --- e.g., to assess the relative densities of stars in these populations over a broad range of Galactic locations. We describe the steps toward these goals in the next sections. \begin{figure*}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{VELO_THIN_apogee.pdf} \end{center} \caption{Velocity distribution function for the chemically selected APOGEE thin disk (black distribution) for the $\upsilon_{R}$, $\upsilon_{\phi}$ and $\upsilon_{z}$ velocity components, respectively. The red distributions in the three panels show the best-fit following a single Gaussian function.
The mean and the standard deviation of the normal distributions are shown in each panel.} \label{velo_thin_apogee} \end{figure*} \subsection{APOGEE Data-Driven Model} \label{model} To build a simple, kinematical data-driven model we do not assume a triaxial Gaussian distribution function \citep{1907NWGot...5..614S,1999A&A...341...86B}; instead, we use the velocity DF for the chemically selected thin-disk, thick-disk and halo Galactic components in the APOGEE data. The DF, $f(\vec{\upsilon})$, is defined such that $f(\vec{\upsilon})$ d$\vec{\upsilon}$ is the number of stars per unit volume with velocity in the range [$\vec{\upsilon}$,$\vec{\upsilon}$ + d$\vec{\upsilon}$]. We now explore the characteristics of the DF for each primary Galactic stellar population. \begin{table*} \caption{Summary of the kinematical properties for different population subsamples} \begin{center} \begin{tabular}{ccccccc} \hline & $\overline{\upsilon_\phi}$ & $\sigma_{\rm R}$ & $\sigma_{\rm \phi}$ & $\sigma_{\rm z}$ & $\sigma_{\phi}$/$\sigma_{\rm R}$ & $\sigma_{z}$/$\sigma_{\rm R}$ \\ & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & & \\ \hline \hline Chemically selected thin disk & +229.43 $\pm$ 0.54 & 37.61 $\pm$ 0.07 & 25.01 $\pm$ 0.04 & 18.53 $\pm$ 0.03 & 0.66 $\pm$ 0.01 & 0.49 $\pm$ 0.01 \\ Chemically selected thick disk & +191.82 $\pm$ 0.24 & 64.68 $\pm$ 0.20 & 50.82 $\pm$ 0.15 & 43.60 $\pm$ 0.13 & 0.78 $\pm$ 0.01 & 0.67 $\pm$ 0.01 \\ Halo ([Fe/H] $<$ -1) & +35.53 $\pm$ 1.28 & 150.57 $\pm$ 1.58 & 115.67 $\pm$ 1.21 & 86.67 $\pm$ 0.91 & 0.77 $\pm$ 0.02 & 0.57 $\pm$ 0.01 \\ \hline Disk-like rotation & +186.10 $\pm$ 1.66 & 59.23 $\pm$ 1.39 & 50.00 $\pm$ 1.17 & 47.55 $\pm$ 0.85 & 0.84 $\pm$ 0.03 & 0.80 $\pm$ 0.02 \\ Non-rotation & --2.35 $\pm$ 1.57 & 165.51 $\pm$ 1.94 & 95.12 $\pm$ 1.11 & 94.10 $\pm$ 1.16 & 0.57 $\pm$ 0.01 & 0.57 $\pm$ 0.01 \\ \end{tabular} \end{center} \label{Tab1} \end{table*} \subsubsection{Chemically Selected Thin Disk} \label{thin}
Figure~\ref{velo_thin_apogee} shows the velocity DF for $\upsilon_{R}$, $\upsilon_{\phi}$, and $\upsilon_{z}$ of the low-$\alpha$ sequence population selected in the [Fe/H]-[Mg/Fe] plane (see Figure~\ref{abund_APOGEE}). We also show the best Gaussian fit for each of the three components of velocity (red lines in Figure~\ref{velo_thin_apogee}). By and large, these Gaussian fits are reasonable descriptors of the DFs. Even for the $\upsilon_{\phi}$ component, where we expect a skew in the observed DF due to asymmetric-drift effects, a normal distribution nevertheless reproduces the distribution reasonably well. We find the largest discrepancies between the observed velocity DF and a Gaussian distribution in all three cases to be mainly at the very peaks and in the wings, and, for the latter, especially in the cases of $\upsilon_{\phi}$ and $\upsilon_{z}$. As a demonstration of the utility of Gaussians as descriptors of the DFs, we find that for the chemically selected thin disk the DF skewness($\upsilon_{R}$)\footnote{Because there is no standard naming convention for the variables skewness and kurtosis, and some of the adopted variable names in the literature are redundant with those we use for other quantities, we simply employ the variable names ``skewness'' and ``kurtosis'' here to avoid confusion.} = 0.04, and the kurtosis($\upsilon_{R}$) = 0.61. For the azimuthal velocity, we have skewness($\upsilon_{\phi}$) = --0.44, and kurtosis($\upsilon_{\phi}$) = 0.98. Furthermore, for the vertical component of the velocity, we find skewness($\upsilon_{z}$) = --0.06 and a kurtosis($\upsilon_{z}$) = 1.33. 
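The shape statistics quoted above can be estimated from a velocity sample with simple moment-based estimators; a minimal sketch, assuming the excess-kurtosis convention (zero for a Gaussian), since the estimator is not otherwise specified:

```python
import numpy as np

def skewness(v):
    """Third standardized moment of a velocity sample."""
    v = np.asarray(v, dtype=float)
    z = (v - v.mean()) / v.std()
    return np.mean(z**3)

def kurtosis(v):
    """Fourth standardized moment minus 3 (excess kurtosis, 0 for a Gaussian)."""
    v = np.asarray(v, dtype=float)
    z = (v - v.mean()) / v.std()
    return np.mean(z**4) - 3.0
```

A symmetric sample has zero skewness, and values of both statistics near zero support the adequacy of a single-Gaussian description.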
These skewness and kurtosis values lie within the allowable range for normal univariate distributions; e.g., \cite{DM10} argue that values for asymmetry and kurtosis between --2 and +2 are indicative of normal univariate distributions, while even by the more conservative limits of --1.5 and +1.5 advocated by \cite{2012MNRAS.425..969P}, the Figure~\ref{velo_thin_apogee} DFs are found to be well-described as Gaussians. For the low-$\alpha$ sequence population we find the following velocity dispersion values, ($\sigma_{R}$, $\sigma_{\phi}$, $\sigma_{z}$) = (36.81 $\pm$ 0.07, 24.35 $\pm$ 0.04, 18.03 $\pm$ 0.03) km s$^{-1}$. Based on these values, the shape of the velocity ellipsoid for the thin disk is found to be ($\sigma_{R}:\sigma_{\phi}:\sigma_{z}$) = (1.00:0.66:0.49). We also find the vertex deviation for this population to be $\alpha_{R\phi}$ = --4.01$^{\circ}$ $\pm$ 0.09$^{\circ}$, and the tilt of the velocity ellipsoid to be $\alpha_{Rz}$ = +1.41$^{\circ}$ $\pm$ 0.02$^{\circ}$; these quantities are defined so that $\alpha_{ij}$ corresponds to the angle between the $i$-axis and the major axis of the ellipse formed by projecting the three-dimensional velocity ellipsoid onto the $ij$-plane, where $i$ and $j$ are any of the stellar velocities. We can also calculate the orbital anisotropy parameter \citep{1980MNRAS.190..873B} in spherical polar coordinates; for the thin disk we have $\beta$ = +0.57 $\pm$ 0.01. These findings are summarized in Table~\ref{Tab1} and Table~\ref{Tab2}. \begin{figure*}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{VELO_THICK_apogee.pdf} \end{center} \caption{Same as Figure \ref{velo_thin_apogee}, but for the thick disk. However, for the $\upsilon_{\phi}$ component the red curve represents the sum of a two-component Gaussian decomposition of the distribution, needed to account for the asymmetric tail due to a broader spread in asymmetric drift (see text).
The components of this two-component fit are shown by the green and grey curves and listed values. } \label{velo_thick_apogee} \end{figure*} \subsubsection{Chemically Selected Thick Disk} \label{thick} In Figure~\ref{velo_thick_apogee}, the velocity DF for the three individual space velocities is shown for the high-$\alpha$ sequence population defined in Figure~\ref{abund_APOGEE}. For this population we find a mean rotational velocity of $\overline{\upsilon_{\phi}}$ = 191.82 $\pm$ 0.24 km s$^{-1}$ and a velocity dispersion of ($\sigma_{R}$, $\sigma_{\phi}$, $\sigma_{z}$) = (62.44 $\pm$ 0.21, 44.95 $\pm$ 0.15, 41.45 $\pm$ 0.15) km s$^{-1}$. As in the case for the thin disk, we find that normal distributions (red lines in Figure~\ref{velo_thick_apogee}) give a good match for the velocity components $\upsilon_{R}$ and $\upsilon_{z}$. For example, we calculate a skewness = 0.03 and a kurtosis = 0.62 for the radial velocity component, while for the vertical velocity we have 0.02 and 0.86, respectively. However, for the rotational velocity ($\upsilon_{\phi}$) of the thick-disk population the spread in asymmetric drift is more prominent than for the thin disk, so that the $\upsilon_{\phi}$ DF is skewed to low velocities. For this velocity component we have a skewness = $-1.38$ and a kurtosis = $-4.46$. Nevertheless, remarkably, it is possible to account for this skewness adequately with only one additional normal distribution (gray line) fitted to the $\upsilon_{\phi}$ DF (see middle panel in Figure~\ref{velo_thick_apogee}). While the simplicity of a two-Gaussian fit is very convenient, it is not clear whether it has a physical meaning as representing two distinct sub-populations, or whether the mathematical contrivance just happens to work well as an explanation for what could be a more complex combination of sub-populations.
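A two-Gaussian decomposition of the kind applied above to the $\upsilon_{\phi}$ DF can be obtained with a short expectation-maximization loop; the sketch below is illustrative only (the initialization and iteration count are our assumptions, not the fitting procedure actually used):

```python
import numpy as np

def fit_two_gaussians(v, n_iter=300):
    """Fit a 1-D two-component Gaussian mixture to velocities `v` via EM.

    Returns (weights, means, dispersions); initialization uses the sample
    quartiles as crude starting means.
    """
    v = np.asarray(v, dtype=float)
    mu = np.array([np.percentile(v, 25.0), np.percentile(v, 75.0)])
    sig = np.array([v.std(), v.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each star
        pdf = w * np.exp(-0.5 * ((v[:, None] - mu) / sig) ** 2) \
              / (sig * np.sqrt(2.0 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and dispersions
        n_k = resp.sum(axis=0)
        w = n_k / len(v)
        mu = (resp * v[:, None]).sum(axis=0) / n_k
        sig = np.sqrt((resp * (v[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sig
```

On well-separated mock components the loop recovers the input means to within a few km s$^{-1}$; for strongly overlapping components the decomposition becomes degenerate, which underlines the caution expressed above about its physical interpretation.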
If it really has a physical meaning as an apparent second sub-population in $\upsilon_{\phi}$, this less-dominant sub-population has Gaussian parameters $\mu$ $\sim$ 92 km s$^{-1}$ and $\sigma$ $\sim$ 115 km s$^{-1}$, and could represent the oldest population in the thick disk, which is, on average, lagging some 100 km s$^{-1}$ behind a more rapidly rotating, and significantly dynamically colder and presumably younger thick-disk population (green distribution in Figure~\ref{velo_thick_apogee}). We find differences in the shape of the velocity ellipsoid for the thick disk with respect to the thin disk (Table~\ref{Tab1}), where ($\sigma_{R}:\sigma_{\phi}:\sigma_{z}$) = (1.00:0.78:0.67). For purposes of this calculation, we treat the total thick-disk sample as one population; this is certain to result in some increase in the measured dispersion in the $\upsilon_{\phi}$ direction. Theoretical studies of the formation of the thick disk predict a wide range of values for $\sigma_{z}$/$\sigma_{R}$. For example, \cite{2010ApJ...718..314V} found a range from $\sim$ 0.4 to 0.9 for a model with formation of the thick disk through heating due to accretion events. Assuming that a merger led to the dynamical heating of a pre-existing precursor disk to the thick disk (e.g., \citealt{2018ApJ...863..113H,2019A&A...632A...4D,2019NatAs...3..932G} and references therein), our findings may be suggestive of an encounter with a satellite on a low/intermediate orbital inclination. Interestingly, we find the tilt of the velocity ellipsoid for the thick disk to be larger than that found for the thin disk; however, our tilt, $\alpha_{Rz}$ = +5.16$^{\circ}$ $\pm$ 0.15$^{\circ}$, is lower than previous values reported for the thick disk. For example, \cite{2011ApJ...728....7C} found $\alpha_{Rz}$ = +8.6$^{\circ}$ $\pm$ 1.8$^{\circ}$, where the authors selected the thick disk population using stellar density laws and the vertical height with respect to the Galactic plane.
With a sample of $\sim$1200 red giants, \citet{2012ApJ...747..101M} found an even larger value, $\alpha_{Rz}$ = +10.0$^{\circ}$ $\pm$ 0.5$^{\circ}$. The latter authors created the thick disk sample by selecting stars with $\vert z \vert$ $>$ 1.3 kpc, and they used the individual velocities of each star to remove outliers they associated with the halo. However, a direct comparison of our results with previous measurements from the literature is difficult, because the tilt angle varies with $z$ \citep{2012ApJ...747..101M,2020MNRAS.493.2952H}. We also find the orbital anisotropy parameter for the Galactic thick disk, $\beta$ = +0.32 $\pm$ 0.01, to be lower than the value found for the thin disk, which suggests that the orbital anisotropy is mildly radial for this population. \begin{figure*}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{VELO_HALO_apogee.pdf} \end{center} \caption{Same as Figures \ref{velo_thin_apogee} and \ref{velo_thick_apogee}, but for the Milky Way halo, defined by stars with [Fe/H] $<$ $-1.0$ (black distribution). In this case, each of the three velocity components is fitted by a mixture of two Gaussian components (see text for details).
The mean and the standard deviation of the mixture distributions are shown in the top of each panel in red, and for each Gaussian component in grey and green.} \label{velo_halo_apogee} \end{figure*} \begin{table*} \caption{Tilt of the velocity ellipsoid, vertex deviation, and orbital anisotropy parameter for different populations} \begin{center} \begin{tabular}{cccc} \hline & $\alpha_{\rm Rz}$ & $\alpha_{\rm R\phi}$ & $\beta$ \\ & (degrees) & (degrees) & \\ \hline \hline Chemically selected thin disk & +1.41 $\pm$ 0.02 & --4.01 $\pm$ 0.09 & +0.57 $\pm$ 0.01 \\ Chemically selected thick disk & +5.16 $\pm$ 0.15 & --6.01 $\pm$ 0.47 & +0.32 $\pm$ 0.01 \\ Halo ([Fe/H] $<$ -1) & +11.43 $\pm$ 0.35 & --2.30 $\pm$ 0.15 & +0.20 $\pm$ 0.02 \\ \hline Disk-like rotation & +0.86 $\pm$ 0.38 & +1.18 $\pm$ 0.59 & +0.56 $\pm$ 0.02 \\ Non-rotation & +0.15 $\pm$ 0.02 & +0.16 $\pm$ 0.02 & +0.42 $\pm$ 0.02 \\ \end{tabular} \end{center} \label{Tab2} \end{table*} \subsubsection{Halo} \label{halo_pop} The stellar density of halo stars, simply defined here by stars having [Fe/H] $<$ $-1.0$, is significantly lower than that for the disk-like populations in our volume of study; hence, the selection effects driven by the APOGEE survey design \citep{2017AJ....154..198Z} may have a larger impact on this population. The black lines in Figure~\ref{velo_halo_apogee} show the velocity DF for the halo population. For the distribution in $\upsilon_{\phi}$, we use a mixture of two Gaussian components. The metal-poor stars with disk-like kinematics (gray Gaussian in the middle panel of Figure~\ref{velo_halo_apogee}) may be associated with the thick disk, while the component with a non-rotating mean motion may be associated with the inner halo (see \citealt{2000AJ....119.2843C}, \citealt{2010ApJ...712..692C}, \citealt{2018ApJ...852...49H}, \citealt{2018MNRAS.tmp.2814M} and references therein for more details).
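The tilt angles and anisotropy parameters listed in Table~\ref{Tab2} follow from standard estimators; the sketch below is a minimal version (no measurement-error deconvolution or outlier handling), using the usual relation $\tan(2\alpha_{ij}) = 2\sigma_{ij}^{2}/(\sigma_{i}^{2} - \sigma_{j}^{2})$ and the \cite{1980MNRAS.190..873B} anisotropy $\beta = 1 - (\sigma_{\theta}^{2} + \sigma_{\phi}^{2})/(2\sigma_{r}^{2})$:

```python
import numpy as np

def tilt_angle_deg(v_i, v_j):
    """Angle (deg) between the i-axis and the major axis of the velocity
    ellipsoid projected onto the ij-plane."""
    c = np.cov(v_i, v_j)  # 2x2 covariance of the two velocity components
    return 0.5 * np.degrees(np.arctan2(2.0 * c[0, 1], c[0, 0] - c[1, 1]))

def anisotropy_beta(v_r, v_theta, v_phi):
    """Binney (1980) orbital anisotropy in spherical polar coordinates."""
    return 1.0 - (np.var(v_theta) + np.var(v_phi)) / (2.0 * np.var(v_r))
```

Uncorrelated, equal-dispersion components give a zero tilt, and isotropic dispersions give $\beta = 0$, which provides a quick sanity check of the sign conventions.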
The stars in the entire halo sample ([Fe/H] $<$ $-1.0$) are characterized by a radially elongated velocity ellipsoid, where we have ($\sigma_{R}$, $\sigma_{\phi}$, $\sigma_{z}$) = (150.57 $\pm$ 1.52, 115.67 $\pm$ 1.08, 86.67 $\pm$ 0.93) km s$^{-1}$. For the entire halo population defined in this study (red lines in Figure~\ref{velo_halo_apogee}), we find a small mean prograde rotation of 35 km s$^{-1}$. These results are in good agreement with the halo properties reported in \cite{2000AJ....119.2843C} and \cite{2010ApJ...716....1B}, who also defined the halo by simple metallicity cuts. Finally, for the entire [Fe/H] $<-1.0$ halo sample we find a large angle for the tilt of the velocity ellipsoid, $\alpha_{Rz}$ = $+11.43^{\circ}$ $\pm$ $0.35^{\circ}$, while $\beta$ indicates nearly isotropic orbits for these stars. Meanwhile, the ``halo'' sub-population with disk-like kinematics in our sample shows a mean rotational velocity of $\overline{\upsilon_{\phi}}$ = 186.10 $\pm$ 1.36 km s$^{-1}$ (gray Gaussian in Figure~\ref{velo_halo_apogee}); this velocity is very similar to the mean rotational velocity we found for the Galactic thick disk (see the middle panel in Figure~\ref{velo_thick_apogee}). For the velocity dispersion of this sub-population, we find ($\sigma_{R}$, $\sigma_{\phi}$, $\sigma_{z}$) = (59.25 $\pm$ 1.12, 50.00 $\pm$ 0.91, 47.52 $\pm$ 0.94) km s$^{-1}$. These results are consistent with the values reported for the chemically selected thick disk (see Sect.~\ref{thick}), supporting previous results showing that the thick disk might exhibit an extended metal-poor tail --- more metal-poor than [Fe/H] = $-1.0$ and reaching values of [Fe/H] $\sim$ $-1.5$, or even lower (e.g., \citealt{1986ApJS...61..667N,1990AJ....100.1191M,2002AJ....124..931B,2010A&A...511L..10N,2014A&A...569A..13R,2018ApJ...852...49H,2018ApJ...852...50F,2019arXiv191206847A}).
For the other, ``more traditional'' halo component with a non-rotating average motion of $\overline{\upsilon_{\phi}}$ = -2.31 $\pm$ 1.59 km s$^{-1}$ (green Gaussian in Figure~\ref{velo_halo_apogee}), we find that the velocity dispersion is ($\sigma_{R}$, $\sigma_{\phi}$, $\sigma_{z}$) = (165.52 $\pm$ 1.97, 95.10 $\pm$ 1.12, 94.14 $\pm$ 1.14) km s$^{-1}$. This halo population has been extensively discussed in the literature (e.g., \citealt{1978ApJ...225..357S,1985ApJ...291..260R,1998AJ....116..748G,2003A&A...406..131G,2019A&A...632A...4D}, and references therein). These properties for the hotter halo component are consistent with current descriptions of the halo as an accreted population of the Milky Way. For example, \cite{2010A&A...511L..10N} used $\sim$100 halo stars in the solar neighborhood to identify a ``low-$\alpha$'' population, for which the kinematics suggest that it may have been accreted from dwarf galaxies \citep{2009ApJ...702.1058Z,2010MNRAS.404.1711P}, some specifically originating from the $\omega$ Cen progenitor galaxy \citep{2003MNRAS.346L..11B,2005MNRAS.359...93M,2012ApJ...747L..37M}. Meanwhile, \cite{2018MNRAS.478..611B} studied the orbital anisotropy for the local stellar halo, and, by comparing the observational results with cosmological simulations of halo formation, concluded that the inner halo was deposited in a major accretion event by a satellite with $M_{\rm vir}$ $>$ 10$^{10}$ M$_{\odot}$, inconsistent with a continuous accretion of dwarf satellites. \cite{2018MNRAS.478..611B} also highlighted the anisotropic structure of the remains of the merger in the velocity distribution.
Their findings echo discussions presented in simulations including a similar accretion event by \cite{2005MNRAS.359...93M}, as well as the analysis of the \cite{2004AJ....128.1177V} data-set by \cite{2011MNRAS.412.1203N}, where the locally sampled velocity distribution of a high-energy accretion event appears to produce a mixture of kinematical populations. In addition to the \cite{2018MNRAS.478..611B} study, \cite{2018Natur.563...85H} selected the retrograde halo population using APOGEE DR14 \citep{2018ApJS..235...42A} and \emph{Gaia} DR2 to conclude that the inner halo is dominated by debris from an object that, at infall, was slightly more massive than the Small Magellanic Cloud. The latter authors also argue that the merger must have led to the dynamical heating of the precursor of the Galactic thick disk, approximately 10 Gyr ago. The dwarf galaxy progenitor of this debris --- now variously called ``Gaia-Enceladus'', the ``Gaia Sausage'', or collectively the ``Gaia-Enceladus-Sausage'' (GES) --- is thought to have fallen into the Milky Way on a highly eccentric orbit ($e \sim 0.85$) to account for the predominance of stars with such radial orbits in the inner Galaxy \citep{2018ApJ...863..113H,2018MNRAS.478..611B,2018Natur.563...85H,2019MNRAS.484.4471F}. \begin{figure}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{RZ_Gaia_apo.pdf} \end{center} \caption{({\it Left}) The distribution of distance from the Galactic center projected on the Galactic plane ($R$) in kpc for the APOGEE and \emph{Gaia} sample. The Sun is assumed to be situated at $R$ = 8.3 kpc. ({\it Right}) The distribution of distance from the Galactic plane for the APOGEE and \emph{Gaia} stars. The distribution shows that most of the stars used in this study are in the vertical height range $-2.0$ $<$ $z$ $<$ 2.0 kpc.
The asymmetry in both panels is a result of the predominance of the Northern Hemisphere observations in the APOGEE sample.} \label{Gaia_apo_vol} \end{figure} \begin{figure*}[ht] \begin{center} \includegraphics[width=.497\hsize,angle=0]{DF_thin_R.pdf} \includegraphics[width=.497\hsize,angle=0]{DF_thin_Z.pdf} \includegraphics[width=.497\hsize,angle=0]{DF_thick_R.pdf} \includegraphics[width=.497\hsize,angle=0]{DF_thick_Z.pdf} \includegraphics[width=.497\hsize,angle=0]{DF_halo_R.pdf} \includegraphics[width=.497\hsize,angle=0]{DF_halo_Z.pdf} \end{center} \caption{Velocity distribution function, together with the cumulative fraction, for the three APOGEE velocity components for different ranges in Galactocentric radius, $R$, and Galactic vertical height, $z$. We show the different spatial ranges in different colors. The top panels represent the chemically selected thin disk, the middle panels the thick disk, and the bottom panels the population with [Fe/H] $<$ $-1.0$, for three different ranges in $R$ (left panels), and $z$ (right panels).} \label{RZ_distribution} \end{figure*} \begin{figure*}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{APO_model_Gaia_VELO_SEP.pdf} \end{center} \caption{The relative velocity distribution functions (normalized to a peak value of unity) for the three velocity components for the \emph{Gaia} DR2 data-set employed in this study ({\it thick black line}) and the data-driven model generated using APOGEE data for each velocity component ({\it red line}). The great similarity of the data and model distributions is clear. 
The figure also shows the model decomposition into the thin disk (solid gray line), thick disk (dash dot gray line), and halo (long dash gray line) for the three velocity components.} \label{data_driven_model} \end{figure*} \subsection{The 3D Velocity DF as a Function of $R$ and $z$} \label{3D_velo} In a larger context, our chemical-selection approach also allows us to explore the 3D velocity DF as a function of $R$ and $z$ for the different sub-populations. Figure~\ref{Gaia_apo_vol} compares the regions of the sky probed by the APOGEE (grey line) and \emph{Gaia} (black line) samples. The samples probe the region 4 $<$ $R$ $<$ 13 kpc, but the majority of stars are located at distances between 7 and 10.5 kpc. The spatial distribution for the vertical height shows that most of the selected stars have $|z|$ $<$ 2.0 kpc (right panel in Figure~\ref{Gaia_apo_vol}). The spatial distributions for the two samples are remarkably similar, despite the fact that the APOGEE survey works at brighter magnitudes and is predominantly targeted at giant stars, whereas the larger \emph{Gaia} sample studied here probes fainter magnitudes, is parallax-error-limited, and is dominated by dwarf stars. Differences in the spatial distributions mainly reflect the fact that \emph{Gaia} is an all-sky survey while the APOGEE DR16 subsample is a pencil-beam survey dominated by Northern Hemisphere observations (where APOGEE has been surveying longer), which explains why we see an asymmetry to larger $R$ and an excess for $z$ $>$ 0 with respect to \emph{Gaia} (see right panel in Figure~\ref{Gaia_apo_vol}). Figure~\ref{RZ_distribution} shows the velocity DF for three different ranges in Galactocentric radius and vertical height for the chemically selected thin disk (top panels), thick disk (middle panels), and halo (bottom panels). We also show the cumulative fraction for each distribution. The different colors show the velocity DFs for three ranges in Galactocentric radius and vertical height.
In most cases, we do not find large discrepancies between the velocity DF for the three individual components across different ranges of $R$ and $z$, especially for the thin disk and thick disk. The negligible spatial variations in the velocity DF for the different, chemically selected populations justify and motivate our strategy to use the velocity distributions discussed in Sections~\ref{thin}, \ref{thick}, and \ref{halo_pop} to build a data-driven model of Galactic stellar populations. However, it is notable that we observe a small asymmetric-drift variation with $z$ for the thick disk (right figure in the middle panels in Figure~\ref{RZ_distribution}). We also find that the innermost part of the thin disk tends to lag with respect to the rest of the population at larger Galactocentric radii (left figure in the upper panels in Figure~\ref{RZ_distribution}). On the other hand, for the [Fe/H] $< -1$ group of stars we find strong variations in the contributions of the two components in $\upsilon_{\phi}$ described in Section~\ref{halo_pop}, with the thick-disk-like population more prominent in the inner region ($R < 7.3$ kpc) and closer to the Galactic plane ($|z|$ $<$ 0.5 kpc), and the ``inner halo'' population with a non-rotating average motion dominating the distribution for the outer regions in $R$ and $z$ (see bottom panels in Figure~\ref{RZ_distribution}). \subsection{The Stellar Disk and Halo Contribution in the \emph{Gaia} Sample} \label{AD_test} To carry out an unbiased study of the spatial distributions of the Galactic components for \emph{Gaia} stars, the bulk of which lack abundance information, we use the results presented above to model the contribution from the thin disk, the thick disk, and the halo stars in the much larger \emph{Gaia} sample. The \emph{Gaia} velocity DF observed in the three components of space velocity (see Figure~\ref{vphi_APOGEE}) is a combination of the different components of the stellar disk and the Galactic halo.
Using the APOGEE velocity DF measured for the chemically distinguished disk and halo components, we can estimate the fraction of stars that belong to each of these components in the much larger \emph{Gaia} data-set. In the previous section, we verified that the APOGEE subsample is a legitimate proxy for the larger \emph{Gaia} sample. In particular, we checked which regions of the Galaxy are probed by the \emph{Gaia} sample employed in this exercise as well as the APOGEE sample, and find (Figure~\ref{Gaia_apo_vol}) that they survey comparable volumes. To assess further the ability of the derived data-driven model to quantify the number of objects in the \emph{Gaia} sample that are attributable to the thin disk, thick disk, and halo, we apply an Anderson-Darling style statistical test sensitive to discrepancies at low and high values of $\vec{\upsilon}$ \citep{1992nrfa.book.....P}, adopted as follows: \begin{equation} \label{equa} \frac{\left\langle S_{G}(\vec{\upsilon}) - S_{A}(\vec{\upsilon}) \right\rangle^{2}}{S_{A}(\vec{\upsilon})\left[1 - S_{A}(\vec{\upsilon})\right]} , \end{equation} \noindent The Anderson-Darling statistic is based on the empirical cumulative distribution function. The cumulative distribution for the \emph{Gaia} sample ($S_{G}(\vec{\upsilon})$) is calculated directly from the individual velocities in the \emph{Gaia} data-set. For the cumulative distribution calculated using the velocity DF from the APOGEE data-set ($S_{A}(\vec{\upsilon})$), we have a mixture of different distributions for the thin disk, thick disk, and halo that we draw directly from the APOGEE data (black distributions in Figures~\ref{velo_thin_apogee}, \ref{velo_thick_apogee}, and \ref{velo_halo_apogee}). The sum of the three stellar velocity distributions associated with the three Galactic components allows us to derive estimates, guided by Anderson-Darling statistical tests, of the fraction of objects that are part of the thin disk, thick disk, and halo in the \emph{Gaia} data-set.
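The Anderson-Darling-style comparison above can be evaluated directly on empirical cumulative distributions; a minimal sketch, in which evaluating both CDFs on a common velocity grid, averaging over that grid, and clipping the denominator in the tails are our implementation assumptions:

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical cumulative distribution of `sample` evaluated at `grid`."""
    s = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(s, grid, side='right') / len(s)

def ad_style_statistic(v_gaia, v_model, grid):
    """Tail-weighted mean squared difference (S_G - S_A)^2 / [S_A (1 - S_A)]."""
    s_g = ecdf(v_gaia, grid)
    s_a = ecdf(v_model, grid)
    denom = np.clip(s_a * (1.0 - s_a), 1e-6, None)  # guard the extreme tails
    return np.mean((s_g - s_a) ** 2 / denom)
```

The statistic vanishes for identical samples and grows as the model CDF drifts away from the data, with the $S_{A}(1 - S_{A})$ weighting emphasizing the tails.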
We can write $f(\vec{\upsilon})_{gaia} = \sum_{i=1}^{3} \epsilon_{i}\, f_{i}(\vec{\upsilon})_{apogee}$, where $i$ runs over the three Galactic components, and $\epsilon_{i}$ = ($\epsilon_{thin}$, $\epsilon_{thick}$, $\epsilon_{halo})$ is the fraction of stars associated with that Galactic component. Following the results in Section~\ref{3D_velo}, we assume that the shape of the velocity distribution for each component does not change in the local volume of study; what changes is the fraction of stars in each Galactic component. Figure~\ref{data_driven_model} shows the velocity distribution function for the three velocity components. The thick black line is the \emph{Gaia} DR2 data-set employed in this study; red is the data-driven model built from the chemically selected thin disk, thick disk, and halo. That the thick black {\it Gaia} data lines in Figure~\ref{data_driven_model} are almost completely obscured by the red model lines attests to how well the model describes the data. For the $\upsilon_{R}$ component, we find the fraction of the Galactic components (thin disk, thick disk and halo, respectively), $\epsilon$ = (0.777, 0.208, 0.015), while for $\upsilon_{\phi}$ we find $\epsilon$ = (0.881, 0.103, 0.016). Finally, for the vertical component, $\upsilon_{z}$, we obtain $\epsilon$ = (0.799, 0.186, 0.015). Figure~\ref{data_driven_model} also shows the decomposition into the thin disk, thick disk, and halo velocity distribution functions (gray lines) from the data-driven model showing the estimation of the fraction of objects that are part of the different Galactic structures. The radial and the vertical velocity DFs give similar fractions for the three components; however, the $\upsilon_{\phi}$ component yields a larger number of thin-disk stars compared to the thick disk.
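The $\epsilon_{i}$ fractions can be found by a brute-force scan over the mixture weights; the sketch below is illustrative (the 1$\%$ step, the common-grid representation of the CDFs, and the function name are our assumptions), reusing the tail-weighted measure described above:

```python
import numpy as np

def best_fractions(cdf_gaia, cdf_thin, cdf_thick, cdf_halo):
    """Scan (e_thin, e_thick, e_halo) in 1% steps, with the weights summing to
    one, and return the combination whose mixture CDF best matches the data.

    All CDFs are arrays evaluated on a common velocity grid.
    """
    best_stat, best_eps = np.inf, None
    for e_thin in np.arange(0.0, 1.001, 0.01):
        for e_thick in np.arange(0.0, 1.001 - e_thin, 0.01):
            e_halo = 1.0 - e_thin - e_thick
            s_a = e_thin * cdf_thin + e_thick * cdf_thick + e_halo * cdf_halo
            denom = np.clip(s_a * (1.0 - s_a), 1e-6, None)
            stat = np.mean((cdf_gaia - s_a) ** 2 / denom)
            if stat < best_stat:
                best_stat, best_eps = stat, (e_thin, e_thick, e_halo)
    return best_eps
```

On a mock data CDF built from a known mixture of three distinct population CDFs, the scan recovers the input weights to within the grid resolution.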
It is likely that this difference has to do with the fact that while the distribution of $\upsilon_{R}$ and $\upsilon_{z}$ can be reproduced with a Gaussian function (see Sections~\ref{thin} and \ref{thick}), the distribution of $\upsilon_{\phi}$ is strongly non-Gaussian. The latter is highly skewed to low velocities due to the asymmetric drift, especially for the Galactic thick disk (as seen in Figures~\ref{velo_thick_apogee} and \ref{velo_halo_apogee}), and this asymmetric, non-Gaussian behavior is more challenging for our model to describe. Nevertheless, by combining the results calculated using the three individual velocities, and taking the average value, we estimate that 81.9 $\pm$ 3.1 $\%$ of the objects in the selected \emph{Gaia} data-set are thin-disk stars, 16.6 $\pm$ 3.2 $\%$ are thick-disk stars, and 1.5 $\pm$ 0.1 $\%$ belong to the Milky Way halo. \section{Thick-disk normalization} \label{thick_norma} In this section we determine the local density normalization ($f_{\rho}$ = $\rho_{T}$/$\rho_{t}$) of the thick disk compared to the thin disk using the velocity distribution function derived above. Many previous derivations of $f_{\rho}$ were based on starcount data derived from photometric parallaxes (e.g., \citealt{1983MNRAS.202.1025G, 2008ApJ...673..864J}), with the estimates ranging from 1$\%$ to 12$\%$. \cite{2016ARA&A..54..529B} analyzed the results from 25 different photometric surveys conducted since the discovery of the thick disk \citep{1982PASJ...34..365Y} and concluded that $f_{\rho}$ = 4$\%$ $\pm$ 2$\%$ at the solar circle. However, starcount approaches to this problem are subject to the degeneracy between the derived scalelengths and scaleheights for each of the thin-disk and thick-disk components, which drives a large uncertainty in $f_{\rho}$ (\citealt{2001ApJ...553..184C, 2002ApJ...578..151S}).
Our novel approach, using the observed {\it velocity} distribution functions for the thin disk, thick disk, and halo from APOGEE as a data-driven model applied to the \emph{Gaia} data-set, is not affected by this degeneracy. Moreover, we are able to analyze the behavior of $f_{\rho}$ in the $R$-$z$ plane, averaged over the Galactocentric polar angle, $\phi$. \begin{figure}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{stellar_count.pdf} \end{center} \caption{Stellar number density as a function of Galactic cylindrical coordinates $R$ and $z$. The density is shown on a linear scale and coded from black to white. The largest numbers of stars in the \emph{Gaia} data-set per $R$-$z$ pixel are within $-1$ $<$ $z$ $<$ 1 kpc and 6 $<$ $R$ $<$ 10 kpc.} \label{stellar_count} \end{figure} To build a given $R$-$z$ pixel, we create intervals of 0.1 kpc in position, and use only pixels where the number of stars for a given interval is N($R_{i}$,$z_{i}$) $\geq$ 50. Figure~\ref{stellar_count} shows the stellar number density as a function of Galactic cylindrical coordinates $R$ and $z$ created through this exercise. The largest number of stars per pixel in the magnitude-limited \emph{Gaia} sample are within $-1$ $<$ $z$ $<$ 1 kpc and 6 $<$ $R$ $<$ 10 kpc. Using the data-driven model described in Section~\ref{model}, we create different velocity DFs, where our free parameters are the fractions of thin-disk, thick-disk, and halo stars in intervals of 1$\%$ for each population. For a given $R$-$z$ pixel, we use the observed velocity DF in the area of study, and we perform an Anderson-Darling style statistical test (as was done in Sec.~\ref{AD_test}). The minimum value in the statistical test (i.e., the minimum deviation between the cumulative distribution of the data and that of the data-driven model) is used to estimate the fraction, and hence the density normalization, of each population. We perform this analysis for the three individual velocity components separately.
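The pixelisation step (0.1 kpc intervals, keeping only pixels with at least 50 stars) can be sketched with a 2-D histogram. The mock positions, bin ranges, and sample size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Mock Galactocentric positions (kpc); a real analysis would use Gaia-derived R, z.
R = rng.normal(8.0, 1.5, 200000)
z = rng.normal(0.0, 0.6, 200000)

# 0.1 kpc pixels, as described in the text.
r_edges = np.arange(4.0, 12.0 + 0.1, 0.1)
z_edges = np.arange(-2.0, 2.0 + 0.1, 0.1)
counts, _, _ = np.histogram2d(R, z, bins=[r_edges, z_edges])

# Keep only pixels satisfying the selection N(R_i, z_i) >= 50.
usable = counts >= 50
print(f"{int(usable.sum())} of {counts.size} pixels meet the N >= 50 threshold")
```

Only the `usable` pixels would then be passed to the per-pixel Anderson-Darling fit.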
Figure~\ref{AF} shows the results from the statistical Anderson-Darling test for the three velocities (top panel). We find that the distributions are very similar regardless of the velocity component used. The bottom panel of Figure~\ref{AF} also shows the Anderson-Darling test values for the three individual velocities added in quadrature in the $R$-$z$ plane. $R$-$z$ pixels with lower numbers of stars tend to have larger Anderson-Darling test values, i.e., a larger deviation between the data and the model. \begin{figure}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{AF.pdf} \includegraphics[width=1.\hsize,angle=0]{RZ_AD_residual.pdf} \end{center} \caption{({\it Top}) Values from the Anderson-Darling statistic as a goodness-of-fit test for the individual space velocities. The values can be interpreted as the deviation between the cumulative distribution of the data and the data-driven model function. ({\it Bottom}) The Anderson-Darling test values for the three velocity components added in quadrature in the $R$-$z$ plane. } \label{AF} \end{figure} Once we know the fraction of stars that belong to the thin disk, thick disk, and halo, calculation of the thick-disk normalization, $f_{\rho}$, in the $R$-$z$ plane is straightforward. For the final value, we average the $f_{\rho}$ values obtained for each velocity component. We present the results in Figure~\ref{density_thin_RZ}, where the $R$-$z$ plane is color-coded by $f_{\rho}$. These figures, for the first time, reveal in detail how $f_{\rho}$ varies across the Galaxy. \begin{figure*}[ht] \begin{center} \includegraphics[width=.49\hsize,angle=0]{thick_disk_norma.pdf} \includegraphics[width=.49\hsize,angle=0]{thick_disk_norma_THIN.pdf} \end{center} \caption{The relative density of \emph{Gaia} stars in the $R$-$z$ plane color-coded by the fraction of stars that belong to the thick disk versus the thin disk ($f_{\rho}$).
The dark blue regions, where the $f_{\rho}$ values range from nearly zero to $\sim$ 15 $\%$, are those dominated by the thin disk population. Green regions indicate a \emph{transition region} where we have a more even mix between thin disk and thick disk populations (i.e., $f_{\rho}$ $\sim$ 50 $\%$). Finally, the red areas are where the Galactic thick disk dominates ($f_{\rho}$ $>$ 85 $\%$). The right panel is the same as the left panel, but with a different color-coding that highlights differences at lower thin disk fractions. The visualization of this new range in $f_{\rho}$ allows us to see smaller differences in the disk normalization for a given position.} \label{density_thin_RZ} \end{figure*} From inspection of Figure~\ref{density_thin_RZ}, one can clearly see where the thin-disk population dominates (dark blue), with $f_{\rho}$ values ranging from nearly zero to $\sim$ 15 $\%$. We also see a \emph{transition region}, where we have a more even mix between thin-disk and thick-disk populations (green), indicated by where the maps have $f_{\rho}$ $\sim$ 50 $\%$. Finally, we also observe the region where the population that belongs to the Galactic thick disk dominates (red; see right panel in Figure~\ref{density_thin_RZ}), shown by where $f_{\rho}$ $>$ 85 $\%$. Interestingly, we observe that the density ratio in the \emph{transition region} shows differences between the North and South. The area around $R\sim 10$ kpc does not show the same transition in density at $z$ $\sim$ $-1.5$ kpc as at $z$ $\sim$ 1.5 kpc. This could be an effect of the Galactic warp (e.g., \citealt{2006ApJ...643..881L,2019A&A...627A.150R} and references therein), but it could also be related to the excitation of wave-like structures creating a wobbly galaxy \citep{2012ApJ...750L..41W,2013MNRAS.436..101W}.
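Once best-fit fractions are in hand for a pixel, the per-pixel bookkeeping described above (adding the three Anderson-Darling minima in quadrature and averaging $f_{\rho}$ over the velocity components) amounts to simple arithmetic. The numbers below are purely illustrative placeholders, not fitted values:

```python
import numpy as np

# Hypothetical best-fit fractions (thin, thick, halo) for one R-z pixel,
# obtained independently from the three velocity components.
eps = {"v_R": (0.78, 0.20, 0.02),
       "v_phi": (0.88, 0.10, 0.02),
       "v_z": (0.80, 0.18, 0.02)}

# Hypothetical AD-test minima for the three fits, combined in quadrature.
ad = {"v_R": 0.012, "v_phi": 0.020, "v_z": 0.015}
ad_total = np.sqrt(sum(v ** 2 for v in ad.values()))

# Thick-to-thin normalization f_rho, averaged over the three components.
f_rho = np.mean([thick / thin for thin, thick, _ in eps.values()])
print(f"combined AD = {ad_total:.3f}, f_rho = {f_rho:.3f}")
```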
Furthermore, Figure~\ref{density_thin_RZ} shows a radial trend, in the sense that at increasing $R$ the thin disk dominates at larger $z$ (i.e., the blue region in Figure~\ref{density_thin_RZ} gets thicker at larger Galactocentric radius). This phenomenon could be interpreted as the thin disk flaring (e.g., \citealt{2014Natur.509..342F,2019MNRAS.483.3119T}), but it could also be related to the thick disk having a shorter scalelength than the thin disk. The latter suggestion is in line with the recent literature on this debated topic. For example, \cite{2011ApJ...735L..46B} and \cite{2012ApJ...753..148B} suggested a shorter scalelength for the chemically distinguished high--$\alpha$ disk compared to the low--$\alpha$ disk. Moreover, in the right panel of Figure~\ref{density_thin_RZ}, we show the same \emph{Gaia} data in the $R$-$z$ plane color-coded by $f_{\rho}$, but in this case the color range is restricted to 0--15$\%$. The visualization of this new range in $f_{\rho}$ allows us to see smaller differences in the disk normalization for a given position. We suggest that vertical oscillations in the disk (e.g., \citealt{2018MNRAS.480.4244C}) may be responsible for these small fluctuations in $f_{\rho}$ for the thin Galactic disk, by introducing small changes in the shape of the velocity DF in such a dynamically active disk. In the end, using our novel approach based on {\it velocities}, rather than starcounts, we find the local thick-to-thin disk density normalization to be $\rho_{T}(R_{\odot})$/$\rho_{t}(R_{\odot})$ = 2.1 $\pm$ 0.2 $\%$. Figure~\ref{halo_disk} shows the halo-to-disk density normalization. We find that the halo is most dominant in the \emph{Gaia} data-set in the region $R$ $<$ 8 kpc and $|z|$ $>$ 1.3 kpc. We find the local halo-to-disk density normalization to be $\rho_{H}(R_{\odot})$/($\rho_{T}(R_{\odot})$ + $\rho_{t}(R_{\odot})$) = 1.2 $\pm$ 0.6 $\%$.
\section{Summary and Conclusions} \label{Conclusion} Combining the precise stellar abundances from the APOGEE survey with the astrometry from \emph{Gaia}, we study the velocity DF for chemically selected low-$\alpha$ (thin-disk), high-$\alpha$ (thick-disk), and halo ([Fe/H] $< -1$) stars. Using the kinematical properties of these sub-samples, we built a data-driven model, and used it to dissect a 20\% parallax-error-limited \emph{Gaia} sample to understand the contribution of the different Galactic structural components to the velocity-space DF as a function of Galactic cylindrical coordinates $R$ and $z$. We find that 81.9 $\pm$ 3.1 $\%$ of the objects in the selected \emph{Gaia} data-set are thin-disk stars, 16.6 $\pm$ 3.2 $\%$ are thick-disk stars, and 1.5 $\pm$ 0.1 $\%$ belong to the Milky Way halo. \begin{figure}[ht] \begin{center} \includegraphics[width=1.\hsize,angle=0]{HALO_DISK.pdf} \end{center} \caption{The same as Figure \ref{density_thin_RZ}, but color-coded by the ratio of stars that belong to the halo compared to those belonging to the disk (thin + thick). The areas where the halo is most dominant occur at $R$ $<$ 8 kpc and $|z|$ $>$ 1.3 kpc.} \label{halo_disk} \end{figure} The local fraction of the Milky Way thick disk, $\rho_{T}(R_{\odot})$/$\rho_{t}(R_{\odot})$, is still under debate, as evidenced by the large spread in derived values --- ranging from 2$\%$ \citep{1983MNRAS.202.1025G} to 12$\%$ \citep{2008ApJ...673..864J} --- from starcounting methods, which are notoriously fraught with model degeneracies. Our analysis, based on the velocity characteristics of chemically selected populations, helps to break the aforementioned degeneracies, and favors lower values for the normalization (e.g., \citealt{2012ApJ...753..148B}). 
We find the local thick-to-thin disk density normalization to be $\rho_{T}(R_{\odot})$/$\rho_{t}(R_{\odot})$ = 2.1 $\pm$ 0.2 $\%$, a result consistent with the lower end of values derived using starcount/density analyses, but determined in a completely different way. Using the same methodology, the local halo-to-disk density normalization is found to be $\rho_{H}(R_{\odot})$/($\rho_{T}(R_{\odot})$ + $\rho_{t}(R_{\odot})$) = 1.2 $\pm$ 0.6 $\%$. This is several times larger than other values found recently using kinematically selected samples. For example, \cite{2020MNRAS.tmp...78A}, using high tangential velocity stars ($\upsilon_{t}$ $>$ 200 km s$^{-1}$) in \emph{Gaia} DR2, found the local halo-to-disk density normalization to be 0.47$\%$. This is in agreement with the value (0.45$\%$) found in \cite{2018A&A...615A..70P}, where the local stellar halo was selected through Action Angle distributions using \emph{Gaia} DR1 and RAVE. The differences between these kinematics-based halo samples and our chemically selected halo ([Fe/H] $<$ $-1.0$) might be explained by our halo sample including a non-negligible fraction of metal-poor, but kinematically colder, stars that may be related to the thick disk. Their inclusion may inordinately elevate our derived halo-to-disk density normalization. While overall a chemical discrimination of stellar populations, as undertaken here, does a remarkably good job, and provides many benefits over other population-separation methods, ultimately our approach is limited by the degree to which the metal-weak thick disk and halo populations overlap in chemical-abundance spaces like that shown in Figure 1. In the future, this chemical overlap may be overcome by either the use of a larger number of chemical dimensions, or the combined use of kinematics and chemistry. \begin{acknowledgements} BA acknowledges helpful conversations with B. K. Gibson, K. C. Freeman, I. Minchev and A. C.
Robin, as well as comments from the anonymous referee that improved the manuscript. BA, SRM, CRH, and XC appreciate funding from NSF grant AST-1909497. T.C.B. acknowledges partial support from grant PHY 14-30152 (Physics Frontier Center / JINA-CEE), awarded by the U.S. National Science Foundation. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'orio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. 
This publication made use of NASA's Astrophysics Data System. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the Spanish Virtual Observatory (http://svo.cab.inta-csic.es), supported by the Spanish MINECO/FEDER through grant AyA2014-55216. \end{acknowledgements}
\section{Introduction} With 5G mobile systems being gradually rolled out commercially, research discussions into next-generation mobile systems, i.e., 6G, have started \cite{Latva-Aho2019}. 5G will
incur 10 times the energy consumption of 4G due to the increased density of cells and antennas, even though energy efficiency per bit has increased. With 6G moving towards higher spectrum (such as Tera-Hertz or THz), and thus resulting in even denser networks and smaller cells, energy consumption will become a big hurdle on the way to 6G success. On the infrastructure side, a huge amount of energy will be consumed for powering numerous RF chains connected to a vast number of antennas, for extraordinarily wideband signal processing, for maintaining satisfactory coverage, and for tracking mobile devices with super-narrow beams. Therefore, reducing energy consumption and jointly coordinating distributed infrastructure to achieve network-wide optimal energy efficiency constitute the first challenge in future 6G. On the other hand, the Internet of Everything (IoE), envisioned as a major 6G application, means that a vast number of small devices will be connected to networks. These devices are typically either battery-powered or battery-less. However, conventional RF-chain-enabled communication as well as handshaking-based access are both battery killers. In order to extend the life-cycle of IoE devices, their energy harvesting capabilities from the ambient environment have to be activated. However, the communication performance of IoE devices is largely constrained by intermittent energy supplies. Therefore, designing super-low-power IoE transceivers and enabling on-demand, cost-effective energy supply to these IoE devices constitute the second challenge in future 6G. We envision that 6G mobile systems will have to be energy self-sustainable, both at the infrastructure side (such as traditional base stations) and at the device side (be it in the form of smartphones or implanted brain-computer interface devices), in order to be deployed at large scale and to achieve commercial success.
There is a consensus that visible light (VL) communications will be an integral part of 6G, in addition to the traditional medium/spectrum of radio frequency (RF), due to VL's unique properties such as massive bandwidth, inherent information security, low cost, and no harm to humans. This paper serves to bring these two currently separate research domains, i.e., RF and VL, together to provide a full-spectrum 6G mobile system. A joint design that inherently considers both RF and visible light shall be the methodology to be taken from the onset, in particular with energy self-sustainability borne in mind. Rather than reinventing new technologies for the already crowded family of 6G enabling technologies, this paper aims to explore how the existing members of this family, such as THz, VL, intelligent reflecting surfaces (IRSs) and AI, can be exploited for energy self-sustainability. This exploration needs to be conducted in a systematic and holistic manner at the design stage of 6G. Fortunately, each of these 6G candidate technologies provides promising potential to empower energy self-sustainability. This paper endeavours to shed some light on potential solutions to energy supply issues in 6G, with the hope of sparking more discussion and research into this critical aspect of 6G mobile systems. It is this article's belief that 6G systems need to be energy self-sustainable, and this feature will decide 6G's wide commercial success in the future. For this purpose, this article first organizes the key 6G enabling technologies into a hierarchical system architecture for future 6G mobile systems, as presented in Section II. Without loss of generality, the presentation of the architecture is tilted towards the angle of energy supply, which is the focus of this article. Guided by this architecture, mechanisms to potentially enable energy self-sustainability are discussed from two aspects, the infrastructure side and the IoE device side, in Sections III and IV, respectively.
The infrastructure side focuses on the distributed access network layer, with particular attention to three new features of 6G, namely, cell-free access networks and airborne access networks (the organization of the network) as well as smart surfaces (the composing materials of the network). On the IoE device side, discussions are given to high-resolution signal processing based wireless information and power transfer (WIPT), multifunctional IoE devices and human-in-the-loop based transceiver adaptation. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figure/Structure.eps} \caption{Technical challenges and solutions towards energy self-sustainable 6G.}\label{fig:Structure} \setlength{\belowcaptionskip}{0pt} \setlength{\abovecaptionskip}{0pt} \end{figure} The contribution of this article is twofold. Firstly, it proposes for the first time a 6G mobile system architecture in which the traditional base stations are decomposed into two complementing layers: central units (CUs) and distributed units (DUs). CUs are analogous to today's base stations in terms of their deployment locations, but with more intelligence. DUs reside in very close proximity to end devices due to the employment of much higher frequency spectrum in 6G. DUs may be formed as a small cell-free network and may use IRSs. Secondly, these 6G enabling technologies are reinvestigated, together with other new proposals, from the angle of energy self-sustainability, by giving insightful discussions towards commercially viable future 6G mobile systems. These investigations are centred around two complementing aspects: reducing energy consumption (i.e., energy efficiency) and opening new energy supplies. The technical challenges and solutions towards energy self-sustainable 6G are summarised in Fig.~\ref{fig:Structure}.
\section{Architecture of Energy Self-Sustainable 6G} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figure/Architecture.eps} \caption{Architecture of an energy self-sustainable 6G network}\label{fig:Arch} \setlength{\belowcaptionskip}{0pt} \setlength{\abovecaptionskip}{0pt} \end{figure} In order to achieve more stringent quality of service (QoS), 6G has to move up to even higher spectral bands than mmWave, such as the THz and visible light bands. However, 6G may still utilise the sub-6 GHz and mmWave bands, which are the legacy of 5G. Therefore, 6G is foreseen as a \emph{full-spectrum} mobile communication system. Different spectral bands will be exploited for various objectives and applications: $1)$ \textit{Sub-6 GHz band} is used for large coverage and seamless handover; $2)$ \textit{mmWave band} is leveraged both for wireless backhaul and fronthaul to fixed targets and for satisfying the high-rate requirements of mobile targets; $3)$ \textit{THz band} is used to provide ultra-high data rate and low-latency service for applications such as pervasive XR; $4)$ \textit{VL band}, with its extra bandwidth, can be further exploited for delivering holographic tele-presence with the support of high-resolution imaging and sensing. VL cannot penetrate obstacles, and it is therefore capable of creating secure wireless links. In this work, we focus on the access part of full-spectrum 6G networks. Our proposed energy self-sustainable architecture for 6G networks, as shown in Fig.~\ref{fig:Arch}, consists of three layers: 1) a central unit (CU), e.g., an AI-empowered software RAN; 2) distributed units (DUs), e.g., self-organised attocells; and 3) zero-energy IoE devices. The AI-empowered CU determines which spectrum and which DUs are used to serve a group of IoE devices. Based on the dynamics at the IoE devices and surrounding environments, the DUs form the exact cells to serve IoE devices.
Each layer in the architecture, from different dimensions, can contribute to the energy sustainability of future 6G networks. The target of a CU is to provide demanded services to the IoE devices in an energy-efficient manner and/or with green energy (e.g., produced by a nearby solar panel), all contributing to the network's energy self-sustainability. For example, if an IoE device is within an indoor area where dense visible light or THz attocells are deployed, the IoE device can be served by these attocells, which have the capability to provide fast, ultra-reliable, and energy-efficient wireless connectivity. If the IoE device is approaching a passive DU that is equipped with intelligent surfaces, the IoE device could be guided by the AI-empowered central unit to switch to the cell of the passive DU, which could provide low- or even zero-energy wireless connectivity. If an outdoor IoE device requires high-speed wireless connections for applications such as XR, while a fixed-location, high-rate DU is not available in the surroundings, the AI-empowered CU can schedule a high-rate mobile DU to serve the IoE device. Drones or mobile robots can host the mobile DU. The backhaul enables WIPT to mobile DUs for achieving energy self-sustainability. Due to the complexity of decision making, new AI techniques such as deep reinforcement learning (DRL) are a necessity for problem solving. To avoid overwhelming the CU, some processing and decision making will be conducted at DUs under the coordination of the CU. Finding the right balance between CU and DU processing is a multi-objective optimization problem that may need DRL. Software defined networking (SDN) and network function virtualization (NFV) will continue to evolve and will be employed to dynamically deploy new QoS algorithms or controllers on the fly.
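A full DRL agent is beyond the scope of a sketch, but the learn-from-feedback idea behind CU scheduling can be illustrated with a minimal epsilon-greedy bandit that learns which DU yields the best energy efficiency for a device. All DU names, efficiency values, and the reward model are hypothetical assumptions for illustration only:

```python
import random

random.seed(3)

# Hypothetical mean energy efficiency (arbitrary bits-per-Joule units) of three
# candidate DUs for a given IoE device; unknown to the CU a priori.
TRUE_EFFICIENCY = {"thz_attocell": 0.9, "vl_attocell": 0.7, "sub6_macro": 0.4}

estimates = {du: 0.0 for du in TRUE_EFFICIENCY}
counts = {du: 0 for du in TRUE_EFFICIENCY}

def observe(du):
    """Noisy energy-efficiency feedback after serving the device via `du`."""
    return TRUE_EFFICIENCY[du] + random.uniform(-0.1, 0.1)

for step in range(2000):
    if random.random() < 0.1:                  # explore a random DU
        du = random.choice(list(TRUE_EFFICIENCY))
    else:                                      # exploit the current best estimate
        du = max(estimates, key=estimates.get)
    reward = observe(du)
    counts[du] += 1
    estimates[du] += (reward - estimates[du]) / counts[du]  # running mean

best = max(estimates, key=estimates.get)
print("CU routes the device via:", best)
```

After enough feedback rounds, the scheduler converges on the most energy-efficient DU; a practical DRL agent would additionally condition on device location, QoS class, and channel state.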
For most of the 6G DUs, we envision that a massive number of distributed antennas will be densely deployed to achieve both ultra-high reliability and wide coverage. As a result, many ground-breaking techniques could be enabled and leveraged in 6G. One of them is cell-free networking, which can provide ultra-reliable and energy-efficient connectivity \cite{DenseVLC2020}. Also, the cells formed by the DUs must adapt to surrounding dynamics to perform energy-efficient communication. For example, the attocells must react to dynamic blockages in the surroundings. IoE devices play essential roles in an energy self-sustainable 6G network. First, zero-energy IoE devices, which are powered by wideband WIPT, could be primarily deployed to provide massive connections and to realize the IoE. For example, a large number of backscattering nodes can be connected to IRS-enabled DUs to create an Internet of Trees, as illustrated in Fig.~\ref{fig:Arch}. Second, in our proposed architecture for future energy self-sustainable 6G networks, it is essential for IoE devices to adapt to the surrounding environment. The adaptation can be performed without the involvement of human beings, such as physically adjusting the receiving antennas to a better position to achieve higher data rates with less energy. Another possibility is the involvement of human beings. With wireless communications moving to THz and VL, the human body can easily block the surrounding wireless links. Therefore, the future 6G network could become more energy-efficient if it guides human beings to position their bodies so as to restore the blocked wireless communication links.
\section{Distributed Units for Smart Access} Towards energy self-sustainability, every Joule of energy in 6G should be efficiently consumed in access networks by adopting the following methods: 1) breaking cellular boundaries for globally scheduling DUs to deliver seamless services and to achieve network-wide energy efficiency; 2) deploying flexible airborne access networks to increase energy efficiency by further considering the vertical dimension and to realise wireless charging for battery-powered or batteryless devices from the sky; 3) actively changing an adverse wireless environment into a beneficial one by implementing IRSs so as to substantially increase the efficiency of WIPT. \subsection{Cell-Free Access} In a cell-free massive MIMO (CFM-MIMO) system, a large number of DUs are connected to a CU via wired/wireless fronthaul. Joint coordination among these DUs results in cell-free access networks, where IoE devices are simultaneously served by cooperative DUs. Therefore, handover in a cell-free access network is different, since cellular boundaries are broken. Furthermore, joint coordination by a CU may achieve network-wide energy efficiency \cite{Jiang2018}. However, full cooperation among DUs may result in heavy tele-traffic load on the fronthaul, since it requires the exchange of network state information, the transmission of transmit beamformers and the requested information. In the higher spectral bands of 6G, such as THz and VL, signal propagation suffers from significant penetration loss when many opaque obstacles are distributed in the wireless environment. By further considering the increasing path-loss in high frequency bands, super-dense DUs have to be deployed for reducing distances and for ensuring line of sight (LoS) between DUs and IoE devices. However, more energy has to be consumed for powering such a large-scale access network, while the increased interference imposed by super-dense DUs has to be carefully managed and controlled.
Furthermore, low-cost DUs can be equipped with solar panels for harvesting energy from the ambient environment. They can be connected to the smart grid to enable energy trading among their peers. As a result, global energy cooperation may result in an improved network-wide energy efficiency. As the number of DUs grows exponentially in the cell-free access networks of 6G, wireless fronthaul is indispensable, since the expense of deploying wired fronthaul cannot be afforded. RF signal based WIPT \cite{8421584} is an economic approach to transfer both information and energy from the CU to low-cost DUs via fronthaul in order to maintain energy self-sustainability. Due to hardware limitations, only a limited number of RF chains are available at a THz transmitter, far fewer than the number of antennas packed into it. Therefore, the accuracy of transmit beams cannot be guaranteed. An energy-efficient hybrid transceiver, consisting of both digital and analog beamformers, is a compromise that tries to approach the performance of the ideal full-digital solution \cite{8733134}. Moreover, in both THz and VL based cell-free access networks, the assignment of DUs and of their antennas to IoE devices can be formulated as a network-wide energy efficiency maximisation problem, subject to satisfying the various QoS requirements of devices and suppressing the interference imposed by the large-scale DUs and their associated antennas. Handover of mobile devices in cell-free access networks with densely deployed DUs faces novel challenges. Since the number of mobile devices is much lower than that of DUs, user-centric networking is the primary principle in a cell-free access network. Different from conventional handover, where mobile devices traverse across cells, a virtual cell dynamically formed by cooperative DUs tracks the movement of a mobile device, constituting a mobile cell serving a specific mobile device.
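One simple, location-driven heuristic for such user-centric virtual cell formation is to let the $k$ DUs nearest to a device's up-to-date position serve it jointly, re-forming the set as the device moves. The deployment geometry, DU names, and cell size below are illustrative assumptions, not a proposal from the literature:

```python
import math

# Hypothetical DU positions (metres) on a dense 5-by-5 cell-free deployment.
DUS = {f"du{i}": (40.0 * (i % 5), 40.0 * (i // 5)) for i in range(25)}
K = 4  # virtual-cell size: number of cooperating DUs per device

def virtual_cell(device_xy, k=K):
    """User-centric cell formation: the k DUs closest to the device's
    current location jointly serve it."""
    dist = lambda du: math.dist(DUS[du], device_xy)
    return sorted(sorted(DUS), key=dist)[:k]

# As the device moves, the virtual cell is re-formed around it.
cell_t0 = virtual_cell((10.0, 10.0))
cell_t1 = virtual_cell((150.0, 150.0))
print(cell_t0, "->", cell_t1)
```

The serving set thus tracks the device rather than the device crossing fixed cell boundaries, which is the sense in which handover differs from the conventional cellular case.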
Therefore, the formation of a dynamic virtual cell needs to be completed in real-time in order to provide a seamless service experience, which calls for low-complexity heuristic algorithms with near-optimal performance. By exploiting up-to-date location information of mobile devices, we may significantly reduce the complexity of the algorithms and increase the energy efficiency of virtual cell formation as well as of DU/antenna assignment. Moreover, in a cell-free access network, synchronising densely deployed DUs is also essential. By exploiting the deterministic reflection of THz waves and VL, we may achieve over-the-air synchronisation among all the DUs. \subsection{Airborne Access} \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figure/A_RAN.eps} \caption{Hierarchical aerial access network} \label{fig:ARAN} \end{figure} Different sorts of aircraft constitute hierarchical airborne access networks (AANs) by carrying RF/VL transceivers on board, as illustrated in Fig.~\ref{fig:ARAN}. Generally, based on their flying height, aircraft in AANs are classified into high-altitude platforms (HAPs) (e.g., high-altitude balloons and aircraft) and low-altitude platforms (LAPs) (e.g., drones). AANs have to be energy self-sustainable, since we cannot provide stable power supply from the ground. Joint coordination among heterogeneous aircraft is capable of extending the coverage of air-to-ground communication, of realising flexible network deployment and of providing efficient WIPT services. LAPs are mainly constituted by energy-constrained devices powered by embedded batteries, as portrayed in Fig.~\ref{fig:ARAN}.
Therefore, 3D trajectories and flight control should be carefully designed in order to reduce the energy consumption of aircraft, while providing energy-efficient downlink wireless power transfer (WPT) services, downlink/uplink wireless information transfer (WIT) services and computation offloading services to battery-powered or batteryless ground devices \cite{8805125}. Moreover, the energy consumption of ground devices can be substantially reduced, since flying LAPs shorten the signal propagation distances to them. In addition, laser charging is an efficient way to power LAPs so as to extend their flight time, since it may focus a large amount of energy into a narrow beam. Laser beams are normally formed by ground stations or by aircraft in HAPs. HAPs can stay aloft for a relatively long time, as illustrated in Fig.~\ref{fig:ARAN}. A key advantage of HAPs is their self-adjustable positions, which allow them to maintain efficient WIPT towards LAPs. Moreover, HAPs can harvest stable solar energy for powering their engines and transceivers, since they float in the stratosphere \cite{9013552}. Careful coordination among aircraft in different layers may result in a network-wide optimal energy efficiency and may provide significant performance gains in the vertical dimension of energy self-sustainable 6G. \subsection{Holographic Environment} Radiated RF/VL signals are weakened as they propagate through wireless channels, which may substantially degrade WIPT performance. They may be absorbed, reflected and scattered by obstacles distributed in the wireless environment. Signals from different propagation paths may be destructively combined, which results in substantial fading. Wireless channels become even worse in the THz and VL bands, since signals can be easily absorbed by large molecules and can only be reflected \cite{Sheikh2020}.
Therefore, in order to overcome the more serious channel attenuation in the THz and VL bands, a line-of-sight (LoS) transmission path has to be guaranteed between a transmitter and receiver pair, while more energy has to be consumed for achieving satisfactory WIPT performance. In all classic communication systems, only the transmission or receiving strategies at the transceivers can be designed for efficient WIPT, such as beamforming, adaptive modulation and coding design at transmitters as well as signal combining and iterative decoding design at receivers, all of which aim at counteracting signal attenuation and fading in wireless channels. This is because the characteristics of wireless channels themselves cannot be changed, as exemplified in Fig. \ref{fig:Surfaces}. As electronic materials rapidly progress, IRSs can be exploited for actively changing the characteristics of wireless channels. For example, based on the signals' frequencies, phases and amplitudes, IRSs can be intelligently adjusted, so that the signals reflected by them are constructively combined at receivers, as illustrated in Fig. \ref{fig:Surfaces}. As a result, we may artificially create a LoS transmission path in the THz and VL bands, when no actual LoS path exists between a transmitter and receiver pair. Therefore, the received signal strength can be substantially increased without extra energy consumption, which results in more energy efficient WIPT. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figure/Surfaces.eps} \caption{Basic back-end structure of IRSs and signal propagation with/without smart wireless environment.}\label{fig:Surfaces} \setlength{\belowcaptionskip}{0pt} \setlength{\abovecaptionskip}{0pt} \end{figure} IRSs are not equipped with any active RF chains, and they are not connected to any stable energy sources. As shown in Fig. \ref{fig:Surfaces}, a 2-dimensional IRS consists of an array of passive reflectors.
Signals received by a reflector are then transferred to a programmable impedance matching network (IMN). By adjusting the electronic components in this IMN, we may arbitrarily adjust its reflection coefficient. A portion of the received signals penetrates the IMN, and their energy is harvested and stored in super-capacitors, as illustrated in Fig. \ref{fig:Surfaces}. The other portion is reflected via a programmable analog phase-shifter. Carefully designed phase-shifters may result in a constructive combination of the reflected signals at the receiver. Note that the programmable analog phase-shifters and the IMN have to be powered by the harvested energy. Therefore, it is essential to design the reflection coefficients of all reflectors. Higher reflection coefficients generate stronger reflected signals, but only limited energy can be harvested by the surface. Given insufficient harvested energy, we are only able to adjust some of the phase-shifters, not all of them. Therefore, we have to jointly select which phase-shifters to activate and how they change the phases of the reflected signals in order to maximise the signal strength at the receiver. By contrast, lower reflection coefficients result in more energy harvested by the surface. Therefore, more phase-shifters can be activated, which results in well-tuned reflected signals. However, the strengths of the reflected signals are then weak, which may degrade WIPT performance. In a nutshell, given an IRS, we need to jointly design the reflection coefficients of all the reflectors, the activation of the phase-shifters and how they tune the reflected signals. Furthermore, transmit beamforming at transmitters, intelligent reflecting at IRSs and receive combining at receivers can be jointly designed to create a holographic wireless environment in energy self-sustainable 6G \cite{Holographic}.
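The trade-off above between reflection strength and the number of phase-shifters that the harvested energy can power can be illustrated with a brute-force search over the power reflection coefficient. This is a hedged toy sketch: the incident power, harvesting efficiency and per-phase-shifter power budget below are illustrative assumptions, not figures from the article.

```python
import numpy as np

# Illustrative parameters (all assumed, not measured values).
N = 64            # number of passive reflectors on the IRS
P_in = 1e-3       # incident RF power per reflector [W]
eta = 0.5         # RF-to-DC harvesting efficiency
P_ps = 2e-3       # power needed to drive one active phase-shifter [W]

best = (0.0, 0.0)  # (reflection coefficient, received power)
for rho in np.linspace(0.0, 1.0, 101):        # power reflection coefficient
    harvested = N * (1.0 - rho) * P_in * eta  # energy flowing to the back-end
    n_act = min(N, int(harvested // P_ps))    # phase-shifters we can afford to tune
    # Aligned reflectors add in amplitude, so received power scales with
    # n_act**2 times the per-reflector reflected power.
    received = (n_act ** 2) * rho * P_in
    if received > best[1]:
        best = (rho, received)

print(f"best reflection coefficient ~ {best[0]:.2f}, received power ~ {best[1]:.2e} W")
```

The search lands at an intermediate reflection coefficient: neither harvesting everything (nothing is reflected) nor reflecting everything (no phase-shifter can be powered) maximises the coherently combined signal.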
\section{Zero-Energy IoE Devices} In energy self-sustainable 6G, battery-powered or batteryless IoE devices require controllable and on-demand WPT for maintaining a seamless service experience, while their energy consumption has to be kept as low as possible. Moreover, these IoE devices should also intelligently adapt their transceivers to their environments for achieving more efficient information reception and energy harvesting. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figure/Zero-Energy.eps} \caption{Wideband signal propagation and functional modules of zero-energy devices}\label{fig:Zero-Energy} \setlength{\belowcaptionskip}{0pt} \setlength{\abovecaptionskip}{0pt} \end{figure} \subsection{High-resolution Signal Processing based WIPT} RF signals can be exploited for flexible, controllable and on-demand WPT at any time and anywhere. On the infrastructure side, no additional hardware implementation is needed, and both unmodulated and modulated RF signals can be relied upon for WIPT. We have to fully exploit the characteristics of wideband signals for WIPT in energy self-sustainable 6G. Wireless environments are full of scatterers and reflectors, which results in multi-path transmission from a transmitter to a receiver. The number of paths identified at the receiver is determined by the following factors: \begin{itemize} \item Signal power $P_t$: a higher signal power results in a higher signal-to-noise-ratio (SNR) and clearer channel impulse responses at the receiver. \item Signal bandwidth $W$: if the propagation delay difference between two transmission paths is lower than the delay resolution $1/W$, this pair of paths cannot be identified; otherwise, the receiver is capable of distinguishing these two transmission paths.
\item Geographic positions of scatterers and reflectors: given a pair of transmission paths, if their lengths differ by at least $c/W$, where $c$ is the speed of light, they can be identified at the receiver; otherwise they cannot. \end{itemize} Therefore, given an extremely high bandwidth $W$, more scatterers and reflectors may generate more identifiable transmission paths \cite{Xu2018}. We no longer need to rely upon coarse-grained signal processing on an antenna-to-antenna basis. Instead, wide-bandwidth transmission in 6G creates the opportunity for high-resolution signal processing on the basis of identified transmission paths, which provides substantial spatial multiplexing and diversity gains for WIPT, as illustrated in Fig. \ref{fig:Zero-Energy}. By fully exploiting this additional degree of freedom in signal design and the resultant performance gain, we may also substantially reduce the energy consumption of transmitters. \subsection{Multifunctional Transceiver} IoE devices should have the ability to glean energy from downlink RF/VL signals, while their energy consumption has to be suppressed. Therefore, as portrayed in Fig. \ref{fig:Zero-Energy}, the following functional modules have to be implemented at IoE devices: 1) an IMN, a circuit that guarantees that the alternating current (AC) energy carried by the received RF signals can be delivered to the back-end by matching the impedance of the antennas to that of the back-end's circuits; 2) an energy harvester, a circuit that rectifies the AC energy carried by RF/VL signals to direct current (DC) in order to drive electronic loads or to charge energy storage units; the key electronic elements in rectifiers are diodes; 3) an energy storage unit, which can be either a battery or a super-capacitor. Batteries have long-term energy storage capabilities, but their charging efficiency is low. Super-capacitors can only store energy for a short term, but their charging efficiency is very high.
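The path-identifiability criterion from the previous subsection (two paths are resolvable when their lengths differ by at least $c/W$) can be sketched numerically. The path lengths and bandwidths below are purely illustrative assumptions.

```python
c = 3e8  # speed of light [m/s]

def resolvable_paths(path_lengths_m, bandwidth_hz):
    """Greedily count paths whose lengths differ by at least c/W,
    i.e. the delay resolution 1/W translated into distance."""
    resolution = c / bandwidth_hz
    count, last = 0, None
    for d in sorted(path_lengths_m):
        if last is None or d - last >= resolution:
            count += 1
            last = d
    return count

paths = [10.0, 10.4, 12.0, 12.05, 25.0]   # assumed multipath lengths [m]
print(resolvable_paths(paths, 100e6))     # 100 MHz bandwidth -> 3 m resolution
print(resolvable_paths(paths, 10e9))      # 10 GHz bandwidth  -> 3 cm resolution
```

At 100 MHz only the widely separated paths survive as distinct taps, while at 10 GHz all five assumed paths become individually identifiable, which is the extra degree of freedom that high-resolution WIPT exploits.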
Corresponding to different QoS requirements, we have the following pair of designs for batteryless IoE devices: \begin{itemize} \item High-rate wireless powered communication: IoE devices adopt a harvest-store-then-transmit protocol. In order to increase the energy harvesting efficiency, we should guarantee a perfect impedance matching. Only when the devices have sufficient energy in their storage units do their active information transmissions commence via RF/VL chains. Therefore, IoE devices can attain high-rate information transmission \cite{7982605}. However, powering RF/VL chains is an energy-consuming task. \item Low-rate backscatter communication: IoE devices deliberately mismatch the impedance of the receiving antennas and the back-end circuits. Therefore, the RF/VL signals received by the antennas cannot penetrate to the back-end but are backscattered instead. IoE devices can flexibly modulate their own information onto the backscattered signals in the frequency and time domains. Therefore, IoE devices do not need any RF/VL chains for information transmission, which may substantially reduce energy consumption \cite{10.1145/3300061.3345451}. Furthermore, we can fully exploit the abundant full spectrum of 6G by intelligently changing the frequencies of the backscattered signals in order to avoid congested bands. However, IoE devices still need to harvest energy for powering their signal processing modules. Therefore, the backscattering coefficients need to be carefully designed to allow some of the received RF/VL signals to flow into the back-end for energy harvesting. The IMN of IoE devices should also be programmable. \end{itemize} Furthermore, we may integrate the above-mentioned designs into a single IoE device, as exemplified in Fig. \ref{fig:Zero-Energy}. Therefore, low-rate control signalling or sporadic information transmissions can be completed by passive backscatter communication, while high-rate data transmissions can be completed by active wireless powered communication.
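The harvest-store-then-transmit protocol above can be contrasted with always-available backscatter in a toy slot-based simulation. All rates and energy figures below are assumed for illustration only.

```python
# Hedged sketch: a device harvests a fixed amount of energy per slot and
# transmits actively (high rate) only once its storage covers the RF-chain
# cost; in the remaining slots it falls back to low-rate backscatter.
def simulate(slots, harvest_per_slot, tx_cost, active_rate, backscatter_rate):
    stored, active_bits, passive_bits = 0.0, 0.0, 0.0
    for _ in range(slots):
        stored += harvest_per_slot
        if stored >= tx_cost:                 # enough energy to power the RF chain
            stored -= tx_cost
            active_bits += active_rate        # one high-rate active transmission
        else:
            passive_bits += backscatter_rate  # low-rate backscatter fallback
    return active_bits, passive_bits

active, passive = simulate(slots=100, harvest_per_slot=1.0, tx_cost=4.0,
                           active_rate=1000.0, backscatter_rate=10.0)
print(active, passive)   # active bursts dominate throughput, backscatter fills gaps
```

Even though the device spends three out of every four slots in the passive mode, the sporadic active bursts carry almost all of the delivered bits, which is why integrating both modes in a single device is attractive.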
\subsection{Human-in-the-loop} Signal propagation in the THz and VL bands heavily relies on LoS links between transmitters and receivers. If no LoS link exists, the WIPT performance severely degrades. In both indoor and outdoor scenarios, many objects in the surrounding wireless environment may block LoS links, such as walls, cabinets and moving objects. Moreover, human bodies, although mostly neglected in the design of communication systems operating in high frequency bands, act as critical blockages, especially when hand-held communication devices are taken into account. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{figure/adaptationtoenvironment.eps} \caption{Human-in-the-loop adaptation to the surrounding environment (BX: blockage)} \label{fig:adaptationtoenvironment} \end{figure} In order to improve WIPT performance, it is crucial for IoE devices to possess the ability to intelligently adapt their transceivers to dynamic wireless environments. This adaptation can be achieved with or without intervention from human users. For instance, the reconfigurable antennas of IoE devices can be automatically adjusted to better positions in order to attain a higher information rate and to harvest more energy. Interactions between devices and human users may also result in environmental adaptation; these interactions are known as ``human-in-the-loop''. As illustrated in Fig.~\ref{fig:adaptationtoenvironment}, human users may rotate their bodies to enable LoS links. In a case study of a VL based communication network, observe from the right part of Fig.~\ref{fig:adaptationtoenvironment} that allowing a rotation angle of $76^{\circ}$ achieves a system throughput gain of $81\%$~\cite{NutVLC}. The involvement of human users in transceiver adaptation is becoming commonplace in our daily life: human users holding devices routinely rotate their bodies to gain higher information rates. This is even more effective in the VL band of future 6G, since VL can be easily perceived by human eyes.
Human users may readily aim their devices at intense VL. Such transceiver adaptation naturally finds the wireless channels with the lowest attenuation. Therefore, the energy consumed by the infrastructure to radiate RF/VL signals can be significantly reduced. Based on the surrounding wireless environment, IoE devices may decide how to adapt themselves so as to reduce the energy consumed in information transmission and reception as well as to increase the energy harvested in downlink WPT, which is also a key step towards zero-energy devices. \section{Conclusion} This article provides a layered architecture of energy self-sustainable 6G in the full spectrum, which consists of AI-empowered CUs, massively deployed DUs and IoE devices. In order to deal with the tremendous amount of energy consumed for satisfying unprecedented QoS requirements, key solutions are provided for cell-free access with joint coordination of DUs, for airborne access enabling 3D networking, and for holographic environmental design enabled by IRSs to counteract the severe channel attenuation in the THz and VL bands. Furthermore, we provide vital approaches for realising zero-energy IoE devices, such as high-resolution signal processing based WIPT, multifunctional transceivers without any batteries and active transceiver adaptation to dynamic environments. However, there are still numerous challenges in making energy self-sustainable 6G a reality, which calls for a joint effort from both academia and industry.
{'timestamp': '2020-06-01T02:09:29', 'yymm': '2005', 'arxiv_id': '2005.14470', 'language': 'en', 'url': 'https://arxiv.org/abs/2005.14470'}
\section{Introduction} Heavy quarks (HQs), namely charm and bottom, are considered a solid probe to characterize the matter created in the QGP phase~\cite{Svetitsky:1987gq,Moore:2004tg,Dong:2019unq,Prino:2016cni,Andronic:2015wma,Aarts:2016hap,Cao:2018ews,Rapp:2018qla}. The large mass of heavy quarks has several implications in this context. They are produced in the early stage of the collisions by pQCD processes and, being $M_{HQ}\gg T$, thermal pair production and annihilation processes are negligible. For a perturbative interaction, where the large mass leads to collisions with small momentum transfer, the HQ propagation through the QGP medium can be described as a diffusion process assimilated to a Brownian motion \cite{Moore:2004tg,Das:2013kea,Dong:2019unq}. Furthermore, the large mass reduces the equilibration rate of heavy quarks in the medium relative to their light counterparts, leading to a thermalization time comparable to the lifetime of the fireball \cite{Greco:2017rro,Dong:2019unq}. Therefore, the propagation of HQs in the QGP has quite often been treated within the framework of the Fokker-Planck equation \cite{Moore:2004tg,vanHees:2004gq,vanHees:2005wb,vanHees:2007me,LV1,Qin:2010pf,Das:2012ck, Das:2010tj,Alberico:2013bza,He:2011qa,He:2014cla,Xu:2017obm}. However, the evidence of non-perturbative interactions and the large initial temperatures at LHC, $M_c \simeq 3 T \simeq \langle p_{bulk}\rangle$, hint at a scattering dynamics more appropriately described by a Boltzmann collision integral, which implies significant deviations from Gaussian fluctuations around the average momentum of the charm quark \cite{Das:2013kea,Gossiaux:2008jv,Gossiaux:2009mk,Ghosh:2011bw,Uphoff:2012gb,Uphoff:2011ad,Song:2015sfa,Cao:2016gvr,Cao:2017hhk}. One of the main observables for HQs, extensively used as a probe of the QGP, is the nuclear suppression factor $R_{AA}(p_T)$~\cite{Abelev:2006db,Adam:2015sza,Adler:2005xv}. It is defined as the ratio between the heavy flavor hadrons produced in nucleus-nucleus collisions and those produced in proton-proton collisions.
Another observable extensively studied is the elliptic flow~\cite{Adare:2006nq,Abelev:2014ipa}, $v_2(p_T)=\langle \cos(2\phi)\rangle$, a measure of the anisotropy in the angular distribution of heavy mesons in momentum space, as a response to the initial anisotropy in coordinate space in non-central collisions. In the literature, several studies have investigated both these observables with the aim of understanding heavy quark dynamics in the QGP, employing either the Langevin or the on-shell Boltzmann transport equation ~\cite{vanHees:2005wb,vanHees:2007me,Gossiaux:2008jv,Das:2009vy,Alberico:2011zy,Uphoff:2012gb,Lang:2012cx,Song:2015ykw,Das:2013kea,Cao:2015hia,Das:2015ana,Cao:2017hhk,Das:2017dsh,Sun:2019fud,rol2019size,Plumari:2019hzp}. However, since the QGP is strongly interacting, a full quantum description of the charm quark interaction should in principle also include the off-shell dynamics, an approach that has been developed only in \cite{Berrehrah:2013mu} for the study of the transport coefficients and is included in the PHSD approach to heavy-ion collisions \cite{Song:2015ykw,Song:2015sfa,Song:2016rzw}. In this paper we extend this study, exploring also the effects of larger widths and, in particular, discussing these effects in terms of the Fluctuation-Dissipation Theorem (FDT). Moreover, we present a first study of the time evolution of the charm momentum in a bulk medium at fixed temperature $T$, directly comparing the Langevin evolution, the on-shell Boltzmann evolution and an extension of the Boltzmann collision integral that includes the off-shell dynamics. We also discuss the impact that the off-shell dynamics can have on $R_{AA}(p_T)$. The paper is organized as follows. In the next sections we briefly present the on-shell Boltzmann transport equation, the Fokker-Planck (Langevin) equation and the definition of the drag and diffusion coefficients in both the on-shell and off-shell approaches.
In section \ref{Section:3}, we discuss the results obtained for the transport coefficients in both the on-shell and off-shell models. Section \ref{Section:4} is devoted to the dynamical evolution of charm quarks in a bulk medium at finite $T$, comparing the results obtained in the Langevin, on-shell and off-shell Boltzmann approaches. Section \ref{Section:5} contains the summary and conclusions. \section{Boltzmann transport equation and transport coefficients} In this section we are interested in studying both the transport coefficients and the time evolution of the phase-space distribution function of heavy quarks (HQs). The starting point in the study of the propagation of heavy quarks is the relativistic transport equation for HQs scattering in a bulk medium of quarks and gluons. We therefore briefly describe the relativistic Boltzmann-Vlasov equation, from which we will deduce the transport coefficients for the on-shell dynamics and the Fokker-Planck equation. The on-shell transport equation is the Boltzmann-Vlasov equation, given by the following integro-differential equation: \begin{eqnarray} \{p^\mu\partial_\mu+m^*(x)\partial_\mu m^*(x)\partial^{\mu}_p\}f_Q(x,p)=C[f_q,f_g,f_{Q}] \nonumber \\ \end{eqnarray} where $f_Q(x,p)$ and $f_{q,g}(x,p)$ are the phase-space distribution functions for the heavy quark and for the light quarks and gluons respectively, while $C[f_q, f_g, f_Q](x,p)$ is the relativistic Boltzmann-like collision integral describing the short range interaction between the heavy quark and the particles of the plasma. The distribution function of the bulk medium of quarks and gluons has in general to be determined by another set of equations, which could be the Boltzmann-Vlasov equations for quarks and gluons \cite{Ruggieri:2013ova,Plumari:2015cfa,Plumari:2019gwq}. In the present study, we want to address a direct comparison between two different dynamics: the relativistic Langevin dynamics and the relativistic Boltzmann transport theory.
In the second approach, we will discuss the role of on-shell and off-shell effects on the HQ dynamics. In order to better focus on and test the dynamics of these different approaches, the bulk medium will be considered as a thermal bath in equilibrium at some temperature $T$. Moreover, we will calculate the different transport coefficients of HQs in a static medium at finite temperature. This gives the response of the medium to the propagation of HQs under fixed thermodynamical conditions, a first step before studying the more complex case of the expanding medium in realistic uRHICs, where gradients of density and temperature are involved. Therefore, in our calculations we neglect effects caused by the space-time variation of the scalar mean fields, $\partial_\mu m^*(x)\approx 0$. Assuming that the distribution function is $x$ independent, i.e. the plasma is uniform, each variation of the distribution function is due to collisions, and the Boltzmann equation simplifies to an integro-differential equation only with respect to time: \begin{equation} \label{Boltz_simply} p^0\partial_0f_{Q}=C[f_q,f_g,f_{Q}]. \end{equation} We will consider only two-body collisions, for which the collision integral $C[f_q, f_g, f_Q](p)$ can be expressed by the following relation: \begin{align}\label{int_finale} \begin{split} &C[f]=\frac{1}{2E_p}\int\frac{d^3\textbf{q}}{2E_q (2\pi)^3}\int\frac{d^3\textbf{q}'}{2E_{q'}(2\pi)^3}\int\frac{d^3\textbf{p}'}{2E_{p'}(2\pi)^3} \\&\cdot\frac{1}{d_Q} \sum_{g,q,\bar q}|{\cal M}(g(q,\bar q) c\rightarrow g(q,\bar q) c)|^2 \\&\cdot (2\pi)^4\delta^4(p+q-p'-q') [f_Q(\textbf{p}')\hat{f}(\textbf{q}')-f_Q(\textbf{p})\hat{f}(\textbf{q})] \end{split} \end{align} where $\textbf{p}$ ($\textbf{q}$) and $\textbf{p}'$ ($\textbf{q}'$) represent respectively the initial and final momenta of the heavy quark (plasma particle) and $|{\cal M_Q}|^2$ is the squared matrix element of the scattering process.
In order to solve the collision integral it is necessary to evaluate the scattering matrix $|{\cal M_Q}|^2$. In our calculations the HQs interact with the medium by means of two-body collisions regulated by the scattering matrix of the processes $g + Q \to g + Q$ and $q(\bar q) + Q \to q(\bar q) + Q$. A successful way to treat non-perturbative effects in heavy-quark scattering is given by the quasi-particle model (QPM), in which the interaction is encoded in the quasi-particle masses, which behave like massive constituents of a free gas, plus a background field interaction given by a temperature dependent bag constant; for details see Ref.~\cite{Plumari:2011mk}. The main feature of the QPM approach is that the resulting coupling is significantly stronger than the pQCD running coupling, particularly for $T \rightarrow T_c$. It has been shown that the QPM can reproduce the lattice QCD equation of state: pressure, energy density and interaction measure $T^\mu_\mu=\epsilon-3 P$. The relations of the masses of light quarks and gluons to the coupling and temperature are calculated in a perturbative approach: \begin{eqnarray}\label{masse_QPM} &&m_g^2=\frac{1}{6}g(T)^2\left[\left(N_c+\frac{1}{2}N_f\right)T^2+\frac{N_c}{2\pi^2}\sum_{q}\mu_q^2\right],\nonumber \\ &&m_{u,d,s}^2=\frac{N_c^2-1}{8N_c}g(T)^2\left[ T^2+\frac{\mu^2_{u,d}}{\pi^2}\right] \end{eqnarray} where $N_f$ and $N_c$ are respectively the number of flavours and colours and $\mu_q$ is the chemical potential of the flavour $q$, which is neglected in our calculation. Even if the formal relation is perturbative-like, $g(T)$ is obtained by a fit to the energy density of lattice QCD (lQCD) and is expressed by: \begin{equation} g^2(T)=\frac{48\pi^2}{[(11N_c-2N_f)\ln[\lambda(\frac{T}{T_c}-\frac{T_s}{T_c})]]^2} \end{equation} where $\lambda=2.6$ and $T_s/T_c=0.57$, with $T_c=155$ MeV. We obtain a non-perturbative behaviour of the coupling, especially for $T\rightarrow T_c$.
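As a hedged numerical illustration (not part of the original derivation), the fitted coupling and the resulting thermal masses can be evaluated directly from the formulas above, assuming $N_c=N_f=3$, $\mu_q=0$ and the quoted fit parameters:

```python
import math

# QPM parameters from the text: lambda = 2.6, T_s/T_c = 0.57, T_c = 0.155 GeV.
Nc, Nf = 3, 3
lam, Ts_over_Tc, Tc = 2.6, 0.57, 0.155   # temperatures in GeV

def g2(T):
    """Fitted coupling g^2(T) of the QPM (valid for T sufficiently above T_c)."""
    x = T / Tc
    return 48 * math.pi**2 / ((11 * Nc - 2 * Nf)
                              * math.log(lam * (x - Ts_over_Tc)))**2

def m_gluon(T):
    # m_g^2 = g^2 (N_c + N_f/2) T^2 / 6 at mu_q = 0
    return math.sqrt(g2(T) * (Nc + 0.5 * Nf) * T**2 / 6.0)

def m_quark(T):
    # m_q^2 = g^2 (N_c^2 - 1) / (8 N_c) T^2 at mu_q = 0
    return math.sqrt(g2(T) * (Nc**2 - 1) / (8.0 * Nc) * T**2)

for T in (0.2, 0.4, 0.6):
    print(f"T={T} GeV: g^2={g2(T):.2f}, "
          f"m_g={m_gluon(T):.3f} GeV, m_q={m_quark(T):.3f} GeV")
```

The coupling decreases monotonically with $T$ and is strongly enhanced as $T\rightarrow T_c$, while the gluon mass stays above the light-quark mass at all temperatures, as expected from the colour factors in Eq.~(\ref{masse_QPM}).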
The evaluation of the scattering matrix has been performed considering the leading-order diagrams. In this approach the effective coupling $g(T)$ leads to effective vertices and to a dressed massive gluon propagator for $g + Q \to g + Q$ and a massive quark propagator for $q(\bar q)+ Q\to q (\bar q)+Q$ scatterings. The details of the calculations can be found in Ref. \cite{Berrehrah:2013mu}. \subsection{On-shell Transport Coefficients and Fokker-Planck equation} We give a brief description of the derivation of the HQ transport coefficients. We can express the collision integral in terms of the rate of collisions $\omega(\textbf{p},\textbf{k})$ between the HQ and the light bulk particles, where $\textbf{k}$ is the momentum transferred in the collision: \begin{equation}\label{integrale_coll} C[f]=\int d^3\textbf{k} [\omega (\textbf{p}+\textbf{k},\textbf{k})f(\textbf{p}+\textbf{k})-\omega(\textbf{p},\textbf{k})f(\textbf{p})]. \end{equation} The collision rate $\omega(\textbf{p},\textbf{k})$ is given by: \begin{equation}\label{rate} \omega(\textbf{p},\textbf{k})=d_{QGP} \int \frac{d^3\textbf{q}}{(2\pi)^3} \hat f(\textbf{q})v_{q,p} \frac{d\sigma_{p,q\rightarrow p-k,q+k}}{d\Omega}. \end{equation} In this relation $d_{QGP}$ denotes the degrees of freedom of the particle colliding with the heavy quark, $\hat f(\textbf{q})$ is the time and space independent distribution function of the plasma particle with momentum $\textbf{q}$, $v_{q,p}$ is the relative velocity and $\sigma_{p,q\rightarrow p-k,q+k}$ is the differential cross section of the scattering process.
The differential cross section can be expressed by the following relation: \begin{equation}\begin{split} \frac{d\sigma_{p,q\rightarrow p-k,q+k}}{d\Omega}=\frac{1}{(2\pi)^6}\frac{1}{v_{p,q}}\frac{1}{2E_q}\frac{1}{2E_p }\frac{1}{d_Q d_{QGP}} \sum|{\cal M_Q}|^2\\\times\frac{1}{2E_{q+k}}\frac{1}{2E_{p-k}}(2\pi)^4\delta(E_p+E_q-E_{p-k}-E_{q+k}) \end{split} \end{equation} with the sum $\sum$ intended over all the elastic scattering channels $g+Q \rightarrow g+Q$ and $q(\bar q)+ Q\rightarrow q (\bar q)+Q$. The non-linear integro-differential Boltzmann equation cannot be easily solved, and a way to simplify the calculation is to employ the Landau approximation, leading to a relativistic Fokker-Planck equation with momentum dependent transport coefficients. This assumption is physically motivated by the observation that in a collision the transferred momentum $\textbf{k}$ is small, so that we can expand the integrand: \begin{align} \begin{split} f(\textbf{p}+\textbf{k})\omega(\textbf{p}+\textbf{k},\textbf{k})=&f(\textbf{p})\omega(\textbf{p},\textbf{k})+\textbf{k}\cdot\frac{\partial}{\partial \textbf{p}}(\omega f)\\&+\frac{1}{2}k_ik_j\frac{\partial^2}{\partial p_i \partial p_j}(\omega f)+... \end{split} \end{align} Defining the following quantities: \begin{equation}\begin{split}\label{coeff} A_i(\textbf{p},T)&=\int d^3kk_i\omega(\textbf{p},\textbf{k})\\ B_{i,j}(\textbf{p},T)&=\frac{1}{2}\int d^3kk_ik_j\omega(\textbf{p},\textbf{k}) \end{split} \end{equation} the collision integral $C[f]$ in Eq.\ref{int_finale} becomes: \begin{eqnarray}\label{fokker-planck} \frac{df(\textbf{p})}{dt}&=&\frac{\partial}{\partial p_i}\left[A_i(\textbf{p},T)f(\textbf{p})+\frac{\partial}{\partial p_j}[B_{i,j}(\textbf{p},T)f(\textbf{p})]\right]. \nonumber \\ \end{eqnarray} Eq.\ref{fokker-planck} is the Fokker-Planck equation, and the quantities defined by Eq.\ref{coeff} are the drag and diffusion coefficients for the propagation of the HQ in the thermal bath at temperature $T$.
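The Fokker-Planck dynamics of Eq.~(\ref{fokker-planck}) is equivalent to a Langevin update of the momentum, $dp = -A\,p\,dt + \sqrt{2B\,dt}\,\xi$ with Gaussian noise $\xi$. The following 1D non-relativistic toy sketch (the constant drag, mass and temperature are purely illustrative, not the coefficients computed in this work) shows the relaxation towards the thermal value $\langle p^2\rangle = M\,T$ when $B$ is fixed by the fluctuation-dissipation relation $B = M\,A\,T$:

```python
import math, random

random.seed(1)
M, T = 1.5, 0.3          # charm mass and bath temperature [GeV] (illustrative)
A = 0.2                  # constant drag coefficient [1/fm] (illustrative)
B = M * A * T            # diffusion fixed by fluctuation-dissipation
dt, steps, n = 0.02, 3000, 1000

momenta = [3.0] * n      # all test charms start far from equilibrium, p = 3 GeV
for _ in range(steps):
    # Euler-Maruyama update: deterministic drag plus Gaussian kick
    momenta = [p - A * p * dt + math.sqrt(2 * B * dt) * random.gauss(0.0, 1.0)
               for p in momenta]

p2 = sum(p * p for p in momenta) / n
print(f"<p^2> = {p2:.3f} GeV^2 (thermal value M*T = {M*T:.3f} GeV^2)")
```

After many relaxation times $1/A$, the ensemble forgets its initial momentum and $\langle p^2\rangle$ fluctuates around $M\,T$, which is the content of the FDT discussed later in the paper.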
If we consider an isotropic medium, we can express the drag and diffusion coefficients by the following relations: \begin{equation}\begin{split} A_i(\textbf{p},T)&=A(p,T)p_i\\ B_{i,j}(\textbf{p},T)&=B_L(p,T)P_{i,j}^{||}(\textbf{p})+B_T(p,T)P_{i,j}^{\perp}(\textbf{p}). \end{split} \end{equation} The diffusion coefficient is expressed by a longitudinal component $B_L$ and a transverse component $B_T$ with respect to the HQ momentum, where $P_{i,j}^{||}(\textbf{p})=p_ip_j/\textbf{p}^2$ and $P_{i,j}^{\perp}(\textbf{p})=\delta_{i,j}-(p_ip_j/\textbf{p}^2)$ are the projection operators on the longitudinal and transverse momentum components. Using the definition in Eq.\ref{coeff} we get the following expression for $A_i$: \begin{eqnarray}\label{drag} & & A_i(\textbf{p},T)= \nonumber \\ &=&\frac{1}{2E_p}\int\frac{d^3\textbf{q}}{2E_q (2\pi)^3}\int\frac{d^3\textbf{q}'}{2E_{q'}(2\pi)^3}\int\frac{d^3\textbf{p}'}{2E_{p'}(2\pi)^3}\nonumber \\ &\times&\frac{1}{d_Q} \sum|{\cal M_Q}|^2(2\pi)^4\delta^4(p+q-p'-q') \nonumber \\ &\times&\hat f(q)[(p-p')_i]\equiv \left\langle \left \langle(p-p')_i\right \rangle \right\rangle. \end{eqnarray} while for $B_{i,j}$: \begin{equation}\label{diff} B_{i,j}(\textbf{p},T)=\frac{1}{2}\left\langle \left \langle(p-p')_i(p-p')_j\right \rangle \right\rangle.
\end{equation} Finally, the drag coefficient and the transverse and longitudinal diffusion coefficients can be calculated as follows: \begin{align}\begin{split}\label{B1_comp} B_L(p,T)&=\frac{p_ip_j}{p^2}B_{i,j}=\\ &=\frac{1}{2}[\left\langle \left \langle(\textbf{p}'\cdot \textbf{p})^2\right \rangle \right \rangle/p^2-2\left\langle \left \langle\textbf{p}'\cdot \textbf{p}\right \rangle \right \rangle+p^2\left\langle \left \langle\textbf{1}\right \rangle \right \rangle] \end{split} \end{align} \begin{equation}\begin{split}\label{B0_comp} B_T(p,T)&=\frac{1}{2}\left[\delta_{i,j}-\frac{p_ip_j}{p^2}\right]B_{i,j}=\\&= \frac{1}{4}[\left\langle \left \langle p'^2\right \rangle \right \rangle-\left \langle \left \langle(\textbf{p}\cdot \textbf{p}')^2\right \rangle \right \rangle/p^2]. \end{split} \end{equation} and for the drag coefficient: \begin{equation}\begin{split}\label{drag_comp} A(p,T)&=p_iA_i/p^2=\\&=\left\langle \left \langle\textbf{1}\right \rangle \right \rangle-\left\langle \left \langle\textbf{p}'\cdot\textbf{p}\right \rangle \right \rangle/p^2. \end{split} \end{equation} We recall that the standard approach to evaluate the quantities in Eq.\ref{drag} and Eq.\ref{diff} is to write the integral in the c.m. frame using the c.m. scattering angles and the momentum $\textbf{q}$ of the plasma particle: \begin{align}\begin{split}\label{F_finale} \left\langle \left \langle F(\textbf{p},\textbf{p}',T)\right \rangle \right\rangle =\frac{1}{(2\pi)^3}\int_{0}^{\infty} dq\, q^2\int_{-1}^{+1} d\cos\alpha\\\times \int_{t_{min}}^{t_{max}} dt\, v_{rel} \frac{d\sigma}{dt} \hat f(\textbf{q}) \int_{0}^{2\pi} d\phi_{cm} F(\textbf{p},\textbf{p}',T) \end{split} \end{align} where $\alpha$ is the polar angle of $\textbf{q}$ and the Mandelstam variable $t$ is expressed in terms of the momentum $\hat{\textbf{p}}$ of the heavy quark in the c.m. frame by $t=(p-p')^2=-2|\hat{\textbf{p}}|^2(1-\cos\theta_{cm})$.
Finally, the differential cross section takes the form: \begin{equation} \frac{d\sigma}{dt}=\frac{1}{16\pi}\frac{1}{[(s-M_Q^2-m^2)^2-4M_Q^2m^2]}\frac{1}{d_Q}\sum|{\cal M_Q}|^2. \end{equation} \subsection{Off-shell Transport coefficients} In order to have a more accurate description, the propagation of the heavy quark can also be treated taking into account the off-shell effects due to collisions with the quasi-particles of the plasma. The collision integral, which in the on-shell case is expressed by Eq.\ref{int_finale}, in the off-shell case can be written as: \begin{eqnarray}\label{int_finale_off} C[f]&=&\int dm_i A(m_i)\int dm_fA(m_f) \nonumber \\ &\times&\frac{1}{2E_p}\int\frac{d^3\textbf{q}}{2E_q (2\pi)^3}\int\frac{d^3\textbf{q}'}{2E_{q'}(2\pi)^3}\int\frac{d^3\textbf{p}'}{2E_{p'}(2\pi)^3}\nonumber\\ &\times&\frac{1}{\gamma_Q}\sum|{\cal M_Q}|^2(2\pi)^4\delta^4(p+q-p'-q')\nonumber\\ &\times&[f(\textbf{p}')\hat{f}(\textbf{q}',m_f)-f(\textbf{p})\hat{f}(\textbf{q},m_i)] \end{eqnarray} In particular, we want to investigate how the off-shell quantum effects modify the evolution of the charm quark with respect to the on-shell case, which is commonly used as a first approximation to study the propagation of these particles in the bulk of light quarks and gluons. The dynamical quasi-particle model (DQPM) describes QCD properties in terms of single-particle Green's functions, which leads to a description of the QGP in terms of strongly interacting massive effective quasi-particles with broad spectral functions \cite{Cassing:2009vt}. In this approach the parton masses and widths are determined by fitting the quasi-particle entropy density to the lQCD entropy density, reproducing the QCD equation of state extracted from lattice QCD calculations \cite{Bratkovskaya:2011wp}. The aim of this study is an evaluation of the off-shell effects due to the plasma quasi-particles.
In the DQPM approach of Ref.~\cite{Berrehrah:2013mu}, partons are dressed by a non-perturbative spectral function $A(q^0)$ which associates a spectrum of energies to a particle of momentum $\textbf{q}$. The ansatz used to model a nonzero width is obtained by replacing the free spectral function by a Lorentzian form \cite{Berrehrah:2013mu}. As shown in Ref.~\cite{Berrehrah:2013mu}, at small values of $p/T$ the Lorentzian form is peaked at the pole mass of the charm quark, so that a non-relativistic approximation works well. In this work we adopt such a non-relativistic approximation of the partonic spectral function, in which at small momenta $q^0\approx m$; in this way $A(q^0)$ is parametrized by a Breit-Wigner function $A^{BW}(m_i)$ \cite{Berrehrah:2013mu, Bratkovskaya}. The widths of the partons in the perturbative limit are given by $\gamma \approx g^{2} T \ln{g^{-1}}$, where the physical processes contributing to the functional form of the widths are elastic scatterings like $gg \to gg$, $gq \to gq$ and $qq(\bar{q}) \to qq(\bar{q})$. The functional forms of the bulk particle widths $\gamma_g$ and $\gamma_q$ associated to the spectral functions for $\mu_q=0$ are given by: \begin{equation}\begin{split}\label{largh_QPM} \gamma_g(T)&=\frac{1}{3}N_C\frac{g^2(T/T_C)}{8\pi} T\, \ln\left[\frac{2c}{g^2(T/T_C)}+1\right]\\ \gamma_q(T)&=\frac{1}{3}\frac{N_C^2-1}{2N_C}\frac{g^2(T/T_C)}{8\pi} T\, \ln\left[\frac{2c}{g^2(T/T_C)}+1\right] \end{split} \end{equation} By fitting the entropy density to the lQCD data, the constant $c$ is fixed to $c=14.4$. The spectral functions associated to the light quarks and gluons in the plasma are expressed by: \begin{equation}\label{Breit-Wigner} A_i^{BW}(m_i)=\frac{2}{\pi}\frac{m_i^2\gamma_i^\ast}{(m_i^2-M_i^2)^2+(m_i\gamma_i^\ast)^2} \end{equation} where $A_i^{BW}$ fulfills the normalization \begin{equation*} \int_{0}^{\infty}dm_iA_i(m_i,T)=1. \end{equation*} In Eq.
\ref{Breit-Wigner}, $M_i$ is the pole mass of the gluon and light quark defined in Eq.~\ref{masse_QPM} and $\gamma_i^*$ is the width associated to each particle mass. Such widths are related to the $\gamma_i$ calculated in the DQPM approach by the relation $2q^0_i\gamma_i=m_i\gamma^*_i$ \cite{Berrehrah:2013mu}. Since we are considering a regime where $\gamma<M_i$, the $\gamma_i^\ast$ of Eq.~\ref{Breit-Wigner} can be written as $\gamma_i^\ast \approx 2\gamma_i$. The off-shell dynamics implies that the values of the partonic masses can be different before and after the scattering process, i.e. $m_i\neq m_f$, differently from the on-shell case, in which $m_i = m_f$ and the Breit-Wigner function becomes a delta function centered at the pole mass $M_i$. \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig1.eps} \caption{Ratio between the widths calculated in the DQPM approach and the pole mass $M_i$ as a function of temperature for quarks (blue solid line) and gluons (blue dashed line). The green solid line is the corresponding case with constant ratio fixed to $\gamma^*_i/M_i=0.75$.}\label{gamma_m} \end{center} \end{figure} In Fig.~\ref{gamma_m}, the ratio between $\gamma^*_i$ and the pole mass $M_i$ is shown for gluons and light quarks. \\ If we consider the off-shell quantum effects for the plasma particles, the quantity of Eq.~\ref{F_finale} can be written as: \begin{align}\begin{split}\label{F_finale_os} \left\langle \left\langle F(m_i,m_f,\textbf{p},\textbf{p}',T)\right\rangle \right\rangle =\int dm_i A(m_i)\int dm_fA(m_f)&\\\times \frac{1}{(2\pi)^3}\int_{0}^{\infty} dq\, q^2\int_{-1}^{+1} d\cos\alpha\int_{t_{min}}^{t_{max}} dt\, v_{rel}&\\\times \frac{d\sigma}{dt} \hat f(m_i,\textbf{q}) \int_{0}^{2\pi} d\phi_{cm} F(m_i,m_f,\textbf{p},\textbf{p}',T) \end{split} \end{align} where $m_i$ and $m_f$ are the initial and final parton masses, respectively.
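As a quick numerical sanity check, the sketch below evaluates the widths of Eq.~\ref{largh_QPM} for an assumed value of $g^2$, and verifies the unit normalization of the Breit-Wigner spectral function of Eq.~\ref{Breit-Wigner} by direct integration; the pole mass and width used are illustrative numbers of the order of the DQPM ones.

```python
import numpy as np

# Widths of Eq. (largh_QPM) and normalization of the Breit-Wigner spectral
# function; g2 (squared coupling), pole mass and width are assumed values.
N_C, c, T, g2 = 3, 14.4, 0.2, 4.0
prefac = g2 / (8 * np.pi) * T * np.log(2 * c / g2 + 1)
gamma_g = (N_C / 3.0) * prefac
gamma_q = (N_C**2 - 1) / (2.0 * N_C) / 3.0 * prefac

def A_BW(m, M, gamma_star):
    # Breit-Wigner spectral function, Eq. (Breit-Wigner)
    return 2.0 / np.pi * m**2 * gamma_star / ((m**2 - M**2)**2 + (m * gamma_star)**2)

M_g, gs = 0.7, 0.5                         # GeV, illustrative pole mass and width
m = np.linspace(1e-5, 500.0, 2_000_000)
f = A_BW(m, M_g, gs)
norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(m))   # trapezoidal integral
```

The two widths share the same temperature-dependent prefactor, so their ratio is fixed by the color factors alone, $\gamma_g/\gamma_q = 2N_C^2/(N_C^2-1)$.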
In this case the Mandelstam variable is $t=(p-p')^2=2M_Q^2-2\hat{E}_p\hat{E}_{p'}+2|\hat{\textbf{p}}||\hat{\textbf{p}}'|\cos\theta_{cm}$. \section{Results for transport coefficients: on-shell and off-shell}\label{Section:3} In the following we compare the results coming from the on-shell expression in Eq.~\ref{F_finale} with the one in Eq.~\ref{F_finale_os} that includes the off-shell effects. Before systematically studying and comparing the transport coefficients in the two different approaches presented in the previous sections, we describe the features common to the following calculations. The number of thermal quark flavors is set to $n_f = 3$ and the medium temperature is kept fixed. A Boltzmann distribution is used for the thermal light quark and gluon distributions. The charm quark mass is fixed to $M_c=1.3 \, GeV$. To regulate the collinear divergence of the t-channel in the scattering matrix, the replacement $1/t \to 1/(t - m_D^2 )$ is performed, where the Debye screening mass is set to $m_D=\sqrt{4\pi\alpha_s(T)}T=g(T)T$. In the off-shell case this replacement takes the form $1/t \to 1/(t - m_D^2+i2\gamma_g(p^0_f-p^0_i) )$, where $p^0$ is the energy of the charm quark \cite{Berrehrah:2013mu}. In Fig.~\ref{Fig:drag_T} and Fig.~\ref{Fig:BT_T} we compare the transport coefficients as a function of the medium temperature at a fixed HQ momentum of $p = 0.1$ $GeV/c$ for the two approaches studied in this paper. Black solid lines refer to on-shell calculations while red dashed lines to off-shell calculations. In Fig.~\ref{Fig:drag_T} we show the results for the drag coefficient $A$. If quantum off-shell effects for the bulk are considered, the drag coefficient decreases by about $30 \%$ in the temperature regime $T\sim1-2T_c$. A similar conclusion holds for the diffusion coefficient $B_T$, as shown in Fig.~\ref{Fig:BT_T}, where we observe a reduction of about $35$ $\%$ in the same temperature range.
Both differences decrease with increasing temperature. We clarify that here we keep the coupling of quarks and gluons the same in the on-shell and off-shell case, in order to see the main direct effect of the inclusion of a finite width for the quasi-particles. Of course another approach could be to upscale $g(T)$ so as to have the same energy density in both the on-shell and off-shell case. Since the change in energy density is about $10-15\%$, this would correspond to a change of $g(T)$ of only a few percent. \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig2.eps} \caption{Drag coefficient ($A(T)$) as a function of the medium temperature at fixed HQ momentum $p=0.1 \, GeV$ for the on-shell approach (black solid line) and the off-shell approach (red dashed line).}\label{Fig:drag_T} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig3.eps} \caption{Diffusion coefficient $B_T(T)$ as a function of the medium temperature at fixed HQ momentum $p=0.1 \, GeV$ for the on-shell approach (black solid line) and the off-shell approach (red dashed line).}\label{Fig:BT_T} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig4.eps} \caption{Ratio between the drag coefficient ($A$) and the energy density ($\epsilon$) as a function of temperature for two different HQ momenta, $p=0.1 \, GeV$ and $p=5 \, GeV$. Black solid and red dashed lines are respectively for on-shell and off-shell at $p=0.1 \, GeV$, while full black circles and open red circles are respectively for on-shell and off-shell at $p=5 \, GeV$.}\label{Fig:drag_eps_T} \end{center} \end{figure} We have checked whether the decrease in the drag can be a mere effect of the decrease of the equilibrium energy density $\epsilon$. Therefore we have calculated the $A/\epsilon$ ratio, shown in Fig.~\ref{Fig:drag_eps_T}, for different values of the temperature.
We can see that when we divide the drag coefficients obtained in the on-shell and off-shell mode by the respective values of the energy density of the bulk system, the decrease of the coefficient in the off-shell case is completely re-absorbed at intermediate and high momentum. The scaling with $\epsilon$ is only partially fulfilled in the limit $p\rightarrow 0$: a difference between the on-shell and off-shell mode of about $10 \%$ remains even when the comparison is done renormalizing to the same energy density. This allows us to draw a first conclusion: there is an impact of off-shell effects at low $p$, but a sizeable part can be traced back to the change of the energy density induced by simply increasing the width. Furthermore, already at intermediate momenta $p$ the transport coefficients become the same once renormalized to the energy density, as we can see in Fig.~\ref{Fig:drag_eps_T} comparing open and filled circles. This is true at least as long as the values of the widths are relatively small, as in the DQPM approach. In this context, we also want to check the violation of the fluctuation-dissipation theorem (FDT) \cite{Das:2013kea} in the on-shell and off-shell case. The validity of this relation can be verified by evaluating the ratio between the diffusion coefficient $B_T$ obtained from the scattering matrix ${\cal M_Q}$ and the value of $B_T$ predicted by the fluctuation-dissipation relation. In order to fulfill the FDT this ratio should be equal to 1; however, when the transport coefficients are calculated with the scattering matrix, one in general obtains a significant deviation \cite{Das:2013kea}. Fig.~\ref{Fig:FDT} shows the ratio between $B_T$ and $TEA$, where $E=\sqrt{p^2+M^2}$, for on-shell and off-shell partons. We observe that the FDT is better fulfilled when the off-shell bulk is taken into account, with an improvement with respect to the on-shell case of about $10 \%$.
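As an illustration of the FDT check just described, the toy parametrization below builds the ratio $B_T/(TEA)$ with a deliberate violation at low $p$ that fades with momentum; the coefficients are assumed shapes, not the ones computed from the scattering matrix.

```python
import numpy as np

# Toy fluctuation-dissipation check: the ratio B_T/(T E A) equals 1 when the
# FDT holds exactly; here a ~10% violation at low p, fading with momentum,
# is built in by hand (assumed shapes, not the computed coefficients).
M_c, T = 1.3, 0.2                       # GeV
p = np.linspace(0.1, 5.0, 50)
E = np.sqrt(p**2 + M_c**2)
A = 0.05 / (1.0 + 0.1 * p)              # assumed drag (1/fm)
B_T = T * E * A * (0.9 + 0.1 * p / (p + 2.0))   # assumed diffusion (GeV^2/fm)
fdt_ratio = B_T / (T * E * A)           # deviates from 1 where FDT is violated
```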
Moreover, we observe that the FDT is better fulfilled at higher temperature, where we obtain a deviation lower than $15 \%$. \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig5.eps} \caption{$B_T/TEA$ as a function of temperature for both on-shell and off-shell approaches at $p=0.1 \, GeV$ and $p=5 \, GeV$. Same legend as in Fig.~\ref{Fig:drag_eps_T}.}\label{Fig:FDT} \end{center} \end{figure} In the results shown in the previous figures we have considered the widths $\gamma^*_i$ given by the DQPM approach \cite{Berrehrah:2013mu}. As shown in Fig.~\ref{gamma_m}, such widths (i.e. $\gamma^*_{g}\approx 260$ MeV, $\gamma^*_{q}\approx 110$ MeV at $T=200$ MeV) are significantly smaller than the quasi-particle masses. In order to explore also the impact of quantum off-shell effects on the transport coefficients, we artificially increase the widths, considering $\gamma^*$ about 2-3 times larger than in the DQPM approach, i.e. $\gamma^*/M=0.75$ for both quarks and gluons ($\gamma^*_{g}\approx 520$ MeV, $\gamma^*_{q}\approx 330$ MeV at $T=200$ MeV), as shown by the green line in Fig.~\ref{gamma_m}. We consider widths larger than the DQPM ones because they are conceivable in other approaches, especially those where the shear viscosity to entropy density ratio $\eta/s$ is about 0.1, while in the DQPM it stays in the range $\eta/s \sim 0.2 - 0.3$ for $T \sim T_c$. In Fig.~\ref{drag200} and Fig.~\ref{BT200} we show the HQ transport coefficients, respectively the drag $A$ and the diffusion $B_T$, as a function of the charm momentum at fixed $T=0.2 \, GeV$, including now the $\gamma^*/M=0.75$ case. We can see that, considering larger widths for the Breit-Wigner distribution with respect to the DQPM one, there is a further decrease of the transport coefficients. Furthermore, a limited improvement of the FDT validity is observed for the case of larger widths, where the FDT is satisfied within $10$ $\%$, as shown by the open green circles in Fig.~\ref{FDT200}.
\begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig6.eps} \caption{Drag coefficient ($A$) as a function of momentum at fixed medium temperature $T=0.2 \, GeV$ for the on-shell approach (black solid line) and the off-shell approach (red dashed line). The open green circles refer to the case with larger widths, with fixed ratio $\gamma^*_i/M_i=0.75$.}\label{drag200} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig7.eps} \caption{Diffusion coefficient ($B_T$) as a function of momentum at fixed medium temperature $T=0.2 \, GeV$ for the on-shell approach (black solid line) and the off-shell approach (red dashed line). The open green circles refer to the case with larger widths, with fixed ratio $\gamma^*_i/M_i=0.75$.}\label{BT200} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig8.eps} \caption{$B_T/TEA$ as a function of the charm momentum at fixed temperature $T=0.2 \, GeV$ for both on-shell and off-shell approaches. Same legend as in Fig.~\ref{drag200}.}\label{FDT200} \end{center} \end{figure} Finally, Fig.~\ref{Fig:drag_eps} shows the $A(p)/\epsilon$ ratio as a function of the charm quark momentum at temperature $T=0.2 \, \rm GeV$. Comparing the black solid line with the red dashed line, we observe a scaling between the on-shell and off-shell calculation for $p \ge 2-3 \, \rm GeV$ and a breaking of the scaling at lower momentum. This suggests that for the widths used in the DQPM the difference in the drag coefficient in the off-shell case is completely re-absorbed at high charm momentum. Furthermore, if we increase the widths, as shown by the open green circles, the drag coefficient displays a larger breaking of the scaling for $p\lesssim 2-3 \, \rm GeV$, which is maximal at $p \rightarrow 0$, where it corresponds to a reduction of about $40 \%$.
\begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{fig9.eps} \caption{$A(p)/\epsilon$ ratio as a function of the charm momentum at $T=0.2$ GeV for on-shell (black solid line) and off-shell with $\gamma^*_{DQPM}$ (red dashed line) and $\gamma^*=0.75 M$ (open green circles). }\label{Fig:drag_eps} \end{center} \end{figure} \section{Heavy quarks momentum evolution in the QGP: on- and off-shell Boltzmann and Langevin dynamics} \label{Section:4} In this section we discuss the time evolution of HQs within Boltzmann scattering with on-shell quarks and gluons, and the extension of the Boltzmann collision integral to account for off-shell conditions by means of the Breit-Wigner spectral functions, as done for the transport coefficients, see Eq.~\ref{int_finale_off}. We are interested in the evolution of the HQ distribution function $f_{Q}(x,p)$ in a thermal bulk described through the QPM approach, and we have considered a plasma in equilibrium in a box at constant temperature $T$. In this study, the starting point to investigate the HQ evolution for both the on-shell and off-shell approach is the simplified form of the Boltzmann equation expressed in Eq.~\ref{Boltz_simply}. We can write: \begin{equation}\label{bolt_vera} \frac{\partial f_{Q}}{\partial t}=\frac{1}{E_{Q}}C[f_q,f_g,f_{Q}]. \end{equation} Since in the previous equation the field gradients are discarded, it is valid for both on-shell and off-shell dynamics, with the latter embedded in $C[f_q,f_g,f_{Q}]$ according to Eq.~\ref{int_finale_off}. After a time discretization the Boltzmann equation can be written as \begin{equation}\label{Eq:discr} f(t+\Delta t,p)=f(t,p)+\frac{\Delta t}{E_{Q}}C[f] + O(\Delta t^2). \end{equation} As for the transport coefficients, the numerical solution of the Boltzmann equation is obtained by a code that implements a Monte Carlo integration method for the full collision kernel described by Eqs.~\ref{int_finale} and \ref{int_finale_off}.
Different tests have been performed in order to verify the convergence of the collision integrals in both the on-shell and off-shell case. It is important to fix the number of Monte Carlo samples $N_s$, in particular in the off-shell case, where we have two additional integrations over the spectral functions that give the weight of each initial and final mass of the light partons in the bulk. In this study, we have discretized the time and the HQ momentum \textit{p} in order to calculate the evolution of the phase-space distribution function of the charm quarks. We require that the integral over the distribution function is conserved. Therefore, we can write: \begin{equation} \frac{\partial N}{\partial t}=\int d^3p \frac{\partial f}{\partial t}=\int d^3p \frac{C[f]}{E_{Q}}\equiv \bar{C} \end{equation} where $N$ is the number of charm quarks. If the integral is not conserved, there is a variation $\Delta N$ according to $N(t)=N_0+\bar{C}\Delta t$, where $N_0$ is the initial number of charm quarks. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig10.eps} \caption{$N(t)-N_0$ as a function of time. In the left panel the different lines correspond to different momentum discretizations with the number of MC samples fixed to $N_s=10^8$, while in the right panel the different lines refer to different numbers of MC samples with $\Delta p= 0.005 \, GeV$.}\label{conv_dp} \end{center} \end{figure} In Fig.~\ref{conv_dp} an example of the convergence study of the off-shell collision integral is shown; similar results are obtained for the on-shell case. In particular, we have studied the time evolution of $\Delta N$ for different momentum discretizations $\Delta p$ (left panel) and numbers of samples $N_s$ (right panel) used for the Monte Carlo calculation of the collision integral. We have found that the most appropriate momentum discretization and number of Monte Carlo samples are $\Delta p =5 \times 10^{-3}\, GeV$ and $N_s=10^8$.
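A minimal sketch of the discretized update of Eq.~\ref{Eq:discr} is given below, with a simple relaxation-time kernel standing in for the full Monte Carlo collision integral; the relaxation time and the initial distribution are illustrative, while the momentum grid matches the $\Delta p$ quoted above. With the equilibrium distribution normalized to the initial particle number, the Euler update conserves $N$ at every step.

```python
import numpy as np

# Euler update f(t+dt) = f(t) + dt*C[f] with a relaxation-time kernel as a
# stand-in for the Monte Carlo collision integral (tau and the initial
# Gaussian are illustrative assumptions); f_eq is normalized to the initial
# particle number so that N is conserved at every step.
M, T, tau, dt = 1.3, 0.2, 1.0, 0.1      # GeV, GeV, fm/c, fm/c
p = np.arange(0.0025, 10.0, 0.005)      # momentum grid, dp = 5e-3 GeV
dp = p[1] - p[0]
E = np.sqrt(p**2 + M**2)

f = np.exp(-0.5 * ((p - 5.0) / 0.1)**2)           # ~delta at p0 = 5 GeV
f_eq = np.exp(-E / T)
f_eq *= np.sum(p**2 * f) / np.sum(p**2 * f_eq)    # match particle number

N0 = np.sum(p**2 * f) * dp
for _ in range(200):                              # evolve for 20 fm/c
    f = f + dt * (f_eq - f) / tau                 # relaxation-time C[f]
N = np.sum(p**2 * f) * dp
```

At late times the distribution relaxes to the thermal one, mimicking the approach to the equilibrium solution discussed in the text.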
A similar study has been performed for the time step $\Delta t$; we found that $\Delta t=0.1\, fm/c$ is enough to reach the convergence of the differential equation Eq.~\ref{Eq:discr}. Within the numerical approach used in this paper both particle number and energy are conserved to an accuracy better than $10^{-4}$ within the time range explored in the following figures. We have also checked that at the thermalization time $\tau_{eq.}$ the distribution reaches the equilibrium condition defined by the J\"uttner-Boltzmann solution, and that the integral over the distribution function is conserved at each time step. In the soft scattering approximation, another standard approach used to describe the HQ propagation in the bulk medium of quarks and gluons is by means of the Fokker-Planck equation of Eq.~\ref{fokker-planck}. The Fokker-Planck equation is solved by a stochastic differential equation, the Langevin equation, where the equations of motion of the HQs are given by \begin{eqnarray}\label{Langevin} dx_i&=&\frac{p_i}{E}dt \nonumber \\ dp_i&=&-Ap_idt+C_{i,j}\rho_j\sqrt{dt}. \end{eqnarray} This set of equations describes the variation of the coordinate $dx_i$ and of the momentum $dp_i$ in each time step $dt$ \cite{Rapp:2009my,LV,LV1}. In the previous equation, $A$ represents the drag force and $C_{i,j}$ is the covariance matrix that describes the stochastic force in terms of independent Gaussian-normal distributed random variables $\rho_j$. The random variables $\rho_j$ obey the distribution $p(\rho)=(2\pi)^{-3/2}e^{-\rho^2/2}$ with the conditions $<\rho_i>=0$ and $<\rho_i \rho_j>=\delta(t_i-t_j)$. The covariance matrix is related to the diffusion coefficients in the following way: \begin{equation}\label{C} C_{i,j}=\sqrt{2B_T}P_{i,j}^\perp+\sqrt{2B_L}P_{i,j}^{||} \end{equation} where $B_T$ and $B_L$ are respectively the transverse and longitudinal components of the diffusion coefficient.
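A single update step of Eqs.~\ref{Langevin} and \ref{C} can be sketched as follows; the values of $A$, $B_T$ and $B_L$ are assumed constants here, whereas in the actual calculation they are the momentum- and temperature-dependent coefficients discussed above.

```python
import numpy as np

# One Langevin step: dp_i = -A p_i dt + C_ij rho_j sqrt(dt), with the
# covariance matrix built from transverse/longitudinal projectors;
# A, B_T, B_L are assumed constants (1/fm, GeV^2/fm).
rng = np.random.default_rng(1)
A, B_T, B_L = 0.05, 0.02, 0.03
dt = 0.1                             # fm/c
pvec = np.array([5.0, 0.0, 0.0])     # GeV

phat = pvec / np.linalg.norm(pvec)
P_par = np.outer(phat, phat)         # longitudinal projector
P_perp = np.eye(3) - P_par           # transverse projector
C = np.sqrt(2 * B_T) * P_perp + np.sqrt(2 * B_L) * P_par

rho = rng.standard_normal(3)         # Gaussian-normal random kicks
dpvec = -A * pvec * dt + C @ rho * np.sqrt(dt)
p_new = pvec + dpvec
```

Since the projectors are orthogonal and idempotent, the matrix satisfies $C C = 2B_T P^{\perp} + 2B_L P^{\parallel}$, i.e. the noise reproduces the desired transverse and longitudinal diffusion.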
In general $B_L=B_T=D$ for $p\rightarrow 0$, and this is a standard choice by several groups also at finite momenta $p$ when studying the HQ observables in realistic simulations of ultra-relativistic collisions \cite{LV,LV1,Rapp1,Hees}. In the Langevin approach, the fluctuation-dissipation relation $B_T=TEA$ is commonly employed, even if a microscopic derivation in general violates such a relation at finite momentum, as we have discussed in the previous sections. We have verified that the numerical solution of the Langevin equation converges to the equilibrium solution $f_{eq}=e^{-E/T}$ at very large times. In order to fulfill this condition we reformulate the fluctuation-dissipation theorem as suggested by the pre-Ito interpretation \cite{Rapp:2009my} and we solve the Langevin equation with the condition: \begin{equation}\label{pre-ito} A(p)=\frac{D(p)}{ET}-\frac{D^\prime(p)}{p} \end{equation} therefore taking $D(p)=B_T(p)$ as calculated by the scattering matrix according to Eq.~\ref{B1_comp} and Eq.~\ref{F_finale}, and evaluating the drag force needed to achieve the equilibrium distribution at the thermalization time. This procedure is necessary to guarantee that for $t\rightarrow\infty$ ($\gtrsim \tau_{eq.}$) also the Langevin approach converges to the correct equilibrium distribution, as naturally occurs for the Boltzmann evolution. Such agreement is shown in the lower-right panel of Fig.~\ref{Fig:OnOff}. \subsection{Results on HQ momentum evolution} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig11.eps} \caption{Charm quark momentum distribution as a function of the charm quark momentum at four different times: $t=1 \, fm/c$ (upper left panel), $t=2 \, fm/c$ (upper right panel), $t=4 \, fm/c$ (lower left panel) and $t=8 \, fm/c$ (lower right panel). Black solid lines are for on-shell dynamics while red dashed lines are for off-shell dynamics.
The solid thin green line is the initial distribution, which is the same for both calculations.}\label{Fig:OnOff} \end{center} \end{figure} To investigate the differences between the heavy quark dynamics implied by the on-shell and off-shell Boltzmann dynamics, we study the time evolution of the heavy quark momentum distribution. In the following results we have considered a thermal bulk of light quarks and gluons at a temperature of $T=0.2 \, GeV$. In our calculation the initial charm quark distribution is assumed to be approximately a delta distribution at $p_0 = 5 \,GeV$, shown by the green line in the left panel of Fig.~\ref{Fig:OnOff}. In Fig.~\ref{Fig:OnOff} we show the time evolution of the momentum distribution $dN/dp$ for both on-shell (black solid line) and off-shell dynamics (red dashed line) with $\gamma^*_{DQPM}$. As shown, the Boltzmann approach with the off-shell collision integral has a slower dynamics than the on-shell one. This can be understood as due to the fact that in the off-shell case the spreading of the bulk masses according to the quark and gluon spectral functions behaves like a system with a larger average effective mass, since the part of the spectral function at larger mass has in any case a larger phase space. At $t > 4 \, fm/c$, for both cases the momentum distribution tends towards a thermal distribution at $T=0.2 \, \rm GeV$, as shown in the lower right panel of Fig.~\ref{Fig:OnOff} by the open square points. The main difference is a faster evolution in the on-shell case, which is however mainly due to the fact that the on-shell and off-shell dynamics have an underlying bulk system with a different energy density, the drag coefficients being those of Fig.~\ref{Fig:drag_eps}. In the following discussion we will instead show two different calculations for two different implementations of the drag and diffusion coefficients. The motivation is twofold.
On the one hand, we try to disentangle the pure off-shell effect on the HQ dynamics from the on-shell one. On the other hand, we are motivated by the fact that different approaches have been used to extract the HQ transport coefficients from the comparison of the observables, like the nuclear modification factor and the anisotropic flows, with the experimental data. In particular we will compare the results obtained within the Langevin approach and within the on-shell and off-shell dynamics. We first consider a case where we rescale the drag coefficient of the off-shell kernel to the on-shell one through the energy density, for the largest width considered, $\gamma^*/M=0.75$. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig12.eps} \caption{Time evolution of the charm quark momentum distribution in a thermal bulk at $T=0.2 \, GeV$. Solid lines are for the on-shell dynamics while dashed lines are for the off-shell dynamics; different colors correspond to different times. For the off-shell case in this calculation $\gamma^*_i/M_i=0.75$ and the energy density of the bulk is the same as in the on-shell case.}\label{Fig:evo_den} \end{center} \end{figure} Here we consider the on-shell and off-shell ($\gamma^*/M=0.75$) evolution for a bulk that has been tuned to have the same energy density, in agreement with lQCD calculations. This case differs from the previous one because here the bulk QGP has the same energy density in the two cases. As was shown in the previous section in Fig.~\ref{Fig:drag_eps}, the difference in the transport coefficients between the off-shell and on-shell case, which is in large part due to the difference in the energy density, is damped when one considers the physical case where both are tuned to the same energy density.
In order to achieve this, we upscale the off-shell scattering matrix by a constant factor $k=\epsilon_{on-Shell}(T)/\epsilon_{off-Shell}(T)$, which in our simulation for a thermal bulk at $T=0.2 \, GeV$ is about $k\approx 1.5$, corresponding to an underlying increase of the coupling $g(T)$ of about $6\%$. Fig.~\ref{Fig:evo_den} shows the time evolution of the charm momentum distribution for both the on-shell and off-shell dynamics. We can notice that the off-shell drag coefficient remains smaller than the on-shell one, especially at low momenta, and this implies an off-shell dynamics slightly slower with respect to the on-shell case, but the effect remains quite small. From these calculations we can assert that the differences seen in Fig.~\ref{Fig:OnOff} are mainly due to the different energy density, induced by the fact that keeping the pole value of the mass fixed and dressing the system with a finite width decreases the energy density of the system. Such an effect is nearly negligible for the off-shell case with $\gamma^*_{DQPM} \simeq 0.3-0.4\, M$, but becomes sizeable for $\gamma^*/M=0.75$. However, the pure off-shell dynamics does not show relevant differences, as we can see comparing the on-shell (solid lines) and off-shell mode (dashed lines). This suggests that the on-shell Boltzmann equation is still a quite good approximation to study the evolution of the charm momentum distribution, at least up to $\gamma^*<M$. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig13.eps} \caption{Charm quark momentum distribution as a function of the charm quark momentum at four different times: $t=1 \, fm/c$ (upper left panel), $t=2 \, fm/c$ (upper right panel), $t=4 \, fm/c$ (lower left panel) and $t=8 \, fm/c$ (lower right panel). Black solid lines are for on-shell dynamics, red dashed lines are for off-shell dynamics while blue dash-dotted lines are for Langevin dynamics.
The solid thin green line is the initial distribution, which is the same for all calculations.}\label{Fig:OnOff_DRAG2} \end{center} \end{figure} Finally, we have performed another calculation where we upscale the on-shell scattering matrix $|{\cal M_{Q}}|^2$ in order to reproduce the same drag coefficient obtained with the off-shell collision integral. This corresponds to multiplying the on-shell scattering matrix $|{\cal M_{Q}}|^2$ by a function $k(p)$. It may be considered a non-realistic case, because we have seen that the impact of the off-shell dynamics on the transport coefficients is momentum dependent and induces a slower increase of the drag coefficient at lower momenta. We have considered it in order to study theoretically what happens if the interaction is such as to generate exactly the same drag $A(p)$ at each momentum. We show the results of this set-up for the case $\gamma_{DQPM}$ in Fig.~\ref{Fig:OnOff_DRAG2}. We can see the time evolution of the HQ momentum distribution for the three different approaches: on-shell Boltzmann (solid lines), off-shell Boltzmann (dashed lines) and Langevin (dot-dashed lines). Comparing solid and dashed lines, once one imposes the same drag coefficient the two approaches show the same evolution. Notice that in the Langevin calculations we have used the pre-Ito prescription, where the diffusion coefficient is the one obtained within the off-shell calculation shown in Fig.~\ref{BT200}. As shown, the Langevin dynamics consists of a shift of the average momentum with a fluctuation around it. This includes the possibility that the HQs not only lose energy, moving the distribution to lower momenta, but can also gain energy from the bulk, producing a tail with momentum larger than the initial HQ momentum $p_0$. As shown by the comparison between the Langevin and Boltzmann dynamics, the Boltzmann evolution of the charm quark momentum does not have a Gaussian shape like the one of the Langevin approach.
At the initial stages, the Boltzmann dynamics shows, with respect to the Langevin one, a larger contribution from the gain term in the collision integral, with a global shape that is far from the Gaussian shape typical of Brownian motion \cite{Scardina:2014gxa}. \subsection{Nuclear Modification factor $R_{AA}$ in Boltzmann and off-shell dynamics} One of the main HQ observables investigated at RHIC and LHC energies is the nuclear modification factor $R_{AA}$. It expresses the effective energy loss in nucleus-nucleus collisions with respect to the production in proton-proton collisions. In general, $R_{AA}$ gives a quantitative estimate of the heavy quark-bulk interaction. Motivated by the phenomenological point of view, we have studied the impact of the results shown in the previous section on the evolution of the spectra in terms of the $R_{AA}(p)$ of charm quarks. We evaluate the nuclear modification factor using the charm quark distribution function at $t=0$ and $t=t_f$ as $R_{AA}=f_C(p,t_f)/f_C(p,t_0)$, both for the on-shell and off-shell dynamics. In these calculations, for the initial momentum distribution of the charm quarks we have used the charm quark production in Fixed Order + Next-to-Leading Log (FONLL)~\cite{Cacciari:2012ny}, which describes the D-meson spectra in proton-proton collisions after fragmentation. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig14.eps} \caption{Nuclear modification factor $R_{AA}$ as a function of the charm momentum $p$. The black solid line refers to the on-shell calculation. The red dashed line refers to the off-shell calculation with the scattering matrix scaled with the energy density, while the green open circles refer to the same calculation but with larger widths fixed to $\gamma^*_i/M_i=0.75$.}\label{Ra} \end{center} \end{figure} In Fig.~\ref{Ra} we show the nuclear modification factor $R_{AA}$ as a function of the charm quark momentum for both the on-shell and off-shell case.
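The extraction of $R_{AA}$ from the distribution functions can be sketched as below; the power-law initial spectrum and the "quenched" final one are illustrative stand-ins for the FONLL input and for the transport evolution.

```python
import numpy as np

# R_AA(p) = f_C(p, t_f) / f_C(p, t_0); the power-law initial spectrum and the
# momentum-shifted final one are illustrative stand-ins for the FONLL input
# and for the transport evolution (not the actual spectra of the paper).
p = np.linspace(0.5, 10.0, 200)
f_initial = (1.0 + p**2 / 4.0) ** (-4)           # assumed FONLL-like shape
f_final = (1.0 + (1.25 * p)**2 / 4.0) ** (-4)    # shifted to lower p (energy loss)
R_AA = f_final / f_initial                       # suppression grows with p here
```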
The black solid line refers to the on-shell calculation, while the red dashed line and the green open circles refer to the off-shell calculations with the scattering matrix scaled to the on-shell energy density, with $\gamma^*_{DQPM}$ and $\gamma^*_i/M_i=0.75$ respectively. As shown by comparing the solid and dashed lines, in the off-shell approach with $\gamma^*_{DQPM}$ the nuclear modification factor $R_{AA}$ does not show significant differences with respect to the on-shell calculation, especially at intermediate and high charm quark momentum, as seen under the same conditions for the evolution of the distribution function. Also for the case of the off-shell dynamics with $\gamma^*/M_i=0.75$ we find that the $R_{AA}(p)$ is only slightly larger at high $p$ than the on-shell one, differing from the on-shell calculation by less than $10\%$. Therefore a main result of this work is that the off-shell dynamics does not modify significantly the relation between $R_{AA}(p)$ and $D_s(T)$, and it would not represent a main source of uncertainty in the phenomenological determination of the space diffusion coefficient, which currently depends more on the hadronization mechanism, Langevin versus Boltzmann transport equation, assumptions on the bulk QGP expansion, and effects of non-equilibrium in the initial stage \cite{Xu:2018gux,Cao:2018ews,Rapp:2018qla}. \section{Summary and conclusion}\label{Section:5} We have studied the impact of the off-shell dynamics on the drag and diffusion transport coefficients. We have found that just including the off-shell dynamics of the quasi-particles, associated to a finite mass width, induces a moderate decrease of the energy density of the system, and this leads to smaller charm drag and diffusion coefficients in a momentum-dependent way.
However, when the comparison is done renormalizing the energy density of the system to that of lattice QCD, one can see that the main effect of off-shell dynamics is to reduce the increase of the drag $A(p,T)$ and of $B_T(p,T)$ at lower momenta $p \lesssim 2-3 \, \rm GeV$. Such a reduction depends on the width and is maximal at $p=0$, being about $25\%$ for $\gamma_{DQPM} \approx 0.3-0.4\, m_{q,g}$ and increasing up to about $35\%$ for $\gamma^* = 0.75\, m_{q,g}$. In both cases, at $p> 3\,\rm GeV$ such a difference disappears completely. We then have studied how a charm quark of momentum $p$ loses energy in a bulk QGP in equilibrium at temperature $T=0.2\,\rm GeV$, comparing for the first time the time evolution of the momenta in a Langevin, a Boltzmann on-shell and a Boltzmann off-shell transport approach. We find that, at least in the regime of widths $\gamma^*< M$, the evolution of the charm momenta is only slightly modified by off-shell dynamics, and the impact of the latter on $R_{AA}(p)$ is of about $5\%$, at least at momenta $p> 1\, \rm GeV$. Therefore, from a phenomenological point of view, the relation between the nuclear modification factor $R_{AA}(p_T)$ and the space diffusion coefficient (or the drag) is not significantly modified by off-shell dynamics. \vspace{2mm} \section*{Acknowledgments} S.P., M.L.S., and V.G. acknowledge the support of the INFN-SIM national project and linea di intervento 2, DFA-Unict. S.P., M.L.S., and V.G. acknowledge stimulating discussions and comments with G. Coci. \vspace{3mm}
% arXiv:2005.14359 (2020-06-01), https://arxiv.org/abs/2005.14359
\section{Introduction} \IEEEPARstart{I}{n} machine learning, computer vision, data mining, and other fields, with the increasing complexity of research objects, we often have to deal with high-dimensional data, such as text data, image data, and various gene expression data. When processing high-dimensional data, we are confronted with the curse of dimensionality and other difficulties caused by too many features, which not only make the prediction results inaccurate but also increase the computation time. It is therefore very important to process high-dimensional data properly. There are three approaches to dealing with high-dimensional data, i.e., feature extraction, feature compression and feature selection. Feature extraction generally maps data from a high-dimensional space to a low-dimensional space, as in principal component analysis (PCA), linear discriminant analysis (LDA) and neural networks. Feature compression compresses the original feature values to 0 or 1 by quantization. Feature selection uses some evaluation criteria to select feature subsets from the original feature set. In general, the features selected by feature selection methods are more interpretable. In addition, the discrimination ability of the selected features is not weaker than that of the extracted or compressed features. As an effective means to remove irrelevant features from high-dimensional data without reducing performance, feature selection has attracted much attention in recent years. \par According to whether it is independent of the classification algorithm used later, feature selection methods can be categorized as filter, wrapper, and embedded methods \cite{wang2016sparse}. Filter methods \cite{liu2013global, MingkuiMinimax} first select features, and then the feature subset obtained by the filtering operation is used to train the classifier, so filter methods are fully independent of the classifier. The essence of these methods is to use statistical indices to rank feature subsets; some popular evaluation criteria of filter methods include Fisher score, correlation coefficient, mutual information, information entropy, and variance threshold.
According to the objective function, wrapper methods \cite{freeman2013feature}, \cite{xue2012particle} select or exclude several features at a time until the best subset is selected. In other words, wrapper methods wrap the classifier and feature selection into a black box, and evaluate the performance of the selected features according to the accuracy on the feature subset. Embedded methods \cite{nie2014clustering}, \cite{mohsenzadeh2013relevance} obtain the weight coefficient of each feature by learning, and rank features according to the coefficients. The difference between the embedded and filter methods is whether the selection of features is determined through training. \begin{figure*}[t] \centering \includegraphics[width=16cm]{fig2/fig1.png} \caption{The structure of high-dimensional data (a) can be preserved by multi-step Markov transition probabilities: the relationships between data points (b) are obtained through the corresponding minimum transition probability, which connects more points, or the maximum transition probability, which connects fewer points, and different structure-preservation algorithms yield different relationships between data points (c). For the negative side (MMFS\_minP), the path connecting most data points will be discarded; while for the positive side (MMFS\_maxP), the path connecting the minimum number of data points will be kept.
After feature selection, these two algorithms can not only keep the intrinsic data structure, but also shorten the distance between data points in the same class (d) through the obtained relationships.} \label{Fig.main} \end{figure*} \par Since any tiny part of a manifold can be regarded as a Euclidean space, assuming that the data are uniformly sampled from a low-dimensional manifold in a high-dimensional Euclidean space, manifold learning aims to recover the low-dimensional manifold structure from the high-dimensional sampled data, that is, to find the low-dimensional manifold in the high-dimensional space and the corresponding embedding mapping, in order to achieve dimension reduction or data visualization. Some popular manifold learning methods include isometric feature mapping (ISOMAP) \cite{tenenbaum2000global}, Locally Linear Embedding (LLE) \cite{roweis2000nonlinear} and Laplacian eigenmaps (LE) \cite{belkin2002laplacian}, \cite{belkin2003laplacian}. Along this research line, many unsupervised feature selection methods have been proposed, such as \cite{zhao2007spectral}, \cite{he2006laplacian} and \cite{cai2010unsupervised}, which select features using the local structure information of the data points; these methods show that local structure information is helpful for selecting an effective feature subset. However, most of these methods only use the structure information between adjacent data points, and the manifold structure of the whole data is not sufficiently exploited. \par In this paper, inspired by the above analysis, we propose a new unsupervised feature selection approach called MMFS (Multi-step Markov transition probability for Feature Selection). As shown in Fig. 1, the core idea is to use multi-step transition probabilities to characterize the data relationships on the manifold. Based on the idea of keeping the data structure, we perform feature selection from both positive and negative aspects.
On the positive side, the maximum multi-step transition probability that can be reached in a certain number of steps between any data pair is used to describe the compact data structure. The features which best keep this data structure are selected. On the negative side, to characterize the loose data structure, the minimum multi-step transition probability that can be reached in a certain number of steps between any data pair is used. The features that least maintain this loose data structure are selected. These two ways are also combined to form a new algorithm which can obtain average performance. Thus, three algorithms are proposed. The main contributions of our work can be summarized as follows. \par 1) A novel unsupervised feature selection approach is proposed, which can sufficiently use and keep the data structure on the manifold. Instead of directly using the Euclidean distance, the multi-step Markov transition probability is used to describe the data structure. \par 2) Different from the existing solutions, we design two algorithms from both positive and negative viewpoints. Features which best and least keep the corresponding data structures, respectively, are selected. After feature selection, the data in the low-dimensional space will be more compact. \par 3) Comprehensive experiments are performed on eight benchmark data sets, which show the good performance of the proposed approach compared with state-of-the-art unsupervised feature selection methods. \par The rest of this paper is organized as follows. Section \uppercase\expandafter{\romannumeral2} reviews some related work, and Section \uppercase\expandafter{\romannumeral3} presents the notations and definitions used in this paper. In Section \uppercase\expandafter{\romannumeral4}, we propose the new approach MMFS. The optimization method is presented in Section \uppercase\expandafter{\romannumeral5}, and the experimental results are presented and analyzed in Section \uppercase\expandafter{\romannumeral6}.
Finally, conclusions are given in Section \uppercase\expandafter{\romannumeral7}. \section{RELATED WORK} According to whether label information is used or not, feature selection methods can be divided into three different types: supervised, semi-supervised and unsupervised feature selection. Supervised feature selection methods usually make full use of the ground-truth class labels to select more discriminative features. Most supervised methods evaluate feature relevance based on the correlations of the features with the labels, e.g., Fisher score \cite{gu2012generalized}, Relief-F \cite{robnik2003theoretical}, spectral analysis \cite{zhao2007semi}, trace ratio \cite{nie2008trace}, information entropy \cite{yu2003feature}, Pearson correlation coefficients \cite{lee1988thirteen}, mutual information \cite{peng2005feature}, \cite{yang2012effective}, and the Hilbert-Schmidt independence criterion \cite{song2007supervised}. Recently, some methods \cite{he20122}, \cite{cai2011multi} have applied the $l_{2,1}$-norm to improve the performance. A number of semi-supervised feature selection methods have been presented during the past ten years. Semi-supervised feature selection focuses on the problem of using a small amount of labeled data and a large amount of unlabeled data for feature selection. Most semi-supervised feature selection methods score the features based on a ranking criterion, such as the Laplacian score or Pearson's correlation coefficient. For example, Zhao et al. \cite{zhao2007semi} presented a semi-supervised feature selection method based on spectral analysis. Doquire et al. \cite{doquire2013graph} introduced a semi-supervised feature selection algorithm that relies on the notion of the Laplacian score. And Xu et al. \cite{xu2016semisupervised} combined a max-relevance and min-redundancy criterion based on Pearson's correlation coefficient with semi-supervised feature selection.
\par Unsupervised feature selection without any label information is more difficult and challenging. Because ground-truth class labels are costly to obtain, it is desirable to develop unsupervised feature selection methods. Our method belongs to this category, which will be detailed in the following. \subsection{Unsupervised Feature Selection Method} An important approach to achieving unsupervised feature selection is to select features by the local structure information of the data points. There are many ways to exploit the local structure information of data points. In the early stage, methods usually constructed an affinity graph to get the local structure information first, and then selected the features. For example, Zhao et al. \cite{zhao2007spectral} proposed a unified framework for feature selection based on spectral graph theory, which is based on a general similarity matrix. \par More recently, methods have been able to simultaneously obtain structural information and perform feature selection. Gu et al. \cite{gu2012locality} proposed a locality preserving feature selection method; it aims to minimize the locality preserving criterion based on a subset of features by learning a linear transformation. Hou et al. \cite{hou2013joint} defined a novel unsupervised feature selection framework, in which the embedding learning and sparse regression are performed jointly. Nie et al. \cite{nie2016unsupervised} came up with an unsupervised feature selection framework which simultaneously performs feature selection and local structure learning. Shi et al. \cite{shi2016cluster} incorporated spectral clustering, discriminative analysis, and correlation information between multiple views into a unified framework. Zhao et al. \cite{zhao2015graph} presented an unsupervised feature selection approach which selects features by preserving the local structure of the original data space via graph regularization and reconstructing each data point approximately via a linear combination.
An et al. \cite{an2017unsupervised} preserved locality information through a probabilistic neighborhood graph to select features and combined feature selection with robust joint clustering analysis. The method proposed by Fan et al. \cite{fan2017structure} can select more informative features and learn the structure information of the data points at the same time. Dai et al. \cite{dai2018unsupervised} proposed a method in which each feature is represented by a linear combination of other features while maintaining the local geometrical structure and the ordinal locality of the original data. \par Adaptive methods have also been introduced to select more effective features. The method proposed by Luo et al. \cite{luo2017adaptive} selects features through an adaptive reconstruction graph to learn local structure information, and imposes a rank constraint on the corresponding Laplacian matrix to learn the multi-cluster structure. Chen et al. \cite{chen2018local} came up with a local adaptive projection framework that simultaneously learns an adaptive similarity matrix and a projection matrix. Zhang et al. \cite{zhang2019unsupervised} proposed a method that adaptively unifies the similarities under different measure functions and introduced an $l_{2,0}$ constraint to select features. \par In addition, various other techniques have been used to achieve unsupervised feature selection. Some unsupervised feature selection methods remove redundant features to improve the performance. Li et al. \cite{li2018generalized} removed the redundant features and embedded the local geometric structure of the data into the manifold learning to preserve the most discriminative information. Wang et al. \cite{wang2018hierarchical} selected features hierarchically to prune redundant features and preserve useful ones. Nie et al. \cite{nie2018general} defined an auto-weighted feature selection framework via global redundancy minimization to select non-redundant features. Xue et al.
\cite{xue2019self} presented a self-adaptive algorithm based on an EC method to solve the local optimum stagnation problem caused by a large number of irrelevant features. \subsection{Markov Walk} \par Our method MMFS is also closely related to methods based on Markov random walks. Szummer et al. \cite{szummer2002partially} combined a limited number of labeled samples with a Markov random walk representation on unlabeled samples to classify a large number of unlabeled samples. Haeusser et al. \cite{haeusser2017associative} used a two-step Markov random walk on a graph that starts from the source-domain samples and returns to same-class samples through the target-domain data in the embedded space. Sun et al. \cite{sun2019neural} proposed a method which applies a random walker to define diffusion distance modules for measuring the distances among nodes on a graph. In contrast to these approaches, we use multi-step transition probabilities to characterize the data structure and try to select the feature subset which best keeps the original data structure. \section{NOTATIONS AND DEFINITIONS} We summarize the notations and the definitions used in our paper. We denote all matrices with boldface uppercase letters and vectors with boldface lowercase letters. For a matrix $\mathbf{M}$, the element in the $ {i_{th}} $ row and $ {j_{th}} $ column is represented as ${m_{ij}}$, and the transpose of matrix $\mathbf{M}$ is denoted as ${\mathbf{M}^T} $. Its Frobenius norm is denoted by $ ||\mathbf{M}|{|_F} = \sqrt {\sum\nolimits_{i = 1}^n {\sum\nolimits_{j = 1}^m {m_{ij}^2} } } $. The $l_{2,1}$-norm of the matrix $\mathbf{M}$ is denoted as $ ||\mathbf{M}|{|_{2,1}} = \sum\nolimits_{i = 1}^n {\sqrt {\sum\nolimits_{j = 1}^m {m_{ij}^2} } } $.
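As a quick numerical illustration of these two norms (the matrix below is a toy example, not from the paper), a NumPy sketch:

```python
import numpy as np

# Toy matrix with rows (3, 4), (0, 0), (5, 12).
M = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [5.0, 12.0]])

# Frobenius norm: square root of the sum of all squared entries.
fro = np.sqrt((M ** 2).sum())            # sqrt(194)

# l_{2,1} norm: sum of the l_2 norms of the rows (5 + 0 + 13 = 18).
# Penalizing it drives whole rows of a weight matrix to zero,
# which is what row-sparse feature selection relies on.
l21 = np.linalg.norm(M, axis=1).sum()

print(fro, l21)
```

Minimizing the squared $l_{2,1}$ norm of a weight matrix therefore tends to zero out entire rows, i.e., entire features.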
\begin{table}[] \caption{Notations with Descriptions} \centering \begin{tabular}{ll} \hline Notation & Description \\ \hline d & Dimensionality of data points \\ N & Number of instances in data matrix \\ \textbf{X} $\in {\mathbb{R}^{{\rm{d}} \times {\rm{N}}}}$ & Data matrix \\ \textbf{F} $\in {\mathbb{R}^{{\rm{N}} \times {\rm{d}}}}$ & Templates for feature selection \\ \hline \end{tabular} \end{table} \section{The Proposed Approach} Let $\mathbf{X} = [{{\mathbf{x}}_1}, \cdots, {{\mathbf{x}}_N}]$ denote a $d$-dimensional data matrix consisting of $N$ instances. In this paper, we want to learn the spatial structure around each point, so that the finally selected features can also reflect the spatial structure of the original data in the high-dimensional space. In order to obtain the relationship between any two points, we achieve our purpose from the negative and positive sides respectively. The multi-step transition probability is used to characterize the spatial structure of the data. Assume each data point in the high-dimensional space is a node (state). The transition probability refers to the conditional probability that a Markov chain, being in one state at a certain time, will reach another state after a certain number of steps. The one-step transition probability $ {\mathbf{P}_{ij}} $ from node $i$ to node $j$ can be defined as follows: \begin{equation} {\mathbf{P}_{ij}} = \displaystyle\frac{{{\mathbf{M}_{ij}}}}{{\sum\nolimits_{m = 1}^N {{\mathbf{M}_{im}}} }} \end{equation} where \begin{equation} \mathbf{M}_{ij}=\frac{1}{\dfrac{\mathbf{D}_{ij}}{\sum\nolimits_{m=1}^{N}\mathbf{D}_{im}}+\alpha}. \end{equation} Furthermore, the self-transition probability $\textbf{P}_{ii} = 0$ for any data point $i$.
Here, \textbf{D} is the matrix of Euclidean distances between data points and $\alpha$ is a very small constant that prevents the denominator from becoming zero. The closer the distance, the larger the transition probability. It is well known that the Euclidean distance is not meaningful in a high-dimensional space. Fortunately, any tiny part of a manifold can be regarded as a Euclidean space, so the one-step transition probability can be computed between data points that are very close to each other. Naturally, this problem can be converted into selecting the nearest $k$ points around each data point to calculate the transition probability. So we redefine the one-step probability $ {\mathbf{P}_{ij}}$: $\textbf{M}_{ii} = 0$, and $ \textbf{M}_{ij} = 0 $ when the data point $j$ is not one of the $k$ nearest neighbors of data point $i$. Naturally, the $n$-step transition probability matrix can be calculated as \begin{equation} {\textbf{P}^{(n)}} = {\textbf{P}^{(n - 1)}}{\textbf{P}^{(1)}}. \end{equation} In the remaining parts, we preserve the original data structure from the positive and negative sides and thus propose three algorithms. First, the algorithm based on the negative viewpoint is illustrated. Then, the algorithm from the positive side is presented. In the end, a third algorithm combining the positive and negative algorithms is proposed. \subsection{Unsupervised Feature Selection based on Minimum Multi-step Transition Probability} \par The first algorithm is called MMFS$\_$minP (Multi-step Markov transition probability for feature selection based on minimum probability). The relationship between a node and any other $n$-step reachable point is described by the minimum transition probability, i.e., the minimum over the $t$-step transition probabilities for $t\leq n$, where $n$ is a parameter.
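A minimal NumPy sketch of Eqs. (1)-(3) with the $k$-nearest-neighbor restriction described above (the function names and the random toy data are illustrative, not part of the paper):

```python
import numpy as np

def one_step_P(X, k=3, alpha=1e-8):
    """One-step transition matrix of Eqs. (1)-(2); X is d x N."""
    N = X.shape[1]
    diff = X[:, :, None] - X[:, None, :]
    D = np.sqrt((diff ** 2).sum(axis=0))                 # Euclidean distances D_ij
    M = 1.0 / (D / D.sum(axis=1, keepdims=True) + alpha) # Eq. (2)
    np.fill_diagonal(M, 0.0)                             # M_ii = 0
    for i in range(N):                                   # k-nearest-neighbor mask
        far = np.argsort(D[i])[k + 1:]                   # drop all but self + k nearest
        M[i, far] = 0.0
    return M / M.sum(axis=1, keepdims=True)              # Eq. (1)

def n_step_P(P, n):
    """P^(n) = P^(n-1) P^(1), Eq. (3)."""
    return np.linalg.matrix_power(P, n)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))                         # d=5 features, N=20 samples
P = one_step_P(X, k=3)
P3 = n_step_P(P, 3)
# Rows of a (multi-step) transition matrix sum to one.
print(np.allclose(P.sum(axis=1), 1.0), np.allclose(P3.sum(axis=1), 1.0))
```

For large $N$, the dense $O(N^2)$ distance computation above would be replaced by a k-d tree or a sparse neighbor graph, but the construction is the same.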
In this way, we can get a matrix $ \mathbf{V_{1}} \in {\mathbb{R}^{{\rm{N}} \times {\rm{N}}}}$ whose elements represent the actual distance relationships between each data point and its $n$-step reachable points. Please note that $ {\mathbf{ {V_{1}}}_{ii}} $ = 0. In order to ensure that the sum of each row of the matrix $\mathbf{V}_1$ is $1$, we perform row-wise normalization over $\mathbf{V_{1}}$. Based on the minimum reachable relation matrix $\mathbf{V}_1$ in $n$ steps, the data relationships are characterized. Thus, we have a template for feature selection as follows, \begin{equation} \mathbf{F_1} = \mathbf{V_1}{\mathbf{X}^T}. \end{equation} Since we select the $k$ nearest points of each data point to determine the actual distance relationship between two points, our template naturally maintains the manifold structure. So, we have the following objective, \begin{equation} \mathop {\min }\limits_\textbf{W} ||{\textbf{X}^T}\textbf{W} - \mathbf{F_1}||_F^2 + \lambda ||\textbf{W}||_{2,1}^2 \end{equation} where $\mathbf{W}\in {\mathbb{R}^{{\rm{d}} \times {\rm{d}}}} $ is the weight matrix that makes the original data approach the constructed template $ \mathbf{F_{1}} $. The first term denotes the error between each weighted sample and the template. The second regularization term is used to force the weight matrix \textbf{W} to be row sparse for feature selection. $ \lambda > 0$ is a regularization parameter used to balance the first and second terms. Since the minimum transition probability is used to construct the relationship matrix $ \mathbf{V_1} $, we actually obtain the loose relationships between data points. We should choose those features which do not keep these relationships. Thus, after feature selection, the distance between data points in the same class should be shortened. So we arrange the row vectors of the optimized \textbf{W} in ascending order according to their $l_2$-norm values, and the first $s$ features are selected.
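Assuming a row-normalized relation matrix $\mathbf{V_1}$ has already been built, the template of Eq. (4), the alternating updates that minimize objective (5) (the same updates listed in Algorithm 1), and the ascending-order ranking can be sketched as follows; all names and the synthetic data are illustrative:

```python
import numpy as np

def mmfs_minp(X, V1, lam=0.1, eps=1e-6, iters=50):
    """Minimize ||X^T W - F1||_F^2 + lam * ||W||_{2,1}^2 with F1 = V1 X^T."""
    d = X.shape[0]
    F1 = V1 @ X.T                                   # template, Eq. (4)
    Q = np.eye(d)
    for _ in range(iters):
        # Closed-form update W = (X X^T + lam Q)^{-1} X F1.
        W = np.linalg.solve(X @ X.T + lam * Q, X @ F1)
        row = np.sqrt((W ** 2).sum(axis=1) + eps)   # sqrt(||W^j||_2^2 + eps)
        Q = np.diag(row.sum() / row)                # diagonal reweighting step
    return W

rng = np.random.default_rng(1)
d, N, s = 8, 30, 3
X = rng.standard_normal((d, N))
V1 = rng.random((N, N))
np.fill_diagonal(V1, 0.0)
V1 /= V1.sum(axis=1, keepdims=True)                 # row-wise normalization
W = mmfs_minp(X, V1)
# MMFS_minP ranks rows by l2 norm in ASCENDING order and keeps the first s.
selected = np.argsort(np.linalg.norm(W, axis=1))[:s]
print(selected)
```

The maximum-probability variant differs only in the relation matrix used for the template and in ranking the rows in descending order.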
\begin{table}[t] \centering \begin{tabular}{p{8cm}} \hline \textbf{Algorithm 1} MMFS\_minP \\ \hline \textbf{Input:} Data matrix \textbf{X}, the parameter $ \lambda $, the steps $n$ and the selected feature number $s$. \\ \textbf{Initialize:} \textbf{Q} as an identity matrix. \\ \textbf{while} not converge \textbf{do} \\ \quad 1. Update $ {\textbf{W}_{t + 1}} = {\left( {\textbf{X}{\textbf{X}^T} + \lambda \textbf{Q}} \right)^{ - 1}}\textbf{X}\mathbf{F_1} $. \\ \quad 2. Update the diagonal matrix $ {\textbf{Q}_{t + 1}} $, where the $ {j_{th}} $ diagonal element is $ \frac{{\sum\nolimits_{i = 1}^d {\sqrt {||{\textbf{W}^i}||_2^2 + \varepsilon } } }}{{\sqrt {||{\textbf{W}^{\rm{j}}}||_2^2 + \varepsilon } }} $ \\ \quad 3. $t = t + 1$. \\ \textbf{ end while} \\ \textbf{Output:} Obtain the optimal matrix \textbf{W} and calculate each $\|{\textbf{W}^i}|{|_2},i = 1,2,\cdots,d$, then sort in ascending order and select the $s$ top ranking features.\\ \hline \end{tabular} \end{table} \subsection{Unsupervised Feature Selection based on Maximum Multi-step Transition Probability} \par The second algorithm is called MMFS$\_$maxP (Multi-step Markov transition probability for feature selection based on maximum probability). In contrast to the first algorithm, we use the maximum transition probability to express the relationship between a data point and any other reachable point within $n$ steps, i.e., the corresponding step is recorded as $t$ where $t \leq n$. As in the above subsection, we can get a relational matrix $ \mathbf{V_2} $ and normalize it by rows. Finally, we get the template as follows, \begin{equation} \mathbf{F_2} = \mathbf{V_2}{\textbf{X}^T}. \end{equation} Then the objective function of the proposed MMFS\_maxP method is \begin{equation} \mathop {\min }\limits_\textbf{W} ||{\textbf{X}^T}\textbf{W} - \mathbf{F_2}||_F^2 + \lambda ||\textbf{W}||_{2,1}^2.
\end{equation} Different from the method MMFS\_minP, since we use the corresponding maximum transition probability to construct the relationship matrix $\mathbf{V_2} $, which represents a more compact relationship, we should choose features from the positive viewpoint. In order to shorten the distance between data points in the same class, we should select the first $s$ features in descending order according to the $l_2$-norm values of the rows of the optimized matrix \textbf{W}. \subsection{The Combination of Two Algorithms } In order to combine the above two algorithms, we propose an algorithm called MMFS\_inter. We first find the intersection of the features selected by MMFS\_minP and MMFS\_maxP, and set the number of features in the intersection to $s_1$. Suppose the number of features required to be selected is $s$; when the number of features from the intersection is not enough, we select the first $(s-s_1)/2 $ features from each of the feature sequences selected by the above two algorithms. \begin{table}[t] \centering \begin{tabular}{p{8cm}} \hline \textbf{Algorithm 2} MMFS\_maxP \\ \hline \textbf{Input:} Data matrix \textbf{X}, the parameter $ \lambda $, the steps $n$ and the selected feature number $s$. \\ \textbf{Initialize:} \textbf{Q} as an identity matrix. \\ \textbf{while} not converge \textbf{do} \\ \quad 1. Update $ {\textbf{W}_{t + 1}} = {\left( {\textbf{X}{\textbf{X}^T} + \lambda \textbf{Q}} \right)^{ - 1}}\textbf{X}\mathbf{F_2} $. \\ \quad 2. Update the diagonal matrix $ {\textbf{Q}_{t + 1}} $, where the $j$th diagonal element is $ \frac{{\sum\nolimits_{i = 1}^d {\sqrt {||{\textbf{W}^i}||_2^2 + \varepsilon } } }}{{\sqrt {||{\textbf{W}^{\rm{j}}}||_2^2 + \varepsilon } }} $ \\ \quad 3. $t = t + 1$.
\\ \textbf{ end while} \\ \textbf{Output:} Obtain the optimal matrix \textbf{W} and calculate each $\|{\textbf{W}^i}|{|_2},i = 1,2,\cdots,d $, then sort in descending order and select the $s$ top ranking features.\\ \hline \end{tabular} \end{table} \section{OPTIMIZATION} In this section, a common model is used to describe the optimization steps for the above two objectives: \begin{equation} \mathop {\min }\limits_\textbf{W} ||{\textbf{X}^T}\textbf{W} - \textbf{F}||_F^2 + \lambda ||\textbf{W}||_{2,1}^2. \end{equation} Obviously, $ ||\textbf{W}||_{2,1} $ can be zero in theory, but this would make problem (8) non-differentiable. To avoid this situation, $ ||\textbf{W}||_{2,1}^2 $ is regularized as $ {\left( {\sum\nolimits_{j = 1}^d {\sqrt {||{\textbf{W}^{\rm{j}}}||_2^2 + \varepsilon } } } \right)^2} $, where $ \varepsilon $ is a small enough constant. It follows that \begin{equation} \mathop {\min }\limits_\textbf{W} \left( {||{\textbf{X}^T}\textbf{W} - \textbf{F}||_F^2 + \lambda {{\left( {\sum\nolimits_{j = 1}^d {\sqrt {||{\textbf{W}^{\rm{j}}}||_2^2 + \varepsilon } } } \right)}^2}} \right). \end{equation} This problem is equivalent to problem (8) when $ \varepsilon $ is infinitely close to zero. Define the function \begin{equation} L(\textbf{W}) = ||{\textbf{X}^T}\textbf{W} - \textbf{F}||_F^2 + \lambda {\left( {\sum\nolimits_{j = 1}^d {\sqrt {||{\textbf{W}^{\rm{j}}}||_2^2 + \varepsilon } } } \right)^2}. \end{equation} \par Taking the partial derivative of $L(\textbf{W})$ with respect to \textbf{W} and setting it to zero, we have \begin{equation} \frac{{\partial L(\textbf{W})}}{{\partial \textbf{W}}} = 2\textbf{X}({\textbf{X}^T}\textbf{W} - \textbf{F}) + 2\lambda \textbf{Q}\textbf{W} = 0, \end{equation} where \textbf{Q} is a diagonal matrix whose $ {j_{th}}$ diagonal element is \begin{equation} {q_{jj}} = \frac{{\sum\nolimits_{i = 1}^d {\sqrt {||{\textbf{W}^i}||_2^2 + \varepsilon } } }}{{\sqrt {||{\textbf{W}^{\rm{j}}}||_2^2 + \varepsilon } }}.
\end{equation} Since the matrix \textbf{Q} is unknown and depends on \textbf{W}, we solve for \textbf{Q} and \textbf{W} iteratively. With the matrix \textbf{W} fixed, \textbf{Q} can be obtained by Eq. (12). With the matrix \textbf{Q} fixed, according to Eq. (11), we can get $\mathbf{W}$ as follows, \begin{equation} \textbf{W} = {\left( {\textbf{X}{\textbf{X}^T} + \lambda \textbf{Q}} \right)^{ - 1}}\textbf{X}\textbf{F}. \end{equation} We present an iterative algorithm to find the optimal solution of \textbf{W}. In each iteration, \textbf{W} is calculated with the current \textbf{Q}, and then \textbf{Q} is updated according to the newly calculated \textbf{W}. The iteration procedure is repeated until the algorithm converges. \par Based on the above analysis, we summarize the whole procedure for solving problem (5) in Algorithm 1 and the procedure for solving problem (7) in Algorithm 2. \begin{table}[t] \caption{Descriptions of Data Sets} \centering \begin{tabular}{llllll} \hline ID &Data set &\# Instances &\# Features &\# Classes &Data type\\ \hline 1 & Isolet1 & 1560 & 617 & 26 &Speech data\\ 2 & COIL20 & 1440 & 1024 & 20 &Object image\\ 3 & TOX-171 & 171 & 5748 & 4 &Microarray\\ 4 & Lung & 203 & 3312 & 5 &Microarray\\ 5 & AT\&T & 400 & 644 & 40 &Face image\\ 6 & ORL10P & 100 & 10304 & 10 &Face image\\ 7 & YaleB & 2414 & 1024 & 38 &Face image\\ 8 & USPS & 9298 & 256 & 10 &Digit\\ \hline \end{tabular} \end{table} \begin{table*}[t] \caption{Accuracy (acc\%$ \pm $std\%) of different unsupervised feature selection methods.
\protect\\ The best results are highlighted in bold and the second best are underlined.} \centering \begin{tabular}{lllllllll} \hline Datasets & Isolet1 & COIL20 & AT\&T & YaleB & USPS & ORL10P & Lung & TOX-171 \\ \hline All Features & 58.21 $ \pm $ 3.48 & 58.77 $ \pm $ 5.05 & 60.84 $ \pm $ 3.63 & 9.61 $ \pm $ 0.64 & 65.63 $ \pm $ 2.75 & 67.04 $ \pm $ 7.92 & 69.23 $ \pm $ 9.11 & 42.16 $ \pm $ 2.63 \\ LS & 59.36 $ \pm $ 3.60 & 54.97 $ \pm $ 4.39 & 59.71 $ \pm $ 3.28 & 9.98 $ \pm $ 0.48 & 62.49 $ \pm $ 4.16 & 61.91 $ \pm $ 6.86 & 60.44 $ \pm $ 9.26 & 39.60 $ \pm $ 4.38 \\ MCFS & 60.74 $ \pm $ 3.88 & 58.93 $ \pm $ 4.76 & 61.13 $ \pm $ 3.62 & 9.73 $ \pm $ 0.71 & 64.96 $ \pm $ 4.04 & 68.31 $ \pm $ 8.03 & 64.66 $ \pm $ 7.93 & 40.58 $ \pm $ 3.07 \\ UDFS & 57.92 $ \pm $ 3.33 & 59.79 $ \pm $ 4.34 & 60.75 $ \pm $ 3.40 & 11.36 $ \pm $ 0.62 & 63.46 $ \pm $ 1.69 & 65.27 $ \pm $ 6.83 & 66.13 $ \pm $ 9.57 & 43.29 $ \pm $ 3.10 \\ NDFS & 64.50 $ \pm $ 4.19 & 59.54 $ \pm $ 4.52 & 60.53 $ \pm $ 3.35 & 12.12 $ \pm $ 0.69 & 65.05 $ \pm $ 3.41 & 67.23 $ \pm $ 7.98 & 66.79 $ \pm $ 8.81 & \textbf{45.12 $ \pm $ 3.27} \\ RUFS & 62.48 $ \pm $ 3.97 & 60.14 $ \pm $ 4.45 & 61.28 $ \pm $ 3.35 & \textbf{14.66 $ \pm $ 0.91} & \underline{65.78 $ \pm $ 2.88} & 68.52 $ \pm $ 7.79 & 68.38 $ \pm $ 8.44 & 44.39 $ \pm $ 2.84 \\ AUFS & 48.77 $ \pm $ 2.69 & 57.23 $ \pm $ 4.31 & 61.82 $ \pm $ 3.14 & 11.17 $ \pm $ 0.48 & 63.37 $ \pm $ 3.11 & 66.82 $ \pm $ 6.78 & 70.14 $ \pm $ 9.62 & 44.19 $ \pm $ 2.49 \\ MMFS\_minP & \textbf{70.39 $ \pm $ 2.25} & 59.08 $ \pm $ 3.99 & \underline{68.51 $ \pm $ 2.83} & \underline{14.50 $ \pm $ 0.67} & 65.52 $ \pm $ 2.99 & 65.20 $ \pm $ 5.93 & 57.41 $ \pm $ 7.54 & \underline{44.47 $ \pm $ 3.26} \\ MMFS\_maxP & 64.87 $ \pm $ 3.35 & \textbf{65.60 $ \pm $ 2.87} & 68.20 $ \pm $ 2.28 & 9.50 $ \pm $ 0.34 & \textbf{68.13 $ \pm $ 2.70} & \textbf{81.60 $ \pm $ 6.54} & \textbf{73.20 $ \pm $ 9.98} & 42.78 $ \pm $ 3.62 \\ MMFS\_inter & \underline{67.20 $ \pm $ 3.09} & \underline{65.14 $ \pm $
2.86} & \textbf{69.58 $ \pm $ 2.57} & 12.47 $ \pm $ 0.35 & 65.24 $ \pm $ 3.45 & \underline{78.70 $ \pm $ 4.26} & \underline{70.76 $ \pm $ 7.86} & 43.95 $ \pm $ 3.53\\ \hline \end{tabular} \end{table*} \begin{table*}[t] \caption{NMI (NMI\%$ \pm $STD\%) of different unsupervised feature selection methods. \protect\\The best results are highlighted in bold and the second best are underlined.} \centering \begin{tabular}{lllllllll} \hline Datasets & Isolet1 & COIL20 & AT\&T & YaleB & USPS & ORL10P & Lung & TOX-171 \\ \hline All Features & 74.35 $ \pm $ 1.58 & 73.71 $ \pm $ 2.44 & 80.51 $ \pm $ 1.82 & 12.98 $ \pm $ 0.80 & 60.88 $ \pm $ 0.92 & 75.82 $ \pm $ 5.19 & 57.58 $ \pm $ 5.55 & 13.65 $ \pm $ 3.31 \\ LS & 74.71 $ \pm $ 1.60 & 69.99 $ \pm $ 1.99 & 78.75 $ \pm $ 1.63 & 14.77 $ \pm $ 0.74 & 59.62 $ \pm $ 1.95 & 68.30 $ \pm $ 4.11 & 43.05 $ \pm $ 4.75 & 8.36 $ \pm $ 2.95 \\ MCFS & 74.60 $ \pm $ 1.77 & 73.35 $ \pm $ 2.35 & 80.16 $ \pm $ 1.85 & 14.16 $ \pm $ 0.91 & 60.98 $ \pm $ 1.75 & 75.54 $ \pm $ 5.24 & 45.79 $ \pm $ 4.71 & 10.18 $ \pm $ 2.01 \\ UDFS & 73.30 $ \pm $ 1.88 & 72.63 $ \pm $ 2.11 & 79.70 $ \pm $ 1.71 & 17.61 $ \pm $ 0.70 & 58.24 $ \pm $ 0.74 & 73.29 $ \pm $ 4.12 & 47.72 $ \pm $ 6.59 & 15.31 $ \pm $ 5.09 \\ NDFS & 77.86 $ \pm $ 1.66 & 73.88 $ \pm $ 2.29 & 79.80 $ \pm $ 1.66 & 19.29 $ \pm $ 0.79 & 60.62 $ \pm $ 1.39 & 75.06 $ \pm $ 4.94 & 48.35 $ \pm $ 4.94 & \underline{18.12 $ \pm $ 4.31} \\ RUFS & 76.70 $ \pm $ 1.75 & 73.93 $ \pm $ 2.32 & 80.54 $ \pm $ 1.70 & \underline{23.00 $ \pm $ 0.77} & 61.50 $ \pm $ 1.34 & 77.32 $ \pm $ 4.69 & 50.12 $ \pm $ 5.42 & 16.41 $ \pm $ 4.06 \\ AUFS & 65.67 $ \pm $ 1.50 & 72.36 $ \pm $ 2.27 & 81.02 $ \pm $ 1.66 & 17.93 $ \pm $ 1.01 & 59.60 $ \pm $ 1.18 & 77.15 $ \pm $ 4.78 & 51.78 $ \pm $ 4.85 & 15.89 $ \pm $ 3.94 \\ MMFS\_minP & \underline{79.32 $ \pm $ 1.13} & 74.38 $ \pm $ 1.54 & 84.74 $ \pm $ 1.38 &\textbf{24.60 $ \pm $ 0.68} & \underline{61.69 $ \pm $ 0.74} & 73.27 $ \pm $ 3.39 & 45.51 $ \pm $ 4.08 & \textbf{20.72 $ \pm $
3.59} \\ MMFS\_maxP & 78.42 $ \pm $ 0.98 & \textbf{78.03 $ \pm $ 1.71} & \underline{85.35 $ \pm $ 0.85/} & 14.95 $ \pm $ 0.33 & \textbf{64.13 $ \pm $ 1.52} & \textbf{85.96 $ \pm $ 2.85} & \textbf{61.56 $ \pm $ 4.96} & 15.10 $ \pm $ 3.77 \\ MMFS\_inter & \textbf{79.49 $ \pm $ 1.30} & \underline{77.02 $ \pm $ 1.68} & \textbf{85.73 $ \pm $ 1.06} & 20.76 $ \pm $ 0.38 & 61.36 $ \pm $ 0.93 & \underline{84.34 $ \pm $ 2.20 }& \underline{60.16 $ \pm $ 3.55} & 17.55 $ \pm $ 7.11\\ \hline \end{tabular} \end{table*} \section{Experiments} In this section, we test the proposed feature selection method in publicly available data sets and compare our methods with several state-of-the-art methods. \subsection{Data Sets} In order to validate the method proposed in this paper, the experiments are conducted on 8 benchmark data sets. The details of these 8 data sets are also summarized in Table \uppercase\expandafter{\romannumeral2}. \subsubsection{Isolet1 \cite{fanty1991spoken}} It contains 1560 voice instaces for the name of each letter of the 26 alphabets. \subsubsection{COIL20 \cite{nene1996columbia}} It contains 20 objects. The images of each objects were taken 5 degrees apart as the object is rotated on a turntable and each object has 72 images. The size of each image is 32$ \times $32 pixels, with 256 grey levels per pixel. Thus, each image is represented by a 1024-dimensional vector. \subsubsection{AT\&T \cite{samaria1994parameterisation}} It contains 40 classes, and each person has 10 images. We simply use the cropped images and the size of each image is 28$ \times $23 pixels. \subsubsection{YaleB \cite{georghiades1998illumination}} It contains 2414 images for 38 individuals. We simply use the cropped images and the size of each image is 32$ \times $32 pixels. \subsubsection{USPS \cite{hull1994database}} The USPS handwritten digit database. It contains 9298 16$ \times $16 handwritten images of ten digits in total. 
\subsubsection{ORL10P \cite{samaria1994parameterisation}} It contains 100 instances for 10 classes, and each image is represented by a 10304-dimensional vector. \subsubsection{Lung \cite{bhattacharjee2001classification}} It contains 5 classes and 203 instances in total, where each instance consists of 3312 features. \subsubsection{TOX-171 \cite{stienstra2010kupffer}} It contains 171 instances in four categories, and each instance has 5748 features. \subsection{Experimental Setting} In order to prove the efficiency of the proposed approach, on each data set, we compare the proposed algorithms with the following six unsupervised feature selection methods. \subsubsection{Baseline} All features. \subsubsection{Laplacian Score (LS) \cite{he2006laplacian}} It evaluates the importance of features by their Laplacian scores, which measure their power of locality preservation, and selects the features with the largest scores. \subsubsection{Multi-Cluster Feature Selection (MCFS) \cite{cai2010unsupervised}} Based on the spectral analysis of the data and $L_1$-regularized models for subset selection, it selects those features by which the multi-cluster structure of the data can be best preserved. \subsubsection{Unsupervised Discriminative Feature Selection (UDFS) \cite{yang2011l2}} It selects features through a joint framework that combines discriminative analysis and $l_{2,1}$-norm minimization. \subsubsection{Nonnegative Discriminative Feature Selection (NDFS) \cite{li2012unsupervised}} In order to select the most discriminative features, NDFS performs spectral clustering to learn the cluster labels of the input samples and adds an $l_{2,1}$-norm minimization constraint to reduce the redundant or even noisy features. \subsubsection{Robust Unsupervised Feature Selection (RUFS) \cite{qian2013robust}} It selects features by a joint framework of local learning regularized robust nonnegative matrix factorization and $l_{2,1}$-norm minimization.
\subsubsection{Adaptive Unsupervised Feature Selection (AUFS) \cite{qian2015joint}} It uses a joint adaptive loss for data fitting and an $l_{2,0}$-norm minimization for feature selection. \par For LS, MCFS, UDFS, NDFS, RUFS, AUFS, and MMFS, we fixed the number of neighbors as 5 for all data sets. As for the regularization parameter $ \lambda $ in problem (9), the optimal parameter is selected from the candidate set \{0.001, 0.01, 0.1, 1, 10, 100, 1000\}, and the parameter $n$ is selected from the set \{5, 6, 7, ..., 18, 19, 20\}. In order to make a fair comparison between different unsupervised feature selection methods, we fix the number of selected features as \{50, 100, ..., 250, 300\} for all data sets except the USPS data set. For USPS, the number of selected features was set to \{50, 80, ..., 170, 200\}, because its total number of features is 256. \begin{figure*} \centering \subfigure[]{ \label{Fig.sub.a} \includegraphics[width=3.9cm]{fig2/acc_isolet.png}} \subfigure[]{ \label{Fig.sub.b} \includegraphics[width=3.9cm]{fig2/acc_coil.png}} \subfigure[]{ \label{Fig.sub.c} \includegraphics[width=3.9cm]{fig2/acc_att.png}} \subfigure[]{ \label{Fig.sub.d} \includegraphics[width=3.9cm]{fig2/acc_yale.png}} \subfigure[]{ \label{Fig.sub.e} \includegraphics[width=3.9cm]{fig2/acc_usps.png}} \subfigure[]{ \label{Fig.sub.f} \includegraphics[width=3.9cm]{fig2/acc_orl.png}} \subfigure[]{ \label{Fig.sub.g} \includegraphics[width=3.9cm]{fig2/acc_lung.png}} \subfigure[]{ \label{Fig.sub.h} \includegraphics[width=3.9cm]{fig2/acc_tox.png}} \caption{ACC of various feature selection methods with different numbers of selected features. (a) Isolet1. (b) COIL20. (c) AT\&T. (d) YaleB. (e) USPS. (f) ORL10P. (g) Lung.
(h) TOX-171.} \label{Fig.main} \end{figure*} \begin{figure*}[t] \centering \subfigure[]{ \label{Fig.sub.a} \includegraphics[width=3.9cm]{fig2/nmi_isolet.png}} \subfigure[]{ \label{Fig.sub.b} \includegraphics[width=3.9cm]{fig2/nmi_coil.png}} \subfigure[]{ \label{Fig.sub.c} \includegraphics[width=3.9cm]{fig2/nmi_att.png}} \subfigure[]{ \label{Fig.sub.d} \includegraphics[width=3.9cm]{fig2/nmi_yale.png}} \subfigure[]{ \label{Fig.sub.e} \includegraphics[width=3.9cm]{fig2/nmi_usps.png}} \subfigure[]{ \label{Fig.sub.f} \includegraphics[width=3.9cm]{fig2/nmi_orl.png}} \subfigure[]{ \label{Fig.sub.g} \includegraphics[width=3.9cm]{fig2/nmi_lung.png}} \subfigure[]{ \label{Fig.sub.h} \includegraphics[width=3.9cm]{fig2/nmi_tox.png}} \caption{NMI of various feature selection methods with different numbers of selected features. (a) Isolet1. (b) COIL20. (c) AT\&T. (d) YaleB. (e) USPS. (f) ORL10P. (g) Lung. (h) TOX-171.} \label{Fig.main} \end{figure*} \par We use the K-means clustering algorithm to perform the clustering task. Since K-means clustering is sensitive to the initialization of the clustering seeds, we repeat the experiments 20 times with random initialization of the seeds, and record the mean and standard deviation of ACC and NMI. Given a data set, let $p_i$ and $q_i$ be the labels obtained by the clustering algorithm and the ground-truth labels provided by the data set, respectively. The ACC \cite{wang2016sparse} is defined as follows: \begin{equation} ACC = \frac{\sum\nolimits_{i = 1}^n \delta \left( p_i,\mbox{map}\left( q_i \right) \right)}{n} \end{equation} where $ \delta (a, b) = 1$ if $a = b$ and $ \delta (a, b) = 0$ otherwise. The function map($\cdot$) represents an optimal permutation mapping that matches the cluster labels obtained by the clustering algorithm with the ground-truth labels. The mapping can be constructed by the Kuhn-Munkres algorithm \cite{lovasz2009matching}. The higher the ACC is, the better the clustering performance is.
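For illustration, the ACC metric above can be computed as follows. This sketch brute-forces the permutation mapping over the label set (the Kuhn-Munkres algorithm finds the same optimal mapping efficiently when there are many clusters); the function name is ours:

```python
from itertools import permutations

def clustering_accuracy(pred, truth):
    """ACC: best agreement over all one-to-one relabelings of the
    predicted cluster labels (brute force over permutations for
    illustration; Kuhn-Munkres scales this to many clusters)."""
    labels = sorted(set(pred) | set(truth))
    best = 0
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))  # candidate relabeling map(.)
        hits = sum(mapping[p] == q for p, q in zip(pred, truth))
        best = max(best, hits)
    return best / len(pred)
```

For example, `clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0])` equals 1.0, since swapping the two cluster labels matches the ground truth perfectly.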
Normalized Mutual Information (NMI) is a normalization of the Mutual Information (MI) score that scales the result between 0 and 1. NMI measures the similarity between the clustering results and the ground-truth labels of the data sets. The higher the NMI is, the better the clustering performance is. Scikit-learn (sklearn) \cite{scikit-learn} is a third-party machine learning library that implements many common machine learning methods. In our experiments, we calculate NMI with the sklearn function sklearn.metrics.normalized\_mutual\_info\_score (labels\_true, labels\_pred). \subsection{Experimental Results and Analysis} We summarize the experimental results in Tables \uppercase\expandafter{\romannumeral3} and \uppercase\expandafter{\romannumeral4}. The clustering accuracy and NMI with different numbers of features are shown in Figs. 2 and 3. The results of the compared methods in the figures and tables are taken from \cite{han2018unified}; our experimental settings, such as the number of neighbors, the number of repetitions of the experiment, and the number of selected features, are the same as in \cite{han2018unified}. The higher the mean is and the smaller the standard deviation is, the better the clustering performance is. \begin{figure*} \centering \subfigure[]{ \label{Fig.sub.a} \includegraphics[width=5.3cm]{fig2/lung.png}} \subfigure[]{ \label{Fig.sub.b} \includegraphics[width=5.3cm]{fig2/lung_min.png}} \subfigure[]{ \label{Fig.sub.c} \includegraphics[width=5.3cm]{fig2/lung_max.png}} \caption{The distribution of the Lung data after using different algorithms. (a) Initial state. (b) MMFS\_minP. (c) MMFS\_maxP.} \label{Fig.main} \end{figure*} \par According to Tables \uppercase\expandafter{\romannumeral3} and \uppercase\expandafter{\romannumeral4}, it can be observed that our methods are significantly better than all other methods on two data sets, i.e., the AT\&T and Isolet1 data sets.
On the other data sets, the performances of MMFS\_minP and MMFS\_maxP differ. In general, when the original data distribution is closer to the ideal situation (most of the data in the same category are distributed together), MMFS\_maxP performs better, as on COIL20, USPS, ORL10P and Lung. Otherwise, when the data distribution is more chaotic, MMFS\_minP performs better, as on Isolet1, YaleB and TOX-171. The reason is that the two MMFS algorithms keep the original data manifold structure in different ways: they retain either a compact or a loose structure. Our algorithms do not perform very well on the YaleB and TOX-171 data sets; one possible reason is that our methods cannot adapt to all kinds of data distributions, so they cannot retain more accurate data structure information. \par As we can see from Figs. 2 and 3, the proposed algorithms MMFS\_minP, MMFS\_maxP and MMFS\_inter perform better than the other methods on most data sets. However, on some data sets, such as Isolet1 and COIL20, MMFS\_minP is not effective when the number of selected features is not large enough. The performance of MMFS\_inter is usually between that of MMFS\_minP and MMFS\_maxP, except on USPS; a possible reason is that the way we combine the two methods is not flexible enough, so we cannot get a better feature subset. At the same time, we also observe that feature selection improves the clustering performance. For example, even when the ratio of the number of selected features to the total number of features is very small, the accuracy is much higher than that of using all features on most data sets. Even on the YaleB and TOX-171 data sets, the ACC with fewer features is much better than that observed with all features. In most cases, the structural information is helpful for selecting more effective features. MCFS, UDFS, NDFS, RUFS and AUFS all apply local structure information.
However, the algorithms we proposed not only retain the local structural information but also obtain the association information between non-adjacent points, and they make fuller use of the obtained structure information, which is an important reason why our algorithms obtain good results. Fig. 4 presents the two-dimensional data distribution after adding the relations obtained by the algorithms MMFS\_minP and MMFS\_maxP on the Lung data set. The dimensionality reduction method used for visualization is t-SNE \cite{maaten2008visualizing}. The first part of Fig. 4 is obtained by projecting the data matrix \textbf{X} into a two-dimensional plane, which depicts the original two-dimensional data distribution of the Lung data set. The second and third parts of Fig. 4 are obtained by projecting $ \textbf{X}^T\textbf{W} $ into a two-dimensional plane, where \textbf{W} is the optimal weight matrix obtained by MMFS\_minP and MMFS\_maxP, respectively. Therefore, the second and third parts describe the two-dimensional data distribution of the Lung data set under the effect of the relationship obtained by MMFS\_minP and MMFS\_maxP, respectively. It is observed that the distances between some data points in the two-dimensional projection are obviously shortened, and some points even overlap, under the influence of the relationship. Thus, the proposed algorithms MMFS\_minP and MMFS\_maxP can obtain a relationship that brings data points of the same class closer together.
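The visualization step can be sketched as follows, with random stand-ins for the data matrix and the learned weight matrix (illustrative only; here the data matrix is stored samples-by-features, so the paper's projection $\textbf{X}^T\textbf{W}$ becomes `X @ W`):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))   # stand-in data matrix, samples x features
W = rng.standard_normal((50, 10))   # stand-in learned weight matrix

# Embed the original data and the transformed data into 2-D with t-SNE.
emb_original = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)
emb_weighted = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X @ W)

print(emb_original.shape, emb_weighted.shape)  # two point clouds of 20 2-D points
```

Plotting the two point clouds side by side reproduces the kind of comparison shown in Fig. 4.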
\begin{figure}[t] \centering \subfigure[]{ \label{Fig.sub.a} \includegraphics[width=3.9cm]{fig2/acc_attmin.png}} \subfigure[]{ \label{Fig.sub.b} \includegraphics[width=3.9cm]{fig2/nmi_attmin.png}} \subfigure[]{ \label{Fig.sub.c} \includegraphics[width=3.9cm]{fig2/acc_attmax.png}} \subfigure[]{ \label{Fig.sub.d} \includegraphics[width=3.9cm]{fig2/nmi_attmax.png}} \subfigure[]{ \label{Fig.sub.e} \includegraphics[width=3.9cm]{fig2/acc_isoletmin.png}} \subfigure[]{ \label{Fig.sub.f} \includegraphics[width=3.9cm]{fig2/nmi_isoletmin.png}} \subfigure[]{ \label{Fig.sub.g} \includegraphics[width=3.9cm]{fig2/acc_isoletmax.png}} \subfigure[]{ \label{Fig.sub.h} \includegraphics[width=3.9cm]{fig2/nmi_isoletmax.png}} \caption{Parameter sensitivity demonstration on different data sets with the top 200 features. (a)-(d) AT\&T. (e)-(h) Isolet1. (a), (b), (e) and (f) MMFS\_minP. (c), (d), (g) and (h) MMFS\_maxP.} \label{Fig.main} \end{figure} \subsection{Parameter Sensitivity} We use the AT\&T and Isolet1 data sets to measure the sensitivity of the algorithms MMFS\_minP and MMFS\_maxP to the parameters ($ \lambda $ and $n$), and the results (ACC mean and NMI mean) are shown in Fig. 5. It shows the performances of MMFS\_minP (the four subplots on the left, (a), (b), (e) and (f)) and MMFS\_maxP (the four subplots on the right, (c), (d), (g) and (h)) on the AT\&T (the first four subplots, (a)-(d)) and Isolet1 (the last four subplots, (e)-(h)) data sets, respectively, when the parameters $ \lambda $ and $n$ take different values. From the first four subplots, we can observe that our methods are not significantly sensitive to the regularization parameter $ \lambda $ and the number of steps $n$ for the data set AT\&T. However, for the data set Isolet1, our methods are more sensitive to $ \lambda $ than to $n$. For example, in subfigure (e) of Fig. 5, when the parameter $n$ is fixed, the change of $ \lambda $ has a significant impact on the performance.
The reason is that $ \lambda $ controls the sparsity of \textbf{W}. On the other hand, when the parameter $ \lambda $ is fixed, the change of $n$ has little impact on the performance. This confirms that our algorithms are rather robust to the proposed new parameter $n$. For the selection of the parameter $\lambda $, we follow the traditional way \cite{han2018unified}, i.e., a grid search over the candidate set is used to select the best parameter. \section{Conclusion} We proposed a new feature selection approach, MMFS, which can preserve the manifold structure of high-dimensional data. There are two ways to achieve this purpose, MMFS\_minP and MMFS\_maxP, and we also combine these two algorithms into a third algorithm, MMFS\_inter. The new framework learns a weight matrix $\mathbf{W}$ that projects the data so as to approximate the data structure constructed by multi-step Markov transition probabilities; the $l_{2,1}$-norm is applied to make $\mathbf{W}$ row-sparse for feature selection. An iterative optimization algorithm is proposed to optimize the new model. We perform comprehensive experiments on eight public data sets to validate the effectiveness of the proposed approach. \appendices \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \emph{Quantum computing} \cite{nc2010,a2017,aazksw2019part1} is one of the hot topics in computer science of the last decades. There are many problems where quantum algorithms outperform the best known classical algorithms \cite{dw2001,quantumzoo,ks2019,kks2019}. Problems on strings are one such class of problems. Researchers have shown the power of quantum algorithms for such problems in \cite{m2017,bbbv1997,rv2003,ki2019}. In this paper, we consider the problem of constructing a text from dictionary strings with possible intersections. We have a text $t$ of length $n$ and a dictionary $s=s^1,\dots, s^m$. The problem is constructing $t$ using only strings of $s$, with possible intersections. The problem is connected with the sequence assembly method for reconstructing a long DNA sequence from small fragments \cite{msdd2000,bt2013}. We suggest a classical algorithm with running time $O\left(n+L +m(\log n)^2\right)=\tilde{O}(n+L)$, where $L=|s^1|+\dots+|s^m|$, $|s^i|$ is the length of $s^i$ and $\tilde{O}$ hides log factors. The algorithm uses the segment tree \cite{l2017guide} and suffix array \cite{mm90} data structures, string comparison via the rolling hash \cite{kr87,Fre79}, and the prefix sum idea \cite{cormen2001}. The second algorithm is quantum. It uses similar ideas together with a quantum algorithm for comparing two strings that has a quadratic speed-up over its classical counterparts \cite{ki2019}. The running time of our quantum algorithm is $O\left(n +\log n\cdot(\log m+\log\log n)\cdot \sqrt{m\cdot L}\right)=\tilde{O}\left(n +\sqrt{m\cdot L}\right)$. Additionally, we show a lower bound in the classical case, which is $\Omega(n+L)$. Thus, we get an optimal classical algorithm in the case $m(\log n)^2=O(L+n)$. This holds, for example, when $O(m)$ strings from $s$ have length at least $\Omega\left((\log n)^2\right)$, or when $m=o\left(n/(\log n)^2\right)$. In the general case, the algorithm is optimal up to a log factor. The quantum algorithm is better than any classical counterpart in the case $ \log n\cdot(\log m+\log\log n)\cdot \sqrt{m\cdot L}=o(L)$.
This happens if $O(m)$ strings from $s$ have length at least $\Omega(\log n\cdot(\log m+\log\log n))$. Our algorithm uses some quantum algorithms as subroutines, and the rest is classical. We investigate the problems in terms of query complexity. The query model is one of the most popular models in the case of quantum algorithms. Such algorithms can query a black box that has access to the sequence of strings. As the running time of an algorithm, we mean the number of queries to the black box. The structure of the paper is the following. We present tools in Section \ref{sec:tools}. Then, we discuss the classical algorithm in Section \ref{sec:classical} and the quantum algorithm in Section \ref{sec:quantum}. Section \ref{sec:lower} contains the lower bound. \section{Tools}\label{sec:tools} Our algorithms use several data structures and algorithmic ideas such as the segment tree \cite{l2017guide}, the suffix array \cite{mm90}, the rolling hash \cite{kr87} and the prefix sum \cite{cormen2001}. Let us describe them in this section. \subsection{Preliminaries} Let us consider a string $u=(u_1,\dots,u_l)$ for some integer $l$. Then, $|u|=l$ is the length of the string, and $u[i,j]=(u_i,\dots,u_j)$ is a substring of $u$. In the paper, we compare strings in lexicographical order. For two strings $u$ and $v$, the notation $u<v$ means that $u$ precedes $v$ in lexicographical order. \subsection{Rolling Hash for String Comparison}\label{sec:roll-hash} The rolling hash was presented in \cite{kr87}. It is a hash function \[h_p(u)=\left(\sum\limits_{i=1}^{|u|}index(u_i)\cdot K^{i-1}\right)\mbox{ mod }p,\] where $p$ is some prime integer, $K=|\Sigma|$ is the size of the alphabet and $index(u_i)\in\{0,\dots,K-1\}$ is the index of the symbol $u_i$ in the alphabet. For simplicity, we consider a binary alphabet. So, $K=2$ and $index(u_i)=u_i$. We can use the rolling hash and the fingerprinting method \cite{Fre79} for comparing two strings $u,v$.
Let us randomly choose $p$ from the set of the first $r$ primes, where $r\geq \frac{\max(|u|,|v|)}{\varepsilon}$ for some $\varepsilon>0$. Due to the Chinese Remainder Theorem and \cite{Fre79}, the statements $h_p(u)=h_p(v)$ and $u=v$ are then equivalent with error probability at most $\varepsilon$. If we have $\delta$ invocations of the comparing procedure, then we should choose $p$ among the first $\frac{\delta \cdot \max(|u|,|v|)}{\varepsilon}$ primes. Due to Chebyshev's theorem, the $r$-th prime number is $p_r\approx r\ln r$. So, if our integer data type is large enough to store $\frac{\delta \cdot \max(|u|,|v|)}{\varepsilon}\cdot (\ln(\delta) + \ln(\max(|u|,|v|))-\ln(\varepsilon))$, then it is large enough for computing the rolling hash. Additionally, for a string $u$, we can compute the prefix rolling hashes, that is, $h_p(u[1,i])$. They can be computed in $O(|u|)$ running time using the formula \[h_p(u[1,i])=\left(h_p(u[1,i-1])+(2^{i-1}\mbox{ mod }p)\cdot u_i\right)\mbox{ mod }p\mbox{ and }h_p(u[1,0])=0.\] Assume that we store ${\cal K}_i=2^{i-1}$ mod $p$. We can compute all of them in $O(|u|)$ running time using the formula ${\cal K}_i={\cal K}_{i-1}\cdot 2$ mod $p$. Using the precomputed prefix rolling hashes for a string $u$, we can compute the rolling hash for any substring $u[i,j]$ in $O(1)$ running time by the formula \[h_p(u[i,j])=\left(\sum\limits_{q=i}^{j}u_q\cdot 2^{q-1-(i-1)}\right)\mbox{ mod }p=\left(\sum\limits_{q=i}^{j}u_q\cdot 2^{q-1}\right)\cdot 2^{-(i-1)}\mbox{ mod }p=\] \[=\left(\left(\sum\limits_{q=1}^{j}u_q\cdot 2^{q-1}\right) - \left(\sum\limits_{q=1}^{i-1}u_q\cdot 2^{q-1}\right)\right)\cdot 2^{-(i-1)}\mbox{ mod }p =\]\[= \left(h_p(u[1,j])-h_p(u[1,i-1])\right)\cdot 2^{-(i-1)} \mbox{ mod }p.\] For computing this formula in $O(1)$, we should precompute ${\cal I}_i=2^{-i}$ mod $p$. We can compute these values in $O(\log p+|u|)$ running time by the formula ${\cal I}_i={\cal I}_{i-1}\cdot 2^{-1}$ mod $p$ and ${\cal I}_{0}=1$. Due to Fermat's little theorem, $2^{-1}$ mod $p=2^{p-2}$ mod $p$.
We can compute it with $O(\log p)$ running time using the exponentiation-by-squaring algorithm. Let $\textsc{ComputeKI}(\beta,p)$ be a procedure that computes ${\cal K}$ and ${\cal I}$ up to the power $\beta$ with $O(\beta+\log p)$ running time. Let $\textsc{ComputePrefixRollingHashes}(u,p)$ be a procedure that computes all prefix rolling hashes for a string $u$ and stores them. Assume that we have two strings $u$ and $v$ and have already computed their prefix rolling hashes. Then, we can compare these strings in lexicographical order in $O(\log \min(|u|,|v|))$ running time. The algorithm is the following. We search for the longest common prefix of $u$ and $v$, that is, $lcp(u,v)$. We can do it using binary search: \begin{itemize} \item If $mid\leq lcp(u,v)$, then $h_p(u[1,mid])=h_p(v[1,mid])$. \item If $mid> lcp(u,v)$, then $h_p(u[1,mid])\neq h_p(v[1,mid])$. \end{itemize} Using binary search, we find the last index $x$ such that $h_p(u[1,x])=h_p(v[1,x])$ and $h_p(u[1,x+1])\neq h_p(v[1,x+1])$. In that case, $lcp(u,v)=x$. After that, we compare $u_{t}$ and $v_t$ for $t=lcp(u,v)+1$. If $u_t<v_t$, then $u<v$; if $u_t>v_t$, then $u>v$; if $|u|=|v|=lcp(u,v)$, then $u=v$. Binary search works with $O(\log (\min(|u|,|v|)))$ running time because we have already computed all prefix rolling hashes. Let $\textsc{Compare}(u,v)$ be a procedure that compares $u$ and $v$ and returns $-1$ if $u<v$; $1$ if $u>v$; and $0$ if $u=v$. \subsection{Segment Tree with Range Updates}\label{sec:segment-tree} We consider a standard segment tree data structure \cite{l2017guide} for an array $a=(a_1,\dots, a_l)$ for some integer $l$. A segment tree for an array $a$ can be constructed in $O(l)$ running time. The data structure allows us to invoke the following requests in $O(\log l)$ running time. \begin{itemize} \item {\bf Update.} Parameters are three integers $i, j, x$ ($1\leq i\leq j\leq l$). We assign $a_q=\max(a_q,x)$ for $i\leq q\leq j$. \item {\bf Push.} We push all existing range updates.
\item {\bf Request.} For an integer $i$ ($1\leq i\leq l$), we should return $a_i$. \end{itemize} Let $\textsc{ConstructSegmentTree}(a)$ be a function that constructs and returns a segment tree for an array $a$ in $O(l)$ running time. Let $\textsc{Update}(st,i,j,x)$ be a procedure that updates a segment tree $st$ in $O(\log l)$ running time. Let $\textsc{Push}(st)$ be a procedure that pushes all existing range updates for a segment tree $st$ in $O(l)$ running time. Let $\textsc{Request}(st,i)$ be a function that returns $a_i$ from a segment tree $st$. The running time of the procedure is $O(\log l)$. At the same time, if we invoke the $\textsc{Push}$ procedure and after that do not invoke the $\textsc{Update}$ procedure, then the running time of $\textsc{Request}$ is $O(1)$. \subsection{Suffix Array} A suffix array \cite{mm90} is an array $suf=(suf_1,\dots,suf_{l})$ for a string $u$, where $l=|u|$. The suffix array lists all suffixes of $u$ in lexicographical order. Formally, $u[suf_i,l]<u[suf_{i+1},l]$ for any $i\in\{1,\dots,l-1\}$. The suffix array can be computed in $O(l)$ running time. \begin{lemma}[\cite{llh2018}]\label{lm:suf-arr} A suffix array for a string $u$ can be constructed in $O(|u|)$ running time. \end{lemma} Let $\textsc{ConstructSuffixArray}(u)$ be a procedure that constructs a suffix array for a string $u$. \section{Algorithms}\label{sec:algos} Let us formally present the problem. {\bf Problem.} For some positive integers $n$ and $m$, we have a sequence of strings $s=(s^1,\dots,s^m)$. Each $s^i=(s^i_1,\dots,s^i_{l})\in \Sigma^l$, where $\Sigma$ is some finite alphabet and $l=|s^i|$. We call $s$ the dictionary. Additionally, we have a string $t$ of length $n$, where $t=(t_1,\dots,t_n)\in \Sigma^n$. We call $t$ the text. The problem is to search for a subsequence $s^{i_1},\dots,s^{i_z}$ and positions $q_1,\dots,q_z$ such that $q_1=1$, $q_z=n-|s^{i_z}|+1$, $q_j\leq q_{j-1}+|s^{i_{j-1}}|$ for $j\in\{2,\dots,z\}$.
Additionally, $t[q_j,q_j+|s^{i_j}|-1]=s^{i_j}$ for $j\in\{1,\dots,z\}$. For simplicity, we assume that $\Sigma=\{0,1\}$, but all results hold for any finite alphabet. Informally, we want to construct $t$ from $s$ with possible intersections. Firstly, let us present a classical algorithm. \subsection{A Classical Algorithm}\label{sec:classical} Let us present the algorithm. Let $long_i$ be the index of a longest string from $s$ that can start in position $i$. Formally, $long_i=j$ if $s^j$ is a longest string from $s$ such that $t[i,i+|s^j|-1]=s^j$. Let $long_i=-1$ if there is no such string $s^j$. If we construct such an array, then we can construct $Q=(q_1,\dots,q_z)$ and $I = (i_1,\dots,i_z)$ that form a solution of the problem in $O(n)$ running time. The procedure $\textsc{ConstructQI}(long)$ from Algorithm \ref{alg:qi} shows it. If there is no such decomposition of $t$, then the procedure returns $NULL$.
\begin{algorithm}
\caption{$\textsc{ConstructQI}(long)$. Constructing $Q$ and $I$ from $long$}\label{alg:qi}
\begin{algorithmic}
\State $z \gets 1$
\State $i_1\gets long_1$
\State $q_1\gets 1$
\State $left\gets 2$
\State $right\gets |s^{i_1}|+1$
\While{$q_z<n$}
\State{$max\_i\gets left$}
\State{$max\_q\gets -1$}
\If{$long_{left}>0$}
\State{$max\_q\gets left + |s^{long_{left}}|-1$}
\EndIf
\For{$j\in \{left+1,\dots,right\}$}
\If{$long_j>0$ and $j + |s^{long_j}|-1>max\_q$}
\State{$max\_i\gets j$}
\State{$max\_q\gets j + |s^{long_j}|-1$}
\EndIf
\EndFor
\If{$max\_q= -1$ or $max\_q< right$}
\State Break the While loop and \Return $NULL$ \Comment{We cannot construct the rest of the string $t$}
\EndIf
\State $z\gets z+1$
\State $i_z\gets long_{max\_i}$
\State $q_z\gets max\_i$
\State $left\gets right+1$
\State $right\gets max\_q+1$
\EndWhile
\State \Return $(Q,I)$
\end{algorithmic}
\end{algorithm}
Let us discuss how to construct the $long$ array. As a first step, we choose a prime $p$ that is used for the rolling hash.
We choose $p$ randomly from the first $r=n\cdot m \cdot 4\lceil\log_2 n\rceil^2\cdot \frac{1}{\varepsilon}$ primes. In that case, due to the results from Section \ref{sec:roll-hash}, the probability of error is at most $\varepsilon$ in the case of at most $4m\lceil\log_2 n\rceil$ invocations of string comparison. As a second step, we construct a suffix array $suf$ for $t$. Then, we consider an array $a$ of pairs $(len, ind)$. One element of $a$ corresponds to one element of the suffix array $suf$. After that, we construct a segment tree $st$ for $a$, using the $len$ component of a pair for the maximum. As a next step, we consider the strings $s^i$ for $i\in\{1,\dots,m\}$. For each string $s^i$, we find the smallest index $low$ and the biggest index $high$ such that all suffixes $t[suf_j,n]$ for $low\leq j \leq high$ have $s^i$ as a prefix. We can use binary search for this action. Because of the sorted order of suffixes in the suffix array, all suffixes with the prefix $s^i$ are situated consecutively. As a comparator for strings, we use the $\textsc{Compare}$ procedure. Let us present this action as a procedure $\textsc{SearchSegment}(u)$ in Algorithm \ref{alg:search}. The algorithm returns $(NULL,NULL)$ if no suffix of $t$ has the string $u$ as a prefix. \begin{algorithm} \caption{$\textsc{SearchSegment}(u)$.
Searching the index segment of suffixes of $t$ that have $u$ as a prefix}\label{alg:search}
\begin{algorithmic}
\State $low\gets NULL$, $high\gets NULL$
\State $l\gets 1$
\State $r\gets n$
\State $Found\gets False$
\While{$Found=False$ and $l\leq r$}
\State $mid\gets (l+r)/2$
\State $pref\gets t[suf_{mid},\min(n,suf_{mid}+|u|-1)]$
\State $pref1\gets t[suf_{mid-1},\min(n,suf_{mid-1}+|u|-1)]$
\State $compareRes \gets \textsc{Compare}(pref,u), compareRes1 \gets \textsc{Compare}(pref1,u)$
\If {$compareRes=0$ and $compareRes1=-1$}
\State $Found\gets true$
\State $low\gets mid$
\EndIf
\If {$compareRes< 0$}
\State $l\gets mid+1$
\EndIf
\If {$compareRes\geq 0$}
\State $r\gets mid-1$
\EndIf
\EndWhile
\If{$Found=True$}
\State $l\gets 1$
\State $r\gets n$
\State $Found\gets False$
\While{$Found=False$ and $l\leq r$}
\State $mid\gets (l+r)/2$
\State $pref\gets t[suf_{mid},\min(n,suf_{mid}+|u|-1)]$
\State $pref1\gets t[suf_{mid+1},\min(n,suf_{mid+1}+|u|-1)]$
\State $compareRes \gets \textsc{Compare}(pref,u), compareRes1 \gets \textsc{Compare}(pref1,u)$
\If {$compareRes=0$ and $compareRes1=+1$}
\State $Found\gets true$
\State $high\gets mid$
\EndIf
\If {$compareRes\leq 0$}
\State $l\gets mid+1$
\EndIf
\If {$compareRes> 0$}
\State $r\gets mid-1$
\EndIf
\EndWhile
\EndIf
\State \Return $(low, high)$
\end{algorithmic}
\end{algorithm}
Then, we update the values in the segment tree $st$ by the pair $(|s^i|,i)$. After processing all strings from $(s^1,\dots,s^m)$, the array $a$ is constructed. We can construct the $long$ array using $a$ and the suffix array $suf$. We know that the $i$-th element $(len,ind)$ stores the longest possible string $s^{ind}$ that starts from $suf_i$. This is almost the definition of the $long$ array. So we can put $long_{suf_i}=ind$ if $a_i=(len,ind)$. Finally, we get the following Algorithm \ref{alg:classical} for the problem of constructing a text from a dictionary.
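The construction of the $long$ array described above can be sketched in Python. The sketch is illustrative only: it sorts the suffixes directly and applies the range-maximum updates naively (the stated running time needs the linear-time suffix array construction, the segment tree, and hash-based comparisons), and the function name `build_long` and 0-based indexing are ours:

```python
import bisect

def build_long(t, dictionary):
    """long[i] = index (into `dictionary`) of a longest string matching t
    at position i, or -1 if none matches (0-based version of the paper's
    `long` array)."""
    n = len(t)
    # Suffix array by direct sorting; the paper uses an O(n) construction.
    suf = sorted(range(n), key=lambda i: t[i:])
    keys = [t[i:] for i in suf]
    best = [(-1, -1)] * n  # per suffix-array slot: (match length, string index)
    for j, s in enumerate(dictionary):
        # Suffixes having s as a prefix form one contiguous block of `keys`.
        low = bisect.bisect_left(keys, s)
        high = bisect.bisect_right(keys, s + chr(0x10FFFF))
        for q in range(low, high):   # naive range-max update; the segment
            if len(s) > best[q][0]:  # tree makes each update O(log n)
                best[q] = (len(s), j)
    long_ = [-1] * n
    for q in range(n):
        long_[suf[q]] = best[q][1]
    return long_
```

For example, `build_long("abab", ["ab", "bab", "a"])` yields `[0, 1, 0, -1]`: the longest dictionary string starting at positions 0 and 2 is "ab", at position 1 it is "bab", and nothing starts at position 3.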
\begin{algorithm}
\caption{The classical algorithm for the problem of constructing the text $t$ from a dictionary $s$, for an error probability $\varepsilon>0$}\label{alg:classical}
\begin{algorithmic}
\State $\alpha\gets m \cdot 4\lceil\log_2 n\rceil^2$
\State $r\gets n\cdot \alpha/\varepsilon$
\State $p\in_R \{p_1,\dots,p_r\}$\Comment{$p$ is randomly chosen from $\{p_1,\dots,p_r\}$}
\State $\textsc{ComputeKI}(n,p)$
\State $\textsc{ComputePrefixRollingHashes}(t,p)$
\For{$j\in\{1,\dots,m\}$}
\State $\textsc{ComputePrefixRollingHashes}(s^j,p)$
\EndFor
\State $suf\gets\textsc{ConstructSuffixArray}(t)$
\State $a\gets[(0,-1),\dots,(0,-1)]$\Comment{Initialization by $0$-array}
\State $st \gets\textsc{ConstructSegmentTree}(a)$
\For{$j\in\{1,\dots,m\}$}
\State $(low,high)\gets \textsc{SearchSegment}(s^j)$
\If{$(low,high)\neq (NULL,NULL)$}
\State $\textsc{Update}(st,low,high,(|s^j|,j))$
\EndIf
\EndFor
\State $\textsc{Push}(st)$
\For{$i\in\{1,\dots,n\}$}
\State $(len,ind)\gets \textsc{Request}(st,i)$
\State $long_{suf_i}\gets ind$
\EndFor
\State $(Q,I)\gets\textsc{ConstructQI}(long)$
\State\Return $(Q,I)$
\end{algorithmic}
\end{algorithm}
Let us discuss the properties of Algorithm \ref{alg:classical}. \begin{theorem}\label{th:classical} Algorithm \ref{alg:classical} solves the problem of constructing the text $t$ from a dictionary $s=(s^1,\dots,s^m)$ with $O\left(n+L +m(\log n)^2 -\log\varepsilon\right)$ running time and error probability $\varepsilon$ for some $\varepsilon>0$, where $n=|t|$ and $L=|s^1|+\dots + |s^m|$. The running time is $O\left(n+L +m(\log n)^2\right)$ in the case of $\varepsilon=const$. \end{theorem} \begin{proof} The correctness of the algorithm follows from the construction. Let us discuss the running time of the algorithm. Note that $m\leq L=\sum_{j=1}^m|s^j|$ and $1\leq |s^j|\leq n$ for $j\in\{1,\dots,m\}$. Due to the results from Section \ref{sec:roll-hash}, $\textsc{ComputeKI}$ works with $O(n+\log p)$ running time. Let us rewrite this expression.
\[O(n+\log p)=O(n+\log (n\cdot \alpha/\varepsilon))=\] \[=O(n + \log n + \log\alpha - \log\varepsilon)=O(n + \log\alpha - \log\varepsilon)=\]\[= O(n + \log(m \cdot 4\lceil\log_2 n\rceil^2) - \log\varepsilon) =O(n + \log m +\log\log n - \log\varepsilon)=\]\[= O(n+\log m - \log\varepsilon).\] Due to the results from Section \ref{sec:roll-hash}, $\textsc{ComputePrefixRollingHashes}$ works in linear running time. Therefore, all invocations of the $\textsc{ComputePrefixRollingHashes}$ procedure work in $O(n+\sum_{j=1}^m|s^j|)=O(n+L)$ running time. Due to Lemma \ref{lm:suf-arr}, $\textsc{ConstructSuffixArray}$ works in $O(n)$ running time. The initialization of $a$ takes $O(n)$ steps. Due to the results from Section \ref{sec:segment-tree}, $\textsc{ConstructSegmentTree}$ works in $O(n)$ running time. The $\textsc{SearchSegment}$ procedure invokes the $\textsc{Compare}$ procedure $O(\log n)$ times due to the complexity of binary search. The $\textsc{Compare}$ procedure works in $O(\log n)$ running time. Therefore, $\textsc{SearchSegment}$ works in $O((\log n)^2)$ running time. Due to the results from Section \ref{sec:segment-tree}, the $\textsc{Update}$ procedure works in $O(\log n)$ running time. Hence, the total complexity of processing all strings from the dictionary $s$ is $O\left(m\cdot((\log n)^2+\log n)\right)=O\left(m\cdot(\log n)^2\right)$. The invocation of $\textsc{Push}$ works in $O(n)$ running time due to the results from Section \ref{sec:segment-tree}. Each invocation of $\textsc{Request}$ works in $O(1)$ running time because we do not invoke $\textsc{Update}$ after $\textsc{Push}$. Therefore, constructing the array $long$ takes $O(n)$ steps. The running time of $\textsc{ConstructQI}$ is $O(n)$ because we pass each element only once. So, the total complexity of the algorithm is \[O\left(n+\log m - \log\varepsilon+n+L+n+n+n+m(\log n)^2 +n+n+n\right)=\]\[=O\left(n+L +m(\log n)^2 -\log\varepsilon\right).\] Let us discuss the error probability.
We have $4 \cdot m\lceil\log_2 n\rceil$ invocations of the $\textsc{Compare}$ procedure. Each invocation of the $\textsc{Compare}$ procedure compares rolling hashes at most $\lceil\log_2 n\rceil$ times. Due to the results from Section \ref{sec:roll-hash}, if we compare strings of length at most $n$ using the rolling hash $4 \cdot m\lceil\log_2 n\rceil^2$ times and choose $p$ from $r$ primes, then we get an error probability of at most $\varepsilon$. \hfill$\Box$\\ \end{proof} \subsection{A Quantum Algorithm}\label{sec:quantum} Firstly, let us discuss a quantum subroutine. There is a quantum algorithm for comparing two strings in lexicographical order with the following property: \begin{lemma}[\cite{ki2019}]\label{lm:str-compr}There is a quantum algorithm that compares two strings of length $k$ in lexicographical order with query complexity $O(\sqrt{k}\log \gamma)$ and error probability $O\left(\frac{1}{ \gamma^3}\right)$ for some positive integer $\gamma$. \end{lemma} Let $\textsc{QCompare\_strings\_base}(u,v,l)$ be a quantum subroutine for comparing two strings $u$ and $v$ of length $l$ in lexicographical order. We choose $\gamma = m\log n$. In fact, the procedure compares the prefixes of $u$ and $v$ of length $l$. $\textsc{QCompare\_strings\_base}(u,v,l)$ returns $1$ if $u>v$; it returns $-1$ if $u<v$; and it returns $0$ if $u=v$. Next, we use a procedure $\textsc{QCompare}(u,v)$ that compares the strings $u$ and $v$ in lexicographical order. Assume that $|u|<|v|$. Then, if $u$ is a prefix of $v$, we have $u<v$. If $u$ is not a prefix of $v$, then the result is the same as for $\textsc{QCompare\_strings\_base}(u,v,|u|)$. The case of $|u|>|v|$ is handled similarly. The idea is presented in Algorithm \ref{alg:compare}.
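Classically simulated, the length-handling wrapper of Algorithm \ref{alg:compare} can be sketched as follows; here `base_compare` is a plain classical stand-in for the quantum subroutine $\textsc{QCompare\_strings\_base}$ (an assumption for illustration only).

```python
def base_compare(u, v, l):
    """Stand-in for QCompare_strings_base: compare the length-l prefixes
    of u and v in lexicographical order; return -1, 0, or +1."""
    a, b = u[:l], v[:l]
    return (a > b) - (a < b)

def compare(u, v):
    """Full lexicographical comparison of u and v built only on
    equal-length prefix comparisons (the wrapper logic of the algorithm)."""
    result = base_compare(u, v, min(len(u), len(v)))
    if result == 0 and len(u) != len(v):
        # the shorter string is a proper prefix of the longer one
        result = -1 if len(u) < len(v) else 1
    return result
```

The wrapper needs only one call to the base routine per comparison, which is why the quantum query complexity of Lemma \ref{lm:str-compr} carries over directly.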
\begin{algorithm} \caption{The quantum algorithm for comparing two strings in lexicographical order}\label{alg:compare} \begin{algorithmic} \If{$|u|=|v|$} \State $Result \gets\textsc{QCompare\_strings\_base}(u,v,|u|) $ \EndIf \If{$|u|<|v|$} \State $Result \gets\textsc{QCompare\_strings\_base}(u,v,|u|) $ \If{$Result=0$} \State $Result \gets -1$ \EndIf \EndIf \If{$|u|>|v|$} \State $Result \gets\textsc{QCompare\_strings\_base}(u,v,|v|) $ \If{$Result=0$} \State $Result \gets 1$ \EndIf \EndIf \State \Return $Result$ \end{algorithmic} \end{algorithm} Let us present a quantum algorithm for the problem of constructing the text from a dictionary. We use the same idea as in the classical case, but we replace the procedure $\textsc{Compare}$, which uses the rolling hash function, by $\textsc{QCompare}$. In that case, we do not need to construct the rolling hashes. Let $\textsc{QSearchSegment}$ be the quantum counterpart of $\textsc{SearchSegment}$ that uses $\textsc{QCompare}$. The quantum algorithm is presented as Algorithm \ref{alg:quantum}. \begin{algorithm} \caption{The quantum algorithm for the problem of constructing the text $t$ from a dictionary $s$}\label{alg:quantum} \begin{algorithmic} \State $suf\gets\textsc{ConstructSuffixArray}(t)$ \State $a\gets[(0,-1),\dots,(0,-1)]$\Comment{Initialization by the $0$-array} \State $st \gets\textsc{ConstructSegmentTree}(a)$ \For{$j\in\{1,\dots,m\}$} \State $(low,high)\gets \textsc{QSearchSegment}(s^j)$ \State $\textsc{Update}(st,low,high,(|s^j|,j))$ \EndFor \State $\textsc{Push}(st)$ \For{$i\in\{1,\dots,n\}$} \State $(len,ind)\gets \textsc{Request}(st,i)$ \State $long_{suf_i}\gets ind$ \EndFor \State $(Q,I)\gets\textsc{ConstructQI}(long)$ \State\Return $(Q,I)$ \end{algorithmic} \end{algorithm} Let us discuss the properties of Algorithm \ref{alg:quantum}.
\begin{theorem} Algorithm \ref{alg:quantum} solves the problem of constructing the text $t$ from a dictionary $s=(s^1,\dots,s^m)$ in $O\left(n +\log n\cdot(\log m+\log\log n)\cdot \sqrt{m\cdot L}\right)$ running time with error probability $O\left(\frac{1}{m+\log n}\right)$. \end{theorem} \begin{proof} The algorithm performs almost the same actions as its classical counterpart; that is why the correctness of the algorithm follows from Theorem \ref{th:classical}. Let us discuss the running time. Due to Theorem \ref{th:classical}, the running time of the procedure $\textsc{ConstructSuffixArray}$ is $O(n)$, the running time of the procedure $\textsc{ConstructSegmentTree}$ is $O(n)$, the running time of the procedure $\textsc{Push}$ is $O(n)$, the running time of the construction of the array $long$ is $O(n)$, and the running time of $\textsc{ConstructQI}$ is $O(n)$. Due to Lemma \ref{lm:str-compr}, the running time of $\textsc{QCompare}$ for $s^j$ is $O(\sqrt{|s^j|}(\log m+\log\log n))$. The procedure $\textsc{QSearchSegment}$ invokes the $\textsc{QCompare}$ procedure $O(\log n)$ times for each of the strings $s^1,\dots,s^m$. So, the complexity of processing all strings from $s$ is \[O\left(\log n\cdot(\log m+\log\log n)\cdot \sum_{j=1}^m\sqrt{|s^j|}\right).\] By the Cauchy--Bunyakovsky--Schwarz inequality and the equality $L=\sum_{j=1}^m|s^j|$, we have \[\sum_{j=1}^m\sqrt{|s^j|}\leq \sqrt{m\sum_{j=1}^m|s^j|}=\sqrt{m\cdot L},\] so this complexity is $O\left(\log n\cdot(\log m+\log\log n)\cdot \sqrt{m\cdot L}\right)$. The total running time is \[O\left(n+n+n +\log n\cdot(\log m+\log\log n)\cdot \sqrt{m\cdot L}\right)=\]\[=O\left(n +\log n\cdot(\log m+\log\log n)\cdot \sqrt{m\cdot L}\right).\] Let us discuss the error probability. The algorithm invokes the $\textsc{QCompare}$ procedure $2m\lceil\log_2 n \rceil\leq 2m(1+\log_2 n)$ times.
The success probability is at least \[\left(1-\frac{1}{(m\log n)^3}\right)^{2m(1+\log_2 n)}\geq 1-\frac{2m(1+\log_2 n)}{(m\log n)^3},\] so the error probability is $O\left(\frac{1}{m\log n}\right)$, which is $O\left(\frac{1}{m+\log n}\right)$. \hfill$\Box$\\ \end{proof} \section{Lower Bound}\label{sec:lower} Let us discuss the lower bound for the running time of classical algorithms. \begin{theorem}\label{th:dfreq-compl} Any randomized algorithm for the problem of constructing the text $t$ from a dictionary $s=(s^1,\dots,s^m)$ works in $\Omega(n+L)$ running time, where $L=|s^1|+\dots+|s^m|$. \end{theorem} \begin{proof} Assume $L>n$. Let us consider $t=(t_1,\dots,t_n)$ such that $t_{\lfloor n/2\rfloor}=1$ and $t_{i}=0$ for all $i\in\{1,\dots, n\}\backslash\{\lfloor n/2\rfloor\}$. Let $|s^i|\leq n/2$ for each $i\in\{1,\dots,m\}$. Note that in the general case $|s^i|\leq n$; therefore, we reduce the input size by at most a factor of two. Assume that we have two options: \begin{itemize} \item all $s^i$ contain only $0$s; \item there is $z$ such that the following two conditions hold: \begin{itemize} \item $s^i$ contains only $0$s for $i\in\{1,\dots, m\}\backslash\{ z\}$; \item for some $j_0\leq|s^z|$, we have $s^z_{j_0}=1$ and $s^z_{j}=0$ for $j\in\{1,\dots, |s^z|\}\backslash\{ j_0\}$. \end{itemize} \end{itemize} In the case of all $0$s, we cannot construct the text $t$. In the case of an existing $1$, we can construct $t$ by putting $s^z$ at position $\lfloor n/2\rfloor-j_0+1$, so that the $1$ lands at the required position. Then, we complete $t$ with the other all-$0$ strings. Therefore, solving the problem is equivalent to searching for a $1$ in unstructured data of size $L$. The randomized complexity of this problem is $\Omega(L)$ due to \cite{bbbv1997}. Assume $L<n$. Let $m=1$, $|s^1|=1$ and $s^1_1=1$. Assume that we have two options: \begin{itemize} \item $t$ contains only $1$s; \item there is $g$ such that $t_g=0$ and $t_j=1$ for all $j\in\{1,\dots,n\}\backslash\{g\}$. \end{itemize} In the first case, we can construct $t$ from $s$. In the second case, we cannot do it.
Here, the problem is equivalent to searching for a $0$ among the $n$ symbols of $t$. Therefore, the problem's randomized complexity is $\Omega(n)$. Finally, the total complexity is $\Omega(\max(n,L))=\Omega(n+L)$. \hfill$\Box$\\ \end{proof} \bibliographystyle{psc}
\section{Introduction} Magnetoresistance is one of the fundamental phenomena in the research field of spintronics. Giant magnetoresistance\cite{Baibich1988,Binasch1989,Fert2008} and tunneling magnetoresistance\cite{Julliere1975,Miyazaki1995,Moodera1995,Yuasa2004,Parkin2004} are now essential ingredients in spintronics technology for sensors, memories, and data storage. Recently, a novel type of magnetoresistance called spin Hall magnetoresistance (SMR) has been observed in a bilayer system composed of a normal metal (NM) and a ferromagnetic insulator (FI)\cite{Nakayama2013a,Chen2013a,Hahn2013,Vlietstra2013,Althammer2013,Meyer2014,Marmion2014,Cho2015,Kim2016,Chen2016,Sterk2019,Tolle2019}. SMR is explained by the combination of charge-spin conversions in NM and loss of spins at the NM/FI interface\cite{Nakayama2013a,Chen2013a}. When an in-plane charge current is applied to the NM layer with a large spin-orbit interaction, spin accumulation is caused near the NM/FI interface by the spin Hall effect (SHE). The amount of spin accumulation is affected by the orientation of FI magnetization because it changes the rate of spin loss at the interface. A backward spin current owing to spin diffusion is converted into the charge current again by the inverse spin Hall effect (ISHE) and induces longitudinal magnetoresistance, which depends on the orientation of FI magnetization. The strength of SMR is on the order of $\theta_{\rm SH}^2$, where $\theta_{\rm SH}$ is the spin Hall angle. Recently, SMR has been reported for the bilayer system composed of NM and an antiferromagnetic insulator (AFI) \cite{Hou2017,Lin2017,Cheng2019,Hoogeboom2017a,Fischer2018,Ji2018,Lebrun2019}, where the orientation of the N\'eel vector of AFI has been changed by a strong external field or by the orientation of magnetization of FI using the NM/AFI/FI trilayer structure. The sign of SMR is opposite to the one in the NM/FI bilayer system with respect to the external magnetic field. This sign change of SMR can be explained if the N\'eel vector of AFI is fixed perpendicular to the external magnetic field or to the magnetization of FI via an exchange bias \cite{Nogues1999,Luan2018}.
We note that a similar sign change of SMR has been observed in a non-collinear-ferrimagnet/NM bilayer system \cite{Ganzhorn2016,Dong2018}. SMR can be described theoretically by combining the spin diffusion theory with the boundary condition at the interface in terms of the spin-mixing conductance \cite{Nakayama2013a,Chen2013a}. However, in this theory, the spin-mixing conductance at the interface is a phenomenological parameter that has to be determined experimentally; therefore, its temperature dependence cannot be predicted. Furthermore, the magnetization-orientation dependence of the spin-mixing conductance is assumed phenomenologically by its definition. This semiclassical description of SMR seems to be insufficient for studying quantum features of magnetic insulators such as the effect of thermally excited magnons. Recently, a microscopic theory of SMR has been proposed based on a local mean-field approach \cite{Zhang2019,Velez2019}. However, a general microscopic theory applicable to a wide parameter region is still lacking. We note that the physics of SMR is closely related to non-local magnon transport in the NM/FI/NM\cite{Cornelissen2015,Goennenwein2015} and NM/AFI/NM\cite{Lebrun2018} nanostructures. In addition, the thermal noise of pure spin current at the NM/FI interface has been measured using ISHE \cite{Kamra2014}. Although the Onsager relation between the thermal noise and spin-mixing conductance at the interface has been discussed qualitatively \cite{Kamra2014}, thus far, it has not been derived by a microscopic theory. The explicit derivation of the Onsager relation provides an important basis for a theory of nonequilibrium spin-current noise, which generally includes important information on spin transport \cite{Kamra2016a,Kamra2016b,Matsuo2018,Aftergood2018,Joshi2018,Nakata2018,Kato2019} as suggested by studies on electronic current noise \cite{Blanter2000,Thierry2005}. 
In this study, we construct a microscopic theory of SMR based on the tunnel Hamiltonian method \cite{Bruus2004,Ohnuma2013}. We derive a general formula for the spin conductance at the interface for both NM/FI and NM/AFI bilayer systems. We note that, to our knowledge, our theory provides the first microscopic description of NM/AFI bilayers. By applying the spin-wave approximation, we discuss the temperature dependence of SMR well below the magnetic transition temperature. In addition, we formulate the spin-current noise in the same framework and derive the Onsager relation, i.e., the relation between the thermal spin-current noise and the spin conductance. This paper is organized as follows. In Sec.~\ref{sec:SMR}, we theoretically describe SMR by combining the spin diffusion theory in NM and the spin conductance at the interface. In Sec.~\ref{sec:model}, we provide the microscopic Hamiltonian for the NM/FI (or NM/AFI) bilayer system. In Sec.~\ref{sec:SpinCurrent}, we formulate the spin conductance at the interface using the tunnel Hamiltonian method. In Secs.~\ref{sec:SMRFI} and \ref{sec:SMRAFI}, we discuss the temperature dependence of the spin conductance using the spin-wave approximation. In Sec.~\ref{sec:experiment}, we briefly discuss the relevance of this study to the SMR experiments. In Sec.~\ref{sec:noise}, we formulate the spin-current noise and explicitly derive the Onsager relation. Finally, we summarize our results in Sec.~\ref{sec:summary}. The details of the derivation are provided in two appendices. \section{Theoretical Description of SMR} \label{sec:SMR} We theoretically describe the SMR by improving the spin diffusion theory provided in Ref.~\onlinecite{Chen2013a}. First, let us consider a bilayer system composed of NM and FI layers.
When we apply an electric field to the NM layer in the $+x$ direction, a spin current $j_{s0}^{\rm SH}=\theta_{\rm SH} \sigma E_x$ is driven in the $-y$ direction by the spin Hall effect, where $\theta_{\rm SH}$ is the spin Hall angle, $\sigma$ is the electric conductivity, and $E_x$ is the $x$ component of the electric field. Here, we have defined the spin current $j_{s0}^{\rm SH}$ as the difference between the charge currents of opposite spins. This spin current induces spin accumulation at the interface between the NM and FI layers, as shown in Figs.~\ref{fig_setup}(a) and (b), where the direction of the spins accumulated at the interface is denoted by ${\bm \sigma}$. In a steady state, the spin current $j_{s0}^{\rm SH}$ is balanced by a backflow spin current \begin{align} j_s^{\rm B} = -(\sigma/2e)\partial_y \mu_s(y) . \end{align} Here, $\mu_s(y)$ is the spin chemical potential defined as \begin{align} \mu_s(y) = \mu_\uparrow(y)-\mu_\downarrow(y) . \end{align} Using the continuity equation and accounting for spin relaxation, one can show that $\mu_s(y)$ obeys the differential equation \begin{align} \frac{d^2\mu_s}{dy^2} = \frac{\mu_s}{\lambda^2} , \label{eq:diffeq} \end{align} where $\lambda$ is the spin diffusion length. The spin chemical potential $\mu_s(y)$ is obtained as a function of $y$ by solving this equation under the boundary conditions \begin{align} j_s^{\rm B}(y) - j_{s0}^{\rm SH} = \left\{ \begin{array}{ll} -j_s^{\rm I}, & (y=0), \\ 0, & (y=d_N), \end{array} \right. \label{eq:diffmus} \end{align} where $d_N$ is the thickness of the NM layer, and $j_s^{\rm I}$ is the spin absorption rate at the NM/FI interface, which depends on the direction of the magnetization of FI. \begin{figure}[!tb] \begin{center} \includegraphics[width=80mm]{FigSetup1.eps} \caption{(Color online) Schematic diagram of the normal-metal(NM)/ferromagnetic-insulator(FI) bilayer system for the spin Hall magnetoresistance (SMR) measurement.
When an external charge current is applied to NM in the $x$-direction, a spin current $I_s$ with a $z$ component flows in the $y$-direction owing to the spin Hall effect. The spin current induces spin accumulation ${\bm \sigma}$ at the NM/FI interface with magnetization ${\bm M}$. (a) Parallel configuration, ${\bm \sigma} \parallel {\bm M}$. (b) Perpendicular configuration, ${\bm \sigma} \perp {\bm M}$. (c) Chemical potential difference $\delta \mu_s = \mu_{\uparrow} - \mu_{\downarrow}$, which is defined for a quasi-equilibrium steady state, is shown as a function of the position $y/\lambda$, where $\lambda$ is the spin diffusion length, and the thickness of NM is set as $d_N = 6 \lambda$. The NM/FI interface is located at $y=0$. } \label{fig_setup} \end{center} \end{figure} In this study, we use another definition of the spin current at the interface: we define $I_s$ as the decay rate of the spin angular momentum at the NM/FI interface. This quantity is related to $j_s^{\rm I}$ as follows: \begin{align} j_s^{\rm I} = \frac{e}{\hbar/2} \frac{I_s}{S}, \end{align} where $S$ is the cross-sectional area of the NM/FI interface. We define a dimensionless spin conductance as \begin{align} G_s = \lim_{\mu_s(0)\rightarrow 0} \frac{I_s}{\mu_s(0)}, \label{eq:defGs} \end{align} where $\mu_s(0)$ is the chemical potential difference at the NM/FI interface. The solution of the differential equation~(\ref{eq:diffeq}) is written in the form $\mu_s(y)=Ae^{-y/\lambda}+Be^{y/\lambda}$. We note that the constants $A$ and $B$ include $\mu_s(0)$ through the boundary condition, Eq.~(\ref{eq:diffmus}), by approximating the spin current as $I_s \simeq G_s \mu_s(0)$.
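For completeness, we sketch this solution explicitly (a short derivation in the notation above, equivalent to the exponential form of $\mu_s(y)$):

```latex
% General solution of Eq. (eq:diffeq) in hyperbolic form and the
% corresponding backflow current j_s^B = -(sigma/2e) d(mu_s)/dy:
\begin{align}
\mu_s(y) &= a \cosh\!\left(\frac{y-d_N}{\lambda}\right)
          + b \sinh\!\left(\frac{y-d_N}{\lambda}\right), \\
j_s^{\rm B}(y) &= -\frac{\sigma}{2e\lambda}
  \left[ a \sinh\!\left(\frac{y-d_N}{\lambda}\right)
       + b \cosh\!\left(\frac{y-d_N}{\lambda}\right) \right].
\end{align}
```

The boundary condition at $y=d_N$ fixes $b=-2e\lambda j_{s0}^{\rm SH}/\sigma$, while the condition at $y=0$, with $j_s^{\rm I}\simeq (2e/\hbar)(G_s/S)\,\mu_s(0)$ and $\mu_s(0)=a\cosh(d_N/\lambda)-b\sinh(d_N/\lambda)$, becomes a linear equation for $a$ and hence a self-consistent equation for $\mu_s(0)$.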
By solving the self-consistent equation for $\mu_s(0)$, we obtain \begin{align} \mu_s(0) = \frac{\mu_{s0}}{1+g_s \coth(d_N/\lambda)} , \end{align} where $\mu_{s0}$ is the chemical potential difference in the absence of the NM/FI interface, and $g_s$ is a dimensionless factor defined as \begin{align} g_s = \frac{4e^2}{\hbar} \frac{G_s}{\sigma S/\lambda}, \end{align} which indicates the strength of the spin absorption at the interface. In Ref.~\onlinecite{Chen2013a}, the magnetization-orientation dependence of $G_s$ was discussed in terms of the spin-mixing conductance. In this discussion, the spin absorption rate at the interface, $g_s$, is largest (smallest) when the magnetization ${\bm M}$ is perpendicular (parallel) to the accumulated spin ${\bm \sigma}$ (see Figs.~\ref{fig_setup}(a) and (b)). Then, the spatial profile of $\mu_s(y)$ changes depending on the direction of ${\bm M}$ (see Fig.~\ref{fig_setup}(c)). A similar approach was also employed in recent theoretical works on unidirectional SMR\cite{Sterk2019} and on low-dimensional-FI/NM systems\cite{Velez2019}. In our study, however, no assumption is made about the magnetization-orientation dependence of $G_s$. As shown later, the magnetization-orientation dependence of the spin absorption rate, which is implicitly assumed in the discussion based on the mixing conductance, is not sufficient to describe the temperature dependence of the SMR signal. The backflow current $j_s^{\rm B}$ induces a charge current in the $x$ direction owing to the inverse spin Hall effect. Then, the longitudinal magnetoresistance is calculated as\cite{Chen2013a} \begin{align} \frac{\Delta \rho}{\rho} = \theta_{\rm SH}^2 \frac{g_s \tanh^2(d_N/2\lambda)}{1+g_s \coth(d_N/\lambda)} . \label{SMRformula} \end{align} Thus, SMR is related to the spin conductance $G_s$ via the factor $g_s$. In the subsequent sections, we calculate $G_s$ as a function of the angle between ${\bm M}$ and ${\bm \sigma}$.
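As a quick numerical illustration of Eq.~(\ref{SMRformula}) (a sketch with arbitrarily chosen parameter values, not tied to any specific experiment):

```python
import math

def smr_ratio(theta_sh, g_s, d_over_lambda):
    """Longitudinal SMR ratio Delta(rho)/rho of the formula above, as a
    function of the spin Hall angle theta_sh, the dimensionless interface
    factor g_s, and the thickness-to-diffusion-length ratio d_N/lambda."""
    tanh2 = math.tanh(d_over_lambda / 2.0) ** 2
    coth = 1.0 / math.tanh(d_over_lambda)
    return theta_sh ** 2 * g_s * tanh2 / (1.0 + g_s * coth)

# Delta(rho)/rho vanishes without interface spin absorption (g_s = 0)
# and grows monotonically with g_s
ratios = [smr_ratio(0.1, g, 2.0) for g in (0.0, 0.1, 0.5, 1.0)]
```

The ratio scales as $\theta_{\rm SH}^2$ and increases monotonically with $g_s$, saturating for $g_s\gg 1$, which is why SMR probes the interface spin conductance.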
The theoretical description of SMR for the NM/AFI bilayer is almost the same as that for the NM/FI bilayer: SMR can be discussed by calculating $G_s$ as a function of the angle between the N\'eel vector and ${\bm \sigma}$. We note that the present formulation is applicable to more complex systems such as an NM with the Rashba spin-orbit interaction\cite{Tolle2019}. \section{Model} \label{sec:model} In this section, we introduce a model for NM/FI and NM/AFI bilayers. After we provide the Hamiltonian for the bulk systems of NM (Sec.~\ref{sec:modelNM}), FI (Sec.~\ref{sec:modelFI}), and AFI (Sec.~\ref{sec:modelAFI}), we describe the model of the interfacial exchange coupling in Sec.~\ref{sec:modelINT}. \subsection{Normal Metal} \label{sec:modelNM} The Hamiltonian for a bulk NM is given as \begin{align} H_{\rm NM} &= \sum_{{\bm k}\sigma} \xi_{\bm k} c^\dagger_{{\bm k}\sigma} c_{{\bm k}\sigma}, \label{eq:HamiltonianNM} \end{align} where $\xi_{\bm k}=\epsilon_{\bm k}-\mu$ is the energy dispersion measured from the chemical potential, and $\sigma = \uparrow,\downarrow$ is the $z$ component of the electron spin. We assume that the spin accumulation at the interface induced by SHE is described by quasi-thermal equilibrium states with spin-dependent chemical potential shifts, $\pm \mu_s/2$, where $\mu_s$ is recognized as its value at the interface, $\mu_s(0)$, given in the previous section. The density matrix for this quasi-thermal equilibrium state is given as $\rho=e^{-\beta {\cal H}_{\rm NM}}/Z$, where \begin{align} {\cal H}_{\rm NM} &= \sum_{{\bm k}\sigma} (\xi_{\bm k}-\sigma \mu_s/2) c^\dagger_{{\bm k}\sigma} c_{{\bm k}\sigma} . \label{eq:HamiltonianNM2} \end{align} Here, $\beta$ is the inverse temperature, and $Z={\rm Tr}(e^{-\beta {\cal H}_{\rm NM}})$ is the grand partition function.
\subsection{Ferromagnetic Insulator (FI)} \label{sec:modelFI} For the Hamiltonian of bulk FI, we consider the Heisenberg model given as \begin{align} H_{\rm FI} &= J \sum_{\langle i,j \rangle} {\bm S}_i \cdot {\bm S}_j - \hbar \gamma h_{\rm dc} \sum_{i} S^{z'}_{i}, \label{HFI} \end{align} where ${\bm S}_j$ is the localized spin, $J$ ($<0$) is the ferromagnetic exchange coupling, $\langle i,j \rangle$ indicates a pair of nearest-neighbor sites, $\gamma$ is the gyromagnetic ratio, and $h_{\rm dc}$ is the static magnetic field. Here, we introduce a new coordinate $(x',y',z')$ and assume that the net magnetization is aligned in the $+z'$ direction in this new coordinate by the magnetic field (see Fig.~\ref{fig:rotation}(a)): \begin{align} \langle {\bm S}_j \rangle_0 = (\langle S^{x'}_j \rangle_0, \langle S^{y'}_j \rangle_0, \langle S^{z'}_j \rangle_0) = (0,0,\tilde{S}_0), \label{eq:magnetization} \end{align} where $\langle \cdots \rangle_0$ indicates the thermal average in bulk FI, and $\tilde{S}_0$ is the amplitude of the magnetization per site, which depends on the temperature. \begin{figure}[!tb] \begin{center} \includegraphics[width=75mm]{FigRotation.eps} \caption{Relation between the laboratory coordinate $(x,y,z)$ and the magnetization-fixed coordinate $(x',y',z')$ for (a) FI and (b) the antiferromagnetic insulator (AFI).} \label{fig:rotation} \end{center} \end{figure} \subsection{Antiferromagnetic Insulator} \label{sec:modelAFI} For the Hamiltonian of bulk AFI, we consider the Heisenberg model on a lattice composed of two sublattices, A and B: \begin{align} H_{\rm AFI} &= J \sum_{\langle i, j \rangle} {\bm S}_{{\rm A},i} \cdot {\bm S}_{{\rm B},j}, \end{align} where ${\bm S}_{\nu,j}$ denotes a localized spin on the sublattice $\nu$ ($={\rm A},{\rm B}$), $J$ ($>0$) is the antiferromagnetic exchange coupling, and $\langle i,j \rangle$ indicates the nearest-neighbor pairs between the two sublattices.
We assume that the magnetization on the sublattice A(B) is aligned in the $+z'$($-z'$) direction (see Fig.~\ref{fig:rotation}(b)): \begin{align} & \langle {\bm S}_{{\rm A},j} \rangle = (\langle S_{{\rm A},j}^{x'} \rangle, \langle S_{{\rm A},j}^{y'} \rangle, \langle S_{{\rm A},j}^{z'} \rangle) = (0,0,\tilde{S}_0), \label{eq:msa} \\ & \langle {\bm S}_{{\rm B},j} \rangle = (\langle S_{{\rm B},j}^{x'} \rangle, \langle S_{{\rm B},j}^{y'} \rangle, \langle S_{{\rm B},j}^{z'} \rangle) = (0,0,-\tilde{S}_0), \label{eq:msb} \end{align} where $\tilde{S}_0$ is the amplitude of the staggered magnetization per site, which depends on the temperature. \subsection{Exchange coupling at the interface} \label{sec:modelINT} The interfacial exchange coupling between FI (or AFI) and NM is described by the Hamiltonian \begin{align} H_{\rm ex} &= \sum_{\nu} \sum_{\langle i,j \rangle} \left[ T_{ij}^{\nu} S_{\nu,i}^+ s_j^- + (T_{ij}^\nu)^* S_{\nu,i}^- s_j^+ \right] , \label{eq:HamINT} \end{align} where $S_{\nu,i}^\pm= S^x_{\nu,i} \pm i S^y_{\nu,i}$ are the raising and lowering operators of the localized spins in the laboratory coordinate, $s_j^\pm$ are those of the conduction-electron spins, $T_{ij}^{\nu}$ is an exchange coupling between a pair of interfacial sites, $\langle i,j \rangle$, and $\nu$ is the sublattice of the localized spins. The sublattice is unique ($\nu = {\rm A}$) for FI, and there are two sublattices ($\nu = {\rm A}, {\rm B}$) for AFI. To proceed with the calculation, we need to rewrite the Hamiltonian (\ref{eq:HamINT}) in terms of the spin operators in the magnetization-fixed coordinate $(x',y',z')$.
The conversion formula for the spin operators from the magnetization-fixed coordinate $(x',y',z')$ to the laboratory coordinate $(x,y,z)$ is given as \begin{align} S^{x}_{\nu,j} &= \cos \theta \, S_{\nu,j}^{x'} - \sin \theta \, S_{\nu,j}^{z'} , \\ S^{y}_{\nu,j} &= S_{\nu,j}^{y'}, \\ S^{z}_{\nu,j} &= \sin \theta \, S_{\nu,j}^{x'} + \cos \theta \, S_{\nu,j}^{z'} , \end{align} where $\theta$ is the angle of the magnetization (see Fig.~\ref{fig:rotation}). By this coordinate transformation, we obtain \begin{align} S^+_{\nu,j} &= \cos^2 (\theta/2) S^{+\prime}_{\nu,j} -\sin^2 (\theta/2)S^{-\prime}_{\nu,j} - \sin \theta S^{z'}_{\nu,j}, \label{eq:calSjplus} \\ S^-_{\nu,j} &= \cos^2 (\theta/2)S^{-\prime}_{\nu,j} -\sin^2 (\theta/2)S^{+\prime}_{\nu,j} - \sin \theta S^{z'}_{\nu,j}, \label{eq:calSjminus} \end{align} where $S^{\pm\prime}_{\nu,j} = S^{x'}_{\nu,j}\pm i S^{y'}_{\nu,j}$. Then, the interface exchange interaction can be divided into three parts as \begin{align} H_{\rm ex} &= \sum_{a=1}^3 H_{\rm ex}^{(a)}, \\ H_{\rm ex}^{(a)} &= g_a(\theta) \sum_{\nu} \sum_{\langle i,j \rangle} \left[ T^\nu_{ij} S_{\nu,i}^{(a)} s_j^- + (T^\nu_{ij})^* (S_{\nu,i}^{(a)})^\dagger s_j^+ \right], \end{align} where $S^{(a)}$ and $g_a(\theta)$ ($a=1,2,3$) are defined as \begin{align} & S^{(1)}_{\nu,i}=S^{z'}_{\nu,i}, \quad g_1(\theta) = -\sin \theta, \label{S1} \\ & S^{(2)}_{\nu,i}=S^{+\prime}_{\nu,i} , \quad g_2(\theta) = \cos^2 (\theta/2), \label{S2} \\ & S^{(3)}_{\nu,i}=S^{-\prime}_{\nu,i} , \quad g_3(\theta) =-\sin^2 (\theta/2). \label{S3} \end{align} \section{Spin Current} \label{sec:SpinCurrent} Next, we calculate the spin current using the second-order perturbation with respect to the exchange coupling at the interface \cite{Bruus2004,Ohnuma2013,Matsuo2018,Kato2019,Ominato2019,Ominato2020}. We derive a general formula for the spin current and the spin conductance.
Our formula is expressed in terms of spin susceptibilities of the NM and FI (or AFI) layers and is general, i.e., the formula does not depend on a specific model. \subsection{Definition} We define the spin current as the absorption rate of the $z$ component of the spin angular momentum on the NM side at the interface: \begin{align} \hat{I}_s &= -\hbar \partial_t s^{z}_{\rm tot} = i [s_{\rm tot}^{z},H], \\ s^{z}_{\rm tot} &= \frac{1}{2} \sum_{\bm k}(c_{{\bm k}\uparrow}^\dagger c_{{\bm k}\uparrow} - c_{{\bm k}\downarrow}^\dagger c_{{\bm k}\downarrow}), \end{align} where $H=H_{\rm NM} + H_{{\rm FI/AFI}} + H_{\rm ex}$ is the total Hamiltonian. The spin current is calculated in the form \begin{align} \hat{I}_s &= \sum_{a=1}^3 \hat{I}_s^{(a)}, \\ \hat{I}_s^{(a)} & = -i g_a(\theta) \sum_{\langle i,j\rangle} \sum_{\nu} (T^\nu_{ij} S_{\nu,i}^{(a)} s_j^- - {\rm h.c.}). \end{align} We note that the $z$ component of the total spin is not conserved at the interface because the magnetization of FI (or AFI) is not necessarily aligned in the $z$ direction. \subsection{Second-order perturbation} We calculate the spin current within the second-order perturbation with respect to the interfacial exchange coupling. For simplicity, we assume that the correlation between the exchange couplings for different pairs vanishes after random averaging over the positions of the interfacial sites: \begin{align} \langle T_{ij}^\nu (T_{i'j'}^{\nu'})^* \rangle_{\rm av} = |T_{ij}^\nu|^2 \delta_{i,i'} \delta_{j,j'} \delta_{\nu,\nu'}. \label{eq:localapproximation} \end{align} Then, the spin current is written as a sum of independent spin-exchange processes at different pairs of interfacial sites. We note that this kind of assumption has been used for a long time to describe electric tunnel junctions\cite{Bruus2004,Kato2019,footnoteHT}.
Then, the spin current is calculated as~\cite{Bruus2004,Kato2019} \begin{align} & \langle \hat{I}_s^{(a)} \rangle = \hbar^2 g_a(\theta)^2 \sum_{\nu} A_\nu \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \, \left(-\frac{1}{{\cal N}{\cal N}_{\rm F}} \right)\sum_{{\bm k},{\bm q}} \nonumber \\ & \times {\rm Re} \, \Biggl[ \chi^<({\bm q},\omega) G_{\nu \nu}^{R,(a)}({\bm k},\omega) + \chi^A({\bm q},\omega) G_{\nu \nu}^{<,(a)}({\bm k},\omega)\Biggr], \end{align} where $A_\nu = 2\sum_{\langle i,j\rangle}|T_{ij}^\nu|^2/\hbar$, ${\cal N}$ is the number of sites in NM, and ${\cal N}_{\rm F}$ is the number of unit cells in FI (or AFI). Hereafter, we set $A= A_{\rm A}$ for FI and assume that the exchange coupling is equal for the two sublattices, $A=A_{\rm A}=A_{\rm B}$, for AFI. The advanced and lesser spin susceptibilities of NM, $\chi^A({\bm q},\omega)$ and $\chi^<({\bm q},\omega)$, are defined by the Fourier transformation of the following functions: \begin{align} \chi^A({\bm q},t) &= \frac{i}{{\cal N}\hbar} \theta(-t) \langle [ s^+_{\bm q}(t), s^-_{\bm q}(0) ] \rangle_0, \\ \chi^{<}({\bm q},t) &= \frac{i}{{\cal N}\hbar} \langle s^-_{\bm q}(0) s^+_{\bm q}(t) \rangle_0, \end{align} where $\langle \cdots \rangle_0$ indicates an average with respect to the unperturbed Hamiltonian, and $s^{\pm}_{\bm q}$ are the spin operators of conduction electrons given as \begin{align} s_{\bm q}^- & = \sum_{\bm k} c_{{\bm k}\downarrow}^\dagger c_{{\bm k}+{\bm q}\uparrow}, \\ s_{\bm q}^+ & = \sum_{\bm k} c_{{\bm k}+{\bm q}\uparrow}^\dagger c_{{\bm k}\downarrow}, \end{align} and $s^{\pm}_{\bm q}(t)=e^{iH_{\rm NM}t/\hbar} s^{\pm}_{\bm q} e^{-iH_{\rm NM}t/\hbar}$. For quasi-thermal equilibrium states, we can prove the fluctuation-dissipation theorem \begin{align} \chi^<({\bm q},\omega) &= -2i f(\hbar \omega + \mu_s) {\rm Im} \, \chi^A({\bm q},\omega), \label{eq:FDrelation} \end{align} using the Lehmann representation, where $f(E)=(e^{\beta E}-1)^{-1}$ is the Bose distribution function.
The retarded and lesser spin correlation functions of FI (or AFI), $G_{\nu \nu'}^{R,(a)}({\bm k},\omega)$ and $G_{\nu\nu'}^{<,(a)}({\bm k},\omega)$, are defined by the Fourier transformation of the following functions: \begin{align} & G_{\nu\nu'}^{R,(a)}({\bm k},t) = -\frac{i}{\hbar} \theta(t)\langle [S^{(a)}_{\nu,{\bm k}}(t),(S^{(a)}_{\nu'{\bm k}}(0))^\dagger] \rangle_0, \label{GRa} \\ & G_{\nu\nu'}^{<,(a)}({\bm k},t) = -\frac{i}{\hbar} \langle (S^{(a)}_{\nu',{\bm k}}(0))^\dagger S^{(a)}_{\nu,{\bm k}}(t) \rangle_0, \label{GLa} \end{align} where $S^{(a)}_{\nu,{\bm k}}$ ($a=1,2,3$) are the spin operators defined by the spatial Fourier transformation of Eqs.~(\ref{S1})-(\ref{S3}). To continue the calculation, we introduce the fluctuating part of the spin operator as $\delta S_{\nu,{\bm k}}^{(a)}(t) = S_{\nu,{\bm k}}^{(a)}(t) - \langle S_{\nu,{\bm k}}^{(a)} \rangle_0$. From Eqs.~(\ref{GRa}) and (\ref{GLa}), we obtain \begin{align} G^{R,(a)}_{\nu \nu}({\bm k},\omega) &= \delta G^{R,(a)}_{\nu \nu}({\bm k},\omega), \label{deltaGr}\\ G^{<,(a)}_{\nu \nu}({\bm k},\omega) &= -\frac{2\pi i {\cal N}_{\rm F} \tilde{S}_0^2}{\hbar} \delta_{a,1} \delta_{{\bm k},{\bm 0}} \delta(\omega) \nonumber \\ & + \delta G^{<,(a)}_{\nu \nu}({\bm k},\omega), \end{align} where $\delta G^{R,(a)}_{\nu \nu}({\bm k},\omega)$ and $\delta G^{<,(a)}_{\nu \nu}({\bm k},\omega)$ are the correlation functions defined by replacing $S_{\nu,{\bm k}}^{(a)}(t)$ with $\delta S_{\nu,{\bm k}}^{(a)}(t)$ in Eqs.~(\ref{GRa}) and (\ref{GLa}). Using the Lehmann representation, we can prove the dissipation-fluctuation theorem \begin{align} \delta G_{\nu\nu'}^{<,(a)}({\bm k},\omega) &= 2 i f(\hbar \omega) {\rm Im} \, \delta G^{R,(a)}_{\nu\nu'}({\bm k},\omega).
\label{eq:GFD} \end{align} Combining Eqs.~(\ref{eq:FDrelation}) and (\ref{deltaGr})-(\ref{eq:GFD}), the spin current is calculated as \begin{align} \langle \hat{I}_s \rangle &= I_{s,1} + I_{s,2}, \label{eq:formula1} \\ I_{s,1} &= \hbar A\sin^2 \theta \, \tilde{S}_0^2 N_{\nu} \, {\rm Im} \, \chi_{\rm loc}^R(0), \label{eq:formula0} \\ I_{s,2} &= \sum_{a=1}^3 2g_a(\theta)^2 \hbar A \sum_{\nu} \int \frac{d(\hbar \omega)}{2\pi} \, {\rm Im} \, \chi_{\rm loc}^R(\omega) \nonumber \\ & \times (-{\rm Im}\, \delta G_{\nu\nu,{\rm loc}}^{R,(a)}(\omega)) [ f(\hbar \omega)-f(\hbar \omega + \mu_s)], \label{eq:formulaa} \end{align} where $N_\nu$ is the number of sublattices ($N_\nu = 1$ for FI and $2$ for AFI). The local spin correlation functions, $\chi_{\rm loc}^{R}(\omega)$ and $G_{\nu\nu,{\rm loc}}^{R,(a)}(\omega)$, are defined as \begin{align} \chi_{\rm loc}^{R}(\omega) &= \frac{1}{\cal N} \sum_{\bm q} \chi^{R}({\bm q},\omega), \\ G_{\nu\nu,{\rm loc}}^{R,(a)}(\omega) &= \frac{1}{{\cal N}_{\rm F}} \sum_{\bm k} G_{\nu\nu}^{R,(a)}({\bm k},\omega). \end{align} Finally, let us summarize the physical meaning of the spin current formula given by Eqs.~(\ref{eq:formula1})-(\ref{eq:formulaa}). We stress that this formula is written in terms of spin susceptibilities for bulk materials and, therefore, is applicable to general systems such as NM with strong electron-electron interactions. The spin current is composed of two parts. The first part, $I_{s,1}$, describes the static part, which is induced by spin flipping owing to the static effective transverse magnetic field via the interfacial exchange coupling. Actually, the static part $I_{s,1}$ is almost independent of the temperature well below the magnetic transition temperature and reaches its maximum when the accumulated spin at the interface on the NM side is perpendicular to the magnetization of FI (or the N\'eel vector of AFI), i.e., $\theta = \pi/2$.
This feature of $I_{s,1}$ is consistent with the theory based on the spin mixing conductance in Refs.~\onlinecite{Nakayama2013a,Chen2013a}. However, there exists an additional contribution $I_{s,2}$, which is induced by the creation or annihilation of magnons. This part can be regarded as a dynamic part. In subsequent sections, we will show that this dynamic part depends on the temperature and that its angle dependence differs from that of the static part. \subsection{Spin conductance} From Eqs.~(\ref{eq:formula1})-(\ref{eq:formulaa}), the (dimensionless) spin conductance at the interface defined in Eq.~(\ref{eq:defGs}) is calculated as \begin{align} G_{s} &= G_{s,1} + G_{s,2}, \label{eq:Gsformula1} \\ G_{s,1} & = G_0 \sin^2 \theta, \label{eq:Gsformula2} \\ G_{s,2} & = \sum_{a=2}^3 2G_0 g_a(\theta)^2 \frac{1}{N_{\nu}} \sum_\nu \int \frac{dE}{2\pi} \, E \nonumber \\ & \times \left(-\frac{1}{\tilde{S}_0^2} {\rm Im}\, \delta G_{\rm loc}^{R,(a)}(E/\hbar)\right) \left[- \frac{df}{dE}\right], \label{eq:Gsformula3} \end{align} where $G_0 = \hbar A \tilde{S}_0^2 N_{\nu} \pi N(0)^2$, and $N(0)$ is the density of states in NM at the Fermi energy. We note that the spin chemical potential $\mu_s(0)$ is now identified with $\mu_s$ in the model of NM (see Eq.~(\ref{eq:HamiltonianNM2})). \section{SMR in FI/NM bilayers} \label{sec:SMRFI} In this section, we calculate the spin conductance by employing the spin-wave approximation within the description of noninteracting magnons for the ferromagnetic Heisenberg model. Hereafter, we assume that the amplitude of spins in the ground state, $S_0=\tilde{S}(T=0)$, is sufficiently large, and that the temperature is much lower than the magnetic transition temperature.
By applying the Holstein-Primakoff transformation to the spin operators in the magnetization-fixed coordinate, the Hamiltonian of FI is approximately written as \begin{align} H_{\rm FI} &= \sum_{{\bm k}} \hbar \omega_{\bm k} b_{\bm k}^\dagger b_{\bm k}, \\ \hbar \omega_{\bm k} &\simeq {\cal D}k^2 + E_0, \end{align} where $b_{\bm k}$ ($b_{\bm k}^\dagger$) is the magnon annihilation (creation) operator, $E_0 = \hbar \gamma h_{\rm dc}$ is the Zeeman energy, and ${\cal D}=|J|S_0a^2$. In the spin-wave approximation neglecting magnon-magnon interaction, the local spin susceptibility is calculated as ${\rm Im} \, \delta G^{R,(1)}_{\rm loc}(E/\hbar) = 0$, and \begin{align} & {\rm Im} \, \delta G^{R,(2)}_{\rm loc}(E/\hbar) = -{\rm Im} \, \delta G^{R,(3)}_{\rm loc}(-E/\hbar) \nonumber \\ & \hspace{5mm} = -2\pi S_0 D_{\rm F}(E), \end{align} where $D_{\rm F}(E)$ is the density of states for magnon excitation per unit cell: \begin{align} D_{\rm F}(E) = \frac{1}{{\cal N}_{\rm F}} \sum_{\bm k} \delta(E - \hbar \omega_{\bm k}). \end{align} Although the magnetization $\tilde{S}_0$ depends weakly on the temperature within the present spin-wave approximation, we neglect this dependence for simplicity $(\tilde{S}_0\simeq S_0)$. Then, the spin conductance takes the form \begin{align} G_s &= G_{s,0} + \Delta G_s \sin^2 \theta , \label{GSform} \end{align} where $G_{s,0}$ is the part that is independent of $\theta$, and the second term describes the angle dependence, i.e., SMR. The amplitude of SMR is calculated as \begin{align} \Delta G_s &= G_0 (1-g_{\rm F}(T)), \label{GSform2} \\ g_{\rm F}(T) & = \frac{1}{S_0} \int_0^{\infty} dE\, E D_{\rm F}(E) \left[- \frac{df}{dE}\right] . \end{align} We note that the first term $G_0$ in Eq.~(\ref{GSform2}) originates from the static part $G_{s,1}$, while the second term $-G_0 g_{\rm F}(T)$ originates from the dynamic part $G_{s,2}$.
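The factor $g_{\rm F}(T)$ is straightforward to evaluate numerically. The sketch below is ours, not part of the paper: it assumes a parabolic three-dimensional magnon band with density of states $D_{\rm F}(E)=\tfrac{3}{2}\sqrt{E}/E_{\rm c}^{3/2}$ (normalized to one mode per unit cell up to the cutoff $E_{\rm c}$) and units with $k_{\rm B}=1$; the cutoff convention of Appendix~\ref{app:cutoff} may differ.

```python
import numpy as np

def g_F(T, S0=1.0, Ec=1.0, n=400001):
    """g_F(T) = (1/S0) * Int_0^Ec dE E D_F(E) (-df/dE) for the Bose
    function f(E) = 1/(exp(E/T) - 1), with an ASSUMED parabolic magnon
    band D_F(E) = (3/2) sqrt(E)/Ec**1.5 (one mode per unit cell)."""
    u = np.linspace(0.0, np.sqrt(Ec), n)   # substitute E = u**2 to tame E**-0.5
    E = u * u
    x = E / T
    with np.errstate(divide="ignore", invalid="ignore"):
        # -df/dE = exp(x) / (T * (exp(x) - 1)**2)
        mdf = np.where(x > 0, np.exp(x) / (T * np.expm1(x) ** 2), 0.0)
    integrand = 3.0 * u**4 * mdf / Ec**1.5  # E * D_F(E) * (-df/dE) * dE/du
    # composite trapezoidal rule (np.trapz was removed in NumPy 2.0)
    g = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(u))
    return g / S0

# Low-temperature prefactor of the T**1.5 law:
# (9/4) * Gamma(3/2) * zeta(3/2) ~ 5.2
print(round(g_F(0.01) / 0.01**1.5, 2))
```

At low temperatures the computed prefactor $g_{\rm F}(T)/(k_{\rm B}T/E_{\rm c})^{3/2}$ approaches $\tfrac{9}{4}\Gamma(\tfrac{3}{2})\zeta(\tfrac{3}{2})\approx 5.2$, consistent with the coefficient quoted in the text.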
The factor $g_{\rm F}(T)$ is small under the condition $S_0 \gg 1$, for which the spin-wave approximation based on the non-interacting magnon picture works well. If we neglect $g_{\rm F}(T)$, we recover the usual positive SMR behavior with $\Delta G_s = G_0$. The factor $g_{\rm F}(T)$ weakens the positive SMR. When the Zeeman energy is neglected ($E_0 \simeq 0$), the temperature dependence of the SMR signal is obtained at sufficiently low temperatures as \begin{align} \frac{\Delta G_s}{\Delta G_s(T=0)} \simeq 1-\frac{5.2}{S_0} \left( \frac{k_{\rm B}T}{E_{\rm c}} \right)^{3/2} , \end{align} where $E_{\rm c} = {\cal D}k_{\rm c}^2$ is the cutoff energy, which is on the order of the transition temperature (for details, see Appendix~\ref{app:cutoff}). If $g_{\rm F}(T)$ exceeds 1, the sign of SMR changes. The temperature of the sign change, $T_{\rm r}$, is estimated as \begin{align} k_{\rm B}T_{\rm r} \sim \left(\frac{S_0}{5.2}\right)^{2/3} E_{\rm c}. \label{eq:TrFI} \end{align} We note that for $S_0 \gg 1$, $k_{\rm B}T_{\rm r}$ becomes of the order of $E_{\rm c}$, for which the non-interacting magnon approximation is not justified. This estimate indicates that the sign change of SMR occurs if $S_0$ is not large. \section{SMR in AFI/NM bilayers} \label{sec:SMRAFI} In the spin-wave approximation, the Hamiltonian for AFI is obtained in the leading order of $1/S_0$ as \begin{align} H_{\rm AFI} &= \sum_{{\bm k}} \hbar \omega_{\bm k} (\alpha_{\bm k}^\dagger \alpha_{\bm k} + \beta_{\bm k}^\dagger \beta_{\bm k}), \label{AFISW} \end{align} where $\alpha_{\bm k}$ and $\beta_{\bm k}$ are the annihilation operators for magnons, $\omega_{\bm k} = v_{\rm m} |{\bm k}|$ is the dispersion relation, and $v_{\rm m} = zJS_0a/(\sqrt{3} \hbar)$ is the velocity of magnons (see Appendix~\ref{app:SWAAFI}). Here, we approximated the magnon dispersion by a linear dispersion in the long-wavelength limit ($|{\bm k}|\rightarrow 0$).
The local spin susceptibility is calculated as ${\rm Im} \, \delta G_{\nu\nu,{\rm loc}}^{R,(1)}(E/\hbar) = 0$ and \begin{align} & \sum_{\nu={\rm A},{\rm B}} {\rm Im} \, \delta G_{\nu\nu,{\rm loc}}^{R,(2)}(E/\hbar)= -\sum_{\nu={\rm A},{\rm B}} {\rm Im} \, \delta G_{\nu\nu,{\rm loc}}^{R,(3)}(-E/\hbar) \nonumber \\ & \hspace{5mm} = -2\pi S_0 F(E) (D_{\rm AF}(E)-D_{\rm AF}(-E)), \label{GAFloc} \end{align} where $D_{\rm AF}(E)$ and $F(E)$ are the density of states for magnon excitation and form factor, respectively (see Appendix~\ref{app:SWAAFI}): \begin{align} D_{\rm AF}(E) &= \frac{1}{{\cal N}_{\rm F}} \sum_{\bm k} \delta(E - \hbar \omega_{\bm k}), \\ F(E) &= \frac{\sqrt{3}\hbar v_{\rm m}}{|E|a}. \label{eq:DAFdef} \end{align} Using these results, the spin conductance is calculated in the form of Eq.~(\ref{GSform}). Then, the amplitude of the SMR is given as \begin{align} \Delta G_s &= G_0 (1-g_{\rm AF}(T)) , \\ g_{\rm AF}(T) & = \frac{1}{S_0} \int_0^{\infty} dE\, E F(E) D_{\rm AF}(E) \left[- \frac{df}{dE}\right] . \end{align} For AFI, the temperature dependence of the SMR signal is given as \begin{align} \frac{\Delta G_s}{\Delta G_s (T=0)} & \simeq 1-\frac{4.4}{S_0} \left( \frac{k_{\rm B}T}{E_{\rm c}} \right)^{2}, \label{eq:SMRtempAFI} \end{align} where $E_{\rm c} = \hbar v_{\rm m} k_{\rm c}$ is the cutoff energy (see Appendix~\ref{app:cutoff}). The temperature of the sign change is estimated as \begin{align} k_{\rm B}T_{\rm r} \sim \left(\frac{S_0}{4.4}\right)^{1/2} E_{\rm c}. \label{eq:TrAFI} \end{align} This estimate indicates that the sign change of SMR may occur if $S_0$ is not large. \section{Comparison with Experiments} \label{sec:experiment} Let us first consider SMR for FI. SMR experiments for FI have been performed mainly for Pt/YIG bilayer systems~\cite{Nakayama2013a,Chen2013a,Vlietstra2013,Althammer2013,Hahn2013}. 
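The factor $g_{\rm AF}(T)$ admits a similar numerical check. The sketch below is again our own, with the assumptions spelled out in the comments: a linear magnon band, $D_{\rm AF}(E)=3E^2/E_{\rm c}^3$ (one mode per unit cell), $F(E)=\sqrt{3}E_{\rm c}/(k_{\rm c}a\,E)$, and a Debye-like cutoff $k_{\rm c}a=(6\pi^2)^{1/3}$, whereas the paper's cutoff convention is fixed in Appendix~\ref{app:cutoff}; $k_{\rm B}=1$.

```python
import numpy as np

def g_AF(T, S0=1.0, Ec=1.0, n=200001):
    """g_AF(T) = (1/S0) * Int_0^Ec dE E F(E) D_AF(E) (-df/dE) for an ASSUMED
    linear magnon band: D_AF(E) = 3 E**2/Ec**3 (one mode per unit cell),
    form factor F(E) = sqrt(3)*Ec/(kc*a*E), and a Debye-like cutoff
    kc*a = (6*pi**2)**(1/3); units with k_B = 1."""
    kca = (6.0 * np.pi**2) ** (1.0 / 3.0)
    E = np.linspace(0.0, Ec, n)
    x = E / T
    with np.errstate(divide="ignore", invalid="ignore"):
        # -df/dE for the Bose function f(E) = 1/(exp(E/T) - 1)
        mdf = np.where(x > 0, np.exp(x) / (T * np.expm1(x) ** 2), 0.0)
    integrand = (3.0 * np.sqrt(3.0) / kca) * (E / Ec) ** 2 * mdf
    # composite trapezoidal rule (np.trapz was removed in NumPy 2.0)
    g = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(E))
    return g / S0

# Low-temperature prefactor of the T**2 law:
# sqrt(3)*pi**2 / (6*pi**2)**(1/3) ~ 4.4
print(round(g_AF(0.01) / 0.01**2, 1))
```

With these assumptions the low-temperature prefactor comes out as $\sqrt{3}\pi^2/(6\pi^2)^{1/3}\approx 4.4$, matching the coefficient in Eq.~(\ref{eq:SMRtempAFI}).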
In our theory, the temperature of the sign change of the SMR signal for Pt/YIG, estimated using Eq.~(\ref{eq:TrFI}) with $S_0=10$, exceeds the magnetic transition temperature. This indicates that the correction by the factor $g_{\rm F}(T)$ is small, and no sign change occurs. This result is consistent with the detailed measurement of SMR in Pt/YIG\cite{Meyer2014,Marmion2014}, where the observed temperature dependence is interpreted by that of the spin diffusion length. Our result also provides insights into measurements of non-local magnetoresistance in Pt/YIG/Pt nanostructures\cite{Cornelissen2015,Goennenwein2015}, which is induced by magnon diffusion in YIG; our result indicates that the temperature dependence of non-local magnetoresistance mainly comes from that of the spin diffusion in YIG. \begin{figure}[!tb] \begin{center} \includegraphics[width=75mm]{FigCompare2.eps} \caption{(Color online) Experimental data of SMR (amplitude of the magnetization-dependent part of magnetoresistance) measured in Pt/NiO/YIG trilayer systems for NiO thicknesses of 2.0, 2.2, and 2.7 nm, obtained from Ref.~\onlinecite{Hou2017}. The solid curves show fits using the quadratic temperature dependence described by Eq.~(\ref{eq:SMRtempAFI}). The data are normalized to $-1$ at zero temperature using the value extrapolated from the fitting. } \label{fig:compare} \end{center} \end{figure} SMR was also measured for AFI/NM bilayer systems in several experiments. In Fig.~\ref{fig:compare}, we show the measured temperature dependence of SMR in Pt/NiO/YIG trilayer systems, obtained from Ref.~\onlinecite{Hou2017}. In this experiment, the factor $g_s \coth(d_N/\lambda)$ is estimated to be much smaller than 1 using $\lambda=1.5 \ {\rm nm}$, $d=4 \ {\rm nm}$, $\sigma^{-1} = 860 \ \Omega \cdot {\rm nm}$, and $G_s/(\hbar/2e^2)\sim 3\times 10^{12} \ \Omega^{-1} \cdot {\rm m}^{-2}$. Therefore, SMR is proportional to the spin conductance (see Eq.~(\ref{SMRformula})).
As seen from Fig.~\ref{fig:compare}, the sign of SMR changes at $80$, $140$, and $180 \ {\rm K}$ for NiO thicknesses of 2.0, 2.2, and 2.7 nm. In addition, we show in the figure the curves fitted using the quadratic temperature dependence described by Eq.~(\ref{eq:SMRtempAFI}). This fitting indicates that the quadratic temperature dependence explains the experimental data well at low temperatures. If we employ $S_0=0.94$ and $E_{\rm c} = 1500 \ {\rm K}$ from the magnon dispersion measured by the neutron experiment~\cite{Hutchings1972}, the temperature of this sign change is estimated for bulk NiO as $T_{\rm r} = 690 \ {\rm K}$ (see also Appendix~\ref{app:cutoff}). This estimated temperature for the sign change is much larger than the experimental observation shown in Fig.~\ref{fig:compare}. However, the N{\'e}el temperature of NiO ($T_N = 525 \ {\rm K}$) is suppressed for the thin layer~\cite{Alders1998}, which indicates a decrease in the magnon velocity. In addition, the non-interacting magnon approximation holds well only at temperatures low compared to the N{\'e}el temperature. In this paper, we discussed SMR at low temperatures using the spin-wave approximation neglecting magnon-magnon interaction. Because the spin current formula derived in this paper is general, SMR can be evaluated for arbitrary temperatures using a numerical method such as the Monte Carlo method. Detailed numerical analysis beyond the non-interacting magnon approximation and the consideration of interface roughness are left as future problems. \section{Onsager Relation} \label{sec:noise} In this section, we formulate noise in thermal equilibrium ($\delta \mu_s = 0$) and derive the Onsager relation, which relates thermal noise to spin conductance. We define the noise power of the spin current as \begin{align} {\cal S} =\int_{-\infty}^{\infty} dt (\langle \hat{I}_{\rm S}(t) \hat{I}_{\rm S}(0) \rangle+\langle \hat{I}_{\rm S}(0) \hat{I}_{\rm S}(t) \rangle).
\end{align} In the second-order perturbation with respect to the exchange coupling at the interface, we can replace the average with that for an unperturbed system as $\langle \cdots \rangle \simeq \langle \cdots \rangle_0$. Using a procedure similar to that used for the spin current, the noise power is calculated as \begin{align} {\cal S} & = {\cal S}_1 + {\cal S}_2, \\ {\cal S}_1 &= 2\hbar^2 A \tilde{S}_0^2 N_\nu \sin^2 \theta \nonumber \\ & \hspace{5mm} \times \lim_{\omega \rightarrow 0} (-i) [\chi_{\rm loc}^<(\omega) + \chi_{\rm loc}^>(\omega) ], \\ {\cal S}_2 &= \sum_{a=1}^3 2\hbar^2 A g_a(\theta)^2 \sum_{\nu} \int_{-\infty}^{\infty}\frac{d(\hbar \omega)}{2\pi} \nonumber \\ & \times \Biggl[\chi_{\rm loc}^<(\omega) \delta G_{\nu\nu,{\rm loc}}^{>,(a)}(\omega) + \chi_{\rm loc}^>(\omega) \delta G_{\nu\nu,{\rm loc}}^{<,(a)}(\omega) \Biggr]. \end{align} Here, $\chi_{\rm loc}^>(\omega)$ and $\delta G_{\nu\nu',{\rm loc}}^{>,(a)}(\omega)$ are the greater components of the local spin susceptibilities defined as \begin{align} & \chi^>_{\rm loc}(\omega) =\frac{i}{{\cal N}^2\hbar} \sum_{\bm q} \int dt \, e^{i\omega t} \langle s_{\bm q}^+(t) s_{\bm q}^-(0) \rangle , \\ & \delta G^{>,(a)}_{\nu \nu',{\rm loc}}(\omega) = -\frac{i}{{\cal N}_{\rm F}\hbar} \sum_{\bm k} \int dt \, e^{i\omega t} \nonumber \\ & \hspace{25mm} \times \langle \delta S_{\nu,{\bm k}}^{(a)}(t) (\delta S_{\nu',{\bm k}}^{(a)}(0))^\dagger \rangle_0. \end{align} Using the dissipation-fluctuation relations \begin{align} \chi_{\rm loc}^>(\omega) &= 2 i (1+f(\hbar \omega+\delta \mu_s)) {\rm Im} \, \chi_{\rm loc}^R(\omega), \\ \delta G_{\nu\nu',{\rm loc}}^{>,(a)}(\omega) &= 2 i (1+f(\hbar \omega)) {\rm Im} \, \delta G_{\nu\nu',{\rm loc}}^{R,(a)}(\omega), \end{align} the thermal spin-current noise is calculated as \begin{align} {\cal S}_1 &= 2 \hbar k_{\rm B} T G_0 \sin^2 \theta, \\ {\cal S}_2 &= \sum_{a=1}^3 8 \hbar^2 A g_a(\theta)^2 \sum_{\nu} \int_{-\infty}^{\infty}\frac{d\omega}{2\pi} \, {\rm Im} \, \chi_{\rm loc}^R(\omega) \nonumber \\ &
\hspace{5mm} \times {\rm Im} \, \delta G_{\nu \nu,{\rm loc}}^{R,(a)}(\omega) \, f(\hbar \omega)(1+ f(\hbar \omega) ). \end{align} Using the identity \begin{align} f(\hbar \omega)(1+ f(\hbar \omega)) = k_{\rm B}T \left(-\frac{df}{dE}\right), \end{align} and comparing these results with Eqs.~(\ref{eq:Gsformula1})-(\ref{eq:Gsformula3}), we can prove the Onsager relation \begin{align} {\cal S} = 4 \hbar k_{\rm B} T G_s. \end{align} We stress that this proof is general, and this relation holds at arbitrary temperatures for a wide class of Hamiltonians for NM and FI (or AFI). \section{Summary} \label{sec:summary} We constructed a microscopic theory for spin Hall magnetoresistance observed in bilayer systems composed of a normal metal and a ferromagnetic (or antiferromagnetic) insulator. We formulated the spin current at the interface in terms of spin susceptibilities and clarified that it is composed of static and dynamic parts. The static part of the spin current originates from spin flipping owing to an effective magnetic field induced by the interfacial exchange coupling. This part is almost independent of the temperature, and takes its maximum when the magnetization (or the N{\'e}el vector) is perpendicular to the accumulated spins in the normal metal, which is consistent with intuitive discussions in previous experimental studies. However, the dynamic part, which is induced by the creation or annihilation of magnons, depends on the temperature, and has the opposite magnetization dependence, i.e., takes its maximum when the magnetization (or the N{\'e}el vector) is parallel to the accumulated spins in the normal metal. The dynamic part becomes larger when the amplitude of the localized spin, $S_0$, is smaller. This indicates that the sign of SMR changes at a specific temperature if $S_0$ is not large. Our study gives the first microscopic description of SMR in NM/AFI bilayer systems.
We also showed that the measured temperature dependence of the SMR in the Pt/NiO/YIG trilayer system~\cite{Hou2017} is consistent with our results. Finally, we proved the Onsager relation between spin conductance and thermal spin-current noise using a microscopic calculation. Our general formalism, which is applicable to various systems at arbitrary temperatures, is essential for describing spin Hall magnetoresistance. Theoretical analysis beyond the non-interacting magnon approximation and the extension of our theory toward non-collinear magnets are left as future problems. \acknowledgements T.K. is financially supported by JSPS KAKENHI Grant Number JP20K03831. M.M. is financially supported by the Priority Program of Chinese Academy of Sciences, Grant No. XDB28000000 and KAKENHI (No. 20H01863) from MEXT, Japan.
\section{ Introduction} In this paper, we consider the Dirichlet problem for the Lagrangian mean curvature equation on a uniformly convex, bounded domain $\Omega\subset\mathbb{R}^n$, given by \begin
{align} \begin{cases} F(D^{2}u)=\sum _{i=1}^{n}\arctan \lambda_{i}=\psi(x) \text{ in } \Omega\\ u=\phi \text{ on } \partial \Omega \end{cases}\label{lab} \end{align} where $\lambda_i$'s are the eigenvalues of the Hessian matrix $D^2u$, $\psi$ is the potential for the mean curvature of the Lagrangian submanifold $\{(x,Du(x))|x\in \Omega\}\subseteq\mathbb {R}^n \times \mathbb {R}^n$, and $\phi$ is a given continuous function on $\partial \Omega$. Our main results in this paper are the following: \begin{theorem}\label{main} Suppose that $\phi\in C^{0}(\partial \Omega)$ and $\psi: \overline \Omega\rightarrow [(n-2)\frac{\pi}{2}+\delta, n\frac{\pi}{2})$ is in $C^{1,1}(\overline \Omega)$, where $\Omega$ is a uniformly convex, bounded domain in $\mathbb{R}^{n}$ and $\delta>0$. Then there exists a unique solution $u\in C^{2,\alpha}(\Omega)\cap C^{0}(\overline{\Omega})$ to the Dirichlet problem (\ref{lab}). \end{theorem} \begin{theorem}\label{2.20} Suppose that $\phi\in C^{0}(\partial \Omega)$ and $\psi:\overline \Omega \rightarrow (-n\frac{\pi}{2}, n\frac{\pi}{2})$ is a constant, where $\Omega$ is a uniformly convex, bounded domain in $\mathbb{R}^{n}$. Then there exists a unique solution $u\in C^{0}(\overline{\Omega})$ to the Dirichlet problem (\ref{lab}). \end{theorem} When the phase $\psi$ is constant, denoted by $c$, $u$ solves the special Lagrangian equation \begin{equation} \sum _{i=1}^{n}\arctan \lambda_{i}=c \label{s1} \end{equation} or equivalently, \[ \cos c \sum_{1\leq 2k+1\leq n} (-1)^k\sigma_{2k+1}-\sin c \sum_{0\leq 2k\leq n} (-1)^k\sigma_{2k}=0. \] Equation (\ref{s1}) originates in the special Lagrangian geometry of Harvey-Lawson \cite{HL}.
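For orientation, the two-dimensional case of the equivalent algebraic form above is worth spelling out (a standard observation):

```latex
% n = 2: the algebraic form reads \cos c\,\sigma_1 - \sin c\,(\sigma_0-\sigma_2) = 0,
% i.e., by the tangent addition formula,
\arctan\lambda_1+\arctan\lambda_2=c
\quad\Longleftrightarrow\quad
\cos c\,(\lambda_1+\lambda_2)=\sin c\,(1-\lambda_1\lambda_2).
```

In particular, $c=0$ reduces (\ref{s1}) to the Laplace equation $\Delta u=0$, while $c=\frac{\pi}{2}$ reduces it to the Monge-Amp\`ere equation $\det D^2u=\lambda_1\lambda_2=1$.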
The Lagrangian graph $(x,Du(x)) \subset \mathbb {R}^n \times \mathbb {R}^n$ is called special when the argument of the complex number $(1+i\lambda_1)...(1+i\lambda_n)$ or the phase $\psi$ is constant, and it is special if and only if $(x,Du(x))$ is a (volume minimizing) minimal surface in $(\mathbb {R}^n \times \mathbb {R}^n,dx^2+dy^2)$ \cite{HL}. A dual form of (\ref{s1}) is the Monge-Amp\`ere equation \begin{equation*} \sum_{i=1}^n \ln\lambda_i=c. \end{equation*} This is the potential equation for special Lagrangian submanifolds in $(\mathbb {R}^n \times \mathbb {R}^n, dxdy)$ as interpreted in \cite{Hi}. The gradient graph $(x,Du(x))$ is volume maximizing in this pseudo-Euclidean space, as shown by Warren \cite{W}. In the 1980s, Mealy \cite{Me} showed that an equivalent algebraic form of the above equation is the potential equation for his volume maximizing special Lagrangian submanifolds in $(\mathbb {R}^n \times \mathbb {R}^n, dx^2-dy^2)$. A key prerequisite for the smooth solvability of the Dirichlet problem for fully nonlinear, elliptic equations is the concavity of the operator on the space of symmetric matrices. The arctangent operator or the logarithmic operator is concave if $u$ is convex, or if the Hessian of $u$ has a lower bound $\lambda\geq 0$. Certain concavity properties of the arctangent operator are still preserved for saddle $u$. The concavity of the arctangent operator in (\ref{lab}) depends on the range of the Lagrangian phase. The phase $(n-2)\frac{\pi}{2}$ is called critical because the level set $\{ \lambda \in \mathbb{R}^n \vert \lambda$ satisfying $ (\ref{lab})\}$ is convex only when $|\psi|\geq (n-2)\frac{\pi}{2}$ \cite[Lemma 2.2]{YY}. The concavity of the level set is evident for $|\psi|\geq (n-1)\frac{\pi}{2}$ since that implies $\lambda>0$ and then $F$ is concave. For a supercritical phase $|\psi|\geq(n-2)\frac{\pi}{2}+\delta$, the operator $F$ can be extended to a concave operator \cite{CPX, CW}.
The Dirichlet problem for fully nonlinear, elliptic equations of the form $ F(\lambda[D^2u])=\psi(x)$ was studied by Caffarelli-Nirenberg-Spruck in \cite{CNS}, where they proved the existence of classical solutions under various hypotheses on the function $F$ and the domain. Their results extended the work of Krylov \cite{kry1}, Ivo\v{c}kina \cite{Ivo1}, and their previous work \cite{CNS1} on equations of Monge-Ampère type. For the Monge-Ampère equation, continuous boundary data leads to only Lipschitz continuous solutions; Pogorelov \cite{P2} constructed his famous counterexamples for the three dimensional Monge-Ampère equation $\sigma_3(D^2u) = \det(D^2u) = 1$, which also serve as counterexamples for cubic and higher order symmetric $\sigma_k$ equations. In \cite{Tru}, Trudinger proved existence and a priori estimates of smooth solutions of fully nonlinear equations of Hessian type.
In \cite{NT}, Ivo\v{c}kina-Trudinger-Wang studied the Dirichlet problem for a class of fully nonlinear, degenerate elliptic equations which depend only on the eigenvalues of the Hessian matrix. In \cite{HL0}, Harvey-Lawson studied the Dirichlet problem for fully nonlinear, degenerate elliptic equations of the form $F(D^2u)=0$ on a smoothly bounded domain in $\mathbb{R}^n$. Interior regularity for viscosity solutions of (\ref{s1}) with critical and supercritical constant phase $ |\psi|\geq(n-2)\frac{\pi}{2}$ was shown by Warren-Yuan \cite{WY} and Wang-Yuan \cite{WaY}. For a subcritical phase $ |\psi|<(n-2)\frac{\pi}{2}$, singular solutions of (\ref{s1}) were constructed by Nadirashvili-Vl\u{a}du\c{t} \cite{NV} and Wang-Yuan \cite{WangY}. The existence and uniqueness of continuous viscosity solutions to the Dirichlet problem for (\ref{s1}) with continuous boundary data was shown by Yuan \cite{Ynotes}. In \cite{BrW}, Brendle-Warren studied a second boundary value problem for the special Lagrangian equation. In \cite{CPX}, Collins-Picard-Wu solved the Dirichlet problem (\ref{lab}) on a compact domain with $C^4$ boundary value under the assumption of the existence of a subsolution and a supercritical phase restriction. In \cite{RB}, Dinew-Do-T{\^o} showed the existence and uniqueness of a $C^0$ solution to (\ref{lab}) on a bounded $C^2$ domain with $C^0$ boundary value under the assumption of the existence of a subsolution and a supercritical phase restriction. In Theorem \ref{main}, we assume $\psi\geq (n-2)\frac{\pi}{2}+\delta$ since by symmetry $\psi\leq- (n-2)\frac{\pi}{2}-\delta$ can be treated similarly. The proof of Theorem \ref{main} follows from a standard continuity method and a uniform approximation of the $C^0$ boundary value. The major difficulty in proving uniform $C^{2,\alpha}$ estimates up to the boundary, which is necessary for the continuity method, is in estimating the double normal derivatives at the boundary without the aid of a given subsolution. 
We get around this by constructing a lower linear barrier function for $u_n$ by applying Trudinger's technique and a change of basis argument. Once we derive uniform $C^{2,\alpha}$ estimates up to the boundary, we use the a priori interior Hessian estimates proved in \cite{AB} to approximate the $C^0$ boundary value, from which Theorem \ref{main} follows. In Theorem \ref{2.20}, we consider all values of the constant Lagrangian phase, which includes subcritical values. The main difficulty here is the lack of uniform ellipticity and concavity. The proof follows via Perron's method using an idea that was introduced by Ishii \cite{Ish2}, where we apply comparison principles for strictly elliptic\footnote{$F(D^2u)=\psi$ is strictly elliptic in the sense that $(F_{u_{ij}}(D^2u))>0$}, non-concave, fully nonlinear equations \cite{yy2004}. In \cite{HL0}, Harvey-Lawson established the existence and uniqueness of continuous solutions of fully nonlinear, degenerate elliptic equations of the form $F(D^2u)=0$ on a smoothly bounded domain in $\mathbb{R}^n$ under an explicit geometric $F$-convexity assumption on the boundary of the domain. The key ingredients of their proof were the use of subaffine functions and Dirichlet duality. As an application, the continuous solvability of the constant phase equation (\ref{s1}) is obtained. In contrast, in Theorem \ref{2.20} of this paper, we focus only on the continuous solvability of the Dirichlet problem for equation (\ref{s1}) and provide a short proof that relies solely on a certain comparison principle. \begin{remark}For Theorem \ref{main}, an assumption weaker than $C^{1}$ on $\psi$ will lead to counterexamples with continuous boundary data. For example, in two dimensions, we consider a boundary value problem of (\ref{lab}) on the unit ball $B_1(0)$ where the phase is in $C^{\alpha}$ with $\alpha\in (0,1)$: $\psi(x)=\frac{\pi}{2}-\arctan (\alpha^{-1}|x|^{1-\alpha})$ and $u(x)=\int_{0}^{|x|}t^{\alpha}dt$ on $\partial B_1$.
This problem admits a non-$C^2$ viscosity solution $u$ with gradient $D u=|x|^{\alpha-1}x$, thereby yielding a contradiction. If the Lagrangian phase is subcritical, i.e. $ |\psi|<(n-2)\frac{\pi}{2}$, then even for the constant phase equation (\ref{s1}) with analytic boundary data, $C^0$ viscosity solutions may only be $C^{1,\varepsilon_{0}}$ but no more, as shown by Wang-Yuan \cite{WangY}.\\ However, the existence of $C^{2,\alpha}$ solutions to (\ref{lab}) with critical and supercritical phase, i.e. $|\psi|\geq (n-2)\frac{\pi}{2}$, where $\psi\in C^{1,\varepsilon_{0}}$, or even $|\psi|\geq (n-2)\frac{\pi}{2}$ where $\psi\in C^{1,1}$, remains an open question. As of now, it is also unknown if $C^0$ viscosity solutions of (\ref{s1}) are Lipschitz for subcritical phases. \end{remark} \begin{remark} In Theorem \ref{2.20}, if we replace the constant phase with any continuous function lying in the subcritical or critical range, then the existence and uniqueness of $C^{0}$ viscosity solutions of (\ref{lab}) remain open questions. This is due to the lack of a suitable comparison principle for strictly elliptic, non-concave, fully nonlinear equations with a variable right hand side. In \cite{hl1}, Harvey-Lawson introduced a condition called ``tameness'' on the operator $F$, which is a little stronger than strict ellipticity and allows one to prove comparison. In \cite{hl2}, they further proved that for the Lagrangian mean curvature equation, one can only show tamability in the supercritical phase interval. Recently in \cite{kev}, Cirant-Payne established comparison for this equation when the range of the phase is restricted to the intervals $((n-2k)\frac{\pi}{2},(n-2(k-1))\frac{\pi}{2})$ where $1\leq k\leq n$. This in turn solves the Dirichlet problem on these intervals, as shown in \cite[Theorem 6.2,C]{hl2}. \\ For $\sigma_k$ equations with a variable right hand side, results analogous to Theorem \ref{2.20} exist.
This is due to the fact that, unlike for the Lagrangian mean curvature equation (\ref{lab}), the determinant of the linearized operator has a positive lower bound. \end{remark} This article is divided into the following sections: in section two, we state some well known algebraic and trigonometric inequalities satisfied by solutions of (\ref{lab}). In section three, we prove $C^{2,\alpha}$ estimates up to the boundary assuming $C^4$ boundary data. In section four, we first solve the Dirichlet problem with $C^{4}$ boundary data using the method of continuity and then combine it with the Hessian estimates proved in \cite{AB} to solve the Dirichlet problem with continuous boundary data. In section five, we prove Theorem \ref{2.20}. In section six (appendix), we state a well known linear algebra lemma that we use in estimating the Hessian of $u$ on the boundary, and we provide the proof of a certain comparison principle that is essential for the proof of Theorem \ref{2.20}.\\ \textbf{Acknowledgments.} The author is grateful to Y.~Yuan for his guidance, support, and several useful discussions. The author is grateful to R.~Harvey and B.~Lawson for their insightful feedback on the comparison principle. The author thanks R.~Shankar and M.~Warren for helpful comments and suggestions. \section{Preliminaries} The induced Riemannian metric on the Lagrangian submanifold $\{(x,Du(x))|x\in \Omega\}\subset \mathbb {R}^n \times \mathbb {R}^n$ is given by \[g=I_n+(D^2u)^2. \] On taking the gradient of both sides of the Lagrangian mean curvature equation (\ref{lab}), we get \begin{equation} \sum_{a,b=1}^{n}g^{ab}u_{jab}=\psi_j \label{linearize} \end{equation} where $g^{ab}$ is the inverse of the induced Riemannian metric $g$.
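For completeness, (\ref{linearize}) is the chain rule applied to (\ref{lab}): differentiating in the direction $e_j$,

```latex
\psi_j=\partial_{x_j}F(D^2u)=\sum_{a,b=1}^{n}\frac{\partial F}{\partial u_{ab}}\,u_{jab},
\qquad
\frac{\partial F}{\partial u_{ab}}=\Big[\big(I_n+(D^2u)^2\big)^{-1}\Big]_{ab}=g^{ab},
```

where the middle identity is checked at a point where $D^2u=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$: there $\partial F/\partial\lambda_a=1/(1+\lambda_a^2)$, while $g=I_n+(D^2u)^2=\operatorname{diag}(1+\lambda_1^2,\dots,1+\lambda_n^2)$.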
From \cite[(2.19)]{HL} we see that the mean curvature vector $\vec{H}$ of this Lagrangian submanifold $\{(x,Du(x))|x\in\Omega\}$ is given by $\vec{H}=J\nabla_g\psi $ where $\nabla_g$ is the gradient operator for the metric $g$ and $J$ is the complex structure, or the $\frac{\pi}{2}$ rotation matrix in $\mathbb{R}^n\times \mathbb{R}^n$. Next we state the following Lemma. \begin{lemma}\label{y1} Suppose that the ordered real numbers $\lambda_{1}\geq \lambda_{2}\geq...\geq \lambda_{n}$ satisfy (\ref{lab}) with $\psi\geq (n-2)\frac{\pi}{2}$. Then we have \begin{enumerate} \item $\lambda_{1}\geq \lambda_{2}\geq...\geq \lambda_{n-1}>0, \lambda_{n-1}\geq |\lambda_{n}|$, \item $\lambda_{1}+(n-1)\lambda_{n}\geq 0$, \item $\sigma_{k}(\lambda_{1},...,\lambda_{n})\geq 0$ for all $1\leq k\leq n-1$ and $n\geq 2$, \item if $\psi\geq (n-2)\frac{\pi}{2}+\delta$, then $D^2u\geq -\cot \delta I_n$. \end{enumerate} \end{lemma} \begin{proof} Properties (1), (2), and (3) follow from \cite[Lemma 2.1]{WaY}. Property (4) follows from \cite[Pg 1356]{YY}. \end{proof} \section{$C^{2,\alpha}$ estimate up to the boundary} We first prove the following $C^{2,\alpha}$ estimate up to the boundary of $\Omega$. \begin{theorem} \label{2} Let $\phi\in C^{4}(\overline \Omega)$ and $\psi:\overline \Omega\rightarrow [(n-2)\frac{\pi}{2}+\delta, n\frac{\pi}{2})$ be in $C^{2,\alpha}(\overline \Omega)$, where $\Omega$ is a uniformly convex domain in $\mathbb{R}^{n}$ and $\delta>0$. Then there exists a universal constant $\alpha\in(0,1)$ such that if $u\in C^{4,\alpha}(\overline \Omega)$ is a solution of (\ref{lab}), then \begin{equation}||u||_{C^{2,\alpha}(\overline \Omega)}\leq C(||\psi||_{C^{1,1}(\overline \Omega)}, ||\phi||_{C^{4}(\overline \Omega)},n, \delta,\overline{\Omega}). \label{bdy} \end{equation} \end{theorem} \begin{proof} We first make the following observation, which will be used for steps 1,2,3.2, and 3.3 below.\\ We pick an arbitrary boundary point $x_{0}\in\partial\Omega$. 
By a rotation and translation we choose a co-ordinate system such that the chosen boundary point is the origin and $\Omega$ lies above the hyperplane $\{x_{n}=0\}$ with $e_n$ as the inner unit normal at $0$. For such a domain, we can write \begin{equation} \partial\Omega=\{(x',x_n)|x_n=h(x')=\frac{1}{2}(k_1x_1^2+...+k_{n-1}x_{n-1}^2)+o(|x'|^2)\}. \label{lala} \end{equation} At $0\in\partial \Omega$ the boundary value satisfies \begin{align} \phi(x',x_n)=\phi(x',h(x'))=\phi(0)+\phi_{x'}(0)x'\nonumber\\ +\phi_{x_n}(0)h(x')+\phi_{x'x'}(0)x'x'+\phi_{x_nx_n}(0)h(x')h(x')+o(|x'|^2+h^2(x'))\nonumber\\ =Q(x)+o(1)|x'|^2 \nonumber \end{align} where $Q(x)$ is a quadratic. So there exists $C_0=C_0(||\phi||_{C^2(\partial{\Omega})}, n,\kappa)$ such that \begin{align} L^-=-C_0x_n\leq \phi\leq C_0x_n=L^+ \text{ on $\partial\Omega$}. \label{lala1} \end{align}\\ We now prove estimate (\ref{bdy}) in the following four steps. We will estimate all the boundary derivatives of $u$ at the origin. \begin{itemize} \item[Step 1.] Bound for $||u||_{L^{\infty}(\overline{\Omega})}$. \begin{claim} We show the following \begin{equation} ||u||_{L^{\infty}(\overline{\Omega})}\leq C(||\phi||_{C^{2}(\overline \Omega)},n, |\partial \Omega|_{C^2}) .\label{est1} \end{equation} \end{claim} \begin{proof}The function $\psi:\overline \Omega\rightarrow [(n-2)\frac{\pi}{2}+\delta, n\frac{\pi}{2})$ is in $C^{1,1}(\overline \Omega)$, so there exists $\varepsilon>0$ such that $\psi<n\frac{\pi}{2}-\varepsilon$. Fixing this $\varepsilon$ we define $\underline{\psi}=(n-2)\frac{\pi}{2}+\delta$ and $\overline{\psi}=n\frac{\pi}{2}-\varepsilon$.\\ Recalling (\ref{lala1}) we find constants $c_0$ and $C'_0$ depending on $C_0$ above such that on $\partial \Omega$, we have \begin{equation*} -c_0|x|^2+ \frac{1}{2}|x|^2\tan\frac{\overline{\psi}}{n}=-C'_0|x|^2=-C_0x_n\leq \phi \leq C'_0|x|^2+ \frac{1}{2}|x|^2\tan\frac{\underline{\psi}}{n}. 
\end{equation*} Using relation (\ref{lala}) we define \begin{align} -Cx_n+ \frac{1}{2}|x|^2\tan\frac{\overline{\psi}}{n} =B^-\label{bn}\\ Cx_n+ \frac{1}{2}|x|^2\tan\frac{\underline{\psi}}{n} =B^+.\label{bbnn} \end{align} where $C=C(||\phi||_{C^2(\partial{\Omega})}, n,\kappa_i)$. We observe that \begin{align} F(D^{2}B^-)\geq F(D^{2}u) \geq F(D^{2}B^+) \text{ in $\Omega$}\nonumber\\ B^-\leq u\leq B^+ \text{ on $\partial\Omega$ with equality holding at $0$.} \label{lebu} \end{align} Using comparison principles we see that (\ref{est1}) holds. \end{proof} \item[Step 2.] Bound for $||Du||_{L^{\infty}(\overline{\Omega})}$. \begin{claim} We show the following \begin{equation} ||Du||_{L^{\infty}(\overline{\Omega})}\leq C(||\psi||_{C^{1}(\overline \Omega)},||\phi||_{C^{2}(\overline \Omega)},n, \delta,|\partial \Omega|_{C^2}). \label{1.1} \end{equation} \end{claim} \begin{proof} On linearizing (\ref{lab}), we get (\ref{linearize}) and since $\psi\in C^{1,1}(\overline{\Omega})$, we see that $|g^{ij}\partial_{ij}u_e|\leq C(|\psi|_{C^1(\overline{\Omega})})$. From Lemma \ref{y1}, we see that $u$ is semi-convex, i.e. $D^2u\geq -\cot\delta I_n$. We modify $u$ to the convex function $u+\cot\delta \frac{|x|^2}{2}$, whose gradient $Du(x)+\cot\delta\, x$ attains its maximal norm on the boundary of $\Omega$. So we have \begin{equation} \sup_{\overline{\Omega}}|Du(x)|\leq\sup_{\partial \Omega}|Du(x)|+C(\delta,\Omega). \label{1h} \end{equation} For $1\leq i\leq n-1$, $u_{i}=\phi_{i}$, so we only need to estimate $u_{n}(0)$. Recalling (\ref{lebu}), we again use comparison principles and on taking the normal derivative at $0$, we get \[|u_{n}(0)|\leq C(||\psi||_{C^{1}(\overline \Omega)},||\phi||_{C^{2}(\overline \Omega)},n,|\partial \Omega|_{C^2}). \] Combining (\ref{1h}) with the above we get (\ref{1.1}). \end{proof} \item[Step 3.] Bound for $||D^{2}u||_{L^{\infty}(\overline \Omega)}$.
\begin{claim} We prove the following \begin{equation}||D^{2}u||_{L^{\infty}(\overline \Omega)}\leq C(||\psi||_{C^{1,1}(\overline \Omega)}, ||\phi||_{C^{4}(\overline \Omega)},n,\delta,|\partial \Omega|_{C^4}) .\label{3} \end{equation} \end{claim} The proof of the above claim follows from the following steps. \item[Step 3.1] We first prove that the Hessian attains its supremum on the boundary of $\Omega$. We show that \begin{equation} ||D^{2}u||_{L^{\infty}(\overline \Omega)}\leq C(||\psi||_{C^{1,1}(\overline \Omega)}, ||D^{2}u||_{L^{\infty}(\partial \Omega)},\delta). \label{4} \end{equation} We differentiate (\ref{lab}) twice and since the phase is supercritical we modify the operator $F$ to a concave operator $\Tilde{F}$ as shown in \cite[pg 347]{CW}. We see that \begin{align*}\Tilde{F}^{ij}\partial_{ij}u_{ee}+\Tilde{F}^{ij,kl}\partial_{ij}u_{e}\partial_{kl}u_{e}=\psi_{ee}\\ \Tilde{F}^{ij}\partial_{ij}\Delta u=\Delta \psi -\sum_{e}\Tilde{F}^{ij,kl}\partial_{ij}u_{e}\partial_{kl}u_{e}\geq \Delta \psi . \end{align*} The last inequality follows from the concavity of the operator. Let $p_0$ be an interior point of $\Omega$. By an orthogonal transformation, we assume $D^2u$ to be diagonalized at $p_0$. We observe that \begin{align*} g^{ij}\partial_{ij}(\Delta u+\frac{C_{1}}{2}|x|^{2})(p_0)\geq \\ -C(||\psi||_{C^{1,1}(\Omega)})+C_1\sum_{i=1}^n\frac{1}{1+\lambda_i^2}>0 \end{align*} where $C_1$ is chosen large enough using the semi-convexity of $u$. This shows that $D^2u$ attains its supremum on the boundary. Next, we estimate the Hessian on the boundary in the following steps. \item[Step 3.2] We estimate the double tangent derivative $u_{TT}$ on the boundary.\\ That is we estimate $u_{ik}(0)$ on $\partial \Omega$ for $1\leq i,k\leq n-1$. There exists a constant $a>0$ for which $(x_1,...,x_{n}-a)$ is orthogonal to $\partial \Omega$ near $0$. Consider the following tangential derivative near $0\in \partial \Omega$ \[\partial_{T_{k} }u(x)=(a-x_n)u_k(x)+x_ku_n(x). 
\] For $1\leq k\leq n-1$ we have $\partial_ {T_{k}}u|_{\partial \Omega}=\partial _{T_{k}}\phi|_{\partial \Omega}$. So for $1\leq k,i\leq n-1$, we have \begin{align*} \partial_ {T_{k}}u(0)=au_k(0)\\ \partial_ {T_{i}} \partial _{T_{k}}u(0)=au_{ki}(0)+\delta_{ki} u_n(0)\\ \implies \partial_{T_{i}} \partial_ {T_{k}}\phi(0)=au_{ki}(0)+\delta_{ki} u_n(0). \end{align*} Using the estimate in step 2, we have \begin{equation*}|u_{ki}(0)|\leq C(||\psi||_{C^{1}(\overline \Omega)}, ||\phi||_{C^{2}(\overline \Omega)},n,\delta,\Omega). \end{equation*} \item[Step 3.3] We estimate the mixed tangent normal derivative $u_{TN}$ on the boundary.\\ That is we estimate $u_{in}(0)$ on $\partial \Omega$ for $1\leq i\leq n-1$. Let $\tau$ be a vector field generated by rotation such that $\tau(0)=e_i$ for $i<n.$ Note that $u_\tau=\phi_\tau$ on $\partial \Omega$ and $g^{ij}\partial_{ij}u_\tau=\psi_\tau$ in $\Omega$.\\ Applying the argument in (\ref{lala1}) we get the following on $\partial \Omega$ \begin{align} |\phi_{\tau}|\leq C(||\phi_\tau||_{C^{2}(\overline{\Omega})},||\psi||_{C^{1}(\overline \Omega)},n,|\partial \Omega|_{C^2})x_n \leq C|x|^{2}\label{nn}. \end{align} Using the above we choose a constant $c>0$ depending on $C$ above such that \begin{equation*} -c|x|^2+ \frac{1}{2}|x|^2\tan\frac{\overline{\psi}}{n}=-C|x|^2\leq \phi \leq Cx_n+ \frac{1}{2}|x|^2\tan\frac{\underline{\psi}}{n} \text{ on $\partial\Omega$}. \end{equation*} We define $u_0$ to be the subsolution \[u_0=-C'x_n+ \frac{1}{2}|x|^2\tan\frac{\overline{\psi}}{n}\] where $C'=C'(||\phi||_{C^{3}(\overline{\Omega})},||\psi||_{C^{1}(\overline \Omega)},n,|\partial \Omega|_{C^2})$. Let $w=u-u_0$. Since the phase lies in the supercritical range, we again extend the operator $F$ to a concave operator. 
Using concavity we get the following for some $\varepsilon_0>0$ on a small ball of radius $r$ around the origin \begin{align} g^{ij}w_{ij}\leq-\varepsilon_{0} \text{ inside } \Omega \cap B_{r}(0)\nonumber \\ w\geq 0 \text{ on } \partial(\Omega\cap B_{r}(0))\nonumber \\ w(0)=0. \label{3.7} \end{align} We now choose $\alpha,\beta$ large such that \begin{align} g^{ij}\partial_{ij}(\alpha w+\beta |x|^2 \pm u_\tau)\leq 0 \text{ in } \Omega \cap B_{r}(0) \label{i}\\ \alpha w+\beta|x|^2 \pm u_\tau\geq 0 \text{ on } \partial(\Omega\cap B_{r}(0)).\nonumber \end{align} Since $w\geq 0$ on $\partial(\Omega\cap B_{r}(0))$ we only need to choose $\beta$ large such that \[\beta|x|^2\pm u_\tau\geq0 \text{ on } \partial(\Omega\cap B_{r}(0)).\] We observe that on $\Omega\cap \partial B_{r}(0)$, $\beta \geq \frac{C}{r^{2}}$ where $C= C(||\psi||_{C^{1}(\overline \Omega)}, ||\phi||_{C^{2}(\overline \Omega)},\delta,n,|\partial \Omega|_{C^2}) $ is from the gradient estimate in (\ref{1.1}). On applying (\ref{nn}) we get the required value of $\beta$ on $\partial \Omega\cap B_{r}(0)$. Fixing the larger of the two values to be the constant $\beta$, we now choose $\alpha$ such that (\ref{i}) holds good. We have \begin{align*} g^{ij}\partial_{ij}(\alpha w+\beta|x|^2 \pm u_\tau)\leq -\alpha\varepsilon_{0} +C \end{align*} where $C=C(\beta,|\psi|_{C^{1}(\overline{\Omega})})$. We choose $\alpha$ large such that $-\alpha\varepsilon_{0} +C\leq 0$. Observe that $(\alpha w+\beta|x|^2\pm u_\tau)(0)=0$. Using Hopf's Lemma we see that \begin{align*} \partial_n(\alpha w+\beta|x|^2\pm u_\tau)(0)\geq 0\\ \implies \pm u_{\tau n}(0)\geq -\alpha w_n(0)-\beta\,\partial_n(|x|^2)(0)=-\alpha w_n(0)\\ \implies |u_{\tau n}(0)|\leq |\alpha w_n(0)|\leq C. \end{align*} Therefore, we have \begin{equation*}|u_{in}(0)|\leq C(||\psi||_{C^{1,1}(\overline \Omega)}, ||\phi||_{C^{3}(\overline \Omega)},n,\delta,|\partial \Omega|_{C^2}).
\end{equation*} \item[Step 3.4] Lastly, we estimate the double normal $u_{NN}$ on the boundary.\\ Note that by Lemma \ref{y1}, $D^2u$ is bounded below, so we only need to prove an upper bound for $u_{NN}$, which we find using an idea of Trudinger \cite{Tru}. \\Suppose that $\lambda'$ denotes the eigenvalues of the $(n-1)\times (n-1)$ matrix $u_{TT}$ where the tangent vector $T$ acts as \[u_{TT}=\frac{1}{r^2}u_{\theta\theta}+\frac{1}{r}u_{r}, \] when the boundary is a sphere. We denote \[D^2u = \begin{bmatrix} u_{TT} & u_{T\gamma} \\u_{\gamma T} & u_{\gamma \gamma}\end{bmatrix}=\begin{bmatrix} \lambda' & u_{T\gamma} \\u_{\gamma T} & u_{\gamma\gamma}\end{bmatrix}.\] Let $x'_{0}$ be the minimal point of $\Tilde{\Theta}(\lambda')\vert_{\partial \Omega}$ where \[\Tilde{\Theta}(\lambda')=\sum_{i=1}^{n-1}\arctan \lambda_i'-\psi \] and we denote $\lambda'_{0}=\lambda'(x'_0).$ Our goal is to find a lower linear barrier function for $u_\gamma$ at $x'_0$ followed by the same for $u_n$ at $x'_0$ with the help of a change of basis technique. Using this we will find an upper bound of $u_{nn}(x_0')$ followed by an upper bound of $u_{nn}(x)$ for all $x\in\partial\Omega$.\\ Now we estimate the lower bound of $tr(D^2u)|_{T}=\sum_{i=1}^{n-1}\lambda'_{i}$. Observe that $\Tilde{\Theta}(\lambda')\geq\Tilde{\Theta}(\lambda'_0)>\psi-\frac{\pi}{2}>(n-3)\frac{\pi}{2}$. So the level set $\{\lambda'\in \mathbb{R}^{n-1}| \Tilde{\Theta}(\lambda')=\Tilde{\Theta}(\lambda'_0)\}$ is convex. Heuristically, this property means the following: \begin{align*} \langle D\Tilde{\Theta}(\lambda'_0),\lambda' \rangle \geq \langle D\Tilde{\Theta}(\lambda'_0),\lambda'_0 \rangle=K_0, \text{ with equality at } x'_0, \end{align*} where $K_0$ is a constant depending on $|\psi|_{C^{1}(\Omega)}, |\phi|_{C^2(\partial \Omega)}$, and $\delta$.
Denoting \[\Bigg[\frac{\partial\Tilde{\Theta}(D^2u(x_0))|_{T}}{\partial D^2u|_T} \Bigg]=A_{ij}(\lambda'_0)\] where $1\leq i,j\leq n-1$, we see that \begin{align*} tr (A_{ij}(\lambda'_0)) (D^2u(x)|_T)\geq K_0 \text{ with equality holding at } x'_0. \end{align*} Denoting the second fundamental form by $II$, we observe that \begin{align} D^2(u-\phi)|_T=(u-\phi)_{\gamma} II|_{\partial \Omega}\nonumber\\ tr[A_{ij}(\lambda'_0)(D^2\phi|_T-\phi_\gamma II|_{\partial \Omega}+u_\gamma II|_{\partial \Omega})] \geq K_0 \text{ with equality holding at } x'_0. \nonumber \end{align} This shows \begin{align} u_\gamma\geq \frac{1}{\sum_{i=1}^{n-1}\Tilde{\Theta}_i(\lambda'_0)\kappa_i(x')}[K_0-tr(A_{ij}(\lambda'_0)(D^2\phi|_T-\phi_{\gamma}II|_{\partial\Omega}))]\text{ with equality holding at } x'_0\label{above}\\ \implies u_\gamma\geq C(|\phi|_{C^4(\overline{\Omega})}, |\partial \Omega|_{C^4}, |\psi|_{C^{1}(\Omega)},\delta) \text{ with equality holding at } x'_0\nonumber \end{align} where the last inequality follows from the observation that for all the terms in the LHS of (\ref{above}) one can find a lower linear barrier function whose Lipschitz norm depends on the $C^{3,1}$ norm of $\phi$ and the $C^1$ norm of $\psi$. Next, we consider a unit local basis at $x_0'$ denoted by $\mathcal{B}=\{e_n, e_{T_{\alpha}}| 1 \leq \alpha\leq n-1 \}$ where $e_n$ is the outward unit normal and $e_{T_\alpha}$ represents vectors in the tangential direction at $x_0'$. By a change of basis we write the unit radial direction vector $e_\gamma$ as $e_\gamma=ae_n+be_{T_{\alpha}}.$ A simple computation shows that\begin{align*} e_\gamma=\frac{\langle e_\gamma,e_n\rangle}{1-\langle e_n, e_{T_{\alpha}}\rangle ^2}e_n-\frac{\langle e_\gamma,e_n\rangle \langle e_n, e_{T_{\alpha}}\rangle}{1-\langle e_n, e_{T_{\alpha}}\rangle ^2}e_{T_{\alpha}} \end{align*} from which one can easily find a lower linear barrier for $u_n$ at $x_0'$. 
So far we have \begin{align} u_n\geq L_1^{-}(x',x_n)\text{ on }\partial \Omega \text{ with equality holding at $x'_0$} \label{ma} \end{align} where \begin{equation*} L_1^{-}(x',x_n)=-C(|\phi|_{C^4}, |\partial \Omega|_{C^4}, |\psi|_{C^{1}(\Omega)},\delta)x_n\geq -C|x|^2 . \end{equation*} Now we choose coordinates such that $x'_{0}$ is the origin and the $(n-1)\times (n-1)$ matrix $u_{TT}(0)$ is diagonalized. \begin{claim}\label{3.0} We show that \[u_{nn}(0)\leq C\] where $C=C(||\psi||_{C^{1,1}(\overline \Omega)}, ||\phi||_{C^{4}(\overline \Omega)},n,\delta, |\partial \Omega|_{C^4} )$.\\ Note that unlike before $e_n$ is the outward unit normal now. \end{claim} \begin{proof} Note that $u_n$ is a solution of the linearized equation $g^{ij}D_{ij}u_{n}=\psi_n$, so we get \begin{equation}|g^{ij}\partial_{ij}u_n|\leq C(||\psi||_{C^{1}(\Omega)}). \label{h} \end{equation} Now we repeat the process in step 3.3. We define $w=u-B^{-}$ where $B^{-}$ is the subsolution defined in (\ref{bn}) and we see that $w$ satisfies condition (\ref{3.7}). We choose $\alpha$ and $\beta$ large such that \begin{align} g^{ij}\partial_{ij}(\alpha w+\beta|x|^2 + u_n)\leq 0 \text{ in }\Omega \cap B_{r}(0) \label{i1}\\ \alpha w+\beta|x|^2+ u_n\geq 0 \text{ on }\partial(\Omega\cap B_{r}(0)).\nonumber \end{align} As $w\geq 0$ on $\partial (B_r(0)\cap \Omega)$ we first choose $\beta$. On $\partial B_{r}(0)\cap \Omega$, we have $\beta\geq C/r^{2} $ where $C= C(||\psi||_{C^{1}(\overline \Omega)}, \delta, ||\phi||_{C^{2}(\overline \Omega)},n,|\partial \Omega|_{C^2}) $ is the constant from the estimates in (\ref{1.1}) and (\ref{est1}). On $\partial \Omega\cap B_{r}(0)$, we find $\beta$ using (\ref{ma}). Choosing the larger of the two values we get the required value of $\beta$. Fixing this $\beta$ we choose $\alpha$ such that (\ref{i1}) holds. Using the constant $C$ from (\ref{h}), we choose $\alpha$ large such that $-\alpha\varepsilon_{0}+C<0$ where $C=C(\beta,||\psi||_{C^{1}(\overline{\Omega})})$.
Now since $(\alpha w+\beta|x|^2+u_n)(0)=0$, using Hopf's Lemma we get \begin{align*} \partial_n(\alpha w+\beta|x|^2+u_n)(0)\leq 0\\ \implies u_{nn}(0)\leq C(||\psi||_{C^{1,1}(\overline \Omega)}, ||\phi||_{C^{4}(\overline \Omega)},n,\delta, |\partial \Omega|_{C^4} ). \end{align*} \end{proof} Next we prove the following claim: \begin{claim} If $u_{nn}(0)$ is bounded above, then $u_{nn}(x)$ will be bounded above for all $x\in \partial \Omega$. \end{claim} \begin{proof} Suppose that for some $x_p\in \partial \Omega$, $u_{nn}(x_p)\geq K$ where $K$ is a large constant to be chosen shortly. From Claim \ref{3.0}, we see that at $0$, \begin{align*} F(D^2u+N e_n \otimes e_n)-F(D^2u)=\delta_0 (||\phi||_{C^4(\partial \Omega)},||\psi||_{C^{1,1}(\overline \Omega)})>0\\ \implies \lim_{a\rightarrow\infty} F(D^2u+a e_n \otimes e_n)\geq F(D^2u+N e_n \otimes e_n) \geq F(D^2u)+\delta_0 =\psi +\delta_0. \end{align*} From Lemma \ref{3.9}, we see that \[ \sum_{i=1}^{n-1} \arctan \lambda'_i(x_{p})\geq \psi+\delta_0-\frac{\pi}{2} \] and \begin{align*} \psi= F(D^2u)=\sum_{i=1}^{n-1}\arctan\lambda'_i+o(1)+\arctan(u_{nn}+O(1))\\ \geq \psi +\delta_0-\frac{\pi}{2}-\frac{\delta_0}{2}+\arctan(u_{nn}+O(1)). \end{align*} Now if we choose $K$ large enough such that \[ u_{nn}(x_p)>\tan(\frac{\pi}{2}-\frac{\delta_0}{2})-O(1) \] we arrive at a contradiction. Therefore, choosing $K=\tan(\frac{\pi}{2}-\frac{\delta_0}{2})-O(1)=C(||\psi||_{C^{1,1}(\overline \Omega)}, ||\phi||_{C^{4}(\overline \Omega)},n,\delta, |\partial \Omega|_{C^4} )$, we see that $u_{nn}(x)\leq K$ for all $x\in \partial \Omega$. Combining all the estimates in step 3 above we obtain (\ref{3}). \end{proof} \item[Step 4.] Bound for $||D^2u||_{C^{\alpha}(\overline{\Omega})}$.\\ This follows from the interior $C^{2,\alpha}$ estimates by Evans-Krylov \cite{EK,Kr} and the boundary $C^{2,\alpha}$ estimates by Krylov \cite[Theorem 4.1]{Kr}. \\ Therefore, combining all the four steps above we obtain estimate (\ref{bdy}).
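Lemma \ref{3.9} from the appendix, which supplies the eigenvalue asymptotics used in Step 3.4 above, can be sanity-checked numerically in the $2\times 2$ case, where the eigenvalues are explicit. The following standalone script is an illustrative check only, not part of the proof; the sample values of $\lambda'_1$ and $a_1$ are arbitrary:

```python
import math

def eigs_2x2(lam, a1, a):
    """Eigenvalues of the symmetric matrix [[lam, a1], [a1, a]],
    returned as (small, large)."""
    tr = lam + a
    det = lam * a - a1 * a1
    disc = math.sqrt(tr * tr - 4.0 * det)  # = sqrt((lam - a)^2 + 4 a1^2) >= 0
    return (tr - disc) / 2.0, (tr + disc) / 2.0

lam, a1 = 1.5, 0.7  # fixed lambda'_1 and a bounded off-diagonal entry
for a in (1e2, 1e4, 1e6):  # a -> +infinity
    small, large = eigs_2x2(lam, a1, a)
    # Lemma 3.9 with n = 2: small = lambda'_1 + o(1), large = a + O(1)
    print(f"a={a:g}: small - lam' = {small - lam:.2e}, large - a = {large - a:.2e}")
```

In this $2\times 2$ case both corrections are of size $a_1^2/a$, so the small eigenvalue converges to $\lambda'_1$ and the large one to $a$; in general the lemma only guarantees $a+O(1)$ for the large eigenvalue.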
\end{itemize} \end{proof} \section{Proof of Theorem \ref{main}} In this section we use the $C^{2,\alpha}$ estimate up to the boundary to solve the following Dirichlet problem using the method of continuity. \begin{theorem} \label{1} Suppose that $\phi\in C^{4}(\overline \Omega)$ and $\psi: \overline \Omega\rightarrow [(n-2)\frac{\pi}{2}+\delta, n\frac{\pi}{2})$ is in $C^{1,1}(\overline \Omega)$ where $\Omega$ is a uniformly convex, bounded domain in $\mathbb{R}^{n}$ and $\delta>0$. Then there exists a unique solution $u\in C^{2,\alpha}(\overline{\Omega})$ to the Dirichlet problem (\ref{lab}). \end{theorem} \begin{proof} For each $t\in[0,1]$, consider the family of equations \begin{align} \begin{cases} F(D^{2}u)=t\psi +(1-t)c_0\text{ in } \Omega\\ u=\phi \text{ on }\partial \Omega \label{eq2} \end{cases} \end{align} where $c_0=(n-2)\frac{\pi}{2}+\delta$ and $\psi\in C^{2,\alpha}(\overline{\Omega})$. Let $I=\{t\in[0,1]\vert$ there exists $u_t\in C^{4,\alpha}(\overline{\Omega})$ solving (\ref{eq2})$\}$. We know that $0\in I$ from \cite{YYY}. The fact that $I$ is open is a consequence of the implicit function theorem and the invertibility of the linearized operator (\ref{linearize}). The closedness of $I$ follows from the a priori estimates. Hence, $1\in I$. Now using a smooth approximation\footnote{When $\psi$ is in $C^{1,1}(\overline{\Omega})$ we can take a sequence of smooth functions $\psi_k$ approximating $\psi$ and a sequence of solutions $u_k$ solving (\ref{lab}) with $\psi_k$ as the right hand side. Applying the uniform $C^{2,\alpha}$ estimate and taking a limit solves the equation.} we solve (\ref{lab}) for $\psi\in C^{1,1}$. Uniqueness follows from the maximum principle for fully nonlinear equations. \end{proof} \begin{remark}There exists a unique smooth solution to the Dirichlet problem (\ref{lab}) if all data is smooth and if the phase lies in the supercritical range.
\end{remark} \begin{proof}[Proof of Theorem \ref{main}] We approximate $\phi\in C^0(\partial\Omega)$ uniformly on $\partial \Omega$ by a sequence $\{\phi_k\}_{k\geq 1}$ of $C^4$ functions and solve \begin{align*} \begin{cases} F(D^{2}u_k)=\psi \text{ in } \Omega\\ u_k=\phi_k \text{ on } \partial \Omega \end{cases} \end{align*} using Theorem \ref{1}. Applying the interior Hessian estimates proved in \cite[Theorem 1.1]{AB} and the compactness in $C^2$ of bounded sets in $C^{2,\alpha}$ along with maximum principles, we get convergence of $\{u_k\}$ to the desired solution $u\in C^{2,\alpha}$ on the interior and convergence of $\{\phi_k\}$ to the desired boundary function $\phi\in C^0$ on the boundary. \end{proof} \section{Proof of Theorem \ref{2.20}} \begin{proof} We denote upper/lower semi-continuous functions by usc/lsc. We define \begin{align*} A=\{u\in usc(\overline{\Omega})| F(D^2u)\geq \psi\text{ in } \Omega,\ u\leq \phi \text{ on }\partial \Omega\}\\ w(x)=\sup\{u(x)|u\in A\}. \end{align*} \begin{claim}\label{cll4.3} The above function $w$ is the unique continuous viscosity solution of (\ref{lab}) where $\psi$ is a constant. \end{claim} \begin{remark} The proof follows from the following four steps. It is noteworthy that the first three steps of the proof hold good for any continuous function $\psi$. The fourth step requires a certain comparison principle (see Theorem \ref{A} in the appendix), which is only available for a constant right hand side. As of now, it is unknown if such a comparison principle holds good for a continuous right hand side. In order to highlight this distinction, we present the first three steps of the proof assuming $\psi$ is any continuous function. In the final step, we assume $\psi$ to be a constant, thereby proving Theorem \ref{2.20}. \end{remark} \begin{itemize} \item[Step 1.] We define the following functions: \begin{align*} \underline{z}(x)=\overline{\lim_{y\rightarrow x}}w(y)\\ \overline{z}(x)=\lim_{\overline{y\rightarrow x}}w(y).
\end{align*} We first show that $A$ is non-empty and $w,\underline{z},\overline{z}$ are well defined. \\ Since $\psi\in C(\overline{\Omega})$, there exists $\varepsilon'>0$ such that $-n\frac{\pi}{2}+\varepsilon'<\psi(x)<n\frac{\pi}{2}-\varepsilon'$ for all $x\in \overline{\Omega}$. Fixing this $\varepsilon'$ we define the following functions \[ \psi_*=-n\frac{\pi}{2}+\varepsilon'<\psi<n\frac{\pi}{2}-\varepsilon'=\psi^*.\] Recalling (\ref{bn}) and (\ref{bbnn}) we define \begin{align} \underline{w}(x)=-Cx_n+ \frac{1}{2}|x|^{2}\tan\frac{\psi^*}{n}\nonumber\\ \overline{w}(x)=Cx_n+ \frac{1}{2}|x|^{2}\tan\frac{\psi_*}{n}\label{qq} \end{align} where $C=C(||\phi||_{C^2(\partial{\Omega})}, n,|\partial\Omega|_{C^2})$. By definition $\underline{w}\in A$, which shows that $A$ is non-empty. Next, $\max\{u,\underline{w}\}$ is upper semi-continuous and still a subsolution of (\ref{lab}), so we replace $u\in A$ by $\max\{u,\underline{w}\}$. This shows $u\geq \underline{w}$ and, therefore, $w$ is well defined. Next, we observe that since $\underline{w},\overline{w}$ are sub- and supersolutions of (\ref{lab}) respectively, we have \[ \underline{w}\leq u\leq \overline{w}\] which shows $\underline{z},\overline{z}$ are well defined. \item[Step 2.] We show that $\underline{z}$ is a subsolution of (\ref{lab}).\\ Suppose not. Then we can find a quadratic polynomial $P$ such that $P(x)\geq \underline{z}(x)$ in $B_{\rho}(0)$ with equality holding at $0$, and such that $F(D^2P)<\psi_*$ in $B_{\rho}(0)$. Now we choose $\varepsilon>0$ such that \begin{equation} F(D^2P+4\varepsilon I)<\psi_*. \label{name11} \end{equation} From the definition of $w$ and $\underline{z}$, we can find sequences $\{u_k\}\subset A$ and $\{x_k\}\subset \Omega$, with $x_k\rightarrow 0$ such that \[ \underline{z}(0)=\overline{\lim_{y\rightarrow 0}}w(y)=\lim_{x_k \rightarrow 0}u_k(x_k).
\] For $k$ large enough, we see that \begin{align*} |u_k(x_k)-P(x_k)-2\varepsilon|x_k|^2|=|u_k(x_k)-P(0)+P(0)-P(x_k)-2\varepsilon|x_k|^2|\\ =o(1)<\varepsilon \rho^2. \end{align*} On $\partial B_{\rho}(0)$, we see \[ u_k(x)\leq w(x)\leq \underline{z}(x)\leq P(x)+2\varepsilon|x|^2-\varepsilon\rho^2. \] Using the definition of $w$ and $\underline{z}$, we see that for any $k$, the following holds in $B_{\rho}(0)$ \[Q(x)=P(x)+2\varepsilon|x|^2\geq u_k(x). \] Fixing a $k$ large enough, we observe the following. The values $u_k(x_k)$ and $Q(x_k)$ are less than $\varepsilon\rho^2$ apart, but $u_k$ is at a distance of more than $\varepsilon\rho^2$ below $Q$ on $\partial B_{\rho}(0)$. So we lower $Q$ by at most $\varepsilon \rho^2$ so that it touches $u_k$ at a point inside $B_{\rho}(0)$ while still remaining above $u_k$ on $\partial B_{\rho}(0)$. So there exists $\gamma\leq \varepsilon\rho^2$ such that in $B_{\rho}(0)$ \[u_k(x)\leq P(x)+2\varepsilon|x|^2-\gamma \] with equality holding at an interior point of $B_{\rho}$. Now since $u_k$ is a subsolution, we have \[\psi\leq F(D^2P+4\varepsilon I). \] This contradicts (\ref{name11}).\\ Noting that $\underline{z}$ is upper semi-continuous, we see that it is a subsolution of (\ref{lab}). \item[Step 3.] We show that $\overline{z}$ is a supersolution of (\ref{lab}).\\ Suppose not. Then we can find a quadratic polynomial $P$ such that $P(x)\leq \overline{z}(x)$ in $B_{\rho}(0)$ with equality holding at $0$, and such that $F(D^2P)>\psi^*$ in $B_{\rho}(0)$. We choose $\varepsilon>0$ small enough such that \begin{equation} F(D^2P-2\varepsilon I)>\psi^* .\label{name22} \end{equation} We have $\overline{z}\geq P-\varepsilon|x|^2$. We define a new quadratic $Q(x)=P(x)-\varepsilon|x|^2+\varepsilon\rho^2.$ Since $\overline{z}(0)=\underline{\lim}_{x_k\rightarrow 0}w(x_k)$, for $k$ large enough we have \begin{align*} w(x_k)=\overline{z}(0)+o(1)=P(0)-P(x_k)+P(x_k)+o(1)\\ =P(x_k)+o(1)=Q(x_k)-\varepsilon\rho^2+o(1)< Q(x_k).
\end{align*} This contradicts the supremum definition of $w$ since $Q$ is a subsolution of (\ref{lab}) by (\ref{name22}). Noting that $\overline{z}$ is lower semi-continuous, we see that it is a supersolution of (\ref{lab}). \item[Step 4.] We take care of the boundary value in this final step. This is where we assume (for the first time) that $\psi$ is a constant. Note that now we may assume the boundary value $\phi\in C^2(\partial \Omega)$, since we can always approximate $\phi$ by a sequence of smooth functions $\phi_\delta$, let $u_\delta$ solve \begin{align*} \begin{cases} F(D^2u_\delta)=\psi \text{ in $\Omega$}\\ u_\delta=\phi_\delta \text{ on $\partial \Omega$} \end{cases} \end{align*} and apply the comparison principle\footnote{see Appendix} to get \[\max_{\Omega}|u_{\delta_1}-u_{\delta_2}|\leq \max_{x\in \partial \Omega}|(\phi_{\delta_1}-\phi_{\delta_2})(x)|\rightarrow 0 \] as $\delta_1, \delta_2 \rightarrow 0$. We have $u_\delta\rightarrow u$ in $C^0$ as $\delta\rightarrow 0$. Next, we pick an arbitrary point $x_0\in \partial \Omega$ and recall the construction of $\underline{w}, \overline{w}$ from (\ref{qq}). Defining similar functions at $x_0$ and using the comparison principle, we get $\underline{w}\leq u\leq \overline{w}$ with equality holding at $x_0$ for all $u\in A$. Again since $\max(u,\underline{w} )\in A$ for all $u\in A$, we may write \[w(x)=\sup_{u\in A}\max (u, \underline{w}). \] We get $\underline{w}\leq u\leq \overline{w}$ with equality holding at $x_0$, which shows \[\overline{z}(x_0)= \phi(x_0)=\underline{z}(x_0). \] Since $x_0\in \partial \Omega$ is arbitrary, we have $\overline{z}=\underline{z}=\phi $ on $\partial \Omega$. Combining the above steps and using the comparison principle, we see that \[\overline{z}=\underline{z}=w\in C^0(\overline{\Omega}) \] is the desired solution. This proves the existence part of Claim \ref{cll4.3}.
Uniqueness again follows from the comparison principle. \end{itemize} \end{proof} \section{Appendix} We state the following linear algebra Lemma that was used in proving the double normal estimate in step 3.4 of section 3. \begin{lemma}\cite[Lemma 1.2]{CNS}\label{3.9} Consider the following $n \times n$ symmetric matrix $$ M=\begin{bmatrix} \lambda'_1 & & & & a_1\\ & \ddots & & & \vdots\\ & & \ddots & & \vdots\\ & & & \lambda'_{n-1} & a_{n-1}\\ a_1 & \cdots & \cdots & a_{n-1} & a\\ \end{bmatrix} $$ where $\lambda'_1, \lambda'_2,\dots,\lambda'_{n-1}$ are fixed, $|a_i|<C$ for $1\leq i\leq n-1$, and $|a|\rightarrow+\infty$. Then the eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$ of $M$ behave like \begin{equation*} \lambda'_1+o(1),\lambda'_2+o(1),\dots, \lambda'_{n-1}+o(1),a+O(1) \end{equation*} where $o(1)$ and $O(1)$ are uniform as $a\rightarrow\infty$. \end{lemma} For the sake of completeness we state and prove the well-known comparison principle for strictly elliptic equations\footnote{We learned this proof from \cite{yy2004}. }. \begin{theorem}\label{A} Suppose that $u$ is a usc subsolution and $v$ is a lsc supersolution of the strictly elliptic equation (\ref{s1}) in $\Omega\subset\mathbb{R}^n$. If $u\leq v$ on $\partial \Omega $, then $u\leq v$ in $\Omega$. \end{theorem} \begin{proof} W.l.o.g.\ we assume $\Omega=B_1(0)$ and $u\leq v-2\delta$ on $\partial B_1$ for some small $\delta>0$. We re-write equation (\ref{s1}) as \[F(D^2u)=\sum _{i=1}^{n}\arctan \lambda_{i}-c=0.
\] Let $u^{\varepsilon}$ be an upper parabolic envelope\footnote{For $\varepsilon>0$, we define the upper $\varepsilon$-envelope of $u$ to be \[u^{\varepsilon}(x_0)=\sup_{x\in \overline{H}}\{u(x)+\varepsilon-\frac{1}{\varepsilon}|x-x_0|^2\}, \text{ for }x_0\in H \] where $H$ is an open set such that $\overline{H}\subset B_1$.} satisfying \begin{align*} F(D^2u^{\varepsilon})\geq 0 \\ D^2u^{\varepsilon}\geq -C/\varepsilon\\ ||u^{\varepsilon}||_{C^{0,1}}\leq C/\varepsilon \end{align*} outside a measure zero subset where $u^{\varepsilon}$ is punctually second order differentiable and $C$ is chosen such that $u^{\varepsilon}-v_{\varepsilon}\leq C-\varepsilon|x-x_0|^2$ on $\partial B_1$ with equality holding at $x_0\in B_1.$ We see that $0\leq u^{\varepsilon}(x)-u(x)\leq u(x^*)-u(x)+\varepsilon$ where $x^*\rightarrow x$ as $\varepsilon\rightarrow 0$. By symmetry, the lower parabolic envelope $v_{\varepsilon}$ satisfies \begin{align*} F(D^2v_{\varepsilon})\leq 0 \\ D^2v_{\varepsilon}\leq C/\varepsilon\\ ||v_{\varepsilon}||_{C^{0,1}}\leq C/\varepsilon \end{align*} and $0\geq v_{\varepsilon}(x)-v(x)\geq v(x_*)-v(x)-\varepsilon$ where $x_*\rightarrow x$ as $\varepsilon\rightarrow 0$. Note that $v_{\varepsilon}-u^{\varepsilon}\leq L+\frac{C}{\varepsilon}|x-x_0|^2$ for $x_0\in B_1$ where $L$ is a linear function. The convex envelope $\Gamma(v_{\varepsilon}-u^{\varepsilon})$ is in $C^{1,1}$. From Alexandroff's estimate we have \[\sup_{B_1}(v_{\varepsilon}-u^{\varepsilon})^{-}\leq C(n)[\int_{\Sigma}\det D^2\Gamma]^{1/n} \] where $\Sigma=\{x\in B_1|\Gamma(x)=v_{\varepsilon}(x)-u^{\varepsilon}(x)\}$. Now in $\Sigma$, we have $0\leq D^2\Gamma\leq D^2(v_{\varepsilon}-u^{\varepsilon})$ or $L(x)\leq v_{\varepsilon}(x)-u^{\varepsilon}(x)$ near $x_0\in \Sigma$. For $K$ large since $u^{\varepsilon}+\frac{K}{\varepsilon}|x|^2$ is convex and $v_{\varepsilon}-\frac{K}{\varepsilon}|x|^2$ is concave, we have the following for a.e. 
$x_0\in B_1$ \begin{align*} v_{\varepsilon}=\Gamma+\frac{K}{\varepsilon}|x|^2+O(|x-x_0|^2)\\ u^{\varepsilon}=\Gamma+\frac{K}{\varepsilon}|x|^2+O(|x-x_0|^2). \end{align*} Again, since $v_{\varepsilon}$ is a supersolution and $u^{\varepsilon}$ is a subsolution, for a.e. $x_0\in B_1$ we have \begin{align*} F(D^2v_{\varepsilon}(x_0))\leq 0\\ F(D^2u^{\varepsilon}(x_0))\geq 0\\ F(D^2v_{\varepsilon}(x_0))-F(D^2u^{\varepsilon}(x_0))\leq 0. \end{align*} Also, for a.e. $x_0\in \Sigma$, we have $D^2v_{\varepsilon}(x_0)-D^2u^{\varepsilon}(x_0)\geq 0$. However, $F$ is strictly elliptic, so we must have $F(D^2v_{\varepsilon})-F(D^2u^{\varepsilon})\geq 0$, which shows \begin{align*} F(D^2v_{\varepsilon}(x_0))=F(D^2u^{\varepsilon}(x_0))\text{ for a.e. } x_0\in \Sigma. \end{align*} Again, since $F$ is strictly elliptic, the ray from $D^2u^{\varepsilon}(x_0)$ in the positive direction $D^2v_{\varepsilon}(x_0)-D^2u^{\varepsilon}(x_0)$ intersects the level set $\{F=F(D^2u^{\varepsilon}(x_0))\}$ only once, which implies $D^2v_{\varepsilon}(x_0)=D^2u^{\varepsilon}(x_0)$. This shows $\sup_{B_1}(v_{\varepsilon}-u^{\varepsilon})^{-}\leq 0$, which proves that \[v\geq v_{\varepsilon}\geq u^{\varepsilon}\geq u \text{ in } B_1. \] \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} \label{intro} A neutron star is a compact stellar object that can provide valuable information on the physics of dense nuclear matter, because the density of the central part of a neutron star can reach several times the normal nuclear matter density. In this dense environment, the quantum degeneracy pressure of hadrons dominates, and microscopic calculations based on QCD are almost impossible due to the highly non-perturbative nature of the strong interaction. Hence, many effective approaches, including energy density functionals, have been used to understand the properties of nuclear matter and the macroscopic quantities of neutron stars such as masses and radii. Most of the observed neutron stars are radio pulsars. Even though the magnetic field strength of most pulsars can be measured, their masses and radii are not easily extractable if they are not in close binaries. Masses and radii of neutron stars have been measured in various kinds of binaries: low-mass X-ray and optical binaries, neutron star-white dwarf binaries, double neutron star binaries, and neutron star-main sequence binaries~\cite{Pra13}. In two neutron star-white dwarf binaries, $2 M_\odot$ neutron stars have been observed~\cite{demorest2010,antoniadis2013,cromartie19}. These observations require the maximum mass of neutron stars to lie above $2 M_\odot$. On the other hand, most of the well-measured neutron star masses in double neutron star binaries lie below $1.5 M_\odot$. These observations imply that the actual masses of neutron stars may strongly depend on the evolution of neutron star binaries and of their progenitors~\cite{Lee14,Lee17}. The detection of gravitational waves (GW170817), followed by a gamma-ray burst (GRB 170817A) and electromagnetic afterglows (AT 2017gfo), from the merger of a neutron star binary opened a new era of multi-messenger astronomy and astrophysics~\cite{GW170817PRL,GW170817ApJ}. First, the estimation of the masses and tidal deformabilities of neutron stars from the gravitational wave signals of GW170817 proved that gravitational waves can provide valuable information on the inner structure of neutron stars.
This implies that nuclear and particle physics in dense environments can be tested by gravitational waves. In this article, we discuss the neutron star equation of state based on nuclear energy density functionals. Second, a series of afterglow observations in various frequency bands confirmed that a kilonova can be formed by the merger of neutron stars in a binary. Since the mergers of neutron star binaries provide a neutron-rich environment, which is essential for the production of heavy elements via the r-process, it is believed that many heavy elements are produced in this kilonova. As noted above, multi-messenger observations of the merger of a neutron star binary opened new possibilities for nuclear and particle physics. In the work of Kim et al.~\cite{Kim2018a,Kim2018b}, we investigated several Skyrme forces and energy density functionals (EDFs), including KIDS (Korea: IBS-Daegu-Sungkyunkwan), which are consistent with the properties of nuclear matter in finite nuclei and the constraints from neutron star observations. The KIDS model is a density functional model that has been developed in Korea~\cite{KIDSprc97}. In this article, we review the expansion scheme of the KIDS energy density functional and its applications to finite nuclei and neutron stars. In Sec.~\ref{sec2}, we summarize KIDS and the EDFs used in Kim et al.~\cite{Kim2018a,Kim2018b}, focusing on the constraints from various nuclear physics experiments. In Sec.~\ref{sec3}, we summarize our numerical results on the neutron star properties, focusing on the constraints provided by electromagnetic and gravitational-wave observations. In Sec.~\ref{sec4}, we discuss the future possibilities of multi-messenger observations and ground-based experiments.
\section{Nuclear matter properties with nuclear energy density functionals} \label{sec2} Neutron stars provide conditions extreme both in the density $\rho = \rho_n + \rho_p$ and in the neutron-proton asymmetry $\delta = (\rho_n-\rho_p)/(\rho_n+\rho_p)$, where $\rho_n$ and $\rho_p$ are the neutron and proton densities, respectively. An important issue in the recent study of nuclear structure and dense nuclear matter theory is to understand the validity of predictions obtained from extrapolation to these extreme conditions, and to make quantitative estimation of the uncertainty feasible. Among several possibilities, an expansion scheme with a suitable expansion variable can provide a way to realize the uncertainty quantification for nuclei and neutron stars. The uncertainty quantification constitutes an essential part of the motivation to develop the KIDS EDF model, which aims to construct a theoretical framework that can provide reliable extrapolation and prediction in the extreme conditions of neutron stars. Based on the observation that $k_F/m_\rho$ could be an expansion parameter for the energy density of infinite nuclear matter relevant to neutron stars, where $k_F$ is the Fermi momentum and $m_\rho$ is the rho-meson mass, we expand the energy per particle in homogeneous nuclear matter in powers of the cubic root of the density as \begin{eqnarray} {\cal E} (\rho, \delta) = {\cal T}(\rho, \delta) + \sum_{n=0}^{N-1} (\alpha_n + \beta_n \delta^2) \rho^{1+n/3}, \end{eqnarray} where ${\cal T}$ is the kinetic energy, and the summation represents the contribution from strong interactions. We assume a quadratic approximation in the asymmetry $\delta$. The parameters $\alpha_n$ and $\beta_n$ are fit to empirical or well-known properties of symmetric and asymmetric nuclear matter, respectively.
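As an illustration of this fitting step, the determination of the symmetric-matter parameters $\alpha_0 \sim \alpha_2$ reduces to a small linear system. The Python sketch below solves it for the three saturation conditions ($\rho_0=0.16$ fm$^{-3}$, $E/A=16$ MeV, $K_0=240$ MeV) using an illustrative textbook kinetic term; the resulting coefficients are a pedagogical sketch, not the published KIDS-ad2 parameters.

```python
import math

# Sketch of the KIDS fitting step for symmetric nuclear matter (delta = 0):
# E(rho) = T(rho) + alpha0*rho + alpha1*rho^(4/3) + alpha2*rho^(5/3),
# with the alphas solved from E(rho0) = -16 MeV, P(rho0) = 0 (i.e. E'(rho0) = 0)
# and K0 = 9 rho0^2 E''(rho0) = 240 MeV. The kinetic term uses an
# illustrative hbar^2/2m value; this is not the published KIDS-ad2 fit.

HBAR2_2M = 20.735                 # hbar^2 / (2 m_N) in MeV fm^2 (illustrative)
RHO0, E_SAT, K0 = 0.16, -16.0, 240.0

A_T = 0.6 * HBAR2_2M * (1.5 * math.pi**2) ** (2.0 / 3.0)  # T(rho) = A_T rho^(2/3)

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

r = RHO0
# Rows: the value, first and second density derivatives of the interaction part.
A = [[r, r**(4/3), r**(5/3)],
     [1.0, (4/3) * r**(1/3), (5/3) * r**(2/3)],
     [0.0, (4/9) * r**(-2/3), (10/9) * r**(-1/3)]]
b = [E_SAT - A_T * r**(2/3),
     -(2/3) * A_T * r**(-1/3),
     K0 / (9 * r**2) + (2/9) * A_T * r**(-4/3)]
alpha = solve3(A, b)

def energy(rho):
    """Energy per particle (MeV) of symmetric matter in the fitted sketch."""
    return A_T * rho**(2/3) + sum(a * rho**(1 + n / 3) for n, a in enumerate(alpha))

print(energy(RHO0))  # ~ -16 MeV at saturation, by construction
```

The asymmetric-part parameters $\beta_n$ would be fixed analogously, by fitting to pure neutron matter pseudo-data instead of saturation conditions.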
It has been shown that the number of terms necessary for an optimal description of nuclear matter in the density range from well below the saturation density to the core of the heaviest neutron star is 7, consisting of 3 from the symmetric part ($\alpha_0 \sim \alpha_2$) and 4 from the asymmetric part ($\beta_0 \sim \beta_3$) \cite{KIDSprc97}. In addition, it has been shown that even if we include more than 3 terms for the symmetric matter and more than 4 for the asymmetric matter, the additional terms seldom affect the predictions for the basic properties of nuclei and neutron stars \cite{prc100}. The small difference between the results with and without additional terms indicates the size of the uncertainty from higher-order contributions. Nuclear matter is an extension of finite nuclei in the limit of an infinite number of nucleons. In a majority of cases, nuclear structure models are determined by fitting the parameters to a set of selected nuclear data, and the nuclear matter properties are obtained as results (or predictions) of the model. Compared to this conventional approach, our approach with the KIDS model is a kind of reverse engineering: the model parameters are first fit to the nuclear matter equation of state, and additional parameters in the following steps are determined from nuclear data. The extension of the KIDS model to the application to nuclei is described in depth in \cite{prc99}. We note that in any case, regardless of the fitting procedure, a realistic nuclear structure model should describe both finite nuclei and infinite nuclear matter well and simultaneously. In this work, we adopt the parameterization denoted as `KIDS-ad2' in \cite{KIDSprc97}. In the KIDS-ad2 model, $\alpha_0 \sim \alpha_2$ are determined to reproduce three basic properties of symmetric nuclear matter: the saturation density $\rho_0$ ($=0.16$ fm$^{-3}$), the binding energy per nucleon $E/A$ ($=16$ MeV), and the incompressibility $K_0$ ($=240$ MeV) at the saturation density.
With the $\alpha_n$ thus determined, $\beta_0 \sim \beta_3$ are fit to the equation of state of pure neutron matter calculated theoretically \cite{apr}. In Ref.~\cite{dutra}, 11 experimental/empirical (Exp/Emp for short) constraints on the nuclear matter properties are adopted to test nuclear models. Among the 240 Skyrme force models, only 16 satisfy all 11 Exp/Emp constraints. In Table \ref{tab1} and Fig. \ref{fig1}, we show the nuclear matter properties obtained from the KIDS model and compare them with the Exp/Emp ranges. In Table \ref{tab1}, $\rho_0$, $E/A$ and $K_0$ are assumed to take the given values, and the parameters of the KIDS model in the symmetric part are solved to reproduce them. $Q_0$ is obtained as a consequence of the solution. On the other hand, since the parameters in the asymmetric part are fitted to the pure neutron matter (PNM) equation of state of \cite{apr}, the symmetry energy parameters $J$, $L$ and $K_\tau$ are calculated as results of the fitting. Figure \ref{fig1}(a) shows the energy of PNM at sub-saturation densities. Theoretical equations of state calculated with an EFT \cite{dss} and a lattice chiral EFT \cite{lattice} are consistent with the prediction of the KIDS model. In subsequent works, contributions from the chiral three-nucleon forces are taken into account \cite{Dri16,Tew16}. Around $\rho \sim 0.016-0.020$ fm$^{-3}$ ($=0.1-0.125 \rho_0$), the new calculations predict $E_{\rm PNM} \sim 4$ MeV, so the result of the KIDS model is consistent with the updated predictions from EFTs. One can also find $E_{\rm PNM}$ at very low densities calculated with the Av4 potential and the quantum Monte Carlo method \cite{qmcav4} (QMC Av4). In Ref. \cite{prc99}, $E_{\rm PNM}/E_{\rm FG}$ (where $E_{\rm FG}$ is the energy of the free Fermi gas) is compared among various models. Though not fully consistent, the KIDS model prediction agrees with the QMC Av4 result in dilute neutron matter.
Figure \ref{fig1}(b, c) presents the supra-saturation-density behavior of the pressure in PNM (b) and in symmetric nuclear matter (SNM) (c). The model predictions are compared to results from heavy-ion collision experiments. \begin{table*} \caption{Properties of nuclear matter calculated with the KIDS model. The saturation density $\rho_0$ is in units of fm$^{-3}$. Exp/Emp values are quoted from Ref.~\cite{dutra}. $E/A$, $K_0$, and $Q_0$ are the binding energy per particle, compression modulus, and skewness (the third derivative of the energy per particle) at the saturation density in the symmetric nuclear matter, respectively. $J$, $L$, and $K_\tau$ are parameters in the symmetry energy of nuclear matter. $E/A$, $K_0$, $Q_0$, $J$, $L$, and $K_\tau$ are in units of MeV.} \label{tab1} \begin{center} \begin{tabular}{llllllll} \hline\noalign{\smallskip} & $\rho_0$ & $E/A$ & $K_0$ & $-Q_0$ & $J$ & $L$ & $-K_\tau$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} KIDS & $0.160$ & $16.00$ & $240.0$ & $372.7$ & $32.8$ & $49.1$ & $375.1$ \\ Exp/Emp & $\simeq 0.16$ & $\simeq 16.0$ & 200-260 & 200-1200 & 30-35 & 40-76 & 372-760 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} \begin{figure*} \begin{center} \resizebox{0.8\textwidth}{!}{ \includegraphics{fig1.png} } \end{center} \caption{Energy below (a) and pressure above (b, c) the saturation density. The results of an EFT and a lattice chiral EFT in (a) are taken from \cite{dss} and \cite{lattice}, respectively. Results from experiment in (b) are from \cite{flow}, and those in (c) are from \cite{flow,kaon,Fuc06,You99}. } \label{fig1} \end{figure*} As stated above, a realistic nuclear structure model should be able to reproduce the basic properties of nuclei. We transform the KIDS EDF into a Skyrme-type in-medium nuclear potential, and solve the Hartree-Fock equation to obtain the wave function of a nucleus \cite{prc100,prc99}. Two model parameters are added when the Skyrme-type potential is constructed.
One term accounts for the gradient of the density distribution in a nucleus, and the other for the spin-orbit interactions. They are fit to the binding energy per nucleon and the charge radius of $^{40}$Ca, $^{48}$Ca, and $^{208}$Pb, and the rest of the nuclear properties are obtained as predictions of the model. Tables \ref{tab2} and \ref{tab3} show the results of the KIDS model and compare them with experimental data. Though only a small portion of the parameters (2 of 9) is fit to nuclear data, the predicted bulk properties of spherical magic nuclei agree with experiment remarkably well, with deviations of the order of 0.1\% or less for medium and heavy nuclei. \begin{table*} \caption{Binding energy per nucleon in units of MeV. Deviation is defined as $|{\rm Exp}-{\rm KIDS}|/{\rm Exp}\times 100$. Data are taken from \cite{exp1,exp2}. } \label{tab2} \begin{center} \begin{tabular}{ccccccc} \hline\noalign{\smallskip} & $^{16}$O & $^{40}$Ca & $^{48}$Ca & $^{90}$Zr & $^{132}$Sn & $^{208}$Pb \\ \noalign{\smallskip}\hline\noalign{\smallskip} Exp & 7.9762 & 8.5513 & 8.6667 & 8.7100 & 8.3550 & 7.8675 \\ KIDS & 7.8684 & 8.5565 & 8.6564 & 8.7328 & 8.3563 & 7.8809 \\ Deviation (\%) & 1.35 & 0.06 & 0.12 & 0.26 & 0.02 & 0.17 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} \begin{table*} \caption{Charge radius in units of fm. Deviation is defined as $|{\rm Exp}-{\rm KIDS}|/{\rm Exp}\times 100$. Data are taken from \cite{exp1,exp2}.
} \label{tab3} \begin{center} \begin{tabular}{ccccccc} \hline\noalign{\smallskip} & $^{16}$O & $^{40}$Ca & $^{48}$Ca & $^{90}$Zr & $^{132}$Sn & $^{208}$Pb \\ \noalign{\smallskip}\hline\noalign{\smallskip} Exp & 2.6991 & 3.4776 & 3.4771 & 4.2694 & 4.7093 & 5.5012 \\ KIDS & 2.7618 & 3.4781 & 3.4867 & 4.2476 & 4.7089 & 5.4887 \\ Deviation (\%) & 2.32 & 0.01 & 0.28 & 0.51 & 0.01 & 0.23 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} It has been shown in many papers that nuclear equations of state calibrated to similar conditions at the saturation density can behave very differently at high densities. The masses of the neutron stars estimated from the observation GW170817 are about $1.4 M_\odot$. The density at the center of a $1.4 M_\odot$ neutron star is in the range $(2 \sim 3) \rho_0$, with the uncertainty originating from the nuclear model. In order to study the uncertainty of the tidal deformability due to the dependence on the nuclear model, we consider four non-relativistic Skyrme force models, GSkI \cite{gsk1}, SLy4 \cite{sly4}, SkI4 \cite{ski4} and SGI \cite{sg1}, and two relativistic mean field models, MS1 and MS1b \cite{ms1}. Table \ref{tab4} shows that the models have similar nuclear matter properties at the saturation density, and they are also in reasonable agreement with experiment. \begin{table*} \caption{Four basic nuclear matter properties obtained from the models in comparison. The saturation density $\rho_0$ is in fm$^{-3}$, and $E/A$, $K_0$ and $J$ are in units of MeV.
The MS1b model gives the same results as MS1.} \label{tab4} \begin{center} \begin{tabular}{cccccc} \hline\noalign{\smallskip} & GSkI & SLy4 & SkI4 & SGI & MS1 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\rho_0$ & 0.159 & 0.160 & 0.160 & 0.154 & 0.148 \\ $E/A $ & 16.02 & 15.97 & 15.95 & 15.89 & 15.75 \\ $K_0 $ & 230.2 & 229.9 & 248.0 & 261.8 & 250 \\ $J $ & 32.0 & 32.0 & 29.5 & 28.3 & 35 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} \begin{figure}[t] \begin{center} \resizebox{0.4\textwidth}{!}{\includegraphics{fig2a.png}} \centerline{(a) $p$ vs $\rho/\rho_0$} \resizebox{0.4\textwidth}{!}{\includegraphics{fig2b.png}} \centerline{(b) $p$ vs $\epsilon$} \end{center} \caption{Pressure ($p$) vs density ($\rho/\rho_0$) and energy density ($\epsilon$) for various nuclear equations of state~\cite{Kim2018a,Kim2018b}.} \label{fig2} \end{figure} \section{Neutron star equation of state and tidal deformability with nuclear energy density functionals} \label{sec3} In Fig.~\ref{fig2}, the nuclear equations of state considered in this work are plotted. In order to understand the characteristics of each equation of state, we plot the adiabatic index in Fig.~\ref{fig3}: \begin{equation} \Gamma (p) \equiv \frac{d (\ln p)}{d (\ln \rho)}. \end{equation} An ideal gas equation of state can be characterized by a constant adiabatic index $\Gamma$, e.g., $\Gamma = 5/3$ for a non-relativistic ideal gas and $\Gamma=4/3$ for an ultra-relativistic ideal gas. However, as one can see in Fig.~\ref{fig3}(a), the adiabatic index $\Gamma$ is not constant for more realistic nuclear equations of state, and $\Gamma > 2$ at high densities for most of the models because the strong interaction dominates. Recently, spectral expansions of the adiabatic index have been introduced to parameterize neutron star equations of state~\cite{Lindblom18} and used to analyze the tidal deformability of neutron stars~\cite{LSC18}.
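As a numerical sanity check of this definition, $\Gamma$ can be evaluated by centred finite differences in $\ln p$ vs $\ln \rho$ from any tabulated equation of state. The Python sketch below does this for a polytrope $p = K\rho^{\gamma}$ (with arbitrary illustrative $K$ and $\gamma$), for which the constant index must be recovered:

```python
import math

# Numerical sketch of the adiabatic index Gamma(p) = d(ln p)/d(ln rho)
# from a tabulated equation of state. As a check we tabulate a polytrope
# p = K rho^gamma, for which Gamma must come out constant and equal to gamma.
# K and GAMMA are arbitrary illustrative values, not a fit to any model.

K, GAMMA = 2.0, 5.0 / 3.0                       # illustrative polytrope
rho = [0.01 * 1.1**i for i in range(60)]        # density grid
p = [K * r**GAMMA for r in rho]                 # tabulated pressure

def adiabatic_index(rho, p):
    """Centred log-log finite differences of a tabulated EOS."""
    lr, lp = [math.log(x) for x in rho], [math.log(x) for x in p]
    return [(lp[i + 1] - lp[i - 1]) / (lr[i + 1] - lr[i - 1])
            for i in range(1, len(rho) - 1)]

gammas = adiabatic_index(rho, p)
print(min(gammas), max(gammas))  # both ~ 5/3 for a polytrope
```

The same routine applied to a realistic tabulated EOS reproduces the density-dependent $\Gamma$ curves of Fig. 3.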
In Fig.~\ref{fig3}(b), the adiabatic index is plotted as a function of pressure for comparison. Note that the adiabatic indices of MS1 and MS1b at low (high) densities are much larger (smaller) than those of the other models. On the other hand, the adiabatic index of the KIDS model shows distinctive behavior among the models considered in this work. In Fig.~\ref{fig4}, the speed of sound is plotted. At high density, even though KIDS satisfies the causality limit $c_s^2/c_0^2 < 1$, it gives a large speed of sound, far above the conformal limit $c_s^2/c_0^2 = 1/3$. Note that a large speed of sound at high density is required to make a stiff equation of state which is consistent with current observations~\cite{demorest2010,antoniadis2013,cromartie19}. \begin{figure}[t] \begin{center} \resizebox{0.4\textwidth}{!}{\includegraphics{fig3a.png}} \centerline{(a) $\Gamma$ vs $\rho/\rho_0$} \resizebox{0.4\textwidth}{!}{\includegraphics{fig3b.png}} \centerline{(b) $\Gamma$ vs $p$} \end{center} \caption{Adiabatic index $\Gamma$ vs density ($\rho/\rho_0$) and pressure ($p$) for various nuclear equations of state.} \label{fig3} \end{figure} \begin{figure}[t] \begin{center} \resizebox{0.4\textwidth}{!}{\includegraphics{fig4a.png}} \centerline{(a) $c_s^2/c_0^2$ vs $\rho/\rho_0$ } \centerline{\phantom{x}} \resizebox{0.4\textwidth}{!}{\includegraphics{fig4b.png}} \centerline{(b) $c_s^2/c_0^2$ vs $p$ } \end{center} \caption{Speed of sound ($c_s$) for various equations of state~\cite{Kim2018a,Kim2018b}. $c_0$ is the speed of light in vacuum. } \label{fig4} \end{figure} In Fig.~\ref{fig5}, the masses, radii and central densities of neutron stars are summarized. As marked with cyan color, the most massive neutron star in neutron star-white dwarf binaries has $M= 2.14^{+0.20}_{-0.18} M_\odot$~\cite{cromartie19}, which sets the lower limit on the maximum mass of neutron stars.
In Fig.~\ref{fig5}(a), the radii of MS1 and MS1b are much larger than those of the others for a given neutron star mass, and are not consistent with current X-ray burst observations~\cite{steiner2010}. In Fig.~\ref{fig5}(b), for a given neutron star mass, the central densities of MS1 and MS1b are much smaller than those of the other models, due to their larger adiabatic indices (higher pressure) at low densities, as in Fig.~\ref{fig3}. Even though the adiabatic index of the KIDS model is quite different from those of the Skyrme force models (GSkI, SLy4, SkI4, SGI), the masses and radii of neutron stars with the KIDS model, as shown in Fig.~\ref{fig5}(a), are less distinctive than those of the MS1 and MS1b models; the KIDS, GSkI and SLy4 models produce similar masses and radii, which are consistent with current X-ray observations. \begin{figure}[t] \begin{center} \resizebox{0.4\textwidth}{!}{\includegraphics{fig5a.png}} \centerline{(a) Mass vs Radius } \resizebox{0.4\textwidth}{!}{\includegraphics{fig5b.png}} \centerline{(b) Mass vs Central density ($\rho_c$) } \end{center} \caption{ Mass vs radius and central density of neutron stars for various neutron star equations of state~\cite{Kim2018a,Kim2018b}. The upper three (cyan, yellow and green) horizontal bands indicate neutron star masses observed in white dwarf-neutron star binaries~\cite{demorest2010,antoniadis2013,cromartie19}. The lower two (blue and red) horizontal bands indicate the neutron star masses estimated in the neutron star binary merger GW170817~\cite{GW170817PRL}. Black circles with horizontal error bars around $R \sim 12$ km in (a) correspond to the probable radii of neutron stars estimated from X-ray burst observations~\cite{steiner2010}. Blue and red triangles around $R \sim 13$ km in (a) correspond to the new estimates by NICER~\cite{Riley19,Miller19}.
} \label{fig5} \end{figure} \begin{figure}[t] \begin{center} \resizebox{0.4\textwidth}{!}{\includegraphics{fig6.png}} \end{center} \caption{Tidal Love number ($k_2$) of a single neutron star~\cite{Kim2018a,Kim2018b}.} \label{fig6} \end{figure} \begin{figure}[t] \begin{center} \resizebox{0.39\textwidth}{!}{\includegraphics{fig7a.png}} \centerline{(a) $\Lambda$ vs Mass} \resizebox{0.4\textwidth}{!}{\includegraphics{fig7b.png}} \centerline{(b) $\Lambda$ vs Compactness ($M/R$)} \end{center} \caption{Dimensionless tidal deformability ($\Lambda$) of a single neutron star~\cite{Kim2018a,Kim2018b}. Horizontal grey lines in all panels indicate the upper limit of $\Lambda$ with $M=1.4 M_{\odot}$ obtained from GW170817. In (a), the crossing points of the curves with the vertical dashed line correspond to $\Lambda$ at $M = 1.4 M_\odot$.} \label{fig7} \end{figure} The dimensionless tidal deformability ($\Lambda$), which parameterizes the correlation between the external quadrupolar tidal field (${\cal E}_{ij}$) and the induced quadrupole moment of a neutron star (${\cal Q}_{ij}$), is defined by \begin{equation} {\cal Q}_{ij} = - \Lambda\, M^5 {\cal E}_{ij}, \end{equation} and the tidal Love number ($k_2$) can be obtained from the relation~\cite{GW170817PRL,Hin10,Fav14} \begin{equation} \Lambda = \frac{2}{3} \left(\frac{M}{R}\right)^{-5} k_2, \label{eq4} \end{equation} where $M/R$ is called the compactness of a neutron star. The dimensionless tidal deformability $\Lambda$ can be extracted from gravitational wave signals by accumulating the phase shifts of gravitational waves due to the $\Lambda$ contribution in gravitational waveform models~\cite{GW170817PRL,Fav14}. On the other hand, the tidal Love number $k_2$ gives more intuitive information about the deformation of neutron stars. The tidal Love number of the neutron star is plotted in Fig.~\ref{fig6}. One can see that the tidal Love number converges to zero as the mass of a neutron star increases to its maximum value.
This behavior can be understood by noting that the tidal deformability of a black hole is zero because there is no matter to be deformed. The tidal Love number also decreases toward zero as the mass of a neutron star decreases, even though the radius increases, because for low-mass neutron stars the density distribution is more important than the radius. Note that, as the neutron star mass decreases below 0.5 $M_\odot$, the mass distribution becomes more centralized and the outer shell becomes more dilute. In Fig.~\ref{fig7}, the dimensionless tidal deformability $\Lambda$ of a single neutron star is summarized for masses above 0.9 $M_\odot$, the range in which most of the observed neutron stars are distributed. In Fig.~\ref{fig7}(a), one can confirm that MS1 and MS1b are not consistent with the upper limit of the tidal deformability obtained from GW170817~\cite{GW170817PRL}. The other 5 models, including KIDS, are consistent with the current upper bound of the tidal deformability $\Lambda$. However, with more observations of the tidal deformability, one may be able to exclude some of the models, because the largest difference in $\Lambda$ at $M=1.4 M_\odot$ is about a factor of two, as in Fig.~\ref{fig7}(a). In Fig.~\ref{fig7}(b), $\Lambda$ is plotted as a function of the compactness $M/R$ for masses above 0.9 $M_\odot$. In this mass range, $\Lambda$ mainly depends on the compactness, almost independently of the chosen model. This result can be understood from the results in Fig.~\ref{fig6} and Eq.~(\ref{eq4}). Since $k_2$ is almost proportional to $(M/R)^{-1}$ for masses above 0.9 $M_\odot$, as in Fig.~\ref{fig6}, the tidal deformability $\Lambda$ of a single neutron star becomes mainly a function of the compactness ($M/R$) in this mass range. Hence, from Eq.~(\ref{eq4}), one can get the approximate relation $\Lambda \propto (M/R)^{-6}$.
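The scaling argument above can be checked directly from Eq. (4). In the Python sketch below, the constant \texttt{kappa} standing in for the product $k_2\,(M/R)$ is purely illustrative; under that assumption, doubling the compactness must reduce $\Lambda$ by a factor of $2^6 = 64$:

```python
# Quick check of Eq. (4), Lambda = (2/3) k2 (M/R)^(-5), and of the
# approximate scaling discussed in the text: if k2 is proportional to
# (M/R)^(-1) above ~0.9 solar masses, then Lambda ~ (M/R)^(-6).
# The proportionality constant kappa below is purely illustrative.

def dimensionless_lambda(k2, compactness):
    """Eq. (4): dimensionless tidal deformability from the Love number."""
    return (2.0 / 3.0) * k2 * compactness**-5

kappa = 0.02  # hypothetical value of the product k2 * (M/R)

def lam(compactness):
    return dimensionless_lambda(kappa / compactness, compactness)

# Doubling the compactness should reduce Lambda by 2^6 = 64.
ratio = lam(0.15) / lam(0.30)
print(ratio)  # ~ 64, up to rounding
```

This is why, in this mass range, $\Lambda$ collapses onto a nearly model-independent curve when plotted against $M/R$.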
In summary, we found that the KIDS, GSkI and SLy4 models are consistent with both the mass-radius constraints from X-ray observations and the upper bound on the tidal deformability from gravitational-wave observations. The MS1 and MS1b models are more or less excluded because their results are consistent neither with the masses and radii obtained from X-ray observations nor with the upper bound on the tidal deformability obtained from gravitational-wave observations. The remaining two Skyrme force models (SkI4 and SGI) are marginally consistent with both observations, because their radii are near the upper boundary of the X-ray observations, as in Fig.~\ref{fig5}, and their tidal deformabilities are also close to the upper boundary of the revised estimate of the tidal deformability, $\Lambda_{1.4} = 190^{+390}_{-120}$, of GW170817~\cite{LSC18}. Hence, the validity of nuclear equations of state can be further tested by future observations with smaller uncertainties. \section{Prospects} \label{sec4} In this work, we showed that energy density functionals such as KIDS allow us to understand both finite nuclei and nuclear matter in a systematic way. Although we did not include the details in this review, the symmetry energy plays an important role in understanding the physical properties of dense nuclear matter~\cite{Kra18,Rai19} and neutron stars~\cite{Lattimer14}. Producing exotic nuclei that have larger differences in the numbers of neutrons and protons, and performing collision experiments using them, will help us understand the symmetry energy at high densities. Such experiments will be available at future or planned rare-isotope facilities including RAON (Rare isotope Accelerator complex for ON-line experiments) in Korea~\cite{Ahn13}. The recent measurement of a neutron star's mass and radius by NICER~\cite{Riley19,Miller19} provides new constraints on the neutron star equations of state.
The role of asymmetric nuclear matter properties, such as the symmetry energy, has to be reanalyzed to satisfy the new observation. Energy density functionals including KIDS are very flexible and appropriate for this analysis, because they retain degrees of freedom to accommodate the new observation before the nuclear properties are fit. The work with KIDS in this direction is in progress. In addition to GW170817, the tidal deformability of neutron stars has been estimated from the neutron star binary merger GW190425: $\tilde\Lambda \le 600$ with a low-spin prior and $\tilde\Lambda \le 1100$ with a high-spin prior~\cite{GW190425}. Despite the uncertainties in the event rates of neutron star-neutron star mergers, we can still be optimistic about the future discovery of neutron star merger events and more accurate measurements of neutron star properties, because the sensitivities of the gravitational wave detectors are expected to improve. \section*{Acknowledgements} We were supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government; Ministry of Science and ICT and Ministry of Education. Y.M.K. NRF-2016R1A5A1013277 and NRF-2019R1C1C1010571; K.K. NRF-2016R1A5A1013277;\\ C.H.H. NRF-2018R1A5A1025563;\\ C.H.L. NRF-2016R1A5A1013277\\ and NRF-2018R1D1A1B07048599.
\section{Introduction} A large portion of the empirical methods in Natural Language Processing (NLP) is defined over canonical text interpretation tasks such as Named Entity Recognition (NER), Semantic Role Labeling (SRL), Sentiment Analysis (SA), among others. The systematic creation of benchmarks and the comparative performance analysis of resources, representations and algorithms are instrumental in moving the boundaries of natural language interpretation. SemEval \cite{semeval-2019-international,semeval-2018-international,semeval-2017-international,semeval-2016-international,semeval-2015-international,semeval-2014-international,semeval-2013-joint-lexical,semeval-2012-sem} is the primary venue in the NLP community for the organisation of shared NLP tasks and challenges. SemEval is organised as an annual workshop co-located with the main NLP conferences and has attracted a large and growing audience of task organisers and participants. Despite its recognition as a major driver in the creation of gold standards and evaluation campaigns, there is no existing meta-analysis which interprets the overall contribution of SemEval as a collective effort. This paper aims to address this gap by performing a systematic descriptive quantitative analysis of 96 tasks encompassing the SemEval campaigns between 2012 and 2019. This study targets understanding the evolution of SemEval over this period, describing the core patterns with regard to task popularity, impact, task format (inputs, outputs), techniques, target languages and evaluation metrics. This paper is organised as follows: section 2 describes related work; 3 describes the methodology; 4 defines the underlying task macro-categories; 5 and 6 present the number of tasks and their popularity in 2012-2019; 7 discusses SemEval impact in terms of citations; 8 shows the targeted languages; then, sections 9, 10 and 11 analyse inputs, outputs and evaluation metrics; 12 focuses on sentiment analysis architectures and representations; this is followed by a Discussion section; we close the paper with Recommendations and Conclusions.
\section{Related work} Each SemEval task is described in an \textit{anthology} paper, which contains: a summary of previous editions or similar tasks, references to previous work, a detailed task description, evaluation methods, available resources, an overview of submitted systems and the final results of the competition. It is worth noting that there is variation, or even inconsistency, in the structure and level of detail of these descriptions. Participants are also encouraged to submit papers explaining their system architectures. However, there is a lack of overall analysis across different tasks and years in SemEval. There are existing studies analysing specific SemEval tasks. \cite{Nakov2016} focuses on the development of Sentiment Analysis tasks in 2013-2015. \cite{Sygkounas2016} is an example of a replication study of the top performing systems, in this case systems used in SemEval Twitter Sentiment Analysis (2013-2015), and focuses on architectures and performance. The evolution and challenges of semantic similarity were described in \cite{JIMENEZ2015}, an example of a study on the performance of a given type of architecture across tasks of the same type. There also exist studies on shared tasks in a given domain, especially in clinical applications of NLP \cite{PMID:30157522,10.1136/amiajnl-2011-000465}. However, they refer to tasks outside SemEval and are more oriented towards results than towards task organisation. Some studies discuss ethical issues in the organisation of and participation in shared tasks. An overview focusing on the competitive nature of tasks and fairness can be found in \cite{parra-escartin-etal-2017-ethical}. The authors of \cite{nissim-etal-2017-last} also address these issues, yet give priority to advancing the field over fair competition. Comparatively, this paper covers a wider range of NLP topics and compares sentiment analysis and semantic similarity as well as other task types/groups in a systematic manner.
To the best of our knowledge, this is the first systematic analysis of SemEval. \section{Analysis methodology} We build a corpus based on the ACL anthology archive from the SemEval workshops between the years 2012-2019. Reference material included ACL anthology papers covering the task descriptions, tasks' websites and papers describing the participating systems. All the reference papers included in this analysis are reported in Appendix B. The pre-processing consisted of manually extracting the target categories for the analysis, which include: task types, input and output types, as well as evaluation metrics, numbers of teams, languages and system architectures. Tasks were grouped based on the similarity between task types. If the same team took part in several tasks in the same year, we considered each participation as distinct. There are four missing tasks in the plotted indexes, due to cancellation (2015-16, 2019-11), task-sharing (2013-6) or lack of a supporting task description (2013-14). Numbers of citations are the numbers returned by Google Scholar, using the \textit{Publish or Perish} supporting API \cite{publish-and-perish}. The list of citations was manually validated and noisy entries were filtered out. A final table with all the values extracted from the corpus is included in Appendix B. 
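The grouping and counting steps just described amount to a simple aggregation over the extracted task records. The sketch below illustrates this step; the records, field names and values are hypothetical placeholders, not entries from the actual corpus:

```python
from collections import defaultdict

# Hypothetical task records; in the study these fields were extracted
# manually from the ACL anthology task description papers.
tasks = [
    {"year": 2018, "group": "SA",  "teams": 75},
    {"year": 2018, "group": "SEM", "teams": 30},
    {"year": 2019, "group": "SA",  "teams": 150},
    {"year": 2019, "group": "IE",  "teams": 12},
]

# Number of tasks and of participating teams per task group.
tasks_per_group = defaultdict(int)
teams_per_group = defaultdict(int)
for t in tasks:
    tasks_per_group[t["group"]] += 1
    teams_per_group[t["group"]] += t["teams"]

print(dict(tasks_per_group))  # e.g. {'SA': 2, 'SEM': 1, 'IE': 1}
print(dict(teams_per_group))
```

The same pass can accumulate any of the other manually extracted categories (languages, metrics, input/output types) by swapping the aggregated field.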
\begin{figure*}[ht] \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.93\textwidth]{Task_in_years.png} \caption[]% {{\small Number of tasks in SemEval 2012-2019}} \label{fig:tasks_in_years} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{Teams.png} \caption[]% {{\small Number of teams participating in SemEval}} \label{fig:teams} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{cumulative_citations.png} \caption[]% {{\small Cumulative number of task citations, except for citations in SemEval proceedings}} \label{fig:cum} \end{subfigure} \quad \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{cumulative_citations_internal.png} \caption[]% {{\small Cumulative number of task citations in SemEval proceedings}} \label{fig:cum_int} \end{subfigure} \caption[ AAA ] {\small Overall plots for SemEval editions 2012-2019} \label{fig:overall_plots} \end{figure*} \section{Task types and groups} Based on task description we group each task within a macro-category. Then, due to a large number of task types, tasks were clustered within 6 groups: \textit{Sentiment Analysis} (\textbf{SA}); \textit{Semantic Analysis} (\textbf{SEM}): semantic analysis, semantic difference, semantic inference, semantic role labeling, semantic parsing, semantic similarity, relational similarity; \textit{Information Extraction} (\textbf{IE}): information extraction, temporal information extraction, argument mining, fact checking; \textit{Machine Translation} (\textbf{MT}); \textit{Question Answering} (\textbf{QA}); \textit{Other} (\textbf{OT}): hypernym discovery, entity linking, lexical simplification, word sense disambiguation, taxonomy extraction, taxonomy enrichment. 
There are also macro-categories defined by the SemEval organizers, starting from 2015, but we found them not consistent enough for the purpose of this analysis. \section{SemEval tasks in years} Within the 8 editions of SemEval, a total of 96 tasks were successfully announced. The number of tasks within each group is roughly similar every year (except for MT), as is the distribution of tasks across editions. According to Fig.\ref{fig:tasks_in_years}, we observe a decreasing number of SEM tasks: 5 on average in 2012-2017, and only 2 in 2018-2019. Moreover, there were no machine translation tasks in the last 2 years, and a low number of MT tasks in general (only 4 tasks in 8 years). Although SA has relatively limited task complexity when compared to SEM or IE, which reflect a higher variety of task types and an abundance of more specific interpretation challenges, the number of SA tasks each year is high (4, 3, 3 and 4 in the years 2016-2019). It is worth mentioning that there are another 6 SA tasks in the forthcoming SemEval 2020. The absence of some task types may be caused by the emergence of specialized workshops or conferences; e.g. the low number of MT tasks in SemEval is explained by the presence of a separate venue for MT, the Conference On Machine Translation \cite{barrault-EtAl:2019:WMT}, which attracts more participants than SemEval in this field. \section{Task popularity} As a measure of task popularity, we analysed how many teams participated in a given task. As the number of teams signed up to a task is usually much higher than the number submitting a system result, we consider only the latter. The number of teams increased significantly from 62 in 2012 to 646 in 2019, which shows not only a popularity boost for SemEval, but also an increase in the general interest in NLP. So far, a total of 1883 teams have participated in this period. In Fig.\ref{fig:teams}, we observe a gradual increase in SemEval popularity, 30\% on average each year up to 2018, with a +129\% jump in 2019. 
This is associated mainly with a dramatic increase of interest in SA: 542 teams (84\% of the total) in 2019. However, at the same time, the number of teams in non-SA tasks decreased from 132 in 2017, to 115 in 2018 and 104 in 2019. The most popular task groups over the years are SA and SEM, which together gather more than 75\% of teams on average each year. The third most popular is IE, in which a total of 235 teams have participated in SemEval since 2012 (12\% of the total). By contrast, we observe a relatively low interest in QA and OT tasks: only 41 teams participated in the last 3 years (3\% of a total of 1148 in 2017-2019). This is especially true of OT, which concentrates novel tasks, in many cases with novel formats. In the last 2 years, SA shows a major increase in popularity (76\% of all teams, compared to 40\% in 2013-2017). At the same time, in tasks such as 2019-10, 2018-4 and 2018-6, which are mathematical question answering, entity linking on multiparty dialogues and parsing time normalization, respectively, only 3, 4 and 1 teams submitted results. This divergence may be associated with the emergence of easily applicable ML systems and libraries, which better fit the standard classification tasks prevalent in SA (in contrast to OT, QA or IE). \section{The impact of SemEval papers} As a measure of the impact of SemEval outcomes in the NLP community, we analysed the numbers of citations per task description in Google Scholar. The task description paper was used as a proxy to analyse the task impact within the NLP community. Papers submitted by participating teams describing systems and methods were not included in this analysis. We considered the cumulative citations from 2012 to 2019 (Fig.\ref{fig:cum}), with an additional distinction on citations of task description papers published in a given year (Fig.\ref{fig:cit_year}). 
Citations within SemEval proceedings were treated separately, as we focused on the impact both outside (Fig.\ref{fig:cum}) and inside (Fig.\ref{fig:cum_int}) the SemEval community. In other words, citations found in Google Scholar are split into numbers of papers \textit{outside} and \textit{inside} the SemEval proceedings. SA and SEM have the highest impact, being the most cited tasks over the years both inside and outside the SemEval community, which can be attributed to their high popularity. Considering the external impact, in 2019 SA and SEM anthologies contributed 2847 (41\%) and 2426 (35\%) citations respectively. IE has 985 citations (14\%) and QA contributed 148 citations (2\%). The OT group, which consists of less canonical tasks, accumulated 468 citations (7\%). The impact of MT papers is noticeably lower: 84 (1\%). In terms of citations within the SemEval community (in all SemEval 2012-2019 proceedings), we observe a similar pattern: 41\% and 37\% of 2019 citations come from SA and SEM (357 and 322), and for the remaining task groups the proportions are almost identical to those of citations outside the community (Chi-sq. \textit{p-value}=0.06). The number of citations outside the community is 8 times higher than inside it, which demonstrates the scientific impact and coverage of SemEval, and its beneficial effect on the overall NLP community. A total of 6958 citations from 2019 are depicted in Fig.\ref{fig:cit_year}, broken down by the year in which the task was published (e.g. tasks from 2016 are cited 1682 times (23\%)). Similarly, a total of 876 citations in the SemEval proceedings are presented in Fig.\ref{fig:cit_int} (e.g. anthologies published in 2015 are cited 163 times in all SemEval proceedings so far). SA tasks from 2016, SEM from 2014 and IE from 2013 have the highest impact within their groups (40\%, 28\% and 42\% respectively). One could expect higher numbers of citations for older papers; however, we do not observe this pattern. 
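The inside/outside proportion comparison can be reproduced in spirit with a standard Pearson chi-square computation. The sketch below pools all non-SA/SEM groups into one column, since only the SA, SEM and total 2019 counts are directly recoverable from the text, so the resulting statistic (2 degrees of freedom here) is only indicative and need not reproduce the reported p-value of 0.06:

```python
# 2x3 contingency table: rows = (outside SemEval, inside SemEval),
# columns = (SA, SEM, all remaining groups pooled). Remainders are
# derived from the stated 2019 totals of 6958 and 876 citations.
observed = [
    [2847, 2426, 6958 - 2847 - 2426],  # citations outside SemEval
    [357, 322, 876 - 357 - 322],       # citations inside SemEval
]

row_tot = [sum(r) for r in observed]
col_tot = [sum(r[j] for r in observed) for j in range(3)]
grand = sum(row_tot)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# with expected counts E = row total * column total / grand total.
chi2 = sum(
    (observed[i][j] - row_tot[i] * col_tot[j] / grand) ** 2
    / (row_tot[i] * col_tot[j] / grand)
    for i in range(2)
    for j in range(3)
)
print(round(chi2, 2))
```

With the full per-group table, a library routine such as `scipy.stats.chi2_contingency` would also return the associated p-value directly.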
\section{Languages in tasks} We analysed SemEval in terms of the languages used in the tasks (Fig.\ref{fig:languages}). We can distinguish 3 clusters: English-only (except for 3 tasks entirely in Chinese); multi-lingual, which define identical subtasks for several languages; and cross-lingual (targeting the use of semantic relations across languages). Of the 96 tasks in total, 30 investigated more than one language (multi-lingual and cross-lingual tasks) and 63 used only English. The five most popular languages, excluding English, are: Spanish (16), French (10), Italian (10), Arabic (8) and German (8). Although Chinese is the first language by number of speakers, only 4 tasks were organised for Chinese. Most multi-lingual or cross-lingual tasks are related to SA (5 in 2016-2018) or SEM (15 in 2012-2019), and obviously to MT (3 in 2012-2014). There were 3 OT tasks, only one QA task, and no IE tasks. Task 11 in 2017, concerning program synthesis and aiming to translate commands in English into (program) code, attracted only one team. In 2018 and 2019, interest in languages other than English was lower compared to previous years: they were proposed only 5 and 3 times, respectively, whereas in 2016 and 2017 they occurred 10 and 14 times, respectively. \section{Input and Output Analysis} In order to better understand the evolution of the semantic complexity of the tasks, we analysed them in terms of the types used to represent input and output data in all subtasks. Based on their descriptions, we devised a list of 25 different abstract types, then assigned each subtask the most appropriate Input and Output Types. 
\subsection{Types and Clusters} Taking into consideration both their complexity and purpose, we split the type list into 5 clusters: \textit{cluster 1}: document, text, paragraph, sentence, phrase, word, number; \textit{cluster 2}: score, score real value, score whole value, class label, probability distribution; \textit{cluster 3}: entity, attribute, topic, tree, Directed Acyclic Graph (DAG); \textit{cluster 4}: question, answer, query; \textit{cluster 5}: Knowledge Base (KB), program, time interval, timeline, semantic graph, syntactic labeled sentence. \begin{figure*} \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{Languages.png} \caption[]{{\small Languages used in 96 SemEval tasks from 2012 to 2019}} \label{fig:languages} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{Citations.png} \caption[]{{\small Number of citations of tasks published in a given year, except for citations in SemEval proceedings}} \label{fig:cit_year} \end{subfigure} \vfill \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{Citations_internal.png} \caption[]{{\small Number of citations of tasks from a given year in SemEval proceedings}} \label{fig:cit_int} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{architectures.png} \caption[]{{\small Models used in SA tasks from 2012 to 2019 at SemEval}} \label{fig:models} \end{subfigure} \end{subfigure} \caption[ AAA ] {\small } \label{fig:languages_cit_models} \end{figure*} \subsection{Input Types} As expected, types from \textit{cluster 1} (sequential tokens) make up 76\% of overall input types used in all tasks (depicted in Appendix A, Fig.\ref{fig:input}). The most popular input type is paragraph, which in about 60\% of cases represents a tweet. The remaining 24\% is split across \textit{clusters 2, 3, 4 and 5}. 
A subtle divergence towards the right-hand side can be noticed, starting in 2015, driven mostly by tasks from the SA and IE task groups. The most dominant Input Types from each cluster are paragraph, class label, entity, question and KB. \subsection{Output Types} As shown in Fig.\ref{fig:output}, data types from \textit{clusters 2 and 3} are the majority in this case, accounting for 68\% of used representations. Class labels are repeatedly employed, especially by SA tasks. \textit{Cluster 1} types are constantly used across the years, fully dependent on the task types given in a certain year, with 78\% of them coming from SEM, IE and OT. Rarely used are types from \textit{clusters 4 and 5}, accounting for just 10\% of the total, half of which occur in SEM tasks during 2016 and 2017, in complex tasks such as Community Question Answering and Chinese Semantic Dependency Parsing. We also found a possible relation between output type and popularity. In 2012-2017, tasks whose outputs were in \textit{cluster 4 or 5} attracted 8.3 teams per task on average, while those in \textit{clusters 1-3} attracted 13.9 teams/task. However, despite the major increase in SemEval popularity, in 2018-2019 the former attracted only 7 teams/task, and the latter 43.5 teams/task. The group with the most type variety is SEM, covering types across all clusters. On the other side of the spectrum, SA has the least variety, despite being the most popular task group. The most dominant Output Types from each cluster are paragraph, class label, entity, answer and semantic graph. 
\begin{figure*} \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[height=0.9\textheight]{Output.png} \caption[]{{\small Output Types}} \label{fig:output} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[height=0.9\textheight]{Evaluation_Metrics.png} \caption[]{{\small Evaluation Metrics}} \label{fig:eval_metrics} \end{subfigure} \caption[ AAA ] {\small 96 SemEval tasks from 2012-2019} \label{fig:overall_plots_output_and_eval} \end{figure*} \section{Evaluation Metrics} We counted a total of 29 different evaluation metrics used in SemEval (Fig.\ref{fig:eval_metrics}). At a subtask level, the most frequent metric is F1-score, with 105 usages, followed by recall and precision, with 51 and 49 usages respectively, and accuracy, with 26 usages. F1, recall and precision are frequently used jointly, the last two playing the role of supporting break-down metrics for F1 in 95\% of cases. This combination is very popular, especially for IE tasks, with almost half of its use coming from this task group. The top 5 evaluation metrics make up 84\% of the total number of metrics used in all years, with the last 12 (almost half) being used only once. In 89\% of the cases when rare evaluation metrics (from Kendall's T to the right) are used, they occur in SA and SEM tasks, e.g. the Jaccard index in Affect in Tweets (2018) or Smatch in Meaning Representation Parsing (2016). Furthermore, 67\% of the least used evaluation metrics (used 3 times or fewer) appear in 2015-2017, the same period in which tasks experimented the most with input and output types. \subsection{Evaluation Metrics against Output Types} F1, recall and precision (depicted in Appendix A, Fig.\ref{fig:heatmap}) are mostly used for output types such as class label, paragraph and entity (each of which is the top output type of its cluster). 
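Given how thoroughly F1, precision and recall dominate the metric landscape, it is worth recalling their relationship: F1 is the harmonic mean of precision and recall, which is why the latter two serve naturally as its break-down metrics. A minimal sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 as the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean rewards balance: a system with P = R = 0.5 beats
# one with P = 0.9, R = 0.1 even though both average to 0.5.
print(f1_score(0.5, 0.5))            # 0.5
print(round(f1_score(0.9, 0.1), 2))  # 0.18
```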
Meanwhile, for output types represented by a score, the most used evaluation metrics are Pearson Correlation, Kendall's T, cosine similarity and Spearman Correlation. MAP, the 6th most used evaluation metric, is mostly used for ranked questions/answers in recurring tasks such as Community Question Answering. Human judgment was only used twice, in Taxonomy Extraction Evaluation (2016) and Abstract Meaning Representation Parsing and Generation (2017). For further reference, see Appendix A. \section{Zooming in into Sentiment Analysis} \subsection{System architectures} The systematic analysis of the prevalent methods and architectures imposed particular challenges with regard to the data extraction process due to the intrinsic complexity of the tasks (many systems include the composition of pre-processing techniques, rules, hand-crafted features and combinations of algorithms). Additionally, for the majority of task description papers, there is no systematic comparison between systems within a task, and consequently within a group or across years. Due to the consistent presence of SA across all years, we present an overview of the evolution of system architectures used in SA from 2013 to 2019 (Fig.\ref{fig:models}). In this analysis we focus on the best performing architectures. More than one best model in a task signifies either best models in subtasks or that the final system was an ensemble of several algorithms. \textit{Regression-based} models encompass linear, logistic, or Gaussian regression, and \textit{Other} includes all rule-based or heavily hand-crafted models. We observe a drift in the popularity of architectures from ML algorithms (2013-2016) to deep learning (DL) models (2017-2019). Despite the major adoption of DL models, traditional ML algorithms are consistently in use, both as separate models and in ensembles with DL. This is also true for other types of tasks. In many task description papers from 2018-2019, one can find ML-based systems among the top-performing participants. 
SVM-based models are still popular and in some tasks outperform DL (2018-2, 2019-5). In the analysis of system architectures one needs to take into account that the best system depends not only on the core algorithm but also on team expertise, supporting feature sets and language resources. \begin{figure}[b!] \centering \includegraphics[width=0.95\textwidth]{input_output_timeline.jpg} \caption{Timeline of Input Types (upper row) and Output Types (lower row) in Sentiment Analysis tasks at SemEval 2013-2019.} \label{fig:timeline} \end{figure} \subsection{Representations} The outputs of the SA-related tasks provide an account of the evolution of sentiment and emotion representation in this community from 2013 until 2019 (Fig.\ref{fig:timeline}). At a discrete level, the maximum number of class labels representing sentiment intensity grew from 3 in 2013 to 7 in 2019. At a continuous score level, real-valued scores associated with sentiment were first used in 2015; in 2016 the score switched to sentiment intensity; in 2017 it was used to determine the intensity of an emotion component out of 11 emotion types (rather than a single one, or the generic emotional intensity of a sentence). In terms of the targeted subject, the tasks grew more granular over time: paragraph/word (2013), aspect terms (2014), sentence topic (2015), person (2016). Additionally, discourse evolved from simpler opinionated text in the direction of figurative language, for example: handling irony and metaphor in SA (2015), phrase comparison/ranking in terms of sense of humor (2017), irony detection (2018) and contextual emphasis (2019). \section{Discussion: What is SemEval evaluating?} The results of the analysis substantiate the following core claims, which summarise some of the trends identified in this paper: \begin{itemize} \item There is evidence of significant impact of SemEval in the overall NLP community. 
\item SemEval contributed to the construction of a large and diverse set of challenges with regard to semantic representation, supporting resources and evaluation methodologies and metrics. \item SemEval is becoming heavily biased towards solving classification/regression problems. We observe a major interest in tasks where the expected output is a binary or multi-class label or a continuous real-valued score. \item Sentiment Analysis tasks account for a disproportionate share of the community's attention. \item There are two parallel narratives running through SemEval: low entry barrier and state-of-the-art defining. SemEval contains a rich corpus of unaddressed and complex NLP tasks, which are eclipsed by the easier, low-entry-barrier tasks. This points to the double function of SemEval, which serves both a pedagogical purpose, as an entry point for early career researchers to engage with the NLP community, and as a state-of-the-art forum for pushing the boundaries of natural language interpretation. With the popularity of NLP applications and Deep Learning, the former function is eclipsing the latter. \item There is a significant trend towards decreasing variety in outputs and evaluation metrics in recent years. While previous editions focused more on novel and exploratory tasks, this variety has significantly decreased recently, probably due to the emergence of out-of-the-box DL models. Consequently, participants focus on easier tasks, which in part dissipates the community's potential to address long-term challenges. \item Despite the recent interest in neural-based architectures, there is clear evidence of the longevity and lasting impact of older NLP methods. \end{itemize} \section{Recommendations} We believe that this paper can serve as a guideline for the selection and organisation of future SemEval tasks. 
Based on the analyses performed in this paper, our main recommendations are the following: \begin{itemize} \item Prioritise tasks which have a clear argument on semantic and methodological challenges and novelty. \item Differentiate challenges which have a competition/pedagogical purpose from research tasks. \item Support the systematic capture of task metadata and submission data in a structured manner. This will allow for an efficient comparison between SemEval tasks and for deriving insights for future SemEval editions. \end{itemize} \section{Conclusions} This paper reported a systematic quantitative analysis of SemEval, one of the primary venues for the empirical evaluation of NLP systems. The analysis, which provides a detailed breakdown of 96 tasks in the period 2012-2019, offers quantitative evidence that: (i) SemEval has a significant impact on the overall NLP community, (ii) there is a recent drift towards a bias in the direction of Deep Learning classification methods which is eclipsing the research function of SemEval, and (iii) there is longevity and impact of older NLP methods in comparison to Deep Learning methods. \bibliographystyle{coling}
\section{Introduction \label{section intro}} In biostatistics, many models for survival and reliability analysis are two-state stochastic processes which lead to a particular event such as death, or
the outcome of a particular drug treatment. However, applying a multi-state stochastic process allows the modeller to provide a richer and more accurate model by adding more details. These details make it possible to capture alternative paths to the event of interest, to specify all the intermediate events, and to understand partial failure modes in a progressive disease. In our context, a multi-state stochastic process is a process $X(t)$ for $t \geq 0$, where $X(t)$ can take a finite number of values $1,2,\ldots,\ell$. This process can be considered as a family of random variables $X(t)$ indexed by $t$. The quantities of interest are often the probability of being in a state at a given time and the distribution of first passage times (the time until the process reaches a given state for the first time from a particular starting state). In some applications of multi-state stochastic processes, the dependence on the history of the process is negligible. Therefore, for the sake of mathematical tractability, assuming the Markov property (where future transitions between states depend only upon the current state) is convenient. For instance, continuous time Markov chains, which we refer to here as {\em Markov processes}, are widely used in modelling the movements of patients between units of a hospital or between stages of a disease, see for instance \cite{broyles2010statistical, taylor1998using}, or in the prediction of the incidence rate of diseases, see \cite{aalen1997markov}. However, in certain cases, the Markov assumption is unrealistic. For instance, in the study of human sleep stages, the sleep stages usually do not follow an exponential distribution (constant hazard rate), but can rather have other forms such as a Weibull distribution, see \cite{wang2013computational}. Further, some aspects of systems' behavior cannot be captured by Markov processes. 
For instance, the risk of chronic diseases such as AIDS essentially depends on the time since infection, see \cite{joly1999penalized}. For these cases, applying the class of \textit{semi-Markov processes}\index{semi-Markov process} (SMP), as an extension of Markov processes, is fruitful. Here, future transition probabilities depend on the sojourn time (the time spent in the current state), and the clock of each sojourn time is reset to zero after each transition into a new state. SMPs have a variety of applications in healthcare, for instance, predicting disease progression \cite{goshu2013modelling}, health care manpower supply prediction \cite{trivedi1987semi}, and the recovery progress of patients with a specific type of disease \cite{polesel1986application}. For biomedical applications, especially those concerned with characterizing an individual's progression through various stages of a disease, a three-state semi-Markov process known as the \textit{illness-death model} is very popular (see for instance \cite{joly1999penalized, boucquemont2014should}). The illness-death model may also be applied for modelling the trajectory of patients in intensive care units (ICUs) of hospitals (see \cite{liquet2012investigating, coeurjolly2012attributable}). Here, our main focus is on the statistical methodology of semi-Markov processes and more precisely on parametric models of SMPs. We compare and contrast two approaches for defining SMPs, which we denote via {\bf I - sojourn times} and {\bf II - intensity transition functions}. Approach I is based on the specification of the {\em sojourn time distributions}, together with a transition probability matrix of a discrete time Markov chain, which we call the {\em embedded chain}. Approach II is based on {\em intensity transition functions} which, when multiplied by an infinitesimal quantity, specify the instantaneous probability of transition between states. 
Note that in the literature, these are sometimes also called hazard rate functions of the SMP; however, they should not be confused with hazard functions of probability distributions (such as, for example, the sojourn time distributions). While from a probabilistic perspective both Approach I and Approach II are equivalent ways of describing an SMP, from a statistical perspective there are differences. In this paper, we highlight that when it comes to parameter estimation, Approach II has significant advantages over Approach I. Intrinsically, this is because Approach II can be expressed using fewer parameters. Further, we can show that the likelihood function of Approach II can be written as the product of likelihoods of two-state models. This is very helpful for reducing the computational effort of likelihood-based parameter estimation. Nevertheless, depending on the application at hand, using either approach may be useful, and in certain cases researchers have chosen to use Approach~I because it focuses directly on the underlying distributions. We highlight the associated tradeoffs in the paper. The remainder of this paper is structured as follows. In Section~\ref{section:SMP}, we introduce semi-Markov processes (SMP) via both Approach I and Approach II. We also present the probabilistic relationships between the approaches. We continue with Section~\ref{section:inference}, focusing on inference and specifying the likelihood function for both approaches. In Section~\ref{sec:application}, we illustrate inference on two available datasets and present how to implement both approaches (for both datasets) using several contemporary R software packages. For both datasets, we compare results of fully parametric models based on both approaches and highlight the implications of each modelling choice. Section~\ref{sec:conclusion} presents some concluding remarks. 
All the numerical results (tables and figures) from the paper are freely reproducible via R code in a comprehensive detailed vignette available in \cite{Liq_github}. \section{Semi-Markov Processes} \label{section:SMP} A multi-state model is a continuous time stochastic process with values in a discrete set that is often applied in longitudinal medical studies, where the patients may experience several events and their related information is collected over time. The complexity of a model greatly depends on the number of states and also on the possible transitions. Figure~\ref{Fig:Multi} illustrates a general multi-state model. See \cite{andersen2012statistical} as a classical reference for multi-state models. \begin{figure}[h] \center \includegraphics[scale=0.6]{images/Multi_state.pdf} \caption{{{\small An illustration of a multi-state model. }}} \label{Fig:Multi} \end{figure} A specific class of examples of multi-state models is the class of Markov processes, where the state evolution jumps between levels and the process adheres to the Markov property. However, in many real-world applications, we need a stochastic process that exhibits dependence between jump times. For instance, in biostatistics, where the trajectory of patients in a hospital is considered, using a Markov process as a multi-state model imposes a stringent limitation on the sojourn time distribution in each state. This is because a key consequence of using Markov processes is the fact that state sojourn times follow the exponential distribution, which is ``non-ageing'' with a constant hazard rate. In contrast, in semi-Markov processes this assumption can be relaxed. Hence, with Markov processes one often lacks the desired degrees of freedom for modelling the dependence between jump times. 
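The constant-hazard (``non-ageing'') property can be checked numerically: for an exponential sojourn time, $\mathbb{P}(T > s + t \mid T > s) = \mathbb{P}(T > t)$ regardless of the age $s$, whereas a Weibull sojourn time with shape parameter different from $1$ ages. A minimal sketch, with rates, shapes and scales chosen arbitrarily for illustration:

```python
import math

def surv_exp(t, rate):
    """Survival function of an exponential sojourn time."""
    return math.exp(-rate * t)

def surv_weibull(t, shape, scale):
    """Survival function of a Weibull sojourn time."""
    return math.exp(-((t / scale) ** shape))

s, t = 2.0, 1.0

# Exponential: conditional survival equals unconditional survival.
cond_exp = surv_exp(s + t, 0.5) / surv_exp(s, 0.5)
assert abs(cond_exp - surv_exp(t, 0.5)) < 1e-12

# Weibull with shape 2 (increasing hazard): the process "ages", so
# survival past a further time t is lower given it already lasted s.
cond_wei = surv_weibull(s + t, 2.0, 3.0) / surv_weibull(s, 2.0, 3.0)
print(cond_wei < surv_weibull(t, 2.0, 3.0))  # True
```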
Extending the modelling from Markov processes to the class of SMPs removes the restriction of memoryless (exponential) state sojourn times and at the same time, preserves the usefulness of treating the data as jump processes in continuous time. In fact, for semi-Markov processes we consider a relaxation of the Markov property for sojourn times and only the embedded chain of states is required to follow the Markov property. For this reason, SMPs are applied for modelling a variety of phenomena in different areas such as economics, reliability, and health care, see \cite{janssen2013semi}. To define an SMP, consider a homogeneous Markov chain $\{J_n\}_{n \ge 0}$ on states $\{1,2,\ldots, \ell\}$ where the probability of $n$-th ($n\geq 1$) jump from state $i$ to state $j$ for $i \neq j$ is $p_{ij}$. That is, \begin{equation} \label{eq:pij} p_{ij}=\mathbb{P}(J_{n}=j ~|~ J_{n-1}=i). \end{equation} Based on the directed graph associated with these probabilities (where arc $i \to j$ exists only if $p_{ij} > 0$) states can be classified as either transient or recurrent. {A state $i$ is {\em absorbing} if $p_{ij} = 0$ for all $j \neq i$. Note that under this definition \[ \sum_{j \neq i} p_{ij} = \begin{cases} 1 & \text{for non-absorbing}~i, \\ 0 & \text{for absorbing}~i. \end{cases} \] } Denote the increasing sequence of jump times by $T_0=0< T_1< T_2 < T_3 <\ldots$ and {define} $N(t) = \max \,\{n : T_n \leq t\}$ for $t\geq 0$. This is a count of the number of transitions up to time $t$. 
The stochastic process $X(t):= J_{N(t)}$ is said to be a {\em semi-Markov process (SMP)} if whenever the process enters state $i$, the next state is $j$ with probability $p_{ij}$ and, given that the next state to be entered is $j$, the time until the transition from $i$ to $j$ is a random variable with cumulative distribution function $F_{ij}(t)$: \begin{equation} \label{eq:Fij} F_{ij}(t)=\mathbb{P}(\tau_n \leq t ~|~ J_{n-1}=i, J_{n}=j), \quad t\geq 0, \end{equation} where $\tau_n= T_n-T_{n-1}$, see Chapter 4 of \cite{ross1996stochastic}. Hence in general, SMPs are not Markov processes as they do not possess the Markov property. Further, a semi-Markov process allows arbitrarily distributed sojourn times in any state but retains the Markov property for the embedded (discrete time) Markov chain, $\{J_n\}_{n \ge 0}$. In many applications of SMPs in healthcare, a very popular three state semi-Markov process known as the \textit{illness-death model} \index{illness-death model} is applied, see for instance \cite{polesel1986application}. \begin{figure}[h] \center \includegraphics[scale=0.6]{images/IllnessDeathModel.pdf} \caption{\label{Fig:IllD}{{\small The Illness-Death Model. }}} \end{figure} \paragraph{Illness-death model} The illness-death model is the most common model in epidemiology (see Figure~\ref{Fig:IllD}), where it is often applied in studying chronic diseases. In this model, we have three states ``Health'', ``Illness'' and ``Death'', denoted by 1, 2 and 3, respectively. There are three kinds of transitions: $1 \to 2 $, $2 \to 3 $ and $1 \to 3$, and state $3$ is absorbing. Since this model is often used to describe severe illnesses, there is no possibility of recovery, and therefore the model is irreversible\footnote{To avoid confusion, note that this term is also used for the different concept of a ``reversible'' Markov chain appearing in a different setting.}.
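To make the construction concrete, an illness-death SMP can be simulated directly from the Approach I ingredients just described: the embedded-chain probabilities $p_{ij}$ and the sojourn-time distributions $F_{ij}$. The following minimal Python sketch is not part of the paper; the states, probabilities and exponential sojourn means are made up for illustration.

```python
import random

# Hypothetical illness-death SMP: states 1=Health, 2=Illness, 3=Death (absorbing).
# Approach I ingredients: embedded-chain probabilities p_ij and sojourn-time
# distributions F_ij (here exponential, with made-up mean sojourn times).
P = {1: {2: 0.7, 3: 0.3}, 2: {3: 1.0}, 3: {}}
MEAN_SOJOURN = {(1, 2): 5.0, (1, 3): 8.0, (2, 3): 2.0}

def simulate_smp(rng, j0=1):
    """Simulate the sequence (J_n, T_n) until absorption."""
    path, t, state = [(j0, 0.0)], 0.0, j0
    while P[state]:                                            # stop once absorbing
        targets = list(P[state])
        nxt = rng.choices(targets, weights=[P[state][j] for j in targets])[0]
        t += rng.expovariate(1.0 / MEAN_SOJOURN[(state, nxt)])  # sojourn given i -> j
        state = nxt
        path.append((state, t))
    return path

path = simulate_smp(random.Random(1))
```

Swapping \texttt{rng.expovariate} for any other non-negative sampler yields general sojourn-time distributions, which is exactly the extra freedom the SMP provides over a Markov process.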
In cases where treatments may yield remission, it is more appropriate to construct a model with an additional state ``Remission'' rather than to consider that there is a possibility of moving back to the ``Healthy'' state. Here, we consider the general multi-state model with all possible transitions as illustrated in Figure~\ref{Fig:Multi}, which includes both the illness-death model and its reversible version (see Figure~\ref{Fig:33}). We now arrive at our key point dealing with SMPs. Here we contrast the two approaches, which we denote via {\bf I - sojourn times} and {\bf II - intensity transition functions}. Approach I was already used to define SMPs above. We spell out further details of Approach I and then continue to introduce Approach II. \begin{figure}[h] \center \includegraphics[scale=0.6]{images/IllnessDeathModel_G.pdf} \caption{\label{Fig:33}{{\small The Reversible Illness-Death Model. }}} \end{figure} \subsection{Approach I: Sojourn Times} As defined above, an SMP, $X(t)$, can be constructed via the sequence $\{(J_n, T_n)\}_{n\ge 0}$ of states and jump times, respectively. The underlying parameters of this construction involve the transition probabilities of the embedded chain, $p_{ij}$, presented in \eqref{eq:pij}, as well as the distributions of the sojourn times for each transition $i \to j$, presented in \eqref{eq:Fij}. The transition probabilities of the embedded chain are often organized in a matrix, $P = [p_{ij}]$ (where we set $p_{ii} = 0$). Note that $P$ restricted to the non-absorbing states is a stochastic matrix. Further, assuming the sojourn time distributions are continuous, they are often represented in different forms including the probability density function, the survival function or the hazard rate function. We now present the details.
The \textit{probability density function} of the sojourn time is \begin{equation} \label{eq:fijI} f_{ij}(t) = \lim_{\bigtriangleup t \rightarrow 0} \frac{1}{\bigtriangleup t}\mathbb{P}(\tau_n\in (t, t+\bigtriangleup t) ~|~J_{n-1}=i, J_{n}=j). \end{equation} % The corresponding \textit{survival function} is \begin{equation} \label{eq:SijI} S_{ij}(t) = \mathbb{P}(\tau_n>t ~|~ J_{n-1}=i, J_{n}=j)=1-F_{ij}(t). \end{equation} (Note that $S_{ij}(t)$ is a decreasing function with $S_{ij}(0)=1$ and $\lim_{t \rightarrow +\infty} S_{ij}(t)=0$.) The \textit{hazard function}\index{hazard rate function}, which is often thought of as the probability that a jump occurs in a specified interval $(t, t+\bigtriangleup t)$ given no jump before time $t$, is \begin{equation} \label{eq:alphaijI} {\alpha}_{ij}(t) = \lim_{\bigtriangleup t \rightarrow 0} \frac{1}{\bigtriangleup t}\mathbb{P}\big(\tau_n \in (t, t+\bigtriangleup t) ~|~ J_{n-1}=i, J_n=j, \tau_n>t\big). \end{equation} Here, note that by the definition of conditional probability we have \begin{equation}\label{eq:m} \alpha_{ij}(t)=\frac{f_{ij}(t)}{S_{ij}(t)}, \qquad \mbox{and} \qquad f_{ij}(t)= \alpha_{ij}(t) e^{-\int_0^t \alpha_{ij}(u) du}. \end{equation} It is also useful to define the probability of staying in a current state $i$ for at least $t$ time units. We call this the survival function of the waiting/holding time in state $i$, and denote it via \begin{equation} \label{eq:S-i} S_{i}(t)= \mathbb{P}(\tau_n>t ~|~ J_{n-1}=i) = \sum_{j \neq i} p_{ij} S_{ij}(t). \end{equation} \subsection{Approach II: Intensity Transition Functions} The first approach required specification of the parameters using two types of objects: transition probabilities of the embedded Markov chain, and the distributions of sojourn times given a transition $i \to j$. The second approach which we present now is more succinct in that it only requires one type of object: {\em intensity transition functions}.
These functions are defined via \begin{equation} \label{eq:alphaTildeij} \tilde{\alpha}_{ij}(t) = \lim_{\bigtriangleup t \rightarrow 0} \frac{1}{\bigtriangleup t}\mathbb{P}( \tau_n \in (t, t+\bigtriangleup t), J_{n}=j ~|~ J_{n-1}=i, \tau_n>t), \end{equation} and are similar in nature to hazard rates. However, they should not be treated as hazard rate functions of a probability distribution. They rather indicate the instantaneous probability of making a transition from state $i$ to state $j$ after spending $t$ time units in state $i$ since the last transition. However, summing over all target states $j$, we do obtain a hazard rate of a probability distribution, which we denote via $ \tilde{\alpha}_i(t) = \sum_{j \neq i} \tilde{\alpha}_{ij}(t). $ This is the hazard rate of the waiting/holding time in state $i$. We can use this approach to obtain an alternative expression for $S_i(t)$ of \eqref{eq:S-i}: \begin{equation} \label{eq:SiFromAlphaTilde} S_i(t) = e^{-\int_0^t \tilde{\alpha}_i(u) \, du} = e^{-\int_0^t \sum_{j \neq i} \tilde{\alpha}_{ij}(u) \, du}. \end{equation} More formally, using Approach II, the meaning of the intensity transition functions is \begin{equation} \label{eq:uglyDef} \lim_{\Delta t \to 0} \frac{\mathbb{P}\big(X(t+u + \Delta t) = j ~|~ X(t) =i, X(t^-) \neq i, \tau_{N(t)+1} > u\big)}{\Delta t} = \tilde{\alpha}_{ij}(u), \end{equation} where, using the notation defined previously, $\tau_{N(t)+1}$ is the time elapsed since the last transition, measured at time $t$. The conditional probability in \eqref{eq:uglyDef} is conditional on the fact that the last transition was at time $t$ into state $i$ and up to time $t+u$ there have not been any further transitions. Given this condition, it indicates the instantaneous probability of: (i) making a transition at time $t+u$, and (ii) making the transition into state $j$.
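The competing-risks reading of the intensity transition functions can be checked numerically. In the following Python sketch (illustrative only; the two exit rates are made up), constant $\tilde{\alpha}_{ij}$ are realized as independent exponential clocks, one per exit, and the empirical holding-time survival is compared with \eqref{eq:SiFromAlphaTilde}.

```python
import math
import random

# One state i with two exits and constant (made-up) intensity transition functions:
q = {"j1": 0.3, "j2": 0.7}                 # alpha_tilde_{i j}(t) = q[j] for all t

def S_i(t):
    # eq. (SiFromAlphaTilde) specialized to constant intensities
    return math.exp(-sum(q.values()) * t)

# Competing-risks view: each exit fires after an independent exponential time with
# rate q[j]; the holding time in state i is the minimum of the clocks.
rng = random.Random(7)
n = 200_000
holding = [min(rng.expovariate(q[j]) for j in q) for _ in range(n)]

empirical_surv = sum(h > 1.0 for h in holding) / n   # estimate of P(holding time > 1)
mean_holding = sum(holding) / n                      # theoretical value: 1/(0.3 + 0.7)
```

The empirical survival at $t=1$ should be close to $S_i(1) = e^{-1}$, and the mean holding time close to $1/(0.3+0.7) = 1$.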
It can be shown that specification of the intensity transition functions $\tilde{\alpha}_{ij}(\cdot)$ completely specifies the probability law of an SMP. See \cite{breuer2005introduction} for a formal description, where one can also use the notation ${\cal H}(t^-)$ to define the history of the process just before time $t$, i.e. up to $t^-$. This can formally be defined as the sigma algebra in a filtration associated with the stochastic process. Related notation often used in the literature is the (conditional on history) transition probability to be in state $j$ at time $t+u$, after being in state $i$ at time $t$. This is often denoted by $P_{ij}(t, t+u)=\mathbb{P}\big(X(t+u) = j ~|~ X(t) =i, {\cal H}(t^-)\big)$. We note that Approach~II can be viewed as a collection of competing risk models where, upon arriving at a new state, a new set of cause-specific hazard rates is defined. See for example \cite{andersen2002competing}, where a competing risk model is presented as a multi-state model and the cause-specific hazard rates terminology is used. \subsection{Relations Between the Two Approaches} \label{sec:relations} We now wish to show how, for the same SMP, one can use either the parameters of Approach~I or the parameters of Approach~II and interchange between them. A key relationship is the following: \begin{equation} \label{eq:relation-alpha-alphatilde} \tilde{\alpha}_{ij}(t)= \frac{p_{ij}S_{ij}(t)}{S_{i}(t)} \alpha_{ij}(t) = p_{ij} \frac{f_{ij}(t)}{S_i(t)}. \end{equation} It is established using the conditional probabilities defined in \eqref{eq:pij}, \eqref{eq:SijI}, \eqref{eq:S-i}, and \eqref{eq:alphaijI}. This key relationship also yields an interpretation of the {\em cumulative incidence function} for the $i \to j$ transition, which we denote via $\text{CIF}_{ij}(\cdot)$.
This is a common measure used in the field of competing risks, see for example \cite{koller2012comp}, that determines the probability of transitioning into $j$ before or at time $t$. By rearranging and integrating both sides of \eqref{eq:relation-alpha-alphatilde} we obtain the following representation of the cumulative incidence function. \begin{equation} \label{eq:cumulativeIncidence} \text{CIF}_{ij}(t) = \int_0^t S_i(u) \tilde{\alpha}_{ij}(u) \, du = \int_0^t f_{ij}(u) p_{ij} \, du = p_{ij} F_{ij}(t). \end{equation} We can now convert between the parameterizations of both approaches as follows. {\bf Approach I $\to$ Approach II.} Given $\alpha_{ij}(\cdot)$ and $p_{ij}$, obtain $\tilde{\alpha}_{ij}(\cdot)$ as follows. \begin{equation} \label{eq:ItoII} \tilde{\alpha}_{ij}(t)= p_{ij} \frac{ e^{-\int_0^t \alpha_{ij}(u) \, du}}{\sum_{k \neq i} p_{ik} e^{-\int_0^t \alpha_{ik}(u) \, du}} \alpha_{ij}(t). \end{equation} This follows directly from \eqref{eq:relation-alpha-alphatilde}. {\bf Approach II $\to$ Approach I.} Given $\tilde{\alpha}_{ij}(\cdot)$ obtain $p_{ij}$ and $\alpha_{ij}(\cdot)$: First we have, \begin{equation} \label{eq:pijIItoI} p_{ij} = \int_0^\infty \tilde{\alpha}_{ij}(t) e^{-\int_0^t \sum_{k \neq i} \tilde{\alpha}_{ik}(u) \, du} \, dt. \end{equation} This can also be obtained from \eqref{eq:cumulativeIncidence} by taking $t \to \infty$. Once $p_{ij}$ values are at hand we again use \eqref{eq:relation-alpha-alphatilde} and isolate $f_{ij}(t)$ to obtain, \begin{equation} \label{eq:densTildeTrans} f_{ij}(t) = \tilde{\alpha}_{ij}(t) \, \frac{e^{-\int_0^t \sum_{k \neq i} \tilde{\alpha}_{ik}(u) \, du}}{p_{ij}}, \end{equation} from which we can obtain $\alpha_{ij}(t)$ using \eqref{eq:m} in the standard manner. \subsection{Examples} \label{sec:examples} We now present three examples illustrating the relationship between the two approaches. 
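Before the worked examples, the two conversion directions can be sanity-checked numerically on a toy state with two exits and exponential Approach~I hazards (all values made up): \eqref{eq:ItoII} produces $\tilde{\alpha}_{ij}(\cdot)$, and numerical integration of \eqref{eq:pijIItoI} recovers the $p_{ij}$.

```python
import math

# Toy Approach I parameters for one departing state (values made up):
p = {"j1": 0.4, "j2": 0.6}                 # embedded-chain probabilities
lam = {"j1": 0.5, "j2": 2.0}               # constant hazards alpha_ij(t) = lam[j]

def alpha_tilde(j, t):
    """Approach I -> Approach II via eq. (ItoII), for exponential sojourn times."""
    den = sum(p[k] * math.exp(-lam[k] * t) for k in p)
    return p[j] * math.exp(-lam[j] * t) * lam[j] / den

def recover_p(T=60.0, n=60_000):
    """Approach II -> Approach I via eq. (pijIItoI), on a grid: the inner integral
    of sum_k alpha_tilde_ik is accumulated with the trapezoid rule."""
    h = T / n
    cum, est = 0.0, {j: 0.0 for j in p}
    prev_tot = sum(alpha_tilde(k, 0.0) for k in p)
    for m in range(1, n + 1):
        t = m * h
        tot = sum(alpha_tilde(k, t) for k in p)
        cum += 0.5 * h * (prev_tot + tot)
        for j in p:
            est[j] += h * alpha_tilde(j, t) * math.exp(-cum)
        prev_tot = tot
    return est

p_back = recover_p()
```

With these unequal rates, \texttt{alpha\_tilde} is not constant in $t$, anticipating Example 2 below.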
\noindent \paragraph{Example 1: Continuous Time Markov Chains (CTMC)} A (finite state) CTMC is a multi-state stochastic process on states $\{1,\ldots,\ell\}$ that is time-homogeneous and satisfies the Markov property. That is, $ {\mathbb P}\big(X(s+t) = j ~|~ X(s) = i, {\cal H}(s^-)\big) = {\mathbb P}\big(X(t) = j ~|~ X(0) = i \big). $ For simplicity of notation we restrict attention to CTMCs without absorbing states. Such chains can be parameterized in several ways, two of which are reminiscent of Approach~I and Approach~II above: \begin{itemize} \item CTMC Approach I: Define an $\ell \times \ell$ stochastic transition probability matrix $P$ (non-negative entries and rows sum to $1$) with $0$ entries on the diagonal. Denote the entries via $P_{ij}$. This is often called the transition probability matrix of the embedded discrete time Markov chain. Then define a vector of rates of length $\ell$, $\boldsymbol{\lambda}$, where $\lambda_i^{-1}$ denotes the mean holding time of state $i$. Then from the theory of CTMCs, see for example \cite{breuer2005introduction}, the process evolves as follows. If at time $t$ the process is in state $i$, an exponential random variable with parameter $\lambda_i$ is generated to determine a holding duration. After that duration passes, a transition from state $i$ to state $j$ occurs with probability $P_{ij}$. The durations and the transition choice are independent. The process then repeats. It is now evident that such a description of a CTMC is a special case of {\bf Approach I} for SMPs, where \begin{equation} \label{eq:CTMC1asSMP} \alpha_{ij}(t) = \lambda_i \qquad \mbox{and} \qquad p_{ij} = P_{ij}. \end{equation} Observe that $\alpha_{ij}(t)$ is independent of $t$ and independent of the target state $j$. \item CTMC Approach II: Define an $\ell \times \ell$ generator matrix $Q$ (non-negative entries on the off-diagonal, negative entries on the diagonal, and rows sum to $0$). Denote the entries via $q_{ij}$.
Then treat the process as an SMP using {\bf Approach II} with \begin{equation} \label{eq:CTMC2asSMP} \tilde{\alpha}_{ij}(t) = q_{ij}. \end{equation} Observe that $\tilde{\alpha}_{ij}(t)$ is independent of $t$. \end{itemize} It is well known from the theory of CTMCs that the two parameterizations are equivalent. \noindent {\bf CTMC Approach I $\to$ CTMC Approach II}: \[ q_{ij} = \begin{cases} \lambda_i P_{ij} & i \neq j,\\ -\lambda_i & i=j, \end{cases} \] {\bf CTMC Approach II $\to$ CTMC Approach I}: \[ P_{ij} = \frac{q_{ij}}{ \sum_{k \neq i} q_{ik}}, \qquad \mbox{and} \qquad \lambda_i = \sum_{k \neq i} q_{ik} = -q_{ii}. \] We now verify that these transformations directly agree with the relationship between Approach I of the SMP and Approach II of the SMP. Using \eqref{eq:ItoII} and \eqref{eq:CTMC1asSMP} we have, \begin{equation} \label{eq:benoitGivingCourseInDenemark} \tilde{\alpha}_{ij}(t) = P_{ij} \frac{e^{-\int_0^t \lambda_i \, du}}{\sum_{k \neq i} P_{ik} e^{-\int_0^t \lambda_i \, du}} \lambda_i = P_{ij} \lambda_i = q_{ij}. \end{equation} Going in the other direction using \eqref{eq:pijIItoI}, and remembering that since $Q$ is a generator matrix, $q_{ii} = - \sum_{k \neq i} q_{ik} < 0$, we have, \[ p_{ij} = \int_0^\infty q_{ij} e^{-\int_0^t \sum_{k \neq i} q_{ik} \, du} \, dt = q_{ij} \int_0^\infty e^{q_{ii} t} \, dt = \frac{q_{ij}}{-q_{ii}}. \] Now using \eqref{eq:densTildeTrans} we have, \[ f_{ij}(t) = -q_{ij} \frac{e^{\int_0^t q_{ii} \, du }}{q_{ij} / q_{ii}} = -q_{ii} e^{q_{ii}t} = \lambda_i e^{-\lambda_i t}. \] This is an exponential density and hence it has a constant hazard rate $\alpha_{ij}(t) = \lambda_{i}$. \noindent \paragraph{Example 2: Exponential Sojourn Times} The example above illustrates that an SMP parameterized via Approach I where $\alpha_{ij}(t)$ is constant in time and also independent of $j$ yields constant (Approach II) transition intensity functions, $\tilde{\alpha}_{ij}(t)$.
That is, constant (Approach~I) hazard rates yield constant transition intensity functions, but under the condition that ${\alpha}_{ij}(t)$ is the same for all target states $j$. We now show that this condition is necessary for having constant transition intensity functions. To see this, use \eqref{eq:ItoII} with $\alpha_{ij}(t) = \lambda_{ij}$ where for some $j_1$ and $j_2$, $\lambda_{ij_1} \neq \lambda_{i j_2}$. Now, \begin{equation} \label{eq:nonConstantTildeAlphaSurrr} \tilde{\alpha}_{ij}(t)= p_{ij} \frac{ e^{-\lambda_{ij} t}}{\sum_{k \neq i} p_{ik} e^{- \lambda_{ik}t}} \lambda_{ij}. \end{equation} Since $\lambda_{ij_1} \neq \lambda_{i j_2}$ we are not able to cancel out the exponent as was done in \eqref{eq:benoitGivingCourseInDenemark}. Interestingly, no such condition is needed in the converse direction: the previous example showed that time-independent transition intensity functions always yield a CTMC and hence constant (Approach I) hazard rates. \noindent \paragraph{Example 3: Weibull Distributions} The Weibull continuous univariate distribution of a non-negative random variable is parameterized by a shape parameter $\eta>0$ and a scale parameter $\mu>0$. It has density, survival function, and hazard rate functions respectively given via, \[ f(t) = \frac{\eta}{\mu} \Big(\frac{t}{\mu} \Big)^{\eta-1} e^{-(t/\mu)^\eta}, \qquad S(t) = e^{-(t/\mu)^\eta}, \qquad \alpha(t) = \frac{\eta}{\mu} \Big(\frac{t}{\mu} \Big)^{\eta-1} . \] It is appealing in survival analysis and reliability analysis because if $\eta<1$ the hazard rate is monotonically decreasing; if $\eta=1$ the hazard rate is constant (an exponential distribution); and if $\eta>1$ the hazard rate is monotonically increasing. Say now that we are using Approach I and parameterize all of the sojourn time distributions as Weibull distributions, where for transition $i \to j$ we have respective scale and shape parameters $\mu_{ij}$ and $\eta_{ij}$.
Now, if we were to consider the Approach II representation, then from \eqref{eq:ItoII} the transition intensity functions are \begin{equation} \label{eq:benoitGivingCourseInDenemark1} \tilde{\alpha}_{ij}(t) = p_{ij} \frac{ e^{-(t/\mu_{ij})^{\eta_{ij}}}}{\sum_{k \neq i} p_{ik} e^{-(t/\mu_{ik})^{\eta_{ik}}}} \frac{{\eta_{ij}}}{\mu_{ij}} \Big(\frac{t}{\mu_{ij}} \Big)^{{\eta_{ij}}-1} = K_{ij}(t) t^{\eta_{ij} - 1}, \end{equation} where we use the notation $K_{ij}(t)$ to represent all of the components of $\tilde{\alpha}_{ij}(t)$ excluding $t^{\eta_{ij} - 1}$. The form of $K_{ij}(t)$ determines whether or not the Approach II representation is of a Weibull type: it is of a Weibull type if and only if $K_{ij}(t)$ is independent of $t$. We now see that if for a fixed $i$ and every $j_1$ and $j_2$ with $j_1 \neq j_2$ we have that \begin{equation} \label{eq:conditionForIandIIWeibullTogether} \eta_{ij_1} = \eta_{ij_2}, \qquad \mbox{and} \qquad \mu_{ij_1} = \mu_{ij_2}, \end{equation} then $K_{ij}(t)$ is independent of $t$ and we obtain a Weibull type Approach II representation with, \begin{equation} \label{eq:dudesInBordouxDoWeibull} \tilde{\alpha}_{ij}(t) = p_{ij} \frac{{\eta_{ij}}}{\mu_{ij}} \Big(\frac{t}{\mu_{ij}} \Big)^{{\eta_{ij}}-1} = \frac{\eta_{ij}}{p_{ij}^{-1/\eta_{ij}}\mu_{ij}} \Bigg(\frac{t}{p_{ij}^{-1/\eta_{ij}}\mu_{ij}} \Bigg)^{\eta_{ij}-1}, \end{equation} and hence the scale parameter is modified to $p_{ij}^{-1/\eta_{ij}}\mu_{ij}$ while the shape parameter keeps the same form. Otherwise (if there exist $j_1$ and $j_2$ such that \eqref{eq:conditionForIandIIWeibullTogether} does not hold), $\tilde{\alpha}_{ij}(t)$ cannot be of the Weibull type as in \eqref{eq:dudesInBordouxDoWeibull}. Going the other way, assume we are using Approach II with Weibull type transition intensity functions, \[ \tilde{\alpha}_{ij}(t) = \frac{\eta_{ij}}{\mu_{ij}} \Big(\frac{t}{\mu_{ij}} \Big)^{\eta_{ij}-1} .
\] Then by integrating \eqref{eq:pijIItoI} we can in principle obtain $p_{ij}$. It turns out that if for a given state $i$ the shape parameters of all the transition intensity functions are the same (denote the common value via $\eta_i$), then the integration yields an explicit form, \begin{equation} \label{eq:manMan7} p_{ij} = \frac{\mu_{ij}^{-\eta_{i}}}{\sum_{k \neq i} \mu_{ik}^{-\eta_{i}} }, \end{equation} otherwise there is no explicit solution. In such a case where $\eta_{ij} = \eta_i$ for all $j$, we also have from equation \eqref{eq:densTildeTrans}, \begin{equation} \label{eq:fijStopMan} f_{ij}(t) = \frac{\eta_{i}}{\mu_{ij}} \Big(\frac{t}{\mu_{ij}} \Big)^{\eta_{i}-1} \, \frac{ e^{ -\sum_{k \neq i} \big(\frac{t}{\mu_{ik}}\big)^{\eta_i} } } { \frac{\mu_{ij}^{-\eta_{i}}}{\sum_{k \neq i} \mu_{ik}^{-\eta_{i}} }}. \end{equation} Further, if $\mu_{ij} = \mu_i$ for all $j$ then $p_{ij}$ of \eqref{eq:manMan7} reduces to $\ell_i^{-1}$ where $\ell_i < \ell$ is the number of possible target states from state $i$. Note that this form is not specific to the Weibull type intensity transition function, but will hold whenever the intensity transition functions out of state $i$ are the same. However, with the Weibull case we can continue further and \eqref{eq:fijStopMan} can be represented as, \[ f_{ij}(t) =\ell_i \frac{\eta_{i}}{\mu_{i}} \Big(\frac{t}{\mu_{i}} \Big)^{\eta_{i}-1} \, { e^{ -\ell_i \big(\frac{t}{\mu_{i}}\big)^{\eta_i}}} = \frac{\eta_{i}}{\ell_i^{-1/\eta_{i}} \mu_{i}} \Big(\frac{t}{\ell_i^{-1/\eta_{i}}\mu_{i}} \Big)^{\eta_{i}-1} \, { e^{ - \big(\frac{t}{\ell_i^{-1/\eta_{i}}\mu_{i}}\big)^{\eta_i}}}, \] which is a Weibull density with the same shape parameter as the transition intensity function and a scale parameter equal to $\ell_i^{-1/\eta_{i}}\mu_{i}$. \section{Likelihood and Inference} \label{section:inference} Here we focus on inference for the fully-parametric semi-Markov model defined by the two above-mentioned approaches.
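As a numerical aside on the Weibull example above: with a common shape parameter out of a state, the explicit embedded-chain probabilities of \eqref{eq:manMan7} (note $p_{ij} \propto \mu_{ij}^{-\eta_i}$, so exits with shorter time scales are more probable) can be confirmed against direct numerical integration of \eqref{eq:pijIItoI}. All parameter values in this Python sketch are made up.

```python
import math

# One departing state with three exits; Weibull-type intensity transition functions
# sharing a common shape (all parameter values are made up):
eta = 1.5
mu = {"a": 1.0, "b": 2.0, "c": 4.0}

def a_tilde(j, t):
    return (eta / mu[j]) * (t / mu[j]) ** (eta - 1.0)

def p_numeric(j, T=40.0, n=50_000):
    """eq. (pijIItoI): p_ij = int_0^inf a_tilde_ij(t) exp(-sum_k (t/mu_ik)^eta) dt."""
    h = T / n
    return sum(
        h * a_tilde(j, m * h) * math.exp(-sum((m * h / mu[k]) ** eta for k in mu))
        for m in range(1, n + 1)
    )

# Explicit form with a common shape: p_ij proportional to mu_ij^(-eta).
Z = sum(mu[k] ** (-eta) for k in mu)
p_closed = {j: mu[j] ** (-eta) / Z for j in mu}
p_num = {j: p_numeric(j) for j in mu}
```

Here \texttt{p\_num} and \texttt{p\_closed} agree to the accuracy of the quadrature.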
Survival analysis usually applies to cohort or clinical studies, hence the data is usually gathered from the same subjects repeatedly during a time interval $[0,{\cal T}]$. A key issue in survival analysis and event history analysis is the occurrence of incomplete or sparse observations. For instance, in the case of chronic diseases, when the event of study is death, the time of occurrence of this event is not observed for the subjects still alive at time~${\cal T}$. This type of incomplete observation is called \textit{right-censoring}. There are other kinds of incomplete data like left-censoring and interval censoring, see \cite{andersen2005censored,commenges2015dynamical}. \cite{andersen2002multi} present different incomplete data forms which can be handled in multi-state frameworks. Our focus in this paper is right-censored data, arising when, at the end of the observation period, not all individuals under study have reached an absorbing state. As our focus is on the fully-parametric case, we now present the likelihood functions for both Approach~I and Approach~II. We assume there are $n$ subjects denoted via $h=1,\ldots,n$, for which data is collected, and we assume independence between subjects. For each subject, we distinguish between two cases depending on whether or not the subject is in an absorbing state at time ${\cal T}$. Note that in principle we can set ${\cal T}$ to be subject specific, but for simplicity we do not do so here. This is recorded via $\delta^{(h)}$, where $\delta^{(h)} = 1$ implies no right censoring (the subject is in an absorbing state at time ${\cal T}$), and $\delta^{(h)} = 0$ implies right censoring. Further, we record the state evolution denoted via the sequence $\{J\}$ and the sojourn times denoted via the sequence $\{\tau\}$. For simplicity, we assume that all subjects start at the same fixed and known state, denoted via $J_0$. This can be easily generalized.
For subject $h$ during $[0,{\cal T}]$, we use $N^{(h)}$ to denote the number of state transitions up to time ${\cal T}$. The data of the subject is represented via \[ {\cal H}^{(h)}=\big(J_{0},J^{(h)}_{1}, \ldots, J^{(h)}_{N^{(h)}}, \tau^{(h)}_{1}, \ldots, \tau^{(h)}_{N^{(h)}} , \delta^{(h)}\big). \] Note that if $\delta^{(h)} = 0$ and there is right censoring, then we are also interested in the time duration $U^{(h)}$ after the last state is visited. This quantity is represented via, \begin{equation} \label{eq:uDefThing} U^{(h)} = {\cal T} - \sum_{i=1}^{N^{(h)}} \tau^{(h)}_{i}. \end{equation} As our subjects are independent, we can consider the likelihood contribution of each subject $h$ in isolation, and after denoting it via ${\cal L}^{(h)}$, the full likelihood is \begin{equation} \label{eq:oneHundredPercentCorrect} {\cal L}=\prod_{h=1}^n {\cal L}^{(h)}. \end{equation} The specific form of ${\cal L}^{(h)}$ now depends on whether we are using Approach~I or Approach~II and, as we show below, in Approach~II it can further be decomposed as in \eqref{eq:decomposeAppII}. \subsection{Likelihood for Approach~I} The form of ${\cal L}^{(h)}$ based on ${\cal H}^{(h)}$ is, \begin{equation} \label{eq:approachIlikelihood} {\cal L}^{(h)}= \Big( \prod_{k=1}^{N^{(h)}} p_{J_{k-1} \,J_{k}} \, f_{J_{k-1} \,J_{k}}(\tau_{k}) \Big)\,\Big(S_{J_{N^{(h)}}}(U^{(h)})\Big)^{1- \delta^{(h)}}, \end{equation} where for brevity we omit the $(h)$ superscripts for the state and sojourn time sequences $\{J\}$ and $\{\tau\}$. To further understand \eqref{eq:approachIlikelihood}, consider a recursive construction where each uncensored transition $J_{k-1} \to J_k$ with sojourn time $\tau_k$ has likelihood contribution $p_{J_{k-1} \,J_{k}} \, f_{J_{k-1} \,J_{k}}(\tau_{k})$.
Further, in case of censoring, the last censored transition has likelihood contribution $S_{J_{N^{(h)}}}(U^{(h)})$ based on the survival function of the holding time \eqref{eq:S-i}, corresponding to the last observed state $J_{N^{(h)}}$. \subsection{Likelihood for Approach II} We refer to the key relationship \eqref{eq:relation-alpha-alphatilde} and replace $p_{ij}f_{ij}(u)$ in \eqref{eq:approachIlikelihood} with $\tilde{\alpha}_{ij}(u) S_{i}(u)$ to obtain, \begin{equation} \label{eq:approach2likelihoodA} {\cal L}^{(h)}= \Big( \prod_{k=1}^{N} \tilde{\alpha}_{J_{k-1} J_k}(\tau_k) S_{J_{k-1}}(\tau_k) \Big) \,\Big(S_{J_{N}}(U)\Big)^{1- \delta}, \end{equation} where for brevity we omit all superscripts $(h)$. Further, we manipulate the likelihood contribution of each subject as follows: \begin{equation} \label{eq:approach2likelihoodC} \begin{array}{ll} \displaystyle{\cal L}^{(h)} = \Big( \prod_{k=1}^{N} \tilde{\alpha}_{J_{k-1} J_k}(\tau_k) e^{-\int_0^{\tau_k} \tilde{\alpha}_{J_{k-1}}(u) \, du} \Big) \Big(e^{-\int_0^{U} \tilde{\alpha}_{J_N}(u) \, du}\Big)^{1-\delta} \\[12pt] \,\qquad= \displaystyle \prod_{k=1}^{N+1} \big(\tilde{\alpha}_{J_{k-1} J_k}(\tau_k)\big)^{ \mathbbm{1}_{\{k\neq N+1\}}} \, e^{-\int_0^{\tau_k} \tilde{\alpha}_{J_{k-1}}(u) \, du}. \end{array} \end{equation} In the first line we use the representation in \eqref{eq:SiFromAlphaTilde}. For the second line, by extending the product from $N$ to $N+1$ factors we are able to remove $\delta$ by defining $\tau_{N+1} = U$, $J_{N+1} \equiv -1$, and using the fact that $\tilde{\alpha}_{ij}(u) \equiv 0$ in cases where $i$ is an absorbing state and $0^0 \equiv 1$. Now, we are able to separate ${\cal L}^{(h)}$ into the form \begin{equation} \label{eq:decomposeAppII} {\cal L}^{(h)} = \prod_{i=1}^\ell \prod_{j=1}^\ell {\cal L}^{(h)}_{ij}, \end{equation} following similar ideas as those initially presented in \cite{hougaard1999multi}.
To achieve such a separation, it is useful to define the indicators \[ \delta^{k-1}_{i \to j} =\mathbbm{1}_{\{J_{k-1}=i,\, J_k=j\} }, \qquad \mbox{and} \qquad \delta^{k-1}_{i \not \to j} = \mathbbm{1}_{\{J_{k-1}=i,\, J_k \neq j\}}. \] We may now manipulate \eqref{eq:approach2likelihoodC} to obtain \begin{equation} \label{eq:niceNiceCorrectEquation} {\cal L}^{(h)}_{ij} = \prod_{k=1}^{N+1}\Big(\tilde{\alpha}_{ij}(\tau_k) e^{-\int_0^{\tau_k} \tilde{\alpha}_{ij}(u)\,du}\Big)^{\delta^{k-1}_{i \to j} } \Big( e^{-\int_0^{\tau_k} \tilde{\alpha}_{ij}(u)\,du}\Big)^{\delta^{k-1}_{i \not \to j}}. \end{equation} The form of \eqref{eq:oneHundredPercentCorrect}, \eqref{eq:decomposeAppII}, and \eqref{eq:niceNiceCorrectEquation} allows us to treat each transition separately, as if each intensity transition function $\tilde{\alpha}_{ij}(\cdot)$ has its own set of parameters. \subsection{Parametric forms and Covariate Information} \label{sec:thePlaceToBe} In carrying out inference, we assume a parametric form for $\alpha_{ij}(\cdot)$ in Approach~I or $\tilde{\alpha}_{ij}(\cdot)$ in Approach~II. We also allow for covariate information, where the natural way to introduce covariates in a multi-state model is a \textit{Cox-like proportional hazard model}, which can be defined using either the hazard function of the sojourn times or the intensity transition functions, see \cite{meira2009multi}. Hence for a vector of covariates $Z$, we have \begin{equation} \label{Eq:covariate-sojourn} {\alpha_{ij}}(t ~|~ Z)={\alpha}^{({\theta}_{ij})}_{ij,0}(t) e^{\beta_{ij}^TZ}, \end{equation} for Approach~I. Further we have \begin{equation} \label{Eq:covariate-approachII} \tilde{\alpha}_{ij}(t~|~ Z)={\tilde{\alpha}}^{(\tilde{\theta}_{ij})}_{ij,0}(t) e^{\tilde{\beta}_{ij}^TZ}, \end{equation} for Approach~II.
Here ${\alpha}^{({\theta}_{ij})}_{ij,0}(\cdot)$ and ${\tilde{\alpha}}^{(\tilde{\theta}_{ij})}_{ij,0}(\cdot)$ are the baseline functions for the hazard rate and transition intensity functions respectively. They each follow a parametric form, determined via $\theta_{ij}$ and $\tilde{\theta}_{ij}$ respectively. Further, $\beta_{ij}$ and $\tilde{\beta}_{ij}$ are the regression parameters for transition $i \to j$ associated with the covariates of the subject. Observe that there are major differences in the interpretation of the regression coefficients $\beta_{ij}$ of Approach~I and $\tilde{\beta}_{ij}$ of Approach~II. The former determine the hazard ratios dealing with sojourn times and have no effect on the actual transitions of the SMP. The latter determine the hazard ratios dealing with risks and also affect the instantaneous rate of transitioning. When estimating the parameters for such a model with Approach~I, we also need to estimate $p_{ij}$, whereas with Approach~II this is not needed as $p_{ij}$ is implicitly determined via \eqref{eq:pijIItoI}. Interestingly, via Approach~II, the covariates implicitly affect $p_{ij}$, whereas with Approach~I we may wish to set $p_{ij}(Z)$ using multinomial regression; however, this is not common practice. \subsection{Contrasting Inference for Approach~I and Approach~II} The key difference in the likelihood expressions between Approach~I \eqref{eq:approachIlikelihood} and Approach~II \eqref{eq:decomposeAppII} and \eqref{eq:niceNiceCorrectEquation} is in computational tractability. As noted by \cite{Krol}, the optimization of the likelihood for Approach~I could be challenging when the number of covariates and parameters is large, due to the complex form of the sojourn time distribution. This is especially the case when the number of variables is large compared to the size of the data.
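To make the contrast concrete, note that the two likelihood parameterizations are two factorizations of the same quantity, and so must agree numerically subject by subject. The following Python sketch (illustrative only; the illness-death parameters are made up) evaluates \eqref{eq:approachIlikelihood} and \eqref{eq:approach2likelihoodA} for one right-censored path.

```python
import math

# Made-up illness-death model with exponential sojourn times (Approach I parameters):
p = {(1, 2): 0.7, (1, 3): 0.3, (2, 3): 1.0}
lam = {(1, 2): 0.4, (1, 3): 0.1, (2, 3): 0.9}

def f(i, j, t):                       # sojourn density f_ij
    return lam[(i, j)] * math.exp(-lam[(i, j)] * t)

def S_state(i, t):                    # holding-time survival, eq. (S-i)
    return sum(p[(i, j)] * math.exp(-lam[(i, j)] * t) for j in (1, 2, 3) if (i, j) in p)

def a_tilde(i, j, t):                 # key relationship, eq. (relation-alpha-alphatilde)
    return p[(i, j)] * f(i, j, t) / S_state(i, t)

# One right-censored subject: 1 -> 2 after tau1 = 2.5, then censored after U = 1.0.
tau1, U = 2.5, 1.0
lik_I = p[(1, 2)] * f(1, 2, tau1) * S_state(2, U)                 # Approach I form
lik_II = a_tilde(1, 2, tau1) * S_state(1, tau1) * S_state(2, U)   # Approach II form
```

The agreement is exact up to floating point, since \eqref{eq:relation-alpha-alphatilde} is an identity.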
It should further be noted that modeling using Approach~I requires the embedded chain parameters in addition to the parameters involved in \eqref{Eq:covariate-sojourn}. This means that for a model with $\ell$ states, there may be up to $\ell^2 -\ell$ more parameters to estimate when using Approach~I, in comparison to Approach~II, where all the parameters appear in \eqref{Eq:covariate-approachII}. With Approach~II, the decoupling in \eqref{eq:decomposeAppII} allows one to optimize the likelihood for each transition $i \to j$ separately if each intensity transition function \eqref{Eq:covariate-approachII} has its own set of parameters $\tilde{\theta}_{ij}$ and $\tilde{\beta}_{ij}$. In such a framework, the full likelihood of the SMP can be maximized by separately maximizing the likelihoods of the different two-state models, see \cite{meira2009multi}. Hence, inference for the SMP can be handled using standard survival models such as the Cox model, since each separate likelihood is of the form arising in survival analysis. Thus Approach~II enables fitting SMP models using survival analysis methods and software, see for example \cite{therneau2000cox,cook2007statistical,jackson2016flexsurv}. This also applies to the case where extensions to the proportional hazard assumption are needed. The forms in \eqref{Eq:covariate-sojourn} and \eqref{Eq:covariate-approachII} are clearly based on a proportional hazard assumption with fixed covariates, where there is only a multiplicative effect on the baseline hazard functions, independent of time. An extension is to consider time-dependent covariates $Z(t)$ as in \cite{andersen2000multi,andersen2002multi}. Further, in both proportional models the effect of variables is assumed to have a linear (or log-linear) functional form. A more flexible model is to consider non-linear effects using smooth functions of the covariates, such as the Generalized Additive Model (GAM) presented by \cite{hastie1990exploring}.
With such extensions, using Approach~II is much more straightforward than using Approach~I, due to the decoupling and because there is ample statistical software available for survival analysis. One may also wish to incorporate random effects in the models using frailty models. In this case, both \eqref{Eq:covariate-sojourn} and \eqref{Eq:covariate-approachII} could also include random effects to take into account the correlation between observations/subjects. This has been handled in \cite{liquet2012investigating} for Approach~II by exploiting the decoupling of the likelihood. Such random effects may also be incorporated in Approach~I; however, the computational effort will be greater. \section{Semi-Markov Application in Practice}\label{sec:application} In this section, we illustrate the inference of the semi-Markov model on two real datasets. Our purpose with these examples is to help practitioners see how the mathematical details presented above interact with statistical software. In light of this, we chose datasets that have previously been analyzed with a one-sided view (either Approach~I or Approach~II, but not both). These datasets are readily available in the R ecosystem. All the numerical results, including tables and figures, are freely reproducible through the R code in a detailed vignette available in \cite{Liq_github}. For both datasets, we compare results of fully parametric models based on Approach~I, where we estimate sojourn time distributions and transition probabilities of the embedded chain, and on Approach~II, where we estimate transition intensity functions. To carry out estimation using Approach~I we use the \texttt{SemiMarkov} package, see \cite{listwon2015semimarkov}.
This package implements inference using the likelihood \eqref{eq:oneHundredPercentCorrect} with \eqref{eq:approachIlikelihood}, while allowing different parametric forms for the sojourn time distribution, including exponential, Weibull, and exponentiated Weibull, see \cite{foucher2005semi}. It also supports covariates with standard inference for the covariate coefficients $\beta_{ij}$, including the Wald test, and allows different covariates to be included for each transition. While the focus of the inference with this package is Approach~I, as output the package can automatically provide the intensity transition functions $\tilde{\alpha}_{ij}(\cdot)$, referred to as the ``hazard function of the SMP''. For Approach~II, the decoupling in \eqref{eq:decomposeAppII} with \eqref{eq:niceNiceCorrectEquation} allows us to use any package that implements inference for survival analysis, since for each transition $i \to j$ we can maximize \[ {\cal L}_{ij} = \prod_{h=1}^n {\cal L}_{ij}^{(h)} \] separately if each hazard intensity transition function \eqref{Eq:covariate-approachII} is assumed to have its own set of parameters $\tilde{\theta}_{ij}$ and $\tilde{\beta}_{ij}$. For this we can use different R packages including \texttt{flexsurv} (\cite{jackson2016flexsurv}), \texttt{survival} (\cite{survival-package}), and \texttt{eha} (\cite{ehapack}). In the examples here, we mainly use \texttt{flexsurv}, which offers an easy way to fit different parametric forms for $\tilde{\alpha}_{ij}(\cdot)$, including exponential, Weibull, gamma, and generalized gamma (see \cite{Prent1975}), among many other forms. It also supports covariates with standard inference for the covariate coefficients $\tilde{\beta}_{ij}$. We note that the purpose of this section is not to be an extensive survey of all the R packages that can be used for inference in multi-state models. For a comprehensive software survey, see the survival task view from R (\cite{survivaltaskview}).
Our focus here is to illustrate how the aforementioned packages can be used to easily carry out inference, while we also illustrate a few key points. \subsection{Progressive Three-State Model for the Stanford Heart Transplant data} As an illustrative example, we revisit the analysis of the Stanford Heart Transplant data, freely available through the \texttt{survival} package. A full description of the data can be found in \cite{Crow77}. The dataset presents the survival of patients on the waiting list for the Stanford heart transplant program. We analyze the data similarly to \cite{meira2011p3state}, who use their R package \texttt{p3state} and propose an illness-death model similar to Figure~\ref{Fig:IllD}. Their states represent ``alive without transplant'', ``alive with transplant'', and ``dead''. We consider the same dataset, where patients' records from October 1967 (start of the program) until April 1974 include the information of $103$ patients, of whom $69$ received heart transplants and $75$ died. Three fixed covariates for each patient are available: the age at acceptance (\texttt{age}), the year of acceptance (\texttt{year}), and previous surgery (\texttt{surgery}: coded as \texttt{1} = yes; \texttt{0} = no). We first use the data without taking into account any covariate effects, to show that an exponential distribution on the baseline hazard rate of the sojourn time, $\alpha_{ij}(\cdot)$, does not produce constant intensity transition functions, $\tilde{\alpha}_{ij}(\cdot)$. This point was shown theoretically in Example~2 of Section~\ref{sec:examples}, and we now illustrate it using data and the \texttt{SemiMarkov} package. Using the package we obtain both $\alpha_{ij}(\cdot)$ and $\tilde{\alpha}_{ij}(\cdot)$ for $(i,j) = (1,2)$ and $(i,j) = (1,3)$. The results are in Figure \ref{Fig:hazardAppli1}, where we see that while $\alpha_{ij}(\cdot)$ are constant, $\tilde{\alpha}_{ij}(\cdot)$ are clearly not constant across time.
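The same point can be checked numerically. The following minimal sketch uses hypothetical parameter values (not estimates from the data) and assumes the standard relation between the embedded-chain probabilities, the sojourn-time densities, and the SMP intensity, $\tilde{\alpha}_{ij}(t) = p_{ij} f_{ij}(t) / \sum_k p_{ik} S_{ik}(t)$: even with exponential (constant-hazard) sojourn times, the resulting intensity varies with time.

```python
import math

# Hypothetical parameter values for two competing transitions out of state 1:
p12, p13 = 0.7, 0.3          # embedded-chain probabilities out of state 1
lam12, lam13 = 1.0, 0.2      # exponential sojourn-time rates (constant hazards)

def intensity_13(t):
    """SMP transition intensity for 1 -> 3 via the assumed relation
    alpha~_1j(t) = p_1j * f_1j(t) / sum_k p_1k * S_1k(t)."""
    f13 = lam13 * math.exp(-lam13 * t)
    surv = p12 * math.exp(-lam12 * t) + p13 * math.exp(-lam13 * t)
    return p13 * f13 / surv

# Constant sojourn-time hazards, yet a time-varying SMP intensity:
a0, a5 = intensity_13(0.0), intensity_13(5.0)
```

Here the intensity grows over time because, as the sojourn lengthens, the fast competing exit $1 \to 2$ becomes increasingly unlikely to be the pending transition.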
These curves in fact follow a form like \eqref{eq:nonConstantTildeAlphaSurrr}. \begin{figure}[h!] \center \includegraphics[scale=0.3]{images/hazard_12_appli1.pdf} \includegraphics[scale=0.3]{images/hazard_13_appli1.pdf} \caption{ \small Baseline hazard rate of the sojourn times (Approach-I: $\alpha_{ij}(\cdot)$) and baseline hazard rate of the semi-Markov process (Approach-II: $\tilde{\alpha}_{ij}(\cdot)$) for transitions $1\to 2$ (top plot) and $1\to 3$ (bottom plot). } \label{Fig:hazardAppli1} \end{figure} We now continue to analyze the data fully with Approach~I using the \texttt{SemiMarkov} package, where we compare different parametric forms and covariates in the model. Table~1 presents the inference results for $6$ different models, depending on the distribution and covariates chosen for each transition. For each model we present the $\hat{\beta}_{ij}$ estimates and their estimated standard errors. We also point out that the \texttt{SemiMarkov} package yields $p$-values for the associated Wald tests. We also present the estimates of the transition probabilities of the embedded chain, $\hat{p}_{ij}$. It is evident that these are outputs from likelihood estimation, as they do not fully agree across models (they are not proportion estimates). The final two models, {\em Weibull + Select} and {\em Weibull/Exp + Select}, are based on model selection of the significant covariates, with more details in the vignette, see \cite{Liq_github}. Observe that the final model, {\em Weibull/Exp + Select}, incorporates two different distributions. \begin{figure}[h!] \center \includegraphics[scale=0.7]{images/Table1.pdf} \end{figure} For choosing between the different proposed models, we use the \textit{Expected Kullback-Leibler} (EKL) risk as in \cite{liquet2012investigating}.
This is done via the \textit{Akaike Information Criterion} (AIC), which is an estimate of the EKL risk in a parametric framework using the maximum likelihood method, see \cite{liquet2003bootstrap,commenges2008estimating}. For each of the $6$ models described above, we obtain the AIC using output from \texttt{SemiMarkov} and present it in the first part of Table~2. The best model according to the AIC is the {\em Exponentiated Weibull} model. However, due to convergence issues during the optimization (see the vignette \cite{Liq_github}), we select the second best result, given by the {\em Weibull/Exp + Select} model, as the most appropriate model for this dataset according to AIC using Approach~I. \begin{figure}[h!] \center \includegraphics[scale=0.7]{images/Table2.pdf} \end{figure} Moving on to inference using Approach~II, we investigate and estimate different models by specifying different proportional intensity transition models as defined in equation \eqref{Eq:covariate-approachII}. We use \texttt{flexsurv} to estimate $\tilde{\alpha}_{ij}(\cdot)$ for several models, which we present in the second part of Table~2. The models {\em Exponential}, {\em Weibull}, {\em Gamma}, and {\em Generalized Gamma} are all estimated with all three covariates in the model. The additional two models, {\em Weibull + Select} and {\em Generalized Gamma + Select}, have a reduced set of covariates, obtained with a procedure consistent with that used for Approach~I above. More details are in the vignette (see \cite{Liq_github}). The AIC-based performance of each of these models is presented in the second part of Table~2. The best model according to the AIC is {\em Generalized Gamma + Select}. In this model only the \texttt{age} covariate is used for transition $1 \to 2$, only the \texttt{year} covariate is used for transition $1\to 3$, and both the \texttt{surgery} and \texttt{age} covariates are used for transition $2 \to 3$.
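The model comparison above rests on the standard AIC formula, $\mathrm{AIC} = 2k - 2\log\hat{L}$, with $k$ the number of estimated parameters. A minimal sketch follows; the log-likelihood values and parameter counts below are hypothetical placeholders, not the fitted values behind Table~2.

```python
def aic(log_lik, n_params):
    # Akaike Information Criterion: 2k - 2*logL; lower is better.
    return 2 * n_params - 2 * log_lik

# Hypothetical fitted log-likelihoods and parameter counts for two models:
models = {"Weibull/Exp + Select": (-805.0, 15), "Exponential": (-830.0, 9)}
scores = {name: aic(ll, k) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)  # the model with the smallest AIC
```

With Approach~II and separate parameters per transition, the full-model log-likelihood entering this formula is simply the sum of the per-transition maximized log-likelihoods.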
For this dataset, the best model based on Approach~I presents a better AIC value ($1639.73$) in comparison to the best model based on Approach~II ($1701.03$). As discussed in Section~\ref{sec:thePlaceToBe}, the regression coefficients of Approach~I and those of Approach~II do not have the same interpretation. Indeed, for the hazard rate of the sojourn time, the regression coefficients can be interpreted in terms of relative risk on the waiting time (i.e., given an $i \to j$ transition). For example, the positive coefficient for \texttt{surgery} for the $1 \to 3$ transition, as appearing in Table~1, can be interpreted as an indication that death without getting a heart transplant is less favorable for those who had previous surgery. In contrast, the hazard rate of the SMP, based on transition intensity functions, can be interpreted as the subject's risk of passing from state $i$ to state $j$. Interpretation of the regression coefficients using both Approach~I and Approach~II is further discussed in the second example below. Since a key object of a semi-Markov process is the transition intensity function, generally called the hazard rate of the SMP, it is also useful to visualize estimates of these functions based on both Approach~I and Approach~II. Further insight may be gained by plotting these against non-parametric estimators. For this, we used the Breslow estimator \cite{breslow1972discussion}, which yields an estimate of the baseline cumulative risk function of a Cox-regression-type model for each transition. Figure \ref{Fig:cumul} plots the cumulative baseline (all covariates set to $0$) estimated transition intensity functions, estimated via Approach~I and Approach~II, against the Breslow estimator. These cumulative functions are based on the parameter estimates for the best model of Approach~I ({\em Weibull/Exp + Select}) vs. the best model of Approach~II ({\em Generalized Gamma + Select}).
In this case, the plots in Figure \ref{Fig:cumul} hint at a better fit (closer to the non-parametric estimate) of the SMP using Approach~I, in accordance with the AIC values discussed above. \begin{figure}[h] \center \includegraphics[scale=0.25]{images/cumul_12.pdf} \includegraphics[scale=0.25]{images/cumul_13.pdf} \includegraphics[scale=0.25]{images/cumul_23.pdf} \caption{\small Cumulative baseline hazard rate of the SMP (intensity transition function) estimated from the best-AIC model of Approach-I and Approach-II. A semi-parametric estimate is presented as a benchmark.} \label{Fig:cumul} \end{figure} \subsection{Reversible Semi-Markov Model for the Asthma Control Data} \begin{figure} \center \includegraphics[scale=0.6]{images/IllnessDeathModel_reversible.pdf} \caption{{\small The reversible three-state model.}}\label{Fig:rev3} \end{figure} We revisit the analysis of the asthma control data, which has been used to illustrate the \texttt{SemiMarkov} R package in \cite{listwon2015semimarkov}. See also the related papers using the same data, \cite{saint2006overweight} and \cite{saint2003analysis}. The data consist of the follow-up of severe asthmatic patients consulting at varied times according to their perceived needs. At each visit, several covariates were recorded and asthma control was evaluated. The available data (\texttt{asthma}) from the \texttt{SemiMarkov} package present a random selection of $371$ patients with at least two visits. In this illustration, we only use the BMI (Body Mass Index) covariate, coded \texttt{1} if BMI $\geq 25$ and \texttt{0} otherwise. Similarly to the analysis in \cite{listwon2015semimarkov}, we use a reversible three-state model as presented in Figure~\ref{Fig:rev3}. This investigates the evolution of asthma based on ``optimal control'' (State 1), ``sub-optimal control'' (State 2), and ``unacceptable control'' (State 3).
More details about the concepts of control scores can be found in \cite{saint2003analysis}. Observe that in this model none of the states are absorbing, and hence the model is said to be {\em reversible}. We note that this data is based on scheduled visits every three months or according to patients' perceived needs, \cite{saint2006overweight}, and is thus potentially interval censored, since the exact times of state transitions are approximated by the visit times. This aspect of the analysis was not the focus of \cite{listwon2015semimarkov} and the related papers. While dealing with such interval censoring is important, we do not focus on it further here and refer the reader to \cite{commenges2002inference} and \cite{van2016multi}. For illustration purposes, we concentrate on a Weibull model as proposed in \cite{listwon2015semimarkov}. We first fit a fully parametric proportional Weibull model for the sojourn times (Approach~I) with the BMI covariate for each transition. This yields significant effects of the BMI covariate for the transitions $1 \to 3$ and $3 \to 1$. We then fit a sparser model, including BMI only for these transitions, to facilitate the convergence of the model, which can be difficult for a model with too many parameters, as reported in \cite{listwon2015semimarkov}. For this new model, the BMI regression coefficients remain significant for transitions $1 \to 3$ and $3 \to 1$, with $\widehat{\beta}_{13}=-0.88$ and $\widehat{\beta}_{31}=-0.448$, and respective $p$-values $0.012$ and $0.023$. The fact that the hazard ratio of the sojourn time associated with this covariate is less than unity (the estimated coefficients are negative) indicates that BMI $\geq 25$ generally lengthens the duration of the sojourn time in state~1 when making a $1 \to 3$ transition, and generally lengthens the duration of the sojourn time in state~3 when making a $3 \to 1$ transition.
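The sojourn-time hazard ratios implied by these Approach~I estimates can be read off by exponentiating the coefficients; as a quick sketch (keeping in mind that these ratios concern waiting times given a transition, not the SMP risk of transitioning):

```python
import math

# Approach I coefficient estimates reported above (sojourn-time scale):
beta_13, beta_31 = -0.88, -0.448

# Implied hazard ratios for BMI >= 25, on the sojourn-time hazard:
hr_13 = math.exp(beta_13)  # ~0.41: longer sojourn in state 1 before 1 -> 3
hr_31 = math.exp(beta_31)  # ~0.64: longer sojourn in state 3 before 3 -> 1
```

Both ratios being below unity is the numerical counterpart of the lengthened sojourn times described above.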
This can also be interpreted as a decrease in the risk of leaving the ``optimal control'' state for ``unacceptable control'', as well as a decrease in the risk of leaving the ``unacceptable control'' state for ``optimal control''. However, the magnitude of the estimated coefficients cannot be used to evaluate the change in the hazard ratio on the risk (recall equations \eqref{Eq:covariate-sojourn} and \eqref{Eq:covariate-approachII} and the differences between $\beta_{ij}$ and $\tilde{\beta}_{ij}$). We may also visualize the effect of the BMI covariate on the associated risks by plotting the hazard rate of the semi-Markov model (intensity transition functions), deduced from the sojourn times. See Figure~\ref{Fig:illustration2}, where we also plot transition intensity functions estimated using Approach~II, which we describe now. \begin{figure}[h!] \center \includegraphics[scale=0.22]{images/hazard_13.pdf} \includegraphics[scale=0.22]{images/hazard_12.pdf} \includegraphics[scale=0.22]{images/hazard_32.pdf} \includegraphics[scale=0.22]{images/hazard_31.pdf} \includegraphics[scale=0.22]{images/hazard_21.pdf} \includegraphics[scale=0.22]{images/hazard_23.pdf} \caption{ \small Hazard rate of the semi-Markov process for each transition, from the sojourn time and intensity transition approaches.} \label{Fig:illustration2} \end{figure} Approach~II presented in this paper offers a direct way to model the intensity transition functions and has the advantage of a split likelihood, which facilitates an efficient optimization procedure. In the vignette available in \cite{Liq_github}, we present the results of the full Weibull proportional semi-Markov model using this approach. For illustration, we do this in two ways: (i) optimizing the full likelihood; (ii) splitting the likelihood for each transition and optimizing individually. As expected, both ways yield the same likelihood estimates.
This highlights the advantage of the intensity transition based approach for overcoming potential computational issues in the optimization. Moreover, in some applications, transitions of particular interest can then be further explored with any sophisticated survival model of choice, since the estimation for each transition is performed separately. See for example \cite{therneau2000cox,royston2002flexible,liquet2012investigating,jackson2016flexsurv}. We augment the plots in Figure \ref{Fig:illustration2} with the estimated transition intensity functions using Approach~II. Note that, similarly to Approach~I, the BMI effects are in the same direction. However, the interpretation of the regression coefficients is different. For the intensity transition based estimation (Approach~II), the BMI regression coefficient estimates are $\hat{\tilde{\beta}}_{13} = -0.5$ and $\hat{\tilde{\beta}}_{31} = -0.67$. Here, in contrast to the Approach~I estimates, the exact magnitude of the coefficients may be interpreted as the change in the hazard ratios associated with BMI $\geq 25$. \section{Concluding Remarks} \label{sec:conclusion} We have surveyed the two main approaches for modeling and inference using semi-Markov processes in a parametric setting. These are Approach~I, based on sojourn time distributions and the embedded Markov chain, and Approach~II, based on transition intensity functions. Each approach has its advantages and drawbacks when carrying out inference and analysis, and these were described in the paper. We further summarized the formulation of these two approaches and showed relations between them, with a focus on inference for each approach. In general, the intensity transition based approach (Approach~II) allows the likelihood to be split when each transition has its own set of parameters. Such separation into two-state models facilitates more efficient optimization as well as the usage of additional modeling tools.
The large-scale impact of such separation on bigger datasets and/or simulated data, as part of a simulation study, remains to be explored in future work. However, even putting the computational aspects aside, there is great value in such separation as it allows many survival analysis R packages to be used directly within the context of semi-Markov processes. Such packages include \texttt{flexsurv} \cite{jackson2016flexsurv}, which allows various shapes of transition intensity functions to be fitted, including the Royston-Parmar spline model \cite{royston2002flexible}. Further, in a semi-parametric framework, the \texttt{mstate} package \cite{de2011mstate} is popular for running multi-state models, including the SMP. By exploiting the separation of the likelihood, \texttt{mstate} provides the estimation of covariate effects using Cox regression models. In addition, non-linear effects of the covariates can be investigated through smooth functions using the \texttt{mgcv} package, which enables GAM models to be run with survival data, see \cite{Rwood}. Further, with Approach~II, we can easily handle random effects using the \texttt{frailtypack} package \cite{frailtyypack}, as has been exploited by \cite{liquet2012investigating}. Further, in the presence of high-dimensional data, an elastic-net penalty \cite{coxnet} can be used via the package \texttt{glmnet} to fit Cox regression models. Such benefits of using the intensity transition based approach have previously been reported and exploited for general Markov multi-state models, as in \cite{hougaard1999multi,andersen2002multi} and \cite{meira2009multi}. Our purpose here was to survey these benefits and to compare with Approach~I, as popularly used with the \texttt{SemiMarkov} package, which provides a flexible tool for inference of semi-Markov models based on sojourn time hazard rates in a parametric framework \cite{listwon2015semimarkov}.
The focus in this paper was purely on time-homogeneous SMPs, in which the absolute time or date is assumed not to affect the probabilistic behavior. In certain situations one should consider time-inhomogeneous models, allowing the absolute calendar time to play a role in the probabilistic model and the inference. Such situations can occur if seasonal (periodic) phenomena are present, or in long-term longitudinal studies where the nature of the cohort is assumed to vary during the study. In a future study, it may be of interest to contrast Approach~I and Approach~II in a time-inhomogeneous setting. Of further interest, we highlight the fact that PH (phase-type) distributions may fit naturally in an SMP framework. Such approaches have been previously proposed in \cite{latouche1982phase}, \cite{malhotra1993selecting}, \cite{titman2010semi} and \cite{aalen1995phase}. Interestingly, as PH distributions provide a dense semi-parametric approximation of any distribution of a non-negative random variable, it may be of interest to approximate semi-Markov processes with continuous-time Markov chains constructed via PH distributions, where there are potentially more than $\ell$ states in the approximating Markov chain. To the best of our knowledge, such a semi-parametric inference setup has not yet been explored. Such a setup may work well with the EM algorithm for parameter estimation of this type of PH distribution, see \cite{asmussen1996fitting}. We leave this for future work. \section*{Acknowledgements} AA and BL are supported by the Australian Research Council (ARC) Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) under grant number CE140100049. YN is supported by the Australian Research Council (ARC) under grant number DP180101602. \vspace{-0.28cm}
\section*{Introduction} A Josephson junction (JJ) consists of a localized discontinuity (weak link) in the order parameter of two superconducting electrodes \cite{tinkham_introduction_2004}, where the
dissipation-less current ruled by Cooper pair transport is controlled by the macroscopic quantum phase difference ($\varphi$) across the junction. Weak links are typically realized in the form of a thin insulator, a semiconductor or metallic wire, or a narrow superconducting constriction~\cite{tinkham_introduction_2004, likharev_superconducting_1979}. The junction current-phase relation (CPR) strongly depends on the structural attributes of the constriction, i.e., on how its effective length ($L$, i.e., the distance between the superconducting leads), width ($w$), and thickness ($t$) compare with the superconducting coherence length ($\xi\textsubscript{w}$) \cite{likharev_superconducting_1979}. In a fully superconducting one-dimensional JJ ($w,t\ll\xi\textsubscript{w}$) the CPR evolves from the single-valued distorted sinusoidal characteristic, typical of the short-junction limit ($L\ll\xi\textsubscript{w}$, Fig.~\ref{Fig1}a) and of non-superconducting weak links, to the multi-valued function obtained in the long regime ($L\gg\xi\textsubscript{w}$, Fig.~\ref{Fig1}b) \cite{likharev_superconducting_1979}. In the latter scenario, multiple (odd) solutions are available to the system at fixed $\varphi$, and the steady state depends on the history of $\varphi$. In the specific example of Fig.~\ref{Fig1}b, three solutions are possible for the Josephson current ($I\textsubscript{s}$) at $\varphi$ close to $\pi$. Two of them are energetically stable: they correspond to two local minima in the Josephson energy~\cite{langer_intrinsic_1967} and are topologically discriminated by the parity of the winding number of the superconducting phase along the wire~\cite{little_decay_1967,strambini_-squipt_2016}, which is reflected in two opposite directions of $I_s(\varphi)$~\cite{petkovic_deterministic_2016}, as indicated in Fig.~\ref{Fig1}b by the even (red) and odd (blue) branches of $I_s$.
In order to switch between these two stable branches, a $2 \pi$ slippage of the superconducting phase along the weak link is required. The slippage passes through the third, backward solution in the CPR: a metastable state which corresponds to a saddle point in the Josephson energy, separating the two stable minima and forming the barrier of a double-well potential. In analogy with the physics of topological insulators, this intermediate metastable state is gapless, and is associated with the formation of a phase-slip center in the middle of the junction~\cite{langer_intrinsic_1967,arutyunov_superconductivity_2008}. The large superconducting condensation energy lost in this gapless center is at the origin of the strong phase-slip energy barrier separating the two topological branches. We take advantage of this topologically-protected double-well potential to implement a robust and permanent superconducting memory: the Josephson phase-slip memory (PSM). Differing from similar quantum phase-slip memories~\cite{mooij_phase-slip_2005}, the geometry of the PSM has been conceived for deterministic control of the state via an external magnetic field, while stochastic quantum or thermally-activated phase slips are exponentially suppressed. As described below, these events are negligible thanks to the low resistance of the nanowire, $R_N < R_q L/\xi_w$, where $R_q = h/4e^2 \simeq 6.5$~k$\Omega$~\cite{virtanen_spectral_2016}. \section*{Results} \subsection*{Implementation of the memory cell} \begin{figure*}[ht!] \includegraphics[width=0.9\linewidth]{FIG/Figura1_land.pdf} \caption{{Phase-Slip Memory working principle and structure.} \textbf{a}-\textbf{b} Sketch of the current-phase relation (CPR, $I_s(\varphi)$) for an S-S'-S weak link (sketched on top) in the short \textbf{a} and long \textbf{b} junction regime. The CPR (yellow curve) evolves from a deformed sinusoid to a multi-valued function as the junction length increases.
In the latter, the transition between the two topologically-protected states, corresponding to even and odd topological index~\cite{little_decay_1967}, occurs via phase slips in the wire~\cite{likharev_superconducting_1979,troeman_temperature_2008} and corresponds to the vertical jump indicated by the colored arrows between the two current branches (red and blue). \textbf{c}-\textbf{d} Dependence of the tunnel current ($I$) on the normalized applied magnetic flux ($\Phi/\Phi_0$, with $\Phi_0 = h/2e \simeq 2\times10^{-15}$ Wb the flux quantum), at fixed bias voltage ($V$) for a SQUIPT in the short \textbf{c} and long \textbf{d} junction regime. In the latter case, the current evolution shows a hysteretic profile (red and blue curves), which stems from the multi-valued CPR. Top: scheme of a voltage-biased DC SQUIPT in a two-wire configuration. $\Phi$ is the magnetic flux piercing the ring. \textbf{e} Pseudo-color scanning electron micrograph of a typical PSM. An Al nanowire (green) is inserted in a micron-size Al ring (yellow), whereas an Al$\textsubscript{0.98}$Mn$\textsubscript{0.02}$ probing electrode (red) is tunnel-coupled to the middle of the nanowire and to a second Al electrode (green) via an insulating oxide layer (gray) to allow the memory operation. Inset: blow-up of the weak-link region. The passive replicas due to the three-angle shadow-mask metal deposition are visible. \textbf{f} Schematic top view and cross section of the device. } \label{Fig1} \end{figure*} \begin{figure*}[ht!] \includegraphics[width=0.9\linewidth]{FIG/Figura2.Definitiva.pdf} \caption{{Phase-Slip Memory magneto-electric response.} \textbf{a} Current vs voltage characteristics acquired at $\Phi$ = 0 (black trace) and $\Phi$ = $\Phi\textsubscript{0}/2$ (orange trace). The magnetic flux modulates $\Delta \textsubscript{w}$ and, therefore, the $I-V$ tunnel characteristics. \textbf{b} $I(\Phi)$ of a typical memory cell biased at $V$ = 300 $\mu$V.
The purple and green arrows indicate the magnetic flux sweep directions. The width of the hysteretic loop ($\delta \Phi$), the current drop ($\delta I$), and the current at the hysteresis crossing point ($I\textsubscript{cp}=I(\Phi\textsubscript{0}/2)$) are also indicated. \textbf{c} Evolution of $I(\Phi)$ acquired for selected values of $V$, as indicated by the colored arrows in panel \textbf{a}. $I\textsubscript{cp}$ increases with rising $V$. \textbf{d} Dependence of the hysteresis width ($\delta \Phi$) on $V$. $\delta \Phi$ drops monotonically with increasing $V$. \textbf{e} Relative variation of the tunneling current ($\zeta=\delta I/I\textsubscript{cp}$) vs $V$. All these measurements were taken at $T=25$ mK.} \label{Fig2} \end{figure*} The design of a proof-of-concept PSM requires an architecture enabling the tuning of the superconducting phase and the definition of an efficient readout scheme. To finely control $\varphi$, the JJ is inserted in a superconducting loop, where an external magnetic field gives rise to a total flux ($\Phi$) piercing the ring area. Stemming from fluxoid quantization~\cite{doll_experimental_1961}, the superconducting phase difference across the weak link is given by $\varphi=2\pi\Phi/\Phi\textsubscript{0}$ (where $\Phi\textsubscript{0} \simeq 2.067\times 10\textsuperscript{-15}$ Wb is the flux quantum), while the phase drop along the loop is negligible (see Methods for details). The phase difference, together with the topological index, determines the amplitude of the superconducting gap in the local density of states (DOS) of the wire~\cite{virtanen_spectral_2016}, which can be probed by a metallic electrode tunnel-coupled to the middle of the junction, thereby implementing a superconducting quantum interference proximity transistor (SQUIPT)~\cite{giazotto_superconducting_2010}, as sketched on top of Fig.~\ref{Fig1}c.
As a result, at fixed $\Phi$ the amplitude of the tunneling current ($I$) flowing through the probing electrode depends on the even/odd parity of the topological index of the junction, codifying the logic [0] and [1] states of the PSM cell (see Fig. \ref{Fig1}d). Encoding the memory state in the parity of the winding number is a feature common to all flux-based superconducting memories, including, e.g., nano-SQUIDs~\cite{murphy_nanoscale_2017,ilin_supercurrent-controlled_2021}, flux qubits~\cite{mooij_superconducting_2006}, and kinetic-inductance memories~\cite{chen_kinetic_1992}, with which it shares low dissipation and high operation speeds. However, differing from the latter approaches, the dynamics of the memory cell here is entirely dominated by the physics of the weak link. The readout in the SQUIPT is based on tunneling spectroscopy of the weak link, and the hysteresis in the magnetic flux is not a consequence of an imbalance between the ring and junction inductances but is an intrinsic property of the CPR. The scanning electron micrograph (SEM) of a representative PSM cell is shown in Fig.~\ref{Fig1}e, together with a top-down and cross-section scheme in Fig.~\ref{Fig1}f. Realized through a suspended-mask lithography technique (see Methods for fabrication details), the weak link consists of a one-dimensional Al nanowire (green, $t$ = 25 nm and $w$ = 90 nm) with a length $L \sim 400$ nm, embedded in a micron-sized 70-nm-thick Al ring (yellow). In addition, a 20-nm-thick normal metal electrode (red, Al$\textsubscript{0.98}$Mn$\textsubscript{0.02}$) is tunnel-coupled to the center of the wire (with a normal-state tunnel resistance $R_{t1}\simeq 65$ k$\Omega$). To measure the tunneling current, a second Al lead (green) is tunnel-coupled to the normal metal electrode (with a normal-state resistance $R_{t2}\simeq 90$ k$\Omega$)~\cite{ronzani_phase-driven_2017}.
Based on the device structural parameters, we estimate the ratio $L/\xi\textsubscript{w,0}\simeq 6$, where $\xi\textsubscript{w,0}\simeq65$ nm is the zero-temperature coherence length\cite{de_simoni_metallic_2018}, thereby placing the device in the long-junction regime~\cite{likharev_superconducting_1979,virtanen_spectral_2016} (see Methods for details). Within these geometrical constraints, and thanks to the low resistivity of Al ($\rho < R_q \xi_w$), both quantum and thermally-activated phase slips are negligibly small, with rates $< 10^{-289} $~Hz (see Methods for more details on the estimate). Notably, the PSM is completely made of aluminum compounds, thus ensuring high-quality tunnel barriers and full compatibility of all fabrication steps for industrial scaling. \subsection*{Magneto-electric response} To test the PSM transport properties and assess the operation parameters of the memory cell, we first performed a preliminary magneto-electric characterization at bath temperature $T=25$ mK. Figure \ref{Fig2}a shows the current vs voltage characteristics ($I(V)$) of a typical device measured at $\Phi= 0$ (black curve) and $\Phi= \Phi\textsubscript{0}/2$ (orange curve). At zero magnetic flux, the quasiparticle tunnel current is suppressed for $|V| \lesssim 400\;\mu$V due to the presence of two S-I-N tunnel junctions in series, and is consistent with an Al gap of $\simeq 200\; \mu$eV for both the read-out lead ($\Delta \textsubscript{Al} $) and the weak link ($\Delta \textsubscript{w}(\Phi=0)$). The latter can be modulated by the external magnetic flux~\cite{giazotto_superconducting_2010,ronzani_phase-driven_2017}, showing a reduction of about $50\%$ at $\Phi= \Phi\textsubscript{0}/2$ (orange line), $\Delta \textsubscript{w}(\Phi=\Phi_0/2) \simeq 100\; \mu$eV (see also Supplementary Figure~1 for more details).
Differently from short-junction SQUIPTs~\cite{ligato_high_2017,ronzani_phase-driven_2017}, the $I(\Phi)$ characteristic is not only $\Phi\textsubscript{0}$-periodic, but also strongly hysteretic in $\Phi$. This is highlighted in Fig. \ref{Fig2}b, where the tunnel current measured at $V = 300\; \mu$V as a function of increasing (purple trace) and decreasing (green trace) magnetic flux is shown. The forward trace exhibits periodic maxima followed by sudden jumps corresponding to the nucleation of a phase-slip center in the superconducting nanowire~\cite{likharev_superconducting_1979,virtanen_spectral_2016,arutyunov_superconductivity_2008}. Accordingly, the backward trace evolves in a totally specular fashion. The evolution of $I(\Phi)$ with the bias voltage is shown in Fig. \ref{Fig2}c. The hysteresis loop drawn by the back and forth $I(\Phi)$ exhibits a reduction of its width ($\delta\Phi$) upon increasing $V$, as quantified also in Fig. \ref{Fig2}d. This trend can be ascribed to a local overheating in the weak link induced by the quasiparticle current flowing through the probing junction, which enlarges $\xi\textsubscript{w}(T)$~\cite{tinkham_introduction_2004}, thereby driving the CPR towards the single-valued non-hysteretic form \cite{likharev_superconducting_1979,virtanen_spectral_2016}. The relative separation between the two $I(\Phi)$ branches can be quantified by a parameter ($\zeta$) defined as the ratio between the current drop at the phase-slip transition and the current at the hysteresis crossing point, $\zeta=\delta I/I(\Phi=n\Phi_0/2)$, where $n$ is an odd integer. A large $\zeta$ improves the visibility of the PSM logic states. Similarly to $\delta\Phi$, the increase of $V$ induces a monotonic reduction of $\zeta$, as shown in Fig.~\ref{Fig2}e.
\begin{figure} \includegraphics[width=\linewidth]{FIG/Figura3_Definitiva_DC.pdf} \caption{{Operation of the Phase-Slip Memory with DC read-out.} \textbf{a} Sketch of the memory operation principle at a constant voltage bias ($V$). Low (blue, $I\textsubscript{[0]}$) and high (red, $I\textsubscript{[1]}$) current branches at the bias flux ($\Phi\textsubscript{B} \in (\Phi\textsubscript{0}/2,\Phi\textsubscript{B\textunderscore max})$) encode the [0] and [1] logic states, respectively. The erase (write) operation is performed by applying a flux pulse with amplitude $\Phi\textsubscript{E}>\Phi\textsubscript{B\textunderscore max}$ ($\Phi\textsubscript{W}<\Phi\textsubscript{B\textunderscore min}$). The memory can also be operated in the complementary part of the hysteresis at $\Phi\textsubscript{B} \in (\Phi\textsubscript{B\textunderscore min},\Phi\textsubscript{0}/2)$ by exchanging the erase and write fluxes. \textbf{b} Evolution of the read-out tunneling current (top panel) measured at $V=300\;\mu$V for $\Phi$ composed of a bias flux $\Phi\textsubscript{B}=0.54\Phi\textsubscript{0}$ (yellow trace) interrupted by write ($\Phi\textsubscript{W}=0.33\Phi\textsubscript{0}$, red) and erase ($\Phi\textsubscript{E}=0.75\Phi\textsubscript{0}$, blue) pulses (bottom panel). \textbf{c} Same as in \textbf{b}, but now the voltage bias (central panel, green trace) is applied only during the read-out operation to minimize power consumption and demonstrate the non-volatility of the memory cell. All the measurements were taken at $T=25$ mK. } \label{Fig3} \end{figure} \begin{figure*}[ht!] \includegraphics[width=0.9\linewidth]{FIG/Figura3_Definitiva_AC_prova.pdf} \caption{{Operation of the Phase-Slip Memory with AC read-out.} \textbf{a} Sketch of memory operation in the presence of a sinusoidal flux oscillation ($\Phi\textsubscript{AC}$, yellow trace) around $\Phi\textsubscript{B} \in (\Phi\textsubscript{0}/2,\Phi\textsubscript{B\textunderscore max})$.
\textbf{b} Evolution of the read-out current (top panel) measured at $V=300 \; \mu$V and $\Phi$ composed of a flux bias ($\Phi\textsubscript{B}=0.56\Phi\textsubscript{0}$) superimposed with a sinusoidal oscillation $\Phi\textsubscript{AC}=\pm0.04\Phi\textsubscript{0}$ (yellow trace in the bottom panel). Write ($\Phi\textsubscript{W}=0.32\Phi\textsubscript{0}$, red) and erase ($\Phi\textsubscript{E}=0.81\Phi\textsubscript{0}$, blue) flux pulses are applied to switch the logic state of the memory cell. Notice that the two current signals oscillate with a $\pi$ shift, making the phase of the AC signal a very sensitive read-out observable. Vertical dashed lines highlight the signals' phase shift with respect to the magnetic flux. \textbf{c} Demonstration of persistent memory operation at $\Phi\textsubscript{B}=\Phi\textsubscript{0}/2$ obtained by measuring the signal phase with a lock-in amplifier (top) every 4 hours and only when the read-out voltage is turned on ($V=300 \; \mu$V, bottom). State [1] was measured for almost 3 days showing no sign of degradation and low dissipation, $V$ being 0 for most of the time. The error bar was estimated from the root mean square of the sampled signal. All the data were recorded at $T=25$ mK.} \label{Fig4} \end{figure*} \subsection*{Memory operation with DC readout} The typical operation cycle of the PSM memory cell is sketched in Fig.~\ref{Fig3}a. A bias flux ($\Phi_B$) is required to access the multi-valued state enclosed within the hysteretic domain ($\Phi\textsubscript{B\textunderscore min} = (\Phi_0-\delta\Phi)/2 ,\Phi\textsubscript{B\textunderscore max}=(\Phi_0+\delta\Phi)/2$). Writing (erasing) operations are performed by lowering (increasing) the total flux below (above) the hysteretic domain by means of short pulses. As a consequence, the parity of the topological index switches between odd and even, and the tunneling current correspondingly switches between the low and high current states.
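As a minimal illustration of the operation cycle just described (our sketch, not the authors' code), the logic state can be modeled as the parity of the winding number, which toggles only when the total flux leaves the hysteretic domain. The domain bounds below are assumed values chosen to be consistent with the pulse amplitudes of Fig.~\ref{Fig3}:

```python
# Toy model of the PSM write/erase logic: the state is the parity of the
# winding number, toggled only when the applied flux leaves the hysteretic
# domain (PHI_B_MIN, PHI_B_MAX). Domain bounds are assumed, not measured.

PHI_B_MIN, PHI_B_MAX = 0.40, 0.70   # hysteretic domain, units of Phi0 (assumed)

def run_sequence(pulses, state=0):
    """Apply a sequence of flux values (units of Phi0); return visited states."""
    history = []
    for phi in pulses:
        if phi < PHI_B_MIN:
            state = 1            # write pulse: parity set to odd
        elif phi > PHI_B_MAX:
            state = 0            # erase pulse: parity set to even
        # inside the domain the state is only read, never changed
        history.append(state)
    return history

# bias at 0.54 Phi0, write pulse to 0.33, back to bias, erase pulse to 0.75
print(run_sequence([0.54, 0.33, 0.54, 0.75, 0.54]))   # prints [0, 1, 1, 0, 0]
```

The key point the sketch captures is that reading at the bias flux never alters the state; only pulses leaving the domain do.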
Figure \ref{Fig3}b shows a real-time writing/erasing operation in the continuous read-mode, i.e., with a fixed bias voltage $V=300\;\mu$V. The bias flux is set at $0.54\Phi_0$, just above the crossing-point of the hysteresis to avoid degeneracy in the current amplitude (see Fig. \ref{Fig2}c). The memory is then initialized in the [0] state corresponding to a current $I\simeq 43$ pA. By applying a negative flux pulse down to $\Phi\textsubscript{W}=0.33\Phi\textsubscript{0}$, the PSM logic state suddenly switches to [1], as detected by the current jump to $I\simeq 90$ pA. Conversely, the logic state [0] is recovered via a positive erasing flux pulse up to $\Phi\textsubscript{E}=0.75\Phi\textsubscript{0}$. The device unequivocally shows the typical behavior of a memory cell upon many erasing/writing cycles. From the real-time characteristic it is also possible to quantify the energy required for the writing/erasing operations. This can be estimated from the energy difference of the system in the two flux configurations, which can be simplified to $E(\Phi_{B_{max},B_{min}})-E(\Phi_{0})\simeq \frac{\Phi_0}{2 \mathcal{L}_{K}} \frac{\delta\Phi}{2}$, where $\mathcal{L}_{K}$ is the kinetic inductance of the JJ \cite{mooij_phase-slip_2005}. In our experimental configuration, the estimated energy is $\sim 0.1$~eV, consistent with the predictions for the energy of the topological barrier $U \sim \Delta \textsubscript{w} \frac{\hbar}{e^2 R_N} \frac{L}{\xi_w}$~\cite{virtanen_spectral_2016}. Notably, differing from conventional flux-based superconducting memories, the inductance of the PSM ring is not relevant for the device operation, and can be made negligibly small without any loss of hysteresis or functionality.
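The energy scales quoted in this and the following paragraphs can be reproduced numerically. The sketch below (ours, not the authors' code) uses the kinetic inductance and RC cutoff time from the Methods and the measured readout powers; the hysteresis width $\delta\Phi\approx 0.4\,\Phi_0$ is an assumed value read off Fig.~\ref{Fig2}:

```python
# Order-of-magnitude checks of the quoted energy scales.
PHI0 = 2.067e-15          # Wb, flux quantum
L_K = 18e-12              # H, kinetic inductance of the JJ (from Methods)
E_CH = 1.602e-19          # J per eV
delta_phi = 0.4 * PHI0    # Wb, hysteresis width (assumed from Fig. 2)

# write/erase energy: E ~ (Phi0 / 2 L_K) * (delta_Phi / 2)
E_we = PHI0 / (2 * L_K) * delta_phi / 2
print(f"write/erase energy ~ {E_we / E_CH:.2f} eV")   # ~0.1 eV scale, as quoted

# readout energy per bit: J = P * tau_R
tau_R = 30e-12            # s, RC cutoff time of the two tunnel junctions
for label, P in (("[0]", 25e-15), ("[1]", 40e-15)):   # W, measured powers
    print(f"J_{label} ~ {P * tau_R / E_CH * 1e6:.1f} ueV")
```

With these inputs the write/erase energy comes out at the $\sim 0.1$ eV scale and the readout energies at $\simeq 4.7$ and $\simeq 7.5$ $\mu$eV, matching the values in the text.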
This allows the miniaturization of the PSM, which could be further operated with a flux generated by supercurrents directly injected in a small portion of the superconducting ring~\cite{enrico_-chip_2019}, thereby eliminating the requirement of an external magnetic field, at the cost of an additional feed line integrated in the device. The ability of a memory cell to retain the data even when the power is temporarily turned off is called non-volatility, which, even if not essential for a RAM memory, is an added value for energy saving and data storage. The PSM requires two power sources: one to generate the bias flux $\Phi_B$ and one for the read-out signal. The former was provided by an external superconducting magnet controlled by a current source, and is therefore power dependent. To overcome this limitation, $\Phi_B$ could also be generated by a permanent dissipationless superconducting coil, by a metallic ferromagnetic layer buried in the semiconducting substrate, or by directly employing a ferromagnetic insulator as dielectric substrate\cite{strambini_revealing_2017,de_simoni_toward_2018}. Alternatively, a proper phase bias might be generated with an additional ferromagnetic pi-junction~\cite{ryazanov_coupling_2001} inserted in the ring or through a phase-battery~\cite{strambini_josephson_2020}. The read-out voltage is only required to probe the resistance state of the PSM. As demonstrated in Figure \ref{Fig3}c, temporary and repeated measurements of both logic states do not affect the stored data, with a readout dissipation as low as $P_{[0]}\simeq25$ fW and $P_{[1]}\simeq40$ fW for logic states [0] and [1], respectively, and only limited by the noise of the current amplifier.
This low dissipated power, combined with the intrinsic cutoff time $\tau_{R}\simeq 30$ ps estimated from the RC circuit of the tunnel junctions (see Methods for details), yields a tiny predicted energy per bit readout, $J_{[0]}=P_{[0]}\tau_{R}\simeq 4.7$~$ \mu$eV and $J_{[1]}=P_{[1]}\tau_{R}\simeq 7.5 $~$ \mu$eV. These values could only be estimated, owing to the severe bandwidth limitations of the cryogenic filters. Similarly to rapid single flux quantum logic, the writing/erasing process is expected to occur with a switching time of $\sim 1$~ps, which is typical for small superconducting loops~\cite{golod_single_2015,zhao_compact_2018,ryazanov_magnetic_2012}. The PSM speed is therefore expected to be on par with current state-of-the-art superconducting memories both in the reading and in the writing/erasing process~\cite{vernik_magnetic_2013,gingrich_controllable_2016,golod_single_2015,madden_phase_2018,zhao_compact_2018}. \subsection*{Memory robustness and operation with AC readout} The robustness of the PSM against flux fluctuations is tested by superimposing on the working bias flux a sizable sinusoidal signal ($\Phi\textsubscript{AC}$, see Fig. \ref{Fig4}a). The PSM shows optimal stability with respect to flux oscillations, as shown in Fig. \ref{Fig4}b for $V=300\;\mu$V and $\Phi\textsubscript{B}=0.56\Phi\textsubscript{0}$. The memory preserves the stored state and keeps the readout values of the two logic states well separated for fluctuations up to at least $\Phi\textsubscript{AC}\simeq 0.08\Phi\textsubscript{0}$, i.e., $\sim 50\%$ of the hysteretic domain $\delta \Phi$ of the memory. Interestingly, thanks to the opposite sign of the magnetoconductance of the PSM in the two topological states (visible, for instance, in Fig.~\ref{Fig2}b and c), the AC flux modulation induces an AC response in the tunneling current which acquires a $\pi$ shift when switching between the two logic states [0] and [1].
This phase shift provides a complementary and efficient method to probe the parity of the JJ winding number, which is affected neither by the position of $\Phi_B$ within the hysteretic domain nor by the low visibility of the DC readout signal (see also Supplementary Figures~4 and 5 for more details). This makes it possible to operate the memory cell also at the degenerate point $\Phi_B = \Phi_0/2$, where the energies of the [0] and [1] states are equal, a basic condition to implement a phase-slip qubit~\cite{mooij_phase-slip_2005,mooij_superconducting_2006}. Therefore, the PSM provides an alternative low-frequency method for the qubit readout. With the phase-based readout, the persistence of the PSM has been tested for almost three days, as shown in Figure \ref{Fig4}c. The memory is initialized to logic state [1], and the readout is performed every 4 hours. No sign of signal degradation has been observed even after $\sim 3$ days of measurement, confirming the vanishing phase-slip rate ($\sim 10^{-289}$~Hz) estimated from our parameters~\cite{virtanen_spectral_2016,arutyunov_superconductivity_2008} (see Methods for details on the estimate). As a consequence, the memory error rate expected for quantum and thermally-activated phase slips is infinitesimally small, and errors can be generated only by large fluctuations ($\gtrsim \delta \Phi$) of the driving magnetic flux. The other source of error that might degrade the memory state is the reading current, which could switch the memory via inductive coupling to the ring or by quenching the superconductivity of the weak link, as commonly happens for superconducting kinetic-inductance memories\cite{ilin_supercurrent-controlled_2021}.
Differing from the latter, the high resistance of the probing tunnel barrier strongly limits the reading current to $\lesssim$~nA, i.e., much smaller than the current required for switching ($\sim$mA)\cite{enrico_-chip_2019} and than the critical current of the weak link ($\gtrsim \mu$A for an Al nanowire\cite{bours_unveiling_2020}). This also makes the error rate during the readout operation negligible. High temperature can degrade the performance of the PSM by increasing $\xi_w(T)$~\cite{tinkham_introduction_2004}, thereby lowering the JJ effective length and driving the nanowire junction towards the non-hysteretic single-valued CPR occurring for $L\lesssim3.5\xi\textsubscript{w}$~\cite{likharev_superconducting_1979,troeman_temperature_2008}. In addition, thermal activation can substantially increase the phase-slip rate in the vicinity of the transition (i.e., at $\Phi \lesssim \Phi_{B_{max}}$ and $\Phi \gtrsim \Phi_{B_{min}}$)~\cite{virtanen_spectral_2016}. Figure \ref{Fig5}a shows the evolution of the hysteresis loop at several bath temperatures ($T$). The hysteresis progressively fades out by increasing $T$, but persists up to $1.1$ K, which corresponds to $\sim 85\%$ of the nanowire critical temperature, with $\delta\Phi$ reduced to $\sim12\%$ of its base-temperature value (see Fig. \ref{Fig5}b). Consequently, also the contrast $\zeta(T)$ lowers by increasing $T$, as shown in Fig.~\ref{Fig5}c. Still, the visibility of the hysteresis loop at high temperatures demonstrates the strength of the PSM, with a substantial protection of the topological state even in the presence of a sizable amount of hot quasi-particles~\cite{little_decay_1967}. Although the low $\delta\Phi$ achieved at high temperature degrades the robustness of the memory with respect to flux noise, it also allows writing the memory cell with smaller fluxes, for a total cost of operation down to $\sim 10$~meV.
\begin{figure}[ht] \includegraphics[width=\linewidth]{FIG/Figura5.Definitiva.pdf} \caption{{Temperature dependence of the Phase-Slip Memory.} \textbf{a} Current modulation $I(\Phi)$ for several bath temperatures ($T$) at $V= 300$ $\mu$V. The hysteresis loop narrows and fades out by increasing the temperature, since the superconducting nanowire approaches the short-junction limit at high $T$. Inset: blow-up of the $I(\Phi)$ characteristics around $\Phi_0/2$ at $1.1$ K. Forward (purple) and backward (green) traces highlight the presence of hysteresis. \textbf{b} Temperature dependence of $\delta\Phi$ measured at $V= 300$ $\mu$V. $\delta\Phi$ monotonically decreases with temperature. \textbf{c} $\zeta$ vs $T$ for selected values of $V$. $\zeta$ drops with temperature, and by increasing $V$. Black lines in panels b and c are guides for the eye.} \label{Fig5} \end{figure} \section*{Discussion} In summary, we have envisioned and demonstrated an original persistent Josephson phase-slip memory cell which takes advantage of fluxoid quantization to codify two logic states in the topological index of the system, i.e., the parity of the superconducting winding number~\cite{strambini_-squipt_2016}. Differing from conventional superconducting loops~\cite{ilin_supercurrent-controlled_2021,murphy_nanoscale_2017,zhao_compact_2018}, here the separation between the two topological states is provided by the large phase-slip barrier, which is unique to long superconducting JJs \cite{little_decay_1967,virtanen_spectral_2016}. Moreover, its operation mechanism is completely independent of the size or inductance of the superconducting loop, thus allowing device miniaturization only limited by fabrication capabilities.
The memory exploits conventional superconductors, thereby avoiding the use of the complex ferromagnetic metals typical of present superconducting memories~\cite{ryazanov_magnetic_2012,gingrich_controllable_2016,baek_hybrid_2014,golod_single_2015,madden_phase_2018,vernik_magnetic_2013}. Notably, the performance of the PSM competes with state-of-the-art superconducting memories, with an extremely low energy dissipation per bit operation ($\sim 10^{-24}$~J and $\sim 10^{-20} $~J for readout and write, respectively) and high operation speed (up to $\sim 30 $~ps and $\sim 1 $~ps for readout and write, respectively). Thanks to the topological protection, the PSM shows endurance, persistence, and high-temperature operation (up to $\sim 1 $~K), only limited by the Al critical temperature. The use of vanadium~\cite{ligato_high_2017} or niobium~\cite{jabdaraghi_low-temperature_2016}, therefore, could extend the memory operation above liquid-He temperature, and further promote miniaturization thanks to the lower coherence length of these metals with respect to Al. In addition, our phase-based read-out scheme ensures protection against magnetic flux fluctuations, and provides ideal visibility in all the operation ranges. In fact, despite being intrinsically slower than conventional methods (high-speed lock-in amplifiers nowadays reach a clock frequency of about $\sim 600$~MHz), the phase-based readout can be a valuable approach for the readout of phase-slip qubits. Furthermore, scalability to large arrays of PSM cells might be designed by taking advantage of the well-known architectures employed for transition-edge sensors, since both devices are based on a precise resistance measurement. In particular, frequency-domain multiplexing or microwave resonators together with SQUID amplifiers \cite{ullom_review_2015} could be used for the selective read-out of each PSM composing the total memory.
Sneak currents can be avoided by employing strongly non-linear resistors between each single memory unit, such as superconductor/insulator/normal metal/insulator/superconductor Josephson junctions. Integrating superconducting current feed lines in the ring\cite{enrico_-chip_2019} will also allow scaling the write procedure, with the additional cost of wiring complexity. Yet, the presence of independent write and read lines, with the former characterized by a low impedance, increases the stability against perturbations of the read current and might simplify the integration of the PSM with existing superconducting logic elements, including rapid single flux quantum\cite{golod_single_2015,zhao_compact_2018,ryazanov_magnetic_2012}, reciprocal quantum logic~\cite{Herr}, quantum flux parametrons~\cite{Hosoya}, Josephson field-effect transistors~\cite{Doh}, and gate-controlled cryotrons~\cite{de_simoni_metallic_2018,Paolucci1, Paolucci2}. Moreover, the strong topological protection and stability observed in the PSM make our approach promising in light of the implementation of phase-slip flux qubits \cite{mooij_phase-slip_2005,mooij_superconducting_2006} and quantum memories. \section*{Methods} \label{sec:Methods} \subsection*{Device fabrication details.} \label{sec: Device fabrication details} The hybrid memory cells were realized by a shadow-mask lithography technique. The suspended resist mask was defined by electron-beam lithography (EBL) onto a SiO$\textsubscript{2}$ wafer. All metal-to-metal clean interfaces and metal-to-oxide barriers were realized in an ultra-high vacuum (UHV) electron-beam evaporator (EBE) with a base pressure of 10$\textsuperscript{-11}$ Torr, equipped with a tiltable sample holder suitable for multi-directional depositions. In order to obtain transparent wire/ring interfaces, which is crucial for the device operation, the use of the same material is strongly recommended\cite{ronzani_phase-driven_2017}.
Therefore, the nanowire and the ring of the PSM were realized with aluminum. Furthermore, the Al film evaporation is relatively simple, and its high-quality native oxide allows the realization of good tunnel barriers through oxygen exposure at room temperature. At first, 15 nm of Al$\textsubscript{0.98}$Mn$\textsubscript{0.02}$ were evaporated at an angle of -18$^\circ$ to realize the normal metal electrode. Subsequently, the sample was exposed to 60 mTorr of O$\textsubscript{2}$ for 5 min in order to form the thin insulating AlMnOx layer. Next, the sample holder was tilted to 10$^\circ$ for the deposition of 20 nm of Al realizing the SQUIPT nanowire (length ${L}$ = 400 nm, width $w=90$ nm and thickness $t=25$ nm) and the superconducting electrodes. Finally, a thicker layer of Al ($t_{R}=70$ nm) was evaporated at 0$^\circ$ to realize the superconducting loop of circumference $\sim7.6\;\mu$m, and average width $w_{R,ave}\simeq600$ nm. \subsection*{Magneto-electric characterization.} \label{sec:Magneto-electrical characterization} The magneto-electric characterization of the samples was performed at cryogenic temperatures in a $\textsuperscript{3}$He-$\textsuperscript{4}$He dilution refrigerator (Triton 200, Oxford Instruments) equipped with RC-filters of resistance $\sim$ 2k$\Omega$. The out-of-plane magnetic field was applied via a superconducting magnet driven by a low-noise current source (Series 2600, Keithley Instruments). The DC measurements were performed in a two-wire voltage-bias configuration through a low-noise voltage DC source (GS200, Yokogawa) coupled with a room-temperature current preamplifier (Model 1211, DL Instruments) (see Fig. 1-c). The AC characterization was performed via a combination of DC bias and low-frequency lock-in technique. A DC bias voltage ($V$) was applied to the device. A current given by the sum of a DC and AC sinusoidal modulation energized the superconducting magnet. 
The read-out current oscillations induced by the variation of $\Phi$, and the phase of the signal (with respect to the flux oscillations), were recorded by a lock-in amplifier (SR830, Stanford Research Systems). Further details on the readout scheme can be found in Note 5 of the Supplementary Information. \subsection*{Device parameters.} \label{sec:Device parameters} Based on the device structure, we estimate the zero-temperature nanowire coherence length $\xi\textsubscript{w,0}$ = $\sqrt{\hbar D/\Delta \textsubscript{w,0}} \simeq 65$ nm, where $\hbar$ is the reduced Planck constant, ${D} \simeq {18}$ cm$\textsuperscript{2}$s$\textsuperscript{-1}$ is the diffusion coefficient, and $\Delta \textsubscript{w,0}\simeq$ 200 $\mu$eV is the zero-temperature gap in Al. The nanowire critical temperature is $T_{C,w}=\Delta \textsubscript{w,0}/1.764k_B\simeq1.31$ K, where $k_B$ is the Boltzmann constant. At low temperature, the ratio ${L}/ \xi\textsubscript{w,0}\simeq 6$ confirms the long-JJ regime for the PSM~\cite{likharev_superconducting_1979}. The single-valued CPR limit (achieved for $\xi_{w,short}\gtrsim L/3.5\sim 114$ nm) is reached at the temperature $T_{short}=T_{C,w}(1-0.852^2\frac{\xi_{w,0}l}{\xi^2_{w,short}})\sim 1.29$~K~\cite{likharev_superconducting_1979}, where $l=3D/v_F\simeq 3$ nm is the nanowire mean free path, and $v_F=2.03\times10^6$~m/s is the Fermi velocity of Al. The kinetic inductance ($\mathcal{L}_K$) of a long JJ depends on the geometry and superconducting properties of the nanowire\cite{virtanen_spectral_2016}. In our case, at $25$ mK it takes the value $\mathcal{L}_K=\frac{R_N\hbar}{\pi\Delta \textsubscript{w}}\frac{1}{\tanh{\frac{\Delta \textsubscript{w}}{2k_B T}}}\simeq18$~pH~\cite{meservey_measurements_1969}.
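Two of the values just quoted can be cross-checked numerically. The sketch below (ours, not the authors' code) evaluates $T_{C,w}$ and $\mathcal{L}_K$ from the formulas above, using the quoted gap and the nanowire normal-state resistance $R_N = 17\;\Omega$:

```python
import math

# Cross-check of T_C,w = Delta/(1.764 k_B) and
# L_K = (R_N hbar / pi Delta) / tanh(Delta / 2 k_B T) at T = 25 mK.
HBAR = 1.0546e-34          # J s
K_B = 1.381e-23            # J/K
E_CH = 1.602e-19           # C (and J per eV)
DELTA = 200e-6 * E_CH      # J, zero-temperature Al gap
R_N = 17.0                 # Ohm, nanowire normal-state resistance (quoted)
T = 0.025                  # K

T_C = DELTA / (1.764 * K_B)
L_K = (R_N * HBAR / (math.pi * DELTA)) / math.tanh(DELTA / (2 * K_B * T))
# ~1.31 K and ~18 pH, as quoted (small rounding differences expected)
print(f"T_C,w = {T_C:.2f} K, L_K = {L_K * 1e12:.1f} pH")
```

At 25 mK the $\tanh$ factor is essentially 1, so $\mathcal{L}_K$ reduces to $R_N\hbar/\pi\Delta_w$.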
The nanowire normal-state resistance is given by $R_N=\frac{L}{wt\sigma}\simeq17\;\Omega$, where $\sigma=DN_fe^2 \simeq 1 \times10^7$ S m$^{-1}$ is the Al film conductivity (with $N_f=2.15\times10^{47}$ J$^{-1}$m$^{-3}$ the density of states at the Fermi energy of Al). Analogously, the ring total inductance (including both the geometric and kinetic contributions) takes the value $\mathcal{L}_R \sim$ 1 pH~\cite{ronzani_phase-driven_2017} (with normal-state resistance $R_{R}\simeq1.4\;\Omega$). The contribution of the ring to the total inductance of the SQUIPT yields a screening parameter $\beta=\mathcal{L}_R/\mathcal{L}_K \lesssim 0.1$. The small $\beta$ cannot account for the hysteretic behavior of the PSM, which stems, instead, from the long-junction regime of the Josephson nanowire. The writing/erasing time ($\tau_{W,E}$) is mainly due to the time required to polarize the SQUIPT with the external flux. It is given by $\tau_{W,E}=\mathcal{L}_{SQUIPT}/R_{SQUIPT}\sim $ 1 ps, where $\mathcal{L}_{SQUIPT}=\mathcal{L}_{K}+\mathcal{L}_{R}$ and $R_{SQUIPT}=R_{N}+R_{R}$ are the total inductance and resistance of the SQUIPT, respectively. The read-out time ($\tau_{R}$) is predominantly limited by the characteristic times of the two tunnel barriers, $\tau_{R}=\tau_{t1}+\tau_{t2}\sim 30$ ps, where $\tau_{t1}=R_{t1}C_{t1}\sim 20$ ps is the characteristic time of the first tunnel junction, and $\tau_{t2}=R_{t2}C_{t2}\sim 10$ ps is the time constant of the second junction. The junction capacitances ($C_{t1}\sim 0.3$ fF and $C_{t2}\sim 0.1$ fF) are estimated from the area and the typical specific capacitance of AlOx tunnel barriers, $\sim 50$ fF/$\mu$m$^2$. \subsection*{Phase-slip rates} Stochastic phase slips are possible via quantum tunneling and thermal activation. Both scale exponentially with the phase-slip barrier, the former as $-U/\Delta\textsubscript{w,0}$ and the latter as $-U/k_B T$.
Both of them are small for $R_{\xi} < R_{q}$ (where $R_{\xi}=R_N \xi_{w}/L$), as demonstrated in the following. The quantum phase-slip rate is~\cite{mooij_phase-slip_2005}: \begin{equation} \Gamma_{qps} = \Omega_{qps} \exp\left(-0.3 \frac{R_q}{R_{\xi}}\right), \end{equation} where $\Omega_{qps} \simeq 0.85 \frac{\Delta\textsubscript{w}}{\hbar} \frac{L}{\xi_w} \sqrt{\frac{R_q}{R_{\xi}}} \simeq 75 $~THz is the quantum phase-slip attempt frequency. With the parameters of our experiment we obtain the negligibly small $\Gamma_{qps} \sim 2\times 10^{-289}$~Hz. The thermally-activated phase-slip rate reads~\cite{arutyunov_superconductivity_2008}: \begin{equation} \Gamma_{TAPS} = \Omega_{TAPS} \exp\left(-\frac{\delta F}{k_B T}\right), \end{equation} where $\delta F = 2.7 \frac{T_c-T}{T} U $ is the free-energy difference of the potential barrier and $\Omega_{TAPS} \simeq 5.5 \frac{k_B T}{\hbar} \frac{L}{\xi_w} \sqrt{\frac{\delta F}{k_B T}} $ is the attempt frequency. In the temperature range of the experiment, $ T \ll T_c$, $\Gamma_{TAPS}$ is expected to be even smaller than $\Gamma_{qps}$. As an example, at $T= 100 $~mK the attempt frequency is $\Omega_{TAPS} \simeq 500$~THz and $\Gamma_{TAPS} \sim 10^{-474257}$~Hz. From these equations it is possible to see that $\Gamma_{TAPS}$ is relevant only at temperatures very close to $T_c$. \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Acknowledgements} The authors acknowledge M. Cuoco and P. Virtanen for fruitful discussions. N.L., E.S., and F.G. acknowledge partial financial support from the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant No. 615187-COMANCHE. N.L., E.S., and F.G. were partially supported by EU’s Horizon 2020 research and innovation program under Grant Agreement No. 800923 (SUPERTED). The work of F.P. was partially supported by the Tuscany Government (Grant No. POR FSE 2014-2020) through the INFN-RT2 172800 project.
The authors acknowledge the European Union (Grant No. 777222 ATTRACT) through the T-CONVERSE project. \section*{Author contributions} \label{sec:Author contributions} E.S. and F.G. conceived the experiment. N.L. fabricated the samples with inputs from F.P.. N.L. and E.S. performed the measurements. N.L. analyzed the experimental data with inputs from E.S. and F.G.. All the authors discussed the results and their implications equally at all stages and wrote the manuscript. \section*{Competing Interests} \label{sec: Additional information} The authors declare no competing interests. \bibliographystyle{naturemag_NoURL}
\section{Introduction}\label{sec:introduction} The equation of state (EoS) of strongly interacting matter is one of the key observables characterizing properties of matter under extreme conditions. It encodes information on the phase structure of Quantum Chromodynamics (QCD). One way of delineating the EoS is through the speed of sound, $c_s^2 \equiv \mathrm{d} P /\mathrm{d} \epsilon$. It has been conjectured that the speed of sound is bounded by $c_s^2 < 1/3$ (the conformal bound). This conjecture was in fact confirmed in ab initio calculations of lattice QCD (LQCD) at finite temperature and small net-baryon density~\cite{Borsanyi:2012cr,Bazavov:2014pvz,Borsanyi:2013bia,Borsanyi:2010cj,Karsch:2006xs,Gavai:2004se}. However, the domain of low temperature and high net-baryon density is still inaccessible via LQCD methods due to the infamous sign problem. This corner of the QCD phase diagram is of utmost importance for the understanding of extraterrestrial observations, particularly for the study of neutron stars (NSs), their mergers~\cite{Bauswein:2018bma}, and supernovae~\cite{Fischer:2017lag}. Until recently, the progress in constraining the EoS at low temperature and high density was driven mostly by several discoveries of high-mass NSs~\cite{Demorest:2010bx,Antoniadis:2013pzd,Fonseca:2016tux,Cromartie:2019kug}. These observations have set an important constraint on the maximum mass of a NS. More remarkably, the first ever detection of gravitational waves from the compact star merger GW170817~\cite{Abbott:2018exr}, the second detection from GW190425~\cite{Abbott:2020uma}, as well as the NICER observation of the millisecond pulsar PSR~J0030+451~\cite{Riley:2019yda,Miller:2019cac}, delivered simultaneous measurements of masses and radii. Perturbative calculations of cold but dense QCD show that the speed of sound complies with the conformal bound~\cite{Kurkela:2009gj}, although they are reliable only at densities far beyond those realized in astrophysical objects.
Contrary to these calculations, as well as the LQCD predictions, several recent analyses provided compelling reasons to expect that the conformal bound has to be violated at densities realized in the interior of NSs in order to support the observed NS properties~\cite{Bedaque:2014sqa,Tews:2018kmu}. The advancement of nuclear theory over the years has also tightened the constraints on the EoS over a wide range of densities. This has been achieved by systematic analyses of new astrophysical observations within simplistic approaches, such as the constant-speed-of-sound (CSS) model~\cite{Alford:2013aca} or multipolytropic class of EoSs~\cite{Hebeler:2013nza,Read:2008iy,Alvarez-Castillo:2017qki}. Recently, the interplay between the high-mass constraint (which requires high pressure) and the upper limit on the compactness from GW170817 event (which favors soft pressure) was used to derive a lower bound constraint on the maximal value of the speed of sound in the cold and dense matter EoS of a NS~\cite{Reed:2019ezm}. This new constraint strengthens previous expectations that the conformal bound is likely to be violated at densities realized inside NSs. The lower bound of the speed of sound derived in~\cite{Reed:2019ezm} complies with constraints valid at various density regimes. This is particularly useful for determining a class of effective models in which the low-density and high-density regimes are not treated independently, but rather combined in a consistent unified framework. To this end, we employ the hybrid quark-meson-nucleon (QMN) model~\cite{Benic:2015pia,Marczenko:2017huu,Marczenko:2018jui,Marczenko:2019trv,Marczenko:2020jma} to quantify the EoS of cold and dense matter under NS conditions. The model has the characteristic feature that, at increasing baryon density, the chiral symmetry is restored within the hadronic phase by lifting the mass splitting between chiral partner states, before the quark deconfinement takes place. 
Quark degrees of freedom are included on top of hadrons, but their unphysical onset is prevented at low densities. This is achieved by an auxiliary scalar field which couples to both nucleons and quarks. This field serves as a momentum cutoff in the Fermi-Dirac distribution functions, thus it suppresses the unphysical thermal fluctuations of fermions, with the strength linked to the density. Our main focus is put on the role of the dynamical quark confinement in constraining the EoS of cold and dense matter under NS conditions. This paper is organized as follows. In Sec.~\ref{sec:hybrid_qmn}, we introduce the hybrid quark-meson-nucleon model. In Sec.~\ref{sec:results}, we discuss the obtained numerical results on the equation of state under neutron-star conditions and neutron-star relations, and we confront them with recent observations and constraints. Finally, Sec.~\ref{sec:conclusions} is devoted to the summary and conclusions. \section{Hybrid quark-meson-nucleon model}\label{sec:hybrid_qmn} In this section, we briefly introduce the hybrid QMN model for the chiral symmetry restoration and deconfinement phase transitions~\cite{Benic:2015pia,Marczenko:2017huu,Marczenko:2018jui,Marczenko:2019trv,Marczenko:2020jma}. The hybrid QMN model is composed of the baryonic parity doublet~\cite{Detar:1988kn,Jido:1999hd,Jido:2001nt} and mesons as in the Walecka model~\cite{Walecka:1974qa}, as well as quark degrees of freedom as in the standard linear sigma model~\cite{Scavenius:2000qd}. The spontaneous chiral symmetry breaking yields the mass splitting between the two baryonic parity partners, while it generates the entire mass of a constituent quark. In this work, we consider a system with $N_f=2$; hence, relevant for this study are the positive-parity nucleons, i.e., proton ($p_+$) and neutron ($n_+$), and their negative-parity partners, denoted as $p_-$ and $n_-$, as well as the up ($u$) and down ($d$) quarks. 
The fermionic degrees of freedom are coupled to the chiral fields $\left(\sigma, \boldsymbol\pi\right)$, the vector-isoscalar field ($\omega_\mu$), and the vector-isovector field ($\boldsymbol \rho_\mu$). The important concept of statistical confinement is realized in the hybrid QMN model by introducing a medium-dependent modification of the particle distribution functions. \begin{table*}[t!]\begin{center}\begin{tabular}{|c|c|c|c|} \hline $\rho_0~$[fm$^{-3}$] & $E/A - m_+$ [MeV] & $K$~[MeV] & $E_{\rm sym}$~[MeV] \\ \hline\hline $0.16$ & $-16$ & 240 & 31 \\ \hline \end{tabular}\end{center} \caption{Properties of the nuclear ground state at $\mu_B = 923~$MeV and the symmetry energy used in this work.} \label{tab:external_params} \end{table*} The thermodynamic potential of the hybrid QMN model in the mean-field approximation reads~\cite{Marczenko:2020jma} \begin{equation}\label{eq:thermo_pot_iso} \Omega = \sum_{x=p_\pm,n_\pm,u,d}\Omega_x + V_\sigma + V_\omega + V_\rho + V_b \textrm, \end{equation} where the summation goes over the fermionic degrees of freedom. The spin degeneracy factor $\gamma_x$ is $\gamma_\pm=2$ for both positive- and negative-parity nucleons, while the spin-color degeneracy factor for up and down quarks is $\gamma_q=2\times 3 = 6$.
The kinetic part, $\Omega_x$, reads \begin{equation}\label{eq:thermokin} \Omega_x = \gamma_x \int\frac{\mathrm{d}^3p}{\left(2\pi\right)^3} T \left[\ln\left(1-n_x\right) + \ln\left(1-\bar n_x\right)\right]\textrm, \end{equation} where the functions $n_x$ and $\bar n_x$ are the modified Fermi-Dirac distributions for nucleons \begin{subequations}\label{eq:cutoff_nuc} \begin{align} n_\pm &= \theta \left(\alpha^2 b^2 - \boldsymbol p^2\right) f_\pm \textrm,\\ \bar n_\pm &= \theta \left(\alpha^2 b^2 - \boldsymbol p^2\right) \bar f_\pm \end{align} \end{subequations} and for quarks \begin{subequations}\label{eq:cutoff_quark} \begin{align} n_q &= \theta \left(\boldsymbol p^2-b^2\right) f_q \textrm,\\ \bar n_q &= \theta \left(\boldsymbol p^2-b^2\right) \bar f_q \textrm, \end{align} \end{subequations} respectively. The model embeds the concept of statistical confinement through these modified Fermi-Dirac distribution functions, where $b$ is the expectation value of an auxiliary scalar field and $\alpha$ is a dimensionless model parameter. As demonstrated in Refs.~\cite{Benic:2015pia,Marczenko:2017huu,Marczenko:2018jui,Marczenko:2019trv,Marczenko:2020jma}, the parameter $\alpha$ also plays a crucial role in tuning the order of the chiral phase transition. From the definition of $n_\pm$ and $n_q$, it is evident that, in order to mimic the statistical confinement, the $b$ field should have a nontrivial vacuum expectation value, so as to suppress the quark degrees of freedom at low densities in the confined phase and to allow for their population at high densities in the deconfined phase. From Eqs.~\eqref{eq:cutoff_nuc} and~\eqref{eq:cutoff_quark}, one finds that the nucleons favor large $b$, whereas the quarks favor small $b$.
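The complementary cutoff structure of the modified distributions (nucleons allowed below $p=\alpha b$, quarks only above $p=b$) can be sketched as follows. The numerical values below are illustrative placeholders only, not fitted model parameters.

```python
import numpy as np

# Sketch of the modified distribution functions: nucleons are cut off
# above p = alpha*b, quarks below p = b. All numbers are illustrative.
def fermi(E, mu, T):
    return 1.0 / (1.0 + np.exp((E - mu) / T))

def n_nucleon(p, m, mu, T, alpha, b):
    E = np.sqrt(p**2 + m**2)
    return np.where(p**2 < (alpha * b)**2, fermi(E, mu, T), 0.0)

def n_quark(p, m, mu, T, b):
    E = np.sqrt(p**2 + m**2)
    return np.where(p**2 > b**2, fermi(E, mu, T), 0.0)

b, alpha = 400.0, 1.0   # illustrative b-field value [MeV]
# At low momenta only nucleons contribute; quarks are cut off:
assert n_quark(np.array([100.0]), 5.0, 900.0, 10.0, b)[0] == 0.0
assert n_nucleon(np.array([100.0]), 939.0, 1200.0, 10.0, alpha, b)[0] > 0.0
```

The asserts make explicit why a large vacuum value of $b$ suppresses quarks at low density: their occupation vanishes for all momenta below $b$.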
The functions $f,\bar f$ are the standard Fermi-Dirac distribution functions for particle and antiparticle, \begin{subequations} \begin{align} f_x &= \frac{1}{1+e^{\beta \left(E_x - \mu_x\right)}} \textrm,\\ \bar f_x &= \frac{1}{1+e^{\beta \left(E_x + \mu_x\right)}}\textrm, \end{align} \end{subequations} respectively, where $\beta$ is the inverse temperature and $E_x = \sqrt{\boldsymbol p^2 + m_x^2}$ is the dispersion relation. The effective chemical potentials for $p_\pm$ and $n_\pm$ are defined as \begin{subequations}\label{eq:u_eff_had_iso} \begin{align} \mu_{p_\pm} &= \mu_B - g^N_\omega\omega - \frac{1}{2}g^N_\rho \rho + \mu_Q\textrm,\\ \mu_{n_\pm} &= \mu_B - g^N_\omega\omega + \frac{1}{2}g^N_\rho \rho\textrm. \end{align} \end{subequations} The effective chemical potentials for up and down quarks are given by \begin{subequations}\label{eq:u_effq} \begin{align} \mu_u &= \frac{1}{3}\mu_B - g^q_\omega \omega - \frac{1}{2}g^q_\rho \rho + \frac{2}{3}\mu_Q\textrm,\\ \mu_d &= \frac{1}{3}\mu_B - g^q_\omega \omega + \frac{1}{2}g^q_\rho \rho - \frac{1}{3}\mu_Q\textrm. \end{align} \end{subequations} In Eqs.~\eqref{eq:u_eff_had_iso}~and~\eqref{eq:u_effq}, $\mu_B$ and $\mu_Q$ are the baryon and charge chemical potentials, respectively. \begin{table*}[t!]\begin{center}\begin{tabular}{|c|c|c|c|c|c|} \hline $m_+~$[MeV] & $m_-~$[MeV] & $m_\pi~$[MeV] & $f_\pi~$[MeV] & $m_\omega~$[MeV] & $m_\rho~$[MeV] \\ \hline\hline 939 & 1500 & 140 & 93 & 783 & 775 \\ \hline \end{tabular}\end{center} \caption{Physical vacuum inputs used in this work.} \label{tab:vacuum_params} \end{table*} The strength of $g^N_\omega$ is fixed by the nuclear saturation properties, while the value of $g^N_\rho$ can be fixed by fitting the value of the symmetry energy~\cite{glendenning00:book}. The properties of the nuclear ground state and the symmetry energy are shown in Table~\ref{tab:external_params}.
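The effective chemical potentials above carry a built-in consistency check: with the quark-vector couplings set to zero (as done below in the text) and vanishing mean fields, the quark chemical potentials of the valence content add up to the nucleon ones. A minimal sketch:

```python
# Sketch of the effective chemical potentials. Quark-vector couplings
# are set to zero, as adopted later in the text (g_w^q = g_rho^q = 0);
# the mu_B, mu_Q values below are arbitrary test inputs.
def mu_nucleons(mu_B, mu_Q, gw, grho, omega, rho):
    mu_p = mu_B - gw * omega - 0.5 * grho * rho + mu_Q
    mu_n = mu_B - gw * omega + 0.5 * grho * rho
    return mu_p, mu_n

def mu_quarks(mu_B, mu_Q):
    mu_u = mu_B / 3.0 + 2.0 / 3.0 * mu_Q
    mu_d = mu_B / 3.0 - 1.0 / 3.0 * mu_Q
    return mu_u, mu_d

# Consistency at vanishing mean fields: p = uud, n = udd.
mu_B, mu_Q = 1200.0, -120.0
mu_p, mu_n = mu_nucleons(mu_B, mu_Q, 7.26, 7.92, 0.0, 0.0)
mu_u, mu_d = mu_quarks(mu_B, mu_Q)
assert abs(2 * mu_u + mu_d - mu_p) < 1e-9
assert abs(mu_u + 2 * mu_d - mu_n) < 1e-9
```

With finite mean fields the hadronic and quark sectors shift differently, which is precisely what controls their relative onset in the model.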
On the other hand, the nature of the repulsive interaction among quarks and their coupling to the $\omega$ and $\rho$ mean fields are still far from consensus. To account for the uncertainty in the theoretical predictions, one may treat the couplings $g^q_\omega$ and $g^q_\rho$ as free parameters. As demonstrated in Ref.~\cite{Marczenko:2020jma}, the repulsive quark-vector interaction has consequences for the phenomenological description of compact stellar objects with masses around $2M_\odot$. In the current work, however, we are interested in NSs with masses of $1.4M_\odot$. Thus, we neglect the repulsive quark-vector interactions and set $g_\omega^q = g_\rho^q = 0$ for simplicity of discussion. The effective masses of the chiral partners, $m_{p_\pm} = m_{n_\pm} \equiv m_\pm$, are given by \begin{equation}\label{eq:doublet_masses} m_\pm = \frac{1}{2} \left[ \sqrt{\left(g_1+g_2\right)^2\sigma^2+4m_0^2} \mp \left(g_1 - g_2\right)\sigma \right] \textrm. \end{equation} The positive-parity nucleons are identified as the positively charged and neutral $N(938)$ states, i.e., proton ($p_+$) and neutron ($n_+$). Their negative-parity counterparts, denoted as $p_-$ and $n_-$, are identified as $N(1535)$~\cite{Tanabashi:2018oca}. From Eq.~\eqref{eq:doublet_masses}, it is clear that the chiral symmetry breaking generates only the splitting between the two masses. When the chiral symmetry is restored, the masses become degenerate with a common finite mass $m_\pm\left(\sigma=0\right) = m_0$, which reflects the parity doubling structure of the \mbox{low-lying} baryons.
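Eq.~\eqref{eq:doublet_masses} together with the parameter values quoted below (Tables~\ref{tab:vacuum_params} and~\ref{tab:model_params}: $g_1=13.75$, $g_2=7.72$, $m_0=700$~MeV, $f_\pi=93$~MeV) reproduces the vacuum masses and the chiral-limit degeneracy, which the following sketch verifies:

```python
import math

# Sketch of the parity-doublet mass formula with the parameter values
# quoted in the text (g1 = 13.75, g2 = 7.72, m0 = 700 MeV).
def doublet_masses(sigma, g1=13.75, g2=7.72, m0=700.0):
    root = math.sqrt((g1 + g2) ** 2 * sigma**2 + 4.0 * m0**2)
    m_plus = 0.5 * (root - (g1 - g2) * sigma)
    m_minus = 0.5 * (root + (g1 - g2) * sigma)
    return m_plus, m_minus

f_pi = 93.0
m_plus, m_minus = doublet_masses(f_pi)   # vacuum: sigma = f_pi
assert abs(m_plus - 939.0) < 1.0         # N(938) input mass
assert abs(m_minus - 1500.0) < 1.0       # N(1535) input mass
# Chiral restoration: both masses become degenerate at m0.
assert doublet_masses(0.0) == (700.0, 700.0)
```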
Following the previous studies of the \mbox{parity-doublet-based} models~\cite{Benic:2015pia,Marczenko:2017huu,Marczenko:2018jui,Marczenko:2019trv,Marczenko:2020jma,Zschiesche:2006zj,Motornenko:2019arp,Mukherjee:2016nhb,Dexheimer:2012eu,Steinheimer:2011ea,Weyrich:2015hha,Sasaki:2010bp,Yamazaki:2019tuo,Mukherjee:2017jzi,Ishikawa:2018yey,Steinheimer:2010ib}, as well as recent lattice QCD results~\cite{Aarts:2017rrl,Aarts:2018glk}, we choose a rather large value, $m_0=700$~MeV. The couplings $g_1$ and $g_2$ in Eq.~\eqref{eq:doublet_masses} can be determined by fixing the fermion masses in the vacuum. Their values used in this work are summarized in Table~\ref{tab:vacuum_params}. The quark effective mass, $m_u = m_d \equiv m_q$, is linked to the sigma field as \begin{equation}\label{eq:mass_quark} m_q = g_q \sigma \textrm. \end{equation} We note that, in contrast to the baryonic parity partners (cf. Eq.~\eqref{eq:doublet_masses}), quarks become massless as the chiral symmetry gets restored. The value of the coupling $g_q$ in Eq.~\eqref{eq:mass_quark} can be determined by assuming the quark mass to be $m_q = m_+/3$ in the vacuum. The potentials in Eq.~\eqref{eq:thermo_pot_iso} read \begin{subequations}\label{eq:potentials} \begin{align} V_\sigma &= -\frac{\lambda_2}{2}\left(\sigma^2 + \boldsymbol\pi^2\right) + \frac{\lambda_4}{4}\left(\sigma^2 + \boldsymbol\pi^2\right)^2 - \frac{\lambda_6}{6}\left(\sigma^2 + \boldsymbol\pi^2\right)^3- \epsilon\sigma \textrm,\label{eq:potentials_sigma}\\ V_\omega &= -\frac{m_\omega^2 }{2}\omega_\mu\omega^\mu\textrm,\\ V_\rho &= - \frac{m_\rho^2}{2}{\boldsymbol \rho}_\mu{\boldsymbol \rho}^\mu \textrm,\\ V_b &= -\frac{\kappa_b^2}{2}b^2 + \frac{\lambda_b}{4}b^4 \textrm,\label{eq:potentials_b} \end{align} \end{subequations} where $\lambda_2 = \lambda_4f_\pi^2 - \lambda_6f_\pi^4 - m_\pi^2$, and $\epsilon = m_\pi^2 f_\pi$.
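The relations $\lambda_2 = \lambda_4 f_\pi^2 - \lambda_6 f_\pi^4 - m_\pi^2$ and $\epsilon = m_\pi^2 f_\pi$ guarantee that $\sigma = f_\pi$ is a stationary point of $V_\sigma$ in the vacuum. This can be checked numerically with the parameter values of the tables below ($\lambda_4 = 33.74$, $\lambda_6 f_\pi^2 = 13.20$), a sketch rather than part of the model code:

```python
# Sketch verifying that sigma = f_pi extremizes the mesonic potential
# V_sigma once lambda_2 and epsilon are tied to the pion sector.
# lambda_6 is recovered from the quoted combination lambda_6*f_pi^2.
f_pi, m_pi = 93.0, 140.0            # MeV
lam4 = 33.74
lam6 = 13.20 / f_pi**2
lam2 = lam4 * f_pi**2 - lam6 * f_pi**4 - m_pi**2
eps = m_pi**2 * f_pi

def dV_dsigma(s):
    # derivative of V_sigma at pi = 0
    return -lam2 * s + lam4 * s**3 - lam6 * s**5 - eps

# The vacuum sits at sigma = f_pi by construction:
assert abs(dV_dsigma(f_pi)) < 1e-6 * abs(eps)
```

The cancellation is exact algebraically; the tolerance only absorbs floating-point rounding.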
$m_\pi$, $m_\omega$, and $m_\rho$ are the $\pi$, $\omega$, and $\rho$ meson masses, respectively, and $f_\pi$ is the pion decay constant. The parameters $\lambda_4$ and $\lambda_6$ are fixed by the properties of the nuclear ground state. Numerical values of all model parameters are summarized in Table~\ref{tab:model_params}. \begin{table*}[t!]\begin{center}\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $\lambda_4$ & $\lambda_6f_\pi^2$ & $g^N_\omega$ & $g^N_\rho$ & $g_1$ & $g_2$ & $g_q$ & $\kappa_b~$[MeV] & $\lambda_b$ \\ \hline\hline 33.74 & 13.20 & 7.26 & 7.92 & 13.75 & 7.72 & 3.36& 155 & 0.074\\ \hline \end{tabular}\end{center} \caption{Numerical values of the model parameters. The values of $\lambda_4$, $\lambda_6$, and $g^N_\omega$ are fixed by the nuclear ground state properties, $g^N_\rho$ by the symmetry energy, and $g_q$ by the vacuum quark mass (see the text). The remaining parameters, $\kappa_b$ and $\lambda_b$, are fixed following Ref.~\cite{Marczenko:2017huu}.} \label{tab:model_params} \end{table*} In-medium profiles of the mean fields are obtained by extremizing the thermodynamic potential~in Eq.~\eqref{eq:thermo_pot_iso}. In the grand canonical ensemble, the~thermodynamic pressure is obtained from the thermodynamic potential as \mbox{$P = -\Omega + \Omega_0$}, where $\Omega_0$ is the value of the thermodynamic potential in the vacuum. The~net-baryon number density for a species $x$ is defined as \begin{equation} \rho^x_B = -\frac{\partial \Omega_x}{\partial \mu_B} \textrm, \end{equation} where $\Omega_x$ is the kinetic term in Eq.~\eqref{eq:thermokin}. The~total net-baryon number density~reads \begin{equation} \rho_B = \rho_B^{n_+} + \rho_B^{n_-} + \rho_B^{p_+} + \rho_B^{p_-} + \rho_B^{u} + \rho_B^{d} \textrm. \end{equation} In the following section, the equation of state of strongly interacting matter obtained in the above hybrid QMN model will be applied to identify the properties of compact stellar objects such as NSs.
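In practice, the density $\rho_B = -\partial\Omega/\partial\mu_B$ can be obtained from the numerically evaluated grand potential by a finite difference. The sketch below uses a toy potential $\Omega = -a\,\mu_B^4$ (an assumption standing in for the full mean-field $\Omega$), whose exact density $\rho_B = 4a\mu_B^3$ makes the accuracy of the derivative easy to check:

```python
# Sketch of extracting the net-baryon density rho_B = -dOmega/dmu_B
# by a central finite difference. The toy potential Omega = -a*mu^4 is
# a stand-in for the full mean-field grand potential.
a = 1.0e-6

def Omega(mu_B):
    return -a * mu_B**4

def rho_B(mu_B, h=1.0e-3):
    return -(Omega(mu_B + h) - Omega(mu_B - h)) / (2.0 * h)

mu = 900.0
exact = 4.0 * a * mu**3
assert abs(rho_B(mu) - exact) / exact < 1e-8
```

The same differentiation applied species by species yields the partial densities $\rho_B^x$ summed in the total net-baryon density above.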
The allowed range for the $\alpha$ parameter is $\alpha b_0 = 300 - 450~$MeV~\cite{Benic:2015pia,Marczenko:2017huu}, where $b_0$ denotes the vacuum expectation value of the $b$-field. Following our previous works, we choose four representative values within that interval: $\alpha b_0 = 350,~370,~400,~450~$MeV, to systematically study the phenomenology of compact stellar objects. \begin{figure} \resizebox{0.5\columnwidth}{!}{\includegraphics{figures/p_e}} \resizebox{0.496\columnwidth}{!}{\includegraphics{figures/cs2_700_max}} \caption{Left panel: Thermodynamic pressure, $P$, under the NS conditions of $\beta$-equilibrium and charge neutrality, as a function of the energy density, $\epsilon$. Right panel: The square of the speed of sound, $c_s^2$, as a function of the baryon density, $\rho_B$, in the units of saturation density, in the vicinity of the chiral phase transition. The first-order phase transitions are seen in the left panel as plateaux of constant pressure and as vanishing speed of sound in the right panel. In the left panel, the yellow-shaded region marks the constraint obtained by Hebeler {\it et al.}~\cite{Hebeler:2013nza}. In the right panel, the open symbols show the central densities of corresponding $1.4M_\odot$ NSs, and the filled symbols show the corresponding densities at which the maximal value of the speed of sound within $1.4M_\odot$ NSs is reached. Results in both panels are obtained for $m_0 = 700~$MeV and four representative values of the parameter $\alpha$.}\label{fig:p_e_cs2} \end{figure} \section{Results}\label{sec:results} The composition of NS matter requires $\beta$-equilibrium, as well as the charge neutrality condition. To this end, we include electrons and muons as gases of free relativistic particles. 
In the left panel of Fig.~\ref{fig:p_e_cs2}, we show the calculated EoSs for $m_0=700~$MeV, as functions of the energy density, for different values of the $\alpha$ parameter, namely $\alpha b_0=350~$MeV (dashed, magenta line), $\alpha b_0=370~$MeV (solid, green line), $\alpha b_0=400~$MeV (dash-dotted, red line), and $\alpha b_0=450~$MeV (dotted, blue line). The EoSs shown feature chiral phase transitions, defined as a jump in the $\sigma$-field expectation value, which causes the parity partners to become almost degenerate with mass $m_\pm = m_0$. The mechanism of statistical confinement introduced in the previous section has a prominent impact on the stiffness of the class of EoSs obtained in the model. Namely, higher values of the parameter $\alpha$ yield weaker first-order transitions, triggered at higher densities, which eventually turn into a smooth crossover. As the density increases, the EoSs feature two further sequential transitions: the first associated with the onset of the down quark and the second with the onset of the up quark, after which the matter is fully deconfined and comprised solely of quarks. We note that this sequential appearance stands in contrast to the isospin-symmetric case, where the quarks deconfine simultaneously, owing to the isospin symmetry~\cite{Marczenko:2020jma}. Such a separation of the chirally broken and the deconfined phase might indicate the existence of a quarkyonic phase, where the quarks are partly confined to form a Fermi sphere, but the relevant degrees of freedom around the Fermi surface remain the nucleons with restored chiral symmetry~\cite{Hidaka:2008yy,McLerran:2008ua,Andronic:2009gj,McLerran:2018hbz,Jeong:2019lhv,Zhao:2020dvu}. We note that the discussed EoSs are in good agreement with the maximum-mass constraint obtained by using a multi-polytrope ansatz for the EoS above the saturation density, shown as the yellow-shaded region in the left panel of Fig.~\ref{fig:p_e_cs2}.
Interestingly, the chiral phase transitions and the deconfinement of the down quark lie within this region. We note that, in principle, the inclusion of the repulsive quark-vector coupling has an impact only on the high-density part of the EoS, when compared to the case with vanishing coupling. Namely, it extends the hadronic branch to higher densities and, simultaneously, shifts the appearance of the quarks (see~\cite{Marczenko:2020jma} for details). The density jumps associated with the chiral symmetry restoration and the consequent onsets of the down and up quarks featured in the class of EoSs obtained in the model are listed in Table~\ref{tab:jumps}. \begin{table*}[t!]\begin{center}\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{$\alpha b_0~$ [MeV]} \\ \hline $350$ & $370$ & $400$ & $450$ \\ \hline\hline $1.82 - 2.60$ & $2.14 - 2.76$ & $2.61 - 2.92$ & $3.56$ \\ $4.98 - 6.11$ & $5.84 - 6.21$ & $5.10$ & $4.82 - 6.04$ \\ $9.21 - 13.58$ & $11.03 - 15.42$ & $16.40 - 19.25$ & $10.84$ \\ \hline \end{tabular}\end{center} \caption{Baryon density ranges of the coexistence phases associated with the chiral restoration (top) and the onsets of the down (middle) and up (bottom) quarks under the neutron-star conditions, in units of the saturation density, $\rho_0$, for $m_0=700~$MeV and different values of $\alpha b_0$. In the cases where the transitions proceed as smooth crossovers, a single value is given.} \label{tab:jumps} \end{table*} In the right panel of Fig.~\ref{fig:p_e_cs2}, we show the calculated speed of sound, $c_s^2 \equiv \mathrm{d} P / \mathrm{d} \epsilon$, in units of the speed of light, as a function of the baryon number density, $\rho_B$. For $\alpha b_0=350,~370,~400~$MeV, the coexistence regions of the chirally broken and restored phases due to the first-order phase transitions are seen as regions of vanishing speed of sound. For $\alpha b_0=450~$MeV, the chiral phase transition proceeds as a smooth crossover and is seen as a dip around $3.5\rho_0$.
We note that the sequential onsets of the down and up quarks are triggered at higher densities and are not shown in the figure. The notable rapid increase of the speed of sound in each curve, before the chiral phase transition takes place, is a result of the stiffening mechanism that arises due to the statistical confinement implemented in the model (cf. Eq.~\eqref{eq:cutoff_nuc}). We use the obtained EoSs in the mean-field approximation to solve the general-relativistic Tolman-Oppenheimer-Volkoff (TOV) equations for spherically symmetric objects at zero temperature, in order to obtain the mass-radius relations for NSs. In the left panel of Fig.~\ref{fig:m_r_cs2}, we show the mass-radius relations calculated from the introduced EoSs, together with the state-of-the-art constraints: the high-mass measurement of PSR J0740+6620~\cite{Cromartie:2019kug}, the two recent GW170817~\cite{Abbott:2018exr} and GW190425~\cite{Abbott:2020uma} events, and the mass-radius constraint obtained for J0030+0451~\cite{Miller:2019cac}. The agreement with all of the constraints is good. Notably, the chiral phase transitions (shown as circles) are featured within the mass and radius region accessible by GW190425, at around $1.8M_\odot$. We note that the presented results are calculated for vanishing quark-vector interactions, i.e., $g_\omega^q = g_\rho^q = 0$. A finite quark-vector coupling leads to an overall improvement of the phenomenological description of compact objects around $2M_\odot$ (see Ref.~\cite{Marczenko:2020jma} for details). In general, there is a tension between the constraints from high-mass measurements and gravitational-wave observations. On the one hand, it has long been known that the existence of $2M_\odot$ NSs is supported by EoSs that are stiff enough at high densities above $2\rho_0$. On the other hand, the deformability constraint favors EoSs that are soft around $\sim1-2\rho_0$.
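The mapping from an EoS to a mass-radius point proceeds by integrating the TOV equations outward from a chosen central pressure. The following is a minimal Euler-step sketch in geometrized units ($G=c=1$); the linear toy EoS $p = K(\epsilon - \epsilon_0)$ and all numbers are assumptions standing in for an interpolated model table:

```python
import math

# Minimal TOV integrator sketch (G = c = 1). The linear toy EoS
# p = K*(e - e0) is a stand-in for the model EoS; a realistic
# calculation would interpolate a tabulated EoS instead.
def eos_e_of_p(p, K=0.3, e0=1.0e-3):
    return p / K + e0

def tov_solve(p_c, dr=1.0e-2):
    r, m, p = dr, 0.0, p_c
    while p > 1.0e-12:
        e = eos_e_of_p(p)
        dm = 4.0 * math.pi * r**2 * e
        dp = -(e + p) * (m + 4.0 * math.pi * r**3 * p) \
             / (r * (r - 2.0 * m))
        m += dm * dr
        p += dp * dr
        r += dr
    return r, m   # radius and gravitational mass (geometrized units)

R, M = tov_solve(5.0e-5)
assert M > 0.0 and R > 0.0
assert 2.0 * M / R < 1.0   # star stays outside its Schwarzschild radius
```

Repeating the integration over a range of central pressures traces out the mass-radius sequences shown in the left panel of Fig.~\ref{fig:m_r_cs2}; production codes use higher-order integrators and careful surface handling.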
In~\cite{Reed:2019ezm}, the interplay between constraints from these two types of measurements was used to derive a lower limit on the speed of sound in a $1.4M_\odot$ NS within a class of simplistic constant-speed-of-sound (CSS) EoSs. The constraint is shown in the right panel of Fig.~\ref{fig:m_r_cs2} as a function of radius, $R_{1.4}$, of $1.4M_\odot$ NS (yellow-shaded region). The speed of sound monotonically decreases as $R_{1.4}$ increases. However, in principle, it is rather unlikely that the speed of sound is independent of density. Thus, if at some density the speed of sound is below the value needed to comply with the maximum-mass constraint, then it may have to be larger than the desired value of the constraint, at other densities. Therefore, the constraint places a lower estimate for the maximum of the speed of sound in dense NS matter. We note that the constraint also implies that the maximal value of the speed of sound has to exceed the conformal bound, i.e., $c_s^2=1/3$ (horizontal, black line). \begin{figure} \resizebox{0.5\columnwidth}{!}{\includegraphics{figures/m_r_700}} \resizebox{0.496\columnwidth}{!}{\includegraphics{figures/cs2_max_radius}} \caption{Left panel: Mass-radius sequences obtained for compact star solutions of the TOV equations. The inner (outer) gray band shows the 68.3\%~(95.4\%) credibility regions for the mass of PSR J0740+6620~\cite{Cromartie:2019kug}. The inner (outer) green and purple bands show 50\%~(90\%) credibility regions obtained from the recent GW170817~\cite{Abbott:2018exr} and GW190425~\cite{Abbott:2020uma} events for the low- and high-mass posteriors. The inner (outer) black region corresponds to the mass and radius constraint at 68.2\% (95.4\%) obtained for PSR J0030+0451 by the group analyzing NICER X-ray data~\cite{Miller:2019cac}. The circles mark the coexistence of the chirally broken and chirally restored phases. 
Right panel: The maximal value of the square of the speed of sound, $c_s^2$, within a $1.4M_\odot$ NS, as a function of its radius, $R$. The central values of the speed of sound (open symbols) and the maximal values within $1.4M_\odot$ NSs (filled symbols), together with the corresponding radii, are shown. The yellow-shaded region represents the lower bound for the maximum value of the speed of sound inside a $1.4M_\odot$ neutron star derived in Ref.~\cite{Reed:2019ezm}. The black horizontal line shows the conformal value $c_s^2=1/3$. Results in both panels are obtained for $m_0 = 700~$MeV, and four representative values of the parameter $\alpha$.}\label{fig:m_r_cs2} \end{figure} In the figure, we also show the maximal values of $c_s^2$ (filled symbols) within a $1.4M_\odot$ NS obtained for each parametrization in the hybrid QMN model, together with the corresponding central values (open symbols). For $\alpha b_0=350,~370,~400~$MeV, the values are not only above the conformal limit, but they also lie above the constraint. Notably, the maximal values of the speed of sound are obtained at densities where the stiffening of the EoS sets in. This is seen in the right panel of Fig.~\ref{fig:p_e_cs2}, where the maximal values of $c_s^2$ in $1.4M_\odot$ NSs are shown as filled symbols. On the other hand, the maximal value of the speed of sound for $\alpha b_0=450~$MeV does not exceed the conformal value. In this case, $c_s^2$ rises monotonically even beyond the central density of the $1.4M_\odot$ NS, and the stiffening sets in at higher densities (see the right panel of Fig.~\ref{fig:p_e_cs2}). The obtained radii, central baryon densities, and maximal speeds of sound of $1.4M_\odot$ neutron stars for all parametrizations are provided in Table~\ref{tab:r_lambda_cs2}. Seemingly, sufficient stiffening of the EoS at densities just above the saturation density, realizable in a $1.4M_\odot$ NS, is a trademark that is required in order to comply with the constraint from Ref.~\cite{Reed:2019ezm}.
In the hybrid QMN model, it is provided through the dynamical mechanism of confinement, whose strength is linked to the density. We note that this effect is also featured in other effective approaches, such as the relativistic density-functional models with excluded nucleon volume~\cite{Typel:2016srf} and the class of CSS hybrid EoSs~\cite{Alford:2017qgh}. However, too extreme a stiffening can come into a certain tension with the analysis of GW170817~\cite{Abbott:2018exr}. In principle, this tension could be resolved, e.g., by a strong phase transition occurring in the mass range relevant for GW170817~\cite{Paschalidis:2017qmb}. We note that, because in the hybrid QMN model the chiral phase transition is featured around $1.8M_\odot$, the obtained $1.4M_\odot$ NSs are composed solely of nuclear matter with broken chiral symmetry. This is seen in the right panel of Fig.~\ref{fig:p_e_cs2}, where the central values of the speed of sound (open symbols) of $1.4M_\odot$ NSs are obtained at baryon densities below the chiral phase transition region (plateaux of vanishing speed of sound). Therefore, the inclusion of the statistical confinement has important implications already at densities before the quarks deconfine. This is even more pronounced in a class of models with sequential chiral and deconfinement phase transitions, such as the hybrid QMN model. This may have important phenomenological implications for the study of multi-messenger astronomy and heavy-ion collisions (HIC)~\cite{Sasaki:2019jyh}.
\begin{table}[t!]\begin{center}\begin{tabular}{|c||c|c|c|}\hline $\alpha b_0~$[MeV] & $R_{1.4}$~[km] & $\rho_B~[\rho_0]$ & $c_s^2$ \\ \hline\hline 350 & 13.22 & 1.59 & 0.468 \\ \hline 370 & 12.79 & 1.85 & 0.459 \\ \hline 400 & 12.41 & 2.23 & 0.480 \\ \hline 450 & 12.29 & 2.53 & 0.295 \\ \hline \end{tabular}\end{center} \caption{Values of the radius, central baryon density, and maximal value of the speed of sound of $1.4M_\odot$ NSs obtained in the hybrid QMN model for different values of the $\alpha$ parameter, for $m_0=700~$MeV.} \label{tab:r_lambda_cs2} \end{table} \section{Conclusions}\label{sec:conclusions} The progress in constraining the equation of state (EoS) of cold and dense matter under extreme conditions is driven mostly by modern multi-messenger-astronomy observations. Several high-mass neutron star (NS) observations~\cite{Demorest:2010bx,Antoniadis:2013pzd,Fonseca:2016tux,Cromartie:2019kug} and the first ever detection of gravitational waves from the compact-star merger GW170817~\cite{Abbott:2018exr} have independently delivered powerful constraints on the NS mass-radius profile. In fact, there is an apparent tension between the high-mass constraint (which requires high pressure) and the upper limit on the compactness from the GW170817 event (which favors soft pressure). Recently, the interplay between them was used to derive a lower bound constraint on the maximal value of the speed of sound in the cold and dense matter EoS of a NS~\cite{Reed:2019ezm}. This new constraint strengthens previous expectations that the conformal bound is likely to be violated at densities realized inside NSs~\cite{Bedaque:2014sqa,Tews:2018kmu}. Because such a constraint characterizes different density regimes, it is of particular use for a class of effective models in which the low-density and high-density regimes are not treated independently, but rather combined in a consistent unified framework, such as, e.g.,~\cite{Marczenko:2020jma,Bastian:2015avq,Bastian:2018wfl}.
In this study, we have utilized the hybrid quark-meson-nucleon (QMN) model to quantify the equation of state (EoS) of cold and dense matter. The model unifies the thermodynamics of quark and hadronic degrees of freedom. The interplay between the quark confinement and the chiral symmetry breaking is embedded dynamically in a single unified framework. Within this approach, we have systematically investigated the EoS of cold and dense asymmetric matter under NS conditions. We have constructed the mass-radius relations based on solutions of the Tolman-Oppenheimer-Volkoff (TOV) equations. We have shown that, in order to comply with modern constraints from multi-messenger astronomy, a rapid increase of pressure is required at densities inside a $1.4M_\odot$ NS. In the hybrid QMN model, such stiffening is naturally connected to the dynamical mechanism of confinement, whose strength is linked to the density. This result highlights the fact that confinement plays a crucial role in the phenomenology of matter under extreme conditions, even at densities smaller than the density at which the system undergoes a hadron-to-quark phase transition. We note that in Ref.~\cite{Marczenko:2017huu}, it was shown that the confinement mechanism of the hybrid QMN model leads to a strengthening of the chiral phase transition when compared to pure quark or hadronic models without confinement. This motivates further study and possible applications of the model, not only in the context of multi-messenger astronomy, but also in the context of heavy-ion collisions (HIC) at finite temperature and density, where a novel signature of chiral symmetry restoration within the dense hadronic sector in dilepton production was recently proposed~\cite{Sasaki:2019jyh}. \begin{acknowledgement} I acknowledge fruitful discussions and helpful comments from Krzysztof Redlich and Chihiro Sasaki. This work was partly supported by the Polish National Science Center (NCN), under Preludium Grant No.
UMO-2017/27/N/ST2/01973. \end{acknowledgement}
\section{Introduction} \begin{figure}[htb] \includegraphics[scale=0.5]{Fig1.png} \caption{(a) Schematics of $\alpha$-(BEDT-TTF)$_2$I$_3$ irradiated with circularly polarized light. (b) Unit cell (dash
ed rectangle) with four nonequivalent BEDT-TTF molecules (A, A$^\prime$, B, C) and transfer integrals in the BEDT-TTF layer.} \label{Fig1} \end{figure} Photo-induced phase transitions are one of the central topics in recent condensed-matter physics~\cite{Tokura06,Bukov15,Basov17,Aoki14,Yonemitsu06}. A theoretical study using the Floquet theorem has predicted that the honeycomb lattice irradiated with circularly polarized light attains a topological band structure similar to the band structure originally proposed by Haldane~\cite{Haldane88} that exhibits a topological phase transition. This transition results in the photo-induced quantum Hall effect in graphene, even in the absence of an external magnetic field~\cite{Oka09,Kitagawa11} and was indeed confirmed in a recent experiment~\cite{McIver19}. Since this pioneering theoretical work, the Floquet theory has been applied to various electron systems and has revealed many interesting photo-induced topological phenomena~\cite{Kitagawa10,Lindner11,Grushin14,ZhengW14,Mikami16,JYZou16,Ezawa13,Kang20,Claassen16,ZhangMY19, ZYan16,Sato16,Kitamura17,LDu17,Ezawa17,Takasan15,Takasan17a,Takasan17b,Menon18,ChenR18}. The topic is now attracting enormous research interest from the viewpoint of both fundamental science and electronics applications. However, most of the previous research has dealt with tight-binding models on simple lattices such as the honeycomb lattice~\cite{Oka09,Kitagawa11,Kitagawa10}, the Kagome lattice~\cite{Mikami16,LDu17}, the Lieb lattice~\cite{Mikami16}, and the stacked graphene systems~\cite{JYZou16} or simple materials with a two-dimensional atomic layer such as graphene~\cite{Oka09,Kitagawa11,Kitagawa10}, silicene~\cite{Ezawa13}, black phosphorene~\cite{Kang20} and transition-metal dichalcogenides~\cite{Claassen16,ZhangMY19}. There have been few studies based on realistic models for specific materials. 
In addition, the only successful experiment so far was done for graphene~\cite{McIver19}, which has a simple electronic structure described well by the tight-binding model on the honeycomb lattice~\cite{Haldane88,CastroNeto09}. However, to further develop this promising research field, widening the range of target materials is indispensable, and towards this objective, theoretical studies on actual materials with complex electronic and crystalline structures are highly desired. Moreover, we can expect richer material-specific photo-induced topological phenomena in studies on actual materials. One promising material is the organic salt $\alpha$-(BEDT-TTF)$_2$I$_3$, where BEDT-TTF denotes bis(ethylenedithio)-tetrathiafulvalene~\cite{Tajima06}. This compound has inclined Dirac cones in its band structure~\cite{Katayama06,Kobayashi07,Kajita14}, and many interesting topological properties and phenomena arising from these Dirac-cone bands have been theoretically investigated so far, e.g., the quantum Hall effect~\cite{Kajita14}, the structures of Berry curvature in momentum space~\cite{Suzumura11}, and the flux-induced Chern insulator phases~\cite{Osada17}. In this paper, we theoretically predict the emergence of photo-induced topological phases and their phase transitions in $\alpha$-(BEDT-TTF)$_2$I$_3$. By applying the Floquet theory to the photo-driven tight-binding model for conduction electrons in the BEDT-TTF layer, we demonstrate that the inclined Dirac cones become gapped at the Dirac points under irradiation with circularly polarized light [see Fig.~\ref{Fig1}(a)]. The system then becomes a topological insulator characterized by a quantized Chern number~\cite{Thouless82,Kohmoto85} and conductive chiral edge states~\cite{Hao08}. We obtain a rich phase diagram in the plane of the amplitude and frequency of the light that contains phases corresponding to a Chern insulator, semimetal, and normal insulator.
The calculated photo-induced Hall conductivity shows characteristic dependencies on the light amplitude and temperature in each phase, indicating that this quantity provides a sensitive experimental indicator in the detection and identification of these topological phases and their phase transitions. One advantage of the usage of organic compounds is that the effective amplitude of light is an order of magnitude larger than in graphene because their lattice constants are much larger (to be discussed later). This enhances the feasibility of the experiments. This work contributes to the development of this field by expanding the potential range of materials for research. \section{Model and Method} We employ a tight-binding model to describe the electronic structure of $\alpha$-(BEDT-TTF)$_2$I$_3$~\cite{Katayama06,Kajita14}, which is given by, \begin{eqnarray} H=\sum_{i,j}\sum_{\alpha, \beta} t_{i\alpha,j\beta} c^\dagger_{i\alpha}c_{j\beta}. \end{eqnarray} The unit cell of the BEDT-TTF layer contains four molecular sites (A, A$^\prime$, B, C), and we consider transfer integrals $t_{i\alpha,j\beta}$ among them [Fig.~\ref{Fig1}(b)], where $i$ and $j$ label the unit cells, whereas $\alpha$ and $\beta$ label the molecular sites. Under ambient pressure, this compound exhibits a charge-ordered ground state~\cite{Tajima06}. When a uniaxial pressure is applied along the $a$ axis, this charge order melts and inclined Dirac cones eventually emerge within its peculiar band structure. In this study, we consider the latter situation by taking the transfer integrals evaluated theoretically at a uniaxial pressure $P_{a}$ of 4 kbar; specifically, $t_{a1}=-0.038$ eV, $t_{a2}= 0.080$ eV, $t_{a3}=-0.018$ eV, $t_{b1}= 0.123$ eV, $t_{b2}= 0.146$ eV, $t_{b3}=-0.070$ eV, and $t_{b4}=-0.025$ eV~\cite{Kobayashi04}.
After the Fourier transformations with respect to the spatial coordinates, the tight-binding Hamiltonian is rewritten in the momentum space as, \begin{eqnarray} H=\sum_{\bm{k}}(c^{\dagger}_{\bm{k}A}\, c^{\dagger}_{\bm{k}A'}\,c^{\dagger}_{\bm{k}B}\,c^{\dagger}_{\bm{k}C})\hat{H}(\bm{k})\left( \begin{array}{c} c_{\bm{k}A} \\ c_{\bm{k}A'} \\ c_{\bm{k}B} \\ c_{\bm{k}C} \end{array} \right) \end{eqnarray} where \begin{eqnarray} \hat{H}(\bm{k})=\left( \begin{array}{cccc} 0 & A_2(\bm{k}) & B_2(\bm{k}) & B_1(\bm{k}) \\ A_2^{*}(\bm{k}) & 0 & B_2^{*}(\bm{k}) & B_1^{*}(\bm{k}) \\ B_2^{*}(\bm{k}) & B_2(\bm{k}) & 0 & A_1(\bm{k}) \\ B_1^{*}(\bm{k}) & B_1(\bm{k}) & A_1(\bm{k}) & 0 \end{array} \right) \label{alpham} \end{eqnarray} with \begin{align} &A_1(\bm{k})=2t_{a1}\cos(k_y/2) \\ &A_2(\bm{k})=t_{a2}e^{ik_y/2}+t_{a3}e^{-ik_y/2} \\ &B_1(\bm{k})=t_{b1}e^{i(k_x/2+k_y/4)}+t_{b4}e^{-i(k_x/2-k_y/4)} \\ &B_2(\bm{k})=t_{b2}e^{i(k_x/2-k_y/4)}+t_{b3}e^{-i(k_x/2+k_y/4)}. \end{align} We then consider a situation in which this compound is irradiated with circularly polarized light applied perpendicular to the $ab$ plane [Fig.~\ref{Fig1}(a)]. The applied circularly polarized light generates a vector potential $\bm A(\tau)=A(\cos\omega\tau, \sin\omega\tau)$, which corresponds to a time-dependent electric field: \begin{eqnarray} \bm E(\tau)=-\frac{d \bm A}{d \tau}=A\omega(\sin\omega\tau, -\cos\omega\tau).
\end{eqnarray} In the presence of this vector potential, the transfer integrals attain Peierls phases as, \begin{align} &t_{i\alpha,j\beta}(\tau) \nonumber \\ &=t_{i\alpha,j\beta}\exp{ \left[-i\frac{e}{\hbar}\bm{A}(\tau)\cdot (\bm{r}_{i\alpha}-\bm{r}_{j\beta})\right]} \nonumber \\ &=t_{i\alpha,j\beta}\exp\left[i\left\{\mathcal{A}_b(\tilde{x}_j-\tilde{x}_i)\cos\omega\tau +\mathcal{A}_a(\tilde{y}_j-\tilde{y}_i)\sin\omega\tau\right\}\right] \end{align} Here we introduce dimensionless quantities $\mathcal{A}_a=eAa/\hbar$ and $\mathcal{A}_b=eAb/\hbar$ and dimensionless coordinates $\bm r_{i\alpha}=(b\tilde{x}_{i\alpha}, a\tilde{y}_{i\alpha})$ with $a$ and $b$ being the lattice constants along the $y$ and $x$ axes, respectively [Fig.~\ref{Fig1}(b)]. We use experimental values $a$=0.9187 nm and $b$=1.0793 nm~\cite{Mori12}, which give a ratio $\mathcal{A}_b/\mathcal{A}_a=b/a=1.175$. The amplitude of the ac electric field of light $E^\omega$ is given by $E^\omega=A\omega=\mathcal{A}_a\hbar\omega/ea$. In the momentum representation, the effects of Peierls phases are taken into account by replacing the momenta $k_x$ and $k_y$ with $k_x+\mathcal{A}_b$ and $k_y+\mathcal{A}_a$, respectively. 
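As a numerical sanity check, the static Bloch Hamiltonian above can be diagonalized to confirm that the third and fourth bands touch at the tilted Dirac points before irradiation, and the dimensionless amplitude $\mathcal{A}_a$ can be evaluated for a representative laser setting. The sketch below is our own illustration (variable names and the 10 MV/cm example value are not from the paper); it uses the transfer integrals listed above and the stated Peierls replacement $k_x\to k_x+\mathcal{A}_b$, $k_y\to k_y+\mathcal{A}_a$:

```python
import numpy as np

# Transfer integrals (eV) at uniaxial pressure P_a = 4 kbar, as listed in the text.
ta1, ta2, ta3 = -0.038, 0.080, -0.018
tb1, tb2, tb3, tb4 = 0.123, 0.146, -0.070, -0.025

def bloch_h(kx, ky, Ab=0.0, Aa=0.0):
    """4x4 Bloch Hamiltonian in the (A, A', B, C) molecular basis.

    For a static vector potential the Peierls substitution amounts to the
    shifts kx -> kx + Ab and ky -> ky + Aa, as stated in the text.
    """
    kx, ky = kx + Ab, ky + Aa
    A1 = 2 * ta1 * np.cos(ky / 2)
    A2 = ta2 * np.exp(1j * ky / 2) + ta3 * np.exp(-1j * ky / 2)
    B1 = tb1 * np.exp(1j * (kx / 2 + ky / 4)) + tb4 * np.exp(-1j * (kx / 2 - ky / 4))
    B2 = tb2 * np.exp(1j * (kx / 2 - ky / 4)) + tb3 * np.exp(-1j * (kx / 2 + ky / 4))
    c = np.conj
    return np.array([[0,     A2,    B2,    B1],
                     [c(A2), 0,     c(B2), c(B1)],
                     [c(B2), B2,    0,     A1],
                     [c(B1), B1,    A1,    0]])

# Without irradiation the minimal direct gap E4(k) - E3(k) over the Brillouin
# zone should be ~0: the bands touch at the tilted Dirac points.
ks = np.linspace(-np.pi, np.pi, 151)
gap = min(np.diff(np.linalg.eigvalsh(bloch_h(kx, ky)))[-1]
          for kx in ks for ky in ks)
print(f"min_k [E4(k) - E3(k)] = {gap * 1e3:.2f} meV")

# Dimensionless amplitude A_a = e a E^w / (hbar w) for an assumed example
# field of E^w = 10 MV/cm at hbar*w = 0.5 eV, with a = 0.9187 nm:
Ew, a, hw = 1e9, 0.9187e-9, 0.5        # V/m, m, eV (so hbar*w/e = 0.5 V)
Aa_dimless = Ew * a / hw
print(f"A_a = {Aa_dimless:.2f}")       # -> A_a = 1.84
```

The residual gap printed by the scan is limited only by the mesh resolution; refining the grid around the minimum drives it toward zero.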
Then we obtain the time-dependent Hamiltonian for the photo-irradiated $\alpha$-(BEDT-TTF)$_2$I$_3$ as, \begin{eqnarray} \hat{H}(\bm{k},\tau)=\left( \begin{array}{cccc} 0 & A_2(\bm{k},\tau) & B_2(\bm{k},\tau) & B_1(\bm{k},\tau) \\ A_2^{*}(\bm{k},\tau) & 0 & B_2^{*}(\bm{k},\tau) & B_1^{*}(\bm{k},\tau) \\ B_2^{*}(\bm{k},\tau) & B_2(\bm{k},\tau) & 0 & A_1(\bm{k},\tau) \\ B_1^{*}(\bm{k},\tau) & B_1(\bm{k},\tau) & A_1(\bm{k},\tau) & 0 \end{array} \right) \label{alphamt} \end{eqnarray} with \begin{eqnarray} A_1(\bm{k},\tau) &=& 2t_{a1}\cos\left(\frac{k_y}{2}+\frac{\mathcal{A}_a}{2}\sin\omega\tau \right) \\ A_2(\bm{k},\tau) &=& t_{a2}\exp\left[ i\left(\frac{k_y}{2}+\frac{\mathcal{A}_a}{2}\sin\omega\tau \right)\right] \nonumber \\ &+& t_{a3}\exp\left[-i\left(\frac{k_y}{2}+\frac{\mathcal{A}_a}{2}\sin\omega\tau \right)\right] \\ B_1(\bm{k},\tau) &=& t_{b1}\exp\left[ i\left(\frac{k_x}{2}+\frac{k_y}{4}\right)\right] \exp\left[ i\mathcal{A}\sin(\omega\tau + \theta) \right] \nonumber \\ &+& t_{b4}\exp\left[-i\left(\frac{k_x}{2}-\frac{k_y}{4}\right)\right] \exp\left[ i\mathcal{A}\sin(\omega\tau - \theta) \right] \nonumber \\ \\ B_2(\bm{k},\tau) &=& t_{b2}\exp\left[ i\left(\frac{k_x}{2}-\frac{k_y}{4}\right)\right] \exp\left[-i\mathcal{A}\sin(\omega\tau - \theta) \right] \nonumber \\ &+& t_{b3}\exp\left[-i\left(\frac{k_x}{2}+\frac{k_y}{4}\right)\right] \exp\left[-i\mathcal{A}\sin(\omega\tau + \theta) \right] \nonumber \\ \end{eqnarray} where \begin{eqnarray} \mathcal{A}=\frac{1}{4}\sqrt{4\mathcal{A}_b^2+\mathcal{A}_a^2}, \quad\quad \theta=\tan^{-1}\left(\frac{2\mathcal{A}_b}{\mathcal{A}_a}\right). \end{eqnarray} We analyze this time-periodic tight-binding Hamiltonian using the Floquet theory. The time-dependent Schr\"odinger equation is given by, \begin{eqnarray} i\hbar \frac{\partial}{\partial \tau}\ket{\Psi(\bm k,\tau)} =H(\bm k,\tau)\ket{\Psi(\bm k,\tau)}.
\end{eqnarray} According to the Floquet theorem, the wave function $\ket{\Psi(\tau)}$ is written in the form \begin{eqnarray} \ket{\Psi(\bm k,\tau)}=e^{-i\varepsilon \tau/\hbar}\ket{\Phi(\bm k,\tau)} \end{eqnarray} where $\ket{\Phi(\bm k,\tau)}=\ket{\Phi(\bm k,\tau+T)}$. Here $T\,(=2\pi/\omega)$ is the temporal period of the ac light field, and $\varepsilon$ is the quasi-energy. The Schr\"odinger equation is then rewritten in the form, \begin{eqnarray} \sum_m \hat{\mathcal{H}}_{n,m}(\bm k)\ket{\Phi_{\nu}^m(\bm k)} = \varepsilon^n_{\nu}(\bm k) \ket{\Phi_{\nu}^n(\bm k)}, \label{H-Mw} \end{eqnarray} with \begin{eqnarray} \hat{\mathcal{H}}_{n,m}(\bm k) \equiv \hat{H}_{n-m}(\bm k)-m\omega\delta_{n,m}\hat{1}_{4 \times 4} \label{H-Mw2} \end{eqnarray} where $\nu$ labels the eigenstates, and $n$ corresponds to the number of photons. The four-component vector $\ket{\Phi_{\nu}^n(\bm k)}$ represents the $\nu$th Floquet state ($\nu$=1,2,3,4) in the $n$-photon subspace. We introduce the following Fourier coefficients, \begin{eqnarray} \ket{\Phi_{\nu}^n(\bm k)}&=& \frac{1}{T}\int_0^T \ket{\Phi_{\nu}(\bm k,\tau)}e^{in\omega\tau} d\tau,\\ \hat{H}_n(\bm k)&=&\frac{1}{T}\int_0^T \hat{H}(\bm k,\tau)e^{in\omega\tau} d\tau \nonumber \\ &=&\left( \begin{array}{cccc} 0 & A_{2,n}(\bm k) & B_{2,n}(\bm k) & B_{1,n}(\bm k) \\ A_{2,-n}^{*}(\bm k) & 0 & B_{2,-n}^{*}(\bm k) & B_{1,-n}^{*}(\bm k) \\ B_{2,-n}^{*}(\bm k) & B_{2,n}(\bm k) & 0 & A_{1,n}(\bm k) \\ B_{1,-n}^{*}(\bm k) & B_{1,n}(\bm k) & A_{1,n}(\bm k) & 0 \end{array} \right) \nonumber \\ \end{eqnarray} with \begin{align} &A_{1,n}(\bm k)= t_{a1}\,e^{ik_y/2}J_{-n}(\mathcal{A}_a/2)+t_{a1}\,e^{-ik_y/2}J_{n}(\mathcal{A}_a/2) \\ &A_{2,n}(\bm k)= t_{a2}\,e^{ik_y/2}J_{-n}(\mathcal{A}_a/2)+t_{a3}\,e^{-ik_y/2}J_{n}(\mathcal{A}_a/2) \\ &B_{1,n}(\bm k)= t_{b1}\,e^{ i(k_x/2+k_y/4)}J_{-n}(\mathcal{A})e^{-in\theta} \nonumber \\ & \quad\quad\quad\quad +t_{b4}\,e^{-i(k_x/2-k_y/4)}J_{-n}(\mathcal{A})e^{+in\theta} \\ &B_{2,n}(\bm k)= t_{b2}\,e^{
i(k_x/2-k_y/4)}J_{n}(\mathcal{A})e^{+in\theta} \nonumber \\ & \quad\quad\quad\quad +t_{b3}\,e^{-i(k_x/2+k_y/4)}J_{n}(\mathcal{A})e^{-in\theta} \end{align} We solve the eigenequation~(\ref{H-Mw}) by restricting the number of photons to $|m|\le M_{\rm max}$ ($M_{\rm max}$=16 throughout the present work). Consequently, the Floquet-Hamiltonian matrix $\hat{\mathcal{H}}(\bm k)$ is composed of $(2M_{\rm max}+1)\times(2M_{\rm max}+1)$ block matrices $\hat{\mathcal{H}}_{n,m}(\bm k) \equiv \hat{H}_{n-m}(\bm k)-m\omega\delta_{n,m}\hat{1}_{4 \times 4}$. The total size of $\hat{\mathcal{H}}(\bm k)$ is $(8M_{\rm max}+4)\times (8M_{\rm max}+4)$ (132$\times$132 in the present work) because the size of each block matrix is 4$\times$4. Note that as the frequency $\omega$ is reduced, a Floquet matrix of larger size $|m|$ is required, typically of order $W/\hbar\omega$, where $W$ is the band width. Having adopted $|m|\le 16$ for $\alpha$-(BEDT-TTF)$_2$I$_3$ with $W\sim0.8$ eV, the obtained results are sufficiently accurate for $\hbar\omega \gtrsim$ 0.05 eV. The Chern number of the $\nu$th band $N_{\rm Ch}^\nu$ ($\nu$=1,2,3,4) is related to the Berry curvature $B_z^{n\nu}(\bm k)$, \begin{eqnarray} N_{\rm Ch}^\nu=\frac{1}{2\pi}\int\int_{\rm BZ}\;B_z^{0\nu}(\bm k) dk_xdk_y, \end{eqnarray} where the Berry curvature $B_z^{n\nu}(\bm k)$ of the $\nu$th $n$-photon Floquet band at each $\bm k$ point is given by \begin{align} &B_z^{n\nu}(\bm k)= \nonumber \\ &i\sum_{(m,\mu)}\frac{ \bra{\Phi_{\nu}^n(\bm k)}\frac{\partial \mathcal{H}}{\partial k_x}\ket{\Phi_{\mu}^m(\bm k)} \bra{\Phi_{\mu}^m(\bm k)}\frac{\partial \mathcal{H}}{\partial k_y}\ket{\Phi_{\nu}^n(\bm k)} -{c.c.}} {[\varepsilon^m_\mu(\bm k)-\varepsilon^n_\nu(\bm k)]^2}. \end{align} Here $\mathcal{H}$ denotes the matrix of the Floquet Hamiltonian, and $\varepsilon^n_\nu(\bm k)$ and $\ket{\Phi_{\nu}^n(\bm k)}$ the eigenenergy and eigenvector of the equation~(\ref{H-Mw}) with $\nu=1,2,3,4$ and $|n|\le 16$. 
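In practice, Berry-curvature integrals of this type are evaluated on a discrete $\bm k$-mesh with gauge-invariant plaquette link variables. The sketch below is our own minimal implementation of that lattice construction; to keep it small and self-contained it is demonstrated on a generic two-band toy model with a known topological transition (our choice of test case), rather than on the full $132\times132$ Floquet Hamiltonian:

```python
import numpy as np

def chern_number(hk, band, n=40):
    """Lattice Chern number of one band: C = (1/2pi) * sum over plaquettes
    of Arg(U1 U2 U3 U4), with link variables U built from band eigenvectors."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dim = hk(0.0, 0.0).shape[0]
    u = np.zeros((n, n, dim), complex)          # band eigenvector on the mesh
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(hk(kx, ky))[1][:, band]
    c = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n   # periodic mesh
            c += np.angle(np.vdot(u[i, j], u[ip, j]) *
                          np.vdot(u[ip, j], u[ip, jp]) *
                          np.vdot(u[ip, jp], u[i, jp]) *
                          np.vdot(u[i, jp], u[i, j]))
    return c / (2 * np.pi)

# Two-band toy model: topological (|C| = 1) for 0 < m < 2, trivial for m > 2.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
def toy(kx, ky, m):
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

print(round(abs(chern_number(lambda kx, ky: toy(kx, ky, 1.0), 0))))  # 1
print(round(abs(chern_number(lambda kx, ky: toy(kx, ky, 3.0), 0))))  # 0
```

Because the plaquette phases are gauge invariant, the result is insensitive to the arbitrary eigenvector phases returned by the diagonalization, which is the practical advantage of this construction over a naive finite-difference Berry curvature.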
The summation is taken over $m$ and $\mu$ where $(m,\mu)\ne(n,\nu)$; ``$c.c.$" denotes the complex conjugate of the first term of the numerator. In this work, the Chern numbers are calculated using a numerical technique proposed by Fukui, Suzuki, and Hatsugai in Ref.~\cite{Fukui05}. \section{Results} \begin{figure}[tb] \includegraphics[scale=0.5]{Fig2.png} \caption{(a) Band dispersions of the third and fourth bands, $E_3(\bm k)$ and $E_4(\bm k)$, before light irradiation. (b)-(d) Those for the photo-induced phases under irradiation with circularly polarized light, i.e., (b) Chern-insulator, (c) semimetal, and (d) normal-insulator phases. (e),(f) Berry curvatures of the fourth band $B_z^{04}(\bm k)$ in momentum space for (e) the photo-induced Chern insulator phase with $N_{\rm Ch}^4=-1$ and (f) the photo-induced normal insulator phase with $N_{\rm Ch}^4=0$.} \label{Fig2} \end{figure} We first discuss the photo-induced variation of band structures and their topological properties. Figure~\ref{Fig2}(a) shows the band dispersions for the third and fourth bands, $E_3(\bm k)$ and $E_4(\bm k)$, in the absence of photo-irradiation. These two bands make contact at two individual points in momentum space to form a pair of inclined Dirac cones. Note that the Dirac points are located at the Fermi level because this compound has a $3/4$ electron filling with three fully occupied lower bands and an unoccupied fourth band. Figures~\ref{Fig2}(b)-(d) show plots of $E_3(\bm k)$ and $E_4(\bm k)$ for photo-irradiated systems with various $E^\omega$ and $\omega$ of light. Once the system is irradiated with circularly polarized light, a gap opens at the Dirac points. These three band structures correspond to three different photo-induced phases characterized by the Chern number $N_{\rm Ch}$ and the band gap $E_{\rm gap}$, that is, (b) Chern-insulator, (c) semimetal, and (d) normal-insulator phases, respectively. 
Here the band gap is defined by $E_{\rm gap}={\rm min}[E_4(\bm k)]-{\rm max}[E_3(\bm k)]$ where min[$E_4(\bm k)$] and max[$E_3(\bm k)$] are the minimum energy of the fourth band and the maximum energy of the third band, respectively. A positive $E_{\rm gap}$ means that the bulk is insulating, for which, in the whole momentum space, the fourth band is located above the Fermi level whereas the other three bands are located below the Fermi level. In contrast, a negative $E_{\rm gap}$ means that the bulk is semi-metallic, for which the third band is located above the Fermi level whereas the fourth band is located below the Fermi level in some portion of the momentum space. Importantly, when the electron filling is $3/4$ with three fully occupied lower bands, the sum of the Chern numbers over the three bands below the Fermi level, $N_{\rm Ch}=\sum_{\nu=1}^3 N_{\rm Ch}^\nu$, coincides with $-N_{\rm Ch}^4$ because the sum of the Chern numbers over all four bands must vanish. The band structure in Fig.~\ref{Fig2}(b) characterized by $E_{\rm gap}>0$ and $N_{\rm Ch}=-N_{\rm Ch}^4=+1$ is assigned to the Chern insulator phase. In contrast, the band structure in Fig.~\ref{Fig2}(c) is characterized by $E_{\rm gap}<0$, for which the fourth band around $\bm k=(\pm \pi,0)$ is lower in energy than the third band around the Dirac points. This band structure is assigned to the semimetal phase. Interestingly, the band structure in Fig.~\ref{Fig2}(d) has $E_{\rm gap}>0$ and resembles the band structure for the Chern insulator [Fig.~\ref{Fig2}(b)]. However, the Chern number $N_{\rm Ch}=-N_{\rm Ch}^4$ is zero in this case, indicating that the system lies in a topologically trivial insulating state. Therefore, this band structure is assigned to the normal insulator phase.
Note that these phases appear as nonequilibrium steady states under the continuous application of circularly polarized light, and, in this sense, they are distinct from thermodynamic-equilibrium phases. We find a clear difference between the Chern insulator phase and the normal insulator phase in the Berry curvature $B_z^{04}(\bm k)$. The Berry curvature in the Chern insulator phase has two negative peaks around the gapped Dirac points [Fig.~\ref{Fig2}(e)], corresponding to a nonzero quantized Chern number $N_{\rm Ch}^4$ of $-1$, whereas that for the normal insulator phase has additional positive peaks as well as two negative peaks around the gapped Dirac points [Fig.~\ref{Fig2}(f)], which cancel the opposite contributions, resulting in $N_{\rm Ch}^4=0$. \begin{figure} \includegraphics[scale=0.5]{Fig3.png} \caption{(a) Phase diagram of photo-driven $\alpha$-(BEDT-TTF)$_2$I$_3$ in the plane of the amplitude $E^\omega$ and frequency $\omega$ of the applied circularly polarized light. (b), (c) Color maps of (b) the Chern number of the highest (fourth) band $N_{\rm Ch}^4$ and (c) the band gap $E_{\rm gap}$ in the plane of $E^\omega$ and $\omega$.} \label{Fig3} \end{figure} Figure~\ref{Fig3}(a) shows the phase diagram of photo-driven $\alpha$-(BEDT-TTF)$_2$I$_3$ in the plane of the amplitude $E^\omega$ and frequency $\omega$ of an applied circularly polarized light. Three phases are present, namely, the Chern-insulator, semimetal, and normal-insulator phases. This phase diagram is produced by calculating the Chern number of the (highest) fourth band $N_{\rm Ch}^4$ [Fig.~\ref{Fig3}(b)] and the band gap $E_{\rm gap}$ [Fig.~\ref{Fig3}(c)]. We assign the area with $E_{\rm gap}>0$ and $N_{\rm Ch}^4 \ne 0$ to the Chern insulator phase, and the area with $E_{\rm gap}>0$ and $N_{\rm Ch}^4 = 0$ to the normal insulator phase. The area with $E_{\rm gap}<0$ is assigned to the semimetal phase.
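The assignment rule just described is mechanical once $E_{\rm gap}$ and $N_{\rm Ch}^4$ are known at each $(E^\omega,\omega)$ point; a trivially small sketch (function name and string labels are ours):

```python
def classify_phase(E_gap, N_ch4):
    """Phase assignment rule from the text: the sign of the band gap separates
    insulators from the semimetal, and the Chern number of the fourth band
    distinguishes the two insulating phases."""
    if E_gap < 0:
        return "semimetal"
    return "Chern insulator" if N_ch4 != 0 else "normal insulator"

print(classify_phase(+0.02, -1))  # Chern insulator
print(classify_phase(+0.02,  0))  # normal insulator
print(classify_phase(-0.01, -1))  # semimetal
```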
According to the obtained phase diagram, we expect that the usage of high-frequency light with $\hbar\omega>0.75$ eV is preferable to observe the photo-induced Chern insulator phase in the low $E^\omega$ range, whereas the usage of low-frequency light with $\hbar\omega<0.7$ eV is suitable to observe the rich phase transitions upon the variation of $E^\omega$. \begin{figure}[tb] \includegraphics[scale=0.5]{Fig4.png} \caption{(a) Chern number $-N_{\rm Ch}^4$ and band gap $E_{\rm gap}$ plotted as a function of the light amplitude $E^\omega$ when $\hbar\omega=0.5$ eV, which manifest successive emergence of three photo-induced electronic phases with increasing $E^\omega$. (b) $E^\omega$-profiles of the Hall conductivity $\sigma_{xy}$ for various temperatures when $\hbar\omega=0.5$ eV.} \label{Fig4} \end{figure} Finally, we discuss the Hall conductivity in the three different photo-induced electronic phases. This physical quantity is closely related to the topological nature of the electronic states and can be exploited to identify the predicted topological phases and to detect their phase transitions. We plot the calculated Chern number $-N_{\rm Ch}^4$ and band gap $E_{\rm gap}$ [Fig.~\ref{Fig4}(a)] as functions of the amplitude $E^\omega$ of the applied circularly polarized light setting $\hbar\omega=0.5$ eV, for which the three photo-induced phases, semimetal, normal-insulator, and Chern-insulator phases, emerge successively as $E^\omega$ increases. Recall that the relation $N_{\rm Ch}=-N_{\rm Ch}^4$ holds when the electron filling is $3/4$. We also plot the calculated $E^\omega$-profiles of the Hall conductivity $\sigma_{xy}$ for various temperatures when $\hbar\omega=0.5$ eV [Fig.~\ref{Fig4}(b)]. 
The Hall conductivity $\sigma_{xy}$ is calculated using the relation, \begin{eqnarray} \sigma_{xy}=\frac{2e^2}{h}\int\int_{\rm BZ}\;\frac{dk_xdk_y}{2\pi} \sum_{n,\nu} n_{n\nu}(\bm k) B_z^{n\nu}(\bm k) \label{sgmyx} \end{eqnarray} where the factor 2 accounts for the spin degeneracy. Here $n_{n\nu}(\bm k)$ is the nonequilibrium distribution function, which describes the electron occupations of Floquet bands in the photo-driven nonequilibrium steady states. The nonequilibrium distribution function $n_{n\nu}(\bm k)$ is calculated using the Floquet-Keldysh formalism~\cite{Tsuji09,Aoki14}, which is formulated by combining the Keldysh Green's function technique~\cite{Jauho94,Mahan00} with the Floquet theory. The Dyson equation for the Green's function matrices is given by \begin{eqnarray} & &\left( \begin{array}{cc} \hat{G}^{\rm R}(\bm k,\varepsilon) & \hat{G}^{\rm K}(\bm k,\varepsilon) \\ 0 & \hat{G}^{\rm A}(\bm k,\varepsilon) \end{array} \right)^{-1} \nonumber \\ & &= \left( \begin{array}{cc} [\hat{G}^{\rm R0}(\bm k,\varepsilon)]^{-1} & 0 \\ 0 & [\hat{G}^{\rm A0}(\bm k,\varepsilon)]^{-1} \end{array} \right) -\left( \begin{array}{cc} \hat{\Sigma}^{\rm R} & \hat{\Sigma}^{\rm K}(\varepsilon) \\ 0 & \hat{\Sigma}^{\rm A} \end{array} \right). \nonumber \\ \end{eqnarray} Here $\hat{G}^{\rm R}$, $\hat{G}^{\rm A}$ and $\hat{G}^{\rm K}$ ($\hat{\Sigma}^{\rm R}$, $\hat{\Sigma}^{\rm A}$ and $\hat{\Sigma}^{\rm K}$) are matrices of the retarded, advanced, and Keldysh Green's functions (self-energies), respectively, each of which is composed of $(2M_{\rm max}+1)\times(2M_{\rm max}+1)$ block matrices where the size of each block matrix is 4$\times$4. 
The matrix components of $\hat{G}^{\rm R0}$, $\hat{G}^{\rm A0}$, $\hat{\Sigma}^{\rm R}$, $\hat{\Sigma}^{\rm A}$ and $\hat{\Sigma}^{\rm K}$ are given, respectively, by, \begin{align} &[\hat{G}^{\rm R0}(\bm k,\varepsilon)]^{-1}_{n\nu,m\mu} =\varepsilon\delta_{n,m}\delta_{\nu,\mu}-\mathcal{H}_{n\nu,m\mu}(\bm k) \\ &[\hat{G}^{\rm A0}(\bm k,\varepsilon)]^{-1}_{n\nu,m\mu} =\varepsilon\delta_{n,m}\delta_{\nu,\mu}-\mathcal{H}_{n\nu,m\mu}(\bm k) \\ &[\hat{\Sigma}^{\rm R}]_{n\nu,m\mu}=-i\Gamma\delta_{n,m}\delta_{\nu,\mu} \\ &[\hat{\Sigma}^{\rm A}]_{n\nu,m\mu}= i\Gamma\delta_{n,m}\delta_{\nu,\mu} \\ &[\hat{\Sigma}^{\rm K}(\varepsilon)]_{n\nu,m\mu}=-2i\Gamma\tanh\left[ \frac{\varepsilon-\mu+m\omega}{2k_{\rm B}T} \right]\delta_{n,m}\delta_{\nu,\mu}, \end{align} where the symbol $\hat{M}_{n\nu,m\mu}$ denotes the $(\nu,\mu)$th component of the $(n,m)$th block matrix $\hat{M}_{nm}$ ($4 \times 4$), and the block matrix $\hat{\mathcal{H}}_{n,m}$ constituting the Floquet Hamiltonian is given by Eq.~(\ref{H-Mw2}). We consider a situation in which the system is coupled to a heat reservoir at temperature $T$ with a dissipation coefficient $\Gamma$, where we set $\Gamma=0.1$ eV for the calculations. The lesser Green's function $\hat{G}^{<}$ and the lesser self-energy $\hat{\Sigma}^{<}$ are calculated, respectively, by, \begin{eqnarray} & &\hat{G}^{<}(\bm k,\varepsilon)=\hat{G}^{\rm R}(\bm k,\varepsilon) \;\hat{\Sigma}^{<}(\varepsilon)\;\hat{G}^{\rm A}(\bm k,\varepsilon), \\ & &\hat{\Sigma}^{<}(\varepsilon)= (\hat{\Sigma}^{\rm A}+\hat{\Sigma}^{\rm K}(\varepsilon)-\hat{\Sigma}^{\rm R})/2.
\end{eqnarray} The matrix components of the lesser self-energy $\hat{\Sigma}^{<}$ read, \begin{eqnarray} [\hat{\Sigma}^{<}(\varepsilon)]_{n\nu,m\mu}=i\Gamma\left(1-\tanh\left[ \frac{\varepsilon-\mu+m\omega}{2k_{\rm B}T} \right]\right)\delta_{n,m}\delta_{\nu,\mu}. \end{eqnarray} The nonequilibrium distribution function $n_{n\nu}(\bm k)$ for the $\nu$-th Floquet band in the $n$-photon subspace is given by, \begin{eqnarray} n_{n\nu}(\bm k)= \frac{\braket{\Phi_\nu^n(\bm k)|\hat{N}_{\bm k}(\varepsilon_\nu^n(\bm k))|\Phi_\nu^n(\bm k)}} {\braket{\Phi_\nu^n(\bm k)|\hat{A}_{\bm k}(\varepsilon_\nu^n(\bm k))|\Phi_\nu^n(\bm k)}} \end{eqnarray} where \begin{eqnarray} & &\hat{A}_{\bm k}(\varepsilon)= i(\hat{G}^{\rm R}(\bm k,\varepsilon)-\hat{G}^{\rm A}(\bm k,\varepsilon))/2\pi \\ & &\hat{N}_{\bm k}(\varepsilon)=-i\hat{G}^{<}(\bm k,\varepsilon)/2\pi \end{eqnarray} and $\ket{\Phi_\nu^n(\bm k)}$ are the Floquet eigenstates obtained by solving Eq.~(\ref{H-Mw}). In Fig.~\ref{Fig4}(b), we find that the Hall conductivity $\sigma_{xy}$ is almost zero in the semimetal phase when $E^\omega$ is small ($E^\omega \lesssim 4$ MV/cm) but starts increasing with increasing $E^\omega$ at around $E^\omega \sim 4$ MV/cm towards the phase boundary to the normal insulator phase. It reaches a quantized value of $\sim 2e^2/h$ at the phase boundary and remains nearly constant in the normal insulator phase, although it decreases slightly as temperature increases. This nearly constant behavior does not change even when the system enters the Chern insulator phase, in which the value of $\sigma_{xy}$ again stays nearly constant at $\sim 2e^2/h$. The quantized Hall conductivity of $\sigma_{xy}\sim 2e^2/h$ in the Chern insulator phase is naturally understood from the quantized Chern number of $N_{\rm Ch}=-N_{\rm Ch}^4=1$. On the other hand, the finite $\sigma_{xy}$ in the normal insulator phase is rather nontrivial because the Chern number vanishes ($N_{\rm Ch}=0$) in this phase.
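The chain of definitions above ($\hat G^{\rm R}$ from the Dyson equation, $\hat\Sigma^<$, $\hat G^<=\hat G^{\rm R}\hat\Sigma^<\hat G^{\rm A}$, and the ratio of $\langle\hat N\rangle$ to $\langle\hat A\rangle$) can be exercised on a toy driven two-level system standing in for the four-band model. The sketch below is our own simplified implementation, assuming a single cosine drive $H(\tau)=H_0+V\cos\omega\tau$ (so only the $H_{\pm1}$ Floquet blocks are nonzero); a useful consistency check is that in the undriven limit the construction must reduce to the Fermi-Dirac distribution:

```python
import numpy as np

def floquet_keldysh_occupations(H0, V, w, Gamma, mu, kT, Mmax=8):
    """Occupations n_{n,nu} of the Floquet bands for H(tau) = H0 + V cos(w tau),
    following the G^R, Sigma^<, G^< = G^R Sigma^< G^A construction in the text."""
    d = H0.shape[0]
    ms = np.arange(-Mmax, Mmax + 1)
    D = len(ms) * d
    HF = np.zeros((D, D), complex)               # Floquet Hamiltonian
    for a, n in enumerate(ms):
        for b, m in enumerate(ms):
            if n == m:
                HF[a*d:(a+1)*d, b*d:(b+1)*d] = H0 - m * w * np.eye(d)
            elif abs(n - m) == 1:
                HF[a*d:(a+1)*d, b*d:(b+1)*d] = V / 2   # H_{+-1} of a cosine drive
    eps, Phi = np.linalg.eigh(HF)

    def sigma_lesser(E):                         # block diagonal, as in the text
        diag = np.concatenate(
            [1j * Gamma * (1 - np.tanh((E - mu + m * w) / (2 * kT))) * np.ones(d)
             for m in ms])
        return np.diag(diag)

    occ = np.empty(D)
    for k in range(D):                           # evaluate at each quasi-energy
        GR = np.linalg.inv((eps[k] + 1j * Gamma) * np.eye(D) - HF)
        GA = GR.conj().T
        N = -1j * (GR @ sigma_lesser(eps[k]) @ GA) / (2 * np.pi)
        A = 1j * (GR - GA) / (2 * np.pi)
        phi = Phi[:, k]
        occ[k] = (np.vdot(phi, N @ phi) / np.vdot(phi, A @ phi)).real
    return eps, occ

# Undriven check: every Floquet copy must carry the Fermi-Dirac occupation
# of its parent level (here levels at -0.5 and +0.5, mu = 0).
H0 = np.diag([-0.5, 0.5]).astype(complex)
eps, occ = floquet_keldysh_occupations(H0, np.zeros((2, 2)), w=0.6,
                                       Gamma=0.05, mu=0.0, kT=0.02)
fermi = lambda E: 1.0 / (np.exp(E / 0.02) + 1.0)
print(abs(occ.max() - fermi(-0.5)) < 1e-6,
      abs(occ.min() - fermi(0.5)) < 1e-6)       # True True
```

With a nonzero drive $V$, the sideband copies acquire fractional occupations, which is exactly the mechanism invoked in the text for the finite Hall conductivity of the photo-induced normal-insulator phase.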
This behavior is attributable to the formation and the electron occupations of Floquet subbands (a series of replicas of the original bands with an energy spacing of $\hbar\omega$) in the present photo-driven system. Namely, when the light frequency is not large enough compared to the bandwidth $W$ ($\sim$0.8 eV in the present $\alpha$-type organic salt), as in the present case with $\hbar\omega=0.5$ eV, the hybridization and the anti-crossing of Floquet bands with different photon numbers occur. As a result, the highest Floquet band in the $m=-1$ subspace and the lowest Floquet band in the $m=+1$ subspace are partially occupied by electrons, and the Hall conductivity captures the Chern numbers of the Floquet bands in these finite-photon subspaces. This phenomenon cannot be expected in the usual normal insulator phase in equilibrium. Because both the normal insulator phase and the Chern insulator phase exhibit nearly the same behavior of the Hall conductivity, it may be difficult to distinguish these phases by Hall-conductivity measurements alone. However, the temperature dependence is more pronounced in the normal insulator phase. Hence, measurements of the precise temperature profiles of $\sigma_{xy}$ may be exploited for identification of the phases. It should be mentioned that in this $\alpha$-type organic salt, the Dirac points appear at low temperatures when the charge order melts. One possible option to realize such a situation is application of a uniaxial pressure $P_a$, which is known to melt the charge ordering in the ground state. Thus, the predicted topological phase transitions may be observed under application of the pressure. The measurements must be performed using diamond anvil pressure cells, which are transparent to laser light. Another option is to perform the experiments above the charge ordering temperature $T_{\rm CO}$ ($\sim 40$ K), which is much easier than the first option.
Its feasibility is supported by our calculated Hall conductivity data, which show that the quantized Hall conductivity is observed even at $T$=100 K. \section{Conclusion} To summarize, we theoretically predicted the emergence of rich photo-induced topological phases as nonequilibrium steady states in the organic salt $\alpha$-(BEDT-TTF)$_2$I$_3$ with inclined Dirac-cone bands under continuous application of circularly polarized light. The predicted topological electronic phases and their transitions upon the laser-parameter variations are expected to be observed in future experiments by measuring the photo-induced Hall effect. A crucial advantage of the usage of organic compounds is that the effective amplitude of the laser is enhanced significantly due to the large molecule sizes and the resulting larger lattice constants, because the dimensionless laser amplitudes $\mathcal{A}_a=eaE^\omega/\hbar\omega$ and $\mathcal{A}_b=ebE^\omega/\hbar\omega$ are proportional to the lattice constants $a$ and $b$. We expect the effective vector potential $\mathcal{A}$ in $\alpha$-(BEDT-TTF)$_2$I$_3$ to be nearly an order of magnitude larger than that for graphene, which should increase the feasibility of experimentally realizing the predicted photo-induced phenomena. The field of photo-induced topological phase transitions is now rapidly growing, but target materials for research are still limited to a few toy models and simple atomic-layered materials. Our work contributes to advancing this research field by broadening the range of target materials. \section{Acknowledgment} We thank Y. Tanaka for useful discussions. This work was supported by JSPS KAKENHI (Grant Nos. 17H02924, 16H06345, 19H00864, 19K21858, 19K23427, and 20H00337) and Waseda University Grant for Special Research Projects (Project Nos. 2019C-253 and 2020C-269).
KK is supported by World-leading Innovative Graduate Study Program for Materials Research, Industry, and Technology (MERIT-WINGS) of the University of Tokyo.
\section{Introduction} \IEEEPARstart{T}{he} rapid development of the worldwide mobile communication technologies has been witnessed in recent years. After the 4th generation (4G) mobile communications
became universal around the world, the initial 5th generation (5G) standard was completed in 2018 and the 5G commercial networks were already employed in part in the first quarter of 2020. For supporting huge mobile data traffic and high-speed communications required by a growing number of mobile devices {\color{black}connected} to the wireless networks, a variety of innovative techniques including millimeter wave (mmWave), ultra-dense network (UDN) and massive multiple-input multiple-output (MIMO) are implemented in 5G wireless transmission systems \cite{M.Agiwal2016(CST)}. These techniques exhibit great advantages in helping the communication systems improve spectral efficiency (SE) \cite{W.Yan2019(WCL)}, but {\color{black}face} challenging problems such as: 1) the mmWave is susceptible to blockage and suffers from severe power attenuation during the long-distance propagation in the atmosphere \cite{X.Lin2015(WCL)}, so that the wireless communication system will suffer from poor reliability when the received signals are substantially weak; 2) the UDN is composed of numerous intensively distributed base stations (BS) \cite{W.Sun2018(TWC)} while the massive MIMO requires the signal transceivers to be equipped with large-scale antenna arrays \cite{S.Hu2018(TSP)}, which {\color{black}lead} to high hardware cost (HWC). One mature technological solution to these problems is utilizing relays to establish a multi-hop transmission mode. Conventional wireless cooperative communication systems mostly employ relays \cite{S.Cheng2018(TVT), P.K.Sharma2016(CL), X.Xia2015(TSP), A.K.Mishra2018(TVT)} to process the signals received halfway and retransmit the signals to the destination terminals actively through an uncontrollable propagation environment. Relays are validated to be effective in improving system reliability \cite{P.K.Sharma2016(CL)}, but are still active retransmitting facilities that require high energy consumption (EC) and HWC.
Recently, another state of the art approach, which is named Intelligent Reflecting Surface (IRS) \cite{O.Ozdogan2019(WCL), Q.Wu2019(ICASSP)}, Large Intelligent Surface (LIS) \cite{W.Yan2019(WCL)} or Large Intelligent Metasurface (LIM) \cite{Z.He2020(WCL)}, has attracted {\color{black}considerable attention} from wireless communication researchers. An IRS is a planar array composed of a large number of low-cost passive reconfigurable reflecting elements, each of which induces an adjustable phase shift on the coming signal wave and reflects the signal to the destination terminal \cite{Q.Wu2019(TWC), C.Huang2019(JSAC), C.Huang2020(WCM), Liuyiming(arXiv)}. It is {\color{black}distinct} from the ordinary physical reflecting surfaces which simply reflect the signal waves without any parameter adjustment, and also different from the traditional relays which actively retransmit the received signals. As a passive reflecting apparatus, the IRS is envisioned as a promising hardware solution to EC and HWC in the future communication networks. There have already been studies that focused on the achievable rate (ACR) maximization, energy efficiency improvement, modulation scheme, secure communication realization, phase shift optimization, channel estimation and capacity {\color{black}analysis for} the IRS-aided wireless communication systems \cite{E.Basar2020(TCOM), M.Cui2019(WCL), H.Shen2019(CL), W.Yan2019(WCL), Q.Nadeem2020(arXiv), E.Bjornson2020(WCL), C.Huang2018(ICASSP), C.Huang2019(TWC)}. For instance, C. Huang, \textit{et al}. \cite{C.Huang2018(ICASSP), C.Huang2019(TWC)} employed the IRS to maximize the ACR \cite{C.Huang2018(ICASSP)} and the energy efficiency \cite{C.Huang2019(TWC)} of the wireless communication systems. E. Basar \cite{E.Basar2020(TCOM)} proposed an IRS-based index modulation scheme which enabled high data rate and low bit-error-rate (BER). M. Cui, \textit{et al}. \cite{M.Cui2019(WCL)} and H. Shen, \textit{et al}. 
\cite{H.Shen2019(CL)} developed IRS-aided secure wireless communication systems where the IRS was employed to maximize the rate gap (secrecy rate) between the desired transmission path from the source to the legitimate user and the undesired one from the source to the eavesdropper. W. Yan \textit{et al}. \cite{W.Yan2019(WCL)} developed a passive beamforming and information transferring method and optimized the phase shifts with different state values to improve the average signal-to-noise ratio (SNR). Q. Nadeem \textit{et al}. \cite{Q.Nadeem2020(arXiv)} outlined an IRS-aided multiple-user MIMO communication system and estimated the cascaded channel matrix within each time interval. E. Björnson \textit{et al}. \cite{E.Bjornson2020(WCL)} analysed and compared the channel capacities of the IRS-supported, the decode-and-forward (DF) relay assisted and the single-input single-output (SISO) communication systems, and derived the minimum number of IRS reflecting elements required for the IRS to outperform the DF relay and SISO transmission. It is noted that the aforementioned works are carried out under the assumption of perfect hardware. However, in most practical situations, the inherent hardware impairment (HWI), e.g. phase noise, quantization errors and amplifier non-linearity, which generally limits the system performance, cannot be neglected due to the non-ideality of the communication devices in the real world \cite{X.Zhang2015(TCOM), K.Xu2015(IEEE PACRIM), X.Xia2015(TSP)}. Although the effect of the HWI on the system performance can be mitigated by compensation algorithms \cite{T.Schenk2008(book)}, there will still exist residual HWI due to the imprecisely estimated time-variant hardware characteristics and the random noise. As a result, it is of great significance to probe into the system performance in the presence of HWI.
Some researchers \cite{E.Bjornson2014(TIT), Q.Zhang2018(TWC), E.Bjornson2013(CL)} analysed the channel capacity of massive MIMO communication systems with HWI, which they modelled as additive Gaussian distributed distortion noise. However, to the best of our knowledge, only a few studies have analysed IRS-aided communication systems with the HWI at the IRS \cite{S.Hu2018(GlobalCom), M.A.Badiu2020(WCL),D.Li2020(CL)}. Among these studies, the researchers performed some important initial works by modelling the HWI at the IRS as an additive term dependent on the distance between the reflecting point and the reflecting surface center \cite{S.Hu2018(GlobalCom)}, or as uniformly distributed phase noise generated by the reflecting units \cite{M.A.Badiu2020(WCL),D.Li2020(CL)}. However, these works still left several research gaps to be filled. First, the HWI of the transmitting devices and receiving terminals, which jointly influences the performance of the IRS-aided communication systems as well, was not simultaneously taken into consideration. Second, the phase shift optimization was not implemented in the presence of HWI, which is indispensable for acquiring the optimal IRS configuration under hardware imperfections. Third, the performance comparisons between the IRS and conventional approaches, e.g. the DF relay, which also contribute to wireless data transmission enhancement, needed to be further explored in the presence of HWI. Up to now, no existing work has inquired into the above three aspects.
Therefore, in this article, we will provide the ACR analysis and phase shift optimization for the IRS-aided communication system in consideration of the HWI at both the IRS and the transceivers, and present performance comparisons with the existing multiple-antenna DF relay assisted communication system with the HWI at the DF relay and the transceivers. Our contributions are summarized as follows. \begin{itemize} \item[•] By referring to \cite{M.A.Badiu2020(WCL)}, we model the HWI at the IRS as a phase error matrix, in which the random phase errors generated by the IRS reflecting units are uniformly distributed. By referring to \cite{E.Bjornson2014(TIT),E.Bjornson2015(TWC)}, we model the transceiver HWI as the additive distortion noise as well as the phase drift and the thermal noise. When the IRS phase shifts are adjusted to compensate for the phase shifts in the source-IRS channel and the IRS-destination channel, we mathematically derive the closed-form expression of the average ACR and the IRS utility with respect to the number of reflecting elements, denoted by $N$. From the theoretical and numerical results, we confirm that the HWI decreases the average ACR and the IRS utility, and imposes a more severe impact on the ACR performance as $N$ becomes larger. \item[•] In order to optimize the IRS phase shifts and obtain the maximum average ACR with HWI, we formulate the optimization problems and transform the non-convex problems into convex semidefinite programming (SDP) problems, then obtain the solutions numerically by exploiting the CVX toolbox with the SDPT3 solver in MATLAB.
Besides, we evaluate the impact of the channel estimation errors and the residual phase noises on the optimization performance, after which we conclude that both of these unavoidable factors result in performance degradation to some extent. \item[•] When the HWI appears at the IRS, the DF relay and the transceivers, we compare the performance of the IRS with that of the multiple-antenna DF relay in terms of the ACR and the utility, and derive the condition under which the IRS can always surpass the DF relay for all $N>0$. The results illustrate that if $N$ is large enough or the transmitting power is sufficiently high, the IRS with $N$ passive reflecting elements is able to outperform the DF relay with the same number of antennas in the presence of HWI. \end{itemize} The rest of this article is organized as follows. In Section II, we introduce the IRS-aided wireless communication system with HWI by presenting the system model. In Section III, we analyse the ACR and the IRS utility of the considered wireless communication system. In Section IV, we present the problem formulation and transformation for optimizing the IRS phase shifts in the presence of HWI. In Section V, we compare the performance of the IRS with that of the multiple-antenna DF relay in terms of the ACR and the utility. In Section VI, we provide numerical results to verify the theoretical analysis and present discussions on the channel estimation errors and the residual phase noises. In Section VII, we draw the overall conclusions. \textit{Notations}: Italics denote variables or constants, while boldfaces denote vectors or matrices. $\mathbf{A}^*$, $\mathbf{A}^T$, $\mathbf{A}^H$ and $\mathbf{A}^{-1}$ symbolize the conjugate, transpose, conjugate-transpose and inverse of matrix $\mathbf{A}$, respectively.
$tr(\mathbf{A})$ and $rank(\mathbf{A})$ stand for the trace and the rank of $\mathbf{A}$. $diag(\mathbf{a})$ represents an $n\times n$ sized diagonal matrix whose diagonal elements are $(a_1,a_2,\ldots,a_n)$ in vector $\mathbf{a}$. $||.||_2$ represents the $\ell_2$ norm. $\odot$ symbolizes the Hadamard product. $\mathbf{A}\in\mathbb{C}^{m\times n}$ or $\mathbf{A}\in\mathbb{R}^{m\times n}$ means that $\mathbf{A}$ is an $m\times n$ sized complex or real-number matrix. $\mathbf{A}\sim\mathcal{CN}(\mathbf{0},\mathbf{V})$ or $\mathbf{A}\sim\mathcal{N}(\mathbf{0},\mathbf{V})$ illustrates that $\mathbf{A}$ obeys the complex normal or normal distribution with mean zero and covariance matrix $\mathbf{V}$. $\mathbf{A}\succeq\mathbf{0}$ means that $\mathbf{A}$ is positive semidefinite. $\mathbb{E}_\mathbf{x}[\mathbf{A}]$ denotes the expectation of $\mathbf{A}$ over the random variable $\mathbf{x}$ if $\mathbf{A}$ is a stochastic matrix in relation to $\mathbf{x}$. $\mathbf{I}_n$ and $\mathbf{\Gamma}_n$ symbolize the $n\times n$ sized identity matrix and the $n\times n$ sized matrix with all elements equal to 1, respectively. $\mathbf{1}$ stands for the unit row vector with all elements equal to 1. $\Delta=b^2-4ac$ represents the discriminant of the quadratic function $f\left(x\right)=ax^2+bx+c$. $g(x)=\mathcal{O}(f(x))$ signifies that $|g(x)/f(x)|$ is bounded when $x\rightarrow\infty$. $\lim_{x\to\infty}f(x)$ is represented by $f(x)|_{x\rightarrow\infty}$ throughout the whole paper. \section{Communication System Model} In this article, the considered wireless communication system (Figure \ref{Fig-1}) includes a signal-emitting source (e.g. the base station, BS), an IRS with $N$ passive reflecting elements, an IRS controller and a signal-receiving destination (e.g. the user equipment, UE). The signal-emitting source, assumed to be equipped with a single antenna, transmits the modulated signals with an average power of $P$ (i.e. a signal amplitude of $\sqrt{P}$).
The IRS induces reconfigurable phase shifts, adjusted by the IRS controller based on the channel state information (CSI), on the impinging signals, and reflects the incoming signal waves to the destination. The signal-receiving destination, also equipped with a single antenna, receives the directly arrived signals from the source and the passively reflected signals from the IRS. \begin{figure}[!t] \includegraphics[width=4.0in]{Fig_1_R1.jpg} \centering \caption{The considered IRS-aided wireless communication system, including a single-antenna signal-emitting source, a single-antenna signal-receiving destination, an IRS with $N$ passive reflecting elements, and an IRS controller.} \label{Fig-1} \end{figure} Generally, due to the non-ideality of the hardware, the received signal is disturbed by the HWI which universally exists in real-world communication devices. In the considered system, the HWI appears at both the IRS and the signal transceivers. First, the HWI at the IRS is modelled as a random diagonal phase error matrix, which involves $N$ random phase errors induced by the intrinsic hardware imperfection of the passive reflectors, or by the imprecision of the channel estimation \cite{M.A.Badiu2020(WCL)}. It is expressed as \begin{equation} \mathbf{\Theta}_E=diag\left(e^{j\theta_{E1}},e^{j\theta_{E2}},\ldots,e^{j\theta_{EN}}\right) \end{equation} where $j^2=-1$ and $\theta_{Ei}$, for $i=1,2,\ldots,N$, are random phase errors uniformly distributed on $\left[-\pi/2,\pi/2\right]$.
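For concreteness, this phase error model can be sketched numerically. The following minimal NumPy snippet (function and variable names are our own, not from the paper) draws the uniformly distributed phase errors and checks the statistic $\mathbb{E}[e^{j\theta_{Ei}}]=2/\pi$, which is the factor behind the $4/\pi^2$ coefficient appearing later in the average-ACR analysis:

```python
import numpy as np

def phase_error_matrix(n, rng):
    """Random diagonal phase error matrix Theta_E: n phase errors
    uniformly distributed on [-pi/2, pi/2]."""
    theta_e = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    return np.diag(np.exp(1j * theta_e))

rng = np.random.default_rng(0)
print(phase_error_matrix(4, rng).shape)  # (4, 4)

# Key statistic of this error model: for theta ~ U[-pi/2, pi/2],
# E[e^{j*theta}] = 2/pi (about 0.6366).
samples = np.exp(1j * rng.uniform(-np.pi / 2, np.pi / 2, size=1_000_000))
print(abs(samples.mean() - 2 / np.pi) < 1e-2)  # True
```

The mean $2/\pi$ explains why the cascaded link retains a deterministic gain component even under random phase errors.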
Then, the HWIs at the signal transceivers primarily include the additive distortion noise, the multiplicative phase drift and the amplified thermal noise \cite{E.Bjornson2013(CL), E.Bjornson2014(TIT), E.Bjornson2015(TWC), Q.Zhang2018(TWC)}, which create a mismatch between the intended signal and the practically generated signal, or create a distortion on the received signal during the reception processing. The distortion noises, generated by the transmitter and the receiver due to the insufficiently accurate modelling, the time-variant characteristics, etc., are modelled as $\eta_t(t)\sim\mathcal{CN}(0,\Upsilon_t)$ and $\eta_r(t)\sim\mathcal{CN}(0,V_r)$, respectively, where $\Upsilon_t$ and $V_r$ will be given in (\ref{Upsilon_t}) and (\ref{V_r}). The multiplicative phase drift caused by the local oscillator at the receiver is modelled as $\phi(t)=e^{j\psi(t)}$, with its expectation given by $\mathbb{E}[\phi(t)]=e^{-\frac{1}{2}\delta t}$ \cite{Q.Zhang2018(TWC)}, where $\delta$ denotes the oscillator quality, and $\psi(t)$ follows a Wiener process: \begin{equation} \psi(t)\sim\mathcal{N}(\psi(t-1),\delta) \end{equation} The amplified thermal noise, arising from the mixers at the receiver and from the interference leakage from other frequency bands or wireless networks \cite{E.Bjornson2015(TWC)}, is modelled as $w'(t)\sim\mathcal{CN}(0,\sigma_{w'}^2)$, with $\sigma_{w'}^2$ being the thermal noise variance. Therefore, referring to Eq. (2) in \cite{E.Bjornson2014(TIT)} and {\color{black} Eq.
(3) in \cite{E.Bjornson2015(TWC)}}, the received signal disturbed by {\color{black} HWI} is modelled as {\color{black}\begin{equation}\label{eq2-12} y(t)=\phi(t)\left(\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}+h_{SU}\right)\left[\sqrt Ps(t)+\eta_t(t)\right]+\eta_r(t)+w(t) \end{equation} where} $s(t)$ stands for the unit-power signal symbol at time $t$ with $\mathbb{E}\left[s(t)s^*(t)\right]=1$; $w(t)\sim\mathcal{CN}\left(0,\sigma_w^2\right)$ denotes the receiver noise, whose {\color{black} variance $\sigma_w^2$, according to \cite{E.Bjornson2015(TWC)}, satisfies $\sigma_w^2=F\sigma_{w'}^2$, with $F>1$ being the noise amplification factor}; $\mathbf{\Phi}=\alpha\times diag\left(e^{j\theta_1},e^{j\theta_2},\ldots,e^{j\theta_N}\right)$ represents the phase shifting matrix of the IRS, where $\alpha\in(0,1]$ is the fixed amplitude reflection coefficient and $\theta_i$, for $i=1,2,\ldots,N$, are the adjustable phase-shift variables of the IRS; $h_{SU}=\sqrt{\mu_{SU}}e^{j\varphi_{SU}}$ represents the channel coefficient from the source to the destination, where $\sqrt{\mu_{SU}}$ and $\varphi_{SU}$ are the power attenuation coefficient and the phase shift of $h_{SU}$; $\mathbf{h}_{IU}\in\mathbb{C}^{N\times 1}$ and $\mathbf{h}_{SI}\in\mathbb{C}^{N\times1}$ are the channel coefficients from the IRS to the destination and from the source to the IRS, respectively, which are expressed as{\color{black}\cite{E.Bjornson2020(WCL)}} \begin{equation} \mathbf{h}_{IU}=\sqrt{\mu_{IU}}\left(e^{j\varphi_{IU,1}},e^{j\varphi_{IU,2}},\ldots,e^{j\varphi_{IU,N}}\right)^T \end{equation} \begin{equation} \mathbf{h}_{SI}=\sqrt{\mu_{SI}}\left(e^{j\varphi_{SI,1}},e^{j\varphi_{SI,2}},\ldots,e^{j\varphi_{SI,N}}\right)^T \end{equation} where $\sqrt{\mu_{IU}}$ and $\sqrt{\mu_{SI}}$ are the power attenuation coefficients of $\mathbf{h}_{IU}$ and $\mathbf{h}_{SI}$; $\varphi_{IU,i}$ and $\varphi_{SI,i}$, for $i=1,2,\ldots,N$, are the phase shifts of $\mathbf{h}_{IU}$ and 
$\mathbf{h}_{SI}$. As the distortion noises are proportional to the signal power, we have \begin{equation}\label{Upsilon_t} \Upsilon_t=\kappa_tP \mathbb{E}\left[s(t)s^*(t)\right] \end{equation} \begin{equation}\label{V_r} V_r=\kappa_rP\left|\phi(t)\left(\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}+h_{SU}\right)\right|^2 \mathbb{E}\left[s(t)s^*(t)\right] \end{equation} where $\kappa_t$ and $\kappa_r$ represent the proportionality coefficients which describe the severities of the distortion noises at the transmitter and the receiver, respectively. For this communication system, we will derive the approximate closed-form ACR expression and the IRS utility in relation to $N$ in the presence of HWI, and analyse the ACR and utility degradations caused by the HWI in the next section. \section{ACR Analysis with HWI} Based on the signal model in (\ref{eq2-12}), we will analyse the ACR and the IRS utility of the considered IRS-aided communication system in the presence of HWI. Here we assume that the phase information in the cascaded source-IRS-destination channel model \cite{Q.Nadeem2020(arXiv)} is already estimated before $\mathbf{\Phi}$ is adjusted, so that $\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, for $i=1,2,\ldots,N$, are known to the IRS phase shift controller. This can be realized via some existing channel estimation techniques, e.g. \cite{B.Zheng2020(WCL),Q.Nadeem2020(arXiv)}, which estimated the cascaded channel, and \cite{L.Wei2020(Arxiv),L.Wei2020(IEEE SAM)}, which designed robust and effective channel estimation frameworks based on the PARAllel FACtor (PARAFAC) decomposition.
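As a quick sanity check on the signal model in (\ref{eq2-12}), the following NumPy sketch (with illustrative parameter values, not taken from the paper) builds random channel phases, applies the phase-compensating IRS configuration $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, and verifies that with ideal hardware the received SNR reduces to the well-known HWI-free expression of \cite{E.Bjornson2020(WCL)}:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative parameter values (not from the paper).
P, alpha, sigma_w2 = 1.0, 0.9, 1e-3     # transmit power, reflection coeff., noise power
mu_IU, mu_SI, mu_SU = 1e-3, 1e-3, 1e-4  # path power attenuations
phi_SU = 0.7                            # phase of the direct channel h_SU
N = 64

phi_IU = rng.uniform(0, 2 * np.pi, N)   # phases of h_IU
phi_SI = rng.uniform(0, 2 * np.pi, N)   # phases of h_SI
h_IU = np.sqrt(mu_IU) * np.exp(1j * phi_IU)
h_SI = np.sqrt(mu_SI) * np.exp(1j * phi_SI)
h_SU = np.sqrt(mu_SU) * np.exp(1j * phi_SU)

# Phase-compensating IRS configuration theta_i = -(phi_IU,i + phi_SI,i).
Phi = alpha * np.diag(np.exp(-1j * (phi_IU + phi_SI)))

def snr(kappa_t, kappa_r, theta_e):
    """Received SNR for one phase-error realization (|phi(t)| = 1 cancels)."""
    Theta_E = np.diag(np.exp(1j * theta_e))
    g = h_IU @ Phi @ Theta_E @ h_SI + h_SU   # composite channel
    sig = P * abs(g) ** 2
    return sig / ((kappa_t + kappa_r) * sig + sigma_w2)

# With ideal hardware the SNR reduces to the HWI-free expression.
ideal = P * (alpha**2 * N**2 * mu_IU * mu_SI
             + 2 * alpha * N * np.sqrt(mu_IU * mu_SI * mu_SU) * np.cos(phi_SU)
             + mu_SU) / sigma_w2
print(np.isclose(snr(0.0, 0.0, np.zeros(N)), ideal))  # True
```

Feeding non-zero $\kappa_t,\kappa_r$ or random `theta_e` into `snr` then reproduces the impaired behaviour analysed below.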
In {\color{black} (\ref{eq2-12})}, $\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}$ is maximized if each phase shift of the IRS is adjusted into $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, for $i=1,2,\ldots,N$, to compensate for the phase shifts in $\mathbf{h}_{IU}$ and $\mathbf{h}_{SI}$ \cite{E.Bjornson2020(WCL)}. As a result, when $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, the received signal affected by {\color{black} HWI} is expressed as {\color{black} \begin{equation}\label{eq2-7} y(t)=\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\left[\sqrt Ps(t)+\eta_t(t)\right]+\eta_r(t)+w(t) \end{equation} where} $\mathbf{g}_{IU}=\sqrt{\mu_{IU}}\mathbf{1}^T$ and $\mathbf{g}_{SI}=\sqrt{\mu_{SI}}\mathbf{1}^T$. {\color{black}Accordingly}, the ACR with {\color{black} HWI} is expressed as {\color{black} \begin{equation}\label{eq2-16} \begin{split} R_{HWI}\left(N\right)&=\log_2\left\{1+\frac{P\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2}{P(\kappa_t+\kappa_r)\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2+\sigma_w^2}\right\} \end{split} \end{equation}} Based on (\ref{eq2-16}), we obtain the following theorem. 
{\color{black} \begin{theorem} When $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$ and $\theta_{Ei}$ is uniformly distributed on $\left[-\pi/2,\pi/2\right]$, the approximate average ACR with HWI is expressed as \begin{equation}\label{eq2-17} \overline{R_{HWI}}\left(N\right)=\log_2\left\{1+\frac{\beta N^2+\lambda N+\mu_{SU}} {\left(\kappa_t+\kappa_r\right)\left(\beta N^2+\lambda N+\mu_{SU}\right)+\frac{\sigma_w^2}{P}}\right\} \end{equation} where \begin{equation}\label{eq-revision-1} \beta=\frac{4\alpha^2\mu_{IU}\mu_{SI}}{\pi^2} \end{equation} \begin{equation}\label{eq-revision-2} \lambda=\left(1-\frac{4}{\pi^2}\right)\alpha^2\mu_{IU}\mu_{SI}+\frac{4\alpha}{\pi}\mu_{IU}^{\frac{1}{2}}\mu_{SI}^{\frac{1}{2}}\mu_{SU}^{\frac{1}{2}}\cos{(\varphi_{SU})} \end{equation} The IRS utility with HWI, defined by $\gamma_{HWI}(N)=\frac{\partial\overline{R_{HWI}}\left(N\right)}{\partial N}$ according to Definition 1 in \cite{S.Hu2018(GlobalCom)}, is expressed as \begin{equation}\label{Utility Expression} \begin{split} \gamma_{HWI}(N)=\frac{\sigma_w^2}{P}(2\beta N+\lambda)\left\{\left[(\kappa_t+\kappa_r)(\beta N^2+\lambda N+\mu_{SU})+\frac{\sigma_w^2}{P}\right]\times\right.\\ \left. \left[(\kappa_t+\kappa_r+1)(\beta N^2+\lambda N+\mu_{SU})+\frac{\sigma_w^2}{P}\right]\ln 2\right\}^{-1} \end{split} \end{equation} \end{theorem} \begin{proof} The proof is given in Appendix A.
\end{proof}} {\color{black} Subsequently, for theoretically evaluating the impact that the HWI has on the ACR and the IRS utility, we further calculate the rate gap, defined by $\delta_R(N)=R(N)-\overline{R_{HWI}}\left(N\right)$, and the utility gap, defined by $\delta_{\gamma}(N)=\gamma(N)-\gamma_{HWI}(N)$ in the following \textbf{Lemma 1}, where $R(N)$ and $\gamma(N)$, denoting the ACR and the IRS utility without HWI, will be given in the proof.} \begin{lemma} When $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$ and $\theta_{Ei}$ is uniformly distributed on $\left[-\pi/2,\pi/2\right]$, the rate gap {\color{black}$\delta_R\left(N\right)$} between the average ACR{\color{black}s} with and without HWI is expressed as {\color{black}\begin{equation}\label{eq2-18} \delta_R\left(N\right)\!=\!\log_2\left\{\frac{P\left(\kappa_t+\kappa_r\right)\chi+\sigma_w^2+P^2\chi\varpi\left(\frac{\kappa_t}{\sigma_w^2}+\frac{\kappa_r}{\sigma_w^2}\right)+P\varpi}{P(\kappa_t+\kappa_r+1)\chi+\sigma_w^2}\right\} \end{equation} where \begin{equation}\label{eq-revision-3} \varpi=\frac{\pi^2}{4}\beta N^2+\rho N+\mu_{SU} \end{equation} \begin{equation}\label{eq-revision-4} \chi=\beta N^2+\lambda N+\mu_{SU} \end{equation} with $\rho$ given by $\rho=2\alpha\mu_{IU}^{\frac{1}{2}}\mu_{SI}^{\frac{1}{2}}\mu_{SU}^{\frac{1}{2}}\cos{(\varphi_{SU})}$. The utility gap $\delta_{\gamma}(N)$ between the IRS utilities with and without HWI is expressed as \begin{equation}\label{Utility Degradation} \begin{split} \delta_\gamma(N)=& \left[P^3\chi^2(\kappa_t+\kappa_r+1)\left(\frac{\kappa_t}{\sigma_w^2}+\frac{\kappa_r}{\sigma_w^2}\right)\frac{\partial\varpi}{\partial N}+ P^2(\kappa_t+\kappa_r+1)\left(\frac{\partial\varpi}{\partial N}\chi-\frac{\partial\chi}{\partial N}\varpi\right)+\right.\\ &\left. 
P^2\left(\kappa_t+\kappa_r\right)\left(\frac{\partial\varpi}{\partial N}\chi+\frac{\partial\chi}{\partial N}\varpi\right)+ P\sigma_w^2\left(\frac{\partial\varpi}{\partial N}-\frac{\partial\chi}{\partial N}\right)\right]\times \left\{ \left[P(\kappa_t+\kappa_r)\chi+\sigma_w^2+\right.\right.\\ &\left.\left.P^2\varpi\chi\left(\frac{\kappa_t}{\sigma_w^2}+\frac{\kappa_r}{\sigma_w^2}\right)+P\varpi\right] \left[P(\kappa_t+\kappa_r+1)\chi+\sigma_w^2\right]\ln 2 \right\}^{-1} \end{split} \end{equation} where $\frac{\partial\varpi}{\partial N}=\frac{\pi^2}{2}\beta N+\rho$ and $\frac{\partial\chi}{\partial N}=2\beta N+\lambda$ are the partial derivatives of $\varpi$ and $\chi$, respectively.} \end{lemma} {\color{black} \begin{proof} According to \cite{E.Bjornson2020(WCL)}, $R\left(N\right)$ is expressed as \begin{equation}\label{eq2-11} R\left(N\right)=\log_2\left\{1+\frac{P\left[\alpha^2N^2\mu_{IU}\mu_{SI}+2\alpha N\sqrt{\mu_{IU}\mu_{SI}\mu_{SU}}\cos{\left(\varphi_{SU}\right)}+\mu_{SU}\right]}{\sigma_w^2}\right\} \end{equation} Then, the corresponding $\gamma(N)$, defined by $\gamma(N)=\frac{\partial R(N)}{\partial N}$, is given by \begin{equation}\label{Utility Expression 2} \begin{split} \gamma(N)&=\frac{\frac{P}{\sigma_w^2}\left[2\alpha^2\mu_{IU}\mu_{SI}N+2\alpha\sqrt{\mu_{IU}\mu_{SI}\mu_{SU}}\cos{\left(\varphi_{SU}\right)}\right]} {\left\{1+\frac{P}{\sigma_w^2}\left[\alpha^2 N^2\mu_{IU}\mu_{SI}+2\alpha N\sqrt{\mu_{IU}\mu_{SI}\mu_{SU}}\cos{\left(\varphi_{SU}\right)}+\mu_{SU}\right]\right\}\ln 2} \end{split} \end{equation} Thereupon, by calculating $\delta_R(N)=R(N)-\overline{R_{HWI}}\left(N\right)$ and $\delta_{\gamma}(N)=\gamma(N)-\gamma_{HWI}(N)=\frac{\partial R(N)}{\partial N}-\frac{\partial \overline{R_{HWI}}(N)}{\partial N}=\frac{\partial \delta_R(N)}{\partial N}$, we derive the above (\ref{eq2-18}) and (\ref{Utility Degradation}). 
\end{proof}} {\color{black}\textbf{Theorem 1} demonstrates the following. 1) Although $\overline{R_{HWI}}\left(N\right)$ increases with $N$, the proportionality coefficient $\beta$ on $N^2$ in $\overline{R_{HWI}}\left(N\right)$ is smaller than $\alpha^2\mu_{IU}\mu_{SI}$ in $R(N)$, implying that $\overline{R_{HWI}}\left(N\right)$ rises more slowly than $R(N)$. 2) When $N\rightarrow\infty$, $\overline{R_{HWI}}\left(N\right)$ is limited by \begin{equation}\label{R_HWI Upper Bound} \left.\overline{R_{HWI}}\left(N\right)\right|_{N\rightarrow\infty}=\log_2\left(1+\frac{1}{\kappa_t+\kappa_r}\right) \end{equation} which signifies that even if $N$ becomes significantly large or tends to infinity, the potential growth of $\overline{R_{HWI}}\left(N\right)$ is primarily restricted by the coefficients $\kappa_t$ and $\kappa_r$ of the HWI at the signal transceivers. In contrast, $R(N)$ continuously ascends without bound as $N$ grows. 3) $\gamma_{HWI}(N)$ is inversely proportional to $\mathcal{O}(N^3)$, which indicates that the IRS utility with HWI descends as $N$ grows, and approaches zero when $N\rightarrow\infty$. This implies that if $N$ is extremely large or nearly infinite, adding more passive reflecting elements contributes little ACR improvement when there exists HWI. \textbf{Lemma 1} illustrates the following. 1) The rate gap $\delta_R(N)>0$ for $N>0$, which indicates that the ACR is degraded by the HWI. 2) $\delta_R(N)$ increases with $N$, because the numerator inside $\log_2(.)$ contains $\chi\varpi$, which is proportional to $\mathcal{O}(N^4)$, while the denominator inside $\log_2(.)$ merely involves $\chi$, which is proportional to $\mathcal{O}(N^2)$. This implies that as $N$ grows, the IRS-aided wireless communication system will suffer from more serious ACR degradation.
3) The utility gap $\delta_{\gamma}(N)>0$, because by expanding $\left(\frac{\partial\varpi}{\partial N}\chi-\frac{\partial\chi}{\partial N}\varpi\right)$ and $\left(\frac{\partial\varpi}{\partial N}-\frac{\partial\chi}{\partial N}\right)$ in (\ref{Utility Degradation}), we have \begin{equation} \begin{split} &\left(\frac{\partial\varpi}{\partial N}\chi-\frac{\partial\chi}{\partial N}\varpi\right)= \frac{4\alpha^2\mu_{IU}\mu_{SI}N^2}{\pi^2}\left[\left(\frac{\pi^2}{4}-1\right)\alpha^2\mu_{IU}\mu_{SI}+(\pi-2)\alpha\mu_{IU}^{\frac{1}{2}}\mu_{SI}^{\frac{1}{2}}\mu_{SU}^{\frac{1}{2}}\cos{(\varphi_{SU})} \right]+\\ &\left[\left(2-\frac{8}{\pi^2}\right)N+\frac{4}{\pi^2}-1\right]\alpha^2\mu_{IU}\mu_{SI}\mu_{SU}+ \left(2-\frac{4}{\pi}\right)\alpha\mu_{IU}^{\frac{1}{2}}\mu_{SI}^{\frac{1}{2}}\mu_{SU}^{\frac{3}{2}}\cos{(\varphi_{SU})}>0 \end{split} \end{equation} \begin{equation} \begin{split} \left(\frac{\partial\varpi}{\partial N}-\frac{\partial\chi}{\partial N}\right)=(2N-1)\left(1-\frac{4}{\pi^2}\right)\alpha^2\mu_{IU}\mu_{SI}+\left(2-\frac{4}{\pi}\right)\alpha\mu_{IU}^{\frac{1}{2}}\mu_{SI}^{\frac{1}{2}}\mu_{SU}^{\frac{1}{2}}\cos{(\varphi_{SU})}>0 \end{split} \end{equation} This reveals that the IRS utility will also be degraded by the HWI to some extent.} It is notable that the results in \textbf{Theorem 1} and \textbf{Lemma 1} are derived on the basis of $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, which is configured to compensate for the phase shifts in $\mathbf{h}_{IU}$ and $\mathbf{h}_{SI}$ \cite{E.Bjornson2020(WCL)}. However, $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$ might not be optimal in this considered wireless propagation environment, as it does not take the phase shift in $h_{SU}$ into account. Thus, in Section IV, we will optimize the IRS phase shifts and reconfigure the phase shifting matrix to obtain the maximum ACR in the presence of HWI.
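The closed-form results above can be cross-checked numerically. The following NumPy sketch (illustrative parameter values, not from the paper) verifies by Monte Carlo that the moment $\beta N^2+\lambda N+\mu_{SU}$ matches the average squared magnitude of the phase-compensated composite channel, that the utility in (\ref{Utility Expression}) agrees with a finite-difference derivative of (\ref{eq2-17}), and that $\overline{R_{HWI}}\left(N\right)$ saturates at the ceiling in (\ref{R_HWI Upper Bound}):

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative parameter values (not from the paper).
P, alpha, sigma_w2 = 1.0, 0.9, 1e-3
mu_IU, mu_SI, mu_SU, phi_SU = 1e-3, 1e-3, 1e-4, 0.7
kappa_t, kappa_r = 0.05, 0.05
N = 64

# Coefficients beta and lambda of Theorem 1.
beta = 4 * alpha**2 * mu_IU * mu_SI / np.pi**2
lam = ((1 - 4 / np.pi**2) * alpha**2 * mu_IU * mu_SI
       + (4 * alpha / np.pi) * np.sqrt(mu_IU * mu_SI * mu_SU) * np.cos(phi_SU))

def R_bar(n):
    """Closed-form average ACR of Theorem 1."""
    chi = beta * n**2 + lam * n + mu_SU
    return np.log2(1 + chi / ((kappa_t + kappa_r) * chi + sigma_w2 / P))

# 1) Monte Carlo check of the moment E|g|^2 = beta*N^2 + lam*N + mu_SU,
#    with g the phase-compensated composite channel under random phase errors.
theta_e = rng.uniform(-np.pi / 2, np.pi / 2, size=(100_000, N))
g = (alpha * np.sqrt(mu_IU * mu_SI) * np.exp(1j * theta_e).sum(axis=1)
     + np.sqrt(mu_SU) * np.exp(1j * phi_SU))
emp = np.mean(np.abs(g) ** 2)
print(abs(emp / (beta * N**2 + lam * N + mu_SU) - 1) < 0.01)  # True

# 2) The utility formula matches a finite-difference derivative of R_bar.
chi = beta * N**2 + lam * N + mu_SU
gamma = (sigma_w2 / P) * (2 * beta * N + lam) / (
    ((kappa_t + kappa_r) * chi + sigma_w2 / P)
    * ((kappa_t + kappa_r + 1) * chi + sigma_w2 / P) * np.log(2))
fd = (R_bar(N + 1e-2) - R_bar(N - 1e-2)) / 2e-2
print(np.isclose(fd, gamma, rtol=1e-5))  # True

# 3) R_bar saturates at log2(1 + 1/(kappa_t + kappa_r)) as N grows.
print(abs(R_bar(10**7) - np.log2(1 + 1 / (kappa_t + kappa_r))) < 1e-3)  # True
```

The saturation check makes the distortion-noise ceiling concrete: for $\kappa_t+\kappa_r=0.1$, no number of reflecting elements can push the average ACR past $\log_2 11\approx 3.46$ bit/s/Hz.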
\section{Phase Shift Optimization} Instead of configuring $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$ to evaluate the ACR, we will formulate the optimization problem to optimize the IRS phase shifts with HWI in this section. \subsection{Problem Formulation and Transformation} Here, we revisit (\ref{eq2-12}), from which we obtain the ACR with HWI: \begin{equation}\label{original ACR with HWI} \begin{split} R_{\mathbf{\Phi},HWI}\left(N\right)&=\log_2\left\{1+\frac{P\left|\phi(t)\left(\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}+h_{SU}\right)\right|^2}{P(\kappa_t+\kappa_r)\left|\phi(t)\left(\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}+h_{SU}\right)\right|^2+\sigma_w^2}\right\} \end{split} \end{equation} Therefore, aiming at maximizing the received SNR, we can formulate the phase shift optimization problem as \begin{subequations}\label{original problem} \begin{align} (\mathrm{P1}):\ \mathop{\max}\limits_{\mathbf{\Phi}}&\frac{P\left|\phi(t)\left(\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}+h_{SU}\right)\right|^2}{P(\kappa_t+\kappa_r)\left|\phi(t)\left(\mathbf{h}_{IU}^T\mathbf{\Phi}\mathbf{\Theta}_E\mathbf{h}_{SI}+h_{SU}\right)\right|^2+\sigma_w^2} \\s.t.\ &\left|\left[\mathbf{\Phi}\right]_{(n,n)}\right|=\alpha,\ n=1,2\ldots N \end{align} \end{subequations} However, the objective function (OBF) in (\ref{original problem}a) is non-concave with respect to $\mathbf{\Phi}$, and the constraint in (\ref{original problem}b) is non-convex. Thus, inspired by \cite{M.Cui2019(WCL)}, we will convert (P1) into another solvable form. Let $\mathbf{D}_{IU}$ denote a diagonal matrix expressed as $\mathbf{D}_{IU}=diag\left(\mathbf{h}_{IU}\right)$, and $\bm{\theta}$ denote a column vector expressed as $\bm{\theta}=\alpha\left(e^{j\theta_1},e^{j\theta_2},\ldots,e^{j\theta_N}\right)^T$.
Then, we have $\bm{\theta}^T\mathbf{D}_{IU}=\mathbf{h}_{IU}^T\mathbf{\Phi}$. By replacing $\mathbf{h}_{IU}^T\mathbf{\Phi}$ with $\bm{\theta}^T\mathbf{D}_{IU}$, we expand (\ref{original ACR with HWI}) into {\color{black} \begin{equation}\label{eq2-22} R_{\bm{\theta},HWI}\left(N\right)=\log_2{\left\{1+\frac{P\left(Z+||h_{SU}||_2^2\right)}{P(\kappa_t+\kappa_r)\left(Z+||h_{SU}||_2^2\right)+\sigma_w^2}\right\}} \end{equation} where} $Z=\mathbf{h}_{SI}^H\mathbf{\Theta}_E^H\mathbf{D}_{IU}^H\bm{\theta}^*\bm{\theta}^T\mathbf{D}_{IU}\mathbf{\Theta}_E\mathbf{h}_{SI}+\mathbf{h}_{SI}^H\mathbf{\Theta}_E^H\mathbf{D}_{IU}^H\bm{\theta}^*h_{SU}+h_{SU}^*\bm{\theta}^T\mathbf{D}_{IU}\mathbf{\Theta}_E\mathbf{h}_{SI}$. Let $\mathbf{a}$ be defined by $\mathbf{a}=\left(\bm{\theta}^T,1\right)^H$. We can rewrite $Z$ as $Z=\mathbf{a}^H\mathbf{\Xi}\mathbf{a}$, where \begin{equation}\label{eq2-24} \mathbf{\Xi}=\left(\begin{matrix}\mathbf{D}_{IU}\mathbf{\Theta}_E\mathbf{h}_{SI}\mathbf{h}_{SI}^H\mathbf{\Theta}_E^H\mathbf{D}_{IU}^H&h_{SU}^*\mathbf{D}_{IU}\mathbf{\Theta}_E\mathbf{h}_{SI}\\\mathbf{h}_{SI}^H\mathbf{\Theta}_E^H\mathbf{D}_{IU}^Hh_{SU}&0\\\end{matrix}\right) \end{equation} Therefore, {\color{black} $R_{\bm{\theta},HWI}\left(N\right)$} can be simplified into {\color{black}\begin{equation}\label{eq2-25} \begin{split} R_{\bm{\theta},HWI}\left(N\right)=&\log_2{\left\{1+\frac{P\left(\mathbf{a}^H\mathbf{\Xi}\mathbf{a}+||h_{SU}||_2^2\right)}{P(\kappa_t+\kappa_r)\left(\mathbf{a}^H\mathbf{\Xi}\mathbf{a}+||h_{SU}||_2^2\right)+\sigma_w^2}\right\}} \\ =&\log_2{\left\{1+\frac{P\left[tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2\right]}{P(\kappa_t+\kappa_r)\left[tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2\right]+\sigma_w^2}\right\}} \end{split} \end{equation} where } \begin{equation}\label{eq2-26} \mathbf{X}=\mathbf{a}\mathbf{a}^H=\left(\begin{matrix}\bm{\theta}^*\bm{\theta}^T&\bm{\theta}^*\\\bm{\theta}^T&1\\\end{matrix}\right)\in\mathbb{C}^{\left(N+1\right)\times\left(N+1\right)} \end{equation} Then, the 
optimization problem is formulated as {\color{black}\begin{subequations}\label{converted problem 1} \begin{align} (\mathrm{P2}):\ \mathop{\max}\limits_{\bm{\theta}}&\frac{P\left[tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2\right]}{P(\kappa_t+\kappa_r)\left[tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2\right]+\sigma_w^2} \\s.t.\ &\left|\left[\bm{\theta}\right]_{n}\right|=\alpha,\ n=1,2\ldots N \end{align} \end{subequations} which is still non-convex due to the complicated non-concave OBF in (\ref{converted problem 1}a) and the non-convex modulus constraint in (\ref{converted problem 1}b). Here, from $\bm{\theta}^*\bm{\theta}^T$ in (\ref{eq2-26}), it can be realized that the diagonal entries in $\mathbf{X}$ embody the moduli of the elements in $\bm{\theta}$. Thus, we define a simple matrix $\mathbf{E}_n$, whose $(i,j)$-th element is given by \begin{equation}\label{eq2-28} \left[\mathbf{E}_n\right]_{(i,j)}=\left\{\begin{matrix}1,\ \ \ \ \ i=j=n\\0,\ \ \ \ otherwise\\\end{matrix}\right. \end{equation} As a result, the optimization problem is converted into \begin{subequations}\label{converted problem 2} \begin{align} (\mathrm{P3}):\ \mathop{\max}\limits_{\mathbf{X}\succeq\mathbf{0}}&\frac{P\left[tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2\right]}{P(\kappa_t+\kappa_r)\left[tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2\right]+\sigma_w^2} \\s.t.\ &tr\left(\mathbf{E}_n\mathbf{X}\right)=\alpha^2,\ n=1,2\ldots N \\&tr\left(\mathbf{E}_{N+1}\mathbf{X}\right)=1 \\&rank(\mathbf{X})=1 \end{align} \end{subequations} where the constraint in (\ref{converted problem 2}b) is transformed from the modulus constraint of $\left|\left[\bm{\theta}\right]_n\right|^2=\left[\bm{\theta}^*\bm{\theta}^T\right]_{(n,n)}=\mathbf{a}^H\mathbf{E}_n\mathbf{a}=tr\left(\mathbf{E}_n\mathbf{X}\right)=\alpha^2$, for $n=1,2\ldots N$; the constraint in (\ref{converted problem 2}c) is transformed from $\left[\mathbf{X}\right]_{(N+1,N+1)}=1$; the constraint in (\ref{converted problem 2}d) is responsible for strictly
guaranteeing that 1) the resolved $\mathbf{X}$ can be decomposed into $\mathbf{X}=\mathbf{a}\mathbf{a}^H$, and 2) the solution of the phase shift in $\bm{\theta}$ in the resolved $\mathbf{X}$ is equivalent to the solution of the phase shift in $\mathbf{\Phi}$ in (P1). Nevertheless, (P3) is still non-convex and difficult to solve, so the problem needs to be transformed further. By means of the Charnes-Cooper transformation \cite{L.Liu2014(TSP), Charnes1962}}, let $\mathbf{Y}$ and $\mu$ be defined by $\mathbf{Y}=\mu\mathbf{X}$ and $\mu=\frac{1}{tr(\mathbf{\Xi}\mathbf{X})+||h_{SU}||_2^2+\frac{\sigma_w^2}{P\left(\kappa_t+\kappa_r\right)}}$. Then, the OBF in (\ref{converted problem 2}a) is expressed as $\frac{1}{\left(\kappa_t+\kappa_r\right)}\times\left[tr(\mathbf{\Xi}\mathbf{Y})+\mu||h_{SU}||_2^2\right]$. Therefore, (P3) is transformed into \begin{subequations}\label{converted problem 3} \begin{align} (\mathrm{P4}):\ \mathop{\max}\limits_{\mathbf{Y}\succeq\mathbf{0},\mu\geq 0}&\frac{1}{\left(\kappa_t+\kappa_r\right)}\times\left[tr(\mathbf{\Xi}\mathbf{Y})+\mu||h_{SU}||_2^2\right] \\s.t.\ &tr\left(\mathbf{E}_n\mathbf{Y}\right)=\mu\alpha^2,\ n=1,2\ldots N \\&tr\left(\mathbf{E}_{N+1}\mathbf{Y}\right)=\mu \\&tr\left(\mathbf{\Xi Y}\right)+\mu\left[||h_{SU}||_2^2+\frac{\sigma_w^2}{P\left(\kappa_t+\kappa_r\right)}\right]=1 \\&rank(\mathbf{Y})=1 \end{align} \end{subequations} {\color{black} Although (P4) is still non-convex due to the non-convex rank constraint in (\ref{converted problem 3}e), it can be relaxed by omitting the constraint $rank(\mathbf{Y})=1$. 
Hence, the relaxed problem is formulated as} \begin{subequations}\label{eq2-43} \begin{align} (\mathrm{P5}):\ \mathop{\max}\limits_{\mathbf{Y}\succeq\mathbf{0},\mu\geq 0}&\frac{1}{\left(\kappa_t+\kappa_r\right)}\times\left[tr(\mathbf{\Xi}\mathbf{Y})+\mu||h_{SU}||_2^2\right] \\s.t.\ &tr\left(\mathbf{E}_n\mathbf{Y}\right)=\mu\alpha^2,\ n=1,2\ldots N \\&tr\left(\mathbf{E}_{N+1}\mathbf{Y}\right)=\mu \\&tr\left(\mathbf{\Xi Y}\right)+\mu\left[||h_{SU}||_2^2+\frac{\sigma_w^2}{P\left(\kappa_t+\kappa_r\right)}\right]=1 \end{align} \end{subequations} which is now an SDP problem and can be solved by existing techniques \cite{A.Nemirovski2008(ActaNumerica)}. {\color{black} However, the matrix $\mathbf{\Xi}$ involves the stochastic phase errors, whose realizations are generally unknown in practice, which prevents us from predetermining $\mathbf{\Xi}$ and obtaining the solution. In view of this issue, we will further calculate the expectation of $\mathbf{\Xi}$, denoted by $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$, in order to facilitate the optimization procedure and achieve a statistical average optimization result.} {\color{black}\subsection{Expectation of $\mathbf{\Xi}$}} According to (\ref{eq2-24}), $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ can be written as \begin{equation}\label{eq2-29} \mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]= \mathbb{E}_{\mathbf{v}_E}\left[\mathbf{\Xi}\right]= \left(\begin{matrix}\mathbf{D}_{IU}\mathbf{D}_{SI}\mathbb{E}_{\mathbf{v}_E}\!\!\left[\mathbf{v}_E\mathbf{v}_E^H\right]\mathbf{D}_{SI}^H\mathbf{D}_{IU}^H&h_{SU}^*\mathbf{D}_{IU}\mathbf{D}_{SI}\mathbb{E}_{\mathbf{v}_E}\!\!\left[\mathbf{v}_E\right]\\\mathbb{E}_{\mathbf{v}_E}\!\!\left[\mathbf{v}_E^H\right]\mathbf{D}_{SI}^H\mathbf{D}_{IU}^Hh_{SU}&0\\\end{matrix}\right) \end{equation} where $\mathbf{D}_{SI}=diag(\mathbf{h}_{SI})$ and $\mathbf{v}_E=\left(e^{j\theta_{E1}},e^{j\theta_{E2}},\ldots,e^{j\theta_{EN}}\right)^T$. 
$\mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\mathbf{v}_E^H\right]$, which is expressed as \begin{equation}\label{eq2-30} \begin{split} \mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\mathbf{v}_E^H\right]= \left(\begin{matrix}\begin{matrix}1&\mathbb{E}_{\delta_{\theta}}\!\!\left[e^{j\theta_{E1}-j\theta_{E2}}\right]\\\mathbb{E}_{\delta_{\theta}}\!\!\left[e^{j\theta_{E2}-j\theta_{E1}}\right]&1\\\end{matrix}&\begin{matrix}\cdots&\mathbb{E}_{\delta_{\theta}}\!\!\left[e^{j\theta_{E1}-j\theta_{EN}}\right]\\\cdots&\mathbb{E}_{\delta_{\theta}}\!\!\left[e^{j\theta_{E2}-j\theta_{EN}}\right]\\\end{matrix}\\\begin{matrix}\vdots&\vdots\\\mathbb{E}_{\delta_{\theta}}\!\!\left[e^{j\theta_{EN}-j\theta_{E1}}\right]&\mathbb{E}_{\delta_{\theta}}\!\!\left[e^{j\theta_{EN}-j\theta_{E2}}\right]\\\end{matrix}&\begin{matrix}\ddots&\ \ \ \ \ \ \ \ \ \vdots\ \ \ \ \ \ \ \ \ \\\cdots&\ \ \ 1\ \ \ \\\end{matrix}\\\end{matrix}\right) \end{split} \end{equation} represents the autocorrelation matrix of $\mathbf{v}_E$, where $\delta_{\theta}=\theta_{Ei}-\theta_{Ej}$ obeys triangular distribution on $[-\pi,\pi]$ as $\theta_{Ei}$ obeys uniform distribution on $\left[-\pi/2,\pi/2\right]$ (detailed in Appendix A). {\color{black} Because $\mathbb{E}_{\delta_{\theta}}\left[e^{j\theta_{Ei}-j\theta_{Ej}}\right]=\mathbb{E}_{\delta_{\theta}}\left[e^{j\delta_{\theta}}\right]=\int_{-\pi}^{\pi}{f\left(\delta_{\theta}\right)e^{j\delta_{\theta}}d\delta_{\theta}}=4/\pi^2$, where $f\left(\delta_{\theta}\right)$, expressed as (\ref{eq-A6}) in Appendix A, is the probability density function of $\delta_{\theta}$, we have $\mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\mathbf{v}_E^H\right]=\mathbf{I}_N+\mathbf{J}$, where the $(i,j)$-th element in the matrix $\mathbf{J}$ is expressed as \begin{equation} \left[\mathbf{J}\right]_{(i,j)}=\left\{ \begin{matrix} 0,\ \ \ i=j\\ \frac{4}{\pi^2},\ \ i\neq j \end{matrix} \right. 
\end{equation} Moreover}, because $\mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\right] =\left(\mathbb{E}_{\theta_{Ei}}\!\!\left[e^{j\theta_{E1}}\right],\mathbb{E}_{\theta_{Ei}}\!\!\left[e^{j\theta_{E2}}\right],\ldots,\mathbb{E}_{\theta_{Ei}}\!\!\left[e^{j\theta_{EN}}\right]\right)^T$ and $\mathbb{E}_{\theta_{Ei}}\left[e^{j\theta_{Ei}}\right] =\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}{f(\theta_{Ei})e^{j\theta_{Ei}}d\theta_{Ei}} =\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}{f(\theta_{Ei})(\cos\theta_{Ei}+j\sin\theta_{Ei})d\theta_{Ei}} =2/\pi$ for $i=1,2,...,N$, where $f\left(\theta_{Ei}\right)=1/\pi$ is the probability density function of $\theta_{Ei}$, we have $\mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\right]=\left(2/\pi\right)\mathbf{1}^T$. By substituting {\color{black}$\mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\mathbf{v}_E^H\right]=\mathbf{I}_N+\mathbf{J}$} and $\mathbb{E}_{\mathbf{v}_E}\left[\mathbf{v}_E\right]=\left(2/\pi\right)\mathbf{1}^T$ into (\ref{eq2-29}), we have {\color{black}\begin{equation}\label{eq2-35} \mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]=\left(\begin{matrix}\mathbf{D}_{IU}\mathbf{D}_{SI}(\mathbf{I}_N+\mathbf{J})\mathbf{D}_{SI}^H\mathbf{D}_{IU}^H&\frac{2}{\pi}h_{SU}^*\mathbf{D}_{IU}\mathbf{D}_{SI}\mathbf{1}^T\\\frac{2}{\pi}\mathbf{1}\mathbf{D}_{SI}^H\mathbf{D}_{IU}^Hh_{SU}&0\\\end{matrix}\right) \end{equation} Consequently}, by replacing $\mathbf{\Xi}$ in (P5) with $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$, we obtain \begin{subequations}\label{eq2-44} \begin{align} (\mathrm{P6}):\ \mathop{\max}\limits_{\mathbf{Y}\succeq\mathbf{0},\widetilde{\mu}\geq 0}&\frac{1}{\left(\kappa_t+\kappa_r\right)}\times\left[tr(\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]\mathbf{Y})+\widetilde{\mu}||h_{SU}||_2^2\right] \\s.t.\ &tr\left(\mathbf{E}_n\mathbf{Y}\right)=\widetilde{\mu}\alpha^2,\ n=1,2\ldots N \\&tr\left(\mathbf{E}_{N+1}\mathbf{Y}\right)=\widetilde{\mu} 
\\&tr\left(\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]\mathbf{Y}\right)+\widetilde{\mu}\left[||h_{SU}||_2^2+\frac{\sigma_w^2}{P\left(\kappa_t+\kappa_r\right)}\right]=1 \end{align} \end{subequations} where $\widetilde{\mu}=\frac{1}{tr(\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]\mathbf{X})+||h_{SU}||_2^2+\frac{\sigma_w^2}{P\left(\kappa_t+\kappa_r\right)}}$. {\color{black}Now, because the matrix $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ in the OBF and constraints involves only the channel coefficients, which can be estimated via existing channel estimation techniques, $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ can be readily configured, which allows us to complete the phase shift optimization that maximizes the average SNR in the presence of HWI.} It is worth noting that after (P6) is solved, the $\bm{\theta}^T$ in the $(N+1)$-th row of $\mathbf{X}=\widetilde{\mu}^{-1}\mathbf{Y}$ in the solution can be extracted to reconstruct $\mathbf{Y}$ based on (\ref{eq2-26}) and $\mathbf{Y}=\widetilde{\mu}\mathbf{X}$. If the reconstructed $\mathbf{Y}$, denoted by $\mathbf{Y}_r$, satisfies $\mathbf{Y}_r=\mathbf{Y}$ and $rank(\mathbf{Y}_r)=1$, the $\bm{\theta}^T$ can be regarded as the optimal phase shift vector. As a result, {\color{black}we will test the values in $\mathbf{Y}_r$ and the rank of $\mathbf{Y}_r$ in the simulations in Section VI, in order to investigate whether} the optimal IRS phase shifts can be acquired from $\bm{\theta}^T$ in the $(N+1)$-th row of the $\mathbf{X}=\widetilde{\mu}^{-1}\mathbf{Y}$ in the solution of the relaxed problem. {\color{black} In addition, after the phase shift optimization process, two possible factors may still remain and influence the performance. 1) Most channel estimation methods suffer from estimation errors, which lead to imperfect CSI of $\mathbf{h}_{IU}$, $\mathbf{h}_{SI}$ and $h_{SU}$. 
Based on the imperfect CSI, we can only construct an inaccurate $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$, and then acquire a non-optimal phase shift vector. 2) Due to the inherent hardware imperfection, synchronization offset and estimation accuracy limit in the real world, the optimized phase values may not be precisely obtained. In this case, we may actually obtain $\widetilde{\bm{\theta}}^T=\bm{\theta}^T\odot\bm{\theta}_p^T$ instead of $\bm{\theta}^T$ after the optimization, where $\bm{\theta}_p=(e^{j\theta_{p1}},e^{j\theta_{p2}},...,e^{j\theta_{pN}})^T$ denotes a residual phase noise vector with $\theta_{pi}$ being the $i$-th random residual phase noise, which may disturb $\bm{\theta}^T$ and affect the optimization performance. The performance degradation caused by these two factors is worth discussing. Thus, we will discuss them in Section VI.} $\ $ \section{Comparisons with DF Relay} The DF relay is a conventional active {\color{black}approach} that is also applied to enhance data transmission in wireless communication networks. Hence, it is {\color{black}necessary} to compare the performance of the IRS with that of the DF relay in the same situation. It was already confirmed that the ideal-hardware IRS equipped with a large number of reflecting units could help the wireless communication system provide higher ACR than the ideal-hardware DF relay equipped with one antenna \cite{E.Bjornson2020(WCL)}. {\color{black} However, the comparisons in \cite{E.Bjornson2020(WCL)} were made between a single-antenna DF relay and a multiple-unit IRS without HWI. Note that as $N$ grows, the average ACR of the multiple-unit IRS-aided communication system increases, while that of the single-antenna DF relay assisted communication system remains constant under a certain condition. This might be unfair to the DF relay in the comparisons. 
Therefore, in this section, we will compare the performances of the IRS with $N$ passive reflecting units and the DF relay with $N$ active antennas,} to explore whether the IRS can still possess advantages in ACR {\color{black}and utility} over the {\color{black}multiple-antenna DF relay} in the presence of HWI. {\color{black} Before the comparisons, we note that determining the exact closed-form ACR of the multiple-antenna DF relay assisted communication system with respect to $N$ in the presence of HWI is challenging,} as the channel coefficients include random phase shifts, which cannot be compensated by the DF relay. {\color{black} Fortunately, we realize that the source-to-relay and the relay-to-destination channels can similarly be regarded as the uplink and downlink channels modelled in \cite{E.Bjornson2014(TIT)}, which assists us in establishing the \textit{closed-form upper bound} of the ACR in relation to $N$ for the multiple-antenna DF relay supported communication system. } Let {\color{black}$\mathbf{h}_{SR}$, $\mathbf{h}_{RU}$} and $h_{SU}$ denote the source-to-relay, relay-to-destination and source-to-destination channels, respectively. For a fair comparison, we assume that {\color{black}$\mathbf{h}_{SR}=\mathbf{h}_{SI}$, $\mathbf{h}_{RU}=\mathbf{h}_{IU}$}, and the receiver noises at the DF relay and the destination terminal have the same variance of $\sigma_w^2$. If the HWI appears at the source transmitter, the DF relay and the destination receiver, {\color{black}according to Eq. (6) and Eq. 
(2) in \cite{E.Bjornson2014(TIT)},} the signals received by the DF relay and the destination terminal are modelled as {\color{black}\begin{equation}\label{eq2-47} \mathbf{y}_{DF}(t)=\mathbf{h}_{SR}\left[\sqrt{P_1}s(t)+\eta_t(t)\right]+\bm{\eta}_{r_{DF}}(t)+\mathbf{w}_{DF}(t) \end{equation} and \begin{equation}\label{eq2-48} y_{U1}(t)=\mathbf{h}_{RU}^T\left[\sqrt{P_2}\mathbf{s}(t)+\bm{\eta}_{t_{DF}}(t)\right]+\eta_{r1}(t)+w(t) \end{equation} \begin{equation}\label{eq2-49} y_{U2}(t)=h_{SU}\left[\sqrt{P_1}s(t)+\eta_t(t)\right]+\eta_{r2}(t)+w(t) \end{equation} where} $P_1$ and $P_2$ are the transmitting powers of the source and the DF relay under the constraint of $P=\frac{P_1+P_2}{2}$ \cite{E.Bjornson2020(WCL)}; $y_{U1}(t)$ and $y_{U2}(t)$ are the signals received by the destination terminal through channel {\color{black}$\mathbf{h}_{RU}$} and $h_{SU}$, respectively; $\mathbf{w}_{DF}(t)\sim\mathcal{CN}(\mathbf{0},\sigma_w^2\mathbf{I})$ and $w(t)\sim\mathcal{CN}(0,\sigma_w^2)$ are the receiver noises at the DF relay and the destination terminal; {\color{black} $\eta_t(t)\sim\mathcal{CN}(0,\Upsilon_t)$, $\bm{\eta}_{r_{DF}}(t)\sim\mathcal{CN}(\mathbf{0},\mathbf{V}_{r_{DF}})$, $\bm{\eta}_{t_{DF}}(t)\sim\mathcal{CN}(\mathbf{0},\mathbf{\Upsilon}_{t_{DF}})$, $\eta_{r1}(t)\sim\mathcal{CN}(0,V_{r1})$ and $\eta_{r2}(t)\sim\mathcal{CN}(0,V_{r2})$} are the distortion noises at the source transmitter, the DF-relay receiver, the DF-relay transmitter and the destination receiver, respectively, with {\color{black} $\Upsilon_t$, $\mathbf{V}_{r_{DF}}$, $\mathbf{\Upsilon}_{t_{DF}}$, $V_{r1}$ and $V_{r2}$ given by \begin{equation} \Upsilon_t=\kappa_tP_1\mathbb{E}[s(t)s^*(t)] \end{equation} \begin{equation} \mathbf{V}_{r_{DF}}=\kappa_{r_{DF}}P_1\mathbb{E}[s(t)s^*(t)]\times diag(|h_{SR,1}|^2,...,|h_{SR,N}|^2) \end{equation} \begin{equation} \mathbf{\Upsilon}_{t_{DF}}=\kappa_{t_{DF}}P_2\times diag\{\mathbb{E}[s_1(t)s_1^*(t)],...,\mathbb{E}[s_N(t)s_N^*(t)]\} \end{equation} \begin{equation} 
V_{r1}=\kappa_{r1}P_2\mathbf{h}_{RU}^T\mathbb{E}[\mathbf{s}(t)\mathbf{s}^H(t)]\mathbf{h}_{RU}^* \end{equation} \begin{equation} V_{r2}=\kappa_{r2}P_1 |h_{SU}|^2\mathbb{E}[s(t)s^*(t)] \end{equation} where $\mathbf{s}(t)$ denotes the signal transmitted by the DF relay at time $t$; $s_i(t)$, for $i=1,2,...,N$, represents the $i$-th transmit symbol in $\mathbf{s}(t)$, with $\mathbb{E}[s_i(t)s_i^*(t)]=1$;} $\kappa_t$, $\kappa_{r_{DF}}$, $\kappa_{t_{DF}}$, $\kappa_{r1}$ and $\kappa_{r2}$ are the proportionality factors. For simple {\color{black}analysis}, we consider that $\kappa_{t_{DF}}=\kappa_t$ and $\kappa_{r1}=\kappa_{r2}=\kappa_{r_{DF}}=\kappa_r$, as the hardware characteristics of the transceivers in the DF relay are similar to those in the source equipment and the destination terminal. Therefore, {\color{black}referring to Eq. (26), Eq. (27) in \cite{E.Bjornson2014(TIT)}, and Eq. (15) in \cite{J.N.Laneman2004(TIT)},} the {\color{black}upper bound} of the ACR of the multiple-antenna DF relay assisted communication system with HWI is expressed as \begin{equation}\label{eq2-55} R_{HWI}^{DF}(N)= \frac{1}{2}\min{\left\{\mathfrak{A}(N),\mathfrak{B}(N)\right\}} \end{equation} where {\color{black}\begin{equation}\label{eq-A} \mathfrak{A}(N)=\log_2\left(1+\frac{N\mu_{SI}}{\kappa_r\mu_{SI}+N\kappa_t\mu_{SI}+\frac{\sigma_w^2}{P_1} }\right) \end{equation} \begin{equation}\label{eq-B} \mathfrak{B}(N)=\log_2\left(1+\frac{\mu_{SU}}{(\kappa_t+\kappa_r)\mu_{SU}+\frac{\sigma_w^2}{P_1}}+ \frac{N\mu_{IU}}{\kappa_t\mu_{IU}+N\kappa_r\mu_{IU}+\frac{\sigma_w^2}{P_2} }\right) \end{equation} Correspondingly, the utility of the multiple-antenna DF relay is expressed as \begin{equation}\label{eq-DF Utility} \gamma_{HWI}^{DF}(N)= \left\{\begin{matrix} \frac{\kappa_r\mu_{SI}^2+\frac{\sigma_w^2}{P_1}\mu_{SI}} {2\left(\kappa_r\mu_{SI}+N\kappa_t\mu_{SI}+\frac{\sigma_w^2}{P_1}\right)\left(\kappa_r\mu_{SI}+N\kappa_t\mu_{SI}+\frac{\sigma_w^2}{P_1}+N\mu_{SI}\right)\ln2}, \ \ 
\mathfrak{A}(N)<\mathfrak{B}(N) \\ \frac{\kappa_t\mu_{IU}^2+\frac{\sigma_w^2}{P_2}\mu_{IU}} {2\left(1+\frac{\mu_{SU}}{(\kappa_t+\kappa_r)\mu_{SU}+\frac{\sigma_w^2}{P_1}}+ \frac{N\mu_{IU}}{\kappa_t\mu_{IU}+N\kappa_r\mu_{IU}+\frac{\sigma_w^2}{P_2}}\right) \left(\kappa_t\mu_{IU}+N\kappa_r\mu_{IU}+\frac{\sigma_w^2}{P_2}\right)^2 \ln2}, \ \ \mathfrak{A}(N)>\mathfrak{B}(N) \end{matrix}\right. \end{equation} For analytical convenience and unified notation, we assume that $\kappa_t+\kappa_r=\kappa$ with $\kappa_t=\kappa_r=\frac{1}{2}\kappa$ \cite{E.Bjornson2014(TIT)}, and} the total transmitting power of the DF relay assisted communication system ($P_1+P_2=2P$) is allocated by $P_1=P_2=P$. {\color{black} Subsequently, to investigate whether the IRS is capable of outperforming the DF relay in the presence of HWI, we will compare $R_{HWI}^{DF}(N)$ in (\ref{eq2-55}) with $\overline{R_{HWI}}(N)$ in (\ref{eq2-17}) from the perspective of the scaling law, by considering $N\rightarrow\infty$ in the following \textbf{Lemma 2} and $P\rightarrow\infty$ in \textbf{Lemma 3}. } {\color{black} \begin{lemma} When $N\rightarrow\infty$, we have \begin{equation}\label{N_inf_ACR_Compare} \left.\overline{R_{HWI}}(N)\right|_{N\rightarrow\infty}>\left.R_{HWI}^{DF}(N)\right|_{N\rightarrow\infty} \end{equation} \begin{equation}\label{N_inf_Utility_Compare} \left.\gamma_{HWI}(N)\right|_{N\rightarrow\infty}=\left.\gamma_{HWI}^{DF}(N)\right|_{N\rightarrow\infty}=0 \end{equation} \end{lemma} \begin{proof} The proof is given in Appendix B. 
\end{proof} \begin{lemma} When $P\rightarrow\infty$, we have \begin{equation}\label{P_inf_ACR_Compare} \left.\overline{R_{HWI}}(N)\right|_{P\rightarrow\infty}>\left.R_{HWI}^{DF}(N)\right|_{P\rightarrow\infty} \end{equation} \begin{equation}\label{P_inf_Utility_Compare_IRS} \left.\gamma_{HWI}(N)\right|_{P\rightarrow\infty}=0 \end{equation} \begin{equation}\label{P_inf_Utility_Compare_DF} \left.\gamma_{HWI}^{DF}(N)\right|_{P\rightarrow\infty} =\frac{\kappa}{(\kappa+N\kappa+2N)(\kappa+N\kappa)\ln 2} \end{equation} \end{lemma} \begin{proof} The proof is given in Appendix C. \end{proof} \textbf{Lemma 2} and \textbf{Lemma 3} demonstrate that: 1) when $N$ becomes large enough, or when $P$ is sufficiently high, the IRS can surpass the conventional multiple-antenna DF relay in terms of the ACR performance in the presence of HWI. 2) If $N\rightarrow\infty$, the utilities of both the IRS and the multiple-antenna DF relay approach zero, implying that adding one more reflecting element to the IRS or one more antenna to the DF relay hardly improves the ACR. 3) When $P$ tends to infinity, the utility of the IRS is close to zero, while that of the multiple-antenna DF relay converges to a positive value, which indicates that adding one more antenna to the DF relay can still improve the ACR. This is because when $P\rightarrow\infty$ the line-of-sight (LoS) link becomes infinitely strong, which renders the passive IRS almost useless, while the DF relay is active and consumes power when retransmitting the wireless signals, so that each active antenna can always possess positive transmitting power and contribute to the data transmission enhancement. On this point, the multiple-antenna DF relay is more advantageous. 
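To make the contrast between \textbf{Lemma 2} and \textbf{Lemma 3} concrete, the $P\rightarrow\infty$ DF utility in (\ref{P_inf_Utility_Compare_DF}) can be evaluated directly; the short pure-Python sketch below (the value $\kappa=2\times0.05^2$ is only an illustrative choice borrowed from the later simulations) shows a utility that stays positive for every finite $N$ yet shrinks toward zero as $N$ grows.

```python
import math

def df_utility_P_inf(kappa, N):
    # Lemma 3, P -> infinity: utility of the N-antenna DF relay,
    # gamma = kappa / ((kappa + N*kappa + 2N) * (kappa + N*kappa) * ln 2)
    return kappa / ((kappa + N * kappa + 2 * N) * (kappa + N * kappa) * math.log(2))

kappa = 2 * 0.05 ** 2          # kappa_t = kappa_r = 0.05^2 (illustrative)
vals = [df_utility_P_inf(kappa, N) for N in (1, 10, 100, 1000)]
# positive for every finite N (one more antenna still helps),
# but decaying roughly as 1/N^2, i.e. toward the zero limit of Lemma 2
```

The monotone decay mirrors the discussion above: each extra DF antenna keeps a positive but diminishing payoff even at unbounded transmitting power.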
} Moreover, it can be predicted that the IRS may always outperform the multiple-antenna DF relay when the level of the transceiver HWI is high, because the HWI at the IRS is modelled as a phase error matrix which does not contain $\kappa_t$ or $\kappa_r$, while the HWI at the DF relay involves the distortion noises which contain the two terms. The DF relay may perform worse with higher $\kappa_t+\kappa_r$ while the IRS may maintain the performance due to the fixed uniform distribution of the phase errors. {\color{black}Therefore, in the following \textbf{Lemma 4}, we also derive the interval of $\kappa_t+\kappa_r$ in which the IRS always surpasses the DF relay for all $N>0$. } \begin{lemma} The IRS will always outperform the {\color{black}multiple-antenna} DF relay for all $N>0$, when $\kappa_t+\kappa_r$ satisfies {\color{black}\begin{equation}\label{eq2-69} \kappa_t+\kappa_r>2\sigma_w^4 \left[P^2(\beta+\lambda+\mu_{SU})^2-2\sigma_w^2P(\beta+\lambda+\mu_{SU})\right]^{-1} =\kappa_{th} \end{equation} where $\beta$ and $\lambda$ have been defined in (\ref{eq-revision-1}) and (\ref{eq-revision-2}), respectively.} \end{lemma} \begin{proof} The proof is given in Appendix D. \end{proof} \textbf{Lemma 4} demonstrates that whether the IRS can always outperform the DF relay for all $N>0$ is determined by the threshold $\kappa_{th}$ in (\ref{eq2-69}), {\color{black}which is mainly} decided by $\mu_{SI}$, $\mu_{IU}$, $\mu_{SU}$, $P$ and $\sigma_w^2$. {\color{black} If $P\rightarrow\infty$, we have $\kappa_{th}\rightarrow 0$, which makes (\ref{eq2-69}) always hold and makes the IRS perform better for all $N>0$ and $\kappa_t+\kappa_r>0$ in terms of the ACR. 
The outcome is consistent with (\ref{P_inf_ACR_Compare}) in \textbf{Lemma 3}.} \section{Simulation Results} \subsection{System Setup and Parameter Setting} This section numerically illustrates the results of the ACR {\color{black} and the IRS utility} with or without HWI, and compares the performances of the IRS and the DF relay. As shown in Figure \ref{Fig-coordinate}, a two-dimensional plane in meters is established to indicate the positions of the source, the IRS and the destination, \begin{figure}[!t] \includegraphics[width=3.6in]{Fig_Coordinate.jpg} \centering \caption{Communication system design in the simulations. Three dashed lines which indicate $d_{SI}$, $d_{IU}$ and $d_{SU}$ constitute a right triangle, where $d_{SU}=\sqrt{d_{SI}^2+d_{IU}^2}$.} \label{Fig-coordinate} \end{figure} which are placed at $(0,15)$, $(50,15)$ and $(50,0)$. Neglecting the heights, the distances between the source and the IRS ($d_{SI}$), the IRS and the destination ($d_{IU}$) and the source and the destination ($d_{SU}$) are $d_{SI}=50$ m, $d_{IU}=15$ m and $d_{SU}=\sqrt{d_{SI}^2+d_{IU}^2}\approx52.2$ m, respectively. According to \cite{M.Cui2019(WCL),E.Bjornson2014(TIT)}, the other parameters are set {\color{black}in Table I. Based on Table I}, the power attenuation coefficients of channel $\mathbf{h}_{IU}$ (or $\mathbf{h}_{RU}$), $\mathbf{h}_{SI}$ (or $\mathbf{h}_{SR}$) and $h_{SU}$ are derived by $\sqrt{\mu_{IU}}=\sqrt{\zeta_0(d_0/d_{IU})^{\alpha_{IU}}}$, $\sqrt{\mu_{SI}}=\sqrt{\zeta_0(d_0/d_{SI})^{\alpha_{SI}}}$ and $\sqrt{\mu_{SU}}=\sqrt{\zeta_0(d_0/d_{SU})^{\alpha_{SU}}}$ \cite{M.Cui2019(WCL)}. 
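As a quick numerical sanity check of the geometry and path-loss model above, the sketch below (pure Python; the variable names are ours) evaluates the three attenuation coefficients from the Table I values.

```python
import math

zeta0 = 10 ** (-20 / 10)     # -20 dB path loss at the reference distance
d0 = 1.0                     # reference distance, in meters
alpha = 3                    # common path-loss exponent

d_SI, d_IU = 50.0, 15.0
d_SU = math.sqrt(d_SI ** 2 + d_IU ** 2)   # right-triangle layout, ~52.2 m

def attenuation(d):
    # power attenuation mu = zeta0 * (d0 / d)^alpha
    return zeta0 * (d0 / d) ** alpha

mu_SI, mu_IU, mu_SU = attenuation(d_SI), attenuation(d_IU), attenuation(d_SU)
# the short IRS-to-destination hop is the strongest link and the long
# direct path the weakest: mu_IU > mu_SI > mu_SU
```

This ordering explains why the cascaded link through the IRS can meaningfully supplement the weak direct source-to-destination channel.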
\begin{table} \renewcommand{\arraystretch}{1.3} \caption{{\color{black}Parameter configurations.}} \label{Table_Parameters} \centering {\color{black} \begin{small} \begin{tabular}{ccc} \hline Parameters & Definitions & Values\\ \hline Amplitude Reflection Coefficient & $\alpha$ & $1$\\ Signal Power & $P$ & $20$ dBm\\ Receiver Noise Power & $\sigma_w^2$ & $-80$ dBm\\ Path Loss & $\zeta_0$ & $-20$ dB\\ Reference Distance & $d_0$ & $1$ m\\ Path Loss Exponents & $\alpha_{IU}=\alpha_{SI}=\alpha_{SU}$ & $3$\\ Phase Shift in $\mathbf{h}_{IU}$ & $\varphi_{IU,i}$ & Random in $[0,2\pi]$\\ Phase Shift in $\mathbf{h}_{SI}$ & $\varphi_{SI,i}$ & Random in $[0,2\pi]$\\ Phase Shift in $h_{SU}$ & $\varphi_{SU}$ & $\frac{\pi}{4}$\\ Proportionality Coefficients of Distortion Noises & $\kappa_t=\kappa_r$ & $0.05^2$\\ Oscillator Quality & $\delta$ & $1.58\times 10^{-4}$\\ \hline \end{tabular} \end{small} } \end{table} During the comparisons with the DF relay, $d_{SI}$, $d_{IU}$ and $d_{SU}$ are also regarded as the distances between the source and the DF relay, the DF relay and the destination, and the source and the destination, respectively, which still adhere to $d_{SU}=\sqrt{d_{SI}^2+d_{IU}^2}$. The proportionality coefficients can be changed for diverse observations, but still satisfy $\kappa_t=\kappa_r$. {\color{black}\subsection{Numerical Illustrations for \textbf{Theorem 1} and \textbf{Lemma 1}}} {\color{black} To further discuss and validate the theoretical analysis in Section III}, we carry out the simulations via the following steps: \textit{B-Step 1}: We calculate $\overline{R_{HWI}}(N)$ in (\ref{eq2-17}) {\color{black}and $\gamma_{HWI}(N)$ in (\ref{Utility Expression})}, and record the results with HWI from {\color{black} $N=1$ to $N=5000$}. \textit{B-Step 2}: We calculate $R(N)$ in (\ref{eq2-11}) {\color{black}and $\gamma(N)$ in (\ref{Utility Expression 2})}, and record the results without HWI from {\color{black} $N=1$ to $N=5000$}. 
\textit{B-Step 3}: {\color{black}We calculate the rate gap $\delta_R(N)$ in (\ref{eq2-18}) and the utility gap $\delta_{\gamma}(N)$ in (\ref{Utility Degradation}), and record the results from $N=1$ to $N=5000$.} \textit{B-Step 4}: We calculate and record the numerical results of $R_{HWI}(N)$ in (\ref{eq2-16}) from {\color{black} $N=1$ to $N=5000$}. Due to the randomness of the phase errors generated by the IRS, the ACR is averaged over 1000 Monte Carlo trials {\color{black}every 500 points}. The average ACRs and {\color{black}IRS utilities} as functions of $N$ from $N=1$ to $N=5000$ are depicted in Figure \ref{Fig-For_IRS_ACR_and_Utility}. It is indicated that: 1) the experimental results fit well with the theoretical ones from $N=1$ to $N=5000$, which verifies the tightness of (\ref{eq2-17}). 2) The average ACR with HWI is lower and increases more slowly than that without HWI, and the rate gap {\color{black}widens} as $N$ grows. {\color{black}This phenomenon implies that as $N$ grows, the HWI accumulates and causes more severe ACR degradation. 3) When $N$ becomes sufficiently large, the ACR with HWI approaches $\log_2\left(1+\frac{1}{\kappa_t+\kappa_r}\right)=7.6511$, which verifies (\ref{R_HWI Upper Bound}). 4) The IRS utility with HWI is lower than that without HWI, which demonstrates that the HWI reduces the IRS utility as well. Besides, both the IRS utility and the utility gap decrease as $N$ grows, which reveals that the influence of the HWI on the IRS utility becomes weaker for larger $N$.} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=3.2in]{For_Average_ACR-eps-converted-to.pdf}} \label{For_Average_ACR} \subfloat[]{\includegraphics[width=3.2in]{For_Utility-eps-converted-to.pdf}} \label{For_Utility} \hfil \caption{{\color{black} Average ACRs and IRS utilities as functions of $N$ with or without HWI. 
(a) Average ACRs with respect to $N$; the curves marked with ``$\square$'', ``$\bigcirc$'', ``$\bigtriangledown$'' and ``$*$'' represent the results obtained in \textit{B-Step 1} to \textit{B-Step 4}, respectively. (b) IRS utilities with respect to $N$; the curves marked with ``$\square$'', ``$\bigcirc$'' and ``$\bigtriangledown$'' represent the results obtained in \textit{B-Step 1} to \textit{B-Step 3}, respectively.}} \label{Fig-For_IRS_ACR_and_Utility} \end{figure*} $\ $ {\color{black} \subsection{Phase Shift Optimization}} {\color{black}To gain insights into the phase shift optimization approach in Section IV, we carry out the simulations through the following steps: \textit{C-Step 1}: We solve (P6) by adopting the CVX toolbox with the SDPT3 solver, and obtain the maximum average SNR from the solution of the OBF in (\ref{eq2-44}a). Based on this solution, we calculate and record the ACRs at $N=1,13,25,37$. \textit{C-Step 2}: We solve (P6) and obtain the optimized matrix $\mathbf{Y}$ and variable $\widetilde{\mu}$. Next, we extract the $\bm{\theta}^T$ in the $(N+1)$-th row of $\mathbf{X}=\widetilde{\mu}^{-1}\mathbf{Y}$. Then, we utilize $\bm{\theta}^T$ to reconstruct $\mathbf{X}$ according to (\ref{eq2-26}) and $\mathbf{Y}$ according to $\mathbf{Y}=\widetilde{\mu}\mathbf{X}$, and denote the reconstructed $\mathbf{X}$ and $\mathbf{Y}$ by $\mathbf{X}_r$ and $\mathbf{Y}_r$, respectively. Finally, we substitute $\mathbf{Y}_r$ into the OBF in (\ref{eq2-44}a) and obtain the average SNR, based on which we calculate and record the ACRs at $N=1,13,25,37$. \textit{C-Step 3}: Based on the extracted $\bm{\theta}^T$, we obtain the optimized IRS phase shift matrix $\mathbf{\Phi}$ according to $\mathbf{\Phi}=diag(\bm{\theta}^T)$. Then, we substitute $\mathbf{\Phi}$ into (\ref{original ACR with HWI}) and obtain the ACRs with HWI, which are averaged over 1000 Monte Carlo trials at $N=1,13,25,37$. 
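The reconstruction test in \textit{C-Step 2} rests on a structural property of $\mathbf{X}=\mathbf{a}\mathbf{a}^H$ in (\ref{eq2-26}): the $(N+1)$-th row stores $\bm{\theta}^T$, so rebuilding the outer product from that row must return the same rank-one matrix. A minimal pure-Python sketch (the phase values are arbitrary placeholders, not solver output):

```python
import cmath

def outer(a):
    # X = a a^H for a complex vector a (given as a list)
    return [[x * y.conjugate() for y in a] for x in a]

# illustrative phase shifts (placeholders, not an actual solver output)
theta = [cmath.exp(1j * t) for t in (0.3, -1.1, 2.0)]
a = [t.conjugate() for t in theta] + [1 + 0j]       # a = (theta^T, 1)^H
X = outer(a)

# read theta^T back from the (N+1)-th row of X and rebuild the matrix
theta_back = X[-1][:-1]
X_r = outer([t.conjugate() for t in theta_back] + [1 + 0j])

n = len(X)
same = all(abs(X[i][j] - X_r[i][j]) < 1e-12 for i in range(n) for j in range(n))
# rank(X_r) = 1  <=>  every 2x2 minor of X_r vanishes
rank_one = all(abs(X_r[i][j] * X_r[k][l] - X_r[i][l] * X_r[k][j]) < 1e-12
               for i in range(n) for j in range(n)
               for k in range(n) for l in range(n))
```

If the matrix returned by a solver passes the same two checks, its last row can be taken as the phase shift vector, which is exactly what \textit{C-Step 2} verifies numerically.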
\begin{figure}[!t] \includegraphics[width=3.2in]{For_Optimization-eps-converted-to.pdf} \centering \caption{{\color{black}Average ACRs as functions of $N$ with HWI. The curves marked with ``$\Diamond$'', ``+'' and ``$\bigcirc$'' represent the results obtained in \textit{C-Step 1} to \textit{C-Step 3}, respectively. The curves marked with ``$\square$'' and ``$*$'' are copied from Figure \ref{Fig-For_IRS_ACR_and_Utility} (a) for comparisons.}} \label{Fig-Optimization} \end{figure} The average ACRs as functions of $N$ with HWI are depicted in Figure \ref{Fig-Optimization}. Results in Figure \ref{Fig-Optimization} show that: 1) the curves obtained in \textit{C-Step 1} and \textit{C-Step 2} coincide, indicating that $\mathbf{Y}_r=\mathbf{Y}$. Moreover, we calculate the rank of $\mathbf{Y}_r$ and obtain $rank(\mathbf{Y}_r)=1$. Because $\mathbf{Y}_r$ is constructed by $\bm{\theta}^T$ in the $(N+1)$-th row of $\mathbf{X}=\widetilde{\mu}^{-1}\mathbf{Y}$ in the solution, $\bm{\theta}^T$ is verified to be the optimal IRS phase shift vector. 2) The curves obtained in \textit{C-Step 1} and \textit{C-Step 3} coincide, confirming that the mathematical derivations for $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ in (\ref{eq2-35}) are correct. 3) The average ACRs with the optimized IRS phase shifts exceed the average ACRs with $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, demonstrating that $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$ is not the optimal phase shift as it does not take $h_{SU}$ into account.} $\ $ {\color{black} \subsection{Discussions on Channel Estimation Errors and Residual Phase Noises}} {\color{black}Because most IRS-aided communication systems suffer from channel estimation errors, and the optimized IRS phase shifts may generally be affected by residual phase noises, as discussed at the end of Section IV, we will investigate the influence of these two factors on the optimization performance. 
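The moments that underlie both the averaged matrix $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ in (\ref{eq2-35}) and the uniform phase-noise model used below are $\mathbb{E}\left[e^{j\theta}\right]=2/\pi$ for $\theta$ uniform on $[-\pi/2,\pi/2]$ and $\mathbb{E}\left[e^{j\delta}\right]=4/\pi^2$ for the triangular difference $\delta$; a midpoint-rule check in pure Python (the grid size is arbitrary):

```python
import cmath
import math

def expect_phase(pdf, lo, hi, n=200_000):
    # midpoint-rule approximation of E[e^{j t}] for a phase with density pdf
    h = (hi - lo) / n
    return sum(pdf(lo + (k + 0.5) * h) * cmath.exp(1j * (lo + (k + 0.5) * h)) * h
               for k in range(n))

# theta_E uniform on [-pi/2, pi/2]  ->  E[e^{j theta}] = 2/pi
uni = expect_phase(lambda t: 1 / math.pi, -math.pi / 2, math.pi / 2)

# delta = theta_Ei - theta_Ej triangular on [-pi, pi]  ->  E[e^{j delta}] = 4/pi^2
tri = expect_phase(lambda t: (math.pi - abs(t)) / math.pi ** 2, -math.pi, math.pi)
```

Both imaginary parts vanish by symmetry, consistent with the real-valued factors $2/\pi$ and $4/\pi^2$ appearing in (\ref{eq2-35}).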
The channel estimation errors are modelled as additive complex variables according to Eq. (2) in \cite{J.Zhang2020(CL)}, which follow a zero-mean complex Gaussian distribution with variance $\sigma_w^2$. More detailed information about the CSI uncertainty models and simulation parameters can be found in \cite{J.Zhang2020(CL)}. The residual phase noises $\theta_{pi}$ in $\bm{\theta}_p$, for $i=1,2,...,N$, are also set to be uniformly distributed on $[-\pi/2,\pi/2]$. \begin{figure}[!t] \includegraphics[width=3.2in]{For_Optimization_Other_Source_HWI-eps-converted-to.pdf} \centering \caption{{\color{black} Influences of the channel estimation errors and residual phase noises on the optimization results. The ACRs are derived by substituting $\mathbf{\Phi}=diag(\bm{\theta}^T)$ or $\mathbf{\Phi}=diag(\bm{\theta}^T\odot\bm{\theta}_p^T)$ into (\ref{original ACR with HWI}) and are averaged over 1000 Monte Carlo trials. ``Imperfect CSI'' means that there are channel estimation errors, while ``Perfect CSI'' represents the opposite.}} \label{Fig-For_Optimization_Other_Source_HWI} \end{figure} In the simulations, to investigate the average ACR with channel estimation errors, we first adopt the CSI with errors to construct $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ and solve (P6), and then substitute $\mathbf{\Phi}=diag(\bm{\theta}^T)$ into (\ref{original ACR with HWI}) which contains the actual CSI. To investigate the average ACR with residual phase noises, we first solve (P6) and impose the influence of $\bm{\theta}_p$ on $\bm{\theta}^T$ by constructing $\bm{\theta}^T\odot\bm{\theta}_p^T$, and then substitute $\mathbf{\Phi}=diag(\bm{\theta}^T\odot\bm{\theta}_p^T)$ into (\ref{original ACR with HWI}). Figure \ref{Fig-For_Optimization_Other_Source_HWI} depicts the influences of the channel estimation errors and residual phase noises on the optimization results. 
It is demonstrated that: 1) both the channel estimation errors and the residual phase noises reduce the average ACR and degrade the optimization performance. 2) The residual phase noises impose a more serious negative impact on the performance than the channel estimation errors, indicating that inherent hardware imperfections, synchronization offsets and estimation accuracy limits in the real world are key factors that affect the optimization performance. } $\ $ \subsection{Comparisons with DF Relay} In order to {\color{black}validate} the theoretical {\color{black}analysis} in Section V, we {\color{black}will} numerically compare the ACRs {\color{black}and the utilities} for the IRS-aided and the conventional multiple-antenna DF relay assisted wireless communication systems in the presence of HWI. {\color{black} Following Section V, we will compare the performances by varying $N$ and $P$. $\ $ \textit{1) Comparisons by varying $N$}: \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=3.2in]{For_N_ACR_Comparison_DF-eps-converted-to.pdf}} \label{Fig-For_N_ACR_Comparison_DF} \subfloat[]{\includegraphics[width=3.2in]{For_N_Utility_Comparison_DF-eps-converted-to.pdf}} \label{Fig-For_N_Utility_Comparison_DF} \hfil \caption{{\color{black}Comparisons with DF relay by varying $N$. (a) Average ACRs as functions of $N$, with $P=20$ dBm and $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$. (b) Utilities as functions of $N$, with $P=20$ dBm and $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$.}} \label{Fig-C1} \end{figure*} First, considering the transmitting power to be fixed ($P=20$ dBm), we compare the average ACR in (\ref{eq2-17}) with that in (\ref{eq2-55}) by varying $N$, and observe the simulation results at $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$. Figure \ref{Fig-C1} (a) displays the average ACRs of the IRS-aided and the DF relay assisted wireless communication systems in relation to $N$, from $N=1$ to $N=5000$. 
It is indicated that: 1) the ACRs decrease when $\kappa_t$ and $\kappa_r$ grow, which verifies that more severe HWI is concomitant with more serious ACR reduction. 2) Although it might not be realistic for the IRS and the DF relay to be equipped with such a large number (e.g. 5000) of reflecting elements and antennas in practical implementations, the result validates (\ref{N_inf_ACR_Compare}) in \textbf{Lemma 2} and confirms the possibility that when $N$ is extremely large, the IRS is capable of outperforming the DF relay in terms of the ACR performance. It is worth noting that when $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$, the IRS always performs better for all $N\in[1,5000]$. This is because with the system parameters set in Section VI-A, the $\kappa_{th}$ in (\ref{eq2-69}) in \textbf{Lemma 4} is computed as $\kappa_{th}=4.0451\times10^{-6}$, which is smaller than $\kappa=\kappa_t+\kappa_r=0.05^2+0.05^2,\ 0.07^2+0.07^2,\ 0.09^2+0.09^2$. 3) As $N$ grows, the ACRs do not keep increasing appreciably. Instead, the ACRs of the IRS-aided communication system are approximately limited by 7.6511 bps/Hz at $\kappa_t=\kappa_r=0.05^2$, 6.6871 bps/Hz at $\kappa_t=\kappa_r=0.07^2$ and 5.9710 bps/Hz at $\kappa_t=\kappa_r=0.09^2$; while those of the DF relay assisted communication system are approximately limited by 4.3237 bps/Hz at $\kappa_t=\kappa_r=0.05^2$, 3.8400 bps/Hz at $\kappa_t=\kappa_r=0.07^2$ and 3.4798 bps/Hz at $\kappa_t=\kappa_r=0.09^2$. The values are consistent with those computed from $\left.\overline{R_{HWI}}\left(N\right)\right|_{N\rightarrow\infty}=\log_2\left(1+\frac{1}{\kappa_t+\kappa_r}\right)$ and $R_{HWI}^{DF}(N)|_{N\rightarrow\infty}=\frac{1}{2}\log_2\left(1+\frac{2}{\kappa}\right)$. 4) The ACRs of the DF relay assisted communication system increase rapidly when $N<100$, illustrating that the ACR performance of the DF relay can be significantly improved by increasing the number of antennas when $N$ is small. 
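These saturation values can be cross-checked arithmetically. The sketch below (plain Python, not part of the simulation code) evaluates the two closed-form $N\rightarrow\infty$ limits quoted above at the three impairment levels:

```python
import math

# Sanity check of the two N -> infinity ACR limits quoted above.
def irs_acr_limit(kappa_t, kappa_r):
    # IRS: log2(1 + 1/(kappa_t + kappa_r))
    return math.log2(1 + 1 / (kappa_t + kappa_r))

def df_acr_limit(kappa):
    # DF relay: (1/2) * log2(1 + 2/kappa), with kappa = kappa_t + kappa_r
    return 0.5 * math.log2(1 + 2 / kappa)

for k in (0.05, 0.07, 0.09):
    kt = kr = k ** 2
    print(f"kappa_t=kappa_r={k}^2: IRS {irs_acr_limit(kt, kr):.4f} bps/Hz, "
          f"DF {df_acr_limit(kt + kr):.4f} bps/Hz")
```

Running this reproduces the limiting values quoted in the text to the stated four-digit precision.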
Then, we compare the utility in (\ref{Utility Expression}) with that in (\ref{eq-DF Utility}) by varying $N$. Figure \ref{Fig-C1} (b) describes the utilities of the IRS and the DF relay in relation to $N$. The observation interval is narrowed to $N\in[1,16]$, for a clearer view of the details of the utility reduction of the DF relay. The results show that: 1) when $\kappa_t$ and $\kappa_r$ grow, the utilities decrease, which confirms that more severe HWI is concomitant with more serious utility degradation. 2) The IRS utility is lower than the DF-relay utility when $N$ is small, and both of them decrease to zero as $N$ grows. This is consistent with what is given in (\ref{N_inf_Utility_Compare}) in \textbf{Lemma 2}. $\ $ \textit{2) Comparisons by varying $P$}: \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=3.2in]{For_P_ACR_Comparison_DF-eps-converted-to.pdf}} \label{Fig-For_P_ACR_Comparison_DF} \subfloat[]{\includegraphics[width=3.2in]{For_P_Utility_Comparison_DF-eps-converted-to.pdf}} \label{Fig-For_P_Utility_Comparison_DF} \hfil \caption{{\color{black}Comparisons with DF relay by varying $P$. (a) Average ACRs as functions of $P$, with $N=256$ and $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$. (b) Utilities as functions of $P$, with $N=256$ and $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$.}} \label{Fig-C2} \end{figure*} First, considering the number of the reflecting elements or the DF-relay antennas to be fixed ($N=256$), we compare the average ACR in (\ref{eq2-17}) with that in (\ref{eq2-55}) by varying $P$, and observe the numerical results at $\kappa_t=\kappa_r=0.05^2,0.07^2,0.09^2$. Figure \ref{Fig-C2} (a) plots the average ACRs of the IRS-aided and the DF relay assisted wireless communication systems with respect to $P$, from $P=1$ dBm to $P=50$ dBm. It is indicated that: 1) when $P<2$ dBm and $N=256$, the IRS performs worse than the DF relay if $\kappa_t=\kappa_r=0.05^2,0.07^2$, but better if $\kappa_t=\kappa_r=0.09^2$. 
This phenomenon reveals that when $P$ is low, the transceiver HWI influences the performance of the DF relay more seriously than the performance of the IRS. 2) The ACRs of the IRS-aided communication system increase faster as $P$ rises, and exceed the ACRs of the DF relay assisted communication system when $P>5$ dBm. 3) The ACRs of both the IRS-aided and the DF relay assisted communication systems are bounded when $P$ is high. Specifically, the ACRs of the IRS-aided communication system are approximately bounded by 7.6511 bps/Hz at $\kappa_t=\kappa_r=0.05^2$, 6.6871 bps/Hz at $\kappa_t=\kappa_r=0.07^2$ and 5.9710 bps/Hz at $\kappa_t=\kappa_r=0.09^2$; while the ACRs of the DF relay assisted communication system are approximately bounded by 4.3209 bps/Hz at $\kappa_t=\kappa_r=0.05^2$, 3.8372 bps/Hz at $\kappa_t=\kappa_r=0.07^2$ and 3.4770 bps/Hz at $\kappa_t=\kappa_r=0.09^2$. The values coincide with what can be derived from $\overline{R_{HWI}}(N)|_{P\rightarrow\infty}=\log_2\left(1+\frac{1}{\kappa}\right)$ and $R_{HWI}^{DF}(N)|_{P\rightarrow\infty}=\frac{1}{2}\log_2\left(1+\frac{2N}{\kappa+\kappa N} \right)$. Then, we compare the utility in (\ref{Utility Expression}) with that in (\ref{eq-DF Utility}) by varying $P$. Figure \ref{Fig-C2} (b) depicts the utilities of the IRS and the DF relay with respect to $P$. The results demonstrate that: 1) when $P$ is relatively low, the utilities of the IRS outstrip those of the DF relay, but both of them decrease as $P$ grows. 2) The utilities of the IRS decrease to zero, while those of the DF relay converge to certain positive values (around $1.09\times10^{-5}$), which are consistent with those calculated from $\left.\gamma_{HWI}^{DF}(N)\right|_{P\rightarrow\infty}=\frac{\kappa}{(\kappa+N\kappa+2N)(\kappa+N\kappa)\ln 2}$. 
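These $P\rightarrow\infty$ bounds can likewise be recomputed directly from the closed forms quoted above (plain Python, with $\kappa=\kappa_t+\kappa_r$ and $N=256$ as in the figure):

```python
import math

# Sanity check of the P -> infinity limits quoted above (kappa = kappa_t + kappa_r).
def irs_acr_p_limit(kappa):
    # IRS ACR limit: log2(1 + 1/kappa), independent of N
    return math.log2(1 + 1 / kappa)

def df_acr_p_limit(kappa, N):
    # DF relay ACR limit: (1/2) * log2(1 + 2N/(kappa + kappa*N))
    return 0.5 * math.log2(1 + 2 * N / (kappa + kappa * N))

def df_utility_p_limit(kappa, N):
    # DF relay utility limit: kappa / ((kappa + N*kappa + 2N)(kappa + N*kappa) ln 2)
    return kappa / ((kappa + N * kappa + 2 * N) * (kappa + N * kappa) * math.log(2))

N = 256
for k in (0.05, 0.07, 0.09):
    kappa = 2 * k ** 2
    print(f"kappa={kappa:.4f}: IRS {irs_acr_p_limit(kappa):.4f} bps/Hz, "
          f"DF {df_acr_p_limit(kappa, N):.4f} bps/Hz, "
          f"DF utility {df_utility_p_limit(kappa, N):.3e}")
```

The printed DF utility limits all fall near $1.09\times10^{-5}$, matching the convergence values observed in Figure \ref{Fig-C2} (b).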
These results validate (\ref{P_inf_Utility_Compare_IRS}) and (\ref{P_inf_Utility_Compare_DF}) in \textbf{Lemma 3}, and reveal that in terms of the utility, although the IRS is preferable to the DF relay at a low system power, the DF relay becomes more advantageous if $P$ is significantly high. } {\color{black} \section{Conclusion and Future Works}} In this article, for the purpose of evaluating the performance of the IRS in consideration of {\color{black} the hardware non-ideality at both the IRS and the signal transceivers}, we first analyse the average ACR {\color{black}and the IRS utility} for the IRS-aided SISO communication system, then optimize the IRS phase shifts by converting the original non-convex problem into an SDP problem, {\color{black}subsequently investigate the impact of the channel estimation errors and the residual phase noises on the optimization performance}, and finally compare the IRS with the conventional {\color{black}multiple-antenna} DF relay {\color{black}in terms of the ACR and the utility} in the presence of HWI. The results illustrate that: 1) as the number of the reflecting units grows, the average ACR of the IRS-aided communication system increases, {\color{black}while the utility of the IRS decreases}. 2) The HWI degrades {\color{black}both the ACR and the utility, and it causes more severe ACR reduction when more reflecting elements are equipped.} 3) If the number of the reflecting units is large enough or {\color{black} the transmitting power is sufficiently high}, the IRS can surpass the conventional DF relay in terms of the ACR, {\color{black} although the DF relay is relatively more advantageous in terms of the utility.} Consequently, the IRS proves to remain an effective facility for data transmission enhancement in future wireless communication networks with imperfect hardware in the real world. 
{\color{black}Since in most practical circumstances the BS is equipped with multiple antennas and serves multiple users, it is meaningful to dissect the system performance in the IRS-aided multiple-user MISO communication scenario in the presence of HWI. In view of the complex-matrix form of the BS-IRS channel, deriving the closed-form average ACR as a function of the number of the reflecting elements is a challenging task and deserves more effort. In addition, since the typical amplify-and-forward (AF) relay is also widely utilized to assist wireless communication, an insightful theoretical comparison with this conventional approach in the presence of HWI is challenging but also worth performing in depth in the future.} \appendices \section{Proof of Theorem 1} In Appendix A, we will mathematically prove \textbf{Theorem 1} in Section III. {\color{black} First, based on (\ref{eq2-16}), the exact average ACR can be derived from \begin{equation}\label{Exact Ergodic} \begin{split} \overline{R_{HWI}}\left(N\right)=& \mathbb{E}_{\mathbf{\Theta}_E}\left[R_{HWI}\left(N\right)\right]\\=& \mathbb{E}_{\mathbf{\Theta}_E}\left\{\log_2\left\{1+\frac{P\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2}{P(\kappa_t+\kappa_r)\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2+\sigma_w^2}\right\}\right\} \end{split} \end{equation} However, as illustrated in \cite{D.Li2020(CL)}, it is difficult, if not impossible, to obtain the exact closed-form expression for $\mathbb{E}_{\mathbf{\Theta}_E}\left[R_{HWI}\left(N\right)\right]$. Therefore, inspired by \cite{D.Li2020(CL), Y.Han2019(TVT), M.Matthaiou2009(TCOM), Q.T.Zhang2005(TWC), S.Sanayei2007(TWC)}, we will also find an approximation to $\mathbb{E}_{\mathbf{\Theta}_E}\left[R_{HWI}\left(N\right)\right]$. Fortunately, according to Eq. 
(35) in \cite{E.Bjornson2013(TCOM)}, which is given by \begin{equation}\label{eq35_in_[R4.2]} \mathbb{E}\left\{\log_2\left(1+\frac{x}{y}\right)\right\}\approx \log_2\left(1+\frac{\mathbb{E}\{x\}}{\mathbb{E}\{y\}}\right) \end{equation} a simpler closed-form expression for the average ACR can be achieved by the approximation in (\ref{eq35_in_[R4.2]}). Hence, based on (\ref{eq35_in_[R4.2]}), the $\overline{R_{HWI}}\left(N\right)$ in (\ref{Exact Ergodic}) can be approximated by \begin{equation}\label{aver_ACR_IT-HWI-approx} \begin{split} \overline{R_{HWI}}\left(N\right)\approx& \log_2\left\{1+\frac{P\mathbb{E}_{\mathbf{\Theta}_E}\left[\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2\right]}{P(\kappa_t+\kappa_r)\mathbb{E}_{\mathbf{\Theta}_E}\left[\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2\right]+\sigma_w^2}\right\} \\ =&\log_2\left(1+\frac{P\mathcal{Q}}{P(\kappa_t+\kappa_r)\mathcal{Q}+\sigma_w^2}\right) \end{split} \end{equation} where $\mathcal{Q}=\mathbb{E}_{\mathbf{\Theta}_E}\left[\left|\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right|^2\right]$. 
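The quality of this first-order swap of expectation and logarithm can be illustrated with a toy Monte Carlo experiment; the distributions below are arbitrary positive random variables chosen only for illustration, not the paper's channel quantities:

```python
import math
import random

random.seed(0)

# Toy illustration of the swap E[log2(1 + x/y)] ~ log2(1 + E[x]/E[y]).
# x and y are arbitrary positive random variables chosen for illustration;
# they are NOT the paper's channel quantities.
samples = [(random.uniform(1.0, 3.0), random.uniform(0.5, 1.5))
           for _ in range(200000)]
exact = sum(math.log2(1 + x / y) for x, y in samples) / len(samples)
mean_x = sum(x for x, _ in samples) / len(samples)
mean_y = sum(y for _, y in samples) / len(samples)
approx = math.log2(1 + mean_x / mean_y)
print(f"E[log2(1+x/y)] = {exact:.4f}, log2(1+E[x]/E[y]) = {approx:.4f}")
```

For these moderately spread distributions the two values agree to within a few percent, which is the level of accuracy the approximation is relied upon for here.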
From (\ref{aver_ACR_IT-HWI-approx}), the problem of deriving the closed-form expression for $\mathbb{E}_{\mathbf{\Theta}_E}\left[R_{HWI}\left(N\right)\right]$ is converted into that for $\mathcal{Q}$, which is expanded into \begin{equation}\label{Expansion of E} \begin{split} \mathcal{Q}=& \mathbb{E}_{\mathbf{\Theta}_E}\left\{\left[\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right]^*\left[\phi(t)\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right]\right\} \\ =&\mathbb{E}_{\mathbf{\Theta}_E}\left[\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)^*\left(\alpha\mathbf{g}_{IU}^T\mathbf{\Theta}_E\mathbf{g}_{SI}+h_{SU}\right)\right] \end{split} \end{equation} Subsequently,} let $\mathbf{G}_{IU}$ and $\mathbf{v}_{E}$ be defined by $\mathbf{G}_{IU}=diag\left(\mathbf{g}_{IU}\right)=\sqrt{\mu_{IU}}\mathbf{I}_N$ and $\mathbf{v}_{E}=\left(e^{j\theta_{E1}},e^{j\theta_{E2}},\ldots,e^{j\theta_{EN}}\right)^T$. Because we have $\mathbf{v}_{E}^T\mathbf{G}_{IU}=\mathbf{g}_{IU}^T\mathbf{\Theta}_E$, $\mathbf{G}_{IU}^*=\mathbf{G}_{IU}$ and $\mathbf{g}_{SI}^*=\mathbf{g}_{SI}$, from (\ref{Expansion of E}) we obtain \begin{small} \begin{equation}\label{Convert Q} \begin{split} \mathcal{Q}&=\mathbb{E}_{\mathbf{\Theta}_E}\left[{\alpha^2\mathbf{g}}_{SI}^T\mathbf{G}_{IU}^T\mathbf{v}_{E}^*\mathbf{v}_{E}^T\mathbf{G}_{IU}\mathbf{g}_{SI}+\alpha\mathbf{g}_{SI}^T\mathbf{G}_{IU}^T\mathbf{v}_{E}^*h_{SU}+\alpha h_{SU}^*\mathbf{v}_{E}^T\mathbf{G}_{IU}\mathbf{g}_{SI}+||h_{SU}||_2^2\right] \\ &=\alpha^2\mu_{IU}\mu_{SI}\mathbb{E}_{\theta_{Ei}}\left[tr\left(\mathbf{v}_{E}^T\mathbf{\Gamma}_N\mathbf{v}_{E}^*\right)\right]+\alpha\sqrt{\mu_{IU}\mu_{SI}\mu_{SU}}\mathbb{E}_{\theta_{Ei}}\left\{\sum_{i=1}^{N}\left[e^{j{{(\varphi}_{SU}+\theta}_{Ei})}+e^{-j{{(\varphi}_{SU}+\theta}_{Ei})}\right]\right\}+||h_{SU}||_2^2 \end{split} \end{equation} \end{small} where $i=1,2,...,N$. 
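The trace term appearing in (\ref{Convert Q}) can be checked numerically. Assuming $\mathbf{\Gamma}_N$ denotes the $N\times N$ all-ones matrix (an assumption here, consistent with the pairwise-cosine expansion carried out below), the trace collapses to $\left|\sum_i e^{j\theta_{Ei}}\right|^2$, i.e. $N$ plus twice the sum of pairwise cosines:

```python
import cmath
import math
import random

random.seed(1)
N = 8
theta = [random.uniform(-math.pi / 2, math.pi / 2) for _ in range(N)]

# With Gamma_N taken to be the N x N all-ones matrix (an assumption here,
# consistent with the pairwise-cosine expansion of this trace), the term
# tr(v_E^T Gamma_N v_E^*) collapses to |sum_i e^{j theta_i}|^2.
trace = sum(cmath.exp(1j * theta[i]) * cmath.exp(-1j * theta[k])
            for i in range(N) for k in range(N)).real
pairwise = N + 2 * sum(math.cos(theta[i] - theta[k])
                       for i in range(N) for k in range(i + 1, N))
print(trace, pairwise)
```

The two values agree to machine precision for any draw of the phases.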
In (\ref{Convert Q}), we can expand $tr\left(\mathbf{v}_{E}^T\mathbf{\Gamma}_N\mathbf{v}_{E}^*\right)$ into \begin{small} \begin{equation}\label{eq-A3} \begin{split} tr\left(\mathbf{v}_{E}^T\mathbf{\Gamma}_N\mathbf{v}_{E}^*\right)&=N+\sum_{i\neq1}^{N}e^{j(\theta_{E1}-\theta_{Ei})}+\sum_{i\neq2}^{N}e^{j(\theta_{E2}-\theta_{Ei})}+\ldots+\!\!\sum_{i\neq N-1}^{N}e^{j(\theta_{E(N-1)}-\theta_{Ei})}+\sum_{i\neq N}^{N}e^{j(\theta_{EN}-\theta_{Ei})}\\ &=N+2\sum_{i=2}^{N}\cos(\theta_{E1}-\theta_{Ei})+2\sum_{i=3}^{N}\cos(\theta_{E2}-\theta_{Ei})+\ldots+2\sum_{i=N}^{N}\cos(\theta_{E(N-1)}-\theta_{Ei})\\ &=N+{\mathbf{1M1}}^T \end{split} \end{equation} \end{small} where the matrix $\mathbf{M}$ is expressed as \begin{small} \begin{equation}\label{eq-A4} \mathbf{M}=\begin{pmatrix} 2\cos\left(\theta_{E1}-\theta_{E2}\right)&2\cos\left(\theta_{E2}-\theta_{E3}\right)&\cdots&2\cos\left(\theta_{E(N-1)}-\theta_{EN}\right)\\ 2\cos\left(\theta_{E1}-\theta_{E3}\right)&2\cos\left(\theta_{E2}-\theta_{E4}\right)&\iddots&0\\ \vdots&\vdots&\iddots&\vdots\\ 2\cos\left(\theta_{E1}-\theta_{E(N-1)}\right)&2\cos\left(\theta_{E2}-\theta_{EN}\right)&\cdots&0\\ 2\cos\left(\theta_{E1}-\theta_{EN}\right)&0&\cdots&0 \end{pmatrix} \end{equation} \end{small} We can also utilize Euler's formula to expand $\sum_{i=1}^{N}\left[e^{j(\varphi_{SU}+\theta_{Ei})}+e^{-j(\varphi_{SU}+\theta_{Ei})}\right]$ and then 
obtain $\sum_{i=1}^{N}\left[e^{j{{(\varphi}_{SU}+\theta}_{Ei})}+e^{-j{{(\varphi}_{SU}+\theta}_{Ei})}\right]=2\sum_{i=1}^{N}\cos{\left({\varphi_{SU}+\theta}_{Ei}\right)}$. As $\theta_{Ei}$, for $i=1,2,...,N$, are random variables which are uniformly distributed on $\left[-\pi/2,\pi/2\right]$, we should calculate the expectations of $2\sum_{i=1}^{N}\cos{\left({\varphi_{SU}+\theta}_{Ei}\right)}$ and $tr\left(\mathbf{v}_{E}^T\mathbf{\Gamma}_N\mathbf{v}_{E}^*\right)$ in order to obtain a statistical average ACR. First, we calculate $\mathbb{E}_{\theta_{Ei}}\left[2\sum_{i=1}^{N}\cos{\left({\varphi_{SU}+\theta}_{Ei}\right)}\right]$ and have \begin{small} \begin{equation}\label{eq-A5} \begin{split} \mathbb{E}_{\theta_{Ei}}\!\!\left[2\sum_{i=1}^{N}\cos{\left({\varphi_{SU}+\theta}_{Ei}\right)}\right] &=2\mathbb{E}_{\theta_{Ei}}\!\!\left[\sum_{i=1}^{N}{\cos{\varphi_{SU}}\cos{\theta_{Ei}}}-\sum_{i=1}^{N}{\sin{\varphi_{SU}}\sin{\theta_{Ei}}}\right]\\ &=2N\!\cos{\varphi_{SU}}\!\!\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\!\!\!{f\left(\theta_{Ei}\right)\cos{\theta_{Ei}{d\theta}_{Ei}}}-2N\sin{\varphi_{SU}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}{f\left(\theta_{Ei}\right)\sin{\theta_{Ei}{d\theta}_{Ei}}}\\ &=\frac{4}{\pi}N\cos{\varphi_{SU}} \end{split} \end{equation} \end{small}where $f\left(\theta_{Ei}\right)=1/\pi$ is the probability density function of variable $\theta_{Ei}$. Subsequently, we calculate $\mathbb{E}_{\theta_{Ei}}\left[tr\left(\mathbf{v}_{E}^T\mathbf{\Gamma}_N\mathbf{v}_{E}^*\right)\right]=N+\mathbb{E}_{\theta_{Ei}}\left[{\mathbf{1M1}}^T\right]$. It is notable that the elements in $\mathbf{M}$ are either 0, or $2\cos(\theta_{Ei}-\theta_{Ej})$ for $i<j$. Therefore, let $\delta_{\theta}$ be defined by $\delta_{\theta}=\theta_{Ei}-\theta_{Ej}$. 
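The closed form (\ref{eq-A5}) can be verified by direct Monte Carlo sampling of $\theta_{Ei}$; the phase $\varphi_{SU}=0.7$ rad below is an arbitrary illustrative value:

```python
import math
import random

random.seed(2)
N, M = 16, 100000
phi_SU = 0.7  # arbitrary illustrative phase (rad)

# Monte Carlo check of E[2 * sum_i cos(phi_SU + theta_Ei)] = (4/pi) N cos(phi_SU)
# for theta_Ei uniformly distributed on [-pi/2, pi/2].
total = 0.0
for _ in range(M):
    total += 2 * sum(math.cos(phi_SU + random.uniform(-math.pi / 2, math.pi / 2))
                     for _ in range(N))
estimate = total / M
closed_form = 4 / math.pi * N * math.cos(phi_SU)
print(estimate, closed_form)
```

The Monte Carlo estimate matches the closed form to well within its statistical error.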
Because $\theta_{Ei}$ obeys uniform distribution on $\left[-\pi/2,\pi/2\right]$, $\delta_{\theta}$ obeys triangular distribution on $[-\pi,\pi]$ whose probability density function is expressed as \begin{equation}\label{eq-A6} f\left(\delta_{\theta}\right)=\left\{\begin{matrix}\frac{1}{\pi^2}\delta_{\theta}+\frac{1}{\pi},\ \delta_{\theta}\in\left[-\pi,0\right]\\-\frac{1}{\pi^2}\delta_{\theta}+\frac{1}{\pi},\ \delta_{\theta}\in\left[0,\pi\right]\\\end{matrix}\right. \end{equation} Thus, we have {\color{black} \begin{small} \begin{equation}\label{eq-A7} \begin{split} N+\mathbb{E}_{\theta_{Ei}}\left[{\mathbf{1M1}}^T\right]&=N+\mathbb{E}_{\theta_{Ei}}\left[2\sum_{i<j}^{N}\cos{\left({\theta_{Ei}-\theta}_{Ej}\right)}\right]\\ &=N+N\left(N-1\right)\left[\int_{-\pi}^{0}{\left(\frac{1}{\pi^2}\delta_{\theta}+\frac{1}{\pi}\right)\cos{\left(\delta_{\theta}\right)}d\delta_{\theta}}+\int_{0}^{\pi}{\left(-\frac{1}{\pi^2}\delta_{\theta}+\frac{1}{\pi}\right)\cos{\left(\delta_{\theta}\right)}d\delta_{\theta}}\right]\\ &=N+\frac{1}{\pi^2}N(N-1)\left[\int_{-\pi}^0\delta_{\theta}\cos{(\delta_{\theta})}d\delta_{\theta}-\int_0^{\pi}\delta_{\theta}\cos{(\delta_{\theta})}d\delta_{\theta}\right]\\ &=N+\frac{4}{\pi^2}N(N-1)=\frac{4N^2}{\pi^2}+\left(1-\frac{4}{\pi^2}\right)N \end{split} \end{equation} \end{small}} By substituting (\ref{eq-A5}) and (\ref{eq-A7}) into (\ref{Convert Q}), and substituting (\ref{Convert Q}) into (\ref{aver_ACR_IT-HWI-approx}), we finally prove (\ref{eq2-17}). {\color{black} Then, by calculating $\gamma_{HWI}(N)=\frac{\partial\overline{R_{HWI}}\left(N\right)}{\partial N}$, we finally prove (\ref{Utility Expression}).} \section{Proof of Lemma 2} In Appendix B, we will prove \textbf{Lemma 2} in Section V. 
On the assumption that $\kappa_t=\kappa_r=\frac{1}{2}\kappa$ and $P_1=P_2=P$, {\color{black}when $N\rightarrow\infty$, from (\ref{eq-A}) and (\ref{eq-B}), we have \begin{equation} \mathfrak{A}(N)|_{N\rightarrow\infty}=\log_2\left(1+\frac{2}{\kappa}\right) \end{equation} \begin{equation} \mathfrak{B}(N)|_{N\rightarrow\infty}=\log_2\left(1+\frac{1}{\kappa+\frac{\sigma_w^2}{P\mu_{SU}}}+\frac{2}{\kappa}\right) \end{equation} Because $1+\frac{2}{\kappa}<1+\frac{1}{\kappa+\frac{\sigma_w^2}{P\mu_{SU}}}+\frac{2}{\kappa}$, according to (\ref{eq2-55}), we have \begin{equation} R_{HWI}^{DF}(N)|_{N\rightarrow\infty}=\frac{1}{2}\mathfrak{A}(N)|_{N\rightarrow\infty}=\frac{1}{2}\log_2\left(1+\frac{2}{\kappa}\right) \end{equation} Given $\overline{R_{HWI}}(N)|_{N\rightarrow\infty}$ in (\ref{R_HWI Upper Bound}), we calculate $\overline{R_{HWI}}(N)|_{N\rightarrow\infty}-R_{HWI}^{DF}(N)|_{N\rightarrow\infty}$ and obtain \begin{equation} \overline{R_{HWI}}(N)|_{N\rightarrow\infty}-R_{HWI}^{DF}(N)|_{N\rightarrow\infty} =\frac{1}{2}\log_2\left(1+\frac{1}{\kappa^2+2\kappa}\right)>0 \end{equation} from which we prove (\ref{N_inf_ACR_Compare}). Then, based on (\ref{Utility Expression}) and (\ref{eq-DF Utility}), we consider $N\rightarrow\infty$ and prove (\ref{N_inf_Utility_Compare}). } \section{Proof of Lemma 3} In Appendix C, we will prove \textbf{Lemma 3} in Section V. 
Similar to the proof of \textbf{Lemma 2} in Appendix B, {\color{black} when $P\rightarrow\infty$, we have \begin{equation} \mathfrak{A}(N)|_{P\rightarrow\infty}=\log_2\left(1+\frac{2N}{\kappa+\kappa N} \right) \end{equation} \begin{equation} \mathfrak{B}(N)|_{P\rightarrow\infty}=\log_2\left(1+\frac{1}{\kappa}+\frac{2N}{\kappa+\kappa N} \right) \end{equation} Because $1+\frac{2N}{\kappa+\kappa N}<1+\frac{1}{\kappa}+\frac{2N}{\kappa+\kappa N}$, according to (\ref{eq2-55}), we have \begin{equation} R_{HWI}^{DF}(N)|_{P\rightarrow\infty}=\frac{1}{2}\mathfrak{A}(N)|_{P\rightarrow\infty}=\frac{1}{2}\log_2\left(1+\frac{2N}{\kappa+\kappa N} \right) \end{equation} Based on (\ref{eq2-17}), we have \begin{equation} \overline{R_{HWI}}(N)|_{P\rightarrow\infty}=\log_2\left(1+\frac{1}{\kappa}\right) \end{equation} Hence, we calculate $\overline{R_{HWI}}(N)|_{P\rightarrow\infty}-R_{HWI}^{DF}(N)|_{P\rightarrow\infty}$ and obtain \begin{equation} \overline{R_{HWI}}(N)|_{P\rightarrow\infty}-R_{HWI}^{DF}(N)|_{P\rightarrow\infty} =\frac{1}{2}\log_2\left\{1+\frac{2\kappa+N+1}{(N+1)\kappa^2+2N\kappa} \right\}>0,\ \ \forall N>0 \end{equation} from which we prove (\ref{P_inf_ACR_Compare}). Then, on the basis of (\ref{Utility Expression}) and (\ref{eq-DF Utility}), we consider $P\rightarrow\infty$ and prove (\ref{P_inf_Utility_Compare_IRS}) and (\ref{P_inf_Utility_Compare_DF}). } \section{Proof of Lemma 4} In Appendix D, we will prove \textbf{Lemma 4} in Section V. {\color{black} It is noted that $\overline{R_{HWI}}(N)$ in (\ref{eq2-17}) is a monotonically increasing function with respect to $N>0$. 
Thus, its minimum over $N\geq1$ is attained at $N=1$: \begin{equation} \overline{R_{HWI}}\left(1\right)=\log_2\left\{1+\frac{\beta+\lambda+\mu_{SU}} {\kappa\left(\beta+\lambda+\mu_{SU}\right)+\frac{\sigma_w^2}{P}}\right\} \end{equation} Besides, $R_{HWI}^{DF}(N)$ in (\ref{eq2-55}) is also a monotonically increasing function with respect to $N>0$, and is bounded by $R_{HWI}^{DF}(N)|_{N\rightarrow\infty}=\frac{1}{2}\log_2\left(1+\frac{2}{\kappa}\right)$. Therefore, for the IRS to always outperform the DF relay in terms of the ACR for all $N>0$, the following relationship should hold: \begin{equation}\label{Condition_of_IRS_to_always_outperform} \overline{R_{HWI}}\left(1\right)>R_{HWI}^{DF}(N)|_{N\rightarrow\infty} \end{equation} From (\ref{Condition_of_IRS_to_always_outperform}), after a few manipulations, we obtain \begin{equation} \kappa>2\sigma_w^4 \left[P^2(\beta+\lambda+\mu_{SU})^2-2\sigma_w^2P(\beta+\lambda+\mu_{SU})\right]^{-1} \end{equation} and consequently prove \textbf{Lemma 4}. } \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{\label{intr}Introduction} Cosmic strings and other topological defects are a generic prediction of many theories beyond the Standard Model of particle physics, being formed by means of the Kibble mechanism \citep{Kibble:1976sj}. Since the properties of these objects and their astrophysical consequences are intrinsically linked to the symmetry breaking patterns which produce them, one can think of them as fossil relics of the physical conditions in the early Universe. As such, hunting for defects in the early or recent Universe is akin to looking for evidence of specific symmetry breaking patterns. In particular, a detection would indicate the presence of new physics, while a non-detection places constraints on theories of particle physics beyond the Standard Model. These are among the reasons why cosmic strings are a key target for next generation cosmic microwave background \citep{CORE} and gravitational wave experiments \citep{LISA}. These searches crucially depend on the availability of high-resolution, high-dynamic range simulations of defect networks. Unfortunately, computational constraints are already a limiting factor on these searches: it is clear that the approximations introduced to compensate for the absence of data introduce systematic uncertainties comparable to statistical uncertainties \citep{Ade:2013xla,Abbott:2017mem}. Using semi-analytical models with enough degrees of freedom \citep{Martins:1996jp,Book} can mitigate this problem. However, the proper calibration of such models also requires high-resolution simulations. In order to alleviate this problem, one can attempt to exploit alternatives to the standard hardware architectures, with the onus of optimization falling to the developers of the tool in question. The goal of this work is to describe how to optimize our previously developed GPU-accelerated cosmic string evolution code \citep{Correia:2018gew,Correia:2019bdl} for multiple accelerators. Before proceeding, let us recall that there are two possible ways to simulate Abelian-Higgs cosmic string networks. 
In the first, one approximates the cosmic string by the action of a macroscopic (Nambu-Goto) string, which is in principle justified as long as the string effective cross-section is negligible when compared to the string length. This can also be done because the vacuum of the defect is strictly local---in the sense that fields are confined to the near-vicinity of the string core (hence confined to a world-sheet). Simulations which assume this approximation have been done by several independent groups \citep{BB,AS,FRAC,VVO,Blanco}. The advantages of these simulations are their comparatively large dynamical range and spatial resolution; the main disadvantage is that, with only a one-dimensional effective action, some of the key processes affecting network dynamics (such as intercommuting and loop production) are lost and have to be enforced by hand. In the second approach to simulating strings, one places fields on a co-moving discrete lattice and evolves these fields throughout cosmic time. In the strict sense of the word one does not evolve strings, but instead evolves the fields---the strings are merely specific configurations of these fields. Examples of early Abelian-Higgs simulations in the literature include \citet{Moore:2001px} and \citet{Bevis:2006mj}. Computational limitations imply that ordinarily these simulations can only yield more modest spatial resolutions and/or dynamic ranges, but they do have one important advantage: the microscopic field dynamics is preserved, and therefore it is relatively easy to extend the simplest Abelian-Higgs case to multiple fields (including suitable couplings). This enables the numerical study of defect types that are not described by the Nambu-Goto approximation, at least without considering additional degrees of freedom, or considering instead the Kalb-Ramond action. 
This flexibility is reflected in the literature: one can find examples of global defect simulations like domain walls \citep{Rybak1}, monopoles \citep{Monopoles} and global strings \citep{Lopez-Eiguren:2017dmc}, as well as semilocal strings \citep{Semilocals} and even hybrid networks \citep{HindNab}. This second approach is the one that we adopt in the present work. Until very recently both types of string simulations exploited only one architecture: Central Processing Units, in a typical distributed computing environment. Simulations which use alternative architectures (specifically, accelerator-based ones) are far scarcer. So far there have been domain wall implementations by \citet{Intel} for Xeon Phi co-processors and by \citet{PhysRevE.96.043310}, and the more recent cosmic string implementation for GPUs by the present authors \citep{Correia:2018gew,Correia:2019bdl}. One limitation of our GPU implementations so far pertains to the amount of physical memory available on a graphical accelerator. A way around this involves swapping parts of the lattice from host to accelerator memory constantly; however, one expects this to negatively impact performance. In what follows we address this issue by implementing and subsequently validating an extension of our previous code for multiple accelerators, and we also quantify its scalability. An outline of the rest of the paper is as follows. We start in Sect. \ref{lat} with a brief outline of our discretization procedure, and then proceed to describe its implementation for multiple GPUs in Sect. \ref{disc}. Our procedure for validating the code is then described in Sect. \ref{val}, following which we discuss the tests of its scalability (both in terms of strong and weak scaling) in Sect. \ref{impl}. Finally, we present some conclusions and a brief outlook in Sect.~\ref{concl}. 
\section{\label{lat}Discretization scheme} Field theory simulations of defects in cosmology rely on discretizing fields on a lattice and allow their evolution to be dictated by integrators which in the continuum limit approximate the equations of motion of the fields. Abelian-Higgs cosmic strings arise from a Lagrangian density which is invariant under local $U(1)$ transformations; once this symmetry is broken, topological defects can form. We start by providing a brief overview of the aspects of the discretization process relevant for our code (and specifically for its multi-GPU extension), referring the reader to our previous work on the single GPU version \citep{Correia:2018gew,Correia:2019bdl} for a more detailed discussion. The Lagrangian density has the form \begin{equation} \mathcal{L}=|D_\mu \phi|^2 - \frac{\lambda}{4}(|\phi|^2 -\sigma^2)^2 - \frac{1}{4e^2}F^{\mu \nu}F_{\mu \nu}\,, \end{equation} where $\phi$ is a complex scalar field, the electromagnetic field tensor is given by $F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, $A_\mu$ is the gauge field (where the gauge coupling $e$ has been absorbed), $D_\mu \phi$ is the gauge covariant derivative given by $D_\mu = \partial_\mu -iA_\mu$ and $\lambda$ and $e$ are coupling constants. Throughout this work we assume both the temporal gauge ($A_0 =0$) and a Friedmann-Lema\^{i}tre-Robertson-Walker flat background, described by the metric \begin{equation} ds^2 = -dt^2 +a(t)^2 \bigg[ dr^2 + r^2 (d\theta^2 + \sin^2 \theta \, d\phi^2 ) \bigg] \end{equation} where $a(t)$ is the scale factor and ($t, r, \theta, \phi$) are spacetime coordinates. Through a coordinate transformation this can be re-written in comoving coordinates and conformal time, which simplifies the metric to $g_{\mu \nu} = a^2 diag(-1,1,1,1)$. 
By standard variational principles one can obtain the equations of motion, \begin{equation} \ddot{\phi} + 2\frac{\dot{a}}{a}\dot{\phi} = D^jD_j\phi -\frac{a^{2}\lambda}{2} (|\phi|^2 - \sigma^2) \end{equation} \begin{equation} \dot{F}_{0j} = \partial_i F_{ij} -2a^2 e^2 Im[\phi^* D_j \phi]\,, \end{equation} along with Gauss's law, \begin{equation} \partial_i F_{0i} = 2 a^2 e^2 Im[\phi^* \dot{\phi}]\,, \end{equation} where $\dot{a}$ indicates a derivative of the scale factor with respect to conformal time. In order to obtain the discrete form of these equations (which tell us how to update the fields at each numerical timestep) one needs to consider the principles of lattice gauge theory \citep{PhysRevD.10.2445}. Note that we will, in addition, allow both the scalar and gauge couplings to scale (this modifies the usual equations of motion and by extension also their discretizations). Both couplings are now described by \begin{align} \lambda = \lambda_0 a^{2(1-s)} && e = e_0 a^{(1-s)} \end{align} such that the value of $s$ can be used to either fix the comoving width of the defect \citep{PRS} or even to allow it to grow (for a negative $s$) and subsequently shrink, as expected in the true equations of motion ($s=1$) \citep{Bevis:2006mj}. In this way the defects are resolved and do not become smaller than the lattice spacing. In terms of simulation parameters we choose a lattice spacing of $\Delta x = 0.5$ and a timestep size of $\Delta \eta = 0.1$, which are standard choices in the literature \citep{Correia:2019bdl,Daverio:2015nva}, and couplings of $\lambda = 2.0$ and $e_0 = 1.0$; the latter choices ensure criticality, with a Bogomol'nyi ratio $\beta = \lambda / 2e^2 = 1.0$. These standard choices are made to enable a meaningful comparison to the most commonly simulated cases in the literature and thus allow us to validate our code, as detailed below. The simulations start at $\eta_0=1.0$. 
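As a quick consistency check on these scalings (a plain-Python sketch, not part of the simulation code), note that the two powers of the scale factor cancel in the Bogomol'nyi ratio for any $s$, so the criticality condition set by $\lambda_0=2$, $e_0=1$ is preserved throughout the evolution:

```python
# Consistency check: the scalings lambda = lambda0 * a**(2*(1-s)) and
# e = e0 * a**(1-s) cancel in the Bogomol'nyi ratio beta = lambda/(2 e^2),
# so beta stays at its initial value for any expansion and any s; the choice
# lambda0 = 2, e0 = 1 keeps the network critical (beta = 1) at all times.
def bogomolnyi_ratio(lambda0, e0, a, s):
    lam = lambda0 * a ** (2 * (1 - s))
    e = e0 * a ** (1 - s)
    return lam / (2 * e ** 2)

for a in (1.0, 3.7, 120.0):
    for s in (-1.0, 0.0, 1.0):
        assert abs(bogomolnyi_ratio(2.0, 1.0, a, s) - 1.0) < 1e-12
print("beta = 1 for all tested (a, s)")
```

In other words, letting the couplings scale changes the comoving defect width but not the critical character of the string solutions.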
The initial conditions consist of setting a random phase for the complex scalar field $\phi$, while all other fields are set to zero. These initial conditions are an attempt to mimic the fields after a second-order phase transition (already in the broken-symmetry phase). For the purpose of this paper, we note that we are not applying any period of artificial damping. However, such a capability exists in the simulation and has in fact already been used to study the consequences of varying degrees of damping \citep{Correia:2020gkj}. Due to the periodic boundary conditions, the simulation ends when the horizon reaches half the box size (the half-light-crossing time). The final conformal time can then be computed as $0.5 \Delta x N$, where $N$ is the box size. Altogether, these choices mean that there are $630$ time-steps in a $256^3$ simulation and $1270$ time-steps in a $512^3$ simulation. With these prescriptions one obtains a discretization based on a staggered leap-frog scheme with respect to terms of second order in time and Crank-Nicolson with respect to terms of first order in time: \begin{equation} \begin{split} (1+\delta)\Pi^{x,\eta+\frac{1}{2}} &= (1-\delta)\Pi^{x,\eta-\frac{1}{2}}+\Delta\eta [D_j^-D_j^+\phi^{x,\eta} \\ &-\frac{\lambda_0}{2} a_\eta^{2s}(|\phi^{x,\eta}|^2-\sigma^2)\phi^{x,\eta}] \end{split} \end{equation} \begin{equation} \begin{split} (1+\omega)E^{x,\eta+\frac{1}{2}}_i &= (1-\omega)E^{x,\eta-\frac{1}{2}}_i +\Delta\eta [-\partial_i^- F_{ij} \\ &+ 2e_0^{2}a^{2s}_\eta Im[\phi^* D_i^+ \phi]^{x,\eta} ] \end{split} \end{equation} \begin{equation} \phi^{x,\eta+1} = \phi^{x,\eta} + \Delta\eta\, \Pi^{x,\eta+\frac{1}{2}} \end{equation} \begin{equation} A^{x,\eta+1}_i = A^{x,\eta}_i + \Delta\eta\, E^{x,\eta+\frac{1}{2}}_i\,, \end{equation} where $\omega=\delta(1-s)$, $\delta=\frac{1}{2} \alpha \frac{d\ln a}{d\ln \eta}\frac{\Delta \eta}{\eta} = \frac{1}{2} \alpha \frac{m \Delta \eta}{(1-m)\eta}$, and the last equality assumes cosmological power-law expansion rates with the scale
factor \begin{equation}\label{defm} a\,\propto\,t^m\,\propto \eta^{m/(1-m)}\,, \end{equation} respectively in terms of physical and conformal time. Note that $\delta$ is responsible for the Hubble damping of the scalar field and $\omega$ is responsible for damping the gauge field. In particular, the damping of the gauge field when $s\neq 1$ is responsible for upholding the discretized version of the previously stated Gauss's law to machine precision. We remark as well that all spatial derivatives are accurate to $\mathcal{O}(\Delta x^2)$. In order to completely describe the communication and memory access patterns of our code, we must also describe the gauge covariant derivatives, $D^+ \phi$, and the derivatives of the gauge field strength $\partial^- F_{ij}$ in greater detail. In order to accurately preserve gauge invariance on the lattice these two quantities can be defined using so-called link variables, \begin{equation} U_j^x = e^{-i A_j^x}\,, \end{equation} which describe the gauge fields on the lattice as parallel transporters of the scalar field $\phi$. With these we can properly define the modified Laplacian stencil, \begin{equation} D_j^-D_j^+\phi^x = \sum_j \frac{1}{\Delta x^2} [ U_j^x\phi^{x+k_j} - 2\phi^{x} +(U_j^{x-k_j})^* \phi^{x-k_j}]\,, \end{equation} and the spatial components of the gauge field strength can be obtained from the following product of link variables, \begin{align} \Xi_{ij} &= U_j^x U_i^{x+k_j} (U_j^{x+k_i})^* (U_i^{x})^* \\ &= \exp[i \Delta x (\partial^+_i A_j(x) -\partial^+_j A_i(x) )]\,. \end{align} In order to validate the code and, more generally, to quantitatively describe the evolution of cosmic string networks in production runs, one must compute and output at least two key network observables: the mean string separation $\xi$ and the mean squared velocity $\langle v^2 \rangle$. Our code can compute and output each of these variables in two separate ways.
For the mean string separation these are \begin{align}\label{diagxi} \xi_\mathcal{L} = \sqrt{ \frac{-\mu V}{\sum_x \mathcal{L}_x} }\,, && \xi_W = \sqrt{\frac{\mathcal{V}}{\sum_{ij,x} W_{ij,x}}}\,. \end{align} The first one comes from \citet{Bevis:2006mj} and is based on the Lagrangian being strongly negatively peaked at the string core while approaching zero away from the string. The second one comes from the lattice-based winding of \citet{Kajantie:1998bg}. We also use two different estimators to compute the mean squared velocity, \begin{align}\label{diagvel} \langle v^2\rangle_{\phi} = \frac{2R}{1+R}\,, && \langle v^2\rangle_{\omega} = \frac{1}{2} \bigg( 1+3\frac{\sum_x p_x \mathcal{W}_x}{\sum_x \rho_x \mathcal{W}_x} \bigg)\,, \end{align} where $R$ is the ratio \begin{equation} R = \frac{\sum_x |\Pi|^2 \mathcal{L}}{\sum_{x,i} |D^+_{x,i} \phi|^2 \mathcal{L} }\,. \end{equation} The first estimator comes from \citet{Hindmarsh:2008dw} and is based on considering a boosted static string. A complete derivation can be found in \citet{Hindmarsh:2017qff}. The second estimator is based on the equation of state parameter and has also been used in \citet{Hindmarsh:2017qff}. Note that these are not the only velocity estimators, but according to \citet{Hindmarsh:2017qff} these are the two estimators which are in better agreement with the expected velocity of an oscillating string in Minkowski space. Deep in eras where the scale factor evolves according to Eq. \ref{defm} (that is, with a constant expansion rate $m$), we expect these observables to exhibit scale-invariant behaviour \citep{Book}: the string separation should grow linearly with time, $\xi\propto\eta$, and the mean velocity should be constant, $v={\rm const}$. Note that the proportionality factors for different expansion rates $m$ depend on $m$ itself and also on other parameters, in a way that is quantitatively described by the analytic velocity-dependent one-scale model \citep{Martins:1996jp,Book}.
A recent accurate calibration of this model has been presented in \citet{Correia:2019bdl}. \section{\label{disc}Extension to multiple accelerators} In order to divide a large lattice among multiple accelerators, such that they can all partake in evolving the fields, we must take into account that, due to the modified Laplacian stencil and the derivatives of the link variables, field values ($\phi$, $A$) must be communicated between sub-domains. In order to allow such an extension to multiple nodes in a network (as is standard in most high performance computing facilities) we use the Message Passing Interface (MPI) to facilitate communications. Throughout this work we assume a 3D decomposition. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure1.pdf} \caption{The packing procedure along two different directions. Blue represents the core of each domain (of size $NX \times NY$), red represents the buffers being filled with appropriate values to send to neighboring sub-domains and green represents an already received buffer. In the left panel, the buffer values come only from the blue inner core. After communication has taken place in this first direction, we can unpack the received buffer into the boundaries of the sub-domain. Once done, we can start packing the communication buffers for the next direction. This involves using not only the blue inner core but also the freshly unpacked boundaries (in green). The pink boxes indicate domain areas where we update the fields $E$ and $\Pi$ either as the packing procedure begins (left and middle panels) or after all communication has taken place (right panel), whereas orange indicates these areas have already been updated.} \label{fig1} \end{figure*} In the Compute Unified Device Architecture (CUDA), all code to be executed by a graphical accelerator is contained in functions denoted as kernels.
In order to implement the 3D decomposition, in addition to the kernels which evolve field quantities, we add kernels that pack outer values of the core of each sub-domain into additional buffers which are then sent to neighboring sub-domains via \textit{Isend} (from MPI). After both \textit{Isend/Irecv} complete (a \textit{WaitAll} barrier is necessary to ensure that all communication is complete) we use similar unpacking kernels to place the contents of received buffers into the boundaries of each sub-domain. Note that launching CUDA kernels from the host is non-blocking (relative to the host), which is why we must include, in appropriate places, a \textit{cudaStreamSynchronize}, which ensures that the packing kernel and all kernels before it have completed. Note that a CUDA stream is a sequence of commands launched in order (in this case a sequence of kernels). In order to obtain maximal bandwidth and minimal latency the communicated buffers are allocated in Unified Memory and remote direct memory access is allowed, as noted in standard good practices \citep{pracebest}. It must be emphasized that some key requirements must be fulfilled in order to correctly satisfy all boundary conditions. For example, due to the diagonal terms of the gauge field $A_i$ (needed to compute the derivative of the gauge field strength) we must also pay attention to ``corners''. The way to correctly handle these is to use the so-called ``diagonal trick'', which means the corners along a given direction come from values exchanged in the previous communication in another direction. This dependency of exchanges in one direction upon preceding exchanges implies that communication must proceed in a specific order.
In our case this means communication must proceed as follows: \begin{enumerate} \item Pack the values to be sent to neighbors along X (outer part of the inner $NX \times NY \times NZ$ part of the domain); \item Send packed buffers to neighboring sub-domains; \item Unpack received values into boundaries in the X direction; \item Pack the values from the outer cells of the inner $NX \times NY \times NZ$ along with values received from the previous exchange (to ensure corners are appropriately handled), to be sent to neighbors in the Y direction; \item Exchange packed buffers in the Y-direction; \item Unpack received buffers into the boundaries of the sub-domain; \item Pack the values from not only the inner core of the sub-domain but also from the two previous exchanges; \item Exchange packed buffers in the Z-direction; \item Unpack received buffers into boundaries. \end{enumerate} Having done this we can choose to perform the update of $E$ and $\Pi$. However, in order to obtain weak scaling above $90\%$ for thousands of GPUs we must also consider overlapping the compute and communication work. To achieve this overlap we can update the inner core of each sub-domain while we start packing the values for communication along X. Note that the outermost cells of this inner core require values from the boundaries which are still being communicated. This means that the size of the inner core we update must be $(NX-2) \times (NY-2) \times (NZ-2)$ (while for communication it remains $NX \times NY \times NZ$). After the exchange in the X-direction is completed, we can begin updating the outer part of the sub-domain along X (given that the necessary boundary is now available), while communication is completed in the Y-direction. This proceeds until all boundaries are exchanged and $E$ and $\Pi$ are updated everywhere. At that point, we can simply update $\phi$ and $A$. A schematic view of this is presented, for the simpler case of 2 dimensions, in Figure \ref{fig1}.
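The ordering above can be emulated in a few lines. In the sketch below (our NumPy stand-in, with the MPI \textit{Isend/Irecv} exchanges between neighbours replaced by periodic wrapping within a single array), each direction's slab spans the ghost layers received in the previous directions, so edge and corner ghosts come out correct without dedicated diagonal messages:

```python
import numpy as np

def exchange_halos(core):
    """Fill one layer of ghost cells around `core` via three sequential
    per-direction exchanges (the "diagonal trick" ordering): each
    direction's copied slab spans the ghosts filled by the previous
    directions, so corners need no dedicated diagonal exchange.
    Real exchanges use MPI between GPUs; here a periodic wrap within
    one array plays the role of the neighbouring sub-domains."""
    g = np.full(tuple(n + 2 for n in core.shape), np.nan)
    g[(slice(1, -1),) * core.ndim] = core
    for a in range(core.ndim):          # X, then Y, then Z
        lo = [slice(None)] * core.ndim
        hi = [slice(None)] * core.ndim
        lo[a], hi[a] = 0, -1            # ghost layers to fill along axis a
        src_lo, src_hi = list(lo), list(hi)
        src_lo[a], src_hi[a] = -2, 1    # periodic neighbours' boundary cells
        g[tuple(lo)] = g[tuple(src_lo)]  # "unpack" low-side buffer
        g[tuple(hi)] = g[tuple(src_hi)]  # "unpack" high-side buffer
    return g
```

After the three passes every ghost cell, including all corners, holds the correct periodic value; exchanging all directions simultaneously instead would leave the corners stale.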
Note that here we can also make use of multiple CUDA streams (asynchronous with respect to each other) in order to allow overlap between the compute kernels and the pack/unpack kernels (one stream per pack/unpack kernel). This also means that the correct dependencies between streams must be enforced. This can be done using a combination of \textit{cudaEventRecord} (which signals the completion of a kernel in a given stream) and \textit{cudaStreamWaitEvent} (which can force a stream to wait for completion of a given event in another stream). \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Size} & \textbf{m} & \textbf{$\dot{\xi}_\mathcal{L}$} & \textbf{$\dot{\xi}_W$} & \textbf{$\langle v^2 \rangle_\omega$} & \textbf{$\langle v^2 \rangle_\phi$} & \textbf{Reference}\\ \hline $1024^3$ & 1/2 & $0.280\pm0.023$ & $0.282\pm0.026$ & $0.306\pm0.003$ & $0.272\pm0.002$ & This work\\ $2048^3$ & 1/2 & $0.268\pm0.011$ & $0.267\pm0.010$ & $0.312\pm0.001$ & $0.283\pm0.001$ & This work\\ $4096^3$ & 1/2 & $0.253\pm0.007$ & $0.251\pm0.006$ & $0.308\pm0.002$ & $0.282\pm0.001$ & This work\\ \hline $512^3$ & 1/2 & $0.30\pm0.02$ & $0.32\pm0.03$ & $0.32\pm0.01$ & $0.31\pm0.01$ & \citet{Correia:2018gew} \\ $512^3$ & 1/2 & $0.31\pm0.02$ & - & - & - & \citet{Bevis:2006mj} \\ $1024^3$ & 1/2 & - & $0.26\pm0.02$ & - & - & \citet{Bevis:2010gj} \\ $4096^3$ & 1/2 & $0.234\pm0.006$ & $0.244\pm0.005$ & - & - & \citet{Daverio:2015nva} \\ \hline $1024^3$ & 2/3 & $0.279\pm0.016$ & $0.285\pm0.017$ & $0.255\pm0.003$ & $0.228\pm0.004$ & This work\\ $2048^3$ & 2/3 & $0.256\pm0.006$ & $0.257\pm0.005$ & $0.264\pm0.001$ & $0.240\pm0.001$ & This work\\ $4096^3$ & 2/3 & $0.252\pm0.010$ & $0.250\pm0.009$ & $0.265\pm0.001$ & $0.243\pm0.001$ & This work\\ \hline $512^3$ & 2/3 & $0.29\pm0.01$ & $0.29\pm0.02$ & $0.27\pm0.01$ & $0.25\pm0.01$ & \citet{Correia:2018gew} \\ $512^3$ & 2/3 & $0.30\pm0.01$ & - & - & - & \citet{Bevis:2006mj} \\ $1024^3$ & 2/3 & - & $0.28\pm0.01$ & - & - & 
\citet{Bevis:2010gj} \\ $4096^3$ & 2/3 & $0.235\pm0.008$ & $0.247\pm0.008$ & - & - & \citet{Daverio:2015nva} \\ \hline \end{tabular} \caption{The asymptotic rate of change of the mean string separation $\xi$ and the mean velocity squared $\langle v^2\rangle$ for the estimators defined in the text, in the radiation and matter eras ($m=1/2$ and $m=2/3$ respectively), for our simulations with box sizes of $4096^3$, $2048^3$ and $1024^3$, using $4096$, $512$ and $64$ GPUs respectively. The error bars are the statistical uncertainties from averages of 20 runs with different initial conditions. For comparison we show the results reported in \protect\citet{Correia:2018gew} from the single GPU code (for averages of twelve $512^3$ simulations) as well as results from simulations with CPU-based codes. The range of timesteps used for each fit to the GPU simulations is respectively $[517,1023.5]$, $[300.5,511.5]$, $[100.5,255.5]$, $[80,128]$ for the $4096^3$, $2048^3$, $1024^3$ and $512^3$ simulations.\label{tab1}} \end{center} \end{table*} \section{\label{val}Code validation} In order to verify that the simulation boxes evolved by the new code behave as expected, one must compare the numerically measured physical estimators (the slope of the time dependence of the mean string separation and the asymptotic mean velocity squared) to the values available in the literature, including those previously obtained (at different box sizes) with the single GPU version of the code, which has been previously validated \citep{Correia:2018gew}. We will do this with constant co-moving width simulations in the canonical radiation and matter dominated epochs, since these are the most common in the literature. In order to conduct this test, we simulate $20$ runs for each expansion rate, using the same set of initial conditions for the matter and radiation cases.
Using roughly the final third of the time-steps we obtain the slopes of each curve, and the average and standard deviation are then used to extract the asymptotic quantities and their one sigma statistical uncertainties. The results of this validation test are summarized in Table \ref{tab1} and also in Figures \ref{fig2} and \ref{fig3} (respectively for the radiation and matter dominated eras), and in a nutshell the new code is in agreement with the results in the literature, given the reported uncertainties. \begin{figure*} \centering \includegraphics[width=\columnwidth]{figure2a.pdf} \includegraphics[width=\columnwidth]{figure2b.pdf} \includegraphics[width=\columnwidth]{figure2c.pdf} \includegraphics[width=\columnwidth]{figure2d.pdf} \caption{The evolution of the four relevant average network estimators, defined in Eqs. \ref{diagxi} and \ref{diagvel}, for the average of 20 runs in the radiation-dominated epoch ($m=1/2$), with lattice sizes of $4096^3$, $2048^3$ and $1024^3$, using $4096$, $512$ and $64$ GPUs respectively. We assume constant co-moving width throughout.} \label{fig2} \end{figure*} Regarding the mean string separation, our previously obtained values at $512^3$ were in agreement with the values obtained by \citet{Bevis:2006mj} with the same box size. The larger simulations in the current work confirm the slight drift of the scaling value of $\dot{\xi}$ to lower values, as can be seen in \citet{Bevis:2010gj} for $1024^3$ boxes and \citet{Daverio:2015nva} at $4096^3$ (see also \citet{Hindmarsh:2017qff}). This slow drift may be due to the fact that a higher (spatial) resolution affects the main energy loss mechanisms for the network (loop production and scalar and gauge radiation) in slightly different ways, which in turn impacts the string network density. A detailed exploration of this hypothesis, in the context of the recently improved and calibrated velocity-dependent one-scale model \citep{Correia:2019bdl}, is in progress.
We also note that the two independent estimators for the mean string separation, defined in Eq. \ref{diagxi}, lead to fully consistent values for $\dot{\xi}_\mathcal{L}$ and $\dot{\xi}_W$. \begin{figure*} \centering \includegraphics[width=\columnwidth]{figure3a.pdf} \includegraphics[width=\columnwidth]{figure3b.pdf} \includegraphics[width=\columnwidth]{figure3c.pdf} \includegraphics[width=\columnwidth]{figure3d.pdf} \caption{Same as Fig. \protect\ref{fig2}, for the matter-dominated epoch ($m=2/3$).} \label{fig3} \end{figure*} As for the average velocity squared, our previous work \citep{Correia:2018gew,Correia:2019bdl} using the estimators of \citet{Hindmarsh:2017qff} had already established qualitative agreement with the values in the literature, up to and including $4096^3$ simulations. Here this agreement continues. Note that in the case of the velocities there is no statistically significant drift of the scaling value as a function of the box size. On the other hand, and in agreement both with \citet{Hindmarsh:2017qff} and with our earlier $512^3$ study, our present analysis confirms that the velocity estimator based on the gradient of $\phi$ leads to values that are consistently lower than those of the equation of state estimator, by about ten per cent at all box sizes. \section{\label{impl}Performance} All the performance tests we report herewith were performed at Piz Daint, the largest supercomputer in Europe. At the time of the measurements, this facility contained $5704$ nodes, each equipped with one NVIDIA Tesla P100. All benchmarks are performed assuming the evolution of a local string network as described in the previous sections. To closely mimic a typical use case (in other words, a typical production run), we choose to compute the Lagrangian based mean string separation and the mean velocity squared estimated from $\phi$ and its conjugate field $\Pi$, weighted with the Lagrangian.
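For concreteness, the Lagrangian-weighted velocity estimator just mentioned (the $\langle v^2\rangle_\phi$ of Eq. \ref{diagvel}) amounts to the following computation; this is a NumPy sketch with illustrative array names, not the production CUDA reduction:

```python
import numpy as np

def velocity_phi(Pi, Dphi, weight):
    """<v^2>_phi = 2R/(1+R), where R is the weighted ratio of the kinetic
    term |Pi|^2 to the gradient term sum_i |D_i^+ phi|^2; `weight` plays
    the role of the Lagrangian, localising the average to the string cores.
    `Dphi` has shape (ndim,) + Pi.shape.  Illustrative sketch only."""
    R = np.sum(np.abs(Pi) ** 2 * weight) / np.sum(np.abs(Dphi) ** 2 * weight)
    return 2.0 * R / (1.0 + R)
```

Inverting the formula gives $R = v^2/(2-v^2)$: a static configuration ($\Pi = 0$) yields $v^2 = 0$, while $R = 1$ corresponds to $v^2 = 1$.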
The computation of these network averaged quantities occurs every $5$ time-steps. The initial conditions are generated at random in each case. We will test our code performance in terms of two metrics: strong and weak scaling. To define each, let us consider two possible test cases: the first is an application which is compute bound and requires an enormous amount of wall-clock time, and the second is an application which is limited by the amount of available memory. In the first case, given a constant problem size, one wishes to increase the number of processors used for the computation, thus decreasing the workload for each processor and the overall wall-clock time. Characterizing this behaviour will tell us how much we can afford to sub-divide our simulation grid for a specific problem size in terms of performance gains. This scalability diagnostic is known as strong scaling. For the second application, we wish to target larger and larger problem sizes. So for this type of scalability, we keep the workload per process constant and both the problem size and the number of computational elements increase. This diagnostic is known as weak scaling. Both strong and weak scaling are arguably relevant metrics for the problem at hand, but in pragmatic terms weak scaling is the most relevant one. The limiting factor in contemporary simulations is clearly the amount of available memory, and indeed one must often extrapolate from these relatively small simulations to cosmologically relevant scales. Targeting larger and larger simulation sizes (and therefore larger dynamic ranges) would lessen this problem. In particular, doubling the box size along each dimension, besides the obvious increases in volume (by a factor of 8) and in dynamic range (by a factor of 2), also increases the range of physical scales between string thickness and horizon size that can be probed.
In our case the strong scaling would only become critical if the total wall-clock times of the simulation were much larger. For simulations of cosmic string networks (or indeed those of other cosmological defects) the dynamic range of the simulation increases with the size of the whole box, being proportional to the length of the smallest box side $N$ (for any cubic simulation box $N^3$). Given this feature of string simulations, for the weak scaling diagnostic we quantify the time taken to evolve the lattice for the number of time-steps required to evolve the smallest simulation box we considered, which has a size of $256^3$. With our choices for the other numerical parameters, described above, this corresponds to a total of $630$ time-steps. \begin{figure*} \centering \includegraphics[width=\columnwidth]{figure4a.pdf} \includegraphics[width=\columnwidth]{figure4b.pdf} \includegraphics[width=\columnwidth]{figure4c.pdf} \includegraphics[width=\columnwidth]{figure4d.pdf} \includegraphics[width=\columnwidth]{figure4e.pdf} \includegraphics[width=\columnwidth]{figure4f.pdf} \caption{Performance indicators for our multiple GPU code; strong scaling is shown in the top set of panels, while weak scaling can be seen in the middle and bottom ones. The left-hand side panels show wall-clock time for a full run (for the strong scaling plot) or the amount of wall-clock time necessary to complete $630$ time-steps (middle panels for the weak scaling plot at $256^3$) or to complete $1270$ timesteps (bottom panels at $512^3$). The corresponding parallel efficiencies relative to reference cases as defined in the text (see e.g. Eq. \protect\ref{eq:eff} for strong scaling) are presented on the right hand side panels.} \label{fig4} \end{figure*} We characterize both the weak and strong scaling using a speed-up factor, $S$, and a parallel efficiency, $E$. Both are calculated in comparison to a reference wall-clock time, denoted $t_{ref}$.
In strong scaling this corresponds to the wall-clock time necessary to fully evolve the full dynamic range with the smallest number of GPUs (where a box of size $N^3$ can be fitted). The speed-up is given by \begin{equation} S = \frac{t_{ref}}{t_{n}}\,, \end{equation} where $t_n$ is the wall-clock time taken when running with $n$ GPUs. For weak scaling we can also use this formula, with $t_{ref}$ being the time taken for a $256^3$ simulation with one GPU and $t_{n}$ the time taken to perform an $n$-times larger simulation (with the same number of timesteps) on $n$ GPUs. The weak scaling parallel efficiency is then the speed-up expressed as a percentage. For strong scaling the parallel efficiency is re-scaled with the number of GPUs which the reference run uses, $n_{ref}$, that is \begin{equation} E_{strong} = \frac{n_{ref} t_{ref}}{n t_{n}}\,, \label{eq:eff} \end{equation} where, as above, $t_n$ is the time taken to run the simulation with $n$ GPUs. With these defined we are in a position to describe the scalability of our application. For strong scaling there is an obvious point beyond which no useful scaling can be obtained. While we are unaware of any consensus on the definition of useful scaling, in this manuscript we assume useful scaling to only exist above $50\%$ efficiency. Specifically, in our case this point arises when the sub-domain size becomes too small. This is evidenced by the low parallel efficiencies seen in Table \ref{tab2} and in the top panels of Figure \ref{fig4} when approaching a sub-domain size of $128^3$. This is relatively common in most multi-GPU implementations, at least from a cursory overview of the literature \citep{gmd-11-1665-2018,Potter:2016ttn}. This behaviour stems from two reasons. The first reason is the amount of communications relative to the execution of CUDA kernels: not even the overlap is sufficient for cleverly hiding this cost for extremely small sub-domains.
The second reason which contributes to this behaviour is the amount of latency from launching CUDA kernels. \begin{table*} \begin{center} \begin{tabular}{c|c|c|c|c|c} \textbf{Box size} & \textbf{Number of GPUs} & \textbf{Domain decomposition} & \textbf{Wall-clock time} & \textbf{Speed-Up} & \textbf{Efficiency}\\ & & (x,y,z) & (s) & & (\%) \\ \hline $512^3$& 1 & (1,1,1) & 96.0 & - & - \\ & 2 & (1,1,2) & 50.1 & 1.92 & 95.9 \\ & 8 & (2,2,2) & 18.2 & 5.16 & 66.0 \\ & 32 & (2,4,4) & 6.59 & 14.57 & 45.5 \\ \hline $1024^3$& 8 & (2,2,2) & 217.39 & - & - \\ & 64 & (4,4,4) & 37.06 & 5.87 & 73.3 \\ & 512 & (8,8,8) & 12.48 & 17.41 & 27.2 \\ \hline $2048^3$& 64 & (4,4,4) & 438.45 & - & - \\ & 512 & (8,8,8) & 76.15 & 5.76 & 72.0 \\ \hline $4096^3$& 512 & (8,8,8) & 948.52 & - & - \\ & 4096 & (16,16,16) & 156.96 & 6.04 & 74.3 \\ \hline $8192^3$& 4096 & (16,16,16) & 1990.51 & - & - \\ \end{tabular} \caption{Strong scaling measurements for different lattice sizes, reported as the wall-clock time to fully simulate a network from start to finish. We also present the speed-up and a parallel efficiency, both relative to the reference measurement, which uses the smallest number of GPUs where the simulation will fit. 
\label{tab2}} \end{center} \end{table*} \begin{table*} \begin{center} \begin{tabular}{c|c|c|c|c|c} \textbf{Box size} & \textbf{Number of GPUs} & \textbf{Domain decomposition} & \textbf{Wall-clock time} & \textbf{Speed-Up} & \textbf{Efficiency}\\ & & (x,y,z) & (s) & & (\%) \\ \hline $256^3$ & 1 & (1,1,1) & 8.93 & - & - \\ $256^2 \times 512$ & 2 & (1,1,2) & 8.95 & 1.00 & 99.7 \\ $256 \times 512^2$ & 4 & (1,2,2) & 8.93 & 1.00 & 99.9 \\ $512^3$ & 8 & (2,2,2) & 8.94 & 1.00 & 99.8 \\ $1024^3$ & 64 & (4,4,4) & 9.17 & 0.97 & 97.4 \\ $1024^2 \times2048$& 128 & (4,4,8) & 9.34 & 0.96 & 95.6 \\ $1024 \times 2048^2$& 256 & (4,8,8) & 9.44 & 0.95 & 94.6 \\ $2048^3$ & 512 & (8,8,8) & 9.68 & 0.92 & 92.2 \\ $2048^2 \times 4096$ & 1024 & (8,8,16) & 9.61 & 0.92 & 92.9 \\ $4096^3$ & 4096 & (16,16,16) & 9.81 & 0.91 & 91.2\\ \end{tabular} \caption{Weak scaling measurements for a fixed box size of $256^3$ per domain. The wall-clock time corresponds to the time to complete $630$ time-steps (the number of time-steps for a full $256^3$ size simulation). In addition we present a speed-up as well as a parallel efficiency.\label{tab3}} \end{center} \end{table*} \begin{table*} \begin{center} \begin{tabular}{c|c|c|c|c|c} \textbf{Box size} & \textbf{Number of GPUs} & \textbf{Domain decomposition} & \textbf{Wall-clock time} & \textbf{Speed-Up} & \textbf{Efficiency}\\ & & (x,y,z) & (s) & & (\%) \\ \hline $512^3$& 1 & (1,1,1) & 96.0 & - & - \\ $1024^3$& 8 & (2,2,2) & 108.27 & 0.89 & 88.7 \\ $2048^3$& 64 & (4,4,4) & 108.97 & 0.88 & 88.1 \\ $4096^3$& 512 & (8,8,8) & 117.75 & 0.82 & 81.5 \\ $8192^3$& 4096 & (16,16,16) & 124.41 & 0.77 & 77.1 \\ \hline \end{tabular} \caption{Derived weak scaling measurements for a fixed box size of $512^3$ per domain. The wall-clock time corresponds to the time to complete $1270$ time-steps (the number of time-steps for a full $512^3$ size simulation). These are derived from the strong scaling measurements above. 
In addition we present a speed-up as well as a parallel efficiency.\label{tab4}} \end{center} \end{table*} However disappointing the strong scaling might be, one can argue that given the relatively short run-times across the board, there is no pressing need for good strong scaling\footnote{We remark that this does not mean strong scaling is not useful. In fact it is useful to select the most efficient (i.e., least costly in node-hours) number of GPUs for a single simulation.}. Indeed, while the largest Abelian-Higgs field theory simulations currently described in the literature have box sizes of $4096^3$, we have been able to carry out full (production run level) $8192^3$ simulations, which using $4096$ GPUs take only $33.2$ minutes of wall clock time. On the same $4096$ GPUs, production runs of size $4096^3$ (the largest reported in the literature so far for cosmic strings) take less than $3$ minutes of wall clock time. On the other hand, weak scaling for $256^3$ is almost perfect up to thousands of GPUs, with the lowest detected efficiency being $91\%$ (see Table \ref{tab3} and also the middle panels of Figure \ref{fig4}). This strongly suggests the overlap is being successful. Note however that the overlap is no silver bullet. As soon as the box size per GPU increases to $512^3$ the overlap is not enough to completely hide the communications cost, which hits a lower parallel efficiency (around $77\%$) at the largest process count ($4096$). While the reason for the larger fractional overhead of communications at $512^3$ is so far unknown and warrants further investigation, we can speculate that it might be due to overlap combined with the automatic data movement that exists between GPU and CPU when using Unified Memory buffers. If so, possible solutions include manually moving data and the use of pinned host memory (which can be accessed at will by a GPU).
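As a concrete sanity check of these scalability metrics, the following sketch (ours, purely illustrative) evaluates the definitions of Eq. \ref{eq:eff} on the first rows of Table \ref{tab2}, reproducing the quoted speed-ups and efficiencies up to rounding:

```python
def scaling_metrics(t_ref, t_n, n, n_ref=1):
    """Speed-up S = t_ref / t_n and strong-scaling parallel efficiency
    E = n_ref * t_ref / (n * t_n), as defined in the text.  For weak
    scaling, E is simply S expressed as a percentage (n_ref = n = 1)."""
    return t_ref / t_n, (n_ref * t_ref) / (n * t_n)
```

For the $512^3$ box, $t_{ref}=96.0\,$s on one GPU and $t_2=50.1\,$s on two GPUs give $S\approx 1.92$ and $E\approx 96\%$; at $32$ GPUs ($t_{32}=6.59\,$s) the efficiency drops to $E\approx 45.5\%$, below our usefulness threshold.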
Our code can thus yield production grade simulations with the largest sizes in the literature, $4096^3$, in between 140 and 180 node-hours (the exact number depending on the number of GPUs being used), while for $8192^3$ simulations this increases by a factor of 16 (a factor of 8 due to the increased volume, and 2 due to the increased dynamic range) to about 2200 node-hours. Comparing to a CPU simulation is difficult without running one ourselves, but we can compare with the standard Abelian-Higgs simulations of \cite{Daverio:2015nva,Hindmarsh:2017qff}. We were provided timings via \citet{Hind:email} for a single $4096^3$ run on the older Monte Rosa system, for evolution and winding writing. Given that we forego writing windings to disk (choosing only to save the result of the estimator instead), we compare only with the evolution numbers. A run with $32768$ processes would require, given $32$ cores per node, $1024$ nodes to be used for $5.128$ hours, i.e. a total of $5251$ node-hours. We can therefore simulate an $8$ times larger lattice for twice as many time-steps ($8192^3$) while spending a bit less than half of the billed node-hours of a traditional $4096^3$ run. In other words, one can say that at this high end of contemporary high-performance computing facilities our code is faster by a factor of about 30. Note that compute time on Piz Daint is book-kept in node-hours, which is why we presented and compared node-hours in lieu of core-hours. The other reason we prefer to compare in node-hours instead of core-hours is the ambiguity of defining what a ``core'' might be in a GPU architecture. In order to explain why this is ambiguous, let us follow the definition of a graphics processing unit as a collection of Streaming Multiprocessors (SMs). In the case of Pascal GP100 (the architecture of the Tesla cards at Piz Daint), there are $56$ SMs, each with $64$ single-precision CUDA cores ($3584$ CUDA cores in total).
All instructions act upon groups of 32 threads (called warps). One might then assume that only 2 warps can execute at the same time in a GP100 SM, but in reality this is not the way to achieve good performance, and therefore it is not what is done in our simulation. The correct way is to oversubscribe and allow multiple warps to be executed at the same time (each doing different instructions). The scheduler of each SM can switch between warps at will, always aiming to hide the cost of latency (meaning hopefully one will have more than 64 threads per SM being executed). How many warps can be in flight will vary from simulation to simulation, depending on the amount of resources used (for example, the amount of register file or shared memory, which are types of fast memory available to each SM). An analysis for our CUDA kernels can be found in our previous paper \cite{Correia:2018gew}. The way PRACE converts node-hours to core-hours in Project Access applications is to consider the normal CPU cores (in this case $12$) and the number of SMs as if they were cores (as explained, not $100\%$ correct), which at Piz Daint gives 68 cores per node. This doesn't completely punish GPU simulations (considering 3584 CUDA cores would result in a large number of core-hours) but at the same time, it's not an entirely realistic number. Even using the PRACE definition, there is another source of ambiguity: we do not use all CPU cores of a node, as only one is required to schedule work to the GPU. We could either use the billed core-hours by converting from node-hours to core-hours with a factor of $68$ or define an ``effectively used'' number of core-hours, using a factor of $57$ ($56$ SMs + $1$ CPU core). To end this section we would like to comment on the usage of other metrics to assess application performance in our case.
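The two accounting conventions just described can be condensed into a small helper (a sketch; the helper itself is hypothetical, while the conversion factors follow from the counts given above):

```python
CPU_CORES_PER_NODE = 12  # host CPU cores on a hybrid node
SMS_PER_GPU = 56         # streaming multiprocessors on a Pascal GP100

def core_hours(node_hours, scheme="prace"):
    """Convert billed node-hours to core-hours under either convention."""
    factor = {
        "prace": CPU_CORES_PER_NODE + SMS_PER_GPU,  # 12 + 56 = 68
        "effective": 1 + SMS_PER_GPU,               # 1 used CPU core + 56 SMs
    }[scheme]
    return node_hours * factor
```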
Given that, like most stencil CUDA kernels in the literature with $2.5D$ decomposition, our CUDA kernels end up being memory bound, we do not believe that the amount of floating point operations is a useful metric. It thus follows that we must characterize how memory bound each CUDA kernel is. This was done in the past as seen in \citet{Correia:2018gew} for a different GPU (specifically a QUADRO P5000 at our local cluster facility). Note however that these kernels handled the boundary conditions differently (without including ghost cells as described above). The increase of the sub-domain by two along each direction and the subsequent misaligned access do reduce the memory access bandwidth, but only impact the overall run-time of each kernel slightly (i.e., by $2\%$ in a worst-case scenario). \section{\label{concl}Conclusions and outlook} We have extended our previous GPU-accelerated Abelian-Higgs string evolution code to be able to harness more than one graphical accelerator. To do so we used the Message Passing Interface to handle the necessary boundary terms of a 3D decomposed lattice. Each sub-lattice is evolved on an accelerator with the Compute Unified Device Architecture (CUDA). In this paper we have validated the code by comparing its outcomes to those described in the literature, and also quantified its scalability. To summarize, the validation confirms a previously noticed slight decrease of the rate of change of the mean string separation as the box size is increased, while no such effect is seen in the average string velocity. A detailed study of this effect, and its possible relation to the relevant energy loss mechanisms for the cosmic strings, is left for subsequent work. When it comes to scalability we obtain near-perfect weak scaling up to $4096$ GPUs. Strong scalability is not as good, and from a comparison with the literature \citep{gmd-11-1665-2018} we might expect a processor-only version to have better strong-scaling behaviour.
While this is not a limiting factor for the scientific exploitation of the code, it will be worth exploring techniques to improve this behaviour. One such possibility is the hypercube decomposition of \citet{BlancoPillado:2010sy}, which avoids communication and therefore achieves better scalability. There are other possible improvements we can implement on this simulation. One would be the inclusion of Mesh Refinement techniques \citep{Drew:2019mzc,Helfer:2018qgv} which can be used to probe smaller and smaller sub-horizon scales starting from a coarse lattice and refining as necessary. This would enable probing such small scales without requiring unnecessarily large lattices. Conversely, this also means one can save memory and thus simulate larger comoving sub-domains (albeit with larger spacing) per GPU, enabling larger dynamical ranges. Having a hybrid CPU/GPU simulation in this case would also be an interesting possibility as offloading computations to the GPU would only occur if a sufficiently large amount of threads on a sub-lattice can be used. Another important extension would be the exploration of Fast Fourier Transform capabilities. While one can already use this simulation to obtain observational imprints of string networks by focusing on calibrating semi-analytical modelling (see below for comments on this), it is true that in order to obtain said imprints without modelling one needs to use Fast Fourier Transforms (FFTs) of the energy-momentum tensor (Unequal Time Correlators). The question of the viability of multiple GPUs for FFTs is however a difficult problem, in particular due to the movement of data often necessary to compute lines of distributed FFTs. Fortunately, some progress has been made in the literature \citep{DBLP:journals/corr/GholamiHMB15} and more recently for the pseudo-spectral solver of \citet{10.1145/3295500.3356209}.
In the latter, a careful adaptation of the simulation to the topology of the machine, considerations about on-node and off-node data transfer, and compute-communications overlap enable a favourable speed-up to be obtained. We add that the considerations made in that paper to adapt to the topology of Summit are merely a reflection of the recent trend amongst the top500 \cite{top} for denser nodes (more GPUs and CPU cores per node, fewer total nodes). It is also worthy of note that our code is not Input/Output bound when only outputting the network diagnostic quantities every few timesteps, which is the case in standard (production) runs. This changes when outputting an entire lattice of some quantity (such as the absolute value of the scalar field, or the windings) every few timesteps for visualization or other detailed diagnostic purposes. As such we are currently exploring in-situ visualization techniques, as previously described in \citet{Ayachit:2015:PCE:2828612.2828624}, in order to evade this bottleneck. In conclusion, this new version of our Abelian-Higgs cosmic string simulation can do production runs for the largest box sizes seen in the literature ($4096^3$) in very competitive amounts of node-hours, and indeed do even larger boxes ($8192^3$) in very reasonable amounts of node-hours. Given the excellent weak scalability at $256^3$ per process, which we have demonstrated up to 4096 GPUs, the numbers are even more appealing when expressed in terms of wall clock time: less than $3$ minutes for $4096^3$, and $33.2$ minutes for $8192^3$ simulations, using the said $4096$ GPUs. We are currently leveraging such an advantage by simulating hundreds of networks at differing expansion rates to calibrate the semi-analytical model of string evolution \citep{Martins:1996jp,Martins:2000cs}, appropriately extending it to account for the correct velocity dependencies of energy loss and curvature, as seen in \citet{Rybak1}.
This was done for small boxes in \citet{Correia:2019bdl}, enabling a first quantitative comparison of the relative importance of the energy loss mechanisms of the networks. Given the changes in the asymptotic quantities seen in Table \ref{tab1}, it will be important to re-calibrate the model for each box size we are currently able to simulate. The availability of a large sample of simulations with increased spatial resolution and dynamic range will also enable a detailed study of the amount of small-scale structure of the network itself. Both of these have obvious implications for any rigorous assessment of the observational consequences of cosmic string networks. \section*{Acknowledgements} This work was financed by FEDER---Fundo Europeu de Desenvolvimento Regional funds through the COMPETE 2020---Operational Programme for Competitiveness and Internationalisation (POCI), and by Portuguese funds through FCT - Funda\c c\~ao para a Ci\^encia e a Tecnologia in the framework of the project POCI-01-0145-FEDER-028987. J.R.C. is supported by an FCT fellowship (grant reference SFRH/BD/130445/2017). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro P5000 GPU used for this research. We acknowledge PRACE for awarding us access to Piz Daint at CSCS, Switzerland, through Preparatory Access proposal 2010PA4610 and Project Access proposal 2019204986. Technical support from Jean Favre at CSCS is gratefully acknowledged.
\section{Introduction} \label{sec:intro} Recently, quantum computing (QC) has been gaining increased attention as a potential way to significantly speed up simulations of physical systems \cite{ref:montanaro16}. The focus is usually placed on modeling many-body quantum systems, whose enormous configuration space is often straightforward to map on reasonably sized quantum circuits, at least in principle. But it is also of interest to explore whether QC can be useful for modeling classical systems such as plasmas. In particular, this could benefit fusion science, which heavily relies on simulations. To assess the potential utility of QC for plasma physics, it is important to understand what a quantum computer can and cannot do naturally. A digital quantum computer usually stores information in some $N$ entangled qubits, which are two-level quantum systems. (Sometimes, $d$-level quantum systems, or qudits, are used instead, with $d > 2$.) Due to their entanglement, the total configuration space of the computer is a tensor product of the configuration spaces of individual qubits; {i.e.,\/}\xspace the computer state is described by a $2^N$-dimensional complex vector, $\Psi$. The exponential scaling of $\dim \Psi$ with $N$ can be advantageous in solving large-dimensional problems; however, a quantum computer is naturally fit to perform simulations only of a certain type. A quantum simulation consists of applying a sequence of $M$ linear unitary operations (``gates'') to qubits, which results in linear unitary evolution of $\Psi$. Hence, a program is a circuit, and the practicality of a quantum simulation depends on how large $M$ and $N$ are, as these numbers are constrained by technology and, ultimately, by the computer price. Simulation results are output through classical measurements. With enough measurements, one can calculate the expectation value of any given operator on~$\Psi$ with a pre-defined accuracy, assuming that this operator is efficiently computable.
Such architecture is particularly suitable, among other things (discussed in \Sec{sec:subdiscussion} and further), to simulating processes governed by a linear Schr\"odinger equation \begin{gather}\label{eq:schr} \mathrm{i} \partial_t \psi = \oper{H} \psi, \quad \oper{H}^\dag = \oper{H}, \end{gather} where $\psi$ is a (quantum or classical) state vector that characterizes the physical system and the Hermitian operator $\oper{H}$ serves as a Hamiltonian. This is understood as follows. The solution of \Eq{eq:schr} is $\psi(t) = \oper{U}\psi_0$, where $\psi_0$ is the initial value of~$\psi$, $\oper{U} = \msf{T} \exp[-\mathrm{i} \smash{\int_0^t} \oper{H}(t')\,\mathrm{d} t']$ is a unitary evolution operator, and $\msf{T}\exp(\ldots)$ is an ordered exponential; or simply $\oper{U} = \mathrm{e}^{-\mathrm{i} \oper{H} t}$, if $\oper{H}$ is independent of time. A quantum circuit that implements~$\oper{U}$ can perform a \textit{quantum Hamiltonian simulation} (QHS) to yield $\psi(t)$ for given $\psi_0$. It has been shown that QHSs can be much faster than classical simulations if $\oper{H}$ is efficiently computable and of a certain type, for example, if it is represented by a sparse matrix. (For example, see the pioneering \Ref{ref:lloyd96}, the recent works \cite{ref:childs18, foot:gilyen}, and the many papers cited therein.) This also makes QHSs potentially attractive as elements of more elaborate algorithms for solving general linear equations \cite{ref:harrow09, ref:berry14, ref:childs17}. So far, research in this area has been focused mainly on expanding the class of Hamiltonians for which efficient QHSs are possible in an \textit{ad~hoc} fashion and on solving, basically, random problems, albeit impressively \cite{ref:arute19}. QHS implementations for problems of practical interest are rarely considered, and applicability of the existing methods to practical simulations remains uncertain \cite{ref:scherer17, ref:montanaro16b}. 
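To fix intuition, the object that a QHS outputs is simply the unitarily evolved state $\psi(t) = \mathrm{e}^{-\mathrm{i} \oper{H} t}\psi_0$. For a small time-independent Hermitian $\oper{H}$, this can be emulated classically through the spectral decomposition (a toy sketch for illustration only; the random matrix is a stand-in for a physical Hamiltonian and carries no quantum speed-up):

```python
import numpy as np

# Toy Hermitian "Hamiltonian" (random; a stand-in for a physical operator).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

# psi(t) = exp(-i H t) psi_0, computed from H = V diag(lam) V^dag.
lam, V = np.linalg.eigh(H)

def evolve(psi0, t):
    return V @ (np.exp(-1j * lam * t) * (V.conj().T @ psi0))

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
psi = evolve(psi0, 0.5)  # unitarity of exp(-i H t) preserves the norm
```

On a quantum computer, the analogous unitary is realized by the gate sequence rather than by diagonalization, which is precisely where the potential speed-up resides.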
This is even more the case with applications of QC to non-Hermitian Hamiltonians and to nonlinear problems, which are approached \cite{ref:motta19, ref:candia15, tex:leyton08} in ways that are unlikely to benefit simulations of classical systems like plasmas. In this situation, the conceptual aspects of plasma simulations on a quantum computer need to be developed from scratch. Here, we report a preliminary exploration of the (most obvious) long-term opportunities and likely obstacles for quantum simulations of classical plasmas. Our take on this problem is different from that of the authors who focus on quantum circuits for toy models \cite{ref:engel19, tex:shi20}. Toy models are of interest in quantum many-body physics, where even simple (efficiently mappable to qubits) Hamiltonians can produce dynamics that is both interesting and hard to simulate \cite{ref:georgescu14}. In plasma physics, though, the needs of toy-model simulations are typically satisfied already with classical computing (homogeneous turbulence may be an exception), so QC is of interest primarily for concrete practical applications. Hence, elaborate tricks that work only for special cases may not benefit the field in the long run. QC can become advantageous in plasma applications only if it can handle realistic, non-sparse, and, better yet, nonlinear Hamiltonians. Thus, rather than showing that QC can excel \textit{ad~hoc}, it may be more important to identify \textit{regular} high-level methods for mapping \textit{typical} plasma simulations on a quantum computer. This is the problem that we address below. Because practical plasma simulations are impossible with the minimalistic quantum computers that exist today, the approaches to be discussed are intended for future universal computers with error correction \cite{ref:devitt13}. 
Hopefully, the elementary algorithms of QC that make it promising ({e.g.,\/}\xspace sparse-matrix inversion \cite{ref:harrow09, ref:clader13}) will be made reliable enough by the time when error-corrected machines appear; hence, we are not concerned with low-level building blocks of QC. This approach is justified because our paper is not about quantum algorithms \textit{per~se}; rather, it is about reducing plasma problems to QC problems. Our paper is organized as follows. In \Sec{sec:linear}, we discuss the possibility of linear plasma simulations, particularly related to modeling of radiofrequency (RF) waves. Both conservative and dissipative waves are considered. In \Sec{sec:nonlin}, we outline principles of nonlinear simulations in application to general dynamical systems, Hamiltonian dynamics, and fluid simulations in particular. In \Sec{sec:eigen}, we discuss QC applications to finding linear plasma eigenmodes, for example, magnetohydrodynamic (MHD) modes, using hybrid quantum--classical computing. In \Sec{sec:varia}, we discuss applications of hybrid computing to nonlinear simulations. In \Sec{sec:conc}, we summarize our main results. In \App{app:llg}, we present a supplementary discussion to accentuate some of the basic ideas introduced in the main text. In \App{app:delta}, we elaborate on the definitions of the generalized functions that are used in the main text. \section{Linear dynamics} \label{sec:linear} First, let us discuss the possibility of linear plasma simulations, particularly RF-wave modeling. RF waves are commonly used as precision tools, for example, for plasma heating and current drive in fusion devices. Thus, it is important to be able to simulate these waves with fidelity, which is where QC could, in principle, make a difference. If quantum modeling could be made significantly faster than classical modeling, that would, for example, help increase the spatial resolution of RF-wave simulations.
Then, one would be able to resolve high-order cyclotron resonances and also more robustly calculate mode conversion \cite{book:tracy} when the emergence of electrostatic oscillations with small wavelengths makes fine grids necessary. This could be useful for accurate modeling of waves in the electron-cyclotron, lower-hybrid, and ion-cyclotron frequency ranges \cite{book:stix}. In \Sec{sec:cold}, we show that a broad class of linear RF plasma waves can be mapped to \Eq{eq:schr} with a sparse Hamiltonian. Specifically, they include all waves in cold collisionless static plasmas, which can be arbitrarily inhomogeneous. In \Sec{sec:linearnonherm}, we consider general fluid waves in plasma, which can exhibit instabilities or irreversible dissipation. Such waves are governed by pseudo-Hermitian or non-Hermitian sparse Hamiltonians, and we shall discuss possible approaches to mapping those on a quantum computer. (We shall also return to such waves in \Sec{sec:eigen} in the context of the eigenvalue problem.) In \Sec{sec:kinetic}, we consider kinetic waves. Depending on a problem, the corresponding Hamiltonians can also be Hermitian; however, unlike for fluid waves, they are not sparse. Quantum simulations of such systems are less efficient, but we shall briefly outline some possibilities to deal with this issue. \subsection{Waves in cold collisionless plasmas: sparse Hermitian Hamiltonians} \label{sec:cold} \subsubsection{Introduction} \label{sec:rfintro} It is well known that the equations governing electromagnetic waves in vacuum allow the Schr\"odinger representation \eq{eq:schr} for the ``photon wave function'', which is generally six-dimensional \cite{foot:photon}. Such waves can be modeled using QHS, for which a concrete algorithm has been recently proposed \cite{ref:costa19, ref:suau21}. The Schr\"odinger representation is also known for waves in inhomogeneous media described by real nondispersive dielectric permittivity and magnetic permeability \cite{my:qdiel}. 
For stable waves in nondissipative media with arbitrary dispersion, the Schr\"odinger representation has been proven to exist too \cite{my:wkin}. It can be found from general principles in the small-wavelength limit, for example, in the geometrical-optics and quasioptical approximations \cite{my:quasiop1, my:quasiop2, my:quasiop3}. However, deriving the actual Schr\"odinger representations for exact, or ``full-wave'', linear plasma-wave problems requires a detailed consideration of plasma dynamics. \subsubsection{Basic equations} \label{sec:basiceq} There is at least one plasma model within which an exact Schr\"odinger representation of full-wave dynamics can be formulated explicitly and leads to sparse Hermitian Hamiltonians. This is the linearized model of cold collisionless static plasma, which is often sufficient for RF-wave modeling in practical applications (up to dissipation, which is discussed in \Sec{sec:linearnonherm}). Let us consider this model in detail. Suppose that plasma is formed by some~$\mc{N}$ species with charges~$e_s$, masses~$m_s$, and unperturbed densities $n_{0s} = n_{0s}(\vec{x})$, where~$\vec{x}$ is the spatial coordinate. The linearized equation for the fluid velocity~$\vec{v}_s$ of each species in a wave with electric field~$\vec{E}$~is \begin{gather}\label{eq:vs} \partial_t \vec{v}_s = (e_s/m_s)\,\vec{E} + \vec{v}_s \times \vec{\Omega}_s, \end{gather} where $\vec{\Omega}_s \doteq e_s \vec{B}_0(\vec{x})/(m_s c)$ is the $s$th-species gyrofrequency (the symbol $\doteq$ denotes definitions), $\vec{B}_0$ is the dc magnetic field, and $c$ is the speed of light. Consider a rescaled velocity $\vec{\zeta}_s \doteq \vec{v}_s(4\pi n_{0s} m_s)^{1/2}$, which has the same units as $\vec{E}$. 
Then, \Eq{eq:vs} becomes \begin{gather}\label{eq:zetas} \partial_t \vec{\zeta}_s = \omega_{ps}\,\vec{E} + \vec{\zeta}_s \times \vec{\Omega}_s, \end{gather} where $\omega_{ps} \doteq \smash{e_s (4\pi n_{0s}/m_s)^{1/2}}$ is the signed plasma frequency of species $s$. (This representation is also used in \Refs{ref:friedland88, my:covar, phd:ruiz17} for related calculations.) Let us complement this equation with Amp\`ere's and Faraday's laws, \begin{gather} \textstyle \partial_t \vec{E} = - \sum_s \omega_{ps}\vec{\zeta}_s + c \nabla \times \vec{B}, \\ \partial_t \vec{B} = - c \nabla \times \vec{E}, \label{eq:farad} \end{gather} where $\vec{B}$ is the wave magnetic field. Using the Hermitian matrices \begin{widetext} \begin{gather} \alpha_x = \left( \begin{array}{rrr} 0 & 0 & 0\\[0pt] 0 & 0 & -\mathrm{i}\\[0pt] 0 & \mathrm{i} & 0 \end{array} \right), \quad \alpha_y = \left( \begin{array}{rrr} 0 & 0 & \mathrm{i}\\[0pt] 0 & 0 & 0\\[0pt] -\mathrm{i} & 0 & 0 \end{array} \right), \quad \alpha_z = \left( \begin{array}{rrr} 0 & -\mathrm{i} & 0\\[0pt] \mathrm{i} & 0 & 0\\[0pt] 0 & 0 & 0 \end{array} \right), \end{gather} one can also express the vector products through $\vec{\alpha} \doteq \{\alpha_x, \alpha_y, \alpha_z\}$. [Note that $\alpha_a$ are related to the Gell--Mann matrices, which serve as infinitesimal generators of SU$(3)$.] Specifically, for any three-component column vectors $\vec{\mc{A}}$ and $\vec{\mc{B}}$, one has $\vec{\mc{A}} \times \vec{\mc{B}} = - \mathrm{i}(\vec{\alpha} \cdot \vec{\mc{A}}) \vec{\mc{B}}$, as can be verified by direct calculation.
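The identity $\vec{\mc{A}} \times \vec{\mc{B}} = - \mathrm{i}(\vec{\alpha} \cdot \vec{\mc{A}}) \vec{\mc{B}}$ can also be checked numerically, as in the following sketch (the test vectors are arbitrary):

```python
import numpy as np

# The matrices alpha_x, alpha_y, alpha_z defined in the text.
ax = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
ay = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
az = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
alpha = np.stack([ax, ay, az])

rng = np.random.default_rng(1)
A, B = rng.standard_normal(3), rng.standard_normal(3)

lhs = np.cross(A, B)                               # A x B
rhs = -1j * np.einsum("a,aij,j->i", A, alpha, B)   # -i (alpha . A) B
```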
Then, \Eqs{eq:zetas}--\eq{eq:farad} can be written~as \begin{gather} \mathrm{i} \partial_t \vec{\zeta}_s = \mathrm{i}\omega_{ps}\,\vec{E} - (\vec{\alpha} \cdot \vec{\Omega}_s) \vec{\zeta}_s, \\ \textstyle \mathrm{i} \partial_t \vec{E} = - \mathrm{i}\sum_s\omega_{ps}\vec{\zeta}_s + \mathrm{i} c (\vec{\alpha} \cdot \boper{k}) \vec{B}, \\ \mathrm{i} \partial_t \vec{B} = - \mathrm{i} c (\vec{\alpha} \cdot \boper{k}) \vec{E}, \end{gather} where we have introduced the wavevector operator $\boper{k} \doteq - \mathrm{i}\nabla$. These equations can be represented as a $3(\mc{N} + 2)$-dimensional vector equation of the form \eq{eq:schr} with $\psi = (8\pi)^{-1/2}\{\vec{\zeta}_1, \vec{\zeta}_2, \ldots, \vec{\zeta}_\mc{N}, \vec{E}, \vec{B}\}$ and a time-independent Hermitian Hamiltonian \begin{gather}\label{eq:coldH} \oper{H} = \left( \begin{array}{cccccc} - \vec{\alpha} \cdot \vec{\Omega}_1(\vec{x}) & 0 & \ldots & 0 & \mathrm{i}\omega_{p1}(\vec{x}) & 0\\ 0 & - \vec{\alpha} \cdot \vec{\Omega}_2(\vec{x}) & \ldots & 0 & \mathrm{i}\omega_{p2}(\vec{x}) & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & - \vec{\alpha} \cdot \vec{\Omega}_\mc{N}(\vec{x}) & \mathrm{i}\omega_{p\mc{N}}(\vec{x}) & 0\\ - \mathrm{i}\omega_{p1}(\vec{x}) & - \mathrm{i}\omega_{p2}(\vec{x}) & \ldots & - \mathrm{i}\omega_{p\mc{N}}(\vec{x}) & 0 & \mathrm{i} c \vec{\alpha} \cdot \boper{k}\\ 0 & 0 & \ldots & 0 & -\mathrm{i} c \vec{\alpha} \cdot \boper{k}& 0 \end{array} \right). \end{gather} This $\oper{H}$ is linear in $\boper{k}$, so it is naturally represented by a sparse matrix when mapped to a grid. Thus, efficient QHS of collisionless cold-plasma waves are, in principle, possible using already existing algorithms. Details, including specific algorithms and possible issues with the initial-state preparation, can be found in \Refs{ref:costa19, ref:gourdeau17}, where QHS for similar Hamiltonians have been recently discussed. 
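The sparsity claim is easy to see on a toy grid: since $\oper{H}$ is linear in $\boper{k} = -\mathrm{i}\nabla$, each row of the discretized operator couples only a few neighboring cells. A minimal 1D sketch (central differences on a periodic grid; the grid size and spacing are arbitrary):

```python
import numpy as np

n, dx = 8, 1.0
# k = -i d/dx discretized with central differences on a periodic grid:
K = np.zeros((n, n), dtype=complex)
for j in range(n):
    K[j, (j + 1) % n] = -1j / (2 * dx)
    K[j, (j - 1) % n] = +1j / (2 * dx)
# K is Hermitian and has only two nonzero entries per row, so a
# Hamiltonian linear in k inherits this O(1)-per-row sparsity on the grid.
```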
\subsubsection{Relevant measurements} \label{sec:coldmeas} For waves governed by Hamiltonians \eq{eq:coldH}, the output data of interest can be, say, the energy \begin{gather} \mcc{E} \doteq \int_V \Bigg( \sum_s \frac{n_{0s} m_s v_s^2}{2} + \frac{E^2}{8\pi} + \frac{B^2}{8\pi} \Bigg)\,\mathrm{d}\vec{x} \end{gather} within some finite volume~$V$, which can be expressed as $\mcc{E} = \int_V \psi^\dag(\vec{x}) \psi(\vec{x})\,\mathrm{d}\vec{x}$. Let us introduce the window operator $\oper{\msf{W}} = \msf{W}(\vec{x})$ such that its coordinate representation is a window function $\msf{W}$ defined via $\msf{W}(\vec{x} \in V) = 1$ and $\msf{W}(\vec{x} \notin V) = 0$. Then, we can express the energy as $\mcc{E} = \int \psi^\dag(\vec{x}) \oper{\msf{W}} \psi(\vec{x})\,\mathrm{d}\vec{x}$, where the integral is extended to the whole space. Hence, $\mcc{E}$ is the expectation value of $\oper{\msf{W}}$, \begin{gather} \mcc{E} = \braket{\psi|\oper{\msf{W}}|\psi}, \end{gather} so it can be naturally extracted as an outcome of a quantum simulation. The local energy \textit{density} can be extracted as $\mcc{E}/V$ at $V \to 0$. Other quantities bilinear in $\psi$ can be extracted similarly too, by replacing $\oper{\msf{W}}$ with the appropriate operators. \subsection{General fluid waves: pseudo-Hermitian and non-Hermitian sparse Hamiltonians} \label{sec:linearnonherm} \subsubsection{Introduction} If fluid plasma has inhomogeneous density \textit{and} finite temperature (or average flow velocity), then it has free energy \cite{ref:gardner63, ref:bernstein58, my:restack, ref:helander17} that can drive linear instabilities. Although the corresponding dynamics remains Hamiltonian in the general sense of the word, the ``quantumlike'' Hamiltonian that enters the corresponding Schr\"odinger equation ceases to be Hermitian and becomes \textit{pseudo}-Hermitian instead \cite{ref:larsson91, ref:brizard94, ref:mostafazadeh02}. 
This means that plasma dynamics is governed by \begin{gather}\label{eq:pseudo1} \mathrm{i} \partial_t \psi = \oper{H} \psi, \quad \oper{H}^\dag \oper{\eta} = \oper{\eta}\oper{H}, \end{gather} where $\oper{\eta}$ is some time-independent Hermitian operator. (For example, see \Ref{ref:brizard92} for the absence of Hermiticity in linearized MHD and also \Refs{my:wkeadv, my:shear, ref:qin19} for the absence of Hermiticity in hydrodynamical perturbations in sheared flows.) Unless $\oper{\eta}$ is positively defined, there is no variable transformation that maps \Eq{eq:pseudo1} to \Eq{eq:schr} and \Eq{eq:pseudo1} cannot be solved directly using QHS. The same conclusion applies if plasma is collisional. For example, consider cold electron-ion plasma, possibly with immobile neutrals in the background. Then, the electron and ion velocities satisfy \begin{gather} \partial_t \vec{v}_e = (e_e/m_e)\,\vec{E} + \vec{v}_e \times \vec{\Omega}_e - \nu_{en}\vec{v}_e - \nu_{ei}(\vec{v}_e - \vec{v}_i), \\ \partial_t \vec{v}_i = (e_i/m_i)\,\vec{E} + \vec{v}_i \times \vec{\Omega}_i - \nu_{in}\vec{v}_i - \nu_{ie}(\vec{v}_i - \vec{v}_e), \end{gather} where $\nu_{en}$ and $\nu_{in}$ are the electron and ion rates of collisions with neutrals respectively, $\nu_{ei}$ is the electron--ion collision rate, $\nu_{ie} = (Z m_e/m_i)\nu_{ei}$ is the ion--electron collision rate, and $Z$ is the ion charge state. 
The Hamiltonian that governs $\psi \doteq (8\pi)^{-1/2}\{ \vec{\zeta}_e, \vec{\zeta}_i, \vec{E}, \vec{B}\}$~is \begin{gather}\label{eq:Hu} \oper{H} = \left( \begin{array}{cccc} - \vec{\alpha} \cdot \vec{\Omega}_e - \mathrm{i} (\nu_{en}+\nu_{ei}) & \mathrm{i} \varsigma & \mathrm{i}\omega_{pe} & 0\\ \mathrm{i}\varsigma & - \vec{\alpha} \cdot \vec{\Omega}_i - \mathrm{i} (\nu_{in}+\nu_{ie}) & \mathrm{i}\omega_{pi} & 0\\ - \mathrm{i}\omega_{pe} & - \mathrm{i}\omega_{pi} & 0 & \mathrm{i} c \vec{\alpha} \cdot \boper{k}\\ 0 & 0 & -\mathrm{i} c \vec{\alpha} \cdot \boper{k}& 0 \end{array} \right), \end{gather} \end{widetext} where $\varsigma \doteq \nu_{ei} \sqrt{Z m_e/m_i}$ and $\vec{x}$-dependence of the coefficients is allowed, like in \Eq{eq:coldH}. (The remaining notation is the same as in \Sec{sec:cold}.) For non-Hermitian systems like those governed by \Eqs{eq:pseudo1} and \eq{eq:Hu}, some authors proposed methods close to QHS \cite{ref:motta19, ref:candia15}, but those methods are unlikely to suit plasma simulations. Instead, we propose to follow the idea from \Ref{ref:berry14}, which is as follows. \subsubsection{Initial-value problem} \label{sec:initv} Let us consider time as one of the coordinate variables and introduce the corresponding ``momentum'' (frequency, or energy) operator $\oper{\omega} \doteq \mathrm{i} \partial_t$. Let us also introduce $\oper{\mc{H}} \doteq \oper{\omega} - \oper{H}$. Then, one can rewrite \Eq{eq:schr} as $\oper{\mc{H}}\psi = \xi$, with $\xi(t, \vec{x}) \doteq \mathrm{i}\psi_0(\vec{x})\delta(t)$, where $\delta$ is the Dirac delta function. On a time grid $t = \lbrace t_0, t_1, \ldots \rbrace$, where $t_0 = 0$, this becomes $\oper{\mc{H}}\psi = \Xi$, where $\psi \doteq \lbrace \psi(t_0, \vec{x}), \psi(t_1, \vec{x}), \ldots \rbrace$, \begin{gather}\label{eq:Xin} \Xi_n(\vec{x}) = \mathrm{i}\psi_0(\vec{x})\delta_{n,0}, \end{gather} and $\delta_{a,b}$ is the Kronecker symbol. 
This equation can be represented as \begin{gather}\label{eq:AXY} \oper{\msf{A}} \msf{X} = \msf{Y}, \end{gather} where $\oper{\msf{A}}$ is a Hermitian operator. Specifically, \begin{gather}\label{eq:AAA} \oper{\msf{A}} \doteq \left( \begin{array}{cc} 0 & \oper{\mc{H}}\\ \oper{\mc{H}}^\dag & 0 \end{array} \right), \quad \msf{X} \doteq \left( \begin{array}{c} 0 \\ \psi \end{array} \right), \quad \msf{Y} \doteq \left( \begin{array}{c} \Xi \\ 0 \end{array} \right) . \end{gather} (Dissipation and instabilities are captured within this approach in the structure of the eigenvectors of $\oper{\msf{A}}$. In a way, these eigenvectors can be understood as ``surface modes'' bounded on the time axis to the initial and final moments of time.) If \Eq{eq:AXY} is also discretized in space, then the nontrivial part of $\msf{Y}$ has dimension much less than $\dim\msf{X}$ due to the Kronecker symbol in \Eq{eq:Xin}, so the right-hand side of \Eq{eq:AXY} on a grid can be prepared efficiently. Then, in principle, this equation can be solved efficiently using the known Harrow--Hassidim--Lloyd (HHL) or other quantum algorithms \cite{ref:harrow09, ref:childs17}. Naturally, those algorithms are not a magic wand; for example, they are efficient only for sparse matrices and require that the condition number scales well. A discussion of these problems is beyond the scope of our paper, but see \Refs{ref:clader13, ref:scherer17, ref:montanaro16b}. Let us only point out one issue, which is less technical. The eigenvalues $\lambda$ of the Hermitian operator $\oper{\msf{A}}$ can be related to those of the original non-Hermitian operator $\oper{\mc{H}}$. By definition, \begin{gather}\label{eq:detl} \det \left( \begin{array}{cc} - \lambda \oper{I} & \oper{\mc{H}}\\ \oper{\mc{H}}^\dag & - \lambda \oper{I} \end{array} \right) = 0, \end{gather} where $\oper{I}$ is a unit operator.
Using Schur's determinant identity, one can rewrite \Eq{eq:detl} as follows: \begin{multline} 0 = \det(- \lambda \oper{I})\,\det[- \lambda \oper{I} - \oper{\mc{H}}(- \lambda^{-1} \oper{I})\oper{\mc{H}}^\dag] \\ = \det(\lambda^2 \oper{I} - \oper{\mc{H}}\oper{\mc{H}}^\dag). \end{multline} Hence, $\lambda$ can be found as the (real) eigenvalues of $\smash{\pm (\oper{\mc{H}}\oper{\mc{H}}^\dag)^{1/2}}$. This shows that $\lambda$ may not depend analytically on the parameters of a problem even when $\oper{H}$ does. (In the special case when $\oper{\mc{H}}$ is Hermitian, $\lambda$ are simply the eigenvalues of $\smash{\pm \oper{\mc{H}}}$; then, they are analytic if $\oper{H}$ is analytic~\cite{ref:mengi14}.) To what extent this affects the robustness of the whole scheme is yet to be determined. \subsubsection{Boundary-value problem} \label{sec:boundary} RF-wave simulations in plasma physics are typically concerned with stationary waves, in which case the frequency $\omega$ is constant and $\psi$ is prescribed on some boundary, say, an antenna. Then, instead of solving an initial-value problem, one can solve a boundary-value problem, which is even simpler. In this case, $\oper{\omega} = \omega$ is a constant and \Eq{eq:schr} can be expressed as $\oper{\msf{H}} \psi = 0$, where $\oper{\msf{H}} \doteq \omega - \oper{H}$. Let us assume the decomposition \begin{gather} \oper{\msf{H}} \doteq \left( \begin{array}{cc} \oper{\msf{H}}_{aa} & \oper{\msf{H}}_{ab}\\ \oper{\msf{H}}_{ba} & \oper{\msf{H}}_{bb} \end{array} \right), \quad \psi = \left( \begin{array}{c} a \\ b \end{array} \right), \end{gather} where $b$ is the part of $\psi$ that belongs to the antenna. Then, $a$ is governed by \begin{gather}\label{eq:Hbound} \oper{\mc{H}}a = \Xi, \quad \oper{\mc{H}} \doteq \oper{\msf{H}}_{aa}, \quad \Xi \doteq - \oper{\msf{H}}_{ab}b.
\end{gather} Note that $\dim b$ is proportional, with a small coefficient, to the number of cells representing the plasma surface, while $\dim a$ is roughly the number of cells representing the plasma volume, $\dim a \gg \dim b$. This means that $b$ can be prepared efficiently, and thus so can $\Xi$. (Remember that $\oper{\msf{H}}_{ab}$ is sparse.) Furthermore, \Eq{eq:Hbound} has the same form as the one in the initial-value problem. Thus, in principle, this equation can be solved efficiently using the same method as in \Sec{sec:initv}, with the same reservations. \subsubsection{Relevant measurements} For dissipative linear waves, the result sought in simulations is typically the power $P_{\rm abs} = \int_V \mc{P}_{\rm abs}\,\mathrm{d}\vec{x}$ dissipated in some finite volume~$V$. (If dissipation is mainly resonant, it can be assumed well localized in space, so~$V$ can be small compared to the simulation box.) Most generally, $\mc{P}_{\rm abs}$ can be related to the anti-Hermitian part of the Hamiltonian $\oper{H}$; for example, see \Ref{my:zonal}. However, it is often enough to calculate this power within the geometrical-optics approximation \cite{book:stix}, \begin{gather}\label{eq:Pabs1} \mc{P}_{\rm abs} = \frac{\omega}{4\pi}\, \favr{\vec{E}^\intercal\vec{\epsilon}_A(t, \vec{x}, \omega, \vec{k}) \vec{E}}_t. \end{gather} Here, $\vec{E}^\intercal = \vec{E}^\dag$ is the transposed (real) electric-field vector, $\vec{\epsilon}_A = \vec{\epsilon}_A^\dag \doteq (\vec{\epsilon} - \vec{\epsilon}^\dag)/2\mathrm{i}$, $\vec{\epsilon}$ is the dielectric tensor that slowly depends on $t$ and $\vec{x}$, $\omega$ and $\vec{k}$ are the local frequency and the local wavevector, and $\favr{\ldots }_t$ denotes time averaging over the wave period. 
Within the geometrical-optics approximation, one can replace \Eq{eq:Pabs1} with $\mc{P}_{\rm abs} = \left\langle\vec{E}^\intercal \oper{\msf{P}} \vec{E} \right\rangle_t$, \begin{gather} \oper{\msf{P}} \doteq \frac{1}{8\pi} \left\{ \oper{\omega}\vec{\epsilon}_A(t, \vec{x}, \oper{\omega}, \boper{k}) + \big[\oper{\omega}\vec{\epsilon}_A(t, \vec{x}, \oper{\omega}, \boper{k})\big]^\dag \right\}. \end{gather} Then, the dissipated power can be expressed as the following expectation value: \begin{gather} P_{\rm abs} \propto \braket{\vec{E}| \oper{\msf{P}}\oper{\msf{W}} + \oper{\msf{W}} \oper{\msf{P}}| \vec{E}}, \end{gather} where $\oper{\msf{W}}$ is a window operator. For a boundary problem, $\oper{\msf{W}}$ is the same as in \Sec{sec:coldmeas} (and $\oper{\omega} = \omega$ is a real constant). For an initial-value problem, the window function must be defined in spacetime, and its extent along the time axis must be much larger than the characteristic temporal period $2\pi/\omega$. \subsection{Kinetic waves} \label{sec:kinetic} Now, let us discuss the possibility of quantum simulations of kinetic waves. For simplicity,\footnote{A quantumlike formulation of the \textit{general} linearized Vlasov--Maxwell system is also possible \cite{ref:larsson91} but requires a more complicated definition of the state function, so we do not consider the general case in this preliminary study.} let us limit our discussion to the collisionless kinetic model where the background plasma is homogeneous and isotropic. For spatially monochromatic fields in Maxwellian plasma, this model was previously discussed in \Ref{ref:engel19}, but here, we present it in a somewhat more general form. In particular, we do not restrict the field profile, and our general approach can be readily extended to inhomogeneous nonisotropic plasmas with flows. Let us assume the distribution function of species $s$ in the form $f_s(t, \vec{x}, \vec{p}) = f_{0s}(\vec{p}) + \tilde{f}_s(t, \vec{x}, \vec{p})$.
Here, $f_{0s}$ is the background distribution and $\tilde{f}_s \ll f_{0s}$ is a small perturbation that satisfies the linearized Vlasov equation \begin{multline}\label{eq:tf} \partial_t \tilde{f}_s + \vec{v}_s \cdot \nabla \tilde{f}_s + e_s(\vec{v}_s \times \vec{B}_0/c)\cdot \partial_\vec{p} \tilde{f}_s \\= - e_s(\vec{E} + \vec{v}_s \times \vec{B}/c)\cdot\partial_\vec{p} f_{0s}. \end{multline} Here, $\vec{v}_s \doteq \vec{p}/(\gamma_s m_s)$ and $\gamma_s \doteq (1 + p^2/m_s^2c^2)^{1/2}$ is the Lorentz factor. (We retain relativistic effects because keeping them does not significantly complicate our model.) Let us assume that the background distribution is isotropic, which we express as follows: \begin{gather} f_{0s}(\vec{p}) = F_s(\mc{E}_s(\vec{p})/T_s). \end{gather} Here, $\mc{E}_s =\gamma_s m_s c^2$ is the energy, $T_s > 0$ is some effective temperature or \textit{the} temperature, if the distribution is Maxwellian. Then, $\partial_\vec{p} f_{0s} = -\vec{v}_s F_s'/(m_s T_s)$, so $\vec{v}_s \times \vec{B} \cdot \partial_\vec{p} f_{0s} = 0$, and \Eq{eq:tf} becomes \begin{gather}\label{eq:ttf} \mathrm{i} \partial_t \tilde{f}_s = \oper{h}_s \tilde{f}_s - \mathrm{i} e_s \vec{E} \cdot \vec{v}_s F_s'/T_s. \end{gather} Here, $\oper{h}_s$ is an operator that is Hermitian on the phase space $\msf{z} \doteq (\vec{x}, \vec{p})$ under the Euclidean metric; specifically, \begin{align} \oper{h}_s \tilde{f}_s & \doteq \vec{v}_s \cdot (-\mathrm{i} \nabla) \tilde{f}_s + (\vec{v}_s \times \vec{B}_0/c)\cdot(-\mathrm{i}\partial_\vec{p}) \tilde{f}_s \\ & = -\mathrm{i} \nabla \cdot (\vec{v}_s \tilde{f}_s) -\mathrm{i}\partial_\vec{p} \cdot[ (\vec{v}_s \times \vec{B}_0/c) \tilde{f}_s]. \end{align} In order to make \Eq{eq:ttf} manifestly conservative in conjunction with Ampere's law, consider a rescaled distribution $g_s(t, \msf{z}) \doteq \tilde{f}_s(t, \msf{z})/[\mathrm{i} r_s(\mc{E}_s)]$ with $r_s = \sqrt{|F_s'|/4\pi T_s}$. 
Then, one obtains \begin{gather}\label{eq:Eg1} \textstyle \mathrm{i}\partial_t \vec{E} = \mathrm{i} c (\vec{\alpha} \cdot \boper{k}) \vec{B} + \sum_s \int \mathrm{d}\vec{p}\, R_s \vec{v}_s g_s, \\ \textstyle \mathrm{i}\partial_t g_s = \oper{h}_s g_s + \sigma_s R_s \vec{E} \cdot \vec{v}_s, \label{eq:Eg2} \end{gather} where $R_s \doteq e_s\sqrt{4\pi |F_s'|/T_s}$ and $\sigma_s \doteq \textrm{sign}\,(-F_s')$. Finally, let us discretize the momentum space, so $\int \mathrm{d}\vec{p} \mapsto \sum_{\vec{p}} \smash{(\Delta p)^3}$, and rescale $g_s \mapsto \smash{(\Delta p)^{-3/2}} g_s$ and $R_s \mapsto \smash{(\Delta p)^{-3/2}} R_s$. Then, the resulting model is as follows: \begin{gather} \textstyle \mathrm{i}\partial_t \vec{E} = \mathrm{i} c (\vec{\alpha} \cdot \boper{k}) \vec{B} + \sum_{s,\vec{p}} R_s \vec{v}_s g_s, \label{eq:EBg1} \\ \mathrm{i}\partial_t \vec{B} = -\mathrm{i} c (\vec{\alpha} \cdot \boper{k}) \vec{E}, \label{eq:EBg2} \\ \textstyle \mathrm{i}\partial_t g_s = \oper{h}_s g_s + \sigma_s R_s \vec{E} \cdot \vec{v}_s, \label{eq:EBg3} \end{gather} where we have included Faraday's law for completeness. Equations \eq{eq:EBg1}--\eq{eq:EBg3} form a Schr\"odinger-type equation for the vector field $\psi = (8\pi)^{-1/2}\{\vec{E}, \vec{B}, \vec{g}\}$, where each element of the vector $\vec{g}$ is a field in the $\vec{x}$ space, $g_s(t, \vec{x}, \vec{p})$, in which $s$ and $\vec{p}$ are fixed parameters. Like in the previous sections, relevant quantities of interest in this case are bilinear functionals of $\psi$, or the expectation values of (spatial or phase-space) window operators and other linear operators. Also, the corresponding Hamiltonian can be symbolically expressed as follows: \begin{gather}\label{eq:Hkin} \oper{H} = \left( \begin{array}{ccc} 0 & \mathrm{i} c (\vec{\alpha} \cdot \boper{k}) & R \vec{v} \\ -\mathrm{i} c (\vec{\alpha} \cdot \boper{k}) & 0 & 0\\ \sigma R \vec{v} & 0 & \oper{h} \end{array} \right).
\end{gather} If some $F_s$ are nonmonotonic ($\sigma_s \ne 1$), meaning that the plasma has free energy, this Hamiltonian is pseudo-Hermitian and can support instabilities, as expected. Otherwise ($\sigma_s = 1$), $\oper{H}$ is Hermitian and the corresponding plasma dynamics can, in principle, be modeled using QHS. However, note that $\oper{H}$ is not sparse, so QHS are less efficient for kinetic simulations than for cold-wave simulations. This problem was addressed in \Ref{ref:engel19}. (The model from \Ref{ref:engel19} is obtained from ours as a special case by assuming Maxwellian plasma and $\boper{k} = \vec{k}$.) There, the authors adopted the approach from \Refs{ref:low17, ref:low19, tex:low17, ref:low16}, which formally allows efficient QHS with arbitrary non-sparse Hamiltonians. The recent study \cite{ref:childs18} indicates that this approach may be challenging beyond toy problems\footnote{In this approach, the evolution operator $\exp(-\mathrm{i} \oper{H}t)$ is represented through a series of rotations whose angles are found numerically. The authors of \Ref{ref:childs18} ``were unable to compute [those angles] explicitly except in very small instances''.}, so its practicality remains to be determined; but other approaches may also be possible. Note that the parts of the distribution function $\tilde{f}_s$ corresponding to different velocity elements interact with each other only through the collective electric field rather than directly. This means that \textit{the graph of $\oper{H}$ is a star} (or, more precisely, a star with loops, due to the diagonal terms). Such a special structure potentially allows for efficient QHS \cite{ref:childs09, ref:loke12}, although explicit algorithms for modeling kinetic plasma waves are yet to be developed. \section{Nonlinear dynamics} \label{sec:nonlin} \subsection{Preliminary considerations} \label{sec:prelim} Suppose a generic ODE \begin{gather}\label{eq:uG} \dot{u} = g(t, u).
\end{gather} Here, the dot denotes a derivative with respect to time $t$, $u \equiv u(t, u_0)$ is some vector $\lbrace u^1, u^2, \ldots, u^{d_u}\rbrace$, $u_0$ is a given initial value serving as a parameter, and $g \equiv \lbrace g^1, g^2, \ldots, g^{d_u} \rbrace$ is a vector function that may be nonlinear. (The upper indices denote the vector components and must not be confused with power indices.) In the ``standard'' quantum algorithm for nonlinear ODEs proposed in \Ref{tex:leyton08}, $u$ is encoded in the amplitude of the state function such that $\psi \,\propto\, u$. Suppose a simple nonlinearity, say, $g\, \propto\, u^2$. Then, \Eqs{eq:uG} can be solved on a quantum computer iteratively if there is a subroutine that can generate a state \begin{gather}\label{nl:2} \textstyle \ket{\psi'}= \sum_{ijk} A_{ijk} \psi_j \psi_k \ket{i} \end{gather} from a given state $\ket{\psi}=\sum_i \psi_i\ket{i}$. The nonlinear transformation $\ket{\psi} \mapsto \ket{\psi'}$ cannot be produced with a single copy of $\ket{\psi}$ due to the linear nature of quantum mechanics and the so-called no-cloning theorem \cite{book:nielsen}. Still, it can be produced with a unitary transformation $\exp(-\mathrm{i}\epsilon \oper{H})$ (with $\epsilon \ll 1$) if one has two copies of $\ket{\psi}$ and an additional ancilla qubit $P$ initialized to the state $\ket{0}_P$, \begin{multline} \ket{\psi}\ket{\psi}\ket{0}_P\mapsto\exp(-\mathrm{i}\epsilon\oper{H})\ket{\psi}\ket{\psi}\ket{0}_P \\ \approx \ket{\psi}\ket{\psi}\ket{0}_P + \epsilon \oper{A}\ket{\psi}\ket{\psi}\ket{1}_P. \label{nl:3} \end{multline} Here, the two-state non-Hermitian operator $\oper{A}$ is given~by \begin{gather} \textstyle \oper{A} =\sum_{ijk} A_{ijk}\ket{i,0}\bra{j,k} \label{nl:4} \end{gather} and acts as follows: \begin{gather} \textstyle \oper{A} \ket{\psi}\ket{\psi}= \left(\sum_{ijk} A_{ijk}\psi_j \psi_k \ket{i}\right)\ket{0}.
\label{nl:5} \end{gather} Also, the Hermitian Hamiltonian $\oper{H}$ is constructed to implement a ``von~Neumann measurement operation'', which entangles the desired result with the ancilla qubit $P$ \cite{tex:leyton08}, \begin{gather} \oper{H}=-\mathrm{i} \oper{A} \otimes \ket{1}_P \bra{0}_P + \mathrm{i} \oper{A}^\dag\otimes\ket{0}_P\bra{1}_P. \label{nl:6} \end{gather} Then, measuring the ancilla qubit $P$ in the resulting state~\eq{nl:3} and post-selecting the outcomes with $P$ in the state $\ket{1}_P$ results in the desired state $\ket{\psi'}$ with probability $\sim \epsilon^2$. Alternatively, one can use the amplitude-amplification algorithm \cite{tex:brassard00} that requires $\sim 1/\epsilon$ operations to increase the amplitude of the state with $\ket{1}_P$ to $\sim 1/2$. In either case, at least two copies of the state $\ket{\psi}$ are required at every iteration step, which are then replaced by one copy of $\ket{\psi'}$ by the algorithm. This means that the number of copies of the initial state scales exponentially with the number of steps. Furthermore, this method is effectively restricted to $g$ that are low-order polynomials of $u$. Hence, it is unlikely to be suitable for practical ODE solvers.\footnote{That said, this algorithm is advantageous in that the number of qubits it requires scales logarithmically with the number of degrees of freedom. We shall return to this in \Sec{sec:subdiscussion}.} The alternative is to convert a nonlinear problem \eq{eq:uG} into a linear one. Although some nonlinear equations allow \textit{ad~hoc} variable transformations that make them linear, such special cases are of limited interest in practice. A more reliable approach is to extend the configuration space by introducing sufficiently many auxiliary degrees of freedom. Sometimes, adding a single degree of freedom is already enough (\App{app:llg}), but here we shall focus on methods that are more universal.
In \Sec{sec:ham}, we consider the case of classical Hamiltonian dynamics, and the most general case is considered in \Sec{sec:general}. Yet another, variational, approach to nonlinear simulations, which is based on hybrid quantum--classical computing, will be discussed in \Sec{sec:varia}. \subsection{Classical Hamiltonian systems} \label{sec:ham} Classical Hamiltonian systems can always be made linear via quantization. For example, suppose that \Eqs{eq:uG} have the form \begin{gather}\label{eq:zH} \dot{x}^a = \partial_{p_a} \mc{H}, \quad \dot{p}_a = -\partial_{x^a} \mc{H}, \end{gather} where $\mc{H} = \mc{H}(t, x, p)$ is some scalar function known as the Hamiltonian. This system can be mapped to a linear quantum system \begin{gather}\label{eq:heff} \mathrm{i} \hbar' \partial_t \psi = \oper{\mc{H}} \psi. \end{gather} Here, $\psi$ is some complex scalar field and $\hbar'$ is a fake Planck constant that is introduced arbitrarily such that it be small enough but not necessarily equal (or even comparable) to the true Planck constant. The operator $\oper{\mc{H}}$ can be obtained from $\mc{H}$ by, say, taking the Weyl transform of the latter.\footnote{For example, see \Ref{book:tracy} or the supplemental material in \Ref{my:quasiop1}.} A procedure that is less pleasing aesthetically but still sufficient is to replace $x^a$ with the coordinate operator $\oper{x}^a$ (assuming the coordinate space is Euclidean), replace $p_a$ with the momentum operator $-\mathrm{i} \hbar' \partial_{x^a}$, and then take the Hermitian part of the resulting operator. As long as the effective de Broglie wavelength associated with $\psi$ remains small compared to the characteristic scales of the problem, the dynamics generated by \Eq{eq:heff} will adequately reflect the dynamics of the original classical system, and the classical variables can be found as expectation values of $\psi$. 
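The semiclassical limit is easy to illustrate with a classical emulation. The sketch below integrates \Eq{eq:heff} for the harmonic oscillator $\mc{H} = p^2/2 + x^2/2$ (unit mass and frequency) with a split-step spectral scheme and checks that $\braket{\psi|\oper{x}|\psi}$ follows the classical orbit $x_0 \cos t$ at small $\hbar'$; all grid parameters are illustrative, and the classical integrator merely stands in for an actual quantum simulation:

```python
import numpy as np

hbar = 0.05                                   # fake Planck constant hbar'
N, L = 512, 16.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # spectral grid for -i d/dx
x0, dt, steps = 1.0, 0.001, 3142              # integrate to t ~ pi

# Gaussian wave packet centered at the classical initial condition x0.
psi = np.exp(-(x - x0) ** 2 / (2 * hbar)).astype(np.complex128)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

half_V = np.exp(-1j * 0.5 * x**2 * dt / (2 * hbar))   # exp(-i V dt / 2 hbar')
kin = np.exp(-1j * 0.5 * hbar * k**2 * dt)            # exp(-i p^2 dt / 2 hbar')
for _ in range(steps):                                # Strang splitting
    psi *= half_V
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi *= half_V

t = steps * dt
x_mean = np.sum(x * np.abs(psi) ** 2) * dx
assert abs(x_mean - x0 * np.cos(t)) < 1e-2
```

For this quadratic Hamiltonian the expectation value follows the classical trajectory exactly (Ehrenfest's theorem); for general $\mc{H}$, the agreement holds to the extent that the de~Broglie wavelength stays small.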
For example, let us consider $\mc{H}$ that is the Hamiltonian of a nonrelativistic classical particle interacting with electromagnetic field: \begin{gather} \mc{H}(t, \vec{x}, \vec{p}) = \frac{1}{2m}\left[ \vec{p}-\frac{e}{c}\,\vec{A}(t, \vec{x}) \right]^2 + e\varphi(t, \vec{x}). \end{gather} Here, $m$ and $e$ are the particle mass and charge, $\vec{A}$ is a vector potential, and $\varphi$ is a scalar potential. Then, \begin{gather} \oper{\mc{H}} = \frac{1}{2m}\left[ \oper{\vec{p}}-\frac{e}{c}\,\vec{A}(t, \oper{\vec{x}}) \right]^2 + e\varphi(t, \oper{\vec{x}}), \end{gather} which is Hermitian already as is ({i.e.,\/}\xspace hermitization is not needed in this case). Assuming the Madelung representation $\psi = \sqrt{n}\,\mathrm{e}^{\mathrm{i}\theta/\hbar'}$, where $n$ and $\theta$ are real, one obtains (see, {e.g.,\/}\xspace \Ref{my:qlagr}) \begin{gather} \partial_t n + \nabla \cdot (n \vec{v}) = 0, \label{eq:n} \\ m (\partial_t + \vec{v} \cdot \nabla) \vec{v} = e (\vec{E} + \vec{v} \times \vec{B}/c)- \nabla Q, \label{eq:v} \end{gather} where $\vec{v} \doteq (\nabla \theta - e\vec{A}/c)/m$ is the velocity, $\vec{E} \doteq - (1/c)\partial_t \vec{A} - \nabla \varphi$ and $\vec{B} \doteq \nabla \times \vec{A}$ are the electric and magnetic fields, and $Q$ is the Bohm potential, which is given by \begin{gather} Q = - \frac{\hbar'^2}{2m}\,\frac{\nabla^2 \sqrt{n}}{\sqrt{n}}. \end{gather} At small enough $\hbar'$, the Bohm potential is negligible (assuming that the characteristic spatial scale of $n$ is independent of $\hbar'$), so one obtains a semiclassical model, whose characteristics are exactly \Eqs{eq:zH}. Notably, albeit not surprisingly, \Eqs{eq:n} and \eq{eq:v} are just the classical equations of cold charged fluid with density $n$ and velocity $\vec{v}$. In this sense, our approach allows solving not only discrete Hamilton's equations but nonlinear fluid equations as well. 
The only subtlety is that, by definition, \begin{gather}\label{eq:irr} \nabla \times (m\vec{v} + e\vec{A}/c) = \nabla \times \nabla \theta \equiv 0. \end{gather} (See also \Ref{ref:seliger68}, which elaborates on the related issue in the variational formulation of classical fluid mechanics.) If more general fluids need to be modeled, they can be represented as ensembles of fluids satisfying \Eq{eq:irr}; {i.e.,\/}\xspace multiple functions $\psi$ can be introduced. Also note that alternative approaches to quantum simulations of classical fluids within the Navier--Stokes model were recently discussed in \Refs{tex:budinski21, ref:gaitan20}. \subsection{General approach} \label{sec:general} Now, let us return to the general \Eq{eq:uG}. We shall assume that both $u$ and $g$ are real; otherwise, the real and imaginary parts of $u$ can be treated as independent components of a real vector that satisfies an equation of the form \eq{eq:uG}. Consider\footnote{A similar approach was also proposed in parallel in \Ref{ref:joseph20}. Since the first preprint of our paper was released, related ideas have also been proposed in \Refs{tex:engel20, tex:liu20, tex:lloyd20}.} \begin{gather}\label{eq:Fdelta} F(t, w) = \delta[w-u(t, u_0)] \end{gather} (as a reminder, $\delta$ is the Dirac delta function), which represents the probability distribution in space $w$ that corresponds to the solution $w = u (t,u_0)$ with specific~$u_0$. Then, one obtains \begin{gather} \partial_t F =-\dot{u}^a [\partial_{w^a}\delta (w-u)] =-\partial_{w^a}[g^a(t,w)\delta (w-u)], \nonumber \end{gather} where summation over repeated indices is assumed. This can be viewed as a linear continuity equation for $F$, \begin{gather} \partial_t F(t, w) + \nabla_w \cdot [g(t, w)F(t, w)] = 0. \label{eq:Fcont} \end{gather} Next, let us introduce $\psi \doteq \sqrt{F(t, w)}$. [For simplicity, one can consider $F = \delta(w-u)$ as a sufficiently narrow Gaussian; then $\sqrt{F}$ is defined as usual \cite{ref:craven85}. 
For a general definition and for how to map such objects to a grid, see \App{app:delta}.] This function satisfies \begin{gather} \partial_t\psi =-\frac{1}{2}\,(\nabla_w \cdot g)\psi - g \cdot \nabla_w\psi. \label{eq:psit} \end{gather} To rewrite this in a compact form, let us introduce the coordinate operators $\oper{w}^a$ on the $w$ space and the corresponding momentum operators $\oper{\rho}_a$: \begin{gather} \oper{\rho}_a \doteq -\mathrm{i}\partial_{w^a}, \quad [\oper{w}^a, \oper{\rho}_b] = \mathrm{i} \delta_b^a, \end{gather} where $[\cdot, \cdot]$ is a commutator. Then, $g$ can also be viewed as an operator, $\oper{g}^a\doteq g^a(t, \oper{w})$, which is Hermitian, because $g^a$ is real. Accordingly, the above equation for $\psi$ can be expressed as \begin{gather} \mathrm{i} \partial_t\psi =\frac{1}{2}\, [\oper{\rho}_a, \oper{g}^a]\psi + \oper{g}^a \oper{\rho}_a \psi = \oper{H}\psi, \label{eq:psit2} \end{gather} where $\oper{H}$ is a linear \textit{Hermitian} operator given by \begin{gather}\label{eq:HL} \oper{H}=\frac{1}{2}\, (\oper{\rho}_a \oper{g}^a + \oper{g}^a \oper{\rho}_a). \end{gather} Equation \eq{eq:psit2} has the form of a geometrical-optics wave equation \cite{my:quasiop1}. It is also a Schr\"odinger equation with a sparse Hamiltonian, so it can be solved directly using QHS. Once the solution for $\psi$ has been obtained, the value of $u^a$ at any given $t$, which can be expressed as $u^a(t) = \int \delta (w-u(t, u_0))\,w^a \,\mathrm{d} w$, is readily found as the expectation value of $\oper{w}^a$ on~$\psi$: \begin{gather}\label{eq:psiw} u^a =\int [\psi(t, w)]^2 w^a \mathrm{d} w \equiv \braket{\psi | \oper{w}^a | \psi}, \end{gather} where we have used the fact that $\psi$ is real by definition. Note that mapping the nonlinear problem \eq{eq:uG} to the linear problem \eq{eq:psit2} is exact. (Discretization errors occur when the equations are mapped on a grid, but they are not different from those in classical simulations of linear systems.)
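As a minimal classical illustration of this mapping, the sketch below solves the logistic equation $\dot{u} = u(1-u)$ by evolving $F$ directly via the continuity equation \eq{eq:Fcont} on a grid (equivalent in content to the Schr\"odinger form \eq{eq:psit2}, but simpler to code classically), with a first-order upwind scheme and a narrow Gaussian standing in for $\delta(w - u_0)$; $u(t)$ is then read off as the mean of $F$, in the spirit of \Eq{eq:psiw}. All grid parameters are illustrative:

```python
import numpy as np

n, wmax = 400, 1.0
dw = wmax / n
w = (np.arange(n) + 0.5) * dw
g = w * (1.0 - w)                            # advection velocity, g(w) >= 0 here

u0, dt, steps = 0.1, 0.002, 1000             # integrate to t = 2
F = np.exp(-(w - u0) ** 2 / (2 * 0.02**2))   # narrow surrogate for delta(w - u0)
F /= F.sum() * dw

for _ in range(steps):                       # first-order upwind, CFL = 0.2
    flux = g * F
    F -= (dt / dw) * (flux - np.roll(flux, 1))
    F[0] = 0.0                               # inflow boundary at w = 0

u_num = (w * F).sum() / F.sum()              # u(t) = <w>, cf. eq. (psiw)
u_exact = 1.0 / (1.0 + (1.0 / u0 - 1.0) * np.exp(-dt * steps))
assert abs(u_num - u_exact) < 0.05
```

The residual error here is set by the finite width of the surrogate delta function and by the numerical diffusion of the upwind scheme, not by the mapping itself, which is exact.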
A disadvantage of this approach is that simulating the dynamics in the $w$ space is computationally expensive; it requires a grid whose number of cells scales as $\smash{N_w \sim n_u^{d_u}}$, where $n_u$ is the number of cells on the $u^a$ axis (assuming for simplicity that $n_u$ is the same for all~$a$). Since the required number of qubits scales logarithmically with $N_w$, it thereby scales linearly with $d_u$. This imposes limitations on how many degrees of freedom $d_u$ can be handled in practice. For example, this may not be a practical approach for solving partial differential equations, because they correspond to large $d_u$ when mapped on a grid. However, this approach is advantageous compared to the one described in \Sec{sec:prelim} in that its requirements on the computational resources do not grow exponentially with time and no intermediate measurements are involved. Also note that the same method can be used at no extra cost to model the evolution of $u$ averaged over any given initial distribution $f_0(u_0)$. The only difference in this case is that instead of \Eq{eq:Fdelta}, $F$ is defined as follows: \begin{gather}\label{eq:F0} F(t, w) = \int \delta[w - u(t, u_0)] f_0(u_0)\,\mathrm{d} u_0. \end{gather} It may appear surprising that such linear superposition of solutions corresponding to different $u_0$ maps to a linear equation \eq{eq:psit2} even though $\psi \doteq \sqrt{F}$ depends on $F$ nonlinearly. But this is understood if one considers the problem on a grid. In this case, the continuous distribution $F$ splits into a sum of delta distributions, $F = \sum_n F_n$, and $F_n F_m \equiv 0$ for all $n \ne m$, because trajectories do not intersect. Thus, $\psi \doteq \sqrt{F}$ maps to the sum $\psi = \sum_n \psi_n$, where $\psi_n \doteq \sqrt{F_n}$ evolve independently, each with its own~$u_0$. 
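For orientation, the qubit count implied by the scaling above can be spelled out explicitly (the numbers below are purely illustrative):

```python
import math

# N_w ~ n_u**d_u grid cells; the qubit count ~ log2(N_w) = d_u * log2(n_u),
# i.e., logarithmic in the per-axis resolution but linear in d_u.
n_u = 1024                                # cells per axis (illustrative)
for d_u in (1, 3, 6):
    qubits = d_u * math.ceil(math.log2(n_u))
    assert qubits == 10 * d_u
```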
In the case of Hamiltonian dynamics, when ${\nabla_w \cdot g} = 0$, \Eq{eq:Fcont} becomes the Liouville equation, with $w$ being the phase-space coordinate, and coincides with the Schr\"odinger equation for $\psi$. (The fact that the Liouville equation can be viewed as a Schr\"odinger equation has long been known; for example, see \Ref{my:wkin} and references therein.) In this case, the method described here can be viewed as a phase-space reformulation of the method described in \Sec{sec:ham}. Although the dimension of $u$ is twice as large as the dimension of $x$ ($d_u = 2d_x$), the de~Broglie wavelength does not need to be resolved, so both approaches require about the same number of cells in the corresponding $w$~spaces. In a given application, the first (\Sec{sec:ham}) or the second (\Sec{sec:general}) approach may be advantageous depending, for example, on a specific Hamiltonian. \subsection{Stochastic differential equations} Another interesting class of problems comprises those where the right-hand side of an ODE contains a stochastic term~$f^\alpha$: \begin{gather}\label{eq:Ed0} \dot{u}^\alpha=g^\alpha(t,u)+f^\alpha(t). \end{gather} Let us assume that $f^\alpha$ has Gaussian statistics with \begin{gather}\label{eq:Ed1} \favr{f^\alpha(t)} = 0, \quad \favr{f^\alpha(t) f^\beta(t')} = A^{\alpha\beta}\delta(t-t'). \end{gather} Then, the corresponding equation \eq{eq:Fcont} for $F$ acquires an additional term: \begin{gather} \partial_t F =-\partial_{w^\alpha}[g^\alpha(t,w)F]-\partial_{w^\alpha}\favr{f^\alpha\delta(w-u)}.
\label{eq:Ed2} \end{gather} Using the Novikov formula for Gaussian noise \cite{ref:novikov65, book:mccomb}, \begin{gather} \favr{f^\alpha(t) J[f]} = A^{\alpha\beta} \left\langle \frac{\delta J[f]}{\delta f^\beta(t)} \right\rangle, \label{eq:Ed3} \end{gather} one can express the last term in \Eq{eq:Ed2} as \begin{gather} \favr{f^\alpha(t)\delta(w-u)}=-A^{\alpha\beta}\partial_{w^\gamma} \left\langle \frac{\delta u^\gamma(t)}{\delta f^\beta(t)}\,\delta(w-u)\right\rangle. \label{eq:Ed4} \end{gather} It follows from \Eq{eq:Ed0} that $\delta u^\alpha(t)/\delta f^\beta(t)=\delta^\alpha_\beta/2$ and therefore \Eq{eq:Ed2} for $F$ becomes an equation of the Fokker--Planck form, \begin{gather} \partial_t F + \partial_{w^\alpha}[g^\alpha(t,w)F]=\frac{1}{2}\,\partial_{w^\alpha}(A^{\alpha\beta}\partial_{w^\beta}F). \label{eq:Ed5} \end{gather} Unlike \Eq{eq:Fcont}, this equation does not allow a simple Schr\"odinger representation. However, since \Eq{eq:Ed5} is linear, it can be solved using the general methods described in \Sec{sec:linearnonherm}. This can be used, for example, for studying homogeneous Navier--Stokes turbulence \cite{book:mccomb, ref:edwards64}. \subsection{Discussion} \label{sec:subdiscussion} To recap the above findings, the most interesting and relevant plasma problems are not immediately suited for the traditional QC architecture, which is a fit mainly for linear Schr\"odinger equations with Hermitian Hamiltonians. In order to map plasma problems to this architecture, it appears necessary to extend the configuration space~$\mc{C}$ (which is also how it is done in the recent \Refs{ref:joseph20, tex:engel20, tex:liu20, tex:lloyd20}). Handling non-Hermiticity requires that the system size be only doubled (\Sec{sec:linearnonherm}), which is tolerable; however, nonlinearity presents a bigger challenge. Here, we have proposed a universal approach that allows for an arbitrary nonlinearity and dissipation.
The idea is to encode the information about a dynamical system into a state vector that determines the probability that the system is in a given part of $\mc{C}$ (\Sec{sec:general}). The dynamics of this vector is linear and unitary, so it can be naturally mapped to the QC architecture. Simulations for multiple initial conditions can be performed in parallel; this can be beneficial, for example, in optimization problems, where multiple initial guesses need to be processed for finding the global minimum. The required computational resources, or the number of qubits $N$, scale in our approach logarithmically with the required resolution and linearly with the number of degrees of freedom. That makes our approach particularly attractive for nonlinear-ODE solvers, where the number of degrees of freedom is not too large and the corresponding quantum Hamiltonians are sparse. Then, the corresponding run time scales linearly with~$N$. This scaling is fundamentally different from that in the commonly cited \Ref{tex:leyton08}, where the dynamical variable $u$ is encoded in the amplitude of the state function directly rather than through the probability amplitude. As a result, the run time and the required number of qubits in \Ref{tex:leyton08} scale logarithmically with the number of degrees of freedom but exponentially with the number of steps (\Sec{sec:prelim}). Also notably, an algorithm similar to that in \Ref{tex:leyton08} has been proposed recently for quantum optimization of polynomial functionals and exhibits similar scalings \cite{tex:rebentrost18}. The exponential scaling appears unavoidable for all algorithms of this type; hence, they are practically applicable only when the required number of steps is small. This, perhaps, rules them out as ODE solvers for plasma simulations. As a side note, though, such algorithms might be suitable for solving optimization problems in plasma physics. This is seen from the following example.
Let us consider the problem of magnetic-field optimization for the recently proposed permanent-magnet stellarator~\cite{ref:helander20}. The problem consists of finding the locations $\vec{x}_i$ and the dipole moments $\vec{m}_i$ of permanent magnets (subject to the engineering constraints) that produce a certain ``target'' field $\vec{B}_T$ within a prescribed volume. To uniquely specify such a field, it is sufficient to specify the normal component of $\vec{B}_T$ on the volume boundary $\mc{S}$. Then, the problem can be reduced to minimizing the objective function $\mc{I}(\vec{y}) \doteq \int_{\mc{S}}[({\bf B}-{\bf B}_T) \cdot {\bf n}]^2 \mathrm{d} \vec{s}$, where $\vec{B}$ is the actual field produced by the magnets, $\vec{n}$ is the unit vector field normal to $\mc{S}$, and $\vec{y} \doteq \{\vec{x}_i, \vec{m}_i\}$ is the array of all independent variables. The magnetic field $\vec{B}$ can be approximated with a polynomial function of $\vec{y}$, which makes $\mc{I}(\vec{y})$ a nonlinear polynomial as well. Then, the standard approach to optimizing~$\mc{I}$ is to reduce the set of free parameters~$\vec{y}$ to some smaller set~$\vec{z}$ that has the biggest impact on plasma performance; however, doing so limits the degree of optimization. A quantum algorithm potentially can do better, since it can handle many more degrees of freedom, perhaps, even the actual $\vec{y}$. The exponential scaling with the number of steps, which is the main bottleneck of the algorithm in \Ref{tex:rebentrost18}, is not a problem here, because the anticipated number of the iteration steps is not large (assuming a good initial guess is available). Therefore, by using the algorithm from \Ref{tex:rebentrost18}, one might be able to find a better-optimized field configuration and thus improve plasma performance.
Such problems emerge naturally, for example, in the context of MHD stability of fusion devices. As is commonly known, eigenmodes of a static plasma governed by ideal MHD satisfy \cite{book:friedberg} \begin{gather}\label{eq:mhd} -\omega^2\rho \, \xi^a = \oper{\mc{F}}^a{}_b\xi^b, \end{gather} where $\xi$ is a vector field that characterizes plasma displacement from a given equilibrium, $\rho$ is the equilibrium density, and $\oper{\mc{F}}$ is a linear operator that is Hermitian under the inner product $\braket{\xi|\eta} = \int \xi_a^*\eta^a\,\mathrm{d}\vec{x}$; accordingly, all $\omega^2$ are real, while $\omega$ can be real or imaginary. Equation \eq{eq:mhd} can be rewritten as follows: \begin{gather}\label{eq:mhd2} \oper{H}\psi = \lambda \psi, \quad \psi \doteq \rho^{1/2}\xi, \quad \oper{H} \doteq \rho^{-1/2} \oper{\mc{F}} \rho^{-1/2}, \end{gather} and $\lambda \doteq -\omega^2$ are real. Since $\oper{\mc{F}}$ is Hermitian, so is $\oper{H}$. Then, \Eq{eq:mhd2} belongs to the class of problems that are amenable to known efficient quantum algorithms.\footnote{Notably, there also exist quantum algorithms for calculating (complex) eigenvalues of non-Hermitian operators \cite{ref:daskin13}. However, these algorithms are considerably less efficient.} One of them is the earliest quantum eigensolver \cite{ref:abrams99}, which is related to the HHL algorithm mentioned earlier. Another option is a hybrid quantum--classical method \cite{ref:peruzzo14, ref:mcclean16}, which can be efficient provided that: (i) $\oper{H}$ can be split into a polynomial sum of few-qubit operators, $\oper{H} = \sum_n \oper{H}_n$, and (ii) one can prepare ``ansatz'' quantum states on demand that cover the relevant part of Hilbert space with a given finite list of classical parameters. This hybrid method is briefly described as follows. First, one calculates the ``ground state'', which corresponds to the smallest eigenvalue $\lambda$.
To do that, one starts by preparing an ansatz state $\Psi$ with some trial parameters and calculates $H_n \doteq \braket{\Psi|\oper{H}_n|\Psi}$ on a quantum computer. Then, one feeds the results into a classical computer. The latter calculates $H \doteq \braket{\Psi|\oper{H}|\Psi}$ by summing up $H_n$ and then applies an iterative classical algorithm to adjust the parameters of the ansatz state such that $H$ be minimized. The resulting eigenstate is termed $\Psi_0$, and the corresponding eigenvalue is found as $\lambda_0 = \braket{\Psi_0|\oper{H}|\Psi_0}$. Next, one similarly minimizes $H$ in the subspace of vectors orthogonal to $\Psi_0$ and obtains the next eigenstate $\Psi_1$ and the corresponding eigenvalue $\lambda_1 = \braket{\Psi_1|\oper{H}|\Psi_1}$, and so on. Alternatively, the eigenvalues $\lambda$ of $\smash{\oper{H}}$ can be found as the local minima of the functional $\smash{\braket{\Psi| (\oper{H}-\lambda \oper{I})^2|\Psi}}$. This algorithm allows one to find both real and imaginary eigenfrequencies $\omega = \sqrt{-\lambda}$ and thus explore plasma stability within ideal MHD. The quantum computer is used as a co-processor whose role is to efficiently calculate the matrix elements $H_n$, in which it can significantly outperform a classical computer \cite{ref:peruzzo14, ref:mcclean16}. Also note that the hybrid method imposes less strict requirements on the hardware. Each quantum calculation evaluates only a single matrix element, so the coherence time can be much smaller than that needed for solving the whole problem solely on a quantum computer. \section{Variational approach to nonlinear simulations} \label{sec:varia} The hybrid quantum--classical variational approach can also be used for general simulations, including simulations of dissipative and nonlinear systems, as proposed in \Ref{ref:lubasch20}. 
Like in the previous case, one works with an ansatz quantum state $\phi(\theta)$ that is prepared on demand for a given finite list of classical parameters $\theta$. Suppose that at some time~$t_n$, one has $\phi(\theta_n) \approx \psi(t_n)$, which is an approximation to a true solution $\psi(t_n)$ of a system of the general type $\partial_t \psi=\oper{O}(t)\psi$ at time $t_n$. At the next time step $t_{n+1} = t_n + \tau$, the true solution is given by \begin{gather} \psi(t_{n+1}) \approx \big[\,\oper{I} + \tau \oper{O}(t_n)\big]\psi(t_n) \approx \big[\,\oper{I} + \tau \oper{O}(t_n)\big]\phi(\theta_n). \end{gather} For linear systems, the operator $\oper{O}$ is prescribed and thus known at all times. For systems with polynomial nonlinearity, all nonlinear terms in $\oper{O}(t_n)\phi(\theta_n)$ can be evaluated using the projection method of \Ref{tex:leyton08} (see \Sec{sec:prelim}), since multiple copies of $\phi(\theta_n)$ can be constructed in parallel at any given time without restarting the simulation. Therefore, one can use a quantum computer to efficiently evaluate the ``cost function'' \begin{multline} C(\theta_{n+1}) \doteq ||\phi(\theta_{n+1})-\psi(t_{n+1})||^2 \\ \approx ||\phi(\theta_{n+1}) - \big[\,\oper{I} +\tau \oper{O}(t_n)\big]\phi(\theta_n)\,||^2 \end{multline} for any $\theta_{n+1}$. Then, one can efficiently find $\theta_{n+1}$ that minimizes the quantity $C(\theta_{n+1})$ using a classical computer. This amounts to finding $\phi(\theta_{n+1})$ that is maximally close to the true solution $\psi(t_{n+1})$; in other words, the system is integrated from $t_n$ to $t_{n+1}$. This process can be iterated from the initial moment of time, when $\psi$ is given, for any number of steps. Much like in \Sec{sec:eigen}, the role of the quantum computer here is limited to evaluating the cost function, while the optimization is done using a classical computer, which makes the scheme hybrid. 
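As a purely classical illustration of this iteration (the two-state system, the rotation generator, the one-parameter ansatz, and the grid-search ``optimizer'' are all toy choices made here, not part of \Ref{ref:lubasch20}), the update and cost minimization can be sketched as:

```python
import numpy as np

# Toy classical emulation of the variational time stepper: dynamics
# d(psi)/dt = O psi with a rotation generator O, and a one-parameter
# unit-norm ansatz phi(theta) that happens to contain the exact solution.
O = np.array([[0.0, -1.0],
              [1.0,  0.0]])              # psi(t) = (cos t, sin t) for psi(0) = (1, 0)
tau, n_steps = 1e-3, 1000                # integrate to t = 1

def phi(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def cost(theta_next, target):
    # C(theta_{n+1}) = || phi(theta_{n+1}) - (I + tau O) phi(theta_n) ||^2
    return np.sum((phi(theta_next) - target) ** 2)

theta = 0.0
for _ in range(n_steps):
    target = (np.eye(2) + tau * O) @ phi(theta)
    # stand-in for the classical optimizer: local grid search around theta
    grid = theta + np.linspace(-0.01, 0.01, 201)
    theta = grid[np.argmin([cost(g, target) for g in grid])]

print(phi(theta))                        # close to (cos 1, sin 1)
```

In a genuine hybrid run, the cost evaluation inside the loop would be delegated to the quantum processor, while the parameter search would remain classical, exactly as in the eigenmode calculation of the previous section.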
The potential disadvantage of this method is that the simulation accuracy strongly depends on how closely the ansatz $\phi$ can approximate the true solution $\psi$. However, using an ansatz also has important advantages. Since each $\phi(\theta_n)$ is constructed independently for given $\theta_n$, such an algorithm does not require exponentially many copies of $\phi$, unlike the method in \Ref{tex:leyton08}. Also, the ansatz-based method does not require extension of the configuration space assumed in \Sec{sec:nonlin}. This can be useful for solving nonlinear partial differential equations, whose configuration space on a grid is large. For example, \Ref{ref:lubasch20} describes the application of the hybrid variational algorithm to solving a nonlinear Schr\"odinger equation, which is a common model in the theory of nonlinear plasma waves. \section{Conclusions} \label{sec:conc} Unlike quantum-mechanical systems that, in principle, can be mapped to the QC architecture more or less straightforwardly, modeling classical systems with quantum computers is challenging even at the conceptual level. Here, we report a preliminary exploration of the long-term opportunities and likely obstacles in this area. First, we show that many plasma-wave problems are naturally representable in a quantumlike form and thus are a natural fit for quantum computers. Second, we consider more general plasma problems that include non-Hermitian dynamics (instabilities, irreversible dissipation) and nonlinearities. We show that by extending the configuration space, such systems can also be represented in a quantumlike form and thus can be simulated with quantum computers too, albeit this requires more computational resources compared to the first case. Third, we outline potential applications of hybrid quantum--classical computers, which include analysis of global eigenmodes and also an alternative approach to nonlinear simulations. The work was supported by the U.S. 
DOE through Contract No.~DE-AC02-09CH11466. The authors also thank Stuart Hudson for valuable input.
\section{Introduction} Estimating the average treatment effect (ATE) is an important issue in many fields, including social science and health science; see \cite{pspara}. There are two basic methodologies: inverse propensity score-based estimation (PS) and outcome regression-based estimation (OR). However, the former requires a correctly postulated propensity model and the latter a correctly postulated regression model. To guard against such misspecification, doubly robust (DR) estimation, a very promising method, has been studied extensively and is by now a well-developed field; see \cite{dr} and \cite{comment}. The most commonly used approach is parametric modeling; see \cite{drpara+pspara} for example. As long as one of the two models in DR is correctly specified, the estimation is consistent. Alternatively, to avoid model misspecification, nonparametric modeling has also been applied; see \cite{psnonpara}. Later, a compromise between parametric and nonparametric estimation arose in the context of missing data, giving semiparametric estimation; see \cite{semidr} for a relevant reference. In this paper, we investigate the estimation efficiencies of all possible combinations of PS and OR estimators obtained by estimating the $PS$ and $OR$ models parametrically, semiparametrically or nonparametrically. As such, the research described here is not primarily a methodological development; rather, it offers insights into which combinations are good choices when the models are correctly specified and when they are not. To this end, we derive the asymptotic distributions of these estimators and compare their asymptotic variances with the semiparametric efficiency bound in \cite{efficiencybound}. In particular, the findings about estimation under misspecified models are new and interesting. We consider both locally misspecified and globally misspecified scenarios. 
Here, local misspecification means that the misspecified model differs from the correctly specified model only at a rate converging to zero as the sample size $n \to \infty$, and global misspecification means that the model does not converge to a correctly specified model. We mainly discuss local misspecification for parametric models, as they are the most popular in practice. The details can be found in Section~2. The main findings are as follows. \begin{itemize} \item When both the $PS$ and $OR$ models are correctly specified, all combinations of PS and OR estimators share the same asymptotic efficiency. This is expected and coincides with existing studies for some of the combinations in the literature. \item When the $OR$ model is globally misspecified parametrically or semiparametrically while the $PS$ model is not, the consistency of any combination is unaffected, but the asymptotic variance is in general enlarged except when a nonparametric $PS$ model is applied. In other words, nonparametrically estimating the $PS$ model helps improve the estimation efficiency. Under local misspecification, the asymptotic efficiency can be achieved. \item In contrast, when the $PS$ model is globally misspecified parametrically or semiparametrically while the $OR$ model is not, we cannot reach a definitive conclusion about whether the asymptotic variance is worsened compared with the semiparametric efficiency bound, though consistency is still guaranteed. In some cases there is even a ``super-efficiency phenomenon'' in which the variance can be smaller than the bound. We give an example of this phenomenon in Section~3. Again, when the $OR$ model is estimated nonparametrically and a misspecified $PS$ model is used, the asymptotic efficiency still holds; as previously mentioned, nonparametrically estimating the $OR$ model can improve the estimation efficiency. Again, under local misspecification the asymptotic efficiency can be achieved. 
\item From the above, we can see that nonparametric estimation does help improve the asymptotic efficiency. However, this does not mean that it is always recommended, particularly in high-dimensional scenarios, because the tuning parameter in nonparametric estimation becomes very difficult to choose and this clearly causes estimation inefficiency. To reduce the impact of misspecification, semiparametric models, particularly those with dimension reduction structure, could be a good choice. \end{itemize} All findings are summarised in the following table, in which the black cells indicate combinations that are not considered. \begin{figure}[htbp] \centering \includegraphics[width=14cm]{ATE-summary.png} \end{figure} The rest of the paper is organized as follows. In Section 2, we first introduce the counterfactual framework to define the average treatment effect and formalize notation, and then discuss doubly robust estimators and possible estimation methods for $PS$ and $OR$; we also introduce the concept of local misspecification. In Section 3 we present the asymptotic properties of doubly robust estimators under various scenarios, and compare our conclusions with the existing literature. Section 4 includes simulation studies and Section 5 summarizes the main conclusions of this article. Technical proofs are given in the Appendix. \section{Doubly Robust Estimation} \subsection{Notation and setup} Let $D$ be an indicator of observed treatment status ($D=1$ if treated, $D=0$ if untreated) and $X$ be a $p$-dimensional vector of covariates not affected by the treatment status, with $p\geq2$. Let $\mathcal{X}$ be the support of $X$. We adopt the counterfactual outcome framework (see \cite{framework1} and \cite{framework2}) here to estimate the average treatment effect. Each individual is assumed to have potential outcomes: $Y(1)$, if the subject has received treatment, and $Y(0)$, if the subject has not received treatment. 
Let $Y$ be the observed outcome given by $(1-D)Y(0)+DY(1)$. In practice, we observe either $Y(1)$ or $Y(0)$, but not both, for each individual in the sample, so it is impossible to observe the average treatment effect directly. The goal is to estimate the average treatment effect defined as $$\Delta = \mathbb{E}[Y(1)]- \mathbb{E}[Y(0)]=\theta_1 - \theta_0.$$ \noindent We further make the following assumption throughout this paper. \begin{assumption} \label{uncounfounded} (Unconfoundedness) We assume that $D$ and $(Y(1), Y(0))$ are conditionally independent given $X$. \end{assumption} \noindent As mentioned previously, the prototypical doubly robust estimator proposed by \cite{dr} incorporates the information in both the $PS$ and $OR$ models so that it remains consistent even if one of the $PS$ or $OR$ models is misspecified. There are different choices of $PS$ and $OR$ models, including parametric, nonparametric and semiparametric models. In the next section, doubly robust estimators under different model combinations will be presented. \subsection{Estimation procedures and further assumptions} Define respectively the true $PS$ and $OR$ models as $P(D=1|X)=p(X)$, and $ \mathbb{E}[Y|X,D=1]= \mathbb{E}[Y(1)|X]=m_1(X)$ and $ \mathbb{E}[Y|X,D=0]= \mathbb{E}[Y(0)|X]=m_0(X)$. Then the average treatment effect can be identified by $$\Delta=\theta_1-\theta_0= \mathbb{E}\left[\frac{DY}{p(X)}+\left(1-\frac{D}{p(X)}\right)m_1(X)-\frac{(1-D)Y}{1-p(X)}-\left(1-\frac{1-D}{1-p(X)}\right)m_0(X)\right].$$ Let $\big\{x_i,d_i,y_i\big\}^{n}_{i=1}$ be an independent random sample of size $n$ from the joint distribution of $(X,D,Y)$. Note that $x_i$ is a $p$-dimensional vector of covariates, $d_i$ is the binary indicator of treatment status and $y_i$ is the response of the $i$-th individual. 
From \cite{dr}, the doubly robust estimator is defined as \begin{equation*} \begin{split} \hat{\Delta}&=n^{-1}\sum_{i=1}^{n}\left[\frac{d_iy_i}{\hat{p}(x_i)}+\left(1-\frac{d_i}{\hat{p}(x_i)}\right)\hat{m}_1(x_i)\right]-n^{-1}\sum_{i=1}^{n}\left[\frac{(1-d_i)y_i}{1-\hat{p}(x_i)}+\left(1-\frac{1-d_i}{1-\hat{p}(x_i)}\right)\hat{m}_0(x_i)\right]\\ &=\hat{\theta}_1 - \hat{\theta}_0, \end{split} \end{equation*} \noindent where $\hat{p}(x)$ is an estimated propensity score, and $\hat{m}_1(x)$ and $\hat{m}_0(x)$ are estimated outcome regression models, which have different formulas under different model structures. As all combinations discussed in this paper converge to some quantity $\Delta^*$, we write $\hat{\Delta} \to \Delta^*$ in probability as $n\to\infty$. Note that $\Delta=\Delta^*$ when either the $PS$ model or the $OR$ model (but not necessarily both) is correctly specified, due to the double robustness property. Firstly, when parametric models are considered, without loss of generality, we assume a logistic regression model $\Tilde{p}(x;\beta)=\frac{\exp(x^T\beta )}{1+\exp(x^T\beta)}$ with true parameter $\beta_0$ as the $PS$ model and linear regression models $\Tilde{m}_1(x;\gamma_1)=x^T\gamma_1$ and $\Tilde{m}_0(x;\gamma_0)=x^T\gamma_0$ with true parameters $\gamma_{1,0}$ and $\gamma_{0,0}$ as the $OR$ models. Maximum likelihood estimation (MLE) is used to estimate the unknown parameters. Denote the estimators respectively as $\hat{\beta}$, $\hat{\gamma}_1$ and $\hat{\gamma}_0$. We further make the following assumptions on these proposed models. \begin{assumption} \label{psassumption} Let $\Theta_{\beta} \subset \mathbb{R}^p$ be the parameter space for $\beta$ which is open and convex. We assume that the proposed propensity score model $\Tilde{p}(x;\beta): \mathbb{R}^p \to \mathbb{R}$ is differentiable with respect to $\beta$. Further, we assume that $\Tilde{p}(x;\beta)$ is bounded away from 0 and 1 for any $\beta \in \Theta_{\beta}$. 
\end{assumption} \begin{assumption} \label{orassumption} Let $\Theta_{\gamma_0}\subset \mathbb{R}^p$ and $\Theta_{\gamma_1}\subset \mathbb{R}^p$ be the parameter space for $\gamma_0$ and $\gamma_1$ respectively which are open and convex. We assume that the proposed outcome regression model $\Tilde{m}_j(x;\gamma_j): \mathbb{R}^p \to \mathbb{R}$ is differentiable with respect to $\gamma_j$, $j=0,1$. \end{assumption} \noindent According to \cite{mle1}, when models are correctly specified, we have $\sqrt{n}(\hat{\beta}-\beta_0)\xrightarrow[]{d}N(0,I^{-1}(\beta_0))$, $\sqrt{n}(\hat{\gamma}_1-\gamma_{1,0})\xrightarrow[]{d}N(0,I^{-1}(\gamma_{1,0}))$ and $\sqrt{n}(\hat{\gamma}_0-\gamma_{0,0})\xrightarrow[]{d}N(0,I^{-1}(\gamma_{0,0}))$, where $I(\beta_0)$, $I(\gamma_{1,0})$ and $I(\gamma_{0,0})$ are the Fisher information matrices. When models are misspecified, the convergence of MLE can also be obtained. See \cite{mle2}. We have $\sqrt{n}(\hat{\beta}-\beta^*)\xrightarrow[]{d}N(0,V(\beta^*))$, where $V(\beta^*)$ is the information sandwich variance matrix. Note that $\beta^*$ is the value of $\beta$ which minimizes the Kullback–Leibler discrepancy with respect to $\beta$. Similarly, we have $\sqrt{n}(\hat{\gamma}_1-\gamma^*_1)\xrightarrow[]{d}N(0,V(\gamma^*_1))$ and $\sqrt{n}(\hat{\gamma}_0-\gamma^*_0)\xrightarrow[]{d}N(0,V(\gamma^*_0))$. Further, we introduce the concept of local misspecification for parametric models. Suppose the correctly specified models have the following forms: \begin{equation} \label{localdef} \begin{split} p(x)&=\Tilde{p}(x;\beta_0)(1+\delta \times s(x)),\\ m_1(x)&=\Tilde{m}_1(x;\gamma_{1,0})+\delta_1\times s_1(x),\\ m_0(x)&=\Tilde{m}_0(x;\gamma_{0,0})+\delta_0 \times s_0(x). \end{split} \end{equation} If $\delta$ is a nonzero fixed constant, we say that $p(x)$ is globally misspecified. If $\delta \to 0$, we say it is locally misspecified. Similarly, we can define the global and local misspecification for $m_1(x)$ and $m_0(x)$. 
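To fix ideas, the parametric combination just described can be sketched numerically (a toy design, not the simulation of Section~4: illustrative coefficients, an intercept column added for convenience, logistic MLE via a few Newton steps, and OLS on each treatment arm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40000
X = rng.standard_normal((n, 2))
Xd = np.column_stack([np.ones(n), X])          # design matrix with intercept

beta_true = np.array([0.2, 0.8, -0.5])         # illustrative PS coefficients
p_true = 1.0 / (1.0 + np.exp(-Xd @ beta_true))
d = rng.binomial(1, p_true)

gamma_true = np.array([0.0, 1.0, 1.0])         # shared slopes; treatment adds +2
y = Xd @ gamma_true + 2.0 * d + rng.standard_normal(n)   # true ATE = 2

# logistic PS model fitted by Newton's method (MLE)
b = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-Xd @ b))
    W = mu * (1.0 - mu)
    b += np.linalg.solve(Xd.T @ (W[:, None] * Xd), Xd.T @ (d - mu))
p_hat = 1.0 / (1.0 + np.exp(-Xd @ b))

# linear OR models fitted by OLS on the treated / untreated subsamples
g1, *_ = np.linalg.lstsq(Xd[d == 1], y[d == 1], rcond=None)
g0, *_ = np.linalg.lstsq(Xd[d == 0], y[d == 0], rcond=None)
m1_hat, m0_hat = Xd @ g1, Xd @ g0

# doubly robust estimator (parametric PS + parametric OR)
ate_hat = np.mean(d * y / p_hat + (1 - d / p_hat) * m1_hat) \
    - np.mean((1 - d) * y / (1 - p_hat) + (1 - (1 - d) / (1 - p_hat)) * m0_hat)
print(ate_hat)                                 # close to the true ATE of 2
```

With both working models correctly specified, as here, the estimate is consistent; replacing either fitted model by a deliberately wrong one (while keeping the other correct) leaves the estimate consistent as well, which is the double robustness property.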
Secondly, when semiparametric models are considered, we propose the $PS$ model $g(\alpha^TX):=P(D=1|\alpha^TX)$ and the $OR$ models $r_1(\alpha^T_1X):=\mathbb{E}[Y(1)|\alpha^T_1X, D=1]$ and $r_0(\alpha^T_0X):=\mathbb{E}[Y(0)|\alpha^T_0X, D=0]$ with dimension reduction structures $\alpha^TX$, $\alpha^T_1X$ and $\alpha^T_0X$ respectively. Similarly, we can define alternative $PS$ models $q_1(\alpha^T_1X):=P(D=1|\alpha^T_1X)$ and $q_0(\alpha^T_0X):=P(D=1|\alpha^T_0X)$. Note that $p(X)=P(D=1|X)=P(D=1|\alpha^TX)=g(\alpha^TX)$, $m_1(X)=E[Y(1)|X]=E[Y(1)|\alpha^T_1X]=E[Y(1)|\alpha^T_1X,D=1]=r_1(\alpha^T_1X)$ and $m_0(X)=E[Y(0)|X]=E[Y(0)|\alpha^T_0X]=E[Y(0)|\alpha^T_0X,D=0]=r_0(\alpha^T_0X)$ if and only if the dimension reduction structures are correctly specified. We assume that $\alpha$, $\alpha_1$ and $\alpha_0$ are vectors whose Euclidean norms equal 1. There are several available methods of obtaining root-$n$ consistent estimators for $\alpha$, $\alpha_1$ and $\alpha_0$ as mentioned in \cite{semidr}. Therefore, the impact of estimating $\alpha$, $\alpha_1$ and $\alpha_0$ is not considered in this paper. The corresponding semiparametric estimators are $\hat{g}(\alpha^Tx)$, $\hat{r}_1(\alpha^T_1x)$ and $\hat{r}_0(\alpha^T_0x)$ with \begin{equation} \label{semidef} \begin{split} \hat{g}(\alpha^Tx)&=\frac{\sum_{j=1}^{n}d_jL_b(\alpha^Tx,\alpha^Tx_j)}{\sum_{j=1}^{n}L_b(\alpha^Tx,\alpha^Tx_j)},\\ \hat{r}_1(\alpha^T_1x)&=\frac{\sum_{j=1}^{n}d_jy_jK_{h_{m_1}}(\alpha^T_1x,\alpha^T_1x_j)}{\sum_{j=1}^{n}d_jK_{h_{m_1}}(\alpha^T_1x,\alpha^T_1x_j)},\\ \hat{r}_0(\alpha^T_0x)&=\frac{\sum_{j=1}^{n}(1-d_j)y_jK_{h_{m_0}}(\alpha^T_0x,\alpha^T_0x_j)}{\sum_{j=1}^{n}(1-d_j)K_{h_{m_0}}(\alpha^T_0x,\alpha^T_0x_j)}, \end{split} \end{equation} where $L_b(u,v)=\frac{1}{b}L\left(\frac{u-v}{b}\right)$, $K_{h_{m_1}}(u,v)=\frac{1}{h_{m_1}}K\left(\frac{u-v}{h_{m_1}}\right)$ and $K_{h_{m_0}}(u,v)=\frac{1}{h_{m_0}}K\left(\frac{u-v}{h_{m_0}}\right)$. 
Note that $K(\cdot): \mathbb{R} \to \mathbb{R}$, $L(\cdot): \mathbb{R} \to \mathbb{R}$ are kernel functions of order 2 and $b,h_{m_1},h_{m_0}$ are corresponding bandwidths. We further make the following assumption for the kernel functions and bandwidths. \begin{assumption} \label{semiassumption} Kernel functions $K(\cdot)$ and $L(\cdot)$ are symmetric around 0, compactly supported and at least twice continuously differentiable with $\int u^2K(u)du<\infty$ and $\int u^2L(u)du<\infty$. The bandwidths $b,h_{m_1},h_{m_0}$ satisfy the following conditions as $n \to \infty$: (a) $b\to0$, $nb\to \infty$, $nb^3\to \infty$, $nb^4\to0$, $\log(n)/(nb^3)\to 0$; (b) $h_{m_1}, h_{m_0}\to0$, $nh_{m_1}, nh_{m_0}\to \infty$, $nh^3_{m_1}, nh^3_{m_0}\to \infty$, $nh^4_{m_1}, nh^4_{m_0}\to0$ and $\log(n)/(nh^3_{m_1})$, $\log(n)/(nh^3_{m_0}) \to 0$. \end{assumption} Thirdly, when nonparametric models are considered, we assume the estimated $PS$ model $\hat{p}(x)$ and $OR$ models $\hat{m}_1(x)$, $\hat{m}_0(x)$ have the following form: \begin{equation} \label{nondef} \begin{split} \hat{p}(x)&=\frac{\sum_{j=1}^{n}d_j\Tilde{L}_{\Tilde{b}}(x,x_j)}{\sum_{j=1}^{n}\Tilde{L}_{\Tilde{b}}(x,x_j)}, \\ \hat{m}_1(x)&=\frac{\sum_{j=1}^{n}d_jy_j\Tilde{K}_{\Tilde{h}_{m_1}}(x,x_j)}{\sum_{j=1}^{n}d_j\Tilde{K}_{\Tilde{h}_{m_1}}(x,x_j)}, \quad \hat{m}_0(x)=\frac{\sum_{j=1}^{n}(1-d_j)y_j\Tilde{K}_{\Tilde{h}_{m_0}}(x,x_j)}{\sum_{j=1}^{n}(1-d_j)\Tilde{K}_{\Tilde{h}_{m_0}}(x,x_j)}, \end{split} \end{equation} where $\Tilde{L}_{\Tilde{b}}(u,v)=\frac{1}{\Tilde{b}^p}\Tilde{L}\left(\frac{u-v}{\Tilde{b}}\right)$, $\Tilde{K}_{\Tilde{h}_{m_1}}(u,v)=\frac{1}{\Tilde{h}^p_{m_1}}\Tilde{K}\left(\frac{u-v}{\Tilde{h}_{m_1}}\right)$ and $\Tilde{K}_{\Tilde{h}_{m_0}}(u,v)=\frac{1}{\Tilde{h}^p_{m_0}}\Tilde{K}\left(\frac{u-v}{\Tilde{h}_{m_0}}\right)$. Note that $\Tilde{K}(\cdot): \mathbb{R}^p \to \mathbb{R}$ and $\Tilde{L}(\cdot): \mathbb{R}^p \to \mathbb{R}$ are kernel functions of order $s$, where $s>p$ is a positive integer. 
The corresponding bandwidths are $\Tilde{b},\Tilde{h}_{m_1},\Tilde{h}_{m_0}$. We further make the following assumption for the kernel functions and bandwidths. \begin{assumption} \label{nonparaassumption} Kernel functions $\Tilde{K}(\cdot)$ and $\Tilde{L}(\cdot)$ are symmetric around 0, compactly supported and at least $s$ times continuously differentiable with $\int u^s\Tilde{K}(u)du<\infty$ and $\int u^s\Tilde{L}(u)du<\infty$. The bandwidths $\Tilde{b},\Tilde{h}_{m_1},\Tilde{h}_{m_0}$ satisfy the following conditions as $n \to \infty$: (a) $\Tilde{b}\to0$, $n\Tilde{b}^{p+2}\to \infty$, $n\Tilde{b}^{2s}\to0$, $\log(n)/(n\Tilde{b}^{p+s})\to 0$; (b) $\Tilde{h}_{m_1}, \Tilde{h}_{m_0}\to0$, $n\Tilde{h}^{p+2}_{m_1}, n\Tilde{h}^{p+2}_{m_0}\to \infty$, $n\Tilde{h}^{2s}_{m_1}, n\Tilde{h}^{2s}_{m_0}\to0$ and $\log(n)/(n\Tilde{h}^{p+s}_{m_1}), \log(n)/(n\Tilde{h}^{p+s}_{m_0}) \to 0$. \end{assumption} Furthermore, let $f(x): \mathbb{R}^p \to \mathbb{R}$ be the density function of $X$, $\Tilde{f}(\alpha^Tx): \mathbb{R} \to \mathbb{R}$ be the density function of $\alpha^TX$, $\Tilde{f}_1(\alpha^T_1x): \mathbb{R} \to \mathbb{R}$ be the density function of $\alpha^T_1X$ and $\Tilde{f}_0(\alpha^T_0x): \mathbb{R} \to \mathbb{R}$ be the density function of $\alpha^T_0X$. Recall that the true $PS$ model $p(x): \mathbb{R}^p \to \mathbb{R}$, the proposed semiparametric $PS$ model $g(\alpha^Tx): \mathbb{R} \to \mathbb{R}$ and the alternative $PS$ models $q_1(\alpha^T_1x): \mathbb{R} \to \mathbb{R}$ and $q_0(\alpha^T_0x): \mathbb{R} \to \mathbb{R}$ are defined as $p(X):=P(D=1|X)$, $g(\alpha^TX):=P(D=1|\alpha^TX)$, $q_1(\alpha^T_1X):=P(D=1|\alpha^T_1X)$ and $q_0(\alpha^T_0X):=P(D=1|\alpha^T_0X)$. These functions are useful in deriving the asymptotic distribution of $\hat{\Delta}$. We make the following assumption about $f(\cdot), \Tilde{f}(\cdot), \Tilde{f}_1(\cdot), \Tilde{f}_0(\cdot), p(\cdot), g(\cdot), q_1(\cdot),q_0(\cdot)$ throughout the paper. 
\begin{assumption} \label{general} Density functions $f(\cdot), \Tilde{f}(\cdot), \Tilde{f}_1(\cdot), \Tilde{f}_0(\cdot)$ and propensity score models $p(\cdot), g(\cdot), q_1(\cdot),q_0(\cdot)$ are bounded away from 0 and 1. \end{assumption} As a result, we can obtain the following nine estimators using different combinations of $PS$ and $OR$ estimators: \begin{equation}\label{drdef} \begin{split} &\hat{\Delta}_1=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\Tilde{p}(x_i;\hat{\beta})}+\left(1-\frac{d_i}{\Tilde{p}(x_i;\hat{\beta})}\right)\Tilde{m}_1(x_i;\hat{\gamma}_1)-\frac{(1-d_i)y_i}{1-\Tilde{p}(x_i;\hat{\beta})}-\left(1-\frac{1-d_i}{1-\Tilde{p}(x_i;\hat{\beta})}\right)\Tilde{m}_0(x_i;\hat{\gamma}_0)\right\}\\ &\hat{\Delta}_2=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\Tilde{p}(x_i;\hat{\beta})}+\left(1-\frac{d_i}{\Tilde{p}(x_i;\hat{\beta})}\right)\hat{m}_1(x_i)-\frac{(1-d_i)y_i}{1-\Tilde{p}(x_i;\hat{\beta})}-\left(1-\frac{1-d_i}{1-\Tilde{p}(x_i;\hat{\beta})}\right)\hat{m}_0(x_i)\right\}\\ &\hat{\Delta}_3=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\hat{p}(x_i)}+\left(1-\frac{d_i}{\hat{p}(x_i)}\right)\Tilde{m}_1(x_i;\hat{\gamma}_1)-\frac{(1-d_i)y_i}{1-\hat{p}(x_i)}-\left(1-\frac{1-d_i}{1-\hat{p}(x_i)}\right)\Tilde{m}_0(x_i;\hat{\gamma}_0)\right\}\\ &\hat{\Delta}_4=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\hat{p}(x_i)}+\left(1-\frac{d_i}{\hat{p}(x_i)}\right)\hat{m}_1(x_i)-\frac{(1-d_i)y_i}{1-\hat{p}(x_i)}-\left(1-\frac{1-d_i}{1-\hat{p}(x_i)}\right)\hat{m}_0(x_i)\right\}\\ &\hat{\Delta}_5=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\hat{g}(\alpha^Tx_i)}+\left(1-\frac{d_i}{\hat{g}(\alpha^Tx_i)}\right)\Tilde{m}_1(x_i;\hat{\gamma}_1)-\frac{(1-d_i)y_i}{1-\hat{g}(\alpha^Tx_i)}-\left(1-\frac{1-d_i}{1-\hat{g}(\alpha^Tx_i)}\right)\Tilde{m}_0(x_i;\hat{\gamma}_0)\right\}\\ 
&\hat{\Delta}_6=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\Tilde{p}(x_i;\hat{\beta})}+\left(1-\frac{d_i}{\Tilde{p}(x_i;\hat{\beta})}\right)\hat{r}_1(\alpha^T_1x_i)-\frac{(1-d_i)y_i}{1-\Tilde{p}(x_i;\hat{\beta})}-\left(1-\frac{1-d_i}{1-\Tilde{p}(x_i;\hat{\beta})}\right)\hat{r}_0(\alpha^T_0x_i)\right\}\\ &\hat{\Delta}_7=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\hat{g}(\alpha^Tx_i)}+\left(1-\frac{d_i}{\hat{g}(\alpha^Tx_i)}\right)\hat{m}_1(x_i)-\frac{(1-d_i)y_i}{1-\hat{g}(\alpha^Tx_i)}-\left(1-\frac{1-d_i}{1-\hat{g}(\alpha^Tx_i)}\right)\hat{m}_0(x_i)\right\}\\ &\hat{\Delta}_8=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\hat{p}(x_i)}+\left(1-\frac{d_i}{\hat{p}(x_i)}\right)\hat{r}_1(\alpha^T_1x_i)-\frac{(1-d_i)y_i}{1-\hat{p}(x_i)}-\left(1-\frac{1-d_i}{1-\hat{p}(x_i)}\right)\hat{r}_0(\alpha^T_0x_i)\right\}\\ &\hat{\Delta}_9=\frac1n\sum_{i=1}^{n}\left\{\frac{d_iy_i}{\hat{g}(\alpha^Tx_i)}+\left(1-\frac{d_i}{\hat{g}(\alpha^Tx_i)}\right)\hat{r}_1(\alpha^T_1x_i)-\frac{(1-d_i)y_i}{1-\hat{g}(\alpha^Tx_i)}-\left(1-\frac{1-d_i}{1-\hat{g}(\alpha^Tx_i)}\right)\hat{r}_0(\alpha^T_0x_i)\right\}. \end{split} \end{equation} We can show the consistency of these estimators even if one of the $PS$ or $OR$ models is misspecified; see Appendix 6.1. In the next section, we focus on studying their asymptotic distributions. \section{Asymptotic distributions} In this section, we derive the asymptotic distributions of the proposed estimators. The comparisons between their asymptotic variances and the semiparametric efficiency bound are also presented. Detailed proofs can be found in Appendix 6.2. \begin{theorem} Suppose that the $PS$ and $OR$ models are correctly specified. 
Under Assumptions 1-6 in Section~2, for all nine combinations, we have $\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_1)$ for $k=1,\cdots,9$ with $$\Sigma_1=\mathbb{E}\left\{\frac{\mathrm{Var}[Y(1)|X]}{p(X)}+\frac{\mathrm{Var}[Y(0)|X]}{1-p(X)}+\left[m_1(X)-\mathbb{E}[Y(1)]-m_0(X)+\mathbb{E}[Y(0)]\right]^2\right\},$$ which is the same as the semiparametric efficiency bound shown by \cite{efficiencybound}. \end{theorem} \begin{remark} The results for $\hat{\Delta}_1$ (parametric+parametric) and $\hat{\Delta}_4$ (nonparametric+nonparametric) coincide with the results in the literature, see e.g. \cite{drpara+pspara} and \cite{denonpara}. For $\hat{\Delta}_9$ (semiparametric+semiparametric), the result is similar to that in \cite{semidr} in the context of missing data. The other results are newly derived in this paper. \end{remark} \begin{theorem} Suppose that the $PS$ model is correctly specified and the $OR$ model is globally misspecified with fixed nonzero $\delta_1$ and $\delta_0$. We then have the estimators $\hat{\Delta}_k$ for $k=1, 3,5, 6, 8, 9$. 
Under Assumptions 1-6 in Section~2, \begin{equation} \begin{split} &\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_1), \quad \mbox{for \, \, $k=3,8$}\\ &\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_2), \quad \mbox{for \, \, $k=1,5$}\\ &\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_3), \quad \mbox{for \, \, $k=6,9$}\\ \end{split} \end{equation} where $\hat{\Delta}_3$ is nonparametric+misspecified parametric and $\hat{\Delta}_8$ is nonparametric+misspecified semiparametric, $\Sigma_1$ is defined in Theorem~1 and $\Sigma_2$ and $\Sigma_3$ are as follows: \begin{equation*} \begin{split} &\Sigma_2 =\Sigma_1+\mathbb{E}\bigg\{\Big[\sqrt{\frac{1}{p(X)}-1}[\Tilde{m}_1(X;\gamma^*_1)-m_1(X)]+\sqrt{\frac{1}{1-p(X)}-1}[\Tilde{m}_0(X;\gamma^*_0)-m_0(X)]\\ &\qquad+\sqrt{p(X)(1-p(X))}w(X)\Big]^2\bigg\} \geq \Sigma_1, \\ &\Sigma_3=\Sigma_1+\mathbb{E}\bigg\{\Big[\sqrt{\frac{1}{p(X)}-1}[r_1(\alpha^T_1X)-m_1(X)]+\sqrt{\frac{1}{1-p(X)}-1}[r_0(\alpha^T_0X)-m_0(X)]\\ &\qquad+\sqrt{p(X)(1-p(X))}w(X)\Big]^2\bigg\} \geq \Sigma_1.\\ \end{split} \end{equation*} The equalities hold if the $OR$ models are correctly specified. Note that $w(X)$ is different for different estimators. See Appendix 6.2 for details. \end{theorem} \begin{remark} The result for $\hat{\Delta}_1$ (parametric+parametric) coincides with the results in \cite{misspecification-reference1} and \cite{misspecification-reference2}. The other results are newly derived in this paper. The results show some interesting phenomena. Firstly, due to the nonparametric estimation of the correctly specified $PS$ model, the estimators $\hat \Delta_k$ for $k=3,8$ achieve the asymptotic efficiency. Secondly, under local misspecification of the $OR$ models with $\delta_1\to 0$ and $\delta_0\to 0$, $\Sigma_2$ converges to $\Sigma_1$. That is, the asymptotic variances of $\hat{\Delta}_1$ and $\hat{\Delta}_5$ converge to $\Sigma_1$ as $\delta_1\to 0$ and $\delta_0\to 0$. 
\end{remark} \begin{theorem} Suppose that the $PS$ model is globally misspecified with fixed nonzero $\delta$, while the $OR$ model is correctly specified. Under Assumptions 1-6 in Section~2, \begin{equation} \begin{split} &\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_1), \quad \mbox{for \, \, $k=2,7$},\\ &\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_4), \quad \mbox{for \, \, $k=1,6$},\\ &\sqrt{n}(\hat{\Delta}_k-\Delta)\xrightarrow{d} N(0,\Sigma_5), \quad \mbox{for \, \, $k=5,9$}, \end{split} \end{equation} \noindent where $\Sigma_4$ and $\Sigma_5$ are as follows: \begin{equation*} \begin{split} &\Sigma_4=\Sigma_1+\mathbb{E}\left\{\frac{1}{p(X)}\textrm{Var}[Y(1)|X]\left[\left(\frac{p(X)}{\Tilde{p}(X;\beta^*)}+w_1(X)p(X)\right)^2-1\right]\right\}\\ &\qquad+\mathbb{E}\left\{\frac{1}{1-p(X)}\textrm{Var}[Y(0)|X]\left[\left(\frac{1-p(X)}{1-\Tilde{p}(X;\beta^*)}+w_0(X)(1-p(X))\right)^2-1\right]\right\}, \end{split} \end{equation*} \begin{equation*} \begin{split} &\Sigma_5=\Sigma_1+\mathbb{E}\left\{\frac{1}{p(X)}\textrm{Var}[Y(1)|X]\left[\left(\frac{p(X)}{g(\alpha^TX)}+w_1(X)p(X)\right)^2-1\right]\right\}\\ &\qquad+\mathbb{E}\left\{\frac{1}{1-p(X)}\textrm{Var}[Y(0)|X]\left[\left(\frac{1-p(X)}{1-g(\alpha^TX)}+w_0(X)(1-p(X))\right)^2-1\right]\right\}. \end{split} \end{equation*} Note that $w_1(X)$ and $w_0(X)$ are different for different estimators. See Appendix 6.2 for details. \end{theorem} \begin{remark} The results show some interesting phenomena. Firstly, again, due to the nonparametric estimation of the $OR$ model, the estimators $\hat \Delta_k$ for $k=2,7$ can achieve the asymptotic efficiency. Secondly, under local misspecification of the $PS$ model with $\delta\to 0$, $\Sigma_4$ converges to $\Sigma_1$. That is, the estimators $\hat{\Delta}_1$ and $\hat{\Delta}_6$ can also achieve the semiparametric efficiency bound. Thirdly, it is difficult to compare $\Sigma_4$ and $\Sigma_5$ with $\Sigma_1$. 
It is difficult to reach a general conclusion; this case is very different from the case with a correctly specified $PS$ model and a misspecified $OR$ model stated in Theorem~2. \end{remark} Although it is very difficult to determine theoretically under what circumstances $\Sigma_4$ and $\Sigma_5$ are smaller than $\Sigma_1$ and under what circumstances they are larger, we give a simple example showing that the asymptotic variance $\Sigma_4$ derived from $\hat \Delta_1$ can be smaller than $\Sigma_1$ in certain cases. Suppose that the true propensity score is simply a constant function $p(x)=p^*$ and the assumed propensity score model is also a constant $g$, where $0< p^*,g < 1$. We further assume that $\mathbb{E}(X)=0$ and $\mathrm{Var}[Y(1)|X]=\mathrm{Var}[Y(0)|X]$. Then $w_1(x)=w_0(x)=0$, and the formula for $\Sigma_4$ reduces to $$\Sigma_4 - \Sigma_1=\mathbb{E}\left\{\mathrm{Var}[Y(1)|X]\right\}\left\{\frac{p^*}{g^2}+\frac{1-p^*}{(1-g)^2}-\frac{1}{p^*}-\frac{1}{1-p^*}\right\}.$$ \noindent For each fixed value of $p^*$, we can determine whether the asymptotic variance is enlarged by examining the function $f(g)=\frac{p^*}{g^2}+\frac{1-p^*}{(1-g)^2}-\frac{1}{p^*}-\frac{1}{1-p^*}$. That is, if $f(g)=0$, then $\Sigma_4=\Sigma_1$; if $f(g)>0$, then $\Sigma_4>\Sigma_1$; and if $f(g)<0$, then $\Sigma_4<\Sigma_1$. In Figure 1, we plot $f(g)$ as a function of $g$ for $p^*=1/4, 1/2, 3/4$. First, we can see that $f(g)=0$ when $g=p^*=1/4, 1/2, 3/4$. This means that when the model is correctly specified, the variance achieves the semiparametric efficiency bound. Second, when $p^*=1/2$, $f(g)\geq0$ for all values of $g$; in other words, misspecification always enlarges the variance. In contrast, when $p^* \neq 1/2$, the situation becomes different. 
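This behavior of $f(g)$ can also be checked numerically (the grid below is an arbitrary choice): $f$ vanishes at $g=p^*$, stays nonnegative everywhere when $p^*=1/2$, and dips below zero on a range of $g$ when $p^*=1/4$.

```python
import numpy as np

def f(g, p_star):
    # variance-difference factor: Sigma_4 - Sigma_1 = E{Var[Y(1)|X]} * f(g)
    return p_star / g**2 + (1 - p_star) / (1 - g)**2 \
        - 1 / p_star - 1 / (1 - p_star)

g = np.linspace(0.05, 0.95, 1801)
print(f(0.25, 0.25), f(0.5, 0.5))    # zero at g = p*: correct specification
print(f(g, 0.50).min())              # nonnegative: misspecification inflates
print(f(g, 0.25).min())              # negative: a "super-efficiency" region
```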
From the curves with $p^*=1/4, 3/4$, we can see that the semiparametric efficiency bound can only be achieved at $g=1/4, 3/4$ respectively; otherwise, there are ranges of $g$ over which the variance can even be smaller than the bound. This shows a possible ``super-efficiency'' phenomenon when misspecification occurs. \begin{figure}[htbp] \centering \includegraphics[width=13cm]{ATE-DR.pdf} \caption{The plots of $f(g)$ for each fixed $p^*$. The left and right panels illustrate that the asymptotic variance $\Sigma_4$ derived from $\hat \Delta_1$ could be smaller than the semiparametric efficiency bound $\Sigma_1$.} \end{figure} \section{Numerical investigation} We conduct Monte Carlo simulations to investigate the performance of these doubly robust estimators in finite-sample scenarios in terms of bias, standard deviation and mean squared error. The experiments are repeated 1000 times, and the sample size is taken to be 1000. For each subject $i=1,2,...,n$, the 10-dimensional covariate vector $X_i=(x_{i1}, ..., x_{i10})^T$ is independently drawn from $N(0, I)$, where $I$ is the $10\times10$ identity matrix. The potential outcomes $Y(1)$ and $Y(0)$ follow $N(\mathbb{E}[Y(1)], 1)$ and $N(\mathbb{E}[Y(0)], 1)$ respectively, where $$\mathbb{E}[Y(1)]=10+\beta^TX\;\; \mbox{and}\;\; \mathbb{E}[Y(0)]=5+\beta^TX.$$ Let $\beta$ be $(0.5,0.5,0.5,0.5,0,...,0)^T$. Under these two regression models, the true ATE is equal to 5. The true propensity scores are determined by a logistic regression model $$P(D=1|X=x)=\frac{\exp(\alpha^Tx+s_0)}{1+\exp(\alpha^Tx+s_0)},$$ where $\alpha=\alpha^\prime/(1+s^2_1)^{1/2}$ and $\alpha^\prime=\beta/\left\lVert \beta\right\rVert+(0,...,0,s_1)^T$. Similar to the missing-data setting in \cite{simulation1}, the constant $s_0$ controls the proportion of treated subjects and the constant $s_1$ controls the closeness between $\alpha$ and $\beta$. When $s_1=0$, $\alpha$ and $\beta$ are the same.
When $s_1=1$, the angle between $\alpha$ and $\beta$ is $45^\circ$. When $s_1$ is large enough, $\alpha$ and $\beta$ are orthogonal to each other. For each subject, the treatment indicator $d_i$ is generated from a Bernoulli distribution with parameter $P(D=1|X=x_i)$. If the parametric method is used to model the propensity score, a logistic regression of $d_i$ on the $x_{ij}$'s is regarded as a correctly specified $PS$ model. Similarly, a linear regression of $y_i$ on the $x_{ij}$'s is regarded as a correctly specified $OR$ model. Following the setting in \cite{simulation2}, we introduce covariates $Z_i=(z_{i1},...,z_{i10})^T$ as below: \begin{equation*} \begin{split} &z_{i1}=\exp(x_{i1}/3),\:z_{i2}=\frac{x_{i2}}{1+\exp(x_{i1})}+10,\: z_{i3}=\left(\frac{x_{i1}x_{i3}}{25}+0.6\right)^3,\: z_{i4}=(x_{i2}+x_{i4}+20)^2\\ &z_{i5}=\exp(x_{i5}/3),\:z_{i6}=\frac{x_{i6}}{1+\exp(x_{i5})}+10,\: z_{i7}=\left(\frac{x_{i5}x_{i7}}{25}+0.6\right)^3,\: z_{i8}=(x_{i6}+x_{i8}+20)^2\\ &z_{i9}=\exp(x_{i9}/3),\: z_{i10}=\frac{x_{i10}}{1+\exp(x_{i9})}+10 \end{split} \end{equation*} If the $Z_i$'s are used instead of the $X_i$'s, a logistic regression of $d_i$ on the $z_{ij}$'s is a misspecified $PS$ model and a linear regression of $y_i$ on the $z_{ij}$'s is a misspecified $OR$ model. If the semiparametric method is applied, $\alpha^TX$ is a correct dimension reduction structure for the propensity score while $\beta^TX$ is a misspecified dimension reduction structure. Similarly, for the outcome regression models, $\beta^TX$ is a correct dimension reduction structure while $\alpha^TX$ leads to a misspecified dimension reduction structure. The kernel functions $K(\cdot)$ and $L(\cdot)$ are taken to be Gaussian kernels $K(t)=L(t)=(2\pi)^{-1/2}\exp(-t^2/2)$. For the nonparametric method, the multivariate Gaussian kernel $K(t)=L(t)=(2\pi)^{-p/2}\exp(-||t||^2/2)$ is adopted.
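The data-generating process above can be sketched in a few lines (a sketch with our own variable names; the original experiments were presumably run in the authors' own code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 10

beta = np.array([0.5, 0.5, 0.5, 0.5] + [0.0] * 6)
s0, s1 = 0.0, 1.0                      # s1 = 1 gives a 45-degree angle
# alpha' = beta/||beta|| + (0,...,0,s1)^T, then alpha = alpha'/(1+s1^2)^{1/2}
alpha = (beta / np.linalg.norm(beta) + np.r_[np.zeros(p - 1), s1]) / np.sqrt(1 + s1**2)

X = rng.standard_normal((n, p))                # X_i ~ N(0, I)
y1 = 10 + X @ beta + rng.standard_normal(n)    # Y(1) ~ N(10 + beta'X, 1)
y0 = 5 + X @ beta + rng.standard_normal(n)     # Y(0) ~ N(5 + beta'X, 1)

ps = 1 / (1 + np.exp(-(X @ alpha + s0)))       # true propensity scores
d = rng.binomial(1, ps)                        # treatment indicator
y = np.where(d == 1, y1, y0)                   # observed outcome; true ATE = 5
```

The constant `s0` can then be tuned to move the proportion of treated subjects away from 50\%.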
Inspired by \cite{semidr}, we consider the effect of the closeness between $\alpha$ and $\beta$ and of the proportion of untreated subjects on the performance of these $DR$ estimators. We also investigate the impact of misspecification. \subsection{Effect of proportion of untreated} In this section, we first investigate the impact of the proportion of untreated subjects on the bias, standard deviation (std) and mean squared error (mse). The closeness of $\alpha$ and $\beta$ is fixed and set to $45^\circ$. We consider three scenarios in which the proportion of untreated is chosen to be 25\%, 50\% and 75\%. The simulation results are summarised in Tables~1, 2 and 3. Compared to the cases where the proportion of untreated is around 25\% or 75\%, the stds and mses are smallest when about 50\% of subjects are untreated. In terms of bias, a balanced design with 50\% of untreated subjects gives the smallest biases for most of the estimators. For $\hat{\Delta}_1,\hat{\Delta}_3,\hat{\Delta}_5$ in Table~1 and $\hat{\Delta}_1$, $\hat{\Delta}_5$ in Table~3, the biases of these estimators in the three scenarios are small enough that we can ignore the impact of the proportion of untreated. Secondly, we compare the performances of the estimators when the models are correctly specified; see Table~1. As we proved in theory, all estimators have the same asymptotic variance. In the finite-sample cases, we can observe that they perform similarly with regard to stds, and $\hat{\Delta}_8$, with nonparametric $PS$ and semiparametric $OR$ estimation, works well in every scenario. In terms of biases, the biases of $\hat{\Delta}_2$, $\hat{\Delta}_4$ and $\hat{\Delta}_7$ are greater than those of the other estimators in every scenario. This suggests that the bias becomes larger when nonparametric estimation is used for the $OR$ model.
Among these three estimators, $\hat{\Delta}_4$, with both nonparametric $PS$ and $OR$ estimation, has the largest bias, which shows the inefficiency of nonparametric estimation. Finally, we explore the influence of misspecification; see Tables~2 and 3 and compare the results with Table~1. It is noteworthy that model misspecification has the least impact on the std when 50\% of subjects are untreated. Theoretically, $\hat{\Delta}_k$, $k=1,5,6,9$, are consistent but not efficient when the $OR$ model is misspecified. In Table 2, we observe that the biases and stds of $\hat{\Delta}_1$ and $\hat{\Delta}_5$ are considerably enlarged. For $\hat{\Delta}_6$ and $\hat{\Delta}_9$, we only see slight increases in the biases and stds. Recall that in theory $\hat{\Delta}_3$ and $\hat{\Delta}_8$ can achieve the semiparametric efficiency bound when the $OR$ model is misspecified. However, in our finite-sample experiments we do not see this property for $\hat{\Delta}_3$, as its bias and std are significantly enlarged in Table 2. Theoretically, $\hat{\Delta}_k$, $k=1,5,6,9$, are consistent, but their efficiencies cannot be determined when the $PS$ model is misspecified. In Table~3, we observe that the stds of these estimators in the simulation are not necessarily enlarged, which coincides with our theoretical results. Recall that $\hat{\Delta}_2$ and $\hat{\Delta}_7$ can achieve the semiparametric efficiency bound when the $PS$ model is misspecified. Their performances in Table 3 are similar to their performances in Table 1, which supports our theory. Therefore, overall, the numerical results support the theoretical conclusions. \subsection{Effect of closeness between $\alpha$ and $\beta$} We now examine the influence of the closeness between $\alpha$ and $\beta$. The proportion of treated subjects is taken to be around $50\%$. We consider three scenarios in which the closeness between $\alpha$ and $\beta$ is chosen to be $0^\circ$, $45^\circ$ and $90^\circ$.
The simulation results are presented in Tables~4, 5 and 6. We can observe that the larger the angle is, the smaller the bias is for most of the estimators, while there is some influence on the std and mse. For $\hat{\Delta}_1,\hat{\Delta}_3,\hat{\Delta}_5$ in Table 4 and $\hat{\Delta}_1$, $\hat{\Delta}_5$ in Table 6, the biases of these estimators in the three scenarios are small enough that we can ignore the impact of the closeness between $\alpha$ and $\beta$. We then compare the performances of the estimators when the models are correctly specified (Table 4). We can observe that these estimators perform similarly with regard to stds, but the most efficient estimator is again $\hat{\Delta}_8$ in every scenario. In terms of biases, the biases of $\hat{\Delta}_2$, $\hat{\Delta}_4$ and $\hat{\Delta}_7$ are significantly greater than those of the other estimators when the angle is set to $0^\circ$. However, as the angle increases, the gap becomes smaller. Finally, we explore the influence of misspecification; see Tables~5 and 6 and compare the results with those in Table 4. Note that the misspecification of the semiparametric models no longer exists when the angle is set to $0^\circ$. From Table 5, the stds of $\hat{\Delta}_1$ and $\hat{\Delta}_5$ are considerably enlarged, and the enlargements become smaller as the angle increases. The stds of $\hat{\Delta}_6$, $\hat{\Delta}_8$ and $\hat{\Delta}_9$ increase slightly, and the enlargements become more serious in scenario 3. Again, the theoretically consistent and efficient estimator $\hat{\Delta}_3$ does not perform well in this limited numerical study. As the angle increases, the enlargements in the bias and std of $\hat{\Delta}_3$ reduce accordingly. Theoretically, $\hat{\Delta}_k$, $k=1,5,6,9$, are consistent, but we are unable to make a definitive comparison in terms of their asymptotic variances when the $PS$ model is misspecified. In accordance with the theory in this paper, the stds of these estimators in the simulation are not necessarily enlarged.
Recall that the variance of $\hat{\Delta}_2$ and $\hat{\Delta}_7$ still achieves the semiparametric efficiency bound when the $PS$ model is misspecified. Comparing the results in Table 6 with those in Table 4, we can see agreement with the theory. In summary, the proportion of untreated subjects has an impact on the biases, stds and mses, while the closeness between $\alpha$ and $\beta$ has an impact on the biases only. The misspecification of the $PS$ model seems to have less impact on the bias, std and mse than the misspecification of the $OR$ model. This effect is much more serious when parametric estimation of the $OR$ model is used. \begin{table}[!htbp] \centering \caption{Correctly specified $PS$ and $OR$ models, $45^{\circ}$ between $\alpha$ and $\beta$} \begin{tabular}{l*{10}{c}} \toprule Estimator & \multicolumn{3}{c}{25\% of untreated subjects} & \multicolumn{3}{c}{50\% of untreated subjects} & \multicolumn{3}{c}{75\% of untreated subjects} \\ \midrule {} & bias & std & mse & bias & std & mse & bias & std & mse \\ $\hat{\Delta}_1$ & -0.0024 & 0.0970 & 0.0094 & -0.0041 & 0.0759 & 0.0058 & -0.0027 & 0.0974 & 0.0095\\ $\hat{\Delta}_2$ & 0.1868 & 0.0925 & 0.0435 & 0.1448 & 0.0749 & 0.0266 & 0.1852 & 0.0940 & 0.0431\\ $\hat{\Delta}_3$ & -0.0024 & 0.0897 & 0.0081 & -0.0030 & 0.0722 & 0.0052 & -0.0013 & 0.0898 & 0.0081\\ $\hat{\Delta}_4$ & 0.3146 & 0.0930 & 0.1076 & 0.2937 & 0.0768 & 0.0922 & 0.3130 & 0.0923 & 0.1065\\ $\hat{\Delta}_5$ & -0.0023 & 0.0922 & 0.0085 & -0.0033 & 0.0735 & 0.0054 & -0.0022 & 0.0923 & 0.0085\\ $\hat{\Delta}_6$ & 0.0052 & 0.0954 & 0.0091 & 0.0005 & 0.0761 & 0.0058 & 0.0045 & 0.0956 & 0.0091\\ $\hat{\Delta}_7$ & 0.2144 & 0.0962 & 0.0552 & 0.1762 & 0.0797 & 0.0374 & 0.2126 & 0.0967 & 0.0545\\ $\hat{\Delta}_8$ & 0.0426 & 0.0760 & 0.0076 & 0.0322 & 0.0673 & 0.0056 & 0.0428 & 0.0763 & 0.0076\\ $\hat{\Delta}_9$ & 0.0140 & 0.0871 & 0.0078 & 0.0084 & 0.0724 & 0.0053 & 0.0138 & 0.0870 & 0.0078\\ \bottomrule \end{tabular} \end{table} \begin{table}[!htbp] \centering
\caption{Misspecified $OR$ model, $45^{\circ}$ between $\alpha$ and $\beta$} \begin{tabular}{l*{10}{c}} \toprule Estimator & \multicolumn{3}{c}{25\% of untreated subjects} & \multicolumn{3}{c}{50\% of untreated subjects} & \multicolumn{3}{c}{75\% of untreated subjects} \\ \midrule {} & bias & std & mse & bias & std & mse & bias & std & mse \\ $\hat{\Delta}_1$ & 0.0240 & 0.4105 & 0.1689 & -0.0099 & 0.2204 & 0.0486 & 0.0129 & 0.4012 & 0.1610\\ $\hat{\Delta}_3$ & 1.4122 & 0.5267 & 2.2714 & 0.3671 & 0.3397 & 0.2500 & -0.9007 & 0.7190 & 1.3276\\ $\hat{\Delta}_5$ & 0.3649 & 0.3400 & 0.2486 & 0.0841 & 0.1790 & 0.0391 & -0.2117 & 0.3252 & 0.1505\\ $\hat{\Delta}_6$ & 0.0061 & 0.1019 & 0.0104 & 0.0026 & 0.0776 & 0.0060 & 0.0058 & 0.1041 & 0.0109\\ $\hat{\Delta}_8$ & 0.0621 & 0.0893 & 0.0118 & 0.0490 & 0.0788 & 0.0086 & 0.0633 & 0.0901 & 0.0121\\ $\hat{\Delta}_9$ & 0.0167 & 0.1076 & 0.0119 & 0.0132 & 0.0913 & 0.0085 & 0.0168 & 0.1101 & 0.0124\\ \bottomrule \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{Misspecified $PS$ model, $45^{\circ}$ between $\alpha$ and $\beta$} \begin{tabular}{l*{10}{c}} \toprule Estimator & \multicolumn{3}{c}{25\% of untreated subjects} & \multicolumn{3}{c}{50\% of untreated subjects} & \multicolumn{3}{c}{75\% of untreated subjects} \\ \midrule {} & bias & std & mse & bias & std & mse & bias & std & mse \\ $\hat{\Delta}_1$ & -0.0017 & 0.1110 & 0.0123 & -0.0037 & 0.0766 & 0.0059 & -0.0019 & 0.0971 & 0.0094\\ $\hat{\Delta}_2$ & 0.2345 & 0.0925 & 0.0635 & 0.2071 & 0.0751 & 0.0485 & 0.2387 & 0.0948 & 0.0660\\ $\hat{\Delta}_5$ & -0.0029 & 0.0901 & 0.0081 & -0.0030 & 0.0723 & 0.0052 & -0.0013 & 0.0900 & 0.0081\\ $\hat{\Delta}_6$ & 0.0174 & 0.1081 & 0.0120 & 0.0128 & 0.0757 & 0.0059 & 0.0205 & 0.0932 & 0.0091\\ $\hat{\Delta}_7$ & 0.2168 & 0.0912 & 0.0553 & 0.1801 & 0.0751 & 0.0381 & 0.2158 & 0.0914 & 0.0549\\ $\hat{\Delta}_9$ & 0.0135 & 0.0802 & 0.0066 & 0.0082 & 0.0694 & 0.0049 & 0.0141 & 0.0806 & 0.0067\\ \bottomrule \end{tabular} 
\end{table} \begin{table}[!htbp] \centering \caption{Correctly specified $PS$ and $OR$ models, 50\% untreated subjects} \begin{tabular}{l*{10}{c}} \toprule Estimator & \multicolumn{3}{c}{$0^{\circ}$ between $\alpha$ and $\beta$} & \multicolumn{3}{c}{$45^{\circ}$ between $\alpha$ and $\beta$} & \multicolumn{3}{c}{$90^{\circ}$ between $\alpha$ and $\beta$} \\ \midrule {} & bias & std & mse & bias & std & mse & bias & std & mse \\ $\hat{\Delta}_1$ & 0.0002 & 0.0713 & 0.0051 & -0.0041 & 0.0759 & 0.0058 & -0.0031 & 0.0707 & 0.0050\\ $\hat{\Delta}_2$ & 0.2102 & 0.0707 & 0.0492 & 0.1448 & 0.0749 & 0.0266 & -0.0034 & 0.0694 & 0.0048\\ $\hat{\Delta}_3$ & 0.0011 & 0.0685 & 0.0047 & -0.0030 & 0.0722 & 0.0052 & -0.0028 & 0.0671 & 0.0045\\ $\hat{\Delta}_4$ & 0.4193 & 0.0729 & 0.1811 & 0.2937 & 0.0768 & 0.0922 & -0.0041 & 0.0728 & 0.0053\\ $\hat{\Delta}_5$ & 0.0004 & 0.0696 & 0.0048 & -0.0033 & 0.0735 & 0.0054 & -0.0032 & 0.0688 & 0.0047\\ $\hat{\Delta}_6$ & 0.0088 & 0.0714 & 0.0052 & 0.0005 & 0.0761 & 0.0058 & -0.0033 & 0.0706 & 0.0050\\ $\hat{\Delta}_7$ & 0.2526 & 0.0737 & 0.0693 & 0.1762 & 0.0797 & 0.0374 & -0.0046 & 0.0774 & 0.0060\\ $\hat{\Delta}_8$ & 0.0538 & 0.0652 & 0.0071 & 0.0322 & 0.0673 & 0.0056 & -0.0032 & 0.0630 & 0.0040\\ $\hat{\Delta}_9$ & 0.0175 & 0.0697 & 0.0052 & 0.0084 & 0.0724 & 0.0053 & -0.0038 & 0.0673 & 0.0045\\ \bottomrule \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{Misspecified $OR$ model, 50\% untreated subjects} \begin{tabular}{l*{10}{c}} \toprule Estimator & \multicolumn{3}{c}{$0^{\circ}$ between $\alpha$ and $\beta$} & \multicolumn{3}{c}{$45^{\circ}$ between $\alpha$ and $\beta$} & \multicolumn{3}{c}{$90^{\circ}$ between $\alpha$ and $\beta$} \\ \midrule {} & bias & std & mse & bias & std & mse & bias & std & mse \\ $\hat{\Delta}_1$ & 0.0125 & 0.2632 & 0.0693 & -0.0099 & 0.2204 & 0.0486 & -0.0021 & 0.1715 & 0.0294\\ $\hat{\Delta}_3$ & 0.4617 & 0.4240 & 0.3928 & 0.3671 & 0.3397 & 0.2500 & 0.0649 & 0.2960 & 0.0917\\ 
$\hat{\Delta}_5$ & 0.1104 & 0.2016 & 0.0528 & 0.0841 & 0.1790 & 0.0391 & 0.0081 & 0.1681 & 0.0283\\ $\hat{\Delta}_6$ & 0.0088 & 0.0714 & 0.0052 & 0.0026 & 0.0776 & 0.0060 & -0.0024 & 0.0742 & 0.0055\\ $\hat{\Delta}_8$ & 0.0538 & 0.0652 & 0.0071 & 0.0490 & 0.0788 & 0.0086 & -0.0050 & 0.0825 & 0.0068\\ $\hat{\Delta}_9$ & 0.0175 & 0.0697 & 0.0052 & 0.0132 & 0.0913 & 0.0085 & -0.0067 & 0.1002 & 0.0101\\ \bottomrule \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{Misspecified $PS$ model, 50\% untreated subjects} \begin{tabular}{l*{10}{c}} \toprule Estimator & \multicolumn{3}{c}{$0^{\circ}$ between $\alpha$ and $\beta$} & \multicolumn{3}{c}{$45^{\circ}$ between $\alpha$ and $\beta$} & \multicolumn{3}{c}{$90^{\circ}$ between $\alpha$ and $\beta$} \\ \midrule {} & bias & std & mse & bias & std & mse & bias & std & mse \\ $\hat{\Delta}_1$ & 0.0013 & 0.0716 & 0.0051 & -0.0037 & 0.0766 & 0.0059 & -0.0036 & 0.0773 & 0.0060\\ $\hat{\Delta}_2$ & 0.2957 & 0.0714 & 0.0925 & 0.2071 & 0.0751 & 0.0485 & -0.0040 & 0.0727 & 0.0053\\ $\hat{\Delta}_5$ & 0.0004 & 0.0696 & 0.0048 & -0.0030 & 0.0723 & 0.0052 & -0.0028 & 0.0673 & 0.0045\\ $\hat{\Delta}_6$ & 0.0267 & 0.0714 & 0.0058 & 0.0128 & 0.0757 & 0.0059 & -0.0039 & 0.0772 & 0.0060\\ $\hat{\Delta}_7$ & 0.2526 & 0.0737 & 0.0693 & 0.1801 & 0.0751 & 0.0381 & -0.0039 & 0.0696 & 0.0049\\ $\hat{\Delta}_9$ & 0.0175 & 0.0697 & 0.0052 & 0.0082 & 0.0694 & 0.0049 & -0.0032 & 0.0633 & 0.0040\\ \bottomrule \end{tabular} \end{table} \newpage \section{Discussion} In this paper, the classical doubly robust estimation for ATE is revisited. We consider nine combinations of the estimated $PS$ model and $OR$ model under parametric, semiparametric and nonparametric model structures. When the models are correctly specified, these nine estimators reach the same semiparametric efficiency bound. In other words, these nine estimators are all asymptotically efficient. 
Under a locally misspecified parametric $PS$ or $OR$ model that converges to the underlying parametric model, the estimators can still achieve the semiparametric efficiency bound. Further, when the $OR$ model is globally misspecified and the $PS$ model is correctly specified, the asymptotic variance is always greater than or equal to the semiparametric efficiency bound. Yet, when the $PS$ model is globally misspecified and the $OR$ model is correctly specified, the situation becomes complicated: the asymptotic variance is not always enlarged and, in some cases, can even be smaller than the semiparametric efficiency bound. This phenomenon is interesting and worth further study.
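This possibility can be verified numerically in the constant-propensity example given earlier (a minimal sketch; the grid of $g$ values is our choice):

```python
import numpy as np

def f(g, p_star):
    """Sign of f determines whether Sigma_4 exceeds the bound Sigma_1."""
    return p_star / g**2 + (1 - p_star) / (1 - g)**2 - 1 / p_star - 1 / (1 - p_star)

g = np.linspace(0.05, 0.95, 1001)
assert np.all(f(g, 0.5) >= -1e-9)   # p* = 1/2: the variance is never below the bound
assert np.any(f(g, 0.25) < 0)       # p* = 1/4: a super-efficiency region exists
```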
\section{Introduction and related works} A Bayesian Network (BN) is a type of probabilistic graphical model that can be viewed as a Directed Acyclic Graph (DAG), where nodes represent uncertain variables and arcs represent dependency or causal relationships between variables. The structure of a BN can be learned from data, and there are three main classes of structure learning: constraint-based, score-based and hybrid learning. The first type relies on conditional independence tests to construct the skeleton and orient edges, whereas the second type searches over the space of possible graphs and returns the graph that maximises a fitting score. Hybrid learning refers to algorithms that combine both constraint-based and score-based learning. \par A common problem when learning BNs from data is that of causal insufficiency, where the data fail to capture all the relevant variables. Variables not captured by the data are referred to as latent variables (also known as unobserved or unmeasured variables). In the real world, latent variables are impossible to avoid, either because data may not be available or simply because some variables are unknown unknowns for which we will never seek to record data. A special case of a latent variable, referred to as a latent confounder, is an unobserved common cause of two or more observed variables in a BN. While known latent variables pose less of a problem in knowledge-based BNs, where methods exist that enable users to model latent variables not present in the data, under the assumption that the statistical outcomes are already influenced by the causes an expert might identify as variables missing from the dataset \citep{constantinou}, they can be a problem in structure learning. This is because child nodes that share an unobserved common cause will be found to be directly related even when they are not, a widely known problem that gives rise to spurious correlations in the presence of confounding. \par The traditional DAG has proven to be unsuitable when structure learning is performed under the assumption that some variables are latent. This is because a DAG assumes causal sufficiency and does not capture latent variables.
Ancestral graphs have been proposed as a solution to this problem, and represent an extension of DAGs that captures hidden variables. Specifically, the Maximal Ancestral Graph (MAG) \citep{richardson} is a special case of a DAG where arcs indicate direct or ancestral relationships, and bi-directed edges represent confounding. Moreover, a Partial Ancestral Graph (PAG) represents a set of Markov equivalent MAGs \citep{RePEc:mtp:titles:0262194406}, in the same way a Complete Partial Directed Acyclic Graph (CPDAG) represents a set of Markov equivalent DAGs. Fig 1 illustrates an example of a DAG with latent variables $L_1$ and $L_2$, along with its corresponding Markov equivalent MAGs and the PAG of Markov equivalent MAGs. Both types of ancestral graph, MAGs and PAGs, can be used to represent causally insufficient systems. \begin{figure}[h] \centering \includegraphics[scale=0.5]{figure1.jpg} \end{figure} \par The most popular BN structure learning algorithm for causally insufficient systems is the constraint-based FCI, which is based on the PC algorithm \citep{RePEc:mtp:titles:0262194406}. Various modified versions of FCI have been published in the literature. These include the augmented FCI, which improves the orientation phase by extending the orientation rules of FCI from four to ten \citep{zhang}; the conservative FCI (cFCI), which uses additional conditional independence tests to restrict orientations to unambiguous ones and improve the identification of definite colliders \citep{DBLP:journals/corr/RamseyZS12}; and the RFCI, which skips some of the orientation rules in FCI and performs fewer conditional independence tests, making the algorithm faster and more suitable for problems of thousands of variables in exchange for a minor loss of accuracy \citep{colombo}. These constraint-based algorithms assume that the joint probability distribution is a perfect map of a faithful graph, and this is often not practical when working with real data.
Moreover, the orientation phase depends on the accuracy of the skeleton, and hence any errors from the first phase are propagated to the orientation phase. GFCI relaxes these issues by incorporating the score-based approach of FGS \citep{DBLP:journals/corr/Ramsey15a}, which is an enhanced version of Greedy Equivalence Search (GES), thereby producing a hybrid learning algorithm that outperforms the constraint-based versions of FCI \citep{pmlr-v52-ogarrio16}. \par In addition to the FCI variants, other algorithms have been proposed that are based on different approaches to structure learning. These include the GSPo, the M\textsuperscript{3}HC and the GSMAG algorithms. Specifically, GSPo is an ordering-based search algorithm that uses greedy search over the space of independence maps (IMAPs) to determine the minimal IMAP \citep{Bernstein2019OrderingBasedCS}. This is achieved by defining a partially ordered set (poset) that is linked to the IMAP, expressed as a discrete optimisation problem. However, GSPo uses a random starting point for the poset, and this makes the algorithm non-deterministic, since each run is likely to produce a different result. On the other hand, M\textsuperscript{3}HC is a hybrid learning algorithm \citep{tsirlis} that adds a constraint-based learning phase to the greedy search of the GSMAG algorithm \citep{Triantafillou2016ScorebasedVC}. Both M\textsuperscript{3}HC and GSMAG assume that the data are continuous and normally distributed, and Tsirlis et al. \citep{tsirlis} showed that hybrid algorithms such as M\textsuperscript{3}HC and GFCI perform better than the other relevant constraint-based algorithms. \par The paper is organised as follows: Section 2 describes the CCHM algorithm, Section 3 describes the evaluation process, Section 4 presents the results, and we provide our concluding remarks and a discussion of future work in Section 5.
\section {Conservative rule and Causal effect Hill-climbing for MAG (CCHM)} CCHM is a hybrid structure learning algorithm defined as a Structural Equation Model (SEM), under the assumption that the data are continuous and follow a multivariate Gaussian distribution. The process of CCHM can be divided into two phases. The first phase adopts the conditional independence steps of cFCI to construct the skeleton of the graph, and to further classify definite colliders into a whitelist and definite non-colliders into a blacklist. The second phase involves score-based learning that uses the Bayesian Information Criterion (BIC) as the objective function, adjusted for MAGs, where edge orientation is augmented with causal effect measures. These steps are described in more detail in the subsections that follow. \subsection{Definite colliders (whitelist) and definite non-colliders (blacklist)} Conditional independence tests are used to determine the edges between variables and to produce the skeleton graph. A p-value is associated with each statistical test result and is used to sort conditional independencies in ascending order. An alpha hyperparameter is then used as the cut-off threshold for establishing independence. For each conditional independency $A$$\mathrel{\text{\scalebox{1.07}{$\perp\mkern-10mu\perp$}}}$$B$$\mid$$Z$, $Z$ is recorded as the separation set (Sepset) of variables $A$ and $B$. The orientation of edges is determined by a method inherited from cFCI, where extra conditional independence tests over all unshielded triples determine the classification of each of those triples as either a definite collider or a definite non-collider: \begin{itemize} \item Given an unshielded triple $A$-$C$-$B$, perform conditional independence tests on $A$ and $B$ over all neighbours of $A$ and $B$. \item If $C$ is \textbf{NOT} in any Sepset of $A$ and $B$, add $A$-$C$-$B$ to the whitelist as a definite collider.
\item If $C$ is in \textbf{ALL} Sepsets of $A$ and $B$, add $A$-$C$-$B$ to the blacklist as a definite non-collider. \end{itemize} \subsection{Bayesian Information Criterion (BIC) for MAG} The score-based learning part of CCHM involves hill-climbing greedy search that minimises the BIC score, a goodness-of-fit function for BNs based on Occam's razor principle \citep{adnan}. The BIC score balances the Log-Likelihood (LL) fit against a penalty term for model dimensionality. CCHM adopts the BIC function used in the M\textsuperscript{3}HC and GSMAG algorithms, which is adjusted for MAGs \citep{tsirlis}. Formally, given a dataset over vertices $V$ with a distribution $\mathcal{N}\left(0,\Sigma\right)$, where $\Sigma$ is the covariance matrix calculated from the dataset, a unique solution is found where $\hat{\Sigma}=\left(I-\mathcal{B}\right)^{-1}{\Omega\left(I-\mathcal{B}\right)}^{-t}$. The MAG $\mathcal{G}$ is constructed from the linear equations $Y=\mathcal{B}\cdot Y+\epsilon$, where $Y=\left\{Y_i\middle|i\in V\right\}$, $\mathcal{B}$ is a $V\times V$ coefficient matrix whose entries $\left\{\beta_{ij}\right\}$ correspond to the directed edges from $j$ to $i$, $I$ is an identity matrix, $\epsilon$ is a positive random error vector whose entries $\left\{\omega_{ij}\right\}$ correspond to the bidirected edges from $j$ to $i$, and the error covariance matrix is $\Omega=Cov\left(\epsilon\right)=\left\{\omega_{ii}\right\}$. The BIC score is then calculated as follows \citep{richardson}: \[ BIC\left(\hat{\Sigma}\middle|\mathcal{G}\right)=-2\ln{\left(l_\mathcal{G}\left(\hat{\Sigma}\middle|\mathcal{G}\right)\right)}+\ln{\left(N\right)\left(2\left|V\right|+\left|E\right|\right)} \qquad (1) \] where $l_\mathcal{G}$ is the likelihood function, $\left|V\right|$ and $\left|E\right|$ are the numbers of nodes and edges that form the complexity penalty term, and $N$ is the sample size.
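Equation (1) amounts to a one-line helper once the likelihood term is available (a sketch with our own names; in CCHM the $-2\ln l$ term comes from the RICF fit described next):

```python
import math

def mag_bic(minus2_loglik, n_samples, n_vertices, n_edges):
    """BIC for a MAG as in Eq. (1): -2 ln(l) + ln(N) * (2|V| + |E|).

    `minus2_loglik` is -2 ln(l) for the fitted MAG; the dimensionality
    penalty counts 2|V| + |E| free parameters.
    """
    return minus2_loglik + math.log(n_samples) * (2 * n_vertices + n_edges)
```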
Similar to the factorisation property of DAGs, the score $l_\mathcal{G}\left(\hat{\Sigma}\middle|\mathcal{G}\right)$ can be decomposed into the c-components $(S_k)$ of $\mathcal{G}$, which refer to the connected components obtained by removing all directed edges \citep{nowzohour}: \[ l_\mathcal{G}\left(\hat{\Sigma}\middle|\mathcal{G}\right)=-\frac{N}{2}\sum_{k}S_k \qquad (2) \] \begin{center} where $ S_k=\left|C_k\right|\cdot\ln{\left(2\pi\right)}+\ln\left(\frac{\left|{\hat{\Sigma}}_{\mathcal{G}_k}\right|}{\prod_{j\in{\rm Pa}_{\mathcal{G}_k}}\sigma_{kj}^2}\right)+\frac{N-1}{N}\cdot tr\left[{\hat{\Sigma}}_{\mathcal{G}_k}^{-1}S_{\mathcal{G}_k}-\left|{\rm Pa}_\mathcal{G}\left(C_k\right)\setminus\left\{C_k\right\}\right|\right]$ \end{center} and where $C_k$ denotes the set of nodes of c-component $k$, $\mathcal{G}_k$ is the marginalisation over $C_k$ together with its parent nodes ${\rm Pa}_\mathcal{G}\left(C_k\right)$, and $\sigma_{kj}^2$ is the diagonal entry of ${\hat{\Sigma}}_{\mathcal{G}_k}$ corresponding to parent node $j$. The maximum likelihood estimate $\hat{\Sigma}$ is obtained with the RICF algorithm \citep{drton}. \subsection{Direct causal criteria for CCHM} Because the BIC is a Markov equivalent score, it is incapable of orientating all edges from statistical observations alone. Optimising the BIC under causal insufficiency returns a PAG, or one of the MAGs that belong to the equivalence class of the optimal PAG. In this paper, we are interested in orientating all edges and discovering a MAG. We achieve this using Pearl's do-calculus \citep{causality} to measure the direct causal effect for edges that remain undirected by BIC. The direct causal effect is estimated by intervention, which renders the intervening variable independent of its parents.
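The orientation idea that the single-door criterion below formalises can be illustrated with a two-variable simulation (a sketch under the standardised Gaussian assumptions used in the derivation, not CCHM itself): when $A \to B$ holds, the regression coefficient in the causal direction dominates the one in the anti-causal direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 0.7

# Ground truth A -> B with A ~ N(0, 1) and eps_B ~ N(0, 1)
A = rng.standard_normal(n)
B = beta * A + rng.standard_normal(n)

# Coefficients from Eqs. (3) and (4): E[BA]/E[A^2] and E[AB]/E[B^2]
beta_A = (A * B).mean() / (A ** 2).mean()   # causal direction
beta_B = (A * B).mean() / (B ** 2).mean()   # anti-causal direction

# Consistent with beta_A / beta_B = beta_A^2 + 1 > 1
assert abs(beta_A) > abs(beta_B)
```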
\section*{Theorem: Single-door criterion for direct effect} The single-door criterion for the direct effect \citep{causality}: given $X$$\to$$Y$, the path coefficient is identifiable and equal to the regression coefficient if \begin{itemize} \item there exists a set of variables $Z$ such that $Z$ contains no descendant of $Y$; \item $Z$ d-separates $X$ and $Y$ in the subgraph obtained by removing $X$$\to$$Y$. \end{itemize} The path coefficient $\beta$ in the regression of the theorem can be interpreted as the direct causal effect, determined by the rate of change of $E\left[Y\right]$ under intervention on $X$ \citep{Maathuis2009EstimatingHI}, as follows. \begin{center} $ \beta=\frac{\partial}{\partial x}E\left[Y|\ do(x)\right] =E[Y|do(X\ =x+1)] - E[Y|do(X= x)] $ for any value of $x$ \end{center} This assumes that all causal effect parameters are identifiable, and that the path coefficient, or direct causal effect, is the regression coefficient estimated from the likelihood function. Let $A$$\to$$B$ be an edge in the ground truth graph with SEM $B=\beta_AA+\epsilon_B$, where we assume $A\sim\mathcal{N}\left(\mu_A,\sigma^2_A\right)$, $\epsilon_B\sim\mathcal{N}(0,\sigma_{\epsilon_B}^2)$, and $\epsilon_B$ and $A$ are independent. Thus, $E\left[B\right]=\beta_AE\left[A\right]$ and ${\sigma^2}_B={\beta_A}^2{\sigma^2}_A+\sigma_{\epsilon_B}^2$. For every pair $A$ and $B$ in the learned graph, two causal graphs, $A$$\to$$B$ and $B$$\to$$A$, need to be constructed to measure the direct causal effects. Specifically, \begin{itemize} \item For the graph $A$$\to$$B$, do the intervention on $A$; i.e., $do(a)$ \citep{causality} (page 161): \[ \beta_A\ =\ \frac{E\left[BA\right]}{E\left[A^2\right]} \qquad (3) \] \item For the graph $B$$\to$$A$, do the intervention on $B$; i.e., $do(b)$:
\[ \beta_B\ =\ \frac{E\left[AB\right]}{E\left[B^2\right]} \qquad (4) \] \end{itemize} From (3) and (4), \[ \frac{\beta_A}{\beta_B}=\ \frac{E\left[B^2\right]}{E\left[A^2\right]}\ =\ \frac{{E\left[B\right]}^2+{\sigma^2}_B}{{E\left[A\right]}^2+{\sigma^2}_A}. \] Substituting $E\left[B\right]=\beta_AE\left[A\right]$ and ${\sigma^2}_B={\beta_A}^2{\sigma^2}_A+\sigma_{\epsilon_B}^2$ from the graph, \[ \frac{\beta_A}{\beta_B}=\ \frac{\beta_A^2{E\left[A\right]}^2+{\beta_A}^2{\sigma^2}_A+\sigma_{\epsilon_B}^2}{{E\left[A\right]}^2+{\sigma^2}_A}\ =\ \beta_A^2+\frac{\sigma_{\epsilon_B}^2}{{E\left[A\right]}^2+{\sigma^2}_A}. \qquad (5) \] If $E\left[A\right]=\mu_A=0$, ${\sigma^2}_A=1$ and ${\sigma^2}_{\epsilon_B}=1$ in (5), then \begin{center} $ \frac{\beta_A}{\beta_B}=\ {\beta_A}^2+1$, so that $P(\left|\beta_A\right|>\left|\beta_B\right|)=1$. \end{center} Algorithm 1 describes the steps of CCHM in detail. \begin{figure}[H] \centering \includegraphics[scale=0.5]{algorithm1.jpg} \end{figure} \section{Evaluation} The accuracy of the CCHM algorithm is compared to the outputs of the M\textsuperscript{3}HC, GSPo, GFCI, RFCI, FCI, and cFCI algorithms, when applied to the same data. The M\textsuperscript{3}HC algorithm was tested using the MATLAB implementation by Triantafillou \citep{causalgraphs}, the GFCI and RFCI algorithms were tested using the Tetrad-based rcausal package in R \citep{rcausal}, and the GSPo algorithm was tested using the causaldag Python package by Squires \citep{causaldag}. The computational time of CCHM is compared to that of M\textsuperscript{3}HC, FCI and cFCI, which are based on the same MATLAB package. \par All experiments are based on synthetic data.
However, we divide them into experiments based on data generated from BNs which had their structure and dependencies randomised, and data generated from real-world BNs. Randomised BNs were generated using Triantafillou’s \citep{causalgraphs} MATLAB package. We created a total of 600 random Gaussian DAGs that varied in variable size and max in-degree. Specifically, 50 DAGs were generated for each combination of variables $V$ and max in-degree settings $\mathcal{D}$, where $V$ = \{10, 20, 50, 70, 100, 200\} and $\mathcal{D}$=\{3, 5\}. Each of those 600 graphs was then used to generate two datasets of sample size 1,000 and 10,000, for a total of 1,200 datasets. Data were generated assuming linear Gaussian parameters $\mu$=0 and $\sigma^2$=1 and uniformly random coefficients $\pm$[0.1,0.9] for each parent set to avoid very weak or very strong edges. Approximately 10\% of the variables are made latent in each dataset. \par In addition to the randomised networks, we made use of four real-world Gaussian BNs taken from the bnlearn repository \citep{bnlearn}. These are the a) \emph{MAGIC-NIAB} (44 nodes) which captures genetic effects and phenotypic interactions for the Multiparent Advanced Generation Inter-Cross (MAGIC) winter wheat population, b) \emph{MAGIC-IRRI} (64 nodes) which captures genetic effects and phenotypic interactions for the MAGIC indica rice population, c) \emph{ECOLI70} (46 nodes) which captures the protein-coding genes of \emph{E. coli}, and d) \emph{ARTH150} (107 nodes) which captures the gene expressions and proteomics data of \emph{Arabidopsis thaliana}. Each of these four BNs was used to generate data, with the sample size set to 10,000. For each of the four datasets, we introduced four different rates of latent variables: 0\%, 10\%, 20\% and 50\%. This made the total number of real-world datasets 16; four datasets per BN.
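The randomised data-generation procedure described above can be sketched as follows (a simplified illustration under the stated assumptions, not the MATLAB code of \citep{causalgraphs}; all names are ours):

```python
import random

def random_linear_gaussian_data(n_vars, max_in_degree, n_samples, seed=0):
    """Generate data from a random DAG with a linear Gaussian SEM.

    Coefficients are drawn uniformly from +/-[0.1, 0.9] to avoid very
    weak or very strong edges; noise is N(0, 1), as described in the text.
    """
    rng = random.Random(seed)
    order = list(range(n_vars))  # topological order 0, 1, ..., n-1
    # Each variable gets up to max_in_degree parents from earlier variables
    parents = {v: rng.sample(order[:i], min(i, rng.randint(0, max_in_degree)))
               for i, v in enumerate(order)}
    coeff = {(p, v): rng.choice([-1, 1]) * rng.uniform(0.1, 0.9)
             for v in order for p in parents[v]}
    data = []
    for _ in range(n_samples):
        row = [0.0] * n_vars
        for v in order:  # ancestral sampling in topological order
            row[v] = sum(coeff[(p, v)] * row[p] for p in parents[v]) \
                     + rng.gauss(0.0, 1.0)
        data.append(row)
    # Hide ~10% of the variables to induce causal insufficiency
    latent = set(rng.sample(order, max(1, n_vars // 10)))
    observed = [v for v in order if v not in latent]
    return [[row[v] for v in observed] for row in data], parents

data, dag = random_linear_gaussian_data(n_vars=10, max_in_degree=3, n_samples=100)
```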
\par The following hyperparameter settings are used for all algorithms: a) alpha=0.01 for the Fisher’s z hypothesis test for datasets generated by the randomised\footnote{The large number of datasets produced by the randomised graphs (i.e., 600) meant that we had to restrict the alpha parameter to alpha=0.01 for all algorithms in those experiments.} BNs, b) alpha=0.05, 0.01, 0.001 (all cases tested) for datasets generated by the real-world BNs, and c) the max Sepset size of the conditioning set is set to 4 so that runtime is maintained at reasonable levels. The maximum length of discriminating paths is also set to 4 in the four FCI-based algorithms (this is the same as the max Sepset size). For GSPo, the depth of the depth-first search is set to 4 and the number of randomised points of posets to 5 (these are the default settings). Because GSPo is a non-deterministic algorithm that generates a different output each time it is executed, we report the average scores obtained over five runs. Lastly, all algorithms were restricted to a four-hour runtime limit. \par Further, because the algorithms will output either a PAG or a MAG, we convert all MAG outputs into the corresponding PAGs. The accuracy of the learned graphs is then assessed with respect to the true PAG. The results are evaluated using the traditional measures of Precision and Recall, the Structural Hamming Distance (SHD) which represents the difference in the number of edges and edge orientations between the learned and the true graphs, and the Balance Scoring Function (BSF) which returns a balanced score by taking into consideration all four confusion matrix parameters as follows \citep{DBLP:journals/corr/abs-1905-12666}: $BSF=0.5\left(\frac{TP}{a}+\frac{TN}{i}-\frac{FP}{i}-\frac{FN}{a}\right) $ where $a$ is the number of edges and $i$ is the number of direct independences in the ground truth graph, and $i=\frac{n(n-1)}{2}-a$.
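The BSF metric can be computed directly from the four confusion-matrix counts (a minimal sketch; the function name is ours):

```python
def bsf(tp, tn, fp, fn, n_nodes, n_true_edges):
    """Balance Scoring Function of a learned graph against the true graph.

    a = number of edges in the true graph, i = number of direct
    independences, with i = n(n-1)/2 - a, as defined in the text.
    """
    a = n_true_edges
    i = n_nodes * (n_nodes - 1) / 2 - a
    return 0.5 * (tp / a + tn / i - fp / i - fn / a)

# Example: a true graph with 5 nodes and 4 edges, perfectly recovered
assert bsf(tp=4, tn=6, fp=0, fn=0, n_nodes=5, n_true_edges=4) == 1.0
```

An empty (or fully connected) graph scores 0 and the exact reverse of the true graph scores $-1$, matching the range discussed in the text.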
The BSF score ranges from -1 to 1, where 1 refers to the most accurate graph (i.e., matches the true graph), 0 refers to a baseline performance that is equal to that of a fully connected or an empty graph, and -1 refers to the worst possible graph (i.e., the reverse result of the true graph). \section{Results} \subsection{Random Gaussian Bayesian Networks} Fig 2 presents the Precision and Recall scores the algorithms achieve on the datasets generated by the randomised BNs. The scores are averaged across the different settings of variable size and max in-degree. Note that because there was no noteworthy difference between the overall results obtained from the two different data sample sizes, we only report the results based on sample size 10,000. The results and conclusions based on the datasets with sample size 10,000 also hold for the datasets with sample size 1,000. \par Overall, the results show that CCHM outperforms all other algorithms in terms of both Precision and Recall, and across all settings except for Recall under max in-degree 5, where GSPo ranks highest (Fig 2b). While GSPo appears to perform best when the number of variables is smallest, its performance decreases sharply with the number of variables, and it fails to produce a result within the 4-hour time limit when the number of variables is largest. \begin{figure} [H] \centering \includegraphics[scale=0.4]{figure2.jpg} \end{figure} \par The results show no noticeable difference between FCI and its variant RFCI, whereas cFCI and GFCI show strong improvements over FCI, with cFCI outperforming all FCI-based algorithms. Moreover, the performance of cFCI is on par with that of M\textsuperscript{3}HC. Note that while CCHM employs the BIC objective function of M\textsuperscript{3}HC, CCHM outperforms M\textsuperscript{3}HC in both sparse (Fig 2a) and dense (Fig 2b) graphs.
This result provides empirical evidence that a) the conservative rules used in the constraint-based phase of CCHM, and b) the do-calculus used in the score-based phase of CCHM, have indeed improved structure learning performance. \par Fig 3 compares the average runtime of CCHM to the runtimes of the other algorithms. The runtime comparison is restricted to algorithms that are based on the same MATLAB implementation on which CCHM is based. The results show that CCHM is marginally faster than cFCI and slower than the other algorithms, with the worst case observed when the number of variables is largest, where CCHM is approximately two times slower than FCI. \par Fig 4 presents the SHD and BSF scores, along with the corresponding numbers of edges generated by each algorithm. Both the SHD and BSF metrics rank CCHM highest, and these results are consistent with the Precision and Recall results previously depicted in Fig 2. The number of edges produced by CCHM is in line with the number of edges produced by the other algorithms, and this observation provides confidence that CCHM achieves the highest scores due to accuracy rather than due to the number of edges, which may sometimes bias the result of a metric \citep{DBLP:journals/corr/abs-1905-12666}. One inconsistency between the SHD and the other metrics involves the GFCI algorithm, which SHD ranks lower than all the other FCI-based algorithms, contradicting the results of Precision, Recall, and BSF. Interestingly, while GSPo produces the highest BSF scores when the number of variables is just 10, its performance diminishes drastically with the number of variables and it quickly becomes the worst performer (refer to the BSF scores in Fig 4a); an observation that is largely consistent with the results in Fig 2.
\begin{figure} [H] \centering \includegraphics[scale=0.4]{figure4.jpg} \end{figure} \subsection{Real-world Gaussian Bayesian Networks} The reduced number of experiments associated with the real-world GBNs, relative to the random GBNs (i.e., 16 instead of 600), enabled us to also test the sensitivity of the algorithms to the alpha hyperparameter, which reflects the significance cut-off point in establishing independence. Fig 5 presents the SHD scores for each of the four real-world GBNs, over different rates of latent variables in the data. The results in each case study are restricted to the top three algorithms, because we report three different results for each of those algorithms, derived from the three different hyperparameter inputs alpha specified in Fig 5. \begin{figure} [H] \centering \includegraphics[scale=0.38]{figure5.jpg} \end{figure} \par Only four algorithms (CCHM, M\textsuperscript{3}HC, cFCI and GSPo) achieved a top-three performance in any of the four networks, which suggests that the relative performance between algorithms is rather consistent across case studies. While there is no clear relationship between the rate of latent variables and the SHD score, the results do suggest that the accuracy of the algorithms decreases with the rate of latent variables in the data. This is because, while we would expect the SHD score to decrease with fewer variables in the data, since fewer variables lead to potentially fewer differences between the learned and the true graph (refer to Fig 4), the results in Fig 5 reveal a weak increasing trend in SHD score with the rate of latent variables in the data. \par Overall, the CCHM algorithm was part of the top three algorithms in all four case studies. Specifically, CCHM generated the lowest SHD error in networks (a) and (b).
The results in network (c) were less consistent, with GSPo ranked 1\textsuperscript{st} at latent variable rates of 10\% and 20\%, and CCHM ranked 1\textsuperscript{st} at latent variable rates of 0\% and 50\%. In contrast, the results based on network (d) show no noteworthy differences in performance between the three top algorithms. Overall, the results suggest that cFCI and GSPo are much more sensitive to the alpha hyperparameter than the CCHM and M\textsuperscript{3}HC algorithms, and that CCHM generally performs best when alpha=0.01. \section{Discussion and future works} This paper builds on recent developments in BN structure learning under causal insufficiency with a novel structure learning algorithm, called CCHM, that combines constraint-based and score-based learning with causal effects to learn GBNs. The constraint-based part of CCHM adopts features from the state-of-the-art cFCI algorithm, whereas the score-based part is based on traditional hill-climbing greedy search that minimises the BIC score. CCHM applies Pearl’s do-calculus as a method to orientate the edges that both constraint-based and score-based learning fail to orientate from observational data. The results show that CCHM outperforms the state-of-the-art algorithms in the majority of the experiments, which include both randomised and real-world GBNs. \par A limitation of this work is that the algorithm assumes linear GBNs and that the data are continuous. Future work will extend this approach to discrete BNs, where causal insufficiency remains an important open problem \citep{inbook}. Other directions include investigating different strategies in the way the do-calculus effect is applied to the process of structure learning; e.g., it can be applied directly to the calculation of the BIC score during score-based learning, or computed as the total causal effect of the graph using do-calculus rules or via back-door adjustment with graph surgery.
Lastly, causal insufficiency represents just one type of data noise that exists in real-world datasets, and future work will also investigate the effects of causal insufficiency when combined with other types of noise in the data. \acks{This research was supported by the EPSRC Fellowship project EP/S001646/1 on \emph{Bayesian Artificial Intelligence for Decision Making under Uncertainty} \citep{anthony2018}, by The Alan Turing Institute in the UK under the EPSRC grant EP/N510129/1, and by the Royal Thai Government Scholarship offered by Thailand’s Office of Civil Service Commission (OCSC).}
\section{Introduction\label{sec:introduction}} Recently the TOTEM experiment measured differential cross-sections of elastic proton-proton collisions in the TeV energy range, from $\sqrt{s} = 2.76$
through 7 and 8 to 13 TeV, together with the total, elastic and inelastic cross-sections and the real-to-imaginary ratio of the scattering amplitude at vanishing four-momentum transfer. These measurements provided surprises and unexpected results. First of all, the shape of the differential cross-section of elastic scattering at $\sqrt{s} = 7$ TeV was different from all the predictions. The total cross-section increases with increasing $\sqrt{s}$ according to theoretical expectations based on Pomeron-exchange, corresponding experimentally to the production of large rapidity gaps in high energy proton-proton and proton-antiproton collisions. These events correspond to large angular regions where no particle is produced. Their fraction, in particular the ratio of the elastic to the total proton-proton cross-section, increased to above 25\% at LHC energies. In the language of quantum chromodynamics (QCD), the field theory of strong interactions, Pomeron-exchange corresponds to the exchange of an even number of gluons with vacuum quantum numbers. In 1973, a crossing-odd counterpart to the Pomeron was proposed by L. Lukaszuk and B. Nicolescu, the so-called Odderon~\cite{Lukaszuk:1973nt}. In QCD, Odderon exchange corresponds to the $t$-channel exchange of a color-neutral gluonic compound state consisting of an odd number of gluons, as noted by Bartels, Vacca and Lipatov in 1999~\cite{Bartels:1999yt}. The Odderon effects remained elusive for a long time, due to the lack of definitive and statistically significant experimental evidence. A direct way to probe the Odderon in elastic scattering is by comparing the differential cross-sections of particle-particle and particle-antiparticle scattering at sufficiently high energies~\cite{Jenkovszky:2011hu,Ster:2015esa}.
Such a search was published at the ISR energy of $\sqrt{s}=53$ GeV in 1985~\cite{Breakstone:1985pe}, and resulted in an indication of the Odderon, corresponding to a 3.35$\sigma$ significance level obtained from a simple $\chi^2$ calculation, based on 5 $pp$ and 5 $p\bar p$ data points in the $1.1 \le |t| \le 1.5$ GeV$^2$ range (around the diffractive minimum). This significance is smaller than the $5\sigma$ threshold, traditionally accepted as the threshold for a discovery level observation in high energy physics. Furthermore, the colliding energy of $\sqrt{s} = 53$ GeV was not sufficiently large, so the possible Reggeon exchange effects were difficult to evaluate and control. These difficulties rendered the Odderon search at the ISR energies rather inconclusive, but nevertheless inspiring and indicative, motivating further studies. In a series of recent papers, the TOTEM Collaboration published results with important implications for the Odderon search. These papers studied elastic proton-proton collisions in the LHC energy range between $\sqrt{s} = 2.76$ and $13$ TeV~\cite{Antchev:2017dia,Antchev:2017yns,Antchev:2018edk,Antchev:2018rec}. The total cross section, $\sigma_{\rm tot}(s)$, was found to increase, while the real-to-imaginary ratio, $\rho_0(s)$, was found to decrease with increasing energy, first identified at $\sqrt{s} = 13$ TeV \cite{Antchev:2017dia,Antchev:2017yns}. These experimental results at vanishing four-momentum transfer were consistent with a possible Odderon effect and triggered an intense theoretical debate (see e.g.~Refs.~\cite{Khoze:2017swe,Samokhin:2017kde,Csorgo:2018uyp,Broilo:2018qqs,Pancheri:2018yhd,Goncalves:2018nsp,Selyugin:2018uob,Khoze:2018bus,Broilo:2018els,Troshin:2018ihb,Dremin:2018uwt,Martynov:2018nyb,Martynov:2018sga,Shabelski:2018jfq,Khoze:2018kna,Hagiwara:2020mqb,Contreras:2020lrh,Gotsman:2020mkd}).
For example, Ref.~\cite{Gotsman:2018buo} demonstrated that such an indication at $t= 0$ is not a unique Odderon signal, as such a behaviour can be attributed to secondary Reggeon effects. In spite of the rich experimental results and the hot theoretical debate, the Odderon remained rather elusive at vanishing four-momentum transfer even in the TeV energy range~\cite{Petrov:2018wlv}. However, at larger four-momentum transfers, in the interference (diffractive dip and bump or minimum-maximum) region, the Odderon signals are significant at LHC energies. Let us mention here only two of them: the four-momentum transfer dependent nuclear slope parameter $B(t)$ and the scaling properties of elastic scattering in the TeV energy region. Two independent, but nearly simultaneous phenomenological papers suggested that the four-momentum transfer dependence of the nuclear slope parameter $B(t)$ is qualitatively different in elastic proton-proton and proton-antiproton collisions~\cite{Csorgo:2018uyp,Martynov:2018sga}. The TOTEM experiment has demonstrated in ref.~\cite{Antchev:2018rec} that indeed in elastic $pp$ collisions at $\sqrt{s} = 2.76$ TeV, the nuclear slope $B(t)$ is increasing (swings) before it decreases and changes sign in the interference (diffractive dip and bump or minimum-maximum) region. After the diffractive maximum, the nuclear slope becomes positive again. In contrast, elastic $p\bar p$ collisions measured by the D0 collaboration at the Tevatron energy of $\sqrt{s} = 1.96$ TeV did not show such a pronounced diffractive minimum-maximum structure; instead, an exponentially decreasing cone region at low $-t$ with a constant $B(t)$ is followed by a shoulder structure, without a pronounced diffractive minimum and maximum.
The TOTEM collaboration presented its results on the elastic $pp$ differential cross-section at $\sqrt{s} = 2.76$ TeV and concluded in ref.~\cite{Antchev:2018rec} that {\it ``under the condition that the effects due to the energy difference between TOTEM and D0 can be neglected,} these results {\it provide evidence for a colourless 3-gluon bound state exchange in the t-channel of the proton-proton elastic scattering".} This energy gap has been closed recently, in a model-independent way, based on a re-analysis of already published data using the scaling properties of elastic scattering in both $pp$ and $p\bar p$ collisions at TeV energies: Refs.~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb} reported a statistically significant Odderon signal in the comparison of the $H(x,s)$ scaling function of elastic $pp$ collisions at $\sqrt{s} = 7.0 $ TeV to that of $p\bar p$ collisions at $\sqrt{s} = 1.96$ TeV. The difference between these scaling functions carries an at least $6.26$ $\sigma$ Odderon signal, if all the vertical and horizontal, point-to-point fluctuating and point-to-point correlated errors are taken into account. If the interpolation between the data points at $7$ TeV is considered as a theoretical curve, the significance of the Odderon signal goes up to $6.55$ $\sigma$. Instead of comparing the cross sections directly, this method removes the dominant $s$-dependent quantities, by scaling out the $s$-dependencies of $\sigma_{\rm tot}(s)$, $\sigma_{\rm el}(s)$, $B(s)$ and $\rho_0(s)$, as well as the normalization of the $H(x,s)$ scaling function, which also cancels the point-to-point correlated and $t$-independent normalization errors. The model-independence of the results of refs.~\cite{Csorgo:2018uyp,Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb} is an advantage when a significant and model-independent Odderon signal is searched for. The domain of the signal region can also be determined with model-independent methods.
Both the signal and its domain can be directly determined from the comparison of D0 and TOTEM data. However, a physical interpretation or a theoretical context is also desirable, not only to gain a better understanding of the results and a more physical picture, but also to gain predictive power and to be able to extrapolate the results to domains where experimental data are lacking, or to regions where the scaling relations are violated. To provide such a picture is one of the goals of our present manuscript. In this work, we continue a recent series of theoretical papers~\cite{Nemes:2012cp,CsorgO:2013kua,Csorgo:2013bwa,Nemes:2015iia}. These studies investigated the differential cross-section of elastic $pp$ collisions, but did not study the same effects in elastic $p\bar p$ collisions. The framework of these studies is the real extended and unitarized Bialas-Bzdak model, based on refs.~\cite{Bialas:2006kw,Bzdak:2007qq,Bialas:2006qf,Bialas:2007eg}. This model considers protons as weakly bound states of constituent quarks and diquarks, or $p=(q,d)$ for short (for a more detailed summary of the model see \ref{sec:appendix-a}). In a variation on this theme, the diquark in the proton may also be considered to be a weakly bound state of two constituent quarks, leading to the $p=(q,(q,q))$ variant of the Bialas-Bzdak model~\cite{Bialas:2006kw,Bzdak:2007qq}. The model is based on Glauber's multiple scattering theory of elastic collisions~\cite{Glauber:1955qq,Glauber:1970jm,Glauber:2006gd}, assuming additionally that all elementary distributions follow a random Gaussian elementary process and can be characterized by the corresponding $s$-dependent Gaussian radii.
These distributions include the parton distribution inside the quark, characterized by a Gaussian radius $R_q(s)$, the distribution of the partons inside the diquarks, characterized by the Gaussian radius $R_d(s)$, and the typical separation between the quarks and the diquarks, characterized by the Gaussian radius $R_{qd}(s)$. In refs.~\cite{Nemes:2012cp,CsorgO:2013kua,Nemes:2015iia} it was shown that the $p=(q,(q,q))$ variant of the Bialas-Bzdak model gives too many diffractive minima, while experimentally only a single diffractive minimum is observed in $pp$ collisions. This result is consistent with the earlier detailed studies of elastic nucleus-nucleus collisions in ref.~\cite{Czyz:1969jg}, which observed that a single diffractive minimum occurs only in elastic deuteron-deuteron or $(p,n)+(p,n)$ collisions, so the number of diffractive minima increases as either of the elastically colliding composite objects develops a more complex internal structure. In the original version of the Bialas-Bzdak model, the scattering amplitude was assumed to be completely imaginary~\cite{Bialas:2006kw}. This structure resulted in a completely vanishing differential cross-section at the diffractive minima. The model was supplemented by a real part, first perturbatively~\cite{Nemes:2012cp,CsorgO:2013kua,Csorgo:2013bwa}, subsequently in a non-perturbative and unitary manner~\cite{Nemes:2015iia}. This way a new parameter called $\alpha(s)$ was introduced, which controls the value of the differential cross-section at the diffractive minimum (it is not to be confused with the strong coupling constant of QCD, which we denote in this work as $\alpha_s^{\rm QCD}$). Our $\alpha(s)$ is a kind of opacity parameter that measures the strength of the real part of the scattering amplitude, so it is responsible both for filling up the dip region of the differential cross-sections and for the description of the real-to-imaginary ratio $\rho$ at vanishing four-momentum transfer.
The structure of this unitary, Real Extended Bialas-Bzdak model (abbreviated as ReBB model) is thus very interesting, as there are only four $s$-dependent physical parameters: $R_q$, $R_d$, $R_{qd}$ and $\alpha$. However, three of these four parameters are geometrical, characterizing the $s$-dependence of parton distributions inside the protons. Hence, it is natural to assume that these distributions are the same inside protons and anti-protons, while the opacity parameter $\alpha$ may be different in elastic $pp$ and $p\bar p$ collisions. It is thus natural to expect that this $\alpha(s)$ parameter may carry an Odderon signal, as its excitation function might be very different in elastic $pp$ collisions, which feature a pronounced dip at every measured energy even in the TeV energy range~\cite{Antchev:2018rec}, while in elastic $p\bar p$ collisions a significant dip is lacking even in measurements in the TeV energy range~\cite{Abazov:2012qb}. In this manuscript, we thus extend the applications of the ReBB model from elastic $pp$ to elastic $p\bar p$ collisions, using the model exactly in the same form as it was described in Ref.~\cite{Nemes:2015iia}. We fit exactly the same four physical parameters to describe the differential cross-section of elastic proton-antiproton ($p\bar p$) scattering. Later we shall see that at the same energy, the geometrical parameters in $pp$ and $p\bar p$ collisions are apparently consistent with one another: within the systematic errors of the analysis we obtain the same $R_q(s)$, $R_d(s)$ and $R_{qd}(s)$ functions for $pp$ and $p\bar p$ reactions. In this manuscript, we thus can investigate also the following independent questions: \begin{itemize} \item Is the Real Extended Bialas-Bzdak model of ref.~\cite{Nemes:2015iia} able to describe not only elastic $pp$ but also $p\bar p$ collisions?
\item Is it possible to characterize the Odderon with only one physical parameter: the difference of the opacity parameter $\alpha(s)$ in $pp$ and in $p\bar p$ collisions: $\alpha^{pp}(s) \ne \alpha^{p\bar p}(s)$? \end{itemize} We shall see that the answer to both of these questions is a definitive yes. The structure of the manuscript is as follows. In Section~\ref{s:formalism} we recapitulate the definition of the key physical quantities in elastic scattering and mention their main relations. In Section~\ref{sec:fitdescription} we present the various error definitions and the evaluated $\chi^2$ formulae of both $pp$ and $p\bar p$ datasets. Subsequently, in Section~\ref{sec:fit_results} we detail the optimization method and summarize the fit results in terms of four physical parameters determined at four different energies as listed in Table~\ref{tab:fit_parameters}, that form the basis of the determination of the energy dependencies of the model parameters in Section~\ref{sec:excitation_functions}. The energy dependencies of both proton-proton and proton-antiproton elastic scattering in the TeV energy range are determined by a set of 10 physical parameters only, as listed in Table~\ref{tab:excitation_pars}. As a next step for establishing the reliability of this $s$-dependence of the model parameters, we have performed also the so called validation or sanity tests in Section ~\ref{sec:sanity_tests}: we have cross-checked that the obtained trends reproduce in a statistically acceptable manner each of the measured data also those, that were not utilized so far to establish the $s$-dependencies of the ReBB model parameters. 
After establishing that the excitation function of the ReBB model reproduces the measured data, we predict the experimentally not yet available large-$t$ differential cross-section of $pp$ collisions at $\sqrt{s} = 0.9$, $4$, $5$ and $ 8$ TeV and we present the extrapolations of the $pp$ differential cross-sections measured at the LHC energies of 2.76 and 7.0 TeV to the Tevatron energy of 1.96 TeV. Vice versa, we also extrapolate the $p\bar p$ differential cross-sections from the SPS and Tevatron energies of $0.546$ and $1.96$ TeV to the LHC energies of 2.76 and 7.0 TeV in Section~\ref{sec:extrapolations}. These results are discussed in detail and put into context in Section~\ref{sec:discussion}. We summarize the results and conclude in Section~\ref{sec:summary}. This work is closed with four Appendices. For the sake of completeness, the unitary, real part extended Bialas-Bzdak model of ref.~\cite{Nemes:2015iia} is summarized in ~\ref{sec:appendix-a}. In \ref{sec:appendix-b} we derive and detail the relations between the opacity parameter $\alpha$ of the ReBB model and the real-to-imaginary ratio $\rho_0$. The main properties of Odderon and Pomeron exchange including the corresponding differential and total cross-sections in the TeV energy range are summarized in \ref{sec:appendix-c}. Two small theorems are also given here: Theorem I indicates that if the differential cross-sections of elastic $pp$ and $p\bar p$ collisions are not the same in the TeV energy range, then the crossing-odd component of the elastic amplitude (Odderon) cannot vanish, while Theorem II proves that in the framework of the ReBB model, this is indeed due to the difference between the opacity parameters $\alpha(s)$ for $pp$ and $p\bar p$ collisions, linking also mathematically the difference in the dip-filling property of the differential cross-sections of elastic scattering to the measurement of $\rho$ at the $t=0$ within the ReBB model. 
The non-linear corrections to the linear in $\ln(s)$ excitation functions are also determined with the help of ISR $pp$ data at $\sqrt{s} = 23.5 $ GeV energy. These results are discussed in \ref{sec:appendix-d}, and found to have negligible effects on our results presented in the main body of the manuscript, corresponding to the TeV energy range. \section{Formalism} \label{s:formalism} The elastic amplitude $T(s,t)$ (where $s$ is the squared central mass energy, and $t$ is the squared four-momentum transfer) is defined in Ref.~\cite{Nemes:2015iia} by Eq.~(6), Eq.~(9) and Eq. (29), furthermore summarized also in \ref{sec:appendix-a}. The experimentally measurable physical quantities, \textit{i.e.} the elastic differential cross section, the total, elastic and inelastic cross sections and the ratio $\rho_0$ are defined, correspondingly, as: \begin{equation} \frac{{\rm d}\sigma}{{\rm d} t}(s,t)=\frac{1}{4\pi}\left|T\left(s,t\right)\right|^2\, , \label{eq:differential_cross_section} \end{equation} \begin{equation} \sigma_{tot}(s)=2{\rm Im}T(s,t=0)\, , \label{eq:total_cross_section} \end{equation} \begin{equation} \sigma_{el}(s)=\int \, dt \frac{d\sigma}{dt}(s,t), \label{eq:elastic_cross_section} \end{equation} \begin{equation} \sigma_{in}(s)=\sigma_{tot}(s)-\sigma_{el}(s) \label{eq:inelastic_cross_section} \end{equation} and \begin{align} \rho_0(s)=\frac{\text{Re}\,T(s,t=0)}{\text{Im}\,T(s,t=0)}\,. \label{eq:rho_parameter} \end{align} The earlier results show that the ReBB model gives statistically acceptable, good quality fits with CL $\ge 0.1 $ \% to the $pp$ differential cross section data at the ISR energies of 23.5 and 62.5 GeV as well as at the LHC energy of 7 TeV, in the $-t \ge 0.377$ GeV$^2$ kinematic region \cite{Nemes:2015iia}. Continuing that study, in this work we apply exactly the same formalism, without any change, to the description of the differential cross-sections of proton-antiproton ($p\bar p$) scattering. 
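For a concrete illustration, the relations above between the amplitude, $\sigma_{tot}$, $\sigma_{el}$ and $\rho_0$ can be verified numerically with a toy exponential-cone amplitude (a sketch in natural units of GeV$^{-2}$; this illustrative amplitude is ours and is not the ReBB amplitude itself):

```python
import math

# Toy elastic amplitude T(t) = (i + rho0) * (sigma_tot / 2) * exp(B t / 2):
# an illustrative exponential cone, not the ReBB amplitude (natural units).
sigma_tot = 250.0   # total cross-section in GeV^-2 (~100 mb)
rho0 = 0.1          # real-to-imaginary ratio at t = 0
B = 20.0            # nuclear slope in GeV^-2

def dsigma_dt(t):
    """Differential cross-section d(sigma)/dt = |T(t)|^2 / (4 pi)."""
    amp = (sigma_tot / 2.0) * math.exp(B * t / 2.0)   # common modulus
    return ((rho0 * amp) ** 2 + amp ** 2) / (4.0 * math.pi)

# sigma_tot = 2 Im T(t=0) and rho0 = Re T(0) / Im T(0) hold by construction.
# Elastic cross-section by midpoint-rule integration over |t|; analytically
# sigma_el = (1 + rho0^2) * sigma_tot^2 / (16 pi B) for this toy amplitude.
n, tmax = 20000, 5.0
sigma_el = sum(dsigma_dt(-(k + 0.5) * tmax / n) for k in range(n)) * tmax / n
analytic = (1.0 + rho0 ** 2) * sigma_tot ** 2 / (16.0 * math.pi * B)
sigma_in = sigma_tot - sigma_el
assert abs(sigma_el - analytic) / analytic < 1e-3
```

The same numerical machinery applies to any amplitude supplied in place of the toy cone, with only the integration grid to be adapted.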
This allows us to search for Odderon effects by comparing the $pp$ and $p\bar p$ differential cross sections at the same energies and squared momentum transfer. Any significant difference between the $pp$ and $p\bar p$ processes at the same energy at the TeV scale provides evidence for Odderon exchange. In order to make this manuscript as self-contained and complete as reasonably possible, we have provided a derivation of this well-known property, in the form of Theorem I of \ref{sec:appendix-c}. \section{Fitting method \label{sec:fitdescription}} Compared to the earlier ReBB study \cite{Nemes:2015iia}, in order to more precisely estimate the significance of a possible Odderon effect, here we use a more advanced form of $\chi^{2}$ definition which relies on a method developed by the PHENIX Collaboration and described in detail in Appendix A of Ref.~\cite{Adare:2008cg}. This method is based on the diagonalization of the covariance matrix, if the experimental errors can be separated into the following types of uncertainties: \begin{itemize} \item Type A errors which are point-to-point fluctuating (uncorrelated) systematic and statistical errors; \item Type B errors which are point-to-point varying but correlated systematic uncertainties, for which the point-to-point correlation is 100 \%; \item Type C systematic errors which are point-independent, overall systematic uncertainties, that scale all the data points up and down by exactly the same, point-to-point independent factor. \end{itemize} In what follows we index these errors with the index of the data point as well as with subscripts $a$, $b$ and $c$, respectively.
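The role of the three error types can be illustrated with a minimal, simplified $\chi^2$ sketch (our own illustrative function; the complete definition actually used in the fits, including horizontal errors and the rescaled type A uncertainties, is given below):

```python
def chi2_abc(data, theory, sig_a, sig_b, sig_c, eps_b, eps_c):
    """Minimal PHENIX-style chi^2 with type A/B/C errors (sketch only).

    Type B shifts all points coherently by eps_b * sig_b[i]; type C scales
    all points by eps_c * sig_c; each nuisance parameter adds a unit-Gaussian
    penalty.  The rescaling of the type A variance and the horizontal errors
    of the full definition are omitted here for brevity.
    """
    chi2 = eps_b ** 2 + eps_c ** 2       # penalties for the nuisance shifts
    for d, th, sa, sb in zip(data, theory, sig_a, sig_b):
        shifted = d + eps_b * sb + eps_c * d * sig_c
        chi2 += (shifted - th) ** 2 / sa ** 2
    return chi2

# Perfect agreement with no systematic shifts gives chi2 = 0
assert chi2_abc([1.0, 2.0], [1.0, 2.0], [0.1, 0.1], [0.05, 0.05],
                0.02, eps_b=0.0, eps_c=0.0) == 0.0
```

In a fit, the correlation coefficients $\epsilon_b$ and $\epsilon_c$ are treated as nuisance parameters and minimized over together with the model parameters.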
In the course of the minimization of the ReBB model fits we use the following $\chi^{2}$ function: \begin{eqnarray} \chi ^{2}&=&\sum _{j=1}^{M} \left(\sum _{i=1}^{n_{j}}\frac{ \left( d_{ij}+ \epsilon _{bj} \widetilde\sigma _{bij}+ \epsilon _{cj}d_{ij} \sigma _{cj}-th_{ij} \right) ^{2}}{\widetilde{ \sigma }_{ij}^{2}}+ \epsilon _{bj}^{2}+ \epsilon _{cj}^{2}\right) + \nonumber \\ & & \qquad \qquad + \left( \frac{d_{ \sigma _{tot}}-th_{ \sigma _{tot}}}{ \delta \sigma _{tot}} \right) ^{2}+ \left( \frac{d_{ \rho_{0}}-th_{ \rho _{0}}}{ \delta \rho _{0}} \right) ^{2}. \label{eq:chi2-final} \end{eqnarray} This definition includes type A, point-to-point uncorrelated errors; type B, point-to-point varying but correlated errors; and type C, point-independent, overall normalization errors. Furthermore, not only the vertical but also the frequently neglected horizontal errors are included. Let us detail the notation of this $\chi^2$ definition, step by step: \begin{itemize} \item $M$ is the number of sub-datasets, corresponding to several, separately measured ranges of $t$, indexed with subscript $j$, at a given energy $\sqrt{s}$.
Thus $\sum_{j=1}^M n_j$ gives the number of fitted data points at a given center of mass energy $\sqrt{s}$; \item $d_{ij}$ is the $i$th measured differential cross section data point in sub-dataset $j$ and $th_{ij}$ is the corresponding theoretical value calculated from the ReBB model; \item $\widetilde{ \sigma }_{ij}$ is the type A, point-to-point fluctuating uncertainty of the data point $i$ in sub-dataset $j$, scaled by a multiplicative factor such that the fractional type A uncertainty of the systematically shifted data point is the same as that of the original one: \begin{equation} \widetilde{ \sigma }_{ij}^{2}= \widetilde\sigma _{aij}^{2} \left( \frac{d_{ij}+ \epsilon _{bj} \widetilde\sigma _{bij}+ \epsilon _{cj}d_{ij} \sigma _{cj}}{d_{ij}} \right)^{2} \end{equation} where the terms \begin{equation} \widetilde\sigma_{kij} =\sqrt{\sigma _{kij}^2+ (d^{\prime}_{ij} \delta_{k}t_{ij})^2}, \ \ k\in\{a,b\}, \end{equation} include also the type A and type B horizontal errors on $t$, propagated to the $\chi^2$ following the so-called effective variance method of the CERN data analysis programme ROOT; here $d^{\prime}_{ij}$ denotes the numerical derivative of the data at the point $t_{ij}$, while $\delta_{k}t_{ij}$ denotes the horizontal error of type $k\in\{a,b\}$ on that point. The numerical derivative is calculated as \begin{equation} d^{\prime}(t_{ij})=\frac{d_{(i+1)j}-d_{ij}}{t_{(i+1)j}-t_{ij}}; \end{equation} \item The correlation coefficients for type B and C errors are denoted by $\epsilon_b$ and $\epsilon_c$, respectively.
These numbers are free parameters to be fitted to the data; their best values are typically in the interval $(-1,1)$; \item The last two terms in Eq.~(\ref{eq:chi2-final}) serve to fit also the measured total cross-section and ratio $\rho_{\rm 0}$ values along with the differential cross section data points; $d_{\sigma_{\rm tot}}$ and $d_{\rho_{\rm 0}}$ denote the measured total cross section and ratio $\rho_{\rm 0}$ values, $\delta\sigma_{\rm tot}$ and $\delta\rho_{\rm 0}$ are their full errors, and $th_{\sigma_{\rm tot}}$ and $th_{\rho_{\rm 0}}$ are their theoretical values calculated from the ReBB model. \end{itemize} This scheme has been validated by evaluating the $\chi^2$ from a full covariance matrix fit and from the PHENIX method of diagonalizing the covariance matrix of the differential cross-section of elastic $pp$ scattering measured by TOTEM at $\sqrt{s} = 13$ TeV~\cite{Antchev:2017dia}, using the L\'evy expansion method of Ref.~\cite{Csorgo:2018uyp}. The fit with the full covariance matrix results in the same minimum within one standard deviation of the fit parameters \cite{Csorgo:2020rlb}, and hence in the same significance, as the fit with the PHENIX method. Based on this validation, we apply the PHENIX method in the data analysis described in this manuscript. Let us also note that in the case of the $\sqrt{s} = 7 $ TeV TOTEM data set, analysed below, the type B systematic errors, which shift all the data points together up or down by a $t$-dependent value, are asymmetric~\cite{Antchev:2013gaa}. This effect is handled by using the up or down type B errors depending on the sign of the correlation coefficient $\epsilon_b$: for a positive or negative sign of $\epsilon_b$, we utilized the upward or downward type B errors, respectively. Note that the type A errors, which enter the denominator of the $\chi^2$ definition of Eq.~(\ref{eq:chi2-final}), are symmetric even in the case of this $\sqrt{s} = 7$ TeV $pp$ dataset.
The $\chi^2$ distribution assumes symmetric type A errors, which enter the denominators of the $\chi^2$ definition. Thus, even in this case of asymmetric type B errors, which enter the numerators of Eq.~(\ref{eq:chi2-final}) at $\sqrt{s} = 7$ TeV, the $\chi^2$ distribution can be utilized to estimate the significances and confidence levels of the best fits. \section{Fit results\label{sec:fit_results}} The ReBB model was fitted to the proton-proton differential cross section data measured by the TOTEM Collaboration at $\sqrt{s}= 2.76$, $7.0$ and $13$ TeV, based on Refs.~\cite{Antchev:2018rec,Antchev:2013gaa,Antchev:2017dia}, as well as to the differential cross section data of elastic proton-antiproton scattering measured at $\sqrt{s} = 0.546$ and $1.96$ TeV in Refs.~\cite{Battiston:1983gp,Bozzo:1985th,Abazov:2012qb}, respectively. Similarly to the earlier studies of Refs.~\cite{Bialas:2006kw,Bialas:2007eg,Csorgo:2013bwa,CsorgO:2013kua,Nemes:2015iia}, the model parameters $A_{qq}=1$ and $\lambda=\frac{1}{2}$ were kept at constant values throughout the fitting procedure. Here $A_{qq}$ corresponds to a normalization constant and $\lambda$ describes the mass ratio of constituent quarks to diquarks in the $p=(q,d)$ version of the Real Extended Bialas-Bzdak model of Ref.~\cite{Nemes:2015iia}. Thus the number of free parameters of this model, for a fixed $s$ and a specific collision type, is reduced to four: $R_{qd},\,R_{q},\,R_{d}$ and $\alpha$. It is natural to expect that $R_q(s)$, $R_d(s)$ and $R_{qd}(s)$ are the same functions of $s$ both for $pp$ and $p\bar p$ collisions, as the distribution of partons inside protons at a given energy is expected to be the same as that of anti-partons inside anti-protons. In this section, however, this is not assumed but tested, and the parameters of the ReBB model are determined at four different colliding energies in the TeV region, using $pp$ data sets at $\sqrt{s} = 2.76$ and $7 $ TeV, and $p\bar p$ datasets at $\sqrt{s} = 0.546$ and $1.96$ TeV.
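The structure of the $\chi^2$ of Eq.~(\ref{eq:chi2-final}) minimized in these fits can be illustrated with a short, self-contained sketch for a single sub-dataset, keeping vertical errors only (the horizontal terms require the numerical derivative discussed above). The function name, the variable names and the synthetic numbers below are our own illustrative choices, not taken from the actual fitting code.

```python
import numpy as np

def phenix_chi2(eps_b, eps_c, d, th, sig_a, sig_b, sig_c):
    """Schematic PHENIX-style chi^2 for one sub-dataset (vertical errors only).

    d, th        : measured and model differential cross sections (arrays)
    sig_a, sig_b : type A (uncorrelated) and type B (correlated) errors (arrays)
    sig_c        : type C fractional normalization error (scalar)
    eps_b, eps_c : correlation coefficients, fitted together with the model
    """
    shifted = d + eps_b * sig_b + eps_c * d * sig_c   # systematically shifted data
    sig_a_scaled = sig_a * shifted / d                # keep the fractional type A error
    chi2 = np.sum(((shifted - th) / sig_a_scaled) ** 2)
    return chi2 + eps_b**2 + eps_c**2                 # penalty terms for the shifts

# Tiny synthetic example: model equals data, so only the penalties can contribute.
d = np.array([10.0, 8.0, 6.0])
res = phenix_chi2(0.0, 0.0, d, d, 0.1 * d, 0.05 * d, 0.04)
print(res)
```

With $\epsilon_b = \epsilon_c = 0$ and a perfect model the $\chi^2$ vanishes; any nonzero correlation coefficient both shifts the data towards the model and pays the corresponding $\epsilon^2$ penalty, which is what constrains the fitted coefficients to the interval quoted above.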
These fits were performed in the diffractive interference or dip and bump region, including data points both before the diffractive minimum and after the maximum; in each case the fitted range lies within $0.372 \le -t \le 1.2 $ GeV$^2$. In this kinematic range, the ReBB model provided a data description with a statistically acceptable fit quality, with confidence levels CL $\ge 0.1$ \% in each case. In this manuscript, our aim is to extrapolate the differential cross-section of elastic $pp$ and $p\bar p$ collisions to exactly the same energies, in order to draw model-dependent conclusions about the significance of a crossing-odd or Odderon effect in these data. For this purpose, a model that can be used to study the excitation function of the $pp$ and $p\bar p$ differential cross-sections in the $0.5 \le \sqrt{s} \le 7$ TeV domain is sufficient. The results of these statistically acceptable quality fits are summarized in Table~\ref{tab:fit_parameters} and detailed below. Other data sets, which do not have a sufficient amount of data in this interference region, were utilized for cross-checks only, to test the extracted energy dependencies of the model parameters, as detailed in Sec.~\ref{sec:sanity_tests}. Additionally, we also describe the current status of our fits to the differential cross-section at $\sqrt{s} = 13$ TeV at the end of this section. We thus describe three fits to $pp$ differential cross section data sets at $\sqrt{s} = 2.76$, $7$ and $13$ TeV as well as two fits to $p\bar p$ differential cross section datasets at $\sqrt{s} = 0.546$ and $1.96$ TeV, respectively. Our fit results are graphically shown in Figs.~\ref{fig:reBB_model_fit_0_546_TeV}-\ref{fig:reBB_model_fit_13_TeV}. The minimization of the $\chi^2$ defined by Eq.~(\ref{eq:chi2-final}) was done with Minuit, and the parameter errors were estimated by using the MINOS algorithm, which takes into account both parameter correlations and non-linearities.
We accept the fit as a successful representation of the fitted data under the condition that the fit status is converged, the error matrix is accurate, and the confidence level of the fit, CL, is $\ge 0.1$ \%, as indicated on Figs.~\ref{fig:reBB_model_fit_0_546_TeV}-\ref{fig:reBB_model_fit_7_TeV}. As these criteria are not satisfied on Fig.~\ref{fig:reBB_model_fit_13_TeV}, the parameters of this fit were not taken into account when determining the excitation functions, i.e., the energy dependence of the physical fit parameters in the few TeV energy range. Let us now discuss each fit in a bit more detail. The S$p\bar p$S differential cross section data on elastic $p\bar p$ collisions~\cite{Battiston:1983gp,Bozzo:1985th} were measured in the squared momentum transfer range of $0.03\leq|t|\leq1.53 $ GeV$^2$, which in the fitted range has been subdivided into two sub-ranges with different normalization uncertainties (type C errors): $\sigma_{c}$ = 0.03 for $0.37\leq|t|\leq0.495 $ GeV$^2$ and $\sigma_{c}$ = 0.1 for $0.46\leq|t|\leq1.2$ GeV$^2$. In the case of this data set, the vertical type A errors $\sigma_{ai}$ are available, but the horizontal type A errors ($\delta_{a}t_i$) and the type B errors, either vertical ($\sigma_{bi}$) or horizontal ($\delta_{b}t_i$), were not published. The measured total cross section with its total uncertainty is $\sigma_{\rm tot}=61.26\pm0.93$ mb \cite{Tanabashi:2018oca}, while the $\rho_0=0.135\pm0.015$ value was measured at the slightly different energy of $\sqrt{s} = 0.541$ TeV. The total, elastic and inelastic cross sections and the parameter $\rho_0$ are calculated according to Eqs.~(\ref{eq:total_cross_section})-(\ref{eq:rho_parameter}). The fit is summarized in Fig.~\ref{fig:reBB_model_fit_0_546_TeV}. The fit quality is satisfactory, CL = 8.74 \%.
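The quoted confidence levels are the $\chi^2$ survival probabilities; for the fit above, Table~\ref{tab:fit_parameters} lists $\chi^2/NDF = 44.49/33$. This can be checked in one line (the use of scipy here is our own choice of tool, not that of the original analysis):

```python
from scipy.stats import chi2 as chi2_dist

# CL of the sqrt(s) = 0.546 TeV fit: chi^2 = 44.49 with NDF = 33 (Table 1)
cl = chi2_dist.sf(44.49, df=33)   # survival function = P(chi^2 >= observed)
print(f"CL = {100 * cl:.2f} %")   # close to the quoted 8.74 %
```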
Compared to the values available in the literature \cite{Tanabashi:2018oca} ($\sigma_{in}=48.39\pm1.01$ mb and $\sigma_{el}=12.87\pm0.3$ mb), the model reproduces the experimental values of the forward measurables within one $\sigma$, thus these fit parameters represent the data in a statistically acceptable manner. The elastic $p\bar p$ differential cross section data are available at $\sqrt{s} = 1.96$ TeV in the range of $0.26\leq|t|\leq1.20 $ GeV$^2$, as published by the D0 Collaboration in Ref.~\cite{Abazov:2012qb}, with a type C normalization uncertainty of $\sigma_{c}$ = 0.144. For this data set, the vertical type A and type B errors were not published separately: the quadratically added statistical and systematic uncertainties were published, and, as the statistical errors are point-to-point fluctuating, type A errors, in our analysis the combined $t$-dependent D0 errors were handled as type A, combined statistical and systematic errors. Horizontal type A and type B errors were not published in Ref.~\cite{Abazov:2012qb}. At this energy, we did not find published experimental $\sigma_{\rm tot} $ and $\rho_0$ values. The values of the total cross section and parameter $\rho_0$ at this energy, that we utilized in the fitting procedure, are the predicted values from the COMPETE Collaboration \cite{Cudell:2002xe}: $\sigma_{\rm tot}=78.27\pm1.93$ mb and $\rho_0=0.145\pm0.006$. The quality of the corresponding fit, shown in Fig.~\ref{fig:reBB_model_fit_1_96_TeV}, is satisfactory, CL = 51.12 \%, and the COMPETE values of the forward measurables are reproduced within one standard deviation. We conclude that the corresponding ReBB model parameters represent the data in a statistically acceptable manner.
Based on the successful description of these two $p\bar p$ datasets at $\sqrt{s} = 0.546$ and $1.96$ TeV, we find that the form of the ReBB model as specified for $pp$ collisions in Ref.~\cite{Nemes:2015iia} is able, without any modifications, to describe the differential cross-section of elastic $p\bar p$ collisions in the TeV energy range. Let us now discuss the new fits of the same model to elastic $pp$ collisions in the TeV energy range. At $\sqrt{s} = 2.76 $ TeV, the differential cross section data of elastic $pp$ collisions was measured in the $t$ range of $0.072\leq -t\leq 0.74 $ GeV$^2$ by the TOTEM Collaboration~\cite{Antchev:2018rec}. Actually, this measurement was performed in two subranges: $0.072\leq|t|\leq0.462 $ GeV$^2$ and $0.372\leq|t|\leq0.74 $ GeV$^2$. Both ranges had the same normalization uncertainty of $\sigma_c$ $=$ $0.06$. During the fit, the $t$-dependent vertical statistical (type A) and vertical systematic (type B) errors, the normalization (type C) errors and the experimental value of the total cross section with its total uncertainty ($\sigma_{\rm tot}=84.7\pm3.3$ mb \cite{Antchev:2017dia}) were taken into account. Horizontal type A and type B errors are not published at this energy. The fit quality of the ReBB model is demonstrated on Fig.~\ref{fig:reBB_model_fit_2_76_TeV}: the fit is satisfactory, with CL = 36.52 \%. The experimental values of the forward measurables ($\sigma_{in}=62.8\pm2.9$ mb, $\sigma_{el}=21.8\pm1.4$ mb \cite{Nemes:2017gut,Antchev:2017dia}) are reproduced within one standard deviation. Experimental data are not yet available for the parameter $\rho_0$; however, the value of $\rho_0$ calculated from the fitted ReBB model is within the total error band of the COMPETE prediction \cite{Cudell:2002xe}. We thus conclude that the corresponding ReBB model parameters represent the $pp$ data at $\sqrt{s} = 2.76 $ TeV in a statistically acceptable manner.
At $\sqrt{s} = 7$ TeV, the $pp$ differential cross section data was published by the TOTEM Collaboration~\cite{Antchev:2013gaa}, measured in the range of $0.005\leq|t|\leq2.443 $ GeV$^2$. The measurement was performed in two subranges: $0.005\leq|t|\leq0.371$ GeV$^2$ and $0.377\leq|t|\leq2.443 $ GeV$^2$. Both ranges had the same normalization uncertainty of $\sigma_c$ = 0.042. The fit includes only the second subrange with the $t$-dependent (both vertical and horizontal) statistical (type A) and systematic (type B) errors, the normalization (type C) error and the experimental values of the total cross section and the parameter $\rho_0$ with their total uncertainties ($\sigma_{\rm tot}=98.0\pm2.5$ mb and $\rho_0=0.145\pm0.091$ \cite{Antchev:2013haa}). The quality of the corresponding fit, shown in Fig.~\ref{fig:reBB_model_fit_7_TeV}, is statistically acceptable, with CL = 0.71 \%. The experimental values of the forward measurables ($\sigma_{in}=72.9\pm1.5$ mb, $\sigma_{el}=25.1\pm1.1$ mb \cite{Antchev:2013haa}) are reproduced by the fitted ReBB model within one sigma (the experimental and calculated values overlap within their errors). We thus conclude that the corresponding ReBB model parameters represent these $pp$ data at $\sqrt{s} = 7.0 $ TeV in a statistically acceptable manner, in the fitted range of $0.377\leq|t|\leq1.205 $ GeV$^2$, before and after the diffractive minimum. At $\sqrt{s} = 8$ TeV, the TOTEM Collaboration has not yet published the final differential cross-section results in the range of the diffractive minimum and maximum. However, preliminary results were presented at conferences~\cite{Kaspar:2018ISMD}, and the differential cross-section in the low $-t$ region was published in Ref.~\cite{Antchev:2015zza}. We thus use this dataset for cross-checks only, as the lack of data in the diffractive minimum prevents us from performing a full ReBB model fit.
Additional data at very low $-t$, in the Coulomb-Nuclear Interference region, is also available from TOTEM at this particular energy~\cite{Antchev:2016vpy}; however, in the present study we do not discuss the kinematic range where Coulomb effects may play a role. At $\sqrt{s} = 13$ TeV, the differential cross section data was measured by the TOTEM Collaboration in the range of $0.03\leq|t|\leq3.8$ GeV$^2$ \cite{Antchev:2018edk} with a normalization (type C) uncertainty of $\sigma_c$ = 0.055. As far as we know, the only statistically acceptable quality fit with CL $\geq$ 0.1 \% to this dataset so far was obtained by some of us with the help of the model-independent L\'evy series in Ref.~\cite{Csorgo:2018uyp}. We also note that several new features show up in the soft observables of elastic scattering, with a threshold behaviour around $\sqrt{s} = 5-7$ TeV, certainly below 13 TeV~\cite{Csorgo:2019fbf}. We have cross-checked whether the ReBB model, which works reasonably well from $\sqrt{s} = 23.5$ GeV to $7$ TeV, is capable of describing this data set at $\sqrt{s} = 13$ TeV in a statistically acceptable manner. The result is negative, as indicated in Fig.~\ref{fig:reBB_model_fit_13_TeV}. This fit includes the $t$-dependent statistical (type A) and systematic (type B) errors, the normalization (type C) error and the experimental values of the total cross section and the parameter $\rho_0$ with their total uncertainties ($\sigma_{\rm tot}=110.5\pm2.4$ mb and $\rho_0=0.09\pm0.01$ \cite{Antchev:2017yns}). The quality of the obtained fit (Fig.~\ref{fig:reBB_model_fit_13_TeV}) is not satisfactory, CL = 3.17$\times10^{-11}$ \%, and the experimental values of the cross sections ($\sigma_{in}=79.5\pm1.8$ mb, $\sigma_{el}=31.0\pm1.7$ mb \cite{Antchev:2017dia}) are not reproduced by the fitted ReBB model within one sigma at 13 TeV either. The value of $\rho_0$, however, was described surprisingly well.
This TOTEM dataset is very detailed and precise, and changes of certain trends in $B(s)$ and the ratio $\sigma_{\rm el}(s)/\sigma_{\rm tot}(s)$ are seen experimentally~\cite{Csorgo:2019fbf}. Theoretically, a new domain of QCD may emerge at high energies, possibly characterised by hollowness or a toroidal structure, corresponding to a black ring-like distribution of inelastic scatterings \cite{Dremin:2014dea,Dremin:2014spa,Albacete:2016pmp,Troshin:2017ucy}. A statistically significant, more than 5 $\sigma$ hollowness effect was found at $\sqrt{s} = 13$ TeV within a model-independent analysis of the shadow profile at these energies, using the technique of L\'evy series~\cite{Csorgo:2018uyp}. We conclude that the ReBB model needs to be generalized to have a stronger non-exponential feature at low $-t$ to accommodate the new features of the differential cross-section data at $\sqrt{s} = 13$ TeV or larger energies. This work is currently in progress, but goes well beyond the scope of the current manuscript. Most importantly, such a generalization is not necessary for a comparison of the differential cross-sections of elastic $pp$ and $p\bar p$ collisions in the few TeV range, as we have to bridge only a logarithmically small energy difference between the top D0 energy of $\sqrt{s} = 1.96$ TeV and the lowest TOTEM energy of $\sqrt{s} = 2.76$ TeV. We thus find that the Real Extended Bialas-Bzdak model describes, effectively and in a statistically acceptable manner, the differential cross-sections of elastic $pp$ and $p\bar p$ collisions in the few TeV range of $0.546 \le \sqrt{s} \le 7 $ TeV and in the squared four-momentum transfer range of $0.37\leq-t\leq1.2 $ GeV$^2$. Its physical fit parameters represent the data, and thus their energy dependence can be utilized to determine the excitation functions of these model parameters, as detailed in Section~\ref{sec:excitation_functions}.
The values of the physical fit parameters and their errors obtained from the above discussed physically and statistically acceptable fits are summarized in Table~\ref{tab:fit_parameters}, where four datasets are analyzed and four different physical parameters are extracted at four different energies. These sixteen physical parameters form the basis of the determination of the energy dependencies, which are found to be consistent with affine linear functions of $\ln(s)$. The three scale parameters are, within errors, the same in elastic $pp$ and $p\bar p$ collisions, while the opacity parameters are different for $pp$ and $p\bar p$ collisions. Thus the excitation functions, i.e., the energy dependence of the differential cross-sections both for $pp$ and $p\bar p$ elastic scattering, are determined by $5\times 2 = 10$ physical parameters in this framework of calculations. These 10 parameters are summarized in Table~\ref{tab:excitation_pars}. We thus conclude that this Real Extended Bialas-Bzdak model is good enough to extrapolate the differential cross-section of elastic $pp$ collisions down to $\sqrt{s} = 0.546$ and $1.96$ TeV, and to extrapolate that of elastic $p\bar p$ collisions up to $\sqrt{s} = 2.76$ and $7$ TeV. We duly note that, in order to evaluate similar observables at $\sqrt{s} = 13$ TeV or at even higher energies in a realistic manner, this model needs to be generalized and further developed. \begin{table*}[htb] \caption{The values of the fitted ReBB model parameters to $pp$ and $p\bar p$ data from SPS to LHC energies. The errors and the values are rounded to three significant decimal digits. For 7 TeV, the parameter error values shown in parentheses do not include the contribution from the parameter correlations, i.e., are less than the MINOS errors.
} \label{tab:fit_parameters} \begin{center} {\begin{tabular}{|c|c|c|c|c|} \hline\noalign{\smallskip} $\sqrt{s}$ [TeV] &0.546 ($p\bar p$) &1.960 ($p\bar p$) &2.760 ($pp$) &7.000 ($pp$) \\ \noalign{\smallskip}\hline\noalign{\smallskip} $|t|$ [GeV$^{2}$] &(0.375, 1.210) &(0.380, 1.200) &(0.372, 0.741) &(0.377, 1.205) \\ $\chi^{2}/NDF$ &44.49/33 &8.22/9 &17.32/16 & 80.29/52 \\ CL [\%] &8.74 &51.12 &36.52 & 0.713 \\ $R_{q}$ [fm] &0.349 $\pm$ 0.003 &0.396 $\pm$ 0.006 &0.419 $\pm$ 0.011 & 0.438 $\pm$ 0.005 ($\pm$ 0.001)\\ $R_{d}$ [fm] &0.825 $\pm$ 0.004 &0.869 $\pm$ 0.012 &0.877 $\pm$ 0.014 & 0.920 $\pm$ 0.009 ($\pm$ 0.002)\\ $R_{qd}$ [fm] &0.284 $\pm$ 0.010 &0.294 $\pm$ 0.029 &0.197 $\pm$ 0.084 & 0.333 $\pm$ 0.026 ($\pm$ 0.002)\\ $\alpha$ &0.117 $\pm$ 0.002 &0.163 $\pm$ 0.005 &0.126 $\pm$ 0.006 & 0.125 $\pm$ 0.002 ($\pm$ 0.001)\\ $\epsilon_{b1}$ &-- &-- &-0.094 $\pm$ 0.946 &0.001 $\pm$ 0.003\\ $\epsilon_{c1}$ &-0.398 $\pm$ 0.911 &-0.013 $\pm$ 0.834 & 0.059 $\pm$ 0.985 &-0.091 $\pm$ 0.866\\ $\epsilon_{c2}$ &-0.090 $\pm$ 0.416 &-- &-- &-- \\ \hline \end{tabular}} \end{center} \end{table*} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_546_GeV_fit_ni_2020_5_24_0_34_47.pdf} \caption{ The fit of the ReBB model to the $p\bar p$ SPS $\sqrt{s}=0.546$~TeV data~\cite{Battiston:1983gp,Bozzo:1985th} in the range of $0.37 \leq -t \leq 1.2$ GeV$^2$. The fit includes the published errors, that are statistical (type A) and the normalization (type C) uncertainties, as well as the experimental value of the total cross section with its full error according to Eq.~(\ref{eq:chi2-final}). The fitted parameters are shown in the left bottom corner and their values are rounded up to three decimal digits. The fit quality parameters and the values of the total, inelastic and elastic cross-sections as well as the value of the $\rho_0$ parameter are summarized in the top right part of the plot. 
} \label{fig:reBB_model_fit_0_546_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_1_96_TeV_fit_ni_Rq_2020_5_24_0_27_47.pdf} \caption{The fit of the ReBB model to the $p\bar p$ D0 $\sqrt{s}=1.96$~TeV data \cite{Abazov:2012qb} in the range of $0.37\leq-t\leq1.2 $ GeV$^2$. The fit includes the $t$-dependent statistical and systematic uncertainties added together quadratically and treated as type A errors as well as the normalization (type C) uncertainty according to Eq.~(\ref{eq:chi2-final}). The values of the total cross section and parameter $\rho_0$ used in the fit are the predicted values from the COMPETE Collaboration \cite{Cudell:2002xe}. Otherwise, same as Fig.~\ref{fig:reBB_model_fit_0_546_TeV}. } \label{fig:reBB_model_fit_1_96_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_2_76_TeV_fit_ni_2020_5_24_0_32_7.pdf} \caption{The fit of the ReBB model to the $pp$ TOTEM $\sqrt{s}=2.76$~TeV data in the range of $0.37\leq-t\leq0.74 $ GeV$^2$ \cite{Antchev:2018rec}. The fit includes the $t$-dependent statistical (type A) and systematic (type B) uncertainties, the normalization (type C) uncertainty and the experimental value of the total cross section with its full error according to Eq.~(\ref{eq:chi2-final}). Otherwise, same as Fig.~\ref{fig:reBB_model_fit_0_546_TeV}. } \label{fig:reBB_model_fit_2_76_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_7_TeV_fit_ni_2020_5_27_14_24_51.pdf} \caption{The fit of the ReBB model to the $pp$ TOTEM $\sqrt{s}=7$~TeV data in the range of $0.37\leq-t\leq1.2 $ GeV$^2$ \cite{Antchev:2013gaa}. The fit includes the $t$-dependent statistical (type A) and systematic (type B) uncertainties, the normalization (type C) uncertainty and the experimental values of the total cross section and parameter $\rho_0$ with their full error according to Eq.~(\ref{eq:chi2-final}). 
Otherwise, same as Fig.~\ref{fig:reBB_model_fit_0_546_TeV}. } \label{fig:reBB_model_fit_7_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_13_TeV_fit_ni_2020_5_24_1_52_53.pdf} \caption{The fit of the ReBB model to the $pp$ TOTEM $\sqrt{s}=13$~TeV data in the range of $0.37\leq-t\leq1.2$ GeV$^2$ \cite{Antchev:2018edk}. The fit includes the $t$-dependent statistical (type A) and systematic (type B) uncertainties, the normalization (type C) uncertainty and the experimental values of the total cross section and parameter $\rho_0$ with their full error according to Eq.~(\ref{eq:chi2-final}). The fit parameters do not represent the data in a statistically acceptable manner, given that CL $\ll$ 0.1 \%. Otherwise, same as Fig.~\ref{fig:reBB_model_fit_0_546_TeV}. } \label{fig:reBB_model_fit_13_TeV} \end{figure} \section{Excitation functions of the fit parameters \label{sec:excitation_functions}} The values of the physical fit parameters and their errors obtained from the above discussed physically and statistically acceptable fits are summarized in Table~\ref{tab:fit_parameters}. This table contains a list of five different physical parameters. Out of them, the three scale parameters called $R_q$, $R_d$ and $R_{qd}$ can be determined at four different energies, providing 12 numbers, while the opacity parameters $\alpha^{pp}$ and $\alpha^{p\bar p}$ describing $pp$ and $p\bar p$ collisions can each be determined at two different energies only, providing an additional 4 numbers, altogether 16 physical input parameters. These 16 physical parameters form the basis of the determination of the energy dependencies, which are found to be consistent with affine linear functions of $\ln(s)$.
Namely, we fitted the $s$-dependence of the model parameters one by one, using the affine linear logarithmic function, \begin{equation} P(s)=p_{0} + p_{1}\cdot\ln{(s/s_{0})}, \ \ P\in{\{R_{q},R_{d},R_{qd},\alpha\}}, \label{eq:parametrization_of_extrapolation_lin} \end{equation} where $p_0$ and $p_1$ are free parameters and $s_{0}$ is fixed at 1 GeV$^{2}$. We obtain good quality fits, with methods and results similar to those of Ref.~\cite{Nemes:2015iia}, with confidence levels $CL \gg 0.1$ \%, as detailed in Table~\ref{tab:excitation_pars}. The three scale parameters are, within errors, the same in elastic $pp$ and $p\bar p$ collisions, while the opacity parameters are different for $pp$ and $p\bar p$ collisions. Thus the excitation functions, i.e., the energy dependence of the differential cross-sections both for $pp$ and $p\bar p$ elastic scattering, are determined by $5\times 2 = 10$ physical parameters in the framework of the ReBB model. The energy dependencies of the scale parameters $R_q$, $R_d$, and $R_{qd}$ are graphically shown in Figs.~\ref{fig:par_Rq_lin}-\ref{fig:par_Rqd_lin}. These figures clearly indicate that the energy dependence of the geometrical scale parameters is consistent with the same evolution, namely the same linear rise in $\ln(s)$ for both $pp$ and $p\bar p$ scattering: when we fitted these parameters together with a linear logarithmic function, we obtained a statistically acceptable fit in each of these three cases. This result extends and improves the earlier results published in Ref.~\cite{Nemes:2015iia} for elastic $pp$ scattering to the case of both $pp$ and $p\bar p$ collisions in a natural manner. For comparison, these earlier results are also shown with a dotted red line on the panels of Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits}, indicating the improved precision of the current analysis, owing to the larger number of data points included in the TeV energy range.
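The affine linear trends of Eq.~(\ref{eq:parametrization_of_extrapolation_lin}) can be approximately reproduced from the numbers of Table~\ref{tab:fit_parameters} alone. The sketch below performs an error-weighted least-squares fit of $R_q(s)$; this closed-form implementation is our own minimal version, not the Minuit fit used in the actual analysis, and it neglects parameter correlations between energies.

```python
import numpy as np

# R_q values and errors from Table 1, at sqrt(s) = 0.546, 1.96, 2.76 and 7 TeV
sqrt_s_gev = np.array([546.0, 1960.0, 2760.0, 7000.0])
Rq  = np.array([0.349, 0.396, 0.419, 0.438])   # fm
dRq = np.array([0.003, 0.006, 0.011, 0.005])   # fm

x = np.log(sqrt_s_gev ** 2)      # ln(s/s0), with s0 = 1 GeV^2
w = 1.0 / dRq ** 2               # inverse-variance weights

# Closed-form solution of the weighted least squares for P(s) = p0 + p1 ln(s/s0)
S, Sx, Sy = w.sum(), (w * x).sum(), (w * Rq).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * Rq).sum()
delta = S * Sxx - Sx ** 2
p1 = (S * Sxy - Sx * Sy) / delta
p0 = (Sxx * Sy - Sx * Sxy) / delta
print(p0, p1)   # compare with Table 2: p0 = 0.131 +- 0.010, p1 = 0.017 +- 0.001
```

The resulting slope and intercept land close to the $R_q$ row of Table~\ref{tab:excitation_pars}, illustrating that the quoted trend parameters indeed follow from the tabulated fit values.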
For the opacity parameter $\alpha$, seen on panel (d) of Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits}, the situation is different: the $pp$ and $p\bar p$ points are not on the same trend, and this is meaningful because the $\alpha$ parameters, which characterize the dip in the ReBB model, are obtained with great precision both in the $pp$ and in the $p\bar p$ cases. The difference between the excitation functions $\alpha^{pp}(s)$ and $\alpha^{p\bar p}(s)$ corresponds to the qualitative difference between the differential cross-sections of elastic $pp$ and $p\bar p$ collisions in the few TeV energy range: the presence of a persistent dip and bump structure in the differential cross-section of elastic $pp$ collisions, and the lack of a similar feature in elastic $p\bar p$ collisions. In the case of the parameter $\alpha$ we have to consider that there are only two, rather precisely determined data points in both $pp$ and $p\bar p$ collisions from the ReBB model fits presented so far. We can already conclude that they cannot be described by a single line, as an affine linear fit of Eq.~(\ref{eq:parametrization_of_extrapolation_lin}) to all four points would fail. Without additional information, however, we cannot determine the trends and their uncertainties: two points can always be connected with a straight line, so an affine linear description of both the two $pp$ and the two $p\bar p$ data points would have a vanishing $\chi^2$ and an indeterminable confidence level. This problem, however, is solved by utilizing the results of \ref{sec:appendix-b} on the proportionality between the model parameter $\alpha$ and the experimentally measurable real-to-imaginary ratio $\rho_0$. This proportionality is shown graphically in Fig.~\ref{fig:rho0-per-alpha-vs-P0-LHC}.
The constant of proportionality in the few TeV region is an almost energy independent value, $\rho_0/\alpha = 0.85 \pm 0.01$, well within the errors of the $\rho_0$ measurements, in agreement with a theoretically obtained function, shown with a red solid line in Fig.~\ref{fig:rho0-per-alpha-vs-P0-LHC} and derived in \ref{sec:appendix-b}. This proportionality allows one to add new datapoints to the trends of $\alpha(s)$, both for the $pp$ and for the $p\bar p$ case, by simply rescaling the measured $\rho_0$ values. We found three additional published experimental $\rho_0$ data points for $p\bar p$ collisions: $\rho_0 = 0.135 \pm 0.015$ at $\sqrt{s} = 0.541$ TeV by the UA4/2 Collaboration in Ref.~\cite{Haguenauer:1993kan}, and two measurements at $\sqrt{s} = 1.8$ TeV by the E-710 and the E811 collaborations in Refs.~\cite{Amos:1991bp,Avila:2002bp}, respectively. At $\sqrt{s} = 1.8$ TeV, we have utilized the combined value of these E-710 and E811 measurements~\cite{Avila:2002bp}, corresponding to $\rho_0(p\bar p) = 0.135 \pm 0.044$. The constancy of these $\rho_0(s)$ values in the few TeV energy range, when converted with the help of Fig.~\ref{fig:rho0-per-alpha-vs-P0-LHC} to the opacity parameter $\alpha(p\bar p)$ of the Bialas-Bzdak model, corresponds to the lack of diffractive minima, and hence to an Odderon signal, in elastic $p\bar p$ collisions: it leads to $\alpha(p\bar p) \approx 0.16 \pm 0.06$, which is, within its large errors, the same as the $\alpha = 0.163 \pm 0.005$ value obtained from the ReBB model fit to the D0 data at $\sqrt{s} = 1.96 $ TeV, summarized in Fig.~\ref{fig:reBB_model_fit_1_96_TeV}. Similarly, the $\alpha$ parameter extracted from $\rho_0$ at $\sqrt{s} = 0.541$ TeV is $\alpha \approx 0.16 \pm 0.02$, which agrees, within twice the relatively large errors of the $\rho_0$ analysis, with the value of $\alpha(p\bar p) = 0.117 \pm 0.002 $ obtained from the analysis of the differential cross-section, shown in Fig.~\ref{fig:reBB_model_fit_0_546_TeV}.
These values indicate a slowly rising $\alpha(p\bar p)$ or, correspondingly, $\rho_0(p\bar p)$ in the TeV energy range. The final values of these data points, together with the corresponding errors, are connected with a long-dashed line in panel (d) of Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits}. Table~\ref{tab:excitation_pars} indicates that for $\alpha(p\bar p)$ the coefficient $p_1(p\bar p) = 0.018 \pm 0.002$ is a significantly positive number. For the opacity coefficient in elastic $pp$ collisions, $\alpha(pp)$, on the other hand, the opposite effect is seen when the $\rho_0$ measurements at $\sqrt{s} = 7$ and $8$ TeV are also taken into account, based on the data of the TOTEM Collaboration published in refs.~\cite{Antchev:2013iaa,Antchev:2016vpy}. As is by now very well known, these values indicate a nearly constant, slightly decreasing trend, and based on the fits of the extracted four data points of $\alpha(pp)$ we find that in the few TeV energy range this trend is nearly constant, indicated by the solid red line of panel (d) of Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits}. Table~\ref{tab:excitation_pars} indicates that for $\alpha(pp)$ the coefficient of increase with $\ln(s)$ is consistent with zero in this energy range, $p_1(pp) = - 0.003 \pm 0.003$, which is significantly smaller than the above quoted positive number, $p_1(p\bar p) = 0.018 \pm 0.002$. Thus the Odderon signal in this analysis can be an estimated $6$--$7\sigma$ effect, as a consequence of the inequality $p_1(pp) \neq p_1(p\bar p)$ alone. In the subsequent sections we first test if the excitation functions, determined with the help of the $p_0$ and $p_1$ parameters of Table~\ref{tab:excitation_pars}, indeed reproduce the data at all the measured energies in the relevant kinematic range, then we proceed carefully to determine the significance of a model dependent Odderon signal.
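The leading-order size of this effect can be checked from the rounded slope values alone. This is only an illustrative sketch, assuming uncorrelated Gaussian errors on the two slope parameters; the rounded inputs give a somewhat smaller number than the estimate obtained with the unrounded fit results:

```python
import math

# rounded slope parameters p1 of the affine linear trends of alpha(s),
# as quoted in the table of excitation parameters
p1_pp, err_pp = -0.003, 0.003
p1_pbarp, err_pbarp = 0.018, 0.002

# naive significance of p1(pp) != p1(pbar p), errors added in quadrature
z_p1 = abs(p1_pbarp - p1_pp) / math.sqrt(err_pp**2 + err_pbarp**2)  # ~5.8
```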
We perform these cross-checks against all kinds of available data, including those data that were not utilized in the determination of the trends, for example because their acceptance was too limited to determine all the fit parameters of the ReBB model. \begin{table*}[tbh] \caption { Summary of the parameter values that determine the energy dependence, obtained by fitting a linear logarithmic model according to Eq.~(\ref{eq:parametrization_of_extrapolation_lin}). The values of the parameters are rounded to three significant decimal digits. For $R_q$, $R_d$ and $R_{qd}$, the values of the parameters $p_0$ and $p_1$ are given in units of femtometers (fm). For the parameters $\alpha(pp)$ and $\alpha(p\bar p)$, the parameters $p_0$ and $p_1$ are dimensionless.} \begin{center} {\begin{tabular}{|c|c|c|c|c|c|} \hline Parameter & $R_{q}$ [$fm$] & $R_{d}$ [$fm$] & $R_{qd}$ [$fm$] & $\alpha$ ($pp$) &$\alpha$ ($p\bar p$) \\ \hline $\chi^{2}/NDF$ & $1.596/2$ & $0.469/2$ & $2.239/2$ & $0.760/2$ &$1.212/2$ \\ CL [\%] & 45.03 & 79.10 & 32.65 & 68.39 &54.54 \\ \hline $p_{0}$ & $0.131\pm0.010$ & $0.590\pm0.015$ & $0.158\pm0.035$ & $0.167\pm0.060$ &$-0.103\pm0.027$ \\ $p_{1}$ & $0.017\pm0.001$ & $0.019\pm0.001$ & $0.010\pm0.002$ & $-0.003\pm0.003$ &$0.018\pm0.002$ \\ \hline \end{tabular}} \end{center} \label{tab:excitation_pars} \end{table*} \begin{figure}[H] \centering \subfloat[Parameter $R_q$ \label{fig:par_Rq_lin}]{% \includegraphics[width=0.5\linewidth]{figs/reBB_par_fits_Rq_2020_5_27_21_37_43.pdf}% }\hfill \subfloat[Parameter $R_d$\label{fig:par_Rd_lin}]{% \includegraphics[width=0.5\linewidth]{figs/reBB_par_fits_Rd_2020_5_27_21_37_43.pdf}% }\hfill \subfloat[Parameter $R_{qd}$\label{fig:par_Rqd_lin}]{% \includegraphics[width=0.5\linewidth]{figs/reBB_par_fits_Rqd_2020_5_27_21_37_43.pdf}% }\hfill \subfloat[Parameter $\alpha$\label{fig:par_alpha_lin}]{% \includegraphics[width=0.5\linewidth]{figs/reBB_par_fits_Ralpha_2020_5_27_21_37_43.pdf}% } \caption{The energy dependence of the parameters
of the ReBB model, $R_{q}$, $R_{d}$, $R_{qd}$ and $\alpha$, collected in Table~\ref{tab:fit_parameters} and determined by fitting a linear logarithmic model, Eq.~(\ref{eq:parametrization_of_extrapolation_lin}), to each of them one by one.} \label{fig:reBB_model_log_lin_extrapolation_fits} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{figs/reBB_alpha_rho_p0_2020_6_4_18_59_36.pdf} \caption{The dependence of $\rho_0/\alpha$ on $\lambda$ in the TeV energy range. The data points are generated numerically by using the trends of the ReBB model parameters, $R_q$, $R_d$, $R_{qd}$, shown in Figs.~\ref{fig:par_Rq_lin}-\ref{fig:par_Rqd_lin} and the experimentally measured ratio $\rho_0$ values. The red curve represents the result of the analytical calculation, showing good agreement with the numerical calculations.} \label{fig:rho0-per-alpha-vs-P0-LHC} \end{figure} \section{Sanity tests\label{sec:sanity_tests}} In this section we show that the determined energy dependence trends are reliable in the kinematic range of $0.546 \le \sqrt{s} \le 8 $ TeV and $0.37\leq-t\leq1.2 $ GeV$^2$. For this purpose we performed so-called sanity tests: we cross-checked whether the trends summarized in Table~\ref{tab:excitation_pars} indeed represent all the available differential cross-section data on both $pp$ and $p\bar p$ elastic scattering in the mentioned kinematic range. We used both the data that were utilized in the determination of the energy dependence trends and those that were not, for example because their acceptance was too limited to determine all the fit parameters of the ReBB model.
To perform these cross-checks, the differential cross sections are fitted with all four physical parameters of the ReBB model, $\alpha(s)$, $R_q(s)$, $R_d(s)$ and $R_{qd}(s)$, fixed to their extrapolated values obtained with the help of the results summarized in Table~\ref{tab:excitation_pars}, while the correlation coefficients of the type B and C errors, i.e., the $\epsilon$ parameters in the $\chi^2$ definition of eq.~(\ref{eq:chi2-final}), are fitted to the data as free parameters. The results for the data at $\sqrt{s} = 0.546$, $0.63$, $1.8$, $1.96$, $2.76$ and $7$ TeV are shown in Figs.~\ref{fig:reBB_model_test_0_546_GeV}-\ref{fig:reBB_model_test_7_TeV}. All of these sanity tests resulted in a description of the data with a statistically acceptable confidence level of CL $ \geq $ 0.1 \%. As an additional sanity test, we have also cross-checked whether this ReBB model describes the $pp$ and $p\bar p$ total cross section $\sigma_{\rm tot}(s)$ and real-to-imaginary ratio $\rho_0(s)$ data in a statistically acceptable manner. These results are presented in Fig.~\ref{fig:reBB_model_test_sig_tot} and Fig.~\ref{fig:reBB_model_test_rho}, respectively. As the calculated confidence levels are higher than 0.1 \% in all of these cases, we can safely conclude that the energy dependent trends of the ReBB model are reasonable and reliable in the investigated $0.541 \le \sqrt{s} \le 8$ TeV energy and $0.377 \le -t \le 1.2$ GeV$^2$ squared four-momentum transfer range. Thus this model can be used reliably to extrapolate both the $pp$ and the $p\bar p$ differential cross-sections in this limited kinematic range of $(s,t)$, based on only 10 physical model parameters, summarized in Table~\ref{tab:excitation_pars}.
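The confidence levels of the NDF = 2 trend fits in Table~\ref{tab:excitation_pars} follow directly from the $\chi^2$ survival function, which for two degrees of freedom reduces to a simple exponential. A minimal sketch of this check (assuming, as the table suggests, that the quoted CL values are this survival probability expressed in percent):

```python
import math

def cl_ndf2(chi2):
    """Survival function of the chi-squared distribution for NDF = 2:
    CL = P(chi2' >= chi2) = exp(-chi2/2)."""
    return math.exp(-chi2 / 2.0)

# chi2 values of the five trend fits, each with NDF = 2
cl_Rq = cl_ndf2(1.596)           # ~0.45
cl_Rd = cl_ndf2(0.469)           # ~0.79
cl_Rqd = cl_ndf2(2.239)          # ~0.33
cl_alpha_pp = cl_ndf2(0.760)     # ~0.68
cl_alpha_pbarp = cl_ndf2(1.212)  # ~0.55
```

Multiplied by 100, these values reproduce the CL [\%] row of the table.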
\clearpage \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_546_GeV_fit_ni_2020_5_27_22_3_47.pdf} \caption{Result of the sanity test for the 0.546 TeV $p\bar p$ elastic differential cross section data \cite{Battiston:1983gp,Bozzo:1985th} in the range of $0.37\leq-t\leq1.2$ GeV$^2$. This sanity test was performed as a fit during which the model parameters $R_q$, $R_d$, $R_{qd}$ and $\alpha$ were fixed to their $s$-dependent values based on Table~\ref{tab:excitation_pars}, while the correlation coefficients, the $\epsilon$-s in the $\chi^2$ definition, Eq.~(\ref{eq:chi2-final}), were fitted as free parameters. Thus the physical parameters $R_q$, $R_d$, $R_{qd}$ and $\alpha$ are printed on the plot without error bars, while the fitted correlation coefficients are given with their errors. The best parameter values are rounded to three significant decimal digits.} \label{fig:reBB_model_test_0_546_GeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_630_GeV_test_ni_2020_5_27_22_2_24.pdf} \caption{Result of a sanity test, similar to Fig.~\ref{fig:reBB_model_test_0_546_GeV}, but for the $\sqrt{s} = 0.63$ TeV $p\bar p$ elastic differential cross section data of ref.~\cite{Bernard:1986ye}, fitted in the range $0.7\leq-t\leq1.2$ GeV$^2$. } \label{fig:reBB_model_test_0_630_GeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_1_8_GeV_test_ni_2020_5_27_22_5_55.pdf} \caption{ Result of a sanity test, same as Fig.~\ref{fig:reBB_model_test_0_546_GeV}, but for the 1.8 TeV $p\bar p$ elastic differential cross section data \cite{Amos:1990fw} in the range of $0.37\leq-t\leq0.6 $ GeV$^2$.
} \label{fig:reBB_model_test_1_8_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_1_96_TeV_fit_ni_Rq_2020_5_27_22_5_28.pdf} \caption{Result of a sanity test, same as Fig.~\ref{fig:reBB_model_test_0_546_GeV}, but for the $\sqrt{s} = 1.96$ TeV $p\bar p$ elastic differential cross section data \cite{Abazov:2012qb} in the range of $0.37\leq-t\leq1.2 $ GeV$^2$. } \label{fig:reBB_model_test_1_96_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_2_76_TeV_fit_ni_2020_5_27_22_5_22.pdf} \caption{Result of a sanity test, same as Fig.~\ref{fig:reBB_model_test_0_546_GeV}, but for the $\sqrt{s} = 2.76$ TeV $pp$ elastic differential cross section data \cite{Antchev:2018rec} in the range of $0.37\leq-t\leq0.7$ GeV$^2$. } \label{fig:reBB_model_test_2_76_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_7_TeV_fit_ni_2020_5_28_20_38_8.pdf} \caption{Result of a sanity test, same as Fig.~\ref{fig:reBB_model_test_0_546_GeV}, but for the $pp$ elastic differential cross section data at $\sqrt{s} = 7$ TeV from ref.~\cite{Antchev:2013gaa}, in the fitted range of $0.37\leq -t \leq 1.2~$GeV$^2$. } \label{fig:reBB_model_test_7_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_sig_tot_2020_5_27_23_20_15.pdf} \caption{Result of the sanity test for $pp$ \cite{Antchev:2013iaa,Antchev:2013paa,Antchev:2017dia} and $p\bar p$ \cite{Tanabashi:2018oca} total cross section data. It was calculated from the model when the values of the parameters $R_q$, $R_d$, $R_{qd}$ and $\alpha$ were taken from eq.~(\ref{eq:parametrization_of_extrapolation_lin}) and Table~\ref{tab:excitation_pars}, corresponding to the linear curves shown on panels (a)-(d) of Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits}.
} \label{fig:reBB_model_test_sig_tot} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_rho_2020_5_27_23_20_18.pdf} \caption{Sanity test result for $pp$ \cite{Antchev:2013iaa,Antchev:2016vpy,Antchev:2017yns} and $p\bar p$ \cite{Tanabashi:2018oca} parameter $\rho_0$ data, as calculated from the model when the values of the parameters $R_q$, $R_d$, $R_{qd}$ and $\alpha$ were taken from eq.~(\ref{eq:parametrization_of_extrapolation_lin}) and Table~\ref{tab:excitation_pars}, corresponding to the linear curves shown on panels (a)-(d) of Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits}. On this plot, a model dependent Odderon effect is clearly identified: it corresponds to $\rho_0^{pp}(s)\neq\rho_0^{p\bar p}(s)$, the non-vanishing difference between the excitation functions of $\rho_0$ for $pp$ and $\rho_0$ for $p\bar p$ collisions, as detailed in \ref{sec:appendix-c}. } \label{fig:reBB_model_test_rho} \end{figure} \clearpage \section{Extrapolations\label{sec:extrapolations}} According to our findings in Section~\ref{sec:excitation_functions}, the energy dependencies of the scale parameters $R_{q}$, $R_{d}$ and $R_{qd}$ are identical for $pp$ and $p\bar p$ scattering; only the energy dependence of the opacity parameter $\alpha$ differs. The statistically acceptable quality of the fits shown in Fig.~\ref{fig:reBB_model_log_lin_extrapolation_fits} and the success of the sanity tests performed in the previous section allow for a reliable extrapolation of the differential cross-sections of elastic $pp$ and $p\bar p$ collisions with the help of the ReBB model~\cite{Nemes:2015iia}, limited to the investigated $0.541 \le \sqrt{s} \le 8$ TeV center of mass energy and $0.377 \le -t \le 1.2$ GeV$^2$ four-momentum transfer range.
We extrapolate, in the TeV energy range, the $pp$ differential cross sections to energies where measured $p\bar p$ data exist and, the other way round, the $p\bar p$ differential cross sections to energies where measured $pp$ data exist. Thus three such extrapolations were performed: a $pp$ extrapolation to $\sqrt{s} = 1.96$ TeV, to compare it to the 1.96 TeV D0 $p\bar p$ $d\sigma/dt$ data, and $p\bar p$ extrapolations to $\sqrt{s} = 2.76$ and $7$ TeV, to compare them to the $d\sigma/dt$ $pp$ data measured by TOTEM at these energies. Since the energy dependencies of the scale parameters $R_{q}$, $R_{d}$ and $R_{qd}$ are identical for $pp$ and $p\bar p$ scattering, as discussed in Sec.~\ref{sec:excitation_functions}, in the course of the extrapolations their values are fixed at the fitted values given in Tab.~\ref{tab:fit_parameters}. Since the energy dependence of the $\alpha$ parameter differs for $pp$ and $p\bar p$ scattering, the $\alpha(pp)$ and $\alpha(p\bar p)$ values are fixed from their energy dependence trends seen in Fig.~\ref{fig:par_alpha_lin}. In addition, during the extrapolations, the $\epsilon$ parameters in the $\chi^2$ definition, Eq.~(\ref{eq:chi2-final}), were optimized, while the last two terms in Eq.~(\ref{eq:chi2-final}), i.e., the total cross section and $\rho_0$-parameter terms, were not included. This way we handled the type B and type C errors of the published $pp$ differential cross-sections to match these data as closely as possible to the differential cross-sections of elastic $p\bar p$ collisions within the allowed systematic errors, and vice versa. The results of the extrapolations are shown in Fig.~\ref{fig:reBB_model_extr_1_96_TeV}, Fig.~\ref{fig:reBB_model_extr_2_76_TeV} and Fig.~\ref{fig:reBB_model_extr_7_TeV}.
The error band around these extrapolations is also evaluated, based on the envelope of the one standard deviation errors of the $R_q(s)$, $R_d(s)$, $R_{qd}(s)$ model parameters and of the $p_0$ and $p_1$ parameters of $\alpha(s)$. As an example, the resulting ten curves -- considering that the values of the scale parameters are taken from the original fit while the value of $\alpha$ is taken from the trend -- are explicitly shown for 1.96 TeV in Fig.~\ref{fig:reBB_model_extr_1_96_TeV}. While at $\sqrt{s} = 1.96$ TeV no statistically significant difference is observed between the extrapolated $pp$ and measured $p\bar p$ differential cross sections, at $\sqrt{s} = 2.76$ and $7$ TeV remarkable and statistically significant differences can be observed. In Figs.~\ref{fig:reBB_model_extr_2_76_TeV} and \ref{fig:reBB_model_extr_7_TeV}, even an untrained eye can see that the dip is filled in elastic $p\bar p$ scattering, while it is not filled in elastic $pp$ scattering. Thus we confirm the prediction of ref.~\cite{Donnachie:1983hf}, which, based on a three-gluon exchange picture that dominates at larger values of $-t$, predicted that the dip is filled in high energy $p\bar p$ elastic collisions. In this work, the differences between elastic $pp$ and $p\bar p$ collisions are quantified by the confidence levels obtained from the comparison of the extrapolated curves to the measured data: at 2.76 TeV, the hypothesis that these extrapolations agree with the data is characterized by a $CL = 1.092 \times 10^{-10}$ \%, while at 7 TeV the CL is numerically zero. Theoretically, the observed difference can be attributed only to the effect of a C-odd exchange, as detailed recently in refs.~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb}. At the TeV energy scale, the secondary Reggeon exchanges are generally known to be negligible. This has also been specifically cross-checked and confirmed recently in ref.~\cite{Szanyi:2018ain}.
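The construction of the error band from the envelope of the $\pm 1\sigma$ parameter variations can be sketched as follows. This is an illustrative sketch only: the `toy` function below is a hypothetical stand-in (a simple exponential cone), not the ReBB differential cross-section; for the five varied quantities of the actual analysis the same loop yields the ten curves mentioned above.

```python
import math

def envelope_band(model, central, errors, xs):
    """Pointwise min/max envelope of the curves obtained by shifting each
    parameter by +/- 1 sigma, one at a time, with the other parameters
    kept at their central values."""
    curves = [[model(x, central) for x in xs]]  # central curve
    for i, err in enumerate(errors):
        for sign in (+1.0, -1.0):
            pars = list(central)
            pars[i] += sign * err
            curves.append([model(x, pars) for x in xs])
    lo = [min(c[k] for c in curves) for k in range(len(xs))]
    hi = [max(c[k] for c in curves) for k in range(len(xs))]
    return lo, hi

# hypothetical stand-in for dsigma/dt (NOT the ReBB formula): A * exp(-B*|t|)
def toy(t, p):
    return p[0] * math.exp(-p[1] * t)

ts = [0.05 * k for k in range(1, 11)]
band_lo, band_hi = envelope_band(toy, [500.0, 20.0], [25.0, 1.0], ts)
```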
Thus in the few TeV energy range of the LHC, the only source of a difference between the differential cross-sections of elastic $pp$ and $p\bar p$ collisions can be a $t$-channel Odderon exchange. In the modern language of QCD, the Odderon exchange corresponds to the exchange of C-odd colorless bound states consisting of an odd number of gluons~\cite{Bartels:1999yt,Donnachie:1990wd,Donnachie:1983hf}. Thus the CL, calculated for the 2.76 TeV $p\bar p$ extrapolation, corresponds to an Odderon observation with a probability of $ P $ $ = $ $1 - CL$ $ = $ $ 1 - 1.092 \times 10^{-12}$. This corresponds to a $\chi^2/NDF = 100.35/20$ and to a 7.12 $\sigma$ model dependent significance for the observation of a $t$-channel Odderon exchange, and for the existence of colorless bound states containing an odd number of gluons. When extrapolating the $pp$ differential cross-sections from 2.76 down to 1.96 TeV, however, significance is lost: this comparison corresponds to a $\chi^2/NDF = 24.28/13$ and to a 2.19 $\sigma$ effect, below the 3 $\sigma$ level. However, these two significances at 1.96 and 2.76 TeV can be combined, providing a combined $\chi^2/NDF = 124.63/33$, which corresponds to a statistically significant, 7.08 $\sigma$ effect. This 7.08 $\sigma$ combined significance increases even further when we extrapolate the differential cross-section of elastic proton-antiproton collisions to $\sqrt{s} = 7.0$ TeV, where the probability of an Odderon observation becomes practically unity. Given that a 7.08 $\sigma$ effect is already well above the usual 5 $\sigma$ discovery level, we quote this as a conservative lower limit on the significance of our model-dependent Odderon observation.
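The conversion between the quoted $\chi^2/NDF$, CL and significance values can be reproduced with the $\chi^2$ survival function and a Gaussian quantile. A sketch for the 2.76 TeV comparison (NDF = 20; the two-sided Gaussian convention used below is an assumption inferred from the quoted numbers):

```python
import math

def chi2_sf_even(chi2, ndf):
    """Survival function of the chi-squared distribution for even ndf:
    P(chi2' >= chi2) = exp(-x) * sum_{j=0}^{ndf/2-1} x^j / j!,  x = chi2/2."""
    x = chi2 / 2.0
    term, total = 1.0, 1.0
    for j in range(1, ndf // 2):
        term *= x / j
        total += term
    return math.exp(-x) * total

def two_sided_significance(p):
    """z such that 2 * Phi_sf(z) = erfc(z/sqrt(2)) = p, found by bisection."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_276 = chi2_sf_even(100.35, 20)       # ~1.09e-12, i.e. CL ~ 1.09e-10 %
z_276 = two_sided_significance(p_276)  # ~7.1 sigma
```

The 1.96 TeV and combined values quoted above have odd NDF and require the general incomplete-gamma form of the survival function, not covered by this even-NDF sketch.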
As already mentioned in the introduction, we have also been recently involved in a truly model-independent search for Odderon effects in the comparison of the scaling properties of the differential cross-sections of elastic $pp$ and $p\bar p$ collisions at similar values of $s$ but in the complete available $t$ range. As compared to the model-dependent studies summarized in this manuscript, the advantage of the model-independent scaling studies of refs.~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb} is that they scale out all the effects from the differences between $pp$ and $p\bar p$ elastic collisions due to possible differences in their $\sigma_{\rm el}(s)$, $B(s)$ and their product, the $\sigma_{\rm el}(s) B(s)$ = $\sigma_{\rm tot}^2(s) (1 + \rho^2_0(s))$ functions. As part of the Odderon signal in the ReBB model is apparently in the difference between the $\rho_0(s)$ excitation functions for $pp$ and $p\bar p$ collisions, the significance of the Odderon signal is reduced in this model independent analysis. When considering the interpolations as theoretical curves, the significance is reduced to a 6.55 $\sigma$ effect~\cite{Csorgo:2019ewn}, but when considering that the interpolations between experimental data also have horizontal and vertical, type A and type B errors, the significance of the Odderon signal is further reduced to a 6.26 $\sigma$ effect~\cite{Csorgo:2020msw,Csorgo:2020rlb}. Thus we conclude that the Odderon is now discovered, both in a model-dependent and in a model-independent manner, with a statistical significance that is well above the 5 $\sigma$ discovery limit of high energy particle physics. Finally, we close this section with predictions for the experimentally not yet available large-$t$ differential cross-sections of $pp$ collisions at $\sqrt{s} = 0.9$, $4$, $5$ and $ 8$ TeV, shown in Fig.~\ref{fig:reBB_model_extr_8_TeV}.
\clearpage \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_1_96_TeV_fit_ni_2020_11_27_16_23_16.pdf} \caption{The ReBB model extrapolation for the $pp$ $d\sigma/dt$ at $\sqrt{s}=1.96$~TeV compared to the $p\bar p$ D0 $d\sigma/dt$ data \cite{Abazov:2012qb} measured at the same energy. The yellow band is the uncertainty of the extrapolation. The calculated CL value between the extrapolated model and the measured data does not indicate a significant difference between the $pp$ and $p\bar p$ differential cross sections.} \label{fig:reBB_model_extr_1_96_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_2_76_TeV_fit_ni_2020_6_4_23_0_29.pdf} \caption{The ReBB model extrapolation for the $p\bar p$ $d\sigma/dt$ at $\sqrt{s}=2.76$~TeV compared to the $pp$ TOTEM $d\sigma/dt$ data~\cite{Antchev:2018rec} measured at the same energy. The yellow band is the uncertainty of the extrapolation. The calculated CL value between the extrapolated model and the measured data indicates a significant difference between the $pp$ and $p\bar p$ differential cross sections, corresponding to a 7.1 $\sigma$ significance for the $t$-channel Odderon exchange.} \label{fig:reBB_model_extr_2_76_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{figs/reBB_7_TeV_fit_ni_2020_6_4_22_57_33.pdf} \caption{The ReBB model extrapolation for the $p\bar p$ $d\sigma/dt$ at $\sqrt{s}=7$~TeV compared to the $pp$ TOTEM $d\sigma/dt$ data \cite{Antchev:2013gaa} measured at the same energy. The yellow band is the uncertainty of the extrapolation.
The calculated CL value between the extrapolated model and the measured data indicates a significant difference between the $pp$ and $p\bar p$ differential cross sections, hence a significant Odderon effect, dominant around the dip region.} \label{fig:reBB_model_extr_7_TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.85\linewidth]{figs/reBB_8_TeV_fit_ni_2020_5_28_15_41_51.pdf} \caption{Predictions from the ReBB model, for the $d\sigma/dt$ of elastic $pp$ collisions at $\sqrt{s}=8$, 5, 4, and 0.9 TeV.} \label{fig:reBB_model_extr_8_TeV} \end{figure} \section{Discussion \label{sec:discussion}} In the previous sections, we have investigated what happens if we interpret the data in terms of a particular model, the Real Extended Bialas-Bzdak model. This also allows us to consider the Odderon signal in the excitation function of the model parameter $\alpha$. We have shown in~\ref{sec:appendix-b} that this model parameter is proportional to the experimentally measured parameter $\rho_0$, the ratio of the real to the imaginary part of the scattering amplitude at the optical point, and we have related the coefficient of proportionality to the value of the imaginary part of the scattering amplitude at vanishing impact parameter, $\lambda(s)=\mathrm{Im}\, t_{\rm el}(s,b=0)$, for elastic proton-proton collisions at $\sqrt{s} \leq 8$ TeV. We have also shown that, within the framework of this ReBB model, the very different trends of $\rho_0(s)$ in proton-proton and in proton-antiproton collisions enhance the model-independent Odderon signal from a 6.26 $\sigma$ and 6.55 $\sigma$ effect to a combined, at least $7.08$ $\sigma$ effect. Recently, the TOTEM Collaboration concluded that only one condition is yet to be satisfied to see a statistically significant Odderon signal: namely, the logarithmically small energy gap between the lowest TOTEM energy of $\sqrt{s} = 2.76$ TeV at the LHC and the highest D0 energy of 1.96 TeV at the Tevatron needs to be closed.
This energy gap has been closed in a model-independent way in refs.~\cite{Csorgo:2020msw,Csorgo:2020rlb,Csorgo:2019ewn}, using the scaling properties of elastic scattering and comparing the $H(x) = \frac{1}{B \sigma_{el}} \frac{d\sigma}{dt}$ scaling functions of elastic proton-proton and proton-antiproton collisions, as a function of $x = - t B$, at $\sqrt{s} = 1.96$, $2.76$ and $7.0$ TeV. The advantages of that method, with respect to comparing the cross sections directly, include the scaling out of the $s$-dependencies of $\sigma_{\rm el}(s)$, $B(s)$ and their product, $\sigma_{\rm el}(s) B(s)$ = $\sigma_{\rm tot}^2(s) (1 + \rho^2_0(s))$, as well as the normalization of the $H(x)$ scaling function that cancels the point-to-point correlated and $t$-independent normalization errors. The validity of the $H(x)$ scaling for $pp$ collisions and its violation in $p \bar p$ collisions in the few TeV energy range resulted in a discovery level statistical significance of an Odderon signal, characterized in refs.~\cite{Csorgo:2020msw,Csorgo:2020rlb,Csorgo:2019ewn} to be at least $6.26$ $\sigma$, model independently, based on a careful interpolation of the experimental data points and of their point-to-point fluctuating (type A), point-to-point correlated and data point dependent (type B), as well as point-to-point correlated and data point independent (type C) errors. If these errors are considered as errors on a theory curve, then the significance goes up to at least $6.55$ $\sigma$~\cite{Csorgo:2019ewn}. In high energy particle physics, the standard accepted discovery threshold corresponds to a 5$\sigma$ effect. In the previous section, we have shown that the statistical significance of an Odderon observation in the limited $0.541 \le \sqrt{s} \le 8$ TeV center of mass energy and $0.377 \le -t \le 1.2$ GeV$^2$ four-momentum transfer range is at least a combined 7.08 $\sigma$ effect, corresponding to a statistically significant and model dependent Odderon observation.
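The scaling-out property of $H(x)$ can be illustrated with a hypothetical, purely exponential diffractive cone, $d\sigma/dt = A\,e^{-B|t|}$ (an assumption for illustration only; real data deviate from a pure exponential near the dip, which is exactly where the $pp$ vs $p\bar p$ difference appears). Since then $\sigma_{el} = A/B$, both the normalization $A$ and the slope $B$, and hence their $s$-dependence, cancel, leaving $H(x) = e^{-x}$ at any energy:

```python
import math

def H(x, A, B):
    """H(x) = (1/(B*sigma_el)) dsigma/dt at |t| = x/B for a purely exponential
    cone dsigma/dt = A*exp(-B*|t|), for which sigma_el = A/B."""
    sigma_el = A / B
    return A * math.exp(-B * (x / B)) / (B * sigma_el)

xs = [0.5, 2.0, 8.0]
h1 = [H(x, 500.0, 20.0) for x in xs]   # one hypothetical "energy"
h2 = [H(x, 900.0, 16.5) for x in xs]   # another hypothetical "energy"
```

Both lists reduce to $e^{-x}$, independently of $(A, B)$.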
The $\sqrt{s} = 7$ TeV $pp$ differential cross-sections are measured with asymmetric type B errors. In order to make sure that our results are reliable and reproducible, we have performed several cross-checks to test the reliability of our fit at $\sqrt{s} = 7$ TeV. One of these tests was related to the handling of the asymmetric, $t$-dependent type B systematic errors. We have performed cross-checks by taking, at every point, either the smaller or the larger of the up and down type B errors, to obtain a lower or an upper limit on their effects. We found that the parameters of the ReBB model remained stable under such a symmetrization of the type B systematic errors, as the modification of the fit parameters due to the symmetrization was within the quoted errors on the fit parameters. Our final fits, presented before, were done with asymmetric type B errors, as detailed in Section~\ref{sec:fit_results}. So we conclude that our fit at $\sqrt{s} = 7$ TeV is stable even against the symmetrization of the type B systematic errors. We have also investigated the stability of our results for the case when the energy range is extended towards lower values of $\sqrt{s}$, into the ISR energy range, as detailed in \ref{sec:appendix-d}. When the $\sqrt{s} = 23.5$ GeV data are added to those summarized in Table~\ref{tab:fit_parameters}, the energy dependence of the model parameters becomes quadratic in $\ln(s)$. This provides $3 \times 5 = 15$ model parameters for this broader energy range, as summarized in Table~\ref{tab:excitation_pars_quadratic} and detailed in \ref{sec:appendix-d}. This way, the non-linear terms are confirmed to be negligibly small in the TeV energy range, where we find the significant Odderon effects with the help of as few as 10 model parameters. These 10 parameters are given in Table~\ref{tab:excitation_pars}.
It turns out in Sec.~\ref{sec:fit_results} that the ReBB model as presented in ref.~\cite{Nemes:2015iia} does not yet provide a statistically acceptable fit quality to the differential cross-section of $\sqrt{s} = 13$ TeV elastic $pp$ scattering. This might be due to the emergence of the black-ring limit of elastic proton-proton scattering instead of the expected black-disc limit. In what follows we briefly discuss the earlier and more recent results on the black-ring shaped interaction region of the colliding protons. A complementary way of studying high-energy scattering processes is to pass from the momentum transfer $t$ to the impact parameter $b$. In 1963 van Hove introduced the inelasticity profile or overlap function~\cite{van1963phenomenological,VanHove:1964rp}, which corresponds to the impact parameter distribution of the inelastic cross section, characterizing the shape of the interaction region of two colliding particles. The natural expectation is that the most inelastic collisions are central, i.e., the inelasticity profiles have a maximum at $b=0$, consistently with the black disc terminology. The possibility of a minimum at $b = 0$, i.e., a peripheral form of the inelasticity profile, was first considered in Ref.~\cite{TROSHIN1993175}, which implies the shape of a black ring rather than that of a black disc. In Ref.~\cite{Dremin:2014dea}, it was shown that the inelasticity profile of protons is governed by the ratio of the slope of the diffraction cone to the total cross section through the variable $Z=4\pi B/\sigma_{tot}$, and that the evolution to values of $Z<1$ at LHC energies implies a transition from the black disc picture of the interaction region to a black ring (or torus-like) shape. These results were reviewed in Ref.~\cite{Dremin:2014spa} using the unitarity relation in combination with experimental data on elastic scattering in the diffraction cone.
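The criterion $Z = 4\pi B/\sigma_{tot} < 1$ can be evaluated directly. The sketch below uses representative values of $B$ and $\sigma_{tot}$ near $\sqrt{s} = 13$ TeV and at ISR energies (assumed here for illustration only, not results of this analysis), with the unit conversion $1~\mathrm{mb} = 1/0.3894~\mathrm{GeV}^{-2}$:

```python
import math

GEV2_PER_MB = 1.0 / 0.3894  # 1 mb in GeV^-2, from (hbar*c)^2 = 0.3894 GeV^2 mb

def Z(B_gev2, sigma_tot_mb):
    """Dremin's variable Z = 4*pi*B/sigma_tot, with both quantities in GeV^-2;
    Z < 1 indicates a black-ring (hollow) rather than a black-disc profile."""
    return 4.0 * math.pi * B_gev2 / (sigma_tot_mb * GEV2_PER_MB)

z_lhc = Z(20.4, 110.0)  # representative 13 TeV values (assumed): Z < 1
z_isr = Z(13.0, 40.0)   # representative ISR-era values (assumed): Z > 1
```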
Ref.~\cite{Dremin:2014dea} concludes that the shape of the interaction region of colliding protons could be reliably determined if the behavior of the elastic scattering amplitude at all transferred momenta were known. The black ring shape of the interaction region can be interpreted as the presence of a hollow at small impact parameter values. In Refs.~\cite{Arriola:2016bxa,RuizArriola:2016ihz,Broniowski:2017aaf,Broniowski:2017rhz} the authors study the hollowness phenomenon within an inverse scattering approach based on empirical parameterizations. Ref.~\cite{RuizArriola:2016ihz} concludes that the very existence of the hollowness phenomenon is quantum-mechanical in nature. Hollowness has also been reported to emerge from a gluonic hot-spot picture of the $pp$ collision at LHC energies~\cite{Albacete:2016pmp}. It is shown in Ref.~\cite{Broniowski:2017rhz} that the emergence of such a hollow strongly depends on the phase of the scattering amplitude. In Ref.~\cite{Broniowski:2018xbg} the authors demonstrated the occurrence of the hollowness phenomenon in a Regge model above $\sqrt{s} \sim 3$~TeV. Ref.~\cite{Troshin:2017ucy} discusses the absorptive (saturation of the black disc limit) and reflective (saturation of the unitarity limit) scattering modes of proton-proton collisions, concluding that a distinctive feature of the transition to the reflective scattering mode is the developing peripheral form of the inelastic overlap function. Reflective scattering is detailed also in Refs.~\cite{Troshin:2007fq,Troshin:2020dvd,Troshin:2020bcn}. The authors of Ref.~\cite{Petrov:2018rbi} argue that the presence of a nonzero real part of the elastic scattering amplitude in the unitarity condition makes it possible to preserve the traditional black disc picture, refuting the existence of the hollowness effect.
However, as noted in Ref.~\cite{Broniowski:2018xbg}, the criticism raised in Ref.~\cite{Petrov:2018rbi} is based on an incorrect perception of the approximations involved and does not address the arbitrariness of the $t$-dependence of the ratio $\rho$, which is crucial for hollowness. In Refs.~\cite{Campos:2018tsb,Campos:2020edy} the hollowness effect is interpreted as a consequence of fundamental thermodynamic processes. Ref.~\cite{Csorgo:2019fbf} notes that the onset of the hollowness effect is possibly connected to the opening of a new channel between $\sqrt{s}$ = 2.76 and 7 TeV, as indicated by the measured $\sigma_{el}/\sigma_{tot}$ ratio and the slope parameter $B_0$ data. In Ref.~\cite{Csorgo:2019egs} the model independent L\'evy imaging method is employed to reconstruct the proton inelasticity profile function and its error band at different energies. This method established a statistically significant proton hollowness effect, well beyond the 5$\sigma$ discovery limit. This conclusion is based on a model independent description of the TOTEM proton-proton differential cross-section data at $\sqrt{s} = 13$ TeV with the help of the L\'evy imaging method, which represents the TOTEM data in a statistically acceptable manner, corresponding to a confidence level of CL = 2 \%. \section{Summary \label{sec:summary}} Currently, the statistically significant observation of the elusive Odderon is a hot research topic, with several interesting and important results and contributions. In the context of this manuscript, Odderon exchange corresponds to a crossing-odd component of the scattering amplitude of elastic proton-proton and proton-antiproton collisions that does not vanish at asymptotically high energies, as probed experimentally by the D0 Collaboration for proton-antiproton and by the TOTEM Collaboration for proton-proton elastic collisions in the TeV energy range.
Theoretically, the observed differences can be attributed only to the effect of a C-odd exchange, as detailed recently in refs.~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb}. Those model-independent studies resulted in an at least 6.26 $\sigma$ statistical significance of the Odderon exchange~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb}. The goal of the research summarized in this manuscript was to cross-check, in a model-dependent way, the persistence of these Odderon effects, and to provide a physical picture to interpret these results. Using the ReBB model of ref.~\cite{Nemes:2015iia}, developed originally to describe precisely the differential cross-section of elastic proton-proton collisions, we were able to describe also the proton-antiproton differential cross section at $\sqrt{s} = 0.546$ and $1.96$ TeV without any modification of the formalism. We have also shown that this model describes the proton-proton differential cross section at $\sqrt{s} = 2.76$ and $7$ TeV, also in a statistically acceptable manner, with a CL $> $ 0.1 \%. Using our good quality, statistically acceptable fits for the $0.5 \le\sqrt{s} \le 8$ TeV energy region, we have determined the energy dependence of the model parameters to be an affine linear function of $\ln(s/s_0)$. We have verified this energy dependence by demonstrating that the excitation functions of the physical parameters of the Real Extended Bialas-Bzdak model satisfy the so-called sanity tests: they describe in a statistically acceptable manner not only those four datasets that formed the basis of the determination of the excitation function, but all other published datasets in the $\sqrt{s} = 0.541$ - $8.0$ TeV energy domain. We have also demonstrated that the excitation functions for the total cross-sections and the $\rho_0$ ratios correspond to the experimentally established trends. 
Remarkably, we have observed that the energy dependence of the geometrical scale parameters is identical in elastic proton-proton ($pp$) and proton-antiproton ($p\bar p$) collisions: only the energy dependence of the shape or opacity parameter $\alpha(s)$ differs significantly between $pp$ and $p\bar p$ collisions. After determining the energy dependence of the model parameters we made extrapolations in order to compare the $pp$ and $p\bar p$ differential cross sections in the few TeV energy range, corresponding to the energy of D0 measurement at $\sqrt{s} = 1.96$ TeV in ref.~\cite{Abazov:2012qb} and the TOTEM measurements at $\sqrt{s} = 2.76$ and $7.0$ TeV. Doing this, we found evidence for the Odderon exchange with a high statistical significance. We have cross-checked that this evidence withstands several reasonable cross-checks, for example the possible presence of small quadratic terms of $\ln(s/s_0)$ in the excitation functions of the parameters of this model. Subsequently, we have also predicted the details of the diffractive interference (dip and bump) region at $\sqrt{s} =0.9$, $4$, $5$ and $8$ TeV\footnote{Currently, TOTEM preliminary experimental data are publicly presented from an on-going analysis at $\sqrt{s} = 8 $ TeV, see ref.~\cite{Kaspar:2018ISMD} for further details.}. We have shown that within the framework of this ReBB model, the very different trend of $\rho_0(s)$ in proton-proton and in proton-antiproton collisions enhances the model-independent Odderon signal, from a 6.26 $\sigma$ and 6.55 $\sigma $ effect of refs.~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb} to an at least $7.08 $ $\sigma$ effect. This gain of significance is due to the possibility of extrapolating the differential cross-sections of elastic $p\bar p$ scattering from $\sqrt{s }$ $=$ 1.96 TeV to 2.76 TeV. 
It is important to note that in the evaluation of the 7.08 $\sigma$ Odderon effect, only $p\bar p$ data at $\sqrt{s} = 1.96$ TeV and $pp$ data at $\sqrt{s} = 2.76$ TeV were utilized, amounting to a model-dependent but successful closing of the energy gap between D0 and TOTEM measurements. Let us also emphasize that our Odderon observation is valid in the limited kinematic range of $0.541 \le \sqrt{s} \le 8$ TeV center-of-mass energy and in the $0.377 \le -t \le 1.2$ GeV$^2$ four-momentum transfer range. When extrapolating the $pp$ differential cross-sections from 2.76 down to 1.96 TeV, however, significance is lost, corresponding to a $\chi^2/NDF = 24.28/13$ and to a 2.19 $\sigma$ effect, which is less than a 3 $\sigma$ effect at 1.96 TeV. However, these two significances at 1.96 and 2.76 TeV can be combined, providing a $\chi^2/NDF = 124.63 /33$, that corresponds to a statistically significant, combined 7.08 $\sigma$ effect. This 7.08 $\sigma$ combined significance increases to an even larger significance of an Odderon observation, when we extrapolate the differential cross-section of elastic proton-antiproton collisions to $\sqrt{s} = 7.0$ TeV. Given that a 7.08 $\sigma$ effect is already well above the usual 5 $\sigma$, statistically significant discovery level, we quote this as the possibly lowest level of the significance of our model-dependent Odderon observation. Concerning the direction of future research: the Odderon is now discovered both in a model-independent way, described in refs.~\cite{Csorgo:2019ewn,Csorgo:2020msw,Csorgo:2020rlb}, and in a model-dependent way, described in this manuscript; so the obvious next step is to extract its detailed properties, both in a model-independent and in a model-dependent manner. The main properties of the Odderon as well as the Pomeron, based on the ReBB model, are already summarized in~\ref{sec:appendix-c}. 
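The conversion of a combined $\chi^2/NDF$ into a Gaussian-equivalent significance, as quoted above, is the standard one-sided $p$-value conversion. A minimal numerical sketch of this step (the function names are ours, not part of the analysis code of ref.~\cite{Nemes:2015iia}; the survival function is evaluated with the standard continued-fraction representation of the regularized upper incomplete gamma function):

```python
import math
from statistics import NormalDist

def chi2_sf(x, k, itmax=500, eps=1e-14):
    # Survival function P(X > x) of a chi-square variable with k degrees
    # of freedom, via the continued fraction for the regularized upper
    # incomplete gamma function Q(a, t), a = k/2, t = x/2 (valid for t > a + 1).
    a, t = k / 2.0, x / 2.0
    tiny = 1e-300
    b = t + 1.0 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, itmax + 1):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return math.exp(-t + a * math.log(t) - math.lgamma(a)) * h

def significance(chi2, ndf):
    # One-sided Gaussian-equivalent significance of an observed chi2 value.
    p = chi2_sf(chi2, ndf)
    return -NormalDist().inv_cdf(p)

# the combined chi2/NDF = 124.63/33 quoted in the text:
print(round(significance(124.63, 33), 2))
```

This reproduces the order of magnitude of the quoted combined significance; the published value of course comes from the full analysis, not from this sketch.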
Let us also note that the ReBB model as presented in ref.~\cite{Nemes:2015iia} does not yet provide a statistically acceptable fit quality to the differential cross-section of $\sqrt{s} = 13$ TeV elastic $pp$ scattering. This might be due to the emergence of the black-ring limit of elastic proton-proton scattering instead of the expected black-disc limit, as detailed in Sec.~\ref{sec:discussion}, or due to the very strong non-exponential features of the differential cross-sections in these collisions at low $-t$\footnote{ We see that the ReBB model has a leading-order exponential feature. If we want to describe the significantly non-exponential features of differential cross-section in the low-$|t|$ range~\cite{Antchev:2015zza,Antchev:2017yns}, the model has to be generalized for a possible non-exponential behaviour at low $|t|$.}, as shown in refs.~\cite{Antchev:2017dia,Antchev:2017yns}. So we conclude that the Real Extended Bialas-Bzdak model needs to be further generalized for the top LHC energies and above. This work is in progress, but it goes clearly well beyond the scope of the current, already rather detailed manuscript. Importantly, any possible outcome of these follow-up studies is not expected to modify the model behavior in the presently investigated energy range, and hence our work is complete; refinements are not necessary from the point of view of the task solved in this manuscript. In short, we determined the model-dependent statistical significance of the Odderon observation to be an at least 7.08 $\sigma$ effect in the $0.5 \le \sqrt{s} \le 8$ TeV center-of-mass energy and $0.377 \le -t \le 1.2$ GeV$^2$ four-momentum transfer range. Our analysis is based on the analysis of published D0 and TOTEM data of refs.~\cite{Abazov:2012qb,Antchev:2017dia,Antchev:2018rec} and uses as a tool the Real Extended Bialas-Bzdak model of ref.~\cite{Nemes:2015iia}. 
We have cross-checked that this unitary model works in a statistically acceptable, carefully tested and verified manner in this particular kinematic range. Our main results are illustrated on Figs.~\ref{fig:reBB_model_extr_2_76_TeV} and \ref{fig:reBB_model_extr_7_TeV}. \section*{Acknowledgments} First of all, we would like to thank F. Nemes, who initiated this project, and started to test the Real Extended Bialas-Bzdak model against the proton-antiproton data. He also provided much valuable technical help in the initial phases of these studies. We gratefully acknowledge inspiring discussions with C. Avila, S. Giani, P. Grannis, G. Gustaf\-son, L. Jenkovszky, E. Levin, B. Nicolescu, T. Nov\'ak, K. \"Osterberg, R. Pasechnik, C. Royon, A. Ster and M. Strikman. Our research has been partially supported by the \'UNKP-18-2 New National Excellence Program of the Ministry of Human Capacities, and by the NKFIH grants No. FK-123842, FK-123959 and K133046 as well as by the EFOP 3.6.1-16-2016-00001 grant (Hungary), as well as by the framework of the COST Action CA15213 ``Theory of hot matter and relativistic heavy-ion collisions (THOR)" of the European Union.
\section{Introduction and results} Diophantine approximation has been widely studied by mathematicians. In 1842, Dirichlet \cite{D1842} proved a celebrated theorem as follows.\\ {\bf Dirichlet Theorem} For any two real numbers $\theta,~Q$ with $Q\geq 1$, there is an integer $n\in[1, Q]$ such that $$\lVert n \theta\rVert<Q^{-1},$$ where $\lVert\xi\rVert$ denotes the distance from $\xi$ to the nearest integer. Dirichlet Theorem is called a \emph{uniform approximation theorem} in \cite[p.~2]{W12}. A weak form of Dirichlet Theorem, called an \emph{asymptotic approximation theorem} in \cite[p.~2]{W12}, which is often referred to in the literature as a corollary of Dirichlet Theorem, already appeared in the book of Legendre \cite[1808, pp.~18-19]{L2009}: for any real number $\theta$, there are infinitely many $n\in\mathbb{N}$ such that $$\lVert n\theta\rVert<n^{-1}.$$ For the general case, Khintchine in 1924 \cite{K1924} showed that for a positive function $\psi:\mathbb{N}\rightarrow\mathbb{R}^+$, if $x\mapsto x\psi(x)$ is non-increasing, then the set $$\mathcal{L}_\psi:=\left\{\theta\in\mathbb{R}:\lVert n\theta\rVert<\psi(n),\text{ for infinitely many }n\in\mathbb{N}\right\}$$ has Lebesgue measure zero if the series $\sum\psi(n)$ converges and has full Lebesgue measure otherwise. In the case where the set has Lebesgue measure zero, it is natural to calculate the Hausdorff dimension of $\mathcal{L}_\psi$. The first result on the Hausdorff dimension of $\mathcal{L}_\psi$ dates back to the Jarn\'{\i}k-Besicovitch Theorem \cite{B34,J29}. It was shown that for any $\tau>1$, one has $${\rm dim}_H\left(\left\{\theta\in\mathbb{R}:\lVert n\theta\rVert<\dfrac{1}{n^\tau},\text{ for infinitely many }n\in\mathbb{N}\right\}\right)=\dfrac{2}{1+\tau},$$ where ${\rm dim}_H(\cdot)$ denotes the Hausdorff dimension of a set. 
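Dirichlet Theorem is easy to verify numerically for a given $\theta$ and $Q$; a minimal sketch (the helper names are ours):

```python
import math

def nearest_int_dist(x):
    # ||x||: distance from x to the nearest integer.
    return abs(x - round(x))

def dirichlet_witness(theta, Q):
    # Dirichlet Theorem guarantees some integer n in [1, Q] with
    # ||n * theta|| < 1/Q; return the smallest such n.
    for n in range(1, int(Q) + 1):
        if nearest_int_dist(n * theta) < 1.0 / Q:
            return n
    return None

# for theta = sqrt(2) and Q = 100 the witness comes from the
# continued-fraction convergent 99/70 of sqrt(2):
n = dirichlet_witness(math.sqrt(2), 100)
print(n, nearest_int_dist(n * math.sqrt(2)))
```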
In analogy with the classical Diophantine approximation, Hill and Velani \cite{HV1995} studied the approximation properties of the orbits of a dynamical system and introduced the so-called \emph{shrinking target problems}: for a measure preserving dynamical system $(M,\mu,T)$ with a metric $d$ and a positive function $\psi$, define the set of all \emph{$\psi$-well approximable} points by $x_0$ as $$\mathcal{L}(T,\psi,x_0):=\{x\in M:d(T^nx,x_0)<\psi(n),\text{ for infinitely many $n\in\mathbb{N}$}\};$$ what is the size (Lebesgue measure, Hausdorff dimension) of $\mathcal{L}(T,\psi,x_0)$? They studied the case where $T$ is an expanding rational map of the Riemann sphere $\overline{\mathbb{C}}=\mathbb{C}\cup\{\infty\}$. In this paper, we are interested in the approximation properties of the orbits of $\beta$-transformations. The $\beta$-transformation $T_\beta~(\beta>1)$ on $[0,1)$ is defined by $$T_\beta(x):=\beta x-\lfloor\beta x\rfloor,$$ where $\lfloor\cdot\rfloor$ is the integer part function. For any positive function $\psi:\mathbb{N}\rightarrow\mathbb{R}^+$, define the set of \emph{$\psi$-well asymptotically approximable} points by $x_0$ as $$\mathcal{L}(\psi,x_0):=\left\{x\in[0,1]:\lvert T_\beta^n x-x_0\rvert<\psi(n),\text{ for infinitely many $n\in\mathbb{N}$}\right\}.$$ By \cite[Theorem 2A, B, C]{P67}, the set $\mathcal{L}(\psi,x_0)$ has Lebesgue measure zero if and only if the series $\sum\psi(n)$ converges. Shen and Wang \cite[Theorem 1.1]{SW2013} showed that for any real number $\beta>1$ and any point $x_0\in[0,1]$, one has $${\rm dim}_H\left(\mathcal{L}(\psi, x_0)\right)=\dfrac{1}{1+v},\quad\text{where $v:=\liminf\limits_{n\rightarrow\infty}\dfrac{-\log_\beta\psi(n)}{n}$}.$$ Parallel to the asymptotic approximation theorem, it is also worthwhile to study the uniform approximation properties, as in Dirichlet Theorem. The uniform Diophantine approximation related to $\beta$-transformations was studied by Bugeaud and Liao \cite{YLiao2016}. 
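The set $\mathcal{L}(\psi,x_0)$ can be explored numerically by iterating $T_\beta$ and recording the hitting times of the shrinking target; a floating-point sketch (the names are ours, and rounding errors grow roughly like $\beta^n$, so only moderately many iterates are reliable):

```python
def beta_orbit_hits(beta, x, x0, psi, N):
    # Indices n <= N with |T_beta^n x - x0| < psi(n), where
    # T_beta(y) = beta*y mod 1.
    hits = []
    y = x
    for n in range(1, N + 1):
        y = (beta * y) % 1.0
        if abs(y - x0) < psi(n):
            hits.append(n)
    return hits

# orbit of x = 0.1 under the doubling map T_2 (orbit 0.2, 0.4, 0.8, 0.6, ...),
# target of constant radius 0.05 around x0 = 0.2:
print(beta_orbit_hits(2.0, 0.1, 0.2, lambda n: 0.05, 20))
```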
For $x\in[0,1)$, let $$v_\beta(x):=\sup\left\{v\geq 0:T^n_\beta x<(\beta^n)^{-v},\text{ for infinitely many $n\in\mathbb{N}$}\right\},$$ $$\hat{v}_\beta(x):=\sup\left\{v\geq 0:\forall~N\gg 1,~T^n_\beta x<(\beta^N)^{-v}\text{ has a solution $n\in[0,N]$}\right\}.$$ The exponents $v_\beta$ and $\hat{v}_\beta$ were introduced in \cite{AB10} (see also \cite[Ch.~7]{B2012}). Bugeaud and Liao \cite{YLiao2016} proved the following theorem.\\{\bf Theorem BL} (\cite[Theorem 1.4]{YLiao2016}) \emph{For any $v\in(0,\infty)$ and any $\hat{v}\in(0,1)$, if $v<\hat{v}/(1-\hat{v})$, then the set $$\left\{x\in[0,1]:v_\beta(x)= v\right\}\cap\left\{x\in[0,1]:\hat{v}_\beta(x)\geq\hat{v}\right\}$$ is empty. Otherwise, $${\rm dim}_H\left(\{x\in[0,1]:v_\beta(x)= v\}\cap\{x\in[0,1]:\hat{v}_\beta(x)=\hat{v}\}\right)=\dfrac{v-\hat{v}-v\hat{v}}{(1+v)(v-\hat{v})}.$$}{\bf Theorem BL} can be viewed as the special case where $x_0=0$. The aim of this paper is to study the Diophantine approximation sets in \cite{YLiao2016} for any fixed $x_0\in(0,1]$. \begin{Definition}\label{defexp} Let $\beta>1$ and fix $x_0\in[0,1]$. For any $x\in[0,1]$, denote by $\mathcal{V}_\beta(x,x_0)$ the supremum of the real numbers $v$ for which the equation $$\lvert T_\beta^n x-x_0\rvert<(\beta^n)^{-v}$$ has infinitely many solutions in integers $n\in\mathbb{N}$. Denote by $\hat{\mathcal{V}}_\beta(x,x_0)$ the supremum of the real numbers $\hat{v}$ for which, for every sufficiently large integer $N$, the equation $$\lvert T_\beta^n x-x_0\rvert<(\beta^N)^{-\hat{v}}$$ has a solution $n\in\mathbb{N}$ with $1\leq n\leq N$. \end{Definition} Our main results are the following Theorems \ref{A} and \ref{B}. \begin{theoremalph}\label{A} Let $\beta>1$. For any $x_0\in[0,1]$, any $v\in(0,\infty)$ and any $\hat{v}\in(0,1)$, if $v<\hat{v}/(1-\hat{v})$, then the set $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)=v\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)\geq\hat{v}\right\}$$ is empty. 
Otherwise, the set $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)= v\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}$$ has Hausdorff dimension $$\dfrac{v-\hat{v}-v\hat{v}}{(1+v)(v-\hat{v})}.$$ \end{theoremalph} \begin{theoremalph}\label{B} Let $\beta>1$. The set $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=0\right\}$$ is of full Lebesgue measure. If $0<\hat{v}\leq1$, then $${\rm dim}_H\left( \left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}\right)=\left(\dfrac{1-\hat{v}}{1+\hat{v}}\right)^2.$$ Otherwise, the set $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)>1\right\}$$ is countable. \end{theoremalph} Persson and Schmeling \cite{PS08} gave another point of view, by letting $\beta$ vary and considering the $\beta$-expansions of $1$. They \cite[Theorem 14]{PS08} showed that for any $1<\beta_0<\beta_1<2$ and any $v\geq0$, one has $${\rm dim}_H\left( \left\{\beta\in(\beta_0,\beta_1):\mathcal{V}_\beta(1,x_0)=v\right\}\right)=\dfrac{1}{1+v}.$$ In \cite{LPWW}, the assumption $\beta_1<2$ is removed. In the same way as {\bf Theorem BL}, Bugeaud and Liao \cite{YLiao2016} also proved that for any $v\in(0,\infty)$ and any $\hat{v}\in(0,1)$, if $v<\hat{v}/(1-\hat{v})$, then $$\left\{\beta>1:v_\beta(1)=v\right\}\cap\left\{\beta>1:\hat{v}_\beta(1)\geq\hat{v}\right\}=\emptyset.$$ Otherwise, $${\rm dim}_H\left(\{\beta>1:v_\beta(1)= v\}\cap\left\{\beta>1:\hat{v}_\beta(1)=\hat{v}\right\}\right)=\dfrac{v-\hat{v}-v\hat{v}}{(1+v)(v-\hat{v})}.$$ When considering the exponents $\mathcal{V}_\beta(1,x_0)$ and $\hat{\mathcal{V}}_\beta(1,x_0)$, we obtain the following Theorems \ref{C} and \ref{D}. \begin{theoremalph}\label{C} For any $x_0\in[0,1]$, any $v\in(0,\infty)$ and any $\hat{v}\in(0,1)$, if $v<\hat{v}/(1-\hat{v})$, then the set $$\left\{\beta>1:\mathcal{V}_\beta(1,x_0)=v\right\}\cap\left\{\beta>1:\hat{\mathcal{V}}_\beta(1,x_0)\geq\hat{v}\right\}$$ is empty. 
Otherwise, $${\rm dim}_H\left(\{\beta>1:\mathcal{V}_\beta(1,x_0)= v\}\cap\left\{\beta>1:\hat{\mathcal{V}}_\beta(1,x_0)=\hat{v}\right\}\right)=\dfrac{v-\hat{v}-v\hat{v}}{(1+v)(v-\hat{v})}.$$ \end{theoremalph} \begin{theoremalph}\label{D} For any $x_0\in[0,1]$ and any $\hat{v}\in[0,1]$, one has $${\rm dim}_H\left( \left\{\beta>1:\hat{\mathcal{V}}_\beta(1,x_0)\geq\hat{v}\right\}\right)={\rm dim}_H\left( \left\{\beta>1:\hat{\mathcal{V}}_\beta(1,x_0)=\hat{v}\right\}\right)$$ and $${\rm dim}_H \left(\{\beta>1:\hat{\mathcal{V}}_\beta(1,x_0)=\hat{v}\}\right)=\left(\dfrac{1-\hat{v}}{1+\hat{v}}\right)^2.$$ \end{theoremalph} Our paper is organized as follows. We recall some classical results of the theory of $\beta$-transformations in Section $2$. Theorems \ref{A} and \ref{B} are proved in Section $3$. Section $4$ establishes Theorems \ref{C} and \ref{D}. \medskip \section{Beta-transformations} The beta-expansion was introduced by R\'{e}nyi \cite{Ren57} in 1957. For any $\beta>1$, the $\beta$-transformation $T_{\beta}$ on $[0,1)$ is defined by $$T_\beta x=\beta x-\lfloor\beta x\rfloor,$$ where $\lfloor\xi\rfloor$ denotes the largest integer less than or equal to $\xi$. Let \begin{equation*} \lceil\beta\rfloor= \begin{cases} \beta -1, & \text{if}~\beta \text{ is a positive integer},\\ \lfloor \beta \rfloor, & \text{otherwise}. \end{cases} \end{equation*} \begin{Definition} A sequence $\{\varepsilon_n:\varepsilon_n=\varepsilon_n(x,\beta)\}_{n\geq 1}\in\mathcal{A}^{\mathbb{N}}:=\{0,1,\cdots,\lceil\beta\rfloor\}^{\mathbb{N}}$ is called the $\beta$-expansion of a number $x\in[0,1)$, if \begin{equation}\label{E1} x=\dfrac{\varepsilon_1}{\beta}+\dfrac{\varepsilon_2}{\beta^2}+\cdots+\dfrac{\varepsilon_n}{\beta^n}+\cdots, \end{equation} where $\varepsilon_n(x,\beta)=\lfloor\beta T^{n-1}_\beta x\rfloor$, for all positive integers $n\in\mathbb{N}$. 
We also write $$d_\beta(x)=\left(\varepsilon_1,\cdots,\varepsilon_n,\cdots\right).$$ \end{Definition} We can extend the definition of the $\beta$-transformation to the point $1$ by setting $$T_\beta 1:=\beta-\lfloor\beta\rfloor.$$ One then obtains $$ 1=\dfrac{\varepsilon_1(1,\beta)}{\beta}+\dfrac{\varepsilon_2(1,\beta)}{\beta^2}+\cdots+\dfrac{\varepsilon_n(1,\beta)}{\beta^n}+\cdots,$$ where $\varepsilon_n(1,\beta)=\lfloor\beta T^{n-1}_\beta 1\rfloor,~\text{for all positive integers $n\in\mathbb{N}$}$. We also write $$ d_\beta(1)=\left(\varepsilon_1(1,\beta),\cdots,\varepsilon_n(1,\beta),\cdots\right).$$ If $d_\beta(1)$ is finite, i.e., there is an integer $m>0$ such that $\varepsilon_m(1,\beta)\neq 0$ and $\varepsilon_i(1,\beta)=0$ for all $i>m$, then $\beta$ is called a \emph{simple Parry number}. In this case, the infinite $\beta$-expansion of $1$ is defined as $$(\varepsilon^\ast_1(\beta),\varepsilon^\ast_2(\beta),\cdots,\varepsilon^\ast_n(\beta),\cdots):=(\varepsilon_1(1,\beta),\varepsilon_2(1,\beta),\cdots,\varepsilon_m(1,\beta)-1)^\infty,$$ where $(\omega)^\infty$ denotes the infinite periodic repetition of the word $\omega$. If $d_\beta(1)$ is infinite, then we define $$(\varepsilon^\ast_1(\beta),\varepsilon^\ast_2(\beta),\cdots,\varepsilon^\ast_n(\beta),\cdots):=(\varepsilon_1(1,\beta),\varepsilon_2(1,\beta),\cdots,\varepsilon_n(1,\beta),\cdots).$$ Endow the set $\mathcal{A}^{\mathbb{N}}$ with the product topology and define the one-sided shift operator $\sigma$ as $$\sigma\left((\omega_n)_{n\geq1}\right):=(\omega_{n+1})_{n\geq1},$$ for any infinite sequence $(\omega_n)_{n\geq1}\in\mathcal{A}^{\mathbb{N}}$. The lexicographical order $<_{lex}$ on $\mathcal{A}^{\mathbb{N}}$ is defined as $$\omega=(\omega_1,\omega_2,\cdots)<_{lex}\omega'=(\omega'_1,\omega'_2,\cdots),$$ if $\omega_1<\omega'_1$ or if there is an integer $k\geq 2$ such that for all $1\leq i< k$, $\omega_i=\omega'_i$ but $\omega_k<\omega'_k$. Denote by $\omega\leq_{lex}\omega'$ the relation $\omega<_{lex}\omega'$ or $\omega=\omega'$. 
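The greedy digits $\varepsilon_n(x,\beta)=\lfloor\beta T_\beta^{n-1}x\rfloor$ and the reconstruction (\ref{E1}) can be computed directly; a floating-point sketch (the function names are ours):

```python
import math

def beta_digits(x, beta, n):
    # First n digits eps_k(x, beta) = floor(beta * T_beta^{k-1} x)
    # of the beta-expansion of x in [0, 1).
    digits = []
    y = x
    for _ in range(n):
        d = math.floor(beta * y)
        digits.append(d)
        y = beta * y - d  # one step of T_beta
    return digits

def from_digits(digits, beta):
    # Partial sum  sum_k eps_k / beta^k, which approximates x.
    return sum(d / beta ** (k + 1) for k, d in enumerate(digits))

phi = (1 + math.sqrt(5)) / 2  # golden ratio
digs = beta_digits(0.3, phi, 30)
print(digs[:10], abs(from_digits(digs, phi) - 0.3))
```

For $\beta=\varphi$ the digits lie in $\{0,1\}$ and never contain two consecutive $1$'s, in accordance with the admissibility condition recalled below.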
A finite word $(\omega_1,\omega_2,\cdots,\omega_n)$ is called $\beta$-admissible if there is $x\in[0,1]$ such that the $\beta$-expansion of $x$ begins with $(\omega_1,\omega_2,\cdots,\omega_n).$ An infinite sequence $(\omega_1,\omega_2,\cdots,\omega_n,\cdots)$ is called $\beta$-admissible if there is $x\in[0,1]$ such that the $\beta$-expansion of $x$ is $(\omega_1,\omega_2,\cdots,\omega_n,\cdots)$. An infinite sequence $(\omega_1,\omega_2,\cdots,\omega_n,\cdots)$ is self-admissible if $$\sigma^k(\omega_1,\omega_2,\cdots,\omega_n,\cdots)\leq_{lex}(\omega_1,\omega_2,\cdots,\omega_n,\cdots),~\text{for }k\geq0.$$ Denote by $\Sigma_\beta$ the set of all infinite $\beta$-admissible sequences and denote by $\Sigma^n_\beta$ the set of all $\beta$-admissible sequences with length $n$. The $\beta$-admissible sequences were characterized by Parry \cite{P1960} and R\'{e}nyi \cite{Ren57}. \begin{Theorem}\label{Admissible} Let $\beta>1$. \begin{enumerate}[(1)] \item (\cite[Lemma 1]{P1960}) A word $\omega=(\omega_n)_{n\geq 1}\in\Sigma_\beta$ if and only if $$\sigma^k(\omega)\leq_{lex}(\varepsilon^\ast_1(\beta),\varepsilon^\ast_2(\beta),\cdots,\varepsilon^\ast_n(\beta),\cdots),~\text{for all $k\geq 0$}.$$ \item (\cite[Lemma 3]{P1960}) For any $x_1,~x_2\in[0,1]$, $x_1<x_2$ if and only if $$d_\beta(x_1)<_{lex}d_\beta(x_2).$$ \item (\cite[Lemma 4]{P1960}) For any $\beta_2>\beta_1>1$, one has $$\Sigma^n_{\beta_1}\subseteq\Sigma^n_{\beta_2},\quad\Sigma_{\beta_1}\subseteq\Sigma_{\beta_2}.$$ \end{enumerate} \end{Theorem} \begin{Theorem}\label{cardinality}(\cite[Theorem 2]{Ren57}) For any $\beta>1$, one has $$\beta^n\leq\sharp \Sigma^n_\beta\leq\dfrac{\beta^{n+1}}{\beta-1},$$ where $\sharp$ denotes the cardinality of a finite set. \end{Theorem} For every $(\omega_1,\cdots,\omega_n)\in\Sigma^n_\beta$, we call $$I_n(\omega_1,\cdots,\omega_n):=\{x\in[0,1]:d_\beta(x) \text{ starts with }\omega_1,\cdots,\omega_n\}$$ an \emph{$n$-th order basic interval} with respect to $\beta$. 
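Parry's criterion in Theorem \ref{Admissible} and the counting bounds of Theorem \ref{cardinality} can be checked by brute force for small $n$; a sketch (the names are ours), using the golden ratio $\varphi$, for which $d_\varphi(1)=(1,1)$ is finite, so $\varepsilon^\ast(\varphi)=(1,0,1,0,\cdots)$ and the admissible $0$-$1$ words are exactly those with no two consecutive $1$'s (Fibonacci counts):

```python
from itertools import product

def is_admissible(word, eps_star):
    # Parry's criterion: a finite word is beta-admissible iff every
    # shift sigma^k(word) is lexicographically <= the infinite
    # expansion eps_star of 1, compared on their common length.
    for k in range(len(word)):
        tail = list(word[k:])
        if tail > eps_star[:len(tail)]:
            return False
    return True

def count_admissible(n, eps_star, alphabet):
    # Cardinality of Sigma_beta^n by brute-force enumeration.
    return sum(1 for w in product(alphabet, repeat=n)
               if is_admissible(w, eps_star))

eps_star = [1, 0] * 10  # quasi-greedy expansion of 1 for the golden ratio
print([count_admissible(n, eps_star, (0, 1)) for n in range(1, 8)])
```

The counts obtained this way satisfy R\'enyi's bounds $\beta^n\leq\sharp\Sigma^n_\beta\leq\beta^{n+1}/(\beta-1)$ with $\beta=\varphi$.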
Denote by $I_n(x)$ the $n$-th order basic interval containing $x$. The basic intervals are also called \emph{cylinders}. It is crucial to estimate the lengths of the basic intervals. We will use the key notion of \textquotedblleft full cylinder\textquotedblright\ introduced by Fan and Wang \cite{FW2012}. For any $(\omega_1,\cdots,\omega_n)\in\Sigma^n_\beta$, a basic interval $I_n(\omega_1,\cdots,\omega_n)$ is said to be full if its length is $\beta^{-n}$. Denote by $\lvert I_n(\omega_1,\cdots,\omega_n)\rvert$ the length of the $n$-th order basic interval. \begin{Proposition}\label{full} (\cite[Lemma 3.1]{FW2012} and \cite[Lemma 2.5]{SW2013}) For any $(\omega_1,\cdots,\omega_n)\in\Sigma^n_\beta$, the following statements are equivalent: \begin{enumerate}[(1)] \item $I_n(\omega_1,\cdots,\omega_n)$ is a full basic interval. \item $T^n_\beta I_n(\omega_1,\cdots,\omega_n)=[0,1)$. \item For any $\omega'=(\omega'_1,\cdots,\omega'_m)\in\Sigma^m_\beta$, the concatenation $$(\omega_1,\cdots,\omega_n,\omega'_1,\cdots,\omega'_m)\in\Sigma^{n+m}_\beta, \text{ i.e., is $\beta$-admissible.}$$ \end{enumerate} \end{Proposition} \begin{Proposition}\label{fullc} (\cite[Corollary 2.6]{SW2013}) \begin{enumerate}[(1)] \item If $(\omega_1,\cdots,\omega_{n+1})$ is a $\beta$-admissible sequence with $\omega_{n+1}\neq0$, then $$I_{n+1}(\omega_1,\cdots,\omega_n,\omega'_{n+1})$$ is full for any $0\leq \omega'_{n+1}<\omega_{n+1}$. 
\item For every $\omega\in\Sigma^n_\beta$, if $I_n(\omega)$ is full, then for any $\omega'\in\Sigma^m_\beta$, one has $$\lvert I_{n+m}(\omega,\omega')\rvert=\lvert I_n(\omega)\rvert\cdot\lvert I_m(\omega')\rvert=\dfrac{\lvert I_m(\omega')\rvert}{\beta^n}.$$ \item For any $\omega\in\Sigma^n_\beta$, if $I_{n+m}(\omega,\omega')$ is a full basic interval contained in $I_n(\omega)$ with the smallest order, then $$\lvert I_{n+m}(\omega,\omega')\rvert\geq\dfrac{\lvert I_n(\omega)\rvert}{\beta}.$$ \end{enumerate} \end{Proposition} Next, we define a sequence of numbers $\beta_N$ approaching $\beta$. Given the infinite $\beta$-expansion $(\varepsilon^\ast_1(\beta),\varepsilon^\ast_2(\beta),\cdots,\varepsilon^\ast_n(\beta),\cdots)$ of $1$, for any $N$ with $\varepsilon^\ast_N(\beta)>0$, let $\beta_N$ be the unique real solution of the equation \begin{equation}\label{ED1} 1=\dfrac{\varepsilon^\ast_1(\beta)}{z}+\cdots+\dfrac{\varepsilon^\ast_N(\beta)}{z^N}. \end{equation} Therefore, $\beta_N<\beta$ and the sequence $\{\beta_N:N\geq 1\}$ increases and converges to $\beta$ as $N$ tends to infinity. \begin{Lemma}\label{length} (\cite[Lemma 2.7]{SW2013}) For every $\omega\in\Sigma^n_{\beta_N}$ viewed as an element of $\Sigma^n_\beta$, one has $$\dfrac{1}{\beta^{n+N}}\leq\lvert I_n(\omega_1,\cdots,\omega_n)\rvert\leq\dfrac{1}{\beta^n}.$$ \end{Lemma} \medskip \section{Proofs of Theorems \ref{A} and \ref{B} }\label{sec2} For $\beta>1$ and $x_0\in[0,1]$, by the definitions of $\mathcal{V}_\beta(x,x_0)$ and $\hat{\mathcal{V}}_\beta(x,x_0)$, it can be checked that for every $x\in[0,1]$, we have $$\hat{\mathcal{V}}_\beta(x,x_0)\leq\mathcal{V}_\beta(x,x_0).$$ We first consider the two special cases $\hat{\mathcal{V}}_\beta(x,x_0)=\mathcal{V}_\beta(x,x_0)=0$ and $\hat{\mathcal{V}}_\beta(x,x_0)=\mathcal{V}_\beta(x,x_0)=\infty$. 
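As an aside, the numbers $\beta_N$ defined by equation (\ref{ED1}) can be computed by bisection, since the right-hand side of (\ref{ED1}) is strictly decreasing in $z$; a sketch (the names are ours; it assumes the truncated sum exceeds $1$ near $z=1$, which holds as soon as at least two of the retained digits are nonzero):

```python
def beta_N(eps_star, N, tol=1e-12):
    # Unique root z > 1 of  1 = sum_{i=1}^{N} eps_star[i-1] / z^i
    # (equation (ED1)), found by bisection: the right-hand side is
    # strictly decreasing in z on (1, infinity).
    f = lambda z: sum(e / z ** (i + 1) for i, e in enumerate(eps_star[:N])) - 1.0
    lo, hi = 1.0 + 1e-9, 2.0 + max(eps_star[:N])  # f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# for the golden ratio, eps_star = (1, 0, 1, 0, ...); beta_N increases to phi:
eps_star = [1, 0] * 10
print(beta_N(eps_star, 3), beta_N(eps_star, 5), (1 + 5 ** 0.5) / 2)
```

For instance, $\beta_3$ is the root of $z^3=z^2+1$, and indeed $\beta_3<\beta_5<\varphi$.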
\begin{Lemma}\label{spezero} The set $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)=0\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=0\right\}$$ is of full Lebesgue measure. \end{Lemma} \begin{proof} Note that for any fixed $x_0\in[0,1]$ and any $x\in[0,1]$, we always have $$\hat{\mathcal{V}}_\beta(x,x_0)\leq\mathcal{V}_\beta(x,x_0).$$ Then, $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)=0\right\}\subseteq\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=0\right\}.$$ Hence, if for any fixed $x_0\in[0,1]$ we can prove that $\mathcal{V}_\beta(x,x_0)=0$ for Lebesgue almost every $x\in[0,1]$, then the lemma follows. Now, we only need to prove $$m\left(\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)>0\right\}\right)=0,$$ where $m(\cdot)$ denotes the Lebesgue measure of a set. In fact, we have $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)>0\right\}=\cup^\infty_{k=1}\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)>1/k\right\}$$ and $\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)>1/k\right\}$ is a subset of $$\left\{x\in[0,1]:\lvert T^n_\beta x-x_0\rvert<\beta^{-n/k},\quad\text{for infinitely many }n\in\mathbb{N}\right\}.$$ Since $\sum^\infty_{n=1}\beta^{-n/k}<\infty$ for any $k\geq1$, by \cite[Theorem 2A, B, C]{P67}, $$m\left(\left\{x\in[0,1]:\lvert T^n_\beta x-x_0\rvert<\beta^{-n/k},\quad\text{for infinitely many }n\in\mathbb{N}\right\}\right)=0.$$ Therefore, for any $k\geq1$, we have $$m(\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)>1/k\right\})=0.$$ Thus, $$m\left(\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)>0\right\}\right)=0.$$ \end{proof} For the case $\hat{\mathcal{V}}_\beta(x,x_0)=\mathcal{V}_\beta(x,x_0)=\infty$, we have the following Lemma \ref{infinite}. 
\begin{Lemma}\label{infinite} The set $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\infty\right\}=\bigcup^\infty_{n=1}\bigcup_{\omega\in\Sigma^n_\beta}\left\{x\in[0,1]:d_\beta(x)=(\omega,d_\beta(x_0))\right\}$$ is countable. \end{Lemma} \begin{proof} For $\beta>1$ and $x_0\in[0,1]$, we suppose $$d_\beta(x_0)=(\epsilon_1,\epsilon_2,\cdots,\epsilon_n,\cdots).$$ If $x\in\cup^\infty_{n=1}\cup_{\omega\in\Sigma^n_\beta}\left\{x\in[0,1]:d_\beta(x)=(\omega,d_\beta(x_0))\right\}$, then there exists an integer $n_0$ such that $\lvert T_\beta^{n_0}x-x_0\rvert=0$. Therefore, for any $N\geq n_0$, there is $n_0\in[1,N]$ such that $$\lvert T_\beta^{n_0}x-x_0\rvert=0.$$ Thus, $\hat{\mathcal{V}}_\beta(x,x_0)=\infty$. Now, we prove $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\infty\right\}\subseteq\cup^\infty_{n=1}\cup_{\omega\in\Sigma^n_\beta}\left\{x\in[0,1]:d_\beta(x)=(\omega,d_\beta(x_0))\right\}.$$ Suppose, on the contrary, that there is $x$ with $\hat{\mathcal{V}}_\beta(x,x_0)=\infty$ such that $$x\notin\cup^\infty_{n=1}\cup_{\omega\in\Sigma^n_\beta}\left\{x\in[0,1]:d_\beta(x)=(\omega,d_\beta(x_0))\right\}.$$ Denote the $\beta$-expansion of $x$ by $$x=\dfrac{a_1}{\beta}+\dfrac{a_2}{\beta^2}+\cdots+\dfrac{a_n}{\beta^n}+\cdots,$$ where $a_i\in\{0,\cdots,\lceil\beta\rfloor\}$, for all $i\geq1$. We can take two increasing sequences $\left\{n'_i:i\geq 1\right\}$ and $\left\{m'_i:i\geq1\right\}$ with the following properties: \begin{enumerate}[(1)] \item For every $i\geq1$, one has $$a_{n'_i}>0,\quad a_{n'_i+1}=\epsilon_1,~a_{n'_i+2}=\epsilon_2,~\cdots,~a_{m'_i-1}=\epsilon_{m'_i-n'_i-1},\quad a_{m'_i}>0.$$ \item For every $a_n=0$, there is an integer $i$ such that $n'_i<n<m'_i$. \end{enumerate} By the choice of $\left\{n'_i: i\geq 1\right\}$ and $\left\{m'_i:i\geq1\right\}$, for every $i\geq1$, one has $n'_i<m'_i<n'_{i+1}$. 
Since $\hat{\mathcal{V}}_\beta(x,x_0)>0$, one has $$\limsup_{i\rightarrow\infty}(m'_i-n'_i)=\infty.$$ Taking $n_1=n'_1$ and $m_1=m'_1$, suppose $m_k,~n_k$ have been defined. Let $i_1=1$ and $i_{k+1}:=\min\{i>i_k:m'_i-n'_i> m_k-n_k\},\text{ for }k\geq1.$ Then, define $$n_{k+1}:=n'_{i_{k+1}},\quad m_{k+1}:=m'_{i_{k+1}}.$$ Therefore, the sequence $\{i_k:k\geq1\}$ is well defined. In this way, we obtain the subsequences $\{n_k:k\geq1\}$ and $\{m_k:k\geq1\}$ of $\left\{n'_i:i\geq 1\right\}$ and $\left\{m'_i:i\geq1\right\}$, respectively, such that the sequence $\{m_k-n_k: k\geq1\}$ is non-decreasing. Noticing that $\beta^{n_k-m_k}<\lvert T^{n_k}_\beta x-x_0\rvert<\beta^{n_k-m_k+1}$, we have $$\hat{\mathcal{V}}_\beta(x,x_0)=\liminf_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_{k+1}}\leq1.$$ This contradicts our assumption $\hat{\mathcal{V}}_\beta(x,x_0)=\infty$. Thus, we have proved $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\infty\right\}\subseteq\cup^\infty_{n=1}\cup_{\omega\in\Sigma^n_\beta}\left\{x\in[0,1]:d_\beta(x)=(\omega,d_\beta(x_0))\right\}.$$ Therefore, $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\infty\right\}=\cup^\infty_{n=1}\cup_{\omega\in\Sigma^n_\beta}\left\{x\in[0,1]:d_\beta(x)=(\omega,d_\beta(x_0))\right\},$$ which implies that the set $\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\infty\right\}$ is countable. \end{proof} \begin{Lemma}\label{empty} The set $\{x\in[0,1]:1<\hat{\mathcal{V}}_\beta(x,x_0)<\infty\}$ is empty. \end{Lemma} \begin{proof} This follows from the proof of Lemma \ref{infinite}. \end{proof} For $\beta>1$ and $x_0\in[0,1]$, we suppose $$d_\beta(x_0)=(\epsilon_1,\epsilon_2,\cdots,\epsilon_n,\cdots).$$ For the case $\mathcal{V}_\beta(x,x_0)\in(0,\infty)$ and $\hat{\mathcal{V}}_\beta(x,x_0)\in(0,1)$, we have the following discussion and complete the proof of Theorem \ref{A}. 
\paragraph{{\bf Upper bound}} For any $x\in[0,1]$, denote its $\beta$-expansion by $$x=\dfrac{a_1}{\beta}+\dfrac{a_2}{\beta^2}+\cdots+\dfrac{a_n}{\beta^n}+\cdots.$$ Since $\hat{\mathcal{V}}_\beta(x,x_0)\in(0,1)$, in the same way as in Lemma \ref{infinite}, we can take the maximal subsequences $\{n_k:k\geq1\}$ and $\{m_k:k\geq1\}$ of $\left\{n'_i:i\geq 1\right\}$ and $\left\{m'_i:i\geq1\right\}$, respectively. Similarly, notice that $$\beta^{n_k-m_k}<\lvert T^{n_k}_\beta x-x_0\rvert<\beta^{n_k-m_k+1}.$$ We have \begin{equation}\label{E2} \mathcal{V}_\beta(x,x_0)=\limsup_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_k}=\limsup_{k\rightarrow\infty}\dfrac{m_k}{n_k}-1, \end{equation} \begin{equation}\label{E3} \hat{\mathcal{V}}_\beta(x,x_0)\leq\liminf_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_{k+1}}\leq\liminf_{k\rightarrow\infty}\dfrac{m_k-n_k}{m_k}=1-\limsup_{k\rightarrow\infty}\dfrac{n_k}{m_k}. \end{equation} Since $\left(\limsup\limits_{k\rightarrow\infty}\dfrac{n_k}{m_k}\right)\cdot\left(\limsup\limits_{k\rightarrow\infty}\dfrac{m_k}{n_k}\right)\geq 1$, one has \begin{equation}\label{in1} \mathcal{V}_\beta(x,x_0)\geq\dfrac{\hat{\mathcal{V}}_\beta(x,x_0)}{1-\hat{\mathcal{V}}_\beta(x,x_0)},\qquad \hat{\mathcal{V}}_\beta(x,x_0)\leq\dfrac{\mathcal{V}_\beta(x,x_0)}{1+\mathcal{V}_\beta(x,x_0)}. \end{equation} We can derive from (\ref{in1}) that if $v<\hat{v}/(1-\hat{v})$, then the set $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)=v\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)\geq\hat{v}\right\}$$ is empty. 
Otherwise, in the case where $\mathcal{V}_\beta(x,x_0)=v$ and $\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}$, take the two sequences $\{n_k:k\geq1\}$ and $\{m_k:k\geq1\}$ such that $$\mathcal{V}_\beta(x,x_0)=\lim_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_k},\qquad \hat{\mathcal{V}}_\beta(x,x_0)\leq\liminf_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_{k+1}}.$$ Given $0<\varepsilon<\hat{v}/2$, for $k$ large enough, one has \begin{equation}\label{E4} (v-\varepsilon)n_k\leq m_k-n_k\leq(v+\varepsilon)n_k, \end{equation} \begin{equation}\label{E5} m_k-n_k\geq(\hat{v}-\varepsilon)n_{k+1}. \end{equation} By inequality (\ref{E4}), one has $$(1+v-\varepsilon)m_{k-1}\leq(1+v-\varepsilon)n_k\leq m_k.$$ Therefore, the sequence $\{m_k:k\geq1\}$ increases at least exponentially. Since $n_k\geq m_{k-1}$ for every $k\geq2$, the sequence $\{n_k:k\geq1\}$ also increases at least exponentially. Thus, there is a positive constant $C$ such that $k\leq C\log_\beta n_k$. Combining (\ref{E4}) and (\ref{E5}), one obtains $$(\hat{v}-\varepsilon)n_{k+1}\leq(v+\varepsilon)n_k.$$ Thus, for $k$ large enough, there exist an integer $n_0$ and a positive real number $\varepsilon_1$ small enough such that the sum of all lengths of the blocks of $0$ in the prefix of length $n_k$ of the infinite sequence $a_1a_2\cdots$ is at least equal to \begin{eqnarray*} n_k(\hat{v}-\varepsilon)\left(1+\dfrac{\hat{v}-\varepsilon}{v+\varepsilon}+\dfrac{(\hat{v}-\varepsilon)^2}{(v+\varepsilon)^2}+\cdots\right)-n_0&=&n_k\dfrac{(\hat{v}-\varepsilon)(v+\varepsilon)}{v-\hat{v}+2\varepsilon}-n_0\\ &\geq& n_k\left(\dfrac{v \cdot \hat{v}}{v-\hat{v}}-\varepsilon_1\right). \end{eqnarray*} Among the digits $a_1\cdots a_{m_k}$, there are $k$ blocks of digits which are \textquoteleft free\textquoteright. Denote their lengths by $l_1,\cdots,l_k$. 
There is a small number $\varepsilon_2>0$ such that $$\sum_{i=1}^kl_i\leq n_k-n_k\left(\dfrac{v\cdot \hat{v}}{v-\hat{v}}-\varepsilon_1\right)= n_k(1+\varepsilon_2)\dfrac{v-\hat{v}-v\cdot\hat{v}}{v-\hat{v}}.$$ By Theorem \ref{cardinality}, there are at most $\beta\cdot\beta^{l_i}/(\beta-1)$ ways to choose the block with length $l_i$. Thus, one has in total at most $$\left(\dfrac{\beta}{\beta-1}\right)^k\cdot\beta^{\sum_{i=1}^kl_i}\leq \left(\dfrac{\beta}{\beta-1}\right)^k\cdot\beta^{n_k(1+\varepsilon_2)(v-\hat{v}-v\cdot\hat{v})/(v-\hat{v})}$$ possible choices of the digits $a_1\cdots a_{m_k}$. On the other hand, there are at most $k$ (with $k\leq C\log_\beta n_k$) blocks of $0$ in the prefix of length $n_k$ of the infinite sequence $a_1a_2\cdots$. Since there are at most $n_k$ possible choices for their first indices, one has in total at most $(n_k)^{C\log_\beta n_k}$ possible choices. Consequently, the set $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)= v\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}$$ is covered by $$\left(\dfrac{\beta n_k}{\beta-1}\right)^{C\log_\beta n_k}\cdot\beta^{n_k(1+\varepsilon_2)(v-\hat{v}-v\cdot\hat{v})/(v-\hat{v})}$$ basic intervals of length at most $\beta^{-m_k}$. 
Moreover, by (\ref{E4}), there is a small number $\varepsilon_3>0$ such that $$\beta^{-m_k}\leq \beta^{-(1+v)(1-\varepsilon_3)n_k}.$$ Taking $\varepsilon'=\max\{\varepsilon_2,~\varepsilon_3\}$, we consider the series $$\sum_{N\geq1}N^{C\log_\beta N}\beta^{N(1+\varepsilon')(v-\hat{v}-v\cdot \hat{v})/(v-\hat{v})}\beta^{-(1+v)(1-\varepsilon')Ns}.$$ The critical exponent $s_0$ such that the series converges if $s>s_0$ and diverges if $s<s_0$ is given by $$s_0=\dfrac{1+\varepsilon'}{1-\varepsilon'}\cdot\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}.$$ By a standard covering argument and the arbitrariness of $\varepsilon'$, the Hausdorff dimension of the set $$\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)= v\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}$$ is at most equal to $$\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}.$$ \paragraph{{\bf Lower bound}} To obtain the lower bound, we will construct a suitable Cantor type set. For $v\in(0,\infty)$ and $\hat{v}\in(0,1)$ with $v\geq\hat{v}/(1-\hat{v})$, let $$n'_k=\left\lfloor\left(\dfrac{v}{\hat{v}}\right)^k\right\rfloor,\quad m'_k=\lfloor(1+v)n'_k\rfloor,\quad k=1,2,\cdots.$$ After an adjustment, we can choose two subsequences $\{n_k\}$ and $\{m_k\}$ with $n_k<m_k<n_{k+1}$ for every $k\geq1$ such that $\{m_k-n_k\}$ is a non-decreasing sequence and \begin{equation}\label{lowle} \lim_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_k}=v,\qquad \lim_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_{k+1}}=\hat{v}. 
\end{equation} Recall that the $\beta$-expansion of the fixed point $x_0$ is $$d_\beta(x_0)=(\epsilon_1,\epsilon_2,\cdots,\epsilon_n,\cdots).$$ Consider the set of real numbers $x\in[0,1]$ whose $\beta$-expansion $$x=\dfrac{a_1}{\beta}+\dfrac{a_2}{\beta^2}+\cdots+\dfrac{a_n}{\beta^n}+\cdots,$$ satisfies that for all $k\geq1$, $$a_{n_k}>1,~a_{n_k+1}=\epsilon_1,~a_{n_k+2}=\epsilon_2,~\cdots,~a_{m_k-1}=\epsilon_{m_k-n_k-1},~a_{m_k}>0,$$ $$a_{m_k+(m_k-n_k)}=a_{m_k+2(m_k-n_k)}=\cdots=a_{m_k+t_k(m_k-n_k)}=1,$$ where $t_k$ is the largest integer such that $m_k+t_k(m_k-n_k)<n_{k+1}$. Then, $$t_k\leq\dfrac{n_{k+1}-m_k}{m_k-n_k}\leq\dfrac{2}{\hat{v}},$$ for $k$ large enough. Therefore, the sequence $\{t_k:k\geq1\}$ is bounded. Fix $N$ and let $\beta_N$ be the real number defined by the infinite $\beta$-expansion of $1$ as in equality (\ref{ED1}). We replace the digit $1$ at the positions of $a_{n_k}$, $a_{m_k}$ and $a_{m_k+i(m_k-n_k)}$ for $1\leq i\leq t_k$ by the block $0^N10^N$, and fill the other places with blocks belonging to $\Sigma_{\beta_N}$. Thus, we have constructed the Cantor type subset $E$. Since $\{t_k\}$ is bounded, one has $$\lim_{k\rightarrow\infty}\dfrac{m_k-n_k-1+2N}{n_k+(4k-2)N+\sum_{i=1}^{k-1}2Nt_i}=\lim_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_k}=v,$$ $$\lim_{k\rightarrow\infty}\dfrac{m_k-n_k-1+2N}{n_{k+1}+(4k+2)N+\sum_{i=1}^k2Nt_i}=\lim_{k\rightarrow\infty}\dfrac{m_k-n_k}{n_{k+1}}=\hat{v}.$$ According to the construction, the sequence $d_\beta(x)$ is in $\Sigma_{\beta_N}.$ We distribute the mass uniformly when we meet a block in $\Sigma_{\beta_N}$ and keep the mass when we go through the positions where the digits are determined by the construction of $E$. The Bernoulli measure $\mu$ on $E$ is defined as follows. If $n<n_1$, then define $\mu(I_n)=1/\sharp\Sigma^n_{\beta_N}$. If $n_1\leq n\leq m_1+4N$, then define $\mu(I_n)=1/\sharp\Sigma^{n_1-1}_{\beta_N}$. 
If there is an integer $t$ with $0\leq t\leq t_1-1$ such that $$m_1+4N+(t+1)(m_1-n_1)+2Nt<n\leq m_1+4N+(t+1)(m_1-n_1)+2N(t+1),$$ then define $$\mu(I_n)=\dfrac{1}{\sharp\Sigma^{n_1-1}_{\beta_N}}\cdot\dfrac{1}{\left(\sharp\Sigma^{m_1-n_1-1}_{\beta_N}\right)^{t+1}}.$$ If there is an integer $t$ with $0\leq t\leq t_1$ such that $$m_1+4N+t(m_1-n_1)+2Nt<n\leq c,$$ where $c:=\min\{n_2+4N+2Nt_1, m_1+4N+(t+1)(m_1-n_1)+2Nt\}$, then define $$\mu(I_n)=\dfrac{1}{\sharp\Sigma^{n_1-1}_{\beta_N}}\cdot\dfrac{1}{\left(\sharp\Sigma^{m_1-n_1-1}_{\beta_N}\right)^t}\cdot\dfrac{1}{\sharp\Sigma^{n-(m_1+4N+t(m_1-n_1)+2Nt)}_{\beta_N}}.$$ For $k\geq2$, let $$l_k:=n_k+4(k-1)N+\sum^{k-1}_{i=1}2Nt_i,\quad h_k:=m_k+4kN+\sum^{k-1}_{i=1}2Nt_i,$$ $$p_k:=m_k-n_k-1,\quad q_k:=h_k+t_k(m_k-n_k)+2Nt_k.$$ If $l_k\leq n\leq h_k$, then define $$\mu(I_n)=\dfrac{1}{\sharp\Sigma^{n_1-1}_{\beta_N}}\cdot\dfrac{1}{\prod^{k-1}_{i=1}\left(\sharp \Sigma^{p_i}_{\beta_N}\right)^{t_i}\cdot\left(\sharp\Sigma^{l_{i+1}-q_i-1}_{\beta_N}\right)}=\mu(I_{l_k})=\mu(I_{h_k}).$$ If there is an integer $t$ with $0\leq t\leq t_k-1$ such that $$h_k+(t+1)(m_k-n_k)+2Nt<n\leq h_k+(t+1)(m_k-n_k)+2N(t+1),$$ then define $$\mu(I_n)=\mu(I_{h_k})\cdot\dfrac{1}{\left(\sharp\Sigma^{p_k}_{\beta_N}\right)^{t+1}}.$$ If there is an integer $t$ with $0\leq t\leq t_k$ such that $$h_k+t(m_k-n_k)+2Nt<n\leq\min\{l_{k+1}, h_k+(t+1)(m_k-n_k)+2Nt\},$$ then define $$\mu(I_n)=\mu(I_{h_k})\cdot\dfrac{1}{\left(\sharp\Sigma^{p_k}_{\beta_N}\right)^t}\cdot\dfrac{1}{\sharp\Sigma^{n-(h_k+t(m_k-n_k)+2Nt)}_{\beta_N}}.$$ By the construction and Proposition \ref{full}, $I_{h_k}$ is full. For calculating the local dimension of $\mu$, we discuss different cases as follows. 
{\bf Case $A$:} If $n=h_k$, then \begin{eqnarray*} \liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_{h_k})}{\log_\beta \lvert I_{h_k}\rvert}&=&\liminf_{k\rightarrow\infty}\dfrac{n_1-1+\sum\limits_{i=1}^{k-1}\left(t_ip_i+l_{i+1}-q_i-1\right)}{h_k}\cdot\log_\beta\beta_N\\&=&\liminf_{k\rightarrow\infty}\dfrac{n_1-1+\sum\limits_{i=1}^{k-1}\left(l_{i+1}-h_i-2Nt_i-1\right)}{h_k}\cdot\log_\beta\beta_N. \end{eqnarray*} Recall that $\{t_k:k\geq1\}$ is bounded and $\{m_k:k\geq1\}$ grows exponentially fast in $k$; therefore, $$\liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_{h_k})}{\log_\beta \lvert I_{h_k}\rvert}=\liminf_{k\rightarrow\infty}\dfrac{\sum_{i=1}^{k-1}\left(n_{i+1}-m_i\right)}{m_k}\log_\beta\beta_N.$$ By equalities (\ref{lowle}), one has $$\lim_{k\rightarrow\infty}\dfrac{m_k}{n_k}=1+v,\quad \lim_{k\rightarrow\infty}\dfrac{m_{k+1}}{m_k}=\dfrac{v}{\hat{v}},\quad\lim_{k\rightarrow\infty}\dfrac{n_{k+1}}{m_k}=\dfrac{v}{\hat{v}(1+v)}.$$ According to the Stolz--Ces\`{a}ro theorem, \begin{eqnarray*} \liminf_{k\rightarrow\infty}\dfrac{\sum_{i=1}^{k-1}\left(n_{i+1}-m_i\right)}{m_k}&=&\liminf_{k\rightarrow\infty}\dfrac{n_{k+1}-m_k}{m_{k+1}-m_k}\\&=&\liminf_{k\rightarrow\infty}\dfrac{\dfrac{n_{k+1}}{m_k}-1}{\dfrac{m_{k+1}}{m_k}-1}=\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}. 
\end{eqnarray*} Thus, $$\liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_{h_k})}{\log_\beta\lvert I_{h_k}\rvert}=\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}\cdot\log_\beta\beta_N.$$ {\bf Case $B$:} For an integer $n$ large enough, if there is $k\geq2$ such that $l_k\leq n\leq h_k$, then $$\liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_n)}{\log_\beta\lvert I_n\rvert}\geq \liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_n)}{\log_\beta\lvert I_{h_k}\rvert}=\liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_{h_k})}{\log_\beta\lvert I_{h_k}\rvert}.$$ {\bf Case $C$:} For an integer $n$ large enough, if there is an integer $t$ with $0\leq t\leq t_k-1$ such that $$h_k+(t+1)(m_k-n_k)+2Nt<n\leq h_k+(t+1)(m_k-n_k)+2N(t+1),$$ then one has $$\mu(I_n)\leq\mu(I_{h_k})\cdot\beta^{-(t+1)p_k}_N.$$ Since $I_{h_k}$ is full, by Proposition \ref{fullc}, $\lvert I_n\rvert=\lvert I_{h_k}\rvert\cdot\lvert I_{n-h_k}(\omega')\rvert$, where $\omega'$ is an admissible block in $\Sigma^{n-h_k}_{\beta_N}$. By Lemma \ref{length}, $$\lvert I_n\rvert\geq\lvert I_{h_k}\rvert\cdot\beta^{-(n-h_k+N)}.$$ Hence, $$\dfrac{-\log_\beta\mu(I_n)}{-\log_\beta\lvert I_n\rvert}\geq\dfrac{-\log_\beta\mu(I_{h_k})+(t+1)p_k\log_\beta\beta_N}{-\log_\beta\lvert I_{h_k}\rvert+((t+1)p_k+N(2t+1))}\geq\dfrac{-\log_\beta\mu(I_{h_k})}{-\log_\beta\lvert I_{h_k}\rvert}\cdot\varphi(N),$$ where $\varphi(N)<1$ and $\varphi(N)$ tends to $1$ as $N$ tends to infinity. If there is an integer $t$ with $0\leq t\leq t_k$ such that $$h_k+t(m_k-n_k)+2Nt<n\leq\min\{l_{k+1}, h_k+(t+1)(m_k-n_k)+2Nt\},$$ then letting $l:=n-(h_k+t(m_k-n_k)+2Nt)$, one has $$\mu(I_n)\leq\mu(I_{h_k})\cdot\beta^{-tp_k-l}_N.$$ Since $I_{h_k}$ is full, by Proposition \ref{fullc}, $\lvert I_n\rvert=\lvert I_{h_k}\rvert\cdot\lvert I_{n-h_k}(\omega')\rvert$, where $\omega'$ is an admissible block in $\Sigma^{n-h_k}_{\beta_N}$. 
By Lemma \ref{length}, $\lvert I_{n-h_k}(\omega')\rvert\geq\beta^{-(n-h_k+N)}.$ Therefore, $$\lvert I_n\rvert\geq\lvert I_{h_k}\rvert\cdot\beta^{-(n-h_k+N)}.$$ Hence, $$\dfrac{-\log_\beta\mu(I_n)}{-\log_\beta\lvert I_n\rvert}\geq\dfrac{-\log_\beta\mu(I_{h_k})+(tp_k+l)\log_\beta\beta_N}{-\log_\beta\lvert I_{h_k}\rvert+(tp_k+l+t+N(2t+1))}\geq\dfrac{-\log_\beta\mu(I_{h_k})}{-\log_\beta\lvert I_{h_k}\rvert}\cdot\varphi(N).$$ Therefore, in all cases, $$\liminf_{k\rightarrow\infty}\dfrac{\log_\beta\mu(I_n)}{\log_\beta\lvert I_n\rvert}\geq\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}\cdot\log_\beta\beta_N\cdot\varphi(N).$$ Given a point $x\in E$, let $r$ be a number with $\lvert I_{n+1}(x)\rvert\leq r<\lvert I_n(x)\rvert$. We consider the ball $B(x,r)$. By Lemma \ref{length}, every $n$-th order basic interval $I_n$ satisfies $\lvert I_n\rvert\geq \beta^{-(n+N)}$. Hence, the ball $B(x,r)$ intersects at most $\lfloor 2\beta^N\rfloor+2$ basic intervals of order $n$. On the other hand, $$r\geq\lvert I_{n+1}(x)\rvert\geq\beta^{-(n+1+N)}=\beta^{-(1+N)}\cdot\beta^{-n}\geq\beta^{-(1+N)}\cdot\lvert I_n(x)\rvert.$$ Therefore, $$\liminf_{r\rightarrow0}\dfrac{\log_\beta\mu(B(x,r))}{\log_\beta r}=\liminf_{n\rightarrow\infty}\dfrac{\log_\beta\mu(I_n(x))}{\log_\beta\lvert I_n(x)\rvert}.$$ Letting $N$ tend to infinity, by the Mass Distribution Principle \cite[p.~60]{FK90}, we get the lower bound $$\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}.$$ Hence, the proof of Theorem \ref{A} is complete. Now, we prove Theorem \ref{B}. \begin{proof}[Proof of Theorem \ref{B}] If $\hat{\mathcal{V}}_\beta(x,x_0)=0$, by Lemma \ref{spezero}, the set $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=0\right\}$$ is of full Lebesgue measure. If $\hat{\mathcal{V}}_\beta(x,x_0)>1$, by Lemma \ref{infinite} and Lemma \ref{empty}, the set $$\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)>1\right\}=\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\infty\right\}$$ is countable. 
If $\hat{v}\in(0,1)$, for any $v\geq\hat{v}/(1-\hat{v})$ and any positive integer $L$ large enough, by a discussion similar to that of the upper bound in the proof of Theorem \ref{A}, the Hausdorff dimension of the set $$\left\{x\in[0,1]:v\leq\mathcal{V}_\beta(x,x_0)< v+1/L\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}$$ is at most equal to $$\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}+\dfrac{\hat{v}^2}{L(1-\hat{v})}.$$ Letting $L$ tend to $\infty$ and regarding the resulting bound as a function of $v$ with $v\geq\hat{v}/(1-\hat{v})$, the maximum is attained at $v=2\hat{v}/(1-\hat{v})$. Thus, $${\rm dim}_H\left( \left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}\right)\leq\left(\dfrac{1-\hat{v}}{1+\hat{v}}\right)^2.$$ On the other hand, by a discussion similar to that of the lower bound in the proof of Theorem \ref{A}, we also have $${\rm dim}_H\left( \left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}\right)\geq\left(\dfrac{1-\hat{v}}{1+\hat{v}}\right)^2.$$ Thus, $${\rm dim}_H\left( \left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}\right)=\left(\dfrac{1-\hat{v}}{1+\hat{v}}\right)^2.$$ \end{proof} \section{Proofs of Theorems \ref{C} and \ref{D}} Following the approach of Persson and Schmeling \cite{PS08}, we set up a correspondence between the $\beta$-shift and the parameter $\beta$. Parry \cite[Lemma 2]{P1960} characterized the $\beta$-expansion of $1$. \begin{Theorem}\label{adone} A sequence $(\omega_1,\omega_2,\cdots,\omega_n,\cdots)$ is the $\beta$-expansion of $1$ for some $\beta>1$ if and only if it is self-admissible. \end{Theorem} \paragraph{{\bf Upper bound}} We consider an interval $(\beta_0,\beta_1)$, where $1<\beta_0<\beta_1$. 
For $v\in(0,\infty)$ and $\hat{v}\in(0,1)$, let $$\mathcal{L}_{v,\hat{v}}:=\left\{\beta>1:\mathcal{V}_\beta(1,x_0)=v\right\}\cap\left\{\beta>1:\hat{\mathcal{V}}_\beta(1,x_0)=\hat{v}\right\},$$ $$\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1):=\left\{\beta\in(\beta_0,\beta_1):\mathcal{V}_\beta(1,x_0)=v\right\}\cap\left\{\beta\in(\beta_0,\beta_1):\hat{\mathcal{V}}_\beta(1,x_0)=\hat{v}\right\}.$$ By Theorem \ref{adone}, each self-admissible sequence corresponds to a real number $\beta>1$. Assume that $\mathcal{S}_{\beta_1}$ is the set of all self-admissible sequences in $\Sigma_{\beta_1}$ and $\pi_{\beta_1}$ is the natural projection from the $\beta$-shift to interval $[0,1]$. Thus, there exists a one-to-one map $$\rho_{\beta_1}:\pi_{\beta_1}(\mathcal{S}_{\beta_1})\rightarrow(1,\beta_1).$$ Define the subset $\mathcal{D}_{v,\hat{v}}$ of $\Sigma_{\beta_1}$ as $$\pi^{-1}_{\beta_1}\left(\left\{x\in[0,1]:\mathcal{V}_\beta(x,x_0)= v\right\}\cap\left\{x\in[0,1]:\hat{\mathcal{V}}_\beta(x,x_0)=\hat{v}\right\}\right).$$ The H\"{o}lder exponent of the restriction of the map $\rho_{\beta_1}$ on $\pi_{\beta_1}(\mathcal{S}_{\beta_1}\cap\mathcal{D}_{v,\hat{v}})$ equals $\log\beta_0/\log\beta_1$. Since $\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)\subseteq\rho_{\beta_1}(\pi_{\beta_1}(\mathcal{S}_{\beta_1}\cap\mathcal{D}_{v,\hat{v}}))$, $${\rm dim}_H\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)\leq{\rm dim}_H\rho_{\beta_1}(\pi_{\beta_1}(\mathcal{S}_{\beta_1}\cap\mathcal{D}_{v,\hat{v}}))\leq\dfrac{\log\beta_1}{\log\beta_0}{\rm dim}_H\pi_{\beta_1}(\mathcal{S}_{\beta_1}\cap\mathcal{D}_{v,\hat{v}}).$$ By Theorem \ref{A}, letting $\beta_1$ tend to $\beta_0$, if $v<\hat{v}/(1-\hat{v})$, the set $\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)$ is empty. Otherwise, $${\rm dim}_H\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)\leq\dfrac{v-\hat{v}-v\hat{v}}{(1+v)(v-\hat{v})}.$$ \paragraph{{\bf Lower bound}} Take $\beta_2$ with $1<\beta_0<\beta_1<\beta_2$ such that the $\beta_2$-expansion of $1$ ends with zeros. 
Thus, the $\beta$-shift $\Sigma_{\beta_2}$ is a subshift of finite type. Bugeaud and Liao gave a way to calculate the lower bound of the Hausdorff dimension of $\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)$. \begin{Theorem}(\cite[Theorem 5.1]{YLiao2016})\label{lowb} Let $1<\beta_0<\beta_1<\beta_2$ be real numbers. For any $v\in(0,\infty)$ and any $\hat{v}\in(0,1)$ with $v\geq\hat{v}/(1-\hat{v})$, one has $${\rm dim}_H\rho^{-1}_{\beta_2}\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)\geq\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}\cdot\dfrac{\log\beta_1}{\log\beta_2}.$$ \end{Theorem} From Theorem \ref{lowb} and Persson and Schmeling \cite[Theorem 14]{PS08}, we have $${\rm dim}_H\mathcal{L}_{v,\hat{v}}(\beta_0,\beta_1)\geq\dfrac{v-\hat{v}-v\cdot\hat{v}}{(1+v)(v-\hat{v})}\cdot\dfrac{\log\beta_1}{\log\beta_2}.$$ Letting $\beta_2$ tend to $\beta_1$, we obtain the lower bound. Thus, we complete the proof of Theorem \ref{C}. The proof of Theorem \ref{D} follows along the same lines as that of Theorem \ref{B}; we omit the details.
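As a quick numerical sanity check (not part of the proofs), the maximization step used in the proof of Theorem \ref{B}, namely that the bound $\frac{v-\hat{v}-v\hat{v}}{(1+v)(v-\hat{v})}$ attains its maximum $\left(\frac{1-\hat{v}}{1+\hat{v}}\right)^2$ at $v=2\hat{v}/(1-\hat{v})$, can be verified with the following minimal Python sketch; the sample value $\hat{v}=1/4$ is arbitrary.

```python
def dim_upper(v, vhat):
    # Hausdorff-dimension upper bound (v - vhat - v*vhat) / ((1+v)(v - vhat)) from Theorem A
    return (v - vhat - v * vhat) / ((1.0 + v) * (v - vhat))

vhat = 0.25                                   # sample value of the exponent \hat{v}
v_star = 2 * vhat / (1 - vhat)                # claimed maximizer v = 2*vhat/(1 - vhat)
closed_form = ((1 - vhat) / (1 + vhat)) ** 2  # claimed maximum ((1 - vhat)/(1 + vhat))^2

# scan the admissible range v >= vhat/(1 - vhat), including the claimed maximizer itself
v_min = vhat / (1 - vhat) + 1e-6
grid = [v_min + 0.001 * i for i in range(50000)] + [v_star]
max_on_grid = max(dim_upper(v, vhat) for v in grid)
```

The same check goes through for other values of $\hat{v}\in(0,1)$.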
\section{Introduction} \label{sec_introduction} A recent area of research in inverse problems has been the development and study of new eigenvalue problems arising from scattering theory (cf. \cite{audibert_cakoni_haddar,audibert_chesnel_haddar,audibert_chesnel_haddar2019,cakoni_colton_meng_monk,camano_lackner_monk,cogar,cogar2019,cogar2020,cogar_colton_meng_monk,cogar_colton_monk,cogar_colton_monk2019}). In addition to generating mathematically interesting problems, the study of the resulting eigenvalues may find potential application in nondestructive testing of materials. In particular, eigenvalues could potentially be used as a target signature in order to characterize a scattering medium and indicate when it has experienced some perturbation of its material coefficients. An early example is the use of transmission eigenvalues, and we refer to \cite{cakoni_colton_haddar} for a comprehensive treatment of the subject. Transmission eigenvalues carry information about the medium of interest and noticeably shift in response to changes in the medium, but their detection from scattering data requires multifrequency data. In addition, only real transmission eigenvalues can be detected, and for an absorbing medium no real transmission eigenvalues exist. \par In order to remedy these shortcomings, new eigenvalue problems have been generated by comparing the measured scattering data to that of an auxiliary scattering problem that depends on a parameter and is entirely artificial, i.e. it does not depend upon the physical medium of interest. If we seek values of this parameter for which the measured and auxiliary scattering data might coincide for certain types of incident fields, then we arrive at an eigenvalue problem dependent upon the material coefficients of the medium in which this parameter serves as the eigenvalue. Each choice of auxiliary scattering problem produces a new eigenvalue problem that may potentially be used to detect flaws in a medium. The first example of this approach in \cite{cakoni_colton_meng_monk} featured auxiliary data from an exterior impedance problem with parameter $\lambda$, which resulted in the well-known Stekloff eigenvalue problem. 
Through a series of numerical examples in two dimensions the authors demonstrated that Stekloff eigenvalues may be detected from measured scattering data and that they shift in response to changes in the refractive index of the medium. \par The sensitivity of Stekloff eigenvalues to changes in the scattering medium was not always found to be significant, and this observation led the authors of \cite{cogar_colton_meng_monk} to choose the auxiliary data as that of scattering by an inhomogeneous medium that depends upon an additional fixed parameter $\gamma$. The resulting eigenvalue problem has a similar form to the standard transmission eigenvalue problem, but the structure and techniques used to analyze it were of a different nature. The authors demonstrated that in many cases the fixed parameter $\gamma$ may be tuned in order to increase the sensitivity of the so-called modified transmission eigenvalues to changes in the medium, often with an increase of an order of magnitude. Variations of this idea were explored in \cite{cogar,cogar2019,cogar_colton_monk}. \par Both of the problems mentioned above related to acoustic scattering, but in \cite{camano_lackner_monk} this approach was first applied to electromagnetic scattering, which is the context of our current investigation. This first foray into generating electromagnetic eigenvalue problems again saw the choice of auxiliary data arising from an exterior impedance problem, but the resulting eigenvalue problem lacked the same solvability properties as its acoustic counterpart. In particular, the authors used a simple example to show that the eigenvalues could no longer correspond to those of a compact operator, which would prove problematic in the standard approach to establishing solvability results of the associated electromagnetic Stekloff eigenvalue problem (an issue later overcome in \cite{halla2,halla1} using $T$-coercivity). 
Recognizing that the auxiliary problem could be changed at will, the authors modified the boundary condition of the auxiliary problem in order to remove the degenerate branch of eigenvalues and obtained a well-behaved eigenvalue problem. Through the numerical examples in \cite{camano_lackner_monk,cogar_colton_monk2019}, it has been shown that this generalization of electromagnetic Stekloff eigenvalues is sensitive to changes in the electromagnetic properties of a medium, but like the acoustic case this shift is not always significant. \par As in the acoustic case, this observation leads us to consider an auxiliary problem that depends on a fixed tuning parameter $\gamma$. The obvious first choice is the electromagnetic version of the problem considered in \cite{cogar_colton_meng_monk}, which represents electromagnetic scattering by a medium with constant electric permittivity $\eta$ and magnetic permeability $\gamma$. By similar reasoning to \cite{camano_lackner_monk} and \cite{cogar_colton_meng_monk} this choice results in the modified transmission eigenvalue problem in which we seek $\eta\in\mathbb{C}$ and a nonzero pair $(\mathbf{w},\mathbf{v})$ such that \begin{subequations} \label{mp} \begin{align} \curl\curl\mathbf{w} - k^2\epsilon\mathbf{w} &= \mathbf{0} \text{ in } B, \label{mp1} \\ \curl\gamma^{-1}\curl\mathbf{v} - k^2\eta\mathbf{v} &= \mathbf{0} \text{ in } B, \label{mp2} \\ \un\times(\mathbf{w} - \mathbf{v}) &= \mathbf{0} \text{ on } \partial B, \label{mp3} \\ \un\times(\curl\mathbf{w} - \gamma^{-1}\curl\mathbf{v}) &= \mathbf{0} \text{ on } \partial B, \label{mp4} \end{align} \end{subequations} where $\epsilon$ is the relative electric permittivity of the physical medium, $k>0$ is the wave number, and $B$ is a Lipschitz domain in $\mathbb{R}^3$ containing the support of $1-\epsilon$. 
We will provide more assumptions on these quantities in the next section, but for now we examine the structure of the eigenvalues of \eqref{mp} in the simple case where $B$ is the unit ball in $\mathbb{R}^3$ and $\epsilon$ is a constant in $B$. \par In this case we may use separation of variables in order to solve \eqref{mp} in a similar manner to \cite{camano_lackner_monk}, and the result is that $\eta\neq0$ is an eigenvalue if and only if for some $n\in\mathbb{N}_0$ it is a zero of one of the determinant functions \begin{subequations} \label{det} \begin{align} d_n^{(a)}(\eta) &= \left(1-\gamma^{-1}\right)j_n(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) + k\sqrt{\epsilon} j_n'(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) - k\sqrt{\gamma^{-1}\eta} j_n(k\sqrt{\epsilon}) j_n'(k\sqrt{\gamma\eta}), \label{det_a} \\ d_n^{(b)}(\eta) &= (\eta-\epsilon)j_n(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) + k\eta\sqrt{\epsilon} j_n'(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) - k\epsilon\sqrt{\gamma\eta} j_n(k\sqrt{\epsilon}) j_n'(k\sqrt{\gamma\eta}), \label{det_b} \end{align} \end{subequations} where $j_n$ is the spherical Bessel function of the first kind of order $n$. Unlike in \cite{camano_lackner_monk}, we cannot simply solve for the eigenvalues in this case, as they are roots of a family of transcendental functions. Thus, we instead provide a plot of these roots in Figure \ref{fig_sov1} for $k = 1$, $\epsilon = 2$, and $\gamma = 0.5$. \begin{figure} \begin{center} \includegraphics[scale=0.5]{figure1.png} \caption{The first few eigenvalues of \eqref{mp} computed using separation of variables as the roots of the determinant functions defined in \eqref{det}. The roots of $d_n^{(a)}$ and $d_n^{(b)}$ are marked by a red $+$ symbol and a blue $\times$ symbol, respectively. 
The vertical dashed line marks the constant value of the permittivity $\epsilon$.} \label{fig_sov1} \end{center} \end{figure} We see from Figure \ref{fig_sov1} that the roots of the family $\{d_n^{(a)}\}$ appear to diverge towards $+\infty$, whereas the roots of $\{d_n^{(b)}\}$ do not. In \cite{chesnel} it is shown that the set of standard transmission eigenvalues for Maxwell's equations is discrete without finite accumulation point whenever $\epsilon-1$ is bounded away from zero. Performing the same calculations in the present case implies the same result for the eigenvalues of \eqref{mp} whenever $\epsilon-\eta$ is bounded away from zero, i.e. the eigenvalues are discrete without finite accumulation point in the domain $\mathbb{C}\setminus\{\epsilon\}$ in the case of constant $\epsilon$. From this result we see that the only possible finite accumulation point in this case is $\eta=2$, and we observe this accumulation of the case (b) eigenvalues in Figure \ref{fig_sov1}. \par We have thus encountered the same difficulty as in \cite{camano_lackner_monk}, in which the eigenvalues accumulate at both infinity and some finite point. The usual approach in studying this type of eigenvalue problem is to define a solution operator $\Psi$ whose spectrum is related to the eigenvalues of \eqref{mp} and to prove that $\Psi$ is compact. However, the spectral theorem for compact operators implies that the eigenvalues of $\Psi$ must accumulate only at zero, and hence the eigenvalues of \eqref{mp} may only accumulate at infinity. Therefore, our numerical evidence suggests that the eigenvalues of \eqref{mp} cannot be related to the spectrum of a compact operator, and we have lost one of our main analytical tools. While we note that techniques similar to those used in \cite{halla2,halla1} for the unmodified electromagnetic Stekloff eigenvalue problem might be applied to analyze \eqref{mp}, it is not immediately clear how to do so. 
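To make the computation behind Figure \ref{fig_sov1} concrete, the following minimal Python sketch (an illustration, not the code used to produce the figure) locates the first real roots of $d_0^{(a)}$ and $d_0^{(b)}$ from \eqref{det} for the parameter values $k=1$, $\epsilon=2$, $\gamma=0.5$ of the text, using the closed form $j_0(x)=\sin x/x$ and simple bisection on sign changes.

```python
import math

# Parameters from the text's example: k = 1, epsilon = 2, gamma = 0.5.
k, eps, gam = 1.0, 2.0, 0.5

def j0(x):
    # spherical Bessel function of the first kind, order 0
    return math.sin(x) / x

def dj0(x):
    # derivative of j0
    return math.cos(x) / x - math.sin(x) / x**2

def d0_a(eta):
    # determinant function d_0^{(a)} from (det_a), restricted to n = 0
    s, t = k * math.sqrt(eps), k * math.sqrt(gam * eta)
    return ((1 - 1 / gam) * j0(s) * j0(t) + s * dj0(s) * j0(t)
            - k * math.sqrt(eta / gam) * j0(s) * dj0(t))

def d0_b(eta):
    # determinant function d_0^{(b)} from (det_b), restricted to n = 0
    s, t = k * math.sqrt(eps), k * math.sqrt(gam * eta)
    return ((eta - eps) * j0(s) * j0(t) + eta * s * dj0(s) * j0(t)
            - eps * t * j0(s) * dj0(t))

def sign_change_roots(f, lo, hi, steps=2000):
    # scan [lo, hi] for sign changes and refine each bracket by bisection
    roots = []
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:
            for _ in range(80):
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

roots_a = sign_change_roots(d0_a, 0.05, 60.0)
roots_b = sign_change_roots(d0_b, 0.05, 60.0)
```

This reproduces only the $n=0$ roots; the higher-order curves in Figure \ref{fig_sov1} require $j_n$ and $j_n'$ for $n\geq1$, which are available, for instance, as \texttt{scipy.special.spherical\_jn}.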
As a consequence, properties of these eigenvalues are currently unknown. \par This observation motivates us to consider a different eigenvalue problem that will explicitly force compactness of the resulting solution operator $\Psi$, in which we seek $\eta\in\mathbb{C}$ and a nonzero triple $(\mathbf{w},\mathbf{v},p)$ such that \begin{subequations} \label{modp} \begin{align} \curl\curl\mathbf{w} - k^2\epsilon\mathbf{w} &= \mathbf{0} \text{ in } B, \label{modp1} \\ \curl\gamma^{-1}\curl\mathbf{v} - k^2\eta\mathbf{v} + k^2\nabla p &= \mathbf{0} \text{ in } B, \label{modp2} \\ \div\mathbf{v} &= 0 \text{ in } B, \label{modp3} \\ \un\cdot\mathbf{v} &= 0 \text{ on } \partial B, \label{modp4} \\ \un\times(\mathbf{w} - \mathbf{v}) &= \mathbf{0} \text{ on } \partial B, \label{modp5} \\ \un\times(\curl\mathbf{w} - \gamma^{-1}\curl\mathbf{v}) &= \mathbf{0} \text{ on } \partial B. \label{modp6} \end{align} \end{subequations} Whereas this goal was accomplished in \cite{camano_lackner_monk} by essentially removing the problematic branch of eigenvalues corresponding to case (b), the new problem \eqref{modp} only modifies this branch of eigenvalues. Solving the problem using the separation of variables approach in the Appendix shows that the branch corresponding to case (a) is unchanged, but the branch corresponding to case (b) is dramatically different. In particular, we no longer observe a sequence of eigenvalues converging to a finite point. In fact, the smallest eigenvalue for case (b) is approximately $\eta = 18.317$. The outline of this paper is as follows. In Section \ref{sec_aux} we introduce the physical scattering problem of interest and the auxiliary problem that we will use in order to generate the eigenvalue problem \eqref{modp}, and we establish that the auxiliary problem is well-posed. The goal of Section \ref{sec_mitp} is to prove a solvability result for a nonhomogeneous version of \eqref{modp} that will allow us to study the properties of the eigenvalues. 
We begin this investigation in Section \ref{sec_props} by showing that the eigenvalues are discrete without finite accumulation point, and we establish that eigenvalues exist whenever $\epsilon$ is real-valued. In Section \ref{sec_determine} we show that the eigenvalues may be determined from measured scattering data using the linear sampling method. Section \ref{sec_numerics} is devoted to the presentation of a simple numerical example for scattering by a ball in order to illustrate the method. We conclude with a discussion of some open questions and potential avenues of research in Section \ref{sec_conclusion}, followed by a short appendix that provides some details regarding the separation of variables procedure mentioned above. \section{The physical and auxiliary scattering problems} \label{sec_aux} Given an incident electric field $\mathbf{E}^i$ which satisfies the free-space Maxwell's equations in $\mathbb{R}^3$, we seek a scattered field $\mathbf{E}^s\in\Hcurlloc{\mathbb{R}^3\setminus\overline{D}}$ and a total field $\mathbf{E}\in\Hcurl{D}$ which satisfy the standard Maxwell system \begin{subequations} \label{sc} \begin{align} \curl\curl \mathbf{E}^s - k^2 \mathbf{E}^s &= \mathbf{0} \text{ in } \mathbb{R}^3\setminus\overline{D}, \label{sc1} \\ \curl\curl \mathbf{E} - k^2\epsilon \mathbf{E} &= \mathbf{0} \text{ in } D, \label{sc2} \\ \un\times\mathbf{E} - \un\times\mathbf{E}^s &= \un\times \mathbf{E}^i \text{ on } \partial D, \label{sc3} \\ \un\times\curl \mathbf{E} - \un\times\curl \mathbf{E}^s &= \un\times\curl \mathbf{E}^i \text{ on } \partial D, \label{sc4} \\ \mathclap{\lim_{r\to\infty} \left(\curl \mathbf{E}^s\times\mathbf{x} - ikr\mathbf{E}^s\right) = 0,} \label{sc5} \end{align} \end{subequations} where $\epsilon$ is the relative electric permittivity of the medium, $k>0$ is the wave number, $\overline{D}$ is the support of the contrast $\epsilon - 1$, and $\un$ is the outward unit normal vector of the boundary $\partial D$. 
We assume that $\epsilon = 1$ outside of a sufficiently large ball centered at the origin, which implies that $D$ is bounded. We also assume that $D$ is a Lipschitz domain with connected complement and that $\epsilon$ satisfies $\Re(\epsilon)\ge\epsilon_*>0$ and $\Im(\epsilon)\ge0$ a.e. in $D$. In order to permit the application of the unique continuation principle, we assume that $\epsilon|_D$ lies in the space \begin{equation*} W_{\Sigma}^{1,\infty}(D) := \{\mu\in L^\infty(D) \mid \nabla(\mu|_{\Omega_i})\in\mathbf{L}^\infty(\Omega_i), \; i = 1,2,\dots,M\}, \end{equation*} where $\{\Omega_i\}_{i=1}^M$ is a partition of $D$. We refer to \eqref{sc} as the physical scattering problem, and under the assumptions given above this problem is well-posed for any incident field $\mathbf{E}^i$ (cf. \cite{colton_kress}). \par We now introduce an auxiliary scattering problem that will allow us to generate an eigenvalue problem that depends on the permittivity $\epsilon$, but we remark that the auxiliary problem itself is independent of this parameter. We choose a bounded Lipschitz domain $B\subset\mathbb{R}^3$ that contains $D$ (e.g. 
a ball or $B=D$), and we seek $\mathbf{E}_0^s\in \Hcurlloc{\mathbb{R}^3\setminus\overline{B}}$, $\mathbf{E}_0\in \Hcurl{B}$, and $P\in H_*^1(B) := H^1(B)/\mathbb{C}$ which satisfy \begin{subequations} \label{aux} \begin{align} \curl\curl \mathbf{E}_0^s - k^2 \mathbf{E}_0^s &= \mathbf{0} \text{ in } \mathbb{R}^3\setminus\overline{B}, \label{aux1} \\ \curl\gamma^{-1}\curl \mathbf{E}_0 - k^2\eta \mathbf{E}_0 + k^2\nabla P &= \mathbf{0} \text{ in } B, \label{aux2} \\ \div \mathbf{E}_0 &= 0 \text{ in } B, \label{aux3} \\ \un\cdot \mathbf{E}_0 &= 0 \text{ on } \partial B, \label{aux4} \\ \un\times\mathbf{E}_0 - \un\times\mathbf{E}_0^s &= \un\times \mathbf{E}^i \text{ on } \partial B, \label{aux5} \\ \un\times\gamma^{-1}\curl \mathbf{E}_0 - \un\times\curl \mathbf{E}_0^s &= \un\times\curl \mathbf{E}^i \text{ on } \partial B, \label{aux6} \\ \mathclap{\lim_{r\to\infty} \left(\curl \mathbf{E}_0^s\times\mathbf{x} - ikr\mathbf{E}_0^s\right) = 0,} \label{aux7} \end{align} \end{subequations} where $\gamma>0$ is a fixed constant and $\eta\in\mathbb{C}$ is the parameter of interest that will later serve as an eigenvalue. We would like to establish solvability of this nonstandard problem, and we begin by showing uniqueness of solutions whenever $\Im(\eta)\ge0$. \begin{theorem} \label{theorem_uniqueness} If $\Im(\eta)\ge0$, then there exists at most one solution of \eqref{aux} for a given incident field $\mathbf{E}^i$. \end{theorem} \begin{proof} By linearity it suffices to show that the only solution of \eqref{aux} corresponding to $\mathbf{E}^i = \mathbf{0}$ is $(\mathbf{E}_0^s,\mathbf{E}_0,P) = (\mathbf{0},\mathbf{0},0)$. Indeed, we suppose that $(\mathbf{E}_0^s,\mathbf{E}_0,P)$ is a solution for $\mathbf{E}^i = \mathbf{0}$, and we extend $\mathbf{E}_0$ as $\mathbf{E}_0 := \mathbf{E}_0^s$ in $\mathbb{R}^3\setminus\overline{B}$, which lies in $\Hcurlloc{\mathbb{R}^3}$ due to the boundary condition \eqref{aux5}. 
Setting $\mathbf{H}_0 := \frac{1}{ik}\curl\mathbf{E}_0$ and integrating by parts against $\overline{\mathbf{E}_0}$ in \eqref{aux1}, we see that \begin{equation*} \int_{B_R\setminus\overline{B}} \left( \abs{\curl\mathbf{E}_0}^s - k^2\abs{\mathbf{E}_0}^2 \right) dx - ik\int_{\partial B_R} \un\times\overline{\mathbf{E}_0}\cdot\mathbf{H}_0 ds - \int_{\partial B} (\un\times\curl\mathbf{E}_0)\cdot\overline{\mathbf{E}_{0,T}} ds = 0, \end{equation*} where $B_R$ is a ball centered at the origin chosen such that $\overline{B}\subset B_R$. If we integrate by parts in \eqref{aux2} in a similar manner and apply the transmission conditions \eqref{aux5}--\eqref{aux6}, then we obtain \begin{align*} \int_{B_R\setminus\overline{B}} \left( \abs{\curl\mathbf{E}_0}^2 - k^2\abs{\mathbf{E}_0}^2 \right) dx - ik\int_{\partial B_R} &\un\times\overline{\mathbf{E}_0}\cdot\mathbf{H}_0 ds \\ + \int_B \left(\gamma^{-1}\abs{\curl\mathbf{E}_0}^2 - k^2\eta\abs{\mathbf{E}_0}^2 \right) &dx + k^2\int_B \nabla P\cdot\overline{\mathbf{E}_0} dx = 0. \end{align*} The vanishing divergence and normal component of $\mathbf{E}_0$ required by \eqref{aux3}--\eqref{aux4} imply that the last integral on the left-hand side vanishes, and by taking the imaginary part of both sides it follows that \begin{equation*} \Re\int_{\partial B_R} \un\times\overline{\mathbf{E}_0}\cdot\mathbf{H}_0 ds = -k\Im(\eta)\int_B \abs{\mathbf{E}_0}^2 dx \le 0. \end{equation*} By Rellich's lemma (cf. \cite{colton_kress}) we see that $\mathbf{E}_0 = \mathbf{0}$ in $\mathbb{R}^3\setminus\overline{B}_R$, and the unique continuation principle for Maxwell's equations implies further that $\mathbf{E}_0 = \mathbf{0}$ in $\mathbb{R}^3\setminus\overline{B}$. 
In particular, we observe from the transmission conditions \eqref{aux5}--\eqref{aux6} that $\un\times\mathbf{E}_0 = \un\times\gamma^{-1}\curl\mathbf{E}_0 = \mathbf{0} \text{ on } \partial B$, and as a consequence we may integrate by parts in \eqref{aux2} against $\nabla\overline{P}$ to obtain \begin{equation*} k^2\int_B \abs{\nabla P}^2 dx - k^2\eta\int_B \mathbf{E}_0\cdot\nabla\overline{P} dx = 0. \end{equation*} From \eqref{aux3}--\eqref{aux4} we see that the second integral vanishes, which implies that $\nabla P = \mathbf{0}$ and hence $P = 0$ in $B$ since $P\in H_*^1(B)$. Finally, we see that $\mathbf{E}_0$ satisfies \begin{equation*} \curl{\tilde\gamma}^{-1}\curl\mathbf{E}_0 - k^2\tilde{\eta}\mathbf{E}_0 = \mathbf{0} \text{ in } \mathbb{R}^3, \end{equation*} where for any constant $\alpha$ we define $\tilde{\alpha} := \alpha$ in $B$ and $\tilde{\alpha} := 1$ elsewhere. Since $\mathbf{E}_0$ is identically zero in $\mathbb{R}^3\setminus\overline{B}$, the unique continuation principle implies that $\mathbf{E}_0 = \mathbf{0}$ in $B$ as well, and we conclude that $(\mathbf{E}_0^s,\mathbf{E}_0,P) = (\mathbf{0},\mathbf{0},0)$. \qed\hfill $\Box$ \end{proof} We now aim to show that \eqref{aux} is well-posed whenever $\Im(\eta)\ge0$, and in particular we show that this problem is of Fredholm type, i.e. existence of solutions follows from uniqueness. In the following remark we introduce two modifications of the problem that will allow us to derive an equivalent variational formulation of \eqref{aux}. \begin{remark} \label{remark_mod1} \textnormal{ First, we choose $\boldsymbol{\varphi}\in \Hcurlloc{\mathbb{R}^3\setminus\overline{B}}$ to be the unique radiating solution of \begin{align*} \curl\curl\boldsymbol{\varphi} - k^2\boldsymbol{\varphi} &= \mathbf{0} \text{ in } \mathbb{R}^3\setminus\overline{B}, \\ \un\times\boldsymbol{\varphi} &= \un\times\mathbf{E}^i \text{ on } \partial B. 
\end{align*} By the well-posedness of this standard problem there exists a constant $C_K$ independent of $\mathbf{E}^i$ such that \begin{equation*} \norm{\boldsymbol{\varphi}}_{\Hcurl{K})} \le C_K\norm{\un\times\mathbf{E}^i}_{\mathbf{H}^{-1/2}(\Div,\partial B)}, \end{equation*} where $K$ is any bounded subset of $\mathbb{R}^3\setminus\overline{B}$. We write $\mathbf{u} := \mathbf{E}_0^s + \boldsymbol{\varphi}$, and we observe from the boundary conditions \eqref{aux5}--\eqref{aux6} that \begin{align*} \un\times\mathbf{E}_0 - \un\times\mathbf{u} &= \mathbf{0} \text{ on } \partial B, \\ \un\times\gamma^{-1}\curl \mathbf{E}_0 - \un\times\curl \mathbf{u} &= \un\times\curl \mathbf{E}^i - \un\times\curl\boldsymbol{\varphi} \text{ on } \partial B. \end{align*} Second, we choose $\zeta\in H_*^1(B)$ to be the unique solution of \begin{align*} \Delta\zeta &= 0 \text{ in } B, \\ \normal{\zeta} &= k^{-2}\nabla_{\partial B}\cdot\left( \un\times\curl \mathbf{E}^i - \un\times\curl\boldsymbol{\varphi} \right) \text{ on } \partial B, \end{align*} where $\nabla_{\partial B}\cdot$ denotes the surface divergence on $\partial B$, and we remark that in a similar manner there exists a constant $C>0$ independent of $\mathbf{E}^i$ and $\boldsymbol{\varphi}$ such that \begin{equation*} \norm{\nabla\zeta}_B \le C\norm{\nabla_{\partial B}\cdot\left( \un\times\curl \mathbf{E}^i - \un\times\curl\boldsymbol{\varphi} \right)}_{H^{-1/2}(\partial B)}. \end{equation*} We write $p := P-\zeta$ in $B$, and we see from \eqref{aux2} that \begin{equation*} \curl\gamma^{-1}\curl\mathbf{E}_0 - k^2\eta\mathbf{E}_0 + k^2\nabla p = -k^2\nabla\zeta \text{ in } B. 
\end{equation*} The reason for this modification is to guarantee that certain relationships between the solution fields are homogeneous in our upcoming analysis.} \end{remark} In addition to the modifications from Remark \ref{remark_mod1}, we formulate \eqref{aux} on a bounded domain $B_R$ using the electric-to-magnetic Calder{\'o}n operator $G_e:\mathbf{H}^{-1/2}(\Div,\partial B_R)\to\mathbf{H}^{-1/2}(\Div,\partial B_R)$, which maps the tangential component of the electric field on $\partial B_R$ to the tangential component of the magnetic field on $\partial B_R$ that arises from the unique radiating solution of the free-space Maxwell's equations in the exterior domain $\mathbb{R}^3\setminus\overline{B_R}$. We refer to \cite{monk} for details on this operator. This operator serves to replace the equation and radiation condition for the scattered electric field in the exterior domain $\mathbb{R}^3\setminus\overline{B_R}$ with a boundary condition on $\partial B_R$. \par If we write $\mathbf{v} := \mathbf{E}_0$ for convenience, then an equivalent formulation of \eqref{aux} is to seek $\mathbf{u}\in\Hcurl{B_R\setminus\overline{B}}$, $\mathbf{v}\in\Hcurl{B}$, and $p\in H_*^1(B)$ which satisfy \begin{subequations} \label{auxb} \begin{align} \curl\curl \mathbf{u} - k^2 \mathbf{u} &= \mathbf{0} \text{ in } B_R\setminus\overline{B}, \label{auxb1} \\ \curl\gamma^{-1}\curl \mathbf{v} - k^2\eta \mathbf{v} + k^2\nabla p &= -k^2\nabla\zeta \text{ in } B, \label{auxb2} \\ \div \mathbf{v} &= 0 \text{ in } B, \label{auxb3} \\ \un\cdot \mathbf{v} &= 0 \text{ on } \partial B, \label{auxb4} \\ \un\times\mathbf{v} - \un\times\mathbf{u} &= \mathbf{0} \text{ on } \partial B, \label{auxb5} \\ \un\times\gamma^{-1}\curl \mathbf{v} - \un\times\curl \mathbf{u} &= \mathbf{h} \text{ on } \partial B, \label{auxb6} \\ \un\times\curl\mathbf{u} &= ik G_e(\un\times\mathbf{u}) \text{ on } \partial B_R, \label{auxb7} \end{align} \end{subequations} where $\mathbf{h} := \un\times\curl\mathbf{E}^i - 
\un\times\curl\boldsymbol{\varphi}\in \mathbf{H}^{-1/2}(\Div,\partial B)$. In order to study an equivalent variational formulation of \eqref{auxb} we introduce the space \begin{equation*} \boldsymbol{\mathcal{X}} := \left\{(\mathbf{u},\mathbf{v},p)\in\Hcurl{B_R\setminus\overline{B}}\times\Hcurl{B}\times H_*^1(B) \;\middle|\; \un\times\mathbf{u}-\un\times\mathbf{v} = \mathbf{0} \text{ on } \partial B\right\}, \end{equation*} equipped with the usual inner product $(\cdot,\cdot)_{\boldsymbol{\mathcal{X}}}$ and induced norm $\norm{\cdot}_{\boldsymbol{\mathcal{X}}}$ inherited from the component spaces. If $(\mathbf{u},\mathbf{v},p)$ satisfies \eqref{auxb} and we integrate by parts in \eqref{auxb1}--\eqref{auxb3} against the test function components $(\mathbf{u}',\mathbf{v}',p')\in \boldsymbol{\mathcal{X}}$, then we obtain \begin{align*} (\curl\mathbf{u},\curl\mathbf{u}')_{B_R\setminus\overline{B}} - k^2(\mathbf{u},\mathbf{u}')_{B_R\setminus\overline{B}} + \inner{\un\times\curl\mathbf{u}}{\mathbf{u}_T'}_{\partial B_R} - \inner{\un\times\curl\mathbf{u}}{\mathbf{u}_T'}_{\partial B} &= 0, \\ (\curl\mathbf{v},\curl\mathbf{v}')_B - k^2\eta(\mathbf{v},\mathbf{v}')_B + \inner{\un\times\gamma^{-1}\curl\mathbf{v}}{\mathbf{v}_T'}_{\partial B} + k^2(\nabla p,\mathbf{v}')_B &= -k^2(\nabla\zeta,\mathbf{v}')_B, \\ (\mathbf{v},\nabla p')_B &= 0, \end{align*} where for a Lipschitz domain $\mathcal{O}\subseteq\mathbb{R}^3$ with boundary $\partial\mathcal{O}$ we denote by $(\cdot,\cdot)_\mathcal{O}$ the inner product on $\mathbf{L}^2(\mathcal{O})$ and by $\inner{\cdot}{\cdot}_{\partial\mathcal{O}}$ the duality pairing of $\mathbf{H}^{-1/2}(\Div,\partial\mathcal{O})$ and $\mathbf{H}^{-1/2}(\Curl,\partial\mathcal{O})$ (with the second argument conjugated). 
In some instances we will also use $\inner{\cdot}{\cdot}_{\partial\mathcal{O}}$ to denote the duality pairing of $H^{-1/2}(\partial\mathcal{O})$ and $H^{1/2}(\partial\mathcal{O})$ for scalar functions, and we will sometimes use the shorthand $\norm{\cdot}_B$ to represent the norms $\norm{\cdot}_{L^2(B)}$ and $\norm{\cdot}_{\mathbf{L}^2(B)}$. The combination of these equations along with the boundary conditions yields a variational problem in which we seek $(\mathbf{u},\mathbf{v},p)\in \boldsymbol{\mathcal{X}}$ satisfying \begin{equation} a((\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p')) = \ell(\mathbf{u}',\mathbf{v}',p') \quad\forall(\mathbf{u}',\mathbf{v}',p')\in \boldsymbol{\mathcal{X}}, \label{varprob_aux} \end{equation} where the bounded sesquilinear form $a(\cdot,\cdot)$ is defined by \begin{align*} a((\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p')) &:= (\curl\mathbf{u},\curl\mathbf{u}')_{B_R\setminus\overline{B}} + \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B - k^2(\mathbf{u},\mathbf{u}')_{B_R\setminus\overline{B}} \\ &\quad\quad\quad- k^2\eta(\mathbf{v},\mathbf{v}')_B + k^2(\nabla p,\mathbf{v}')_B + k^2(\mathbf{v},\nabla p')_B \\ &\quad\quad\quad+ ik\inner{G_e(\un\times\mathbf{u})}{\mathbf{u}_T'}_{\partial B_R} \quad\forall(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p')\in \boldsymbol{\mathcal{X}}, \end{align*} and the bounded antilinear functional $\ell$ is defined by \begin{equation*} \ell(\mathbf{u}',\mathbf{v}',p') := -k^2(\nabla\zeta,\mathbf{v}')_B - \inner{\mathbf{h}}{\mathbf{v}_T'}_{\partial B} \quad\forall(\mathbf{u}',\mathbf{v}',p')\in \boldsymbol{\mathcal{X}}. 
\end{equation*} We now investigate the properties of solutions of \eqref{varprob_aux}, and we begin by introducing the space \begin{equation*} S := \left\{ (\varphi,\psi,q)\in H^1(B_R\setminus\overline{B})\times H^1(B)\times H_*^1(B) \;\middle|\; \begin{array}{c} \varphi = \psi \text{ on } \partial B \\ \inner{\varphi}{1}_{\partial B} = \inner{\psi}{1}_{\partial B} = 0 \end{array} \right\}. \end{equation*} For any $(\varphi,\psi,q)\in S$ it follows that $\un\times\nabla\varphi - \un\times\nabla\psi = 0$ on $\partial B$ since $\varphi = \psi$ on $\partial B$, and as a result we have $(\nabla\varphi,\nabla\psi,q)\in \boldsymbol{\mathcal{X}}$. With $(\mathbf{u}',\mathbf{v}',p) = (\nabla\varphi,\nabla\psi,q)$ we see that any solution $(\mathbf{u},\mathbf{v},p)$ of \eqref{varprob_aux} must satisfy \begin{align} \begin{split} k^2(\mathbf{u},&\nabla\varphi)_{B_R\setminus\overline{B}} + k^2\eta(\mathbf{v},\nabla\psi)_B - k^2(\nabla p,\nabla\psi)_B - k^2(\mathbf{v},\nabla q)_B \\ &\quad- ik\inner{G_e(\un\times\mathbf{u})}{\nabla_{\partial B_R}\varphi}_{\partial B_R} = k^2(\nabla\zeta,\nabla\psi)_B + \inner{\mathbf{h}}{\nabla_{\partial B}\psi}_{\partial B}. \label{varS} \end{split} \end{align} We first observe that by choosing $\varphi = 0$ and $\psi = 0$ we have \begin{equation*} (\mathbf{v},\nabla q)_B = 0 \quad\forall q\in H_*^1(B), \end{equation*} which reflects the conditions \eqref{auxb3}--\eqref{auxb4} in the reformulated auxiliary problem. By choosing $\varphi\in H_0^1(B_R\setminus\overline{B})$, $\psi = 0$, and $q=0$ we have \begin{equation*} (\mathbf{u},\nabla\varphi)_{B_R\setminus\overline{B}} = 0 \quad\forall\varphi\in H_0^1(B_R\setminus\overline{B}), \end{equation*} and by instead choosing $\varphi = 0$, $\psi\in H_0^1(B)$, and $q=0$ we observe that \begin{equation*} (\nabla p,\nabla\psi)_B = -(\nabla\zeta,\nabla\psi)_B = 0 \quad\forall\psi\in H_0^1(B) \end{equation*} since $\Delta\zeta = 0$ in $B$ by construction. 
Thus, it follows that \begin{equation*} \div\mathbf{u} = 0 \text{ in } B_R\setminus\overline{B} \text{ and } \Delta p = 0 \text{ in } B, \end{equation*} and applying the divergence theorem yields \begin{align*} (\mathbf{u},\nabla\varphi)_{B_R\setminus\overline{B}} &= \inner{\un\cdot\mathbf{u}}{\varphi}_{\partial B_R} - \inner{\un\cdot\mathbf{u}}{\varphi}_{\partial B}, \\ (\nabla p,\nabla\psi)_B &= \inner{\normal{p}}{\psi}_{\partial B}, \\ (\nabla\zeta,\nabla\psi)_B &= k^{-2}\inner{\nabla_{\partial B}\cdot\mathbf{h}}{\psi}_{\partial B}, \end{align*} where the last equation follows from the definition of $\mathbf{h}$ and the construction of $\zeta$. If we substitute these equations into \eqref{varS} and use the definition of the surface divergence (cf. \cite{monk}), then we see that \begin{equation*} \inner{\un\cdot\mathbf{u} - \frac{1}{ik}\nabla_{\partial B_R}\cdot G_e(\un\times\mathbf{u})}{\varphi}_{\partial B_R} - \inner{\un\cdot\mathbf{u} + \normal{p}}{\psi}_{\partial B} = 0. \end{equation*} Choosing $\psi = 0$ in $B$ yields \begin{equation} \un\cdot\mathbf{u} - \frac{1}{ik}\nabla_{\partial B_R}\cdot G_e(\un\times\mathbf{u}) = 0 \text{ on } \partial B_R, \label{eq_X01} \end{equation} and choosing $\varphi$ such that $\varphi = 0$ near $\partial B_R$ yields \begin{equation} \un\cdot\mathbf{u} + \normal{p} = 0 \text{ on } \partial B. \label{eq_X02} \end{equation} We introduced $\zeta$ in Remark \ref{remark_mod1} in order to ensure that \eqref{eq_X01} and \eqref{eq_X02} are homogeneous and hence may be used to define a subspace of $\boldsymbol{\mathcal{X}}$. 
In particular, we define the solution space \begin{equation*} \boldsymbol{\mathcal{X}}_0 := \left\{ (\mathbf{u},\mathbf{v},p)\in \boldsymbol{\mathcal{X}} \;\middle|\; \begin{array}{c} \div\mathbf{u} = 0 \text{ in } B_R\setminus\overline{B}, \,\div\mathbf{v} = 0 \text{ in } B, \,\Delta p = 0 \text{ in } B, \\ \un\cdot\mathbf{u} - \frac{1}{ik}\nabla_{\partial B_R}\cdot G_e(\un\times\mathbf{u}) = 0 \text{ on } \partial B_R, \\ \un\cdot\mathbf{u}+\normal{p} = 0 \text{ on } \partial B, \,\un\cdot\mathbf{v} = 0 \text{ on } \partial B \end{array} \right\}, \end{equation*} equipped with the same inner product and norm as $\boldsymbol{\mathcal{X}}$, and we observe that \eqref{varprob_aux} may be equivalently posed with $\boldsymbol{\mathcal{X}}_0$ in place of $\boldsymbol{\mathcal{X}}$. A necessary ingredient to establishing the Fredholm property of \eqref{varprob_aux} is a compactness result for the space $\boldsymbol{\mathcal{X}}_0$, and our main tool is the following theorem (cf. \cite[Theorem 2]{costabel} and \cite[Theorem 3.47]{monk}). \begin{theorem} \label{theorem_costabel} Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^3$, and let $\mathbf{u}\in\Hcurl{\Omega}\cap\Hdiv{\Omega}$. 
Then $\un\times\mathbf{u}\in\mathbf{L}_t^2(\partial \Omega)$ if and only if $\un\cdot\mathbf{u}\in L^2(\partial \Omega)$, and in either case we have $\mathbf{u}\in\mathbf{H}^{1/2}(\Omega)$ and the estimates \begin{subequations} \begin{align} \norm{\un\times\mathbf{u}}_{\mathbf{L}_t^2(\partial\Omega)} &\le C\Bigr( \norm{\mathbf{u}}_{\mathbf{L}^2(\Omega)} + \norm{\curl\mathbf{u}}_{\mathbf{L}^2(\Omega)} + \norm{\div\mathbf{u}}_{L^2(\Omega)} + \norm{\un\cdot\mathbf{u}}_{L^2(\partial\Omega)} \Bigr), \label{costabel1} \\ \norm{\un\cdot\mathbf{u}}_{L^2(\partial\Omega)} &\le C\Bigr( \norm{\mathbf{u}}_{\mathbf{L}^2(\Omega)} + \norm{\curl\mathbf{u}}_{\mathbf{L}^2(\Omega)} + \norm{\div\mathbf{u}}_{L^2(\Omega)} + \norm{\un\times\mathbf{u}}_{\mathbf{L}_t^2(\partial\Omega)} \Bigr), \label{costabel2} \\ \norm{\mathbf{u}}_{\mathbf{H}^{1/2}(\Omega)} &\le C\Bigr( \norm{\mathbf{u}}_{\mathbf{L}^2(\Omega)} + \norm{\curl\mathbf{u}}_{\mathbf{L}^2(\Omega)} + \norm{\div\mathbf{u}}_{L^2(\Omega)} + \norm{\un\times\mathbf{u}}_{\mathbf{L}_t^2(\partial\Omega)} \Bigr). \label{costabel3} \end{align} \end{subequations} \end{theorem} \begin{theorem} \label{theorem_X0} The space $\boldsymbol{\mathcal{X}}_0$ is compactly embedded into $\boldsymbol{\mathcal{L}} := \mathbf{L}^2(B_R\setminus\overline{B})\times\mathbf{L}^2(B)\times H_*^1(B)$. \end{theorem} \begin{proof} Let $\{(\mathbf{u}_j,\mathbf{v}_j,p_j)\}_{j\in\mathbb{N}}$ be a bounded sequence in $\boldsymbol{\mathcal{X}}_0$. We see in particular that $\{\mathbf{v}_j\}$ is a bounded sequence in $\Hcurl{B}$ with $\div\mathbf{v}_j = 0\in L^2(B)$ and $\un\cdot\mathbf{v}_j = 0 \in L^2(\partial B)$. Theorem \ref{theorem_costabel} then implies that $\{\mathbf{v}_j\}$ is a bounded sequence in $\mathbf{H}^{1/2}(B)$, and from \eqref{costabel1} it follows that $\{\un\times\mathbf{v}_j\}$ is bounded in $\mathbf{L}_t^2(\partial B)$. 
The compact embedding of $\mathbf{H}^{1/2}(B)$ into $\mathbf{L}^2(B)$ implies that we may extract a subsequence of $\{\mathbf{v}_j\}$ that converges in $\mathbf{L}^2(B)$, and we pass to the corresponding subsequence of $\{(\mathbf{u}_j,\mathbf{v}_j,p_j)\}_{j\in\mathbb{N}}$. We now turn our attention to the sequence $\{\mathbf{u}_j\}$, and we begin by choosing $\tilde{\mathbf{u}}_j\in\Hcurlloc{\mathbb{R}^3\setminus\overline{B_R}}$ to be the unique radiating solution of \begin{align*} \curl\curl\tilde{\mathbf{u}}_j - k^2\tilde{\mathbf{u}}_j &= \mathbf{0} \text{ in } \mathbb{R}^3\setminus\overline{B_R}, \\ \un\times\tilde{\mathbf{u}}_j &= \un\times\mathbf{u}_j \text{ on } \partial B_R. \end{align*} We observe that the extension \begin{equation*} \mathbf{u}_j^e := \left\{\begin{array}{ll} \mathbf{u}_j \text{ in } B_R\setminus\overline{B}, \\ \tilde{\mathbf{u}}_j \text{ in } \mathbb{R}^3\setminus\overline{B_R}, \end{array} \right. \end{equation*} lies in $\Hcurlloc{\mathbb{R}^3\setminus\overline{B}}$ since the tangential component is continuous across $\partial B_R$. We may also apply the relation (cf. \cite[Remark 3.32]{monk}) \begin{equation*} \nabla_{\partial B_R}\cdot(\un\times\boldsymbol{\xi}) = -\un\cdot(\curl\boldsymbol{\xi})|_{\partial B_R} \quad\forall\boldsymbol{\xi}\in\Hcurlloc{\mathbb{R}^3\setminus\overline{B_R}} \end{equation*} along with the definition of $\tilde{\mathbf{u}}_j$ in order to obtain \begin{equation*} \un\cdot\mathbf{u}_j = \frac{1}{ik}\nabla_{\partial B_R}\cdot G_e(\un\times\mathbf{u}_j) = \frac{1}{ik}\nabla_{\partial B_R}\cdot \left( \frac{1}{ik}\un\times\curl\tilde{\mathbf{u}}_j \right) = \frac{1}{k^2}\un\cdot\curl\curl\tilde{\mathbf{u}}_j = \un\cdot\tilde{\mathbf{u}}_j. \end{equation*} It follows that $\mathbf{u}_j^e\in\Hdiv{\mathbb{R}^3\setminus\overline{B}}$ with $\div\mathbf{u}_j^e = 0$ in $\mathbb{R}^3\setminus\overline{B}$. 
We consider a smooth cutoff function $\chi\in C_0^\infty(\mathbb{R}^3)$ such that $\chi = 1$ in $\overline{B_R}$, and we let $B_{\tilde{R}}$, $\tilde{R}>R$, be a ball centered at the origin that strictly contains the support of $\chi$. We see that each term $\chi\mathbf{u}_j^e$ satisfies \begin{align*} \un\times(\chi\mathbf{u}_j^e) &= \mathbf{0} \text{ on } \partial B_{\tilde{R}}, \\ \un\times(\chi\mathbf{u}_j^e) &= \un\times\mathbf{v}_j \text{ on } \partial B, \end{align*} which implies that the sequence $\{(\un\times(\chi\mathbf{u}_j^e))|_{\partial(B_{\tilde{R}}\setminus\overline{B})}\}$ is bounded in $\mathbf{L}_t^2(\partial(B_{\tilde{R}}\setminus\overline{B}))$. Another application of Theorem \ref{theorem_costabel} implies that $\{\mathbf{u}_j\}$ is bounded in $\mathbf{H}^{1/2}(B_R\setminus\overline{B})$ and allows us to extract a convergent subsequence of $\{\mathbf{u}_j\}$ in $\mathbf{L}^2(B_R\setminus\overline{B})$. We again pass to the corresponding subsequence of $\{(\mathbf{u}_j,\mathbf{v}_j,p_j)\}_{j\in\mathbb{N}}$. The estimate \eqref{costabel2} also implies that the sequence $\{\un\cdot\mathbf{u}_j\}$ is bounded in $L^2(\partial B)$, and from the definition of $\boldsymbol{\mathcal{X}}_0$ we see that $\left\{\un\cdot\nabla p_j\right\}$ is bounded in $L^2(\partial B)$ as well. Since $\curl\nabla p_j = 0$ and $\div\nabla p = \Delta p = 0$ in $B$, a final application of Theorem \ref{theorem_costabel} implies that $\{\nabla p_j\}$ is a bounded sequence in $\mathbf{H}^{1/2}(B)$, and we again extract a subsequence convergent in $\mathbf{L}^2(B)$. A simple argument shows that the limit of this sequence may be written as the gradient of a scalar potential, which implies that the corresponding subsequence of $\{p_j\}$ converges in $H_*^1(B)$. We conclude that there exists a subsequence of $\{(\mathbf{u}_j,\mathbf{v}_j,p_j)\}_{j\in\mathbb{N}}$ which converges in $\boldsymbol{\mathcal{L}}$. 
\qed\hfill $\Box$ \end{proof} \begin{remark} \label{remark_Ge} \textnormal{In order to decompose the sesquilinear form $a(\cdot,\cdot)$ into coercive and compact parts, we first require such a decomposition of the Calderon operator $G_e$. From \cite[Section 10.3.2]{monk} there exist operators $G_e^{(1)}, G_e^{(2)}:\mathbf{H}^{-1/2}(\Div,\partial B_R)\to\mathbf{H}^{-1/2}(\Div,\partial B_R)$ which satisfy} \begin{enumerate}[label=(\roman*)] \item \textnormal{$G_e = G_e^{(1)} + G_e^{(2)}$;} \item \textnormal{$G_e^{(1)}\circ\gamma_t\circ P_1:\boldsymbol{\mathcal{X}}_0\to\mathbf{H}^{-1/2}(\Div,\partial B_R)$ is compact, where $\gamma_t:\Hcurl{B_R\setminus\overline{B}}\to\mathbf{H}^{-1/2}(\Div,\partial B_R)$ is the tangential trace operator $\mathbf{u}\mapsto\un\times\mathbf{u}|_{\partial B_R}$ and $P_1:\boldsymbol{\mathcal{X}}_0\to\Hcurl{B_R\setminus\overline{B}}$ is the projection operator $(\mathbf{u},\mathbf{v},p)\mapsto\mathbf{u}$;} \item \textnormal{$ikG_e^{(2)}$ is nonnegative.} \end{enumerate} \end{remark} We now define the operators $\mathbb{A},\mathbb{B}:\boldsymbol{\mathcal{X}}_0\to \boldsymbol{\mathcal{X}}_0$ by means of the Riesz representation theorem such that \begin{align*} (\mathbb{A}(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{X}}_0} &= (\curl\mathbf{u},\curl\mathbf{u}')_{B_R\setminus\overline{B}} + \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B \\ &\quad\quad\quad+ k^2(\mathbf{u},\mathbf{u}')_{B_R\setminus\overline{B}} + k^2(\mathbf{v},\mathbf{v}')_B + (\nabla p,\nabla p')_B \\ &\quad\quad\quad+ ik\inner{G_e^{(2)}(\un\times\mathbf{u})}{\mathbf{u}_T'}_{\partial B_R}, \\ (\mathbb{B}(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{X}}_0} &= -2k^2(\mathbf{u},\mathbf{u}')_{B_R\setminus\overline{B}} - k^2(1+\eta)(\mathbf{v},\mathbf{v}')_B - (\nabla p,\nabla p')_B \\ &\quad\quad\quad+ ik\inner{G_e^{(1)}(\un\times\mathbf{u})}{\mathbf{u}_T'}_{\partial B_R}, \end{align*} for all 
$(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p')\in \boldsymbol{\mathcal{X}}_0$. We see that \begin{equation*} ((\mathbb{A}+\mathbb{B})(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{X}}_0} = a((\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p')) \quad\quad\quad\quad\forall(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p')\in \boldsymbol{\mathcal{X}}_0, \end{equation*} and as a result we need only study the operators $\mathbb{A}$ and $\mathbb{B}$. It is clear from the definition of $\mathbb{A}$ and Remark \ref{remark_Ge} that \begin{equation*} \abs{(\mathbb{A}(\mathbf{u},\mathbf{v},p),(\mathbf{u}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{X}}_0}} \ge C\left( \norm{\mathbf{u}}_{\Hcurl{B_R\setminus\overline{B}}}^2 + \norm{\mathbf{v}}_{\Hcurl{B}}^2 + \norm{\nabla p}_B^2 \right), \end{equation*} and it follows from the Lax-Milgram lemma that $\mathbb{A}:\boldsymbol{\mathcal{X}}_0\to \boldsymbol{\mathcal{X}}_0$ is invertible with bounded inverse. The compactness of $\mathbb{B}:\boldsymbol{\mathcal{X}}_0\to \boldsymbol{\mathcal{X}}_0$ follows easily from Theorem \ref{theorem_X0} and Remark \ref{remark_Ge}, and consequently we see that the operator $\mathbb{A} + \mathbb{B}$ is a Fredholm operator of index zero. Therefore, we conclude that the auxiliary problem \eqref{aux} is of Fredholm type, and since we already showed uniqueness of solutions in Theorem \ref{theorem_uniqueness} we obtain the following result. We remark that the proof of well-posedness may also be approached using the limiting absorption principle as in \cite{nguyen2018}. 
\begin{theorem} \label{theorem_aux} If $\Im(\eta)\ge0$, then there exists a unique solution of \eqref{aux} with the estimate \begin{equation*} \norm{\mathbf{E}_0^s}_{\Hcurl{B_R\setminus\overline{B}}} + \norm{\mathbf{E}_0}_{\Hcurl{B}} + \norm{\nabla P}_B \le C_R\left( \norm{\un\times\mathbf{E}^i}_{\mathbf{H}^{-1/2}(\Div,\partial B)} + \norm{\un\times\curl\mathbf{E}^i}_{\mathbf{H}^{-1/2}(\Div,\partial B)} \right), \end{equation*} where the constant $C_R$ is independent of $\mathbf{E}^i$ but depends on the domain $B_R$. \end{theorem} Now that we have established that the auxiliary problem is well-posed, we turn our attention to the physical and auxiliary data that will be collected in order to determine the eigenvalues corresponding to a medium. If we choose a plane wave incident field with direction $\mathbf{d}\in\mathbb{S}^2$ and polarization $\mathbf{p}\in\mathbb{R}^3\setminus\{\mathbf{0}\}$ given by \begin{equation*} \mathbf{E}^i(\mathbf{x},\mathbf{d};\mathbf{p}) := \frac{i}{k}\curl\curl\mathbf{p} e^{ik\mathbf{x}\cdot\mathbf{d}}, \end{equation*} then the scattered field for both the physical and auxiliary scattering problems (\eqref{sc} and \eqref{aux}, respectively) has the asymptotic behavior \begin{align*} \mathbf{E}^s(\mathbf{x}) &= \frac{e^{ik\abs{\mathbf{x}}}}{\abs{\mathbf{x}}}\left[ \mathbf{E}_\infty(\hat{\mathbf{x}},\mathbf{d};\mathbf{p}) + \mathcal{O}\left(\frac{1}{\abs{\mathbf{x}}}\right) \right], \quad\abs{\mathbf{x}}\to\infty, \\ \mathbf{E}_0^s(\mathbf{x}) &= \frac{e^{ik\abs{\mathbf{x}}}}{\abs{\mathbf{x}}}\left[ \mathbf{E}_{0,\infty}(\hat{\mathbf{x}},\mathbf{d};\mathbf{p}) + \mathcal{O}\left(\frac{1}{\abs{\mathbf{x}}}\right) \right], \quad\abs{\mathbf{x}}\to\infty, \end{align*} where for a nonzero $\mathbf{x}\in\mathbb{R}^3$ we define $\hat{\mathbf{x}} := \frac{\mathbf{x}}{\abs{\mathbf{x}}}$. 
The amplitudes $\mathbf{E}_\infty(\hat{\mathbf{x}},\mathbf{d};\mathbf{p})$ and $\mathbf{E}_{0,\infty}(\hat{\mathbf{x}},\mathbf{d};\mathbf{p})$ are the electric far field patterns of the physical and auxiliary problems, respectively, and they serve as the data for our problem; the physical far field pattern is collected from the system under investigation, and the auxiliary far field pattern is computed for various values of the parameter $\eta$. We remark that from standard arguments (cf. \cite{monk}) it follows that these electric far field patterns satisfy the reciprocity relations \begin{align*} \mathbf{q}\cdot\mathbf{E}_\infty(\hat{\mathbf{x}},\mathbf{d};\mathbf{p}) &= \mathbf{p}\cdot\mathbf{E}_\infty(-\mathbf{d},-\hat{\mathbf{x}};\mathbf{q}), \\ \mathbf{q}\cdot\mathbf{E}_{0,\infty}(\hat{\mathbf{x}},\mathbf{d};\mathbf{p}) &= \mathbf{p}\cdot\mathbf{E}_{0,\infty}(-\mathbf{d},-\hat{\mathbf{x}};\mathbf{q}), \end{align*} for all $\hat{\mathbf{x}},\mathbf{d}\in\mathbb{S}^2$ and $\mathbf{p},\mathbf{q}\in\mathbb{R}^3$. We now introduce the electric far field operator $\mathbf{F}:\mathbf{L}_t^2(\mathbb{S}^2)\to \mathbf{L}_t^2(\mathbb{S}^2)$ defined by \begin{equation*} (\mathbf{F}\mathbf{g})(\hat{\mathbf{x}}) := \int_{\mathbb{S}^2} \mathbf{E}_\infty(\hat{\mathbf{x}},\mathbf{d};\mathbf{g}(\mathbf{d})) \, ds(\mathbf{d}), \; \hat{\mathbf{x}}\in\mathbb{S}^2, \end{equation*} and we introduce the auxiliary far field operator $\mathbf{F}_0:\mathbf{L}_t^2(\mathbb{S}^2)\to \mathbf{L}_t^2(\mathbb{S}^2)$ defined by \begin{equation*} (\mathbf{F}_0\mathbf{g})(\hat{\mathbf{x}}) := \int_{\mathbb{S}^2} \mathbf{E}_{0,\infty}(\hat{\mathbf{x}},\mathbf{d};\mathbf{g}(\mathbf{d})) \, ds(\mathbf{d}), \; \hat{\mathbf{x}}\in\mathbb{S}^2. 
\end{equation*} With these definitions in hand, we further define the modified far field operator $\boldsymbol{\mathcal{F}}:\mathbf{L}_t^2(\mathbb{S}^2)\to \mathbf{L}_t^2(\mathbb{S}^2)$ by $\boldsymbol{\mathcal{F}} := \mathbf{F} - \mathbf{F}_0$, which may be written explicitly as \begin{equation*} (\boldsymbol{\mathcal{F}}\mathbf{g})(\hat{\mathbf{x}}) := \int_{\mathbb{S}^2} \Bigr[ \mathbf{E}_\infty(\hat{\mathbf{x}},\mathbf{d};\mathbf{g}(\mathbf{d})) - \mathbf{E}_{0,\infty}(\hat{\mathbf{x}},\mathbf{d};\mathbf{g}(\mathbf{d})) \Bigr] \, ds(\mathbf{d}), \; \hat{\mathbf{x}}\in\mathbb{S}^2. \end{equation*} The modified far field operator serves to compare the electric far field patterns of the physical and auxiliary problems, and in order to generate an eigenvalue problem we characterize when this operator is injective. We state the following theorem, which follows in a similar manner to \cite[Theorem 4.14]{cakoni_colton_monk} and \cite[Section 4]{camano_lackner_monk}. \begin{theorem} \label{mffo} The modified far field operator $\boldsymbol{\mathcal{F}}$ is injective with dense range if and only if there does not exist a nontrivial solution $(\mathbf{w},\mathbf{v},p)$ of the modified interior transmission problem \begin{subequations} \label{mit} \begin{align} \curl\curl\mathbf{w} - k^2\epsilon\mathbf{w} &= \mathbf{0} \text{ in } B, \label{mit1} \\ \curl\gamma^{-1}\curl\mathbf{v} - k^2\eta\mathbf{v} + k^2\nabla p &= \mathbf{0} \text{ in } B, \label{mit2} \\ \div\mathbf{v} &= 0 \text{ in } B, \label{mit3} \\ \un\cdot\mathbf{v} &= 0 \text{ on } \partial B, \label{mit4} \\ \un\times(\mathbf{w}-\mathbf{v}) &= \mathbf{0} \text{ on } \partial B, \label{mit5} \\ \un\times\left(\curl\mathbf{w} - \gamma^{-1}\curl\mathbf{v}\right) &= \mathbf{0} \text{ on } \partial B, \label{mit6} \end{align} \end{subequations} for which $\mathbf{v}$ and $p$ are of the form \begin{equation*} \mathbf{v}(\mathbf{x}) = \int_{\mathbb{S}^2} \mathbf{E}_0(\mathbf{x},\mathbf{d};\mathbf{g}(\mathbf{d})) \, 
ds(\mathbf{d}), \quad p(\mathbf{x}) = \int_{\mathbb{S}^2} P(\mathbf{x},\mathbf{d};\mathbf{g}(\mathbf{d})) \, ds(\mathbf{d}), \;\mathbf{x}\in B, \end{equation*} where $\mathbf{E}_0(\cdot,\mathbf{d};\mathbf{p})$ and $P(\cdot,\mathbf{d};\mathbf{p})$ satisfy \eqref{aux} with a plane wave incident field $\mathbf{E}^i(\cdot,\mathbf{d};\mathbf{p})$. \end{theorem} For a fixed $\gamma$, we call a value of $\eta$ for which there exists a nontrivial solution $(\mathbf{w},\mathbf{v},p)$ of \eqref{mit} a \emph{modified electromagnetic transmission eigenvalue}, and in the following sections we investigate this problem and its eigenvalues in greater detail. For future reference we define an \emph{electromagnetic Herglotz wave function} $\mathbf{v}_\mathbf{g}^i$ by \begin{equation} \label{herglotz} \mathbf{v}_\mathbf{g}^i(\mathbf{x}) := ik\int_{\mathbb{S}^2} e^{-ik\mathbf{x}\cdot\mathbf{d}} \mathbf{g}(\mathbf{d}) ds(\mathbf{d}), \quad \mathbf{x}\in\mathbb{R}^3. \end{equation} We remark that by linearity the electric far field pattern of the physical problem \eqref{sc} for $\mathbf{u}^i = \mathbf{v}_\mathbf{g}^i$ is given by $\mathbf{F}\mathbf{g}$, and the same relationship holds between the far field pattern of the auxiliary problem \eqref{aux} and the auxiliary far field operator $\mathbf{F}_0$. 
\section{The modified interior transmission problem} \label{sec_mitp} We now study a nonhomogeneous version of the modified interior transmission problem \eqref{mit}, which is to find $(\mathbf{w},\mathbf{v},p)\in \Hcurl{B}\times\Hcurl{B}\times H_*^1(B)$ satisfying \begin{subequations} \label{nmit} \begin{align} \curl\curl\mathbf{w} - k^2\epsilon\mathbf{w} &= \mathbf{f} \text{ in } B, \label{nmit1} \\ \curl\gamma^{-1}\curl\mathbf{v} - k^2\eta\mathbf{v} + k^2\nabla p &= \mathbf{g} \text{ in } B, \label{nmit2} \\ \div\mathbf{v} &= 0 \text{ in } B, \label{nmit3} \\ \un\cdot\mathbf{v} &= 0 \text{ on } \partial B, \label{nmit4} \\ \un\times(\mathbf{w}-\mathbf{v}) &= \boldsymbol{\xi} \text{ on } \partial B, \label{nmit5} \\ \un\times\left(\curl\mathbf{w} - \gamma^{-1}\curl\mathbf{v}\right) &= \mathbf{h} \text{ on } \partial B, \label{nmit6} \end{align} \end{subequations} where $\mathbf{f}\in\mathbf{L}^2(B)$, $\mathbf{g}\in\Hdivzero{B}$, and $\boldsymbol{\xi},\mathbf{h}\in\mathbf{H}^{-1/2}(\textnormal{Div},\partial B)$ are given. Here we have used $\Hdivzero{B}$ to denote the subspace of $\mathbf{L}^2(B)$ consisting of vector fields with vanishing divergence in $B$. Our approach will be similar to our analysis of the auxiliary problem in Section \ref{sec_aux}, and in the remark following the next assumption we make two modifications analogous to those of Remark \ref{remark_mod1}.
\begin{assumption} \label{assumption_dirichlet} \textnormal{ We assume that $k$ is chosen such that there exists a unique $\boldsymbol{\varphi}\in\Hcurl{B}$ satisfying \begin{subequations} \label{dir} \begin{align} \curl\curl\boldsymbol{\varphi} - k^2\epsilon\boldsymbol{\varphi} &= \mathbf{f} \text{ in } B, \label{dir1} \\ \un\times\boldsymbol{\varphi} &= \boldsymbol{\xi} \text{ on } \partial B, \label{dir2} \end{align} \end{subequations} with the estimate \begin{equation} \label{phi_est} \norm{\boldsymbol{\varphi}}_{\Hcurl{B}} \le C\left(\norm{\mathbf{f}}_{\mathbf{L}^2(B)} + \norm{\boldsymbol{\xi}}_{\mathbf{H}^{-1/2}(\textnormal{Div},\partial B)}\right) \end{equation} for a constant $C>0$ independent of $\mathbf{f}$ and $\boldsymbol{\xi}$. } \end{assumption} \begin{remark} \label{remark_mod2} \textnormal{ Assumption \ref{assumption_dirichlet} holds provided that $k^2$ is not an interior Maxwell eigenvalue (cf. \cite[Chapter 4]{monk}). Under this assumption we choose a lifting function $\boldsymbol{\varphi}\in\Hcurl{B}$ to be the unique solution of \eqref{dir} with the estimate \eqref{phi_est}. We may now replace $\mathbf{w}$ with $\mathbf{w} - \boldsymbol{\varphi}$ and modify $\mathbf{h}$ accordingly to obtain \eqref{nmit} with $\mathbf{f} = \mathbf{0}$ and $\boldsymbol{\xi} = \mathbf{0}$. Rather than make this modification explicit, we assume without loss of generality that $\mathbf{f} = \mathbf{0}$ and $\boldsymbol{\xi} = \mathbf{0}$. For the second modification we define $\zeta\in H_*^1(B)$ as the unique solution of \begin{align*} \Delta\zeta &= 0 \text{ in } B, \\ \normal{\zeta} &= k^{-2}\left( -\un\cdot\mathbf{g} + \nabla_{\partial B}\cdot\mathbf{h} \right) \text{ on } \partial B.
\end{align*} We replace $p$ with $p+\zeta$ in \eqref{nmit2}, which (along with our assumption above that $\mathbf{f} = \mathbf{0}$ and $\boldsymbol{\xi} = \mathbf{0}$) results in the equivalent problem of finding $(\mathbf{w},\mathbf{v},p)\in \Hcurl{B}\times\Hcurl{B}\times H_*^1(B)$ satisfying \begin{subequations} \label{eq_nmit} \begin{align} \curl\curl\mathbf{w} - k^2\epsilon\mathbf{w} &= \mathbf{0} \text{ in } B, \label{eq_nmit1} \\ \curl\gamma^{-1}\curl\mathbf{v} - k^2\eta\mathbf{v} + k^2\nabla p &= \mathbf{g} + k^2\nabla\zeta \text{ in } B, \label{eq_nmit2} \\ \div\mathbf{v} &= 0 \text{ in } B, \label{eq_nmit3} \\ \un\cdot\mathbf{v} &= 0 \text{ on } \partial B, \label{eq_nmit4} \\ \un\times(\mathbf{w}-\mathbf{v}) &= \mathbf{0} \text{ on } \partial B, \label{eq_nmit5} \\ \un\times\left(\curl\mathbf{w} - \gamma^{-1}\curl\mathbf{v}\right) &= \mathbf{h} \text{ on } \partial B. \label{eq_nmit6} \end{align} \end{subequations} } \end{remark} We are now in a position to study the nonhomogeneous problem \eqref{nmit} through the equivalent problem \eqref{eq_nmit}.
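We note in passing that the Neumann problem defining $\zeta$ is compatible, and hence uniquely solvable in $H_*^1(B)$: since $\mathbf{g}\in\Hdivzero{B}$ and the surface divergence of $\mathbf{h}$ integrates to zero over the closed surface $\partial B$, the boundary datum has zero mean,
\begin{equation*}
\int_{\partial B} \left( -\un\cdot\mathbf{g} + \nabla_{\partial B}\cdot\mathbf{h} \right) ds = -\int_B \div\mathbf{g} \, dx = 0.
\end{equation*}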
We first define the space \begin{equation*} \boldsymbol{\mathcal{H}} := \{(\mathbf{w},\mathbf{v},p)\in \Hcurl{B}\times\Hcurl{B}\times H_*^1(B) \mid \mathbf{w}-\mathbf{v}\in\Hcurltr{B}\}, \end{equation*} equipped with the standard inner product on $\Hcurl{B}\times\Hcurl{B}\times H_*^1(B)$, and we see that an equivalent variational formulation of \eqref{eq_nmit} is to find $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}$ such that \begin{align} \begin{split} \label{varprob} (\curl\mathbf{w},\curl\mathbf{w}')_B &- \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B - k^2(\epsilon\mathbf{w},\mathbf{w}')_B + k^2\eta(\mathbf{v},\mathbf{v}')_B \\ &- k^2(\nabla p,\mathbf{v}')_B - k^2(\mathbf{v},\nabla p')_B = -(\mathbf{g},\mathbf{v}')_B - \inner{\mathbf{h}}{\mathbf{w}_T'}_{\partial B} - k^2(\nabla\zeta,\mathbf{v}')_B \end{split} \end{align} for all $(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}$. If we choose $(\mathbf{w}',\mathbf{v}',p') = (\mathbf{0},\mathbf{0},q)$ in \eqref{varprob} for any $q\in H_*^1(B)$, then we see that \begin{equation} (\mathbf{v},\nabla q)_B = 0 \quad\forall q\in H_*^1(B), \label{var_prop1} \end{equation} which implies that $\div\mathbf{v} = 0$ in $B$ and $\un\cdot\mathbf{v} = 0$ on $\partial B$. By choosing $(\mathbf{w}',\mathbf{v}',p') = (\nabla\varphi,\nabla\psi,q)$ for $\varphi,\psi,q\in H_*^1(B)$ such that $\varphi-\psi\in H_0^1(B)$ (which implies that $\un\times(\nabla\varphi-\nabla\psi) = \mathbf{0}$ on $\partial B$) in \eqref{varprob} and applying the previous result we see that \begin{equation} -k^2(\epsilon\mathbf{w},\nabla\varphi)_B - k^2(\nabla p,\nabla \psi)_B = -(\mathbf{g},\nabla\psi)_B - \inner{\mathbf{h}}{\nabla_{\partial B}\varphi}_{\partial B} - k^2(\nabla\zeta,\nabla\psi)_B.
\label{var_prop2a} \end{equation} However, by the divergence theorem and the definition of the surface divergence operator the right-hand side of \eqref{var_prop2a} becomes \begin{align*} - (\mathbf{g},\nabla\psi)_B - \inner{\mathbf{h}}{\nabla_{\partial B}\varphi}_{\partial B} - k^2(\nabla\zeta,\nabla\psi)_B &= -\inner{\un\cdot\mathbf{g}}{\psi}_{\partial B} + \inner{\nabla_{\partial B}\cdot\mathbf{h}}{\varphi}_{\partial B} - k^2\inner{\normal{\zeta}}{\psi}_{\partial B} \\ &= \inner{\left(-\un\cdot\mathbf{g} + \nabla_{\partial B}\cdot\mathbf{h}\right) - k^2\normal{\zeta}}{\varphi}_{\partial B} \\&= 0, \end{align*} where the final equality follows from the definition of $\zeta$ in Remark \ref{remark_mod2}. We conclude that \begin{equation} (\epsilon\mathbf{w},\nabla\varphi)_B + (\nabla p,\nabla \psi)_B = 0. \label{var_prop2b} \end{equation} This result motivates us to define the spaces \begin{gather*} S := \{(\varphi,\psi,q)\in(H_*^1(B))^3 \mid \varphi-\psi\in H_0^1(B)\}, \\ \boldsymbol{\mathcal{H}}_0 := \{(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}} \mid (\epsilon\mathbf{w},\nabla\varphi)_B + (\nabla p,\nabla \psi)_B + (\mathbf{v},\nabla q)_B = 0 \quad\forall(\varphi,\psi,q)\in S\}, \end{gather*} where $\boldsymbol{\mathcal{H}}_0$ is equipped with the inner product on $\boldsymbol{\mathcal{H}}$. We observe that the space $\boldsymbol{\mathcal{H}}_0$ includes both conditions \eqref{var_prop1} and \eqref{var_prop2b} that must be satisfied by solutions of the modified interior transmission problem \eqref{mit}, and we will use this fact in order to establish results on the solvability of this problem. In the following lemma we clarify the condition built into the definition of the space $\boldsymbol{\mathcal{H}}_0$. 
\begin{lemma} \label{lemma_H0} A given $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}$ lies in the space $\boldsymbol{\mathcal{H}}_0$ if and only if \begin{subequations} \label{H0} \begin{gather} \div(\epsilon\mathbf{w}) = 0, \quad \Delta p = 0, \quad \div\mathbf{v} = 0 \text{ in } B, \label{H01} \\ \normal{p} + \un\cdot\epsilon\mathbf{w} = 0, \quad \un\cdot\mathbf{v} = 0 \text{ on } \partial B. \label{H02} \end{gather} \end{subequations} \end{lemma} \begin{proof} If $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}_0$, then we have already established the conditions on $\mathbf{v}$ in \eqref{H0}. If we choose $(\varphi,\psi,q) = (\phi,0,0)$ for some $\phi\in H_0^1(B)$, then it follows that $\div(\epsilon\mathbf{w}) = 0$ in $B$, and similar reasoning implies that $\Delta p = 0$ in $B$. For all $(\varphi,\psi,0)\in S$ we see from the divergence theorem and the preceding results that \begin{equation*} \inner{\normal{p} + \un\cdot\epsilon\mathbf{w}}{\varphi}_{\partial B} = \inner{\un\cdot\epsilon\mathbf{w}}{\varphi}_{\partial B} + \inner{\normal{p}}{\psi}_{\partial B} = (\epsilon\mathbf{w},\nabla\varphi)_B + (\nabla p,\nabla\psi)_B = 0, \end{equation*} which provides the remaining condition in \eqref{H02}. \par Conversely, if $(\mathbf{w},\mathbf{v},p)$ satisfies \eqref{H0}, then for all $(\varphi,\psi,q)\in S$ it follows from the divergence theorem that \begin{equation*} (\epsilon\mathbf{w},\nabla\varphi)_B + (\nabla p,\nabla \psi)_B + (\mathbf{v},\nabla q)_B = \inner{\un\cdot\epsilon\mathbf{w}}{\varphi}_{\partial B} + \inner{\normal{p}}{\psi}_{\partial B} = \inner{\un\cdot\epsilon\mathbf{w} + \normal{p}}{\varphi}_{\partial B} = 0, \end{equation*} and we conclude that $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}_0$. \qed \end{proof} We will return to the space $\boldsymbol{\mathcal{H}}_0$, but we must first introduce an important method for establishing solvability results for \eqref{nmit} when $\gamma\neq1$.
In the variational formulation \eqref{varprob}, we see that the principal part of the associated operator is sign-indefinite, and as a result we appeal to $\mathcal{T}$-coercivity in order to restore positivity (cf. \cite{cakoni_colton_haddar,chesnel}). In particular, in the case $0<\gamma<1$ we define the operator $\mathcal{T}:\boldsymbol{\mathcal{H}}\to\boldsymbol{\mathcal{H}}$ by $\mathcal{T}(\mathbf{w},\mathbf{v},p) := (\mathbf{w}-2\mathbf{v},-\mathbf{v},p)$, and we see that $\mathcal{T}^2 = I$ and consequently that $\mathcal{T}$ is an isomorphism. We will provide proofs for this case and then simply state the appropriate choice of $\mathcal{T}$ for the case $\gamma>1$. If the sesquilinear form $a_\eta(\cdot,\cdot)$ is defined on $\boldsymbol{\mathcal{H}}\times\boldsymbol{\mathcal{H}}$ by the left-hand side of \eqref{varprob}, then we define the sesquilinear form $a_\eta^\mathcal{T}(\cdot,\cdot)$ by \begin{equation*} a_\eta^\mathcal{T}((\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p')) := a_\eta((\mathbf{w},\mathbf{v},p),\mathcal{T}(\mathbf{w}',\mathbf{v}',p')) \quad\forall(\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}. \end{equation*} Although we will see that we have restored positivity of the principal part of the operator with the introduction of $\mathcal{T}$, the problem still remains that the space $\boldsymbol{\mathcal{H}}$ is not compactly embedded into $\mathbf{L}^2(B)\times\mathbf{L}^2(B)\times H_*^1(B)$. However, we may obtain compactness by working in the space $\boldsymbol{\mathcal{H}}_0$, which is still a valid space in which to seek the solution since $\mathcal{T}$ is an isomorphism on $\boldsymbol{\mathcal{H}}$. 
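We note that for $0<\gamma<1$ the involutive property of $\mathcal{T}$ asserted above follows from the direct computation
\begin{equation*}
\mathcal{T}^2(\mathbf{w},\mathbf{v},p) = \mathcal{T}(\mathbf{w}-2\mathbf{v},\,-\mathbf{v},\,p) = \bigl((\mathbf{w}-2\mathbf{v})-2(-\mathbf{v}),\,-(-\mathbf{v}),\,p\bigr) = (\mathbf{w},\mathbf{v},p),
\end{equation*}
and we observe that $\mathcal{T}$ indeed maps $\boldsymbol{\mathcal{H}}$ into itself, since $(\mathbf{w}-2\mathbf{v})-(-\mathbf{v}) = \mathbf{w}-\mathbf{v}\in\Hcurltr{B}$. An analogous computation applies to the choice of $\mathcal{T}$ used in the case $\gamma>1$.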
Thus, we introduce the problem of finding $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}_0$ which satisfies \begin{equation} a_\eta^\mathcal{T}((\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p')) = \ell(\mathcal{T}(\mathbf{w}',\mathbf{v}',p')) \quad\forall(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0, \label{varprob_T} \end{equation} where $\ell$ is the antilinear functional on $\boldsymbol{\mathcal{H}}_0$ representing the right-hand sides in \eqref{eq_nmit} with the isomorphism $\mathcal{T}$ applied to the test functions. By means of the Riesz representation theorem we define the operator $A_\eta^\mathcal{T}:\boldsymbol{\mathcal{H}}_0\to\boldsymbol{\mathcal{H}}_0$ such that \begin{equation*} (A_\eta^\mathcal{T}(\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{H}}} = a_\eta^\mathcal{T}((\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p')) \quad\forall(\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0. \end{equation*} We observe that if $(\mathbf{w},\mathbf{v},p)$ satisfies \eqref{eq_nmit} for some $\eta\in\mathbb{C}$, then $A_\eta^\mathcal{T}(\mathbf{w},\mathbf{v},p) = \mathbf{L}$, where $\mathbf{L}\in\boldsymbol{\mathcal{H}}_0$ is the Riesz representer for $\ell\circ\mathcal{T}$ in $\boldsymbol{\mathcal{H}}_0$, and as a result we study the operator $A_\eta^\mathcal{T}$. We must first prove a compactness result for the space $\boldsymbol{\mathcal{H}}_0$, and for that we will again appeal to Theorem \ref{theorem_costabel} along with another compactness result from \cite{monk}. \begin{theorem} \label{theorem_compact} The space $\boldsymbol{\mathcal{H}}_0$ is compactly embedded into $\mathbf{L}^2(B)\times\mathbf{L}^2(B)\times H_*^1(B)$. \end{theorem} \begin{proof} We let $\{(\mathbf{w}_m,\mathbf{v}_m,p_m)\}$ be a bounded sequence in $\boldsymbol{\mathcal{H}}_0$, and we observe that $\{\mathbf{v}_m\}$ is a bounded sequence in $\Hcurl{B}$. 
Since $\div\mathbf{v}_m = 0 \in L^2(B)$ and $\un\cdot\mathbf{v}_m = 0 \in L^2(\partial B)$ it follows from Theorem \ref{theorem_costabel} that $\{\un\times\mathbf{v}_m\}$ is a bounded sequence in $\mathbf{L}_t^2(\partial B)$ and $\{\mathbf{v}_m\}$ is a bounded sequence in $\mathbf{H}^{1/2}(B)$. In particular, the compact embedding of $\mathbf{H}^{1/2}(B)$ into $\mathbf{L}^2(B)$ implies the existence of a subsequence of $\{\mathbf{v}_m\}$ that converges in the latter space. By passing to the corresponding subsequence of $\{\mathbf{w}_m\}$ without changing notation, we also see that $\{\mathbf{w}_m\}$ is a bounded sequence in the space \begin{equation*} \mathbf{X}_0 := \left\{\boldsymbol{\psi}\in\Hcurl{B} \mid \div(\epsilon\boldsymbol{\psi}) = 0 \text{ in } B, \,\un\times\boldsymbol{\psi}\in\mathbf{L}_t^2(\partial B)\right\}, \end{equation*} equipped with the norm defined by \begin{equation*} \norm{\boldsymbol{\psi}}_{\mathbf{X}_0}^2 := \norm{\boldsymbol{\psi}}_{\Hcurl{B}}^2 + \norm{\un\times\boldsymbol{\psi}}_{\mathbf{L}_t^2(\partial B)}^2, \end{equation*} which follows from the fact that $\un\times\mathbf{w}_m = \un\times\mathbf{v}_m$ on $\partial B$. The compact embedding of the space $\mathbf{X}_0$ into $\mathbf{L}^2(B)$ (cf. \cite[Theorem 4.7]{monk}) allows us to conclude that there exists a subsequence of $\{\mathbf{w}_m\}$ that converges in $\mathbf{L}^2(B)$. Finally, we see that each $p_m\in H_*^1(B)$ satisfies the well-posed Neumann problem \begin{align*} \Delta p_m &= 0 \text{ in } B, \\ \normal{p_m} &= -\un\cdot(\epsilon\mathbf{w}_m) \text{ on } \partial B, \end{align*} which implies the existence of a constant $C$ independent of $m$ such that \begin{equation} \norm{\nabla p_m}_{\mathbf{L}^2(B)} \le C\norm{\un\cdot(\epsilon\mathbf{w}_m)}_{H^{-1/2}(\partial B)}.
\label{neumann_estimate} \end{equation} Since the sequence $\{\epsilon\mathbf{w}_m\}$ lies in the space $\Hdivzero{B}$ and contains a convergent subsequence in this space, the normal trace theorem for the space $\Hdiv{B}$ and the estimate \eqref{neumann_estimate} imply that the corresponding subsequence of $\{p_m\}$ converges in $H_*^1(B)$. Therefore, we conclude that a subsequence of $\{(\mathbf{w}_m,\mathbf{v}_m,p_m)\}$ converges in the space $\mathbf{L}^2(B)\times\mathbf{L}^2(B)\times H_*^1(B)$. \qed \end{proof} With this compactness result, we may now split the operator $A_\eta^\mathcal{T}$ into an invertible and compact part, and to this end we define the operators $\hat{A}^\mathcal{T}, B_\eta^\mathcal{T}:\boldsymbol{\mathcal{H}}_0\to\boldsymbol{\mathcal{H}}_0$ such that \begin{align*} (\hat{A}^\mathcal{T}(\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{H}}_0} &= (\curl\mathbf{w},\curl\mathbf{w}')_B + \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B + k^2(\mathbf{w},\mathbf{w}')_B \\ &\quad\quad+ k^2(\mathbf{v},\mathbf{v}')_B - 2(\curl\mathbf{w},\curl\mathbf{v}')_B + (\nabla p,\nabla p')_B, \\ (B_\eta^\mathcal{T}(\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p'))_{\boldsymbol{\mathcal{H}}_0} &= -k^2((\epsilon+1)\mathbf{w},\mathbf{w}')_B - k^2(\eta+1)(\mathbf{v},\mathbf{v}')_B + 2k^2(\epsilon\mathbf{w},\mathbf{v}')_B \end{align*} for all $(\mathbf{w},\mathbf{v},p),(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0$. From the definition of $A_\eta^\mathcal{T}$ we see that $A_\eta^\mathcal{T} = \hat{A}^\mathcal{T} + B_\eta^\mathcal{T}$.
An application of Young's inequality implies that \begin{equation*} 2\abs{(\curl\mathbf{w},\curl\mathbf{v})_B} \le \delta(\curl\mathbf{w},\curl\mathbf{w})_B + \delta^{-1}(\curl\mathbf{v},\curl\mathbf{v})_B \end{equation*} for any $\delta>0$, and it follows that \begin{align*} \abs{(\hat{A}^\mathcal{T}(\mathbf{w},\mathbf{v},p),(\mathbf{w},\mathbf{v},p))_{\boldsymbol{\mathcal{H}}_0}} &\ge (1-\delta)(\curl\mathbf{w},\curl\mathbf{w})_B + \left(\gamma^{-1} - \delta^{-1}\right)(\curl\mathbf{v},\curl\mathbf{v})_B \\ &\quad\quad+ k^2(\mathbf{w},\mathbf{w})_B+ k^2(\mathbf{v},\mathbf{v})_B + (\nabla p,\nabla p)_B. \end{align*} If $0<\gamma<1$ and we choose $\delta\in(\gamma,1)$, then we conclude from the Lax-Milgram Lemma that $\hat{A}^\mathcal{T}$ is invertible with bounded inverse. In the case $\gamma>1$, we may use the isomorphism defined by $\mathcal{T}(\mathbf{w},\mathbf{v},p) := (\mathbf{w},-\mathbf{v}+2\mathbf{w},p)$ in the same manner to conclude invertibility of $\hat{A}^\mathcal{T}$. In either case, the operator $B_\eta^\mathcal{T}$ is compact, as can be easily seen from the compact embedding of $\boldsymbol{\mathcal{H}}_0$ into $\mathbf{L}^2(B)\times\mathbf{L}^2(B)\times H_*^1(B)$ that we established in Theorem \ref{theorem_compact}. Therefore, we have shown that the operator $A_\eta^\mathcal{T} = \hat{A}^\mathcal{T} + B_\eta^\mathcal{T}$ is Fredholm of index zero provided that $\gamma\neq1$, and since $\mathcal{T}$ is an isomorphism we obtain the following result. We remark that, although we chose to use $\mathcal{T}$-coercivity in the present discussion, the proof of the subsequent theorem may alternatively be approached using techniques from \cite{cakoni_nguyen,nguyen_sil}. \begin{theorem} \label{theorem_Fredholm} If $\gamma\neq1$, then the modified interior transmission problem \eqref{nmit} is of Fredholm type. 
In particular, if $\eta$ is not a modified transmission eigenvalue, then there exists a unique solution $(\mathbf{w},\mathbf{v},p)\in\Hcurl{B}\times\Hcurl{B}\times H_*^1(B)$ of \eqref{nmit} satisfying the estimate \begin{equation} \norm{\mathbf{w}}_{\mathbf{H}(\curl,B)} + \norm{\mathbf{v}}_{\mathbf{H}(\curl,B)} + \norm{\nabla p}_B \le C\Bigl( \norm{\mathbf{f}}_B + \norm{\mathbf{g}}_B + \norm{\boldsymbol{\xi}}_{\mathbf{H}^{-1/2}(\textnormal{Div},\partial B)} + \norm{\mathbf{h}}_{\mathbf{H}^{-1/2}(\textnormal{Div},\partial B)} \Bigr). \label{fredholm_est} \end{equation} \end{theorem} We remark that the preceding argument establishes this result directly only for the equivalent problem \eqref{eq_nmit}, and Theorem \ref{theorem_Fredholm} follows from standard arguments on lifting functions since both $\boldsymbol{\varphi}$ and $\zeta$ defined in Remark \ref{remark_mod2} are controlled by the appropriate norms of the right-hand sides of \eqref{nmit}. The solvability result we established in Theorem \ref{theorem_Fredholm} and its preceding arguments will allow us to study the class of modified electromagnetic transmission eigenvalues in the next section. \section{Properties of modified electromagnetic transmission eigenvalues} \label{sec_props} We now investigate the properties of modified transmission eigenvalues, and we begin with an application of the analytic Fredholm theorem (cf. \cite[Theorem 8.26]{colton_kress}). Since the mapping $\eta\mapsto B_\eta^\mathcal{T}$ is clearly analytic, this theorem asserts that either i) the operator $A_\eta^\mathcal{T} = \hat{A}^\mathcal{T} + B_\eta^\mathcal{T}$ is invertible for no values of $\eta$, or ii) the operator $A_\eta^\mathcal{T} = \hat{A}^\mathcal{T} + B_\eta^\mathcal{T}$ is invertible for all $\eta$ except possibly in a discrete subset of the complex plane.
Our aim is to show that the second statement holds, which will follow once we establish the existence of at least one value of $\eta$ for which $A_\eta^\mathcal{T}$ is injective, as this property implies invertibility by Theorem \ref{theorem_Fredholm}. \par We suppose that $(\mathbf{w},\mathbf{v},p)$ satisfies \eqref{mit} for some $\eta\in\mathbb{C}$, which may be written variationally as \begin{align*} (\curl\mathbf{w},\curl\mathbf{w}')_B - \gamma^{-1}(\curl\mathbf{v},&\curl\mathbf{v}')_B - k^2(\epsilon\mathbf{w},\mathbf{w}')_B + k^2\eta(\mathbf{v},\mathbf{v}')_B \\ &- k^2(\nabla p,\mathbf{v}')_B - k^2(\mathbf{v},\nabla p')_B = 0 \quad\forall(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0. \end{align*} We remark that the last two terms on the left-hand side of this equation vanish by definition of $\boldsymbol{\mathcal{H}}_0$, and as a result we exclude them from this point onward. If we choose $(\mathbf{w}',\mathbf{v}',p') = (\mathbf{w},\mathbf{v},p)$ and take the imaginary part of this equation, then we see that \begin{equation*} k^2\Im(\eta)(\mathbf{v},\mathbf{v})_B = k^2(\Im(\epsilon)\mathbf{w},\mathbf{w})_B. \end{equation*} Recalling that $\Im(\epsilon)\geq0$, we observe that every eigenvalue $\eta$ must have nonnegative imaginary part. Moreover, if $\eta = -i\tau$ for some $\tau>0$, then this identity forces $\mathbf{v} = \mathbf{0}$, after which Assumption \ref{assumption_dirichlet} yields $\mathbf{w} = \mathbf{0}$ and Lemma \ref{lemma_H0} in turn gives $p = 0$; it follows that $A_{-i\tau}^{\mathcal{T}}$ is injective whenever $\tau>0$. Thus, by Theorem \ref{theorem_Fredholm} and the analytic Fredholm theorem we conclude that the set of modified electromagnetic transmission eigenvalues is discrete without finite accumulation point. We summarize these results in the following theorem. \begin{theorem} \label{theorem_discrete} If $\gamma\neq1$, then the set of modified electromagnetic transmission eigenvalues is discrete in the complex plane, and, if they exist, each eigenvalue has nonnegative imaginary part. \end{theorem} We define the space \begin{equation*} \Hdivnormzero{B} := \{\mathbf{u}\in\mathbf{L}^2(B) \mid \div\mathbf{u} = 0 \text{ in } B, \; \un\cdot\mathbf{u} = 0 \text{ on } \partial B\}.
\end{equation*} For a suitable choice of $z\in\mathbb{R}$, we next consider the operator $\boldPsi_z^{(\epsilon)}:\Hdivnormzero{B}\to\Hdivnormzero{B}$ defined by $\boldPsi_z^{(\epsilon)}\mathbf{g} := \mathbf{v}$, where $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}_0$ satisfies \begin{align} \begin{split} \label{source1} &(\curl\mathbf{w},\curl\mathbf{w}')_B - \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B \\ &\hspace{7em} - k^2(\epsilon\mathbf{w},\mathbf{w}')_B + k^2z(\mathbf{v},\mathbf{v}')_B = -k^2(\mathbf{g},\mathbf{v}')_B \quad\forall(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0, \end{split} \end{align} which is an equivalent variational formulation of \eqref{nmit} with $\mathbf{f} = \mathbf{0}$, $\boldsymbol{\xi} = \mathbf{0}$, $\mathbf{h} = \mathbf{0}$, and $k^2\mathbf{g}$ in place of $\mathbf{g}$ for convenience. We note that the ability to choose $z\in\mathbb{R}$ such that \eqref{source1} is well-posed follows from Theorem \ref{theorem_discrete}. We remark that the scalar field $p$ no longer appears in the equation, but it is still determined uniquely by the properties of the space $\boldsymbol{\mathcal{H}}_0$. We first establish the following results for the operator $\boldPsi_z^{(\epsilon)}$. \begin{prop} \label{prop_Psi} The operator $\boldPsi_z^{(\epsilon)}$ is injective. Moreover, a given $\eta\in\mathbb{C}$ is a modified electromagnetic transmission eigenvalue if and only if $(\eta-z)^{-1}$ is an eigenvalue of $\boldPsi_z^{(\epsilon)}$. \end{prop} \begin{proof} We suppose that $\boldPsi_z^{(\epsilon)}\mathbf{g} = \mathbf{0}$ for some $\mathbf{g}\in\Hdivnormzero{B}$, and from \eqref{source1} we have \begin{equation} (\curl\mathbf{w},\curl\mathbf{w}')_B - k^2(\epsilon\mathbf{w},\mathbf{w}')_B = -k^2(\mathbf{g},\mathbf{v}')_B \quad\forall(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0. 
\label{inj1} \end{equation} In particular, by choosing $\mathbf{w}'\in\Hcurltr{B}$ and $\mathbf{v}' = \mathbf{0}$ we obtain \begin{equation*} (\curl\mathbf{w},\curl\mathbf{w}')_B - k^2(\epsilon\mathbf{w},\mathbf{w}')_B = 0 \quad\forall\mathbf{w}'\in\Hcurltr{B}. \end{equation*} We see that $\mathbf{w}$ satisfies \eqref{dir} with $\mathbf{f} = \mathbf{0}$ and $\boldsymbol{\xi} = \mathbf{0}$, and consequently by Assumption \ref{assumption_dirichlet} we conclude that $\mathbf{w} = \mathbf{0}$. From \eqref{inj1} we immediately have $\mathbf{g} = \mathbf{0}$, and it follows that $\boldPsi_z^{(\epsilon)}$ is injective. \par For the second assertion, we suppose that $\boldPsi_z^{(\epsilon)} \mathbf{g} = \lambda\mathbf{g}$ for some $\lambda\in\mathbb{C}$ and nonzero $\mathbf{g}\in\Hdivnormzero{B}$. We note that $\lambda\neq0$ since $\boldPsi_z^{(\epsilon)}$ is injective. We may write $\mathbf{g} = \lambda^{-1}\mathbf{v}$, and from \eqref{source1} we have \begin{equation*} (\curl\mathbf{w},\curl\mathbf{w}')_B - \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B - k^2(\epsilon\mathbf{w},\mathbf{w}')_B + k^2z(\mathbf{v},\mathbf{v}')_B = -k^2(\lambda^{-1}\mathbf{v},\mathbf{v}')_B \quad\forall(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0. \end{equation*} By rearranging this expression we obtain \begin{equation*} (\curl\mathbf{w},\curl\mathbf{w}')_B - \gamma^{-1}(\curl\mathbf{v},\curl\mathbf{v}')_B - k^2(\epsilon\mathbf{w},\mathbf{w}')_B + k^2(z+\lambda^{-1})(\mathbf{v},\mathbf{v}')_B = 0 \quad\forall(\mathbf{w}',\mathbf{v}',p')\in\boldsymbol{\mathcal{H}}_0. \end{equation*} Since $\mathbf{v}\neq0$ by injectivity of $\boldPsi_z^{(\epsilon)}$, it follows that $\eta = z+\lambda^{-1}$ is a modified electromagnetic transmission eigenvalue. Noting that $\lambda = (\eta-z)^{-1}$ and following these steps in reverse order provides the converse result. 
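Explicitly, if $\eta$ is a modified electromagnetic transmission eigenvalue with nontrivial solution $(\mathbf{w},\mathbf{v},p)\in\boldsymbol{\mathcal{H}}_0$ of \eqref{mit}, then $\mathbf{g} := (\eta-z)\mathbf{v}$ lies in $\Hdivnormzero{B}$, and the eigenvalue relation implies that $(\mathbf{w},\mathbf{v},p)$ satisfies \eqref{source1} for this choice of $\mathbf{g}$, so that
\begin{equation*}
\boldPsi_z^{(\epsilon)}\mathbf{g} = \mathbf{v} = (\eta-z)^{-1}\mathbf{g}.
\end{equation*}
Here $\mathbf{v}\neq\mathbf{0}$, since otherwise Assumption \ref{assumption_dirichlet} would force $\mathbf{w} = \mathbf{0}$ (and in turn $p = 0$ by Lemma \ref{lemma_H0}), and $\eta\neq z$ since \eqref{source1} is well-posed for the chosen value $z$.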
\qed \end{proof} As a result of Proposition \ref{prop_Psi} we may establish properties of modified electromagnetic transmission eigenvalues by studying the spectrum of $\boldPsi_z^{(\epsilon)}$. In the following lemma we prove that our modification of \eqref{mp} indeed results in a compact solution operator. \begin{lemma} \label{lemma_Psi_compact} The operator $\boldPsi_z^{(\epsilon)}:\Hdivnormzero{B}\to\Hdivnormzero{B}$ is compact. \end{lemma} \begin{proof} We suppose that $\{\mathbf{g}_n\}$ is a bounded sequence in $\Hdivnormzero{B}$. The estimate \eqref{fredholm_est} combined with Theorem \ref{theorem_costabel} implies that \begin{equation*} \norm{\boldPsi_z^{(\epsilon)}\mathbf{g}}_{\mathbf{H}^{1/2}(B)} \le C\norm{\mathbf{g}}_B \end{equation*} for a constant $C>0$ independent of $\mathbf{g}$, and it follows that the sequence $\{\boldPsi_z^{(\epsilon)} \mathbf{g}_n\}$ is bounded in $\mathbf{H}^{1/2}(B)$. From the compact embedding of $\mathbf{H}^{1/2}(B)$ into $\mathbf{L}^2(B)$ we obtain a subsequence of $\{\boldPsi_z^{(\epsilon)} \mathbf{g}_n\}$ that converges to some $\mathbf{v}_0$ in $\mathbf{L}^2(B)$. Continuity of the divergence and normal trace operators yields $\mathbf{v}_0\in\Hdivnormzero{B}$ with convergence in this space as well, and we conclude that $\boldPsi_z^{(\epsilon)}$ is compact. \qed \end{proof} With the result of this lemma, the spectral theorem for compact operators immediately provides another proof that the set of modified transmission eigenvalues is discrete without finite accumulation point, and next we see that if $\epsilon$ is real-valued it follows that eigenvalues must exist as well.
Indeed, if we suppose that $\epsilon$ is real-valued and we let $(\mathbf{w}_j,\mathbf{v}_j,p_j)$ satisfy \eqref{source1} for $\mathbf{g} = \mathbf{g}_j\in\Hdivnormzero{B}$, $j = 1,2$, then we see that \begin{align*} -k^2(\mathbf{g}_1,\boldPsi_z^{(\epsilon)}\mathbf{g}_2)_B &= -k^2(\mathbf{g}_1,\mathbf{v}_2)_B \\ &= (\curl\mathbf{w}_1,\curl\mathbf{w}_2)_B - \gamma^{-1}(\curl\mathbf{v}_1,\curl\mathbf{v}_2)_B - k^2(\epsilon\mathbf{w}_1,\mathbf{w}_2)_B + k^2z(\mathbf{v}_1,\mathbf{v}_2)_B \\ &= \overline{(\curl\mathbf{w}_2,\curl\mathbf{w}_1)_B} - \gamma^{-1}\overline{(\curl\mathbf{v}_2,\curl\mathbf{v}_1)_B} - k^2\overline{(\epsilon\mathbf{w}_2,\mathbf{w}_1)_B} + k^2z\overline{(\mathbf{v}_2,\mathbf{v}_1)_B} \\ &= -k^2\overline{(\mathbf{g}_2,\mathbf{v}_1)_B} \\ &= -k^2(\boldPsi_z^{(\epsilon)}\mathbf{g}_1,\mathbf{g}_2)_B. \end{align*} Thus, the operator $\boldPsi_z^{(\epsilon)}$ is self-adjoint. Since $\boldPsi_z^{(\epsilon)}$ is also compact and injective on the infinite-dimensional space $\Hdivnormzero{B}$, it possesses an infinite sequence of real nonzero eigenvalues accumulating only at zero, and by Proposition \ref{prop_Psi} each such eigenvalue $\lambda_j$ corresponds to a real modified transmission eigenvalue $\eta_j = z + \lambda_j^{-1}$. We summarize the immediate consequences of the spectral theorem for compact self-adjoint operators in the following theorem. \begin{theorem} \label{theorem_eigs} If $\gamma\neq1$ and $\epsilon$ is real-valued, then all of the modified electromagnetic transmission eigenvalues are real and infinitely many exist. \end{theorem} Unfortunately, if $\epsilon$ has a nonzero imaginary part, then we have no general result on existence of eigenvalues as this operator is no longer self-adjoint. However, a limited existence result applicable when $\epsilon$ has sufficiently small imaginary part will be presented in a forthcoming manuscript (see \cite{cakoni_colton_haddar} for a similar result for standard transmission eigenvalues). \section{Determination of modified transmission eigenvalues from electric far field data} \label{sec_determine} In this section we establish that modified electromagnetic transmission eigenvalues may be computed from electric far field data using the linear sampling method.
We note that we correct some errors in a similar analysis performed in \cite{camano_lackner_monk} for electromagnetic Stekloff eigenvalues. We begin by recalling that an electric dipole located at $\mathbf{z}\in\mathbb{R}^3$ with polarization $\mathbf{q}$ is defined by \begin{align} \begin{split} \mathbf{E}_e(\mathbf{x},\mathbf{z},\mathbf{q}) &:= \frac{i}{k}\textbf{curl}_\mathbf{x}\,\textbf{curl}_\mathbf{x}\, \mathbf{q}\Phi(\mathbf{x},\mathbf{z}), \\ \mathbf{H}_e(\mathbf{x},\mathbf{z},\mathbf{q}) &:= \textbf{curl}_\mathbf{x}\, \mathbf{q}\Phi(\mathbf{x},\mathbf{z}). \end{split} \end{align} The electric field $\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q})$ is a radiating solution to Maxwell's equations outside of a neighborhood of $\mathbf{z}$ with corresponding far field pattern \begin{equation*} \mathbf{E}_{e,\infty}(\hat{\mathbf{x}},\mathbf{z},\mathbf{q}) := \frac{ik}{4\pi}(\hat{\mathbf{x}}\times\mathbf{q})\times\hat{\mathbf{x}}\, e^{-ik\hat{\mathbf{x}}\cdot\mathbf{z}}. \end{equation*} We investigate the \emph{modified far field equation} for $\mathbf{z}\in B$, which is to find $\mathbf{g}\in\mathbf{L}_t^2(\mathbb{S}^2)$ satisfying \begin{equation} (\boldsymbol{\mathcal{F}}\mathbf{g})(\hat{\mathbf{x}}) = \mathbf{E}_{e,\infty}(\hat{\mathbf{x}},\mathbf{z},\mathbf{q}). \label{mffe} \end{equation} If $\mathbf{g}_\mathbf{z}\in\mathbf{L}_t^2(\mathbb{S}^2)$ satisfies \eqref{mffe}, then we see from Rellich's lemma (cf.
\cite{colton_kress}) that \begin{equation*} \mathbf{w}_\mathbf{z}(\mathbf{x}) - \mathbf{v}_\mathbf{z}(\mathbf{x}) = \mathbf{w}_\mathbf{z}^s(\mathbf{x}) - \mathbf{v}_\mathbf{z}^s(\mathbf{x}) = \mathbf{E}_e(\mathbf{x},\mathbf{z},\mathbf{q}), \quad \mathbf{x}\in\mathbb{R}^3\setminus\overline{B}, \end{equation*} where $(\mathbf{w}_\mathbf{z},\mathbf{w}_\mathbf{z}^s)$ and $(\mathbf{v}_\mathbf{z},\mathbf{v}_\mathbf{z}^s,p_\mathbf{z})$ satisfy \eqref{sc} and \eqref{aux}, respectively, with incident field given by the Herglotz wave function $\mathbf{E}^i = \mathbf{v}_{\mathbf{g}_\mathbf{z}}^i$ that we defined in \eqref{herglotz}. It follows that $(\mathbf{w}_\mathbf{z},\mathbf{v}_\mathbf{z},p_\mathbf{z})$ satisfies the modified interior transmission problem \begin{subequations} \label{zmit} \begin{align} \curl\curl\mathbf{w}_\mathbf{z} - k^2\epsilon\mathbf{w}_\mathbf{z} &= \mathbf{0} \text{ in } B, \label{zmit1} \\ \curl\gamma^{-1}\curl\mathbf{v}_\mathbf{z} - k^2\eta\mathbf{v}_\mathbf{z} + k^2\nabla p_\mathbf{z} &= \mathbf{0} \text{ in } B, \label{zmit2} \\ \div\mathbf{v}_\mathbf{z} &= 0 \text{ in } B, \label{zmit3} \\ \un\cdot\mathbf{v}_\mathbf{z} &= 0 \text{ on } \partial B, \label{zmit4} \\ \un\times(\mathbf{w}_\mathbf{z}-\mathbf{v}_\mathbf{z}) &= \un\times\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q}) \text{ on } \partial B, \label{zmit5} \\ \un\times\left(\curl\mathbf{w}_\mathbf{z} - \gamma^{-1}\curl\mathbf{v}_\mathbf{z}\right) &= \un\times\curl\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q}) \text{ on } \partial B, \label{zmit6} \end{align} \end{subequations} and that these fields admit the decomposition \begin{equation} \mathbf{w}_\mathbf{z} = \mathbf{v}_{\mathbf{g}_\mathbf{z}}^i + \mathbf{w}_\mathbf{z}^s, \quad \mathbf{v}_\mathbf{z} = \mathbf{v}_{\mathbf{g}_\mathbf{z}}^i + \mathbf{v}_\mathbf{z}^s \text{ in } \mathbb{R}^3.
\label{decomp} \end{equation} Unfortunately, the solution of \eqref{zmit} cannot in general be decomposed as in \eqref{decomp}, but in the following lemma we show that these fields may be decomposed as the sum of an incident field and a radiating field such that the incident fields coincide. \begin{lemma} \label{lemma_decomp} If $\eta$ is not a modified transmission eigenvalue, then \eqref{zmit} has a unique solution $(\mathbf{w}_\mathbf{z},\mathbf{v}_\mathbf{z},p_\mathbf{z})\in\Hcurl{B}\times\Hcurl{B}\times H_*^1(B)$ and the fields $\mathbf{w}_\mathbf{z}$, $\mathbf{v}_\mathbf{z}$ may be decomposed as \begin{equation*} \mathbf{w}_\mathbf{z} = \mathbf{u}_\mathbf{z}^i + \mathbf{w}_\mathbf{z}^s, \quad \mathbf{v}_\mathbf{z} = \mathbf{u}_\mathbf{z}^i + \mathbf{v}_\mathbf{z}^s, \end{equation*} where $\mathbf{u}_\mathbf{z}^i\in\Hcurl{B}$ satisfies the free-space Maxwell's equations in $B$ and $\mathbf{w}_\mathbf{z}^s,\mathbf{v}_\mathbf{z}^s\in\Hcurlloc{\mathbb{R}^3}$ are radiating solutions of the free-space Maxwell's equations outside of $B$. \end{lemma} \begin{proof} Since $\eta$ is not a modified transmission eigenvalue, it follows from Theorem \ref{theorem_Fredholm} that \eqref{zmit} possesses a unique solution which depends continuously on the data. In order to arrive at the desired decompositions of $\mathbf{w}_\mathbf{z}$ and $\mathbf{v}_\mathbf{z}$, we apply the Stratton-Chu representation formula (cf. \cite{colton_kress}). 
First, we define the incident field \begin{align*} \mathbf{w}_\mathbf{z}^i(\mathbf{x}) &:= -\curl\int_{\partial B} \un(\mathbf{y})\times\mathbf{w}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) + \grad\int_{\partial B} \un(\mathbf{y})\cdot[\epsilon(\mathbf{y})\mathbf{w}_\mathbf{z}(\mathbf{y})] \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) \\ &\hspace{10em} - \int_{\partial B} \un(\mathbf{y})\times\curl\mathbf{w}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}), \quad \mathbf{x}\in B, \end{align*} and the scattered field \begin{align*} \mathbf{w}_\mathbf{z}^s(\mathbf{x}) &:= -\grad\int_{\partial B}\un(\mathbf{y})\cdot[(\epsilon(\mathbf{y})-1)\mathbf{w}_\mathbf{z}(\mathbf{y})] \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) + \grad\int_B \div[(\epsilon(\mathbf{y})-1)\mathbf{w}_\mathbf{z}(\mathbf{y})] \Phi(\mathbf{x},\mathbf{y}) d\mathbf{y} \\ &\hspace{10em} + k^2\int_B [\epsilon(\mathbf{y})-1]\mathbf{w}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) d\mathbf{y}, \quad \mathbf{x}\in\mathbb{R}^3, \end{align*} where we have used the fact that $\div(\epsilon\mathbf{w}_\mathbf{z}) = 0$ in $B$. By similar reasoning to \cite[p. 193]{colton_kress}, the incident field may be written as \begin{equation*} \mathbf{w}_\mathbf{z}^i(\mathbf{x}) := -\curl\int_{\partial B} \un(\mathbf{y})\times\mathbf{w}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) - \frac{1}{k^2}\curl\curl\int_{\partial B} \un(\mathbf{y})\times\curl\mathbf{w}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}), \quad \mathbf{x}\in B, \end{equation*} and consequently we see that $\mathbf{w}_\mathbf{z}^i$ satisfies the free-space Maxwell's equations in $B$. Moreover, an application of the identity \begin{equation*} \curl\curl = -\boldsymbol{\Delta} + \grad\div \end{equation*} and the divergence theorem implies that $\mathbf{w}_\mathbf{z}^s$ is a radiating solution of the free-space Maxwell's equations in $\mathbb{R}^3\setminus\overline{B}$.
A similar decomposition appeared in the proof of Lemma 4.1 in \cite{camano_lackner_monk}, but each of the incident and scattered fields is missing a necessary term which is corrected in the above decomposition. Similarly, we define the incident field \begin{align*} \mathbf{v}_\mathbf{z}^i(\mathbf{x}) &:= -\curl\int_{\partial B} \un(\mathbf{y})\times\mathbf{v}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) - \grad\int_{\partial B} \un(\mathbf{y})\cdot\nabla p_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) \\ &\hspace{10em} - \int_{\partial B} \un(\mathbf{y})\times\curl\mathbf{v}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}), \quad \mathbf{x}\in B, \end{align*} and the scattered field \begin{align*} \mathbf{v}_\mathbf{z}^s(\mathbf{x}) &:= \grad\int_{\partial B}\un(\mathbf{y})\cdot\nabla p_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) + \curl\int_B \left(1 - \gamma^{-1}\right)\curl\mathbf{v}_\mathbf{z}(\mathbf{y}) \Phi(\mathbf{x},\mathbf{y}) d\mathbf{y} \\ &\hspace{10em} + k^2\int_B [(\eta - 1)\mathbf{v}_\mathbf{z}(\mathbf{y}) - \nabla p_\mathbf{z}(\mathbf{y})] \Phi(\mathbf{x},\mathbf{y}) d\mathbf{y}, \quad \mathbf{x}\in\mathbb{R}^3, \end{align*} where we have used the fact that $\div\mathbf{v}_\mathbf{z} = 0$ in $B$ and $\un\cdot\mathbf{v}_\mathbf{z} = 0$ on $\partial B$. By the same arguments we see that $\mathbf{v}_\mathbf{z}^i$ and $\mathbf{v}_\mathbf{z}^s$ satisfy the free-space Maxwell's equations in $B$ and $\mathbb{R}^3\setminus\overline{B}$, respectively, and the scattered field satisfies the radiation condition. Since $\mathbf{w}_\mathbf{z} = \mathbf{w}_\mathbf{z}^i + \mathbf{w}_\mathbf{z}^s$ and $\mathbf{v}_\mathbf{z} = \mathbf{v}_\mathbf{z}^i + \mathbf{v}_\mathbf{z}^s$ by the Stratton-Chu formula, it remains to prove that $\mathbf{w}_\mathbf{z}^i = \mathbf{v}_\mathbf{z}^i$. 
By the boundary conditions \eqref{zmit} and the relation \begin{equation*} \un\cdot(\epsilon\mathbf{w}_\mathbf{z}) + \normal{p_\mathbf{z}} = \un\cdot\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q}) \text{ on } \partial B, \end{equation*} we see that \begin{align*} \mathbf{w}_\mathbf{z}^i(\mathbf{x}) - \mathbf{v}_\mathbf{z}^i(\mathbf{x}) &:= -\curl\int_{\partial B} \un(\mathbf{y})\times\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) + \grad\int_{\partial B} \un(\mathbf{y})\cdot\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) \\ &\hspace{10em} - \int_{\partial B} \un(\mathbf{y})\times\curl\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}), \quad \mathbf{x}\in B. \end{align*} As we did for the incident field $\mathbf{w}_\mathbf{z}^i$, we may write this difference as \begin{align*} \mathbf{w}_\mathbf{z}^i(\mathbf{x}) - \mathbf{v}_\mathbf{z}^i(\mathbf{x}) &:= -\curl\int_{\partial B} \un(\mathbf{y})\times\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) \\ &\quad\quad\quad\quad\quad- \frac{1}{k^2}\curl\curl\int_{\partial B} \un(\mathbf{y})\times\curl\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}), \quad \mathbf{x}\in B. \end{align*} We now consider a ball $B_R$ of radius $R>0$ sufficiently large that $\overline{B}\subset B_R$, and we define the domain $S_R := B_R\setminus\overline{B}$ with boundary $\partial S_R = \partial B_R\cup\partial B$, where the unit normal $\un$ on $\partial B_R$ is directed into the exterior of $B_R$ and the unit normal $\un$ on $\partial B$ is directed into the interior of $S_R$. 
Since any $\mathbf{x}\in B$ lies outside of $S_R$ we see that \begin{equation*} -\curl\int_{\partial S_R} \un(\mathbf{y})\times\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) - \frac{1}{k^2}\curl\curl\int_{\partial S_R} \un(\mathbf{y})\times\curl\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) = \mathbf{0}, \end{equation*} and consequently we obtain the representation \begin{align*} \mathbf{w}_\mathbf{z}^i(\mathbf{x}) - \mathbf{v}_\mathbf{z}^i(\mathbf{x}) &= -\curl\int_{\partial B_R} \un(\mathbf{y})\times\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}) \\ &\quad\quad\quad\quad\quad- \frac{1}{k^2}\curl\curl\int_{\partial B_R} \un(\mathbf{y})\times\curl\mathbf{E}_e(\mathbf{y},\mathbf{z},\mathbf{q}) \Phi(\mathbf{x},\mathbf{y}) ds(\mathbf{y}), \quad \mathbf{x}\in B, \end{align*} for any sufficiently large $R>0$. From the Silver-M{\"u}ller radiation condition satisfied by $\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q})$ and $\mathbf{H}_e(\cdot,\mathbf{z},\mathbf{q})$ it follows as in \cite[Theorem 6.7]{colton_kress} that this expression vanishes as $R\to\infty$ for any $\mathbf{x}\in B$. We conclude that $\mathbf{w}_\mathbf{z}^i = \mathbf{v}_\mathbf{z}^i$ and hence we may denote both incident fields by $\mathbf{u}_\mathbf{z}^i$. \qed\hfill $\Box$ \end{proof} We now factorize the modified far field operator $\boldsymbol{\mathcal{F}}$. We begin by defining the space of generalized incident fields as \begin{equation*} \Hcurlinc{B} := \{\mathbf{u}^i\in\Hcurl{B} \mid \curl\curl\mathbf{u}^i - k^2\mathbf{u}^i = \mathbf{0} \text{ in } B\}, \end{equation*} and we define the Herglotz operator $\mathbf{H}:\mathbf{L}_t^2(\mathbb{S}^2)\to\Hcurlinc{B}$ by $\mathbf{H}\mathbf{g} := \mathbf{v}_\mathbf{g}^i$. We also define two solution operators as follows.
We define $\mathbf{G}:\Hcurlinc{B}\to\mathbf{L}_t^2(\mathbb{S}^2)$ by $\mathbf{G}\mathbf{u}^i := \mathbf{w}_\infty^*$, where $\mathbf{w}_\infty^*$ is the far field pattern corresponding to the unique radiating solution $\mathbf{w}^*\in\Hcurlloc{\mathbb{R}^3}$ of \begin{equation} \curl\curl\mathbf{w}^* - k^2\epsilon\mathbf{w}^* = k^2(1-\epsilon)\mathbf{u}^i \text{ in } \mathbb{R}^3. \label{wstar} \end{equation} We define $\mathbf{G}_0:\Hcurlinc{B}\to\mathbf{L}_t^2(\mathbb{S}^2)$ by $\mathbf{G}_0\mathbf{u}^i := \mathbf{v}_\infty^*$, where $\mathbf{v}_\infty^*$ is the far field pattern corresponding to the unique solution $(\mathbf{v}^*,p^*)\in\Hcurlloc{\mathbb{R}^3}\times H_*^1(B)$ of \begin{subequations} \label{vstar} \begin{align} \curl\tilde{\gamma}^{-1}\curl\mathbf{v}^* - k^2\tilde{\eta}\mathbf{v}^* + k^2\chi_B\nabla p^* &= \curl\left(1-\tilde{\gamma}^{-1}\right)\curl\mathbf{u}^i + k^2(1-\tilde{\eta})\mathbf{u}^i \text{ in } \mathbb{R}^3, \label{vstar1} \\ \div\mathbf{v}^* &= 0 \text{ in } B, \label{vstar2} \\ \un\cdot\mathbf{v}^* &= -\un\cdot\mathbf{u}^i \text{ on } \partial B. \label{vstar3} \end{align} \end{subequations} We denote by $\chi_B$ the characteristic function for the domain $B$, and for any constant $a$ we denote by $\tilde{a}$ the function which has the constant values of $a$ in $B$ and $1$ elsewhere. We see that $\mathbf{F} = \mathbf{G}\mathbf{H}$ and $\mathbf{F}_0 = \mathbf{G}_0 \mathbf{H}$, and consequently we obtain the factorization $\boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{G}} \mathbf{H}$, where the operator $\boldsymbol{\mathcal{G}}:\Hcurlinc{B}\to\mathbf{L}_t^2(\mathbb{S}^2)$ is defined by $\boldsymbol{\mathcal{G}} := \mathbf{G} - \mathbf{G}_0$ and is compact. With this factorization in hand, we provide a characterization of the modified transmission eigenvalues in terms of $\boldsymbol{\mathcal{F}}$ in the following two theorems. \begin{theorem} \label{theorem_noteig} Let $\mathbf{z}\in B$.
If $\eta$ is not a modified electromagnetic transmission eigenvalue, then for every $\delta>0$ there exists $\mathbf{g}_\mathbf{z}^\delta\in\mathbf{L}_t^2(\mathbb{S}^2)$ satisfying \begin{equation} \lim_{\delta\to0} \norm{\boldsymbol{\mathcal{F}}\mathbf{g}_\mathbf{z}^\delta - \mathbf{E}_{e,\infty}(\cdot,\mathbf{z},\mathbf{q})}_{\mathbf{L}_t^2(\mathbb{S}^2)} = 0 \label{F_limit} \end{equation} such that the sequence $\left\{\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i\right\}_{\delta>0}$ of Herglotz wave functions is convergent and hence bounded in $\Hcurl{B}$ as $\delta\to0$. \end{theorem} \begin{proof} By the assumption that $\eta$ is not a modified electromagnetic transmission eigenvalue, it follows from Lemma \ref{lemma_decomp} that there exists a unique solution $(\mathbf{w}_\mathbf{z},\mathbf{v}_\mathbf{z},p_\mathbf{z})$ of \eqref{zmit} with the decomposition \begin{equation*} \mathbf{w}_\mathbf{z} = \mathbf{u}_\mathbf{z}^i + \mathbf{w}_\mathbf{z}^s, \quad \mathbf{v}_\mathbf{z} = \mathbf{u}_\mathbf{z}^i + \mathbf{v}_\mathbf{z}^s \text{ in } B, \end{equation*} where $\mathbf{u}_\mathbf{z}^i\in\Hcurlinc{B}$ and both of the scattered fields $\mathbf{w}_\mathbf{z}^s,\mathbf{v}_\mathbf{z}^s\in\Hcurlloc{\mathbb{R}^3}$ satisfy the free-space Maxwell's equations in $\mathbb{R}^3\setminus\overline{B}$ along with the Silver-M{\"u}ller radiation condition. The definition of $\boldsymbol{\mathcal{G}}$ implies that $\boldsymbol{\mathcal{G}}\mathbf{u}_\mathbf{z}^i = \mathbf{E}_{e,\infty}(\cdot,\mathbf{z},\mathbf{q})$. We note that the range of the Herglotz operator $\mathbf{H}$ is dense in $\Hcurlinc{B}$ (cf. \cite{coltonkress2001}), and consequently for any $\delta>0$ we may choose $\mathbf{g}_\mathbf{z}^\delta\in\mathbf{L}_t^2(\mathbb{S}^2)$ such that \begin{equation} \norm{\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i - \mathbf{u}_\mathbf{z}^i}_{\Hcurl{B}} < \frac{\delta}{\norm{\boldsymbol{\mathcal{G}}}}.
\label{Herglotz_limit} \end{equation} From the observation that $\boldsymbol{\mathcal{F}}\mathbf{g}_\mathbf{z}^\delta = \boldsymbol{\mathcal{G}}\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i$ we obtain \begin{align*} \norm{\boldsymbol{\mathcal{F}}\mathbf{g}_\mathbf{z}^\delta - \mathbf{E}_{e,\infty}(\cdot,\mathbf{z},\mathbf{q})}_{\mathbf{L}_t^2(\mathbb{S}^2)} &\le \norm{\boldsymbol{\mathcal{G}}}\norm{\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i - \mathbf{u}_\mathbf{z}^i}_{\Hcurl{B}} < \delta, \end{align*} which clearly implies \eqref{F_limit}. Moreover, we see from \eqref{Herglotz_limit} that $\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i \to \mathbf{u}_\mathbf{z}^i$ in $\Hcurl{B}$ as $\delta\to0$. \qed\hfill $\Box$ \end{proof} Before we prove the next theorem, we recall a result from \cite{camano_lackner_monk}. We remark that the original statement of the result in Lemma 4.3 of that work contains an error in the first equation. In particular, the term $\frac{1}{k^2}$ is missing, and we provide the corrected version here. \begin{lemma} \label{lemma_rep} For all $\mathbf{z}\in B$, $\mathbf{q}\in\mathbb{R}^3$, and sufficiently regular functions $\mathbf{u}$, we have the identities \begin{align*} &\int_{\partial B} \curl\mathbf{u}(\mathbf{x})\cdot\biggr(\un(\mathbf{x})\times\mathbf{E}_e(\mathbf{x},\mathbf{z},\mathbf{q}) \biggr) ds(\mathbf{x}) \\ &\hspace{5em} = ik\mathbf{q}\cdot \left( \frac{1}{k^2}\mathbf{grad}_\mathbf{z}\textnormal{div}_\mathbf{z}\int_{\partial B} \curl\mathbf{u}(\mathbf{x})\times\un(\mathbf{x}) \Phi(\mathbf{x},\mathbf{z}) ds(\mathbf{x}) \right. - \left. 
\int_{\partial B} \curl\mathbf{u}(\mathbf{x})\times\un(\mathbf{x}) \Phi(\mathbf{x},\mathbf{z}) ds(\mathbf{x}) \right) \end{align*} and \begin{equation*} \int_{\partial B} \biggr( \un(\mathbf{x})\times\mathbf{u}(\mathbf{x}) \biggr)\cdot\mathbf{curl}_\mathbf{x}\mathbf{E}_e(\mathbf{x},\mathbf{z},\mathbf{q}) ds(\mathbf{x}) = ik\mathbf{q}\cdot\mathbf{curl}_\mathbf{z}\int_{\partial B} \un(\mathbf{x})\times\mathbf{u}(\mathbf{x}) \Phi(\mathbf{x},\mathbf{z}) ds(\mathbf{x}). \end{equation*} \end{lemma} \begin{theorem} \label{theorem_eig} If $\eta$ is a modified electromagnetic transmission eigenvalue and the sequence $\{\mathbf{g}_\mathbf{z}^\delta\}_{\delta>0}$ satisfies \eqref{F_limit} for a given $\mathbf{z}\in B$, then the sequence $\{\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i\}$ cannot be bounded in $\Hcurl{B}$ as $\delta\to0$ for almost every $\mathbf{z}\in B_\rho$, where $B_\rho\subset B$ is an arbitrary ball of radius $\rho>0$. \end{theorem} \begin{proof} We suppose to the contrary that for some ball $B_\rho\subset B$ and all $\mathbf{z}\in B_\rho$ the sequence $\left\{\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i\right\}$ is bounded in $\Hcurl{B}$ as $\delta\to0$, which implies that, upon passing to a subsequence, $\left\{\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i\right\}$ converges weakly to some $\mathbf{u}_\mathbf{z}^i$ in $\Hcurlinc{B}$. By compactness of $\boldsymbol{\mathcal{G}}$ we obtain \begin{equation*} \boldsymbol{\mathcal{G}}\mathbf{v}_{\mathbf{g}_\mathbf{z}^\delta}^i \to \boldsymbol{\mathcal{G}}\mathbf{u}_\mathbf{z}^i \text{ in } \mathbf{L}_t^2(\mathbb{S}^2) \text{ as } \delta\to0, \end{equation*} and it follows from \eqref{F_limit} that $\boldsymbol{\mathcal{G}}\mathbf{u}_\mathbf{z}^i = \mathbf{E}_{e,\infty}(\cdot,\mathbf{z},\mathbf{q})$. 
If we let $\mathbf{w}_\mathbf{z}^*\in\Hcurlloc{\mathbb{R}^3}$ and $(\mathbf{v}_\mathbf{z}^*,p_\mathbf{z}^*)\in\Hcurlloc{\mathbb{R}^3}\times H_*^1(B)$ be the unique solutions of \eqref{wstar} and \eqref{vstar}, respectively, for the incident field $\mathbf{u}_\mathbf{z}^i$, then we see by definition of $\boldsymbol{\mathcal{G}}$ that \begin{equation*} \mathbf{w}_{\mathbf{z},\infty}^* - \mathbf{v}_{\mathbf{z},\infty}^* = \mathbf{E}_{e,\infty}(\cdot,\mathbf{z},\mathbf{q}). \end{equation*} An application of Rellich's lemma implies that $(\mathbf{w}_\mathbf{z},\mathbf{v}_\mathbf{z},p_\mathbf{z}) := (\mathbf{w}_\mathbf{z}^*|_B + \mathbf{u}_\mathbf{z}^i,\mathbf{v}_\mathbf{z}^*|_B + \mathbf{u}_\mathbf{z}^i,p_\mathbf{z}^*)$ satisfies \eqref{zmit}. Since $\eta$ is a modified electromagnetic transmission eigenvalue, there exists a nontrivial solution $(\mathbf{w}_\eta,\mathbf{v}_\eta,p_\eta)$ of \eqref{mit}. We may apply Green's second vector theorem as in \cite{colton_kress} along with some simple calculations to obtain \begin{align*} \int_{\partial B} \left( \un\times\mathbf{v}_\mathbf{z}\cdot\gamma^{-1}\curl\mathbf{v}_\eta - \un\times\mathbf{v}_\eta\cdot\gamma^{-1}\curl\mathbf{v}_\mathbf{z} \right) ds &= 0, \\ \int_{\partial B} \left( \un\times\mathbf{w}_\mathbf{z}\cdot\curl\mathbf{w}_\eta - \un\times\mathbf{w}_\eta\cdot\curl\mathbf{w}_\mathbf{z} \right) ds &= 0, \end{align*} and upon subtracting these equations and applying the boundary conditions \eqref{zmit5}--\eqref{zmit6} and \eqref{mit5}--\eqref{mit6} we see that \begin{equation*} \int_{\partial B} \biggr( \un\times\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q})\cdot\curl\mathbf{w}_\eta - \un\times\mathbf{w}_\eta\cdot\curl\mathbf{E}_e(\cdot,\mathbf{z},\mathbf{q}) \biggr) ds = 0.
\end{equation*} We now invoke the identities of Lemma \ref{lemma_rep} to write this equation as \begin{align} \begin{split} ik\mathbf{q}\cdot \Bigl( \frac{1}{k^2}\textbf{grad}_\mathbf{z}\textnormal{div}_\mathbf{z}&\int_{\partial B} \curl\mathbf{w}_\eta\times\un \Phi(\cdot,\mathbf{z}) ds + \int_{\partial B} \curl\mathbf{w}_\eta\times\un \Phi(\cdot,\mathbf{z}) ds \\ &\hspace{12em} - \textbf{curl}_\mathbf{z}\int_{\partial B} \un\times\mathbf{w}_\eta \Phi(\cdot,\mathbf{z}) ds \Bigr) = 0, \; \mathbf{z}\in B_\rho. \label{Lambda_normal} \end{split} \end{align} We define the function \begin{align*} \boldsymbol{\Lambda}(\mathbf{z}) := \frac{1}{k^2}&\textbf{grad}_\mathbf{z}\textnormal{div}_\mathbf{z}\int_{\partial B} \curl\mathbf{w}_\eta\times\un \Phi(\cdot,\mathbf{z}) ds + \int_{\partial B} \curl\mathbf{w}_\eta\times\un \Phi(\cdot,\mathbf{z}) ds \\ &\hspace{12em} - \textbf{curl}_\mathbf{z}\int_{\partial B} \un\times\mathbf{w}_\eta \Phi(\cdot,\mathbf{z}) ds, \; \mathbf{z}\in\mathbb{R}^3\setminus\partial B, \end{align*} and we observe that $\boldsymbol{\Lambda}$ satisfies the free-space Maxwell's equations in both $\mathbb{R}^3\setminus\overline{B}$ and $B$ and that it satisfies the radiation condition. From \eqref{Lambda_normal} we see that $ik\mathbf{q}\cdot\boldsymbol{\Lambda}(\mathbf{z}) = 0$ for all $\mathbf{z}\in B_\rho$ and all $\mathbf{q}\in\mathbb{R}^3$, and the unique continuation principle implies that $\boldsymbol{\Lambda}(\mathbf{z}) = \mathbf{0}$ for all $\mathbf{z}\in B$. It follows that \begin{equation*} \un\times\boldsymbol{\Lambda}^- = \mathbf{0} \text{ and } \un\times\curl\boldsymbol{\Lambda}^- = \mathbf{0} \text{ on } \partial B, \end{equation*} where the superscripts $+$ and $-$ denote the trace on $\partial B$ from the exterior and interior of $B$, respectively. The jump relations of vector potentials (cf.
\cite[Theorem 6.12]{colton_kress}) then imply that \begin{equation*} \un\times\boldsymbol{\Lambda}^+ = -\un\times\mathbf{w}_\eta \text{ and } \un\times\curl\boldsymbol{\Lambda}^+ = -\un\times\curl\mathbf{w}_\eta \text{ on } \partial B, \end{equation*} from which it follows that $\mathbf{E}^s := -\boldsymbol{\Lambda}\in\Hcurlloc{\mathbb{R}^3\setminus\overline{B}}$ and $\mathbf{E} := \mathbf{w}_\eta\in\Hcurl{B}$ satisfy \eqref{sc} with $B$ in place of $D$ and $\mathbf{E}^i = 0$. (Note that this problem is equivalent to \eqref{sc} upon redefining the total and scattered fields since $\epsilon=1$ in $B\setminus\overline{D}$.) Since this problem is well-posed we conclude that both $\boldsymbol{\Lambda}$ and $\mathbf{w}_\eta$ are identically zero, which by the boundary conditions \eqref{mit} implies that \begin{equation*} \un\times\mathbf{v}_\eta = \mathbf{0} \text{ and } \un\times\gamma^{-1}\curl\mathbf{v}_\eta = \mathbf{0} \text{ on } \partial B. \end{equation*} The same arguments used in the proof of uniqueness of the auxiliary problem \eqref{aux} yield $\mathbf{v}_\eta = \mathbf{0}$ and $p_\eta = 0$ in $B$, which contradicts the assumption that the solution $(\mathbf{w}_\eta,\mathbf{v}_\eta,p_\eta)$ of the homogeneous modified interior transmission problem was nonzero. \qed\hfill $\Box$ \end{proof} \section{Numerical examples} \label{sec_numerics} We begin this section with a brief explanation of how the results of Theorems \ref{theorem_noteig} and \ref{theorem_eig} allow us to detect modified electromagnetic transmission eigenvalues from electric far field data via the linear sampling method (LSM). We begin with the measured scattering data represented by the electric far field operator $\mathbf{F}$, and we select a rectangular region in the complex plane (or an interval on the real line if $\epsilon$ is real-valued) and construct a grid of values of $\eta$ in which to seek eigenvalues. 
For each such $\eta$, we compute the auxiliary far field operator $\mathbf{F}_0$ and construct the modified far field operator $\boldsymbol{\mathcal{F}}$. In practice these operators are discretized by computing electric far field patterns for various choices of incident direction $\mathbf{d}$ on the unit sphere. We use Tikhonov regularization to approximately solve a discretized version of the modified far field equation \eqref{mffe} for multiple choices of source points $\mathbf{z}$ and polarization vectors $\mathbf{q}$. In each case we compute the $\mathbf{L}_t^2(\mathbb{S}^2)$-norm of the solution $\mathbf{g}_{\eta,\mathbf{z},\mathbf{q}}$ (which serves as a proxy for the $\Hcurl{B}$-norm of $\mathbf{v}_{\mathbf{g}_{\eta,\mathbf{z},\mathbf{q}}}^i$), average it over all $\mathbf{z}$ and $\mathbf{q}$ to obtain a number $g_\eta$, and investigate the contour plot of the LSM indicator function $\eta\mapsto g_\eta$. Theorems \ref{theorem_noteig} and \ref{theorem_eig} suggest that the number $g_\eta$ should be large when $\eta$ is an eigenvalue and small otherwise. Thus, we determine eigenvalues by seeking peaks in the contour plot of the LSM indicator function. We refer to \cite{camano_lackner_monk} for another implementation of this method to determine eigenvalues related to electromagnetic scattering. \par We now provide a simple example demonstrating that modified electromagnetic transmission eigenvalues can be detected from simulated far field data and that they shift in response to changes in the permittivity $\epsilon$. For convenience we choose $D$ to be the unit ball in $\mathbb{R}^3$, $B=D$, and $\epsilon$ to be a real constant in $D$, and we choose the wave number as $k=2$ and the number of incident fields as $N_{\text{inc}} = 99$. In this case both the physical scattering problem \eqref{sc} and the auxiliary problem \eqref{aux} may be solved via separation of variables, and we may compute exact eigenvalues in the same manner for the problem \eqref{mit}.
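The regularization and averaging steps described above can be illustrated by the following minimal sketch (not the implementation used for the experiments below); the discretized operator \texttt{Fcal} and the right-hand sides sampled from $\mathbf{E}_{e,\infty}(\cdot,\mathbf{z},\mathbf{q})$ are placeholders, and a square $N\times N$ discretization is assumed:

```python
import numpy as np

def lsm_indicator(Fcal, rhs_list, alpha=1e-8):
    """Averaged LSM indicator g_eta: Tikhonov-regularized solutions of
    the discretized modified far field equation Fcal @ g = rhs, one
    right-hand side per source point z and polarization q.
    Assumes a square N x N discretization of the operator."""
    U, s, Vh = np.linalg.svd(Fcal)
    norms = []
    for rhs in rhs_list:
        b = U.conj().T @ rhs
        # Tikhonov filter: replace 1/s by s/(s^2 + alpha)
        g = Vh.conj().T @ (s * b / (s**2 + alpha))
        norms.append(np.linalg.norm(g))
    # a large indicator value flags eta as a candidate eigenvalue
    return float(np.mean(norms))
```

In the experiments below the discrete norm of $\mathbf{g}_{\eta,\mathbf{z},\mathbf{q}}$ plays this role, and the indicator is evaluated on a grid of $\eta$-values.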
We refer to the Appendix for a discussion of this procedure, and we note that the numerical evaluation of the vector spherical harmonics in the series expansions is based on \cite{wang_wang_xie}. Before we continue to an example, we remark that a root-finding algorithm is used to determine the roots of the modified determinant functions given by \eqref{moddet}, which we refer to as the exact eigenvalues. \par In Figure \ref{fig:shift} we provide an example of both the detection of modified electromagnetic transmission eigenvalues and their shift due to changes in $\epsilon$ for the choices $\gamma = 0.5$ (left column) and $\gamma = 2$ (right column). We have shown a wide range of $\eta$-values in the top row, and in the bottom row we have restricted the plot to a smaller interval in order to highlight the noticeable shift in the most sensitive eigenvalues. The physical and auxiliary electric far field patterns have been computed using separation of variables, and the resulting modified electric far field operator has been subjected to approximately 2\% multiplicative uniform noise (see \cite{cogar_colton_meng_monk} for details). For both $\gamma=0.5$ and $\gamma=2$ we observe that multiple eigenvalues are detected in this range and that these eigenvalues shift in response to a change from $\epsilon = 2$ to $\epsilon = 1.9$, suggesting that this class of eigenvalues has the potential to be useful in detecting flaws in a material given electromagnetic scattering data. In this simple example we do not see a significant difference between $\gamma=0.5$ and $\gamma=2$; in both cases the most sensitive eigenvalues shift by a comparable amount. However, it has been observed for scalar problems of a similar type that some choices of $\gamma$ can increase this sensitivity to changes in the material, sometimes by an order of magnitude (cf. \cite{cogar,cogar_colton_meng_monk,cogar_colton_monk}).
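The multiplicative noise model mentioned above can be sketched in a few lines; the routine name and interface are ours, with \texttt{delta} the relative noise level applied entrywise to the discretized far field operator:

```python
import numpy as np

def add_multiplicative_noise(F, delta=0.02, rng=None):
    """Perturb each entry of the discretized far field operator F by
    complex multiplicative uniform noise of relative level delta."""
    rng = np.random.default_rng() if rng is None else rng
    # real and imaginary perturbations uniform in [-delta, delta]
    noise = rng.uniform(-delta, delta, F.shape) \
        + 1j * rng.uniform(-delta, delta, F.shape)
    return F * (1.0 + noise)
```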
We plan to investigate this question and the broader behavior of these eigenvalues for more complicated domains in a forthcoming manuscript. \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{figure2a.png} \caption{$\gamma = 0.5$} \label{fig:shift_gammap5_wide} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{figure2b.png} \caption{$\gamma = 2$} \label{fig:shift_gamma2_wide} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{figure2c.png} \caption{$\gamma = 0.5$} \label{fig:shift_gammap5_zoom} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{figure2d.png} \caption{$\gamma = 2$} \label{fig:shift_gamma2_zoom} \end{subfigure} \caption{The shift of the eigenvalues due to a change in $\epsilon$ for two different values of $\gamma$. The figures in the top row demonstrate the detection of eigenvalues for $\epsilon = 1.9, 2$ for both $\gamma=0.5$ (left column) and $\gamma=2$ (right column). The figures in the bottom row are identical to the top row but with a narrower interval that highlights the shift in the most sensitive eigenvalues.} \label{fig:shift} \end{figure} \section{Conclusion} \label{sec_conclusion} We introduced a new eigenvalue problem related to electromagnetic scattering for potential use as a target signature in nondestructive testing of materials. This class of eigenvalues is generated by comparing the measured scattering data for a given medium to an auxiliary problem that depends on a parameter $\eta$ and may be computed independently of the measured scattering data. 
After showing that this auxiliary problem is well-posed, we established that a nonhomogeneous version of the modified interior transmission problem is of Fredholm type and investigated some properties of the eigenvalues, proving that the eigenvalues are discrete and that infinitely many eigenvalues exist if $\epsilon$ is real-valued. We concluded by showing that the eigenvalues may be determined from the measured electric far field data using the linear sampling method and providing a simple example for scattering by the unit ball with constant permittivity $\epsilon$. \par Many questions remain open concerning eigenvalue problems generated from modified far field operators, including a general existence theory for the eigenvalues when $\epsilon$ is complex-valued and non-smooth in $B$. Of particular interest for electromagnetic problems of this type is the characterization of when the straightforward electromagnetic analogue of scalar eigenvalue problems possesses the Fredholm property. In the present context, we remarked in Section \ref{sec_introduction} that the eigenvalue problem \eqref{mp} may not possess this property in the range of $\epsilon|_D$, which in the example given consisted of a single point. For variable $\epsilon$ this range would be a larger set, and the solvability of the problem is unknown when $\eta$ falls in this range. However, this question was addressed for electromagnetic Stekloff eigenvalues in \cite{halla2,halla1}, and it is possible that similar techniques may be applied to modified electromagnetic transmission eigenvalues as well. Finally, results on the stability of modified electromagnetic transmission eigenvalues will be presented in a forthcoming manuscript, along with a limited existence result for eigenvalues corresponding to complex-valued $\epsilon$ based on classical perturbation theory from \cite{kato}. 
\section*{Appendix} \label{app_sov} We now provide some details about the separation of variables procedure for the auxiliary problem \eqref{aux} and the eigenvalue problem \eqref{mit} in the case where $D$ is chosen to be the unit ball in $\mathbb{R}^3$, $B=D$, and $\epsilon$ is constant in $D$. Separation of variables for the physical scattering problem \eqref{sc} is standard, and we refer to \cite{colton_kress,kirsch_hettlich} for a detailed treatment. For the auxiliary problem, we assume that $\eta\neq0$ and we replace $\mathbf{E_0}$ in \eqref{aux} with $\mathbf{E}_0 + \eta^{-1}\nabla P$, which results in the equivalent problem of finding $\mathbf{E}_0^s\in \Hcurlloc{\mathbb{R}^3\setminus\overline{B}}$, $\mathbf{E}_0\in \Hcurl{B}$, and $P\in H_*^1(B) $ satisfying \begin{subequations} \label{eqaux} \begin{align} \curl\curl \mathbf{E}_0^s - k^2 \mathbf{E}_0^s &= \mathbf{0} \text{ in } \mathbb{R}^3\setminus\overline{B}, \label{eqaux1} \\ \curl\gamma^{-1}\curl \mathbf{E}_0 - k^2\eta \mathbf{E}_0 &= \mathbf{0} \text{ in } B, \label{eqaux2} \\ \Delta P &= 0 \text{ in } B, \label{eqaux3} \\ \eta^{-1}\normal{P} + \un\cdot \mathbf{E}_0 &= 0 \text{ on } \partial B, \label{eqaux4} \\ \un\times\mathbf{E}_0 + \eta^{-1}\un\times\nabla P - \un\times\mathbf{E}_0^s &= \un\times \mathbf{E}^i \text{ on } \partial B, \label{eqaux5} \\ \un\times\gamma^{-1}\curl \mathbf{E}_0 - \un\times\curl \mathbf{E}_0^s &= \un\times\curl \mathbf{E}^i \text{ on } \partial B, \label{eqaux6} \\ \mathclap{\lim_{r\to\infty} \left(\curl \mathbf{E}_0^s\times\mathbf{x} - ikr\mathbf{E}_0^s\right) = 0.} \label{eqaux7} \end{align} \end{subequations} In this form we may apply the standard approach for Maxwell's equations along with expansion methods for Laplace's equation found in \cite{kirsch_hettlich}, in which the scattered and total fields are of the form \begin{align*} \mathbf{E}_0^s(\mathbf{x}) &= \sum_{n=1}^\infty \sum_{m=-n}^n \Biggr[ \alpha_n^m \frac{\sqrt{n(n+1)}}{r} h_n^{(1)}(kr) 
Y_n^m(\mathbf{\hat{x}})\mathbf{\hat{x}} \\ &\hspace{8em} + \alpha_n^m \frac{(rh_n^{(1)}(kr))'}{r} \mathbf{U}_n^m(\mathbf{\hat{x}}) + \beta_n^m h_n^{(1)}(kr) \mathbf{V}_n^m(\mathbf{\hat{x}}) \Biggr], \; \mathbf{x}\in\mathbb{R}^3\setminus\overline{B}, \\ \mathbf{E}_0(\mathbf{x}) &= \sum_{n=1}^\infty \sum_{m=-n}^n \Biggr[ \delta_n^m \frac{\sqrt{n(n+1)}}{r} j_n(k\sqrt{\gamma\eta}r) Y_n^m(\mathbf{\hat{x}})\mathbf{\hat{x}} \\ &\hspace{8em} + \delta_n^m \frac{(rj_n(k\sqrt{\gamma\eta}r))'}{r} \mathbf{U}_n^m(\mathbf{\hat{x}}) + \varphi_n^m j_n(k\sqrt{\gamma\eta}r) \mathbf{V}_n^m(\mathbf{\hat{x}}) \Biggr], \; \mathbf{x}\in B, \end{align*} and the scalar field $P$ is of the form \begin{equation*} P(\mathbf{x}) = \sum_{n=1}^\infty \sum_{m=-n}^n p_n^m r^n Y_n^m(\mathbf{\hat{x}}), \; \mathbf{x}\in B. \end{equation*} Here we have denoted by $\mathbf{U}_n^m$ and $\mathbf{V}_n^m$ the vector spherical harmonics \begin{equation*} \mathbf{U}_n^m(\mathbf{\hat{x}}) = \frac{1}{\sqrt{n(n+1)}} \nabla_{\mathbb{S}^2} Y_n^m(\mathbf{\hat{x}}), \; \mathbf{V}_n^m(\mathbf{\hat{x}}) = \mathbf{\hat{x}}\times\mathbf{U}_n^m(\mathbf{\hat{x}}).
\end{equation*} Applying the boundary conditions \eqref{eqaux4}--\eqref{eqaux6} (in a similar manner to \cite{camano_lackner_monk}) now implies that the coefficients satisfy \begin{gather} \begin{split} \label{system0} \left( \begin{array}{ccccc} (rh_n^{(1)}(kr))'|_{r=1} & 0 & -(rj_n(k\sqrt{\gamma\eta}r))'|_{r=1} & 0 & -\eta^{-1} \sqrt{n(n+1)} \\ 0 & h_n^{(1)}(k) & 0 & -j_n(k\sqrt{\gamma\eta}) & 0 \\ 0 & (rh_n^{(1)}(kr))'|_{r=1} & 0 & -\gamma^{-1} (rj_n(k\sqrt{\gamma\eta}r))'|_{r=1} & 0 \\ h_n^{(1)}(k) & 0 & -\eta j_n(k\sqrt{\gamma\eta}) & 0 & 0 \\ 0 & 0 & \sqrt{n(n+1)} j_n(k\sqrt{\gamma\eta}) & 0 & \eta^{-1} n \end{array} \right) \\ \times \left( \begin{array}{c} \alpha_n^m \\ \beta_n^m \\ \delta_n^m \\ \varphi_n^m \\ p_n^m \end{array} \right) = \left( \begin{array}{c} -a_n^m (rj_n(kr))'|_{r=1} \\ -b_n^m j_n(k) \\ -b_n^m (rj_n(kr))'|_{r=1} \\ -a_n^m j_n(k) \\ 0 \end{array} \right), \end{split} \end{gather} where the coefficients $a_n^m$ and $b_n^m$ are chosen such that the incident field $\mathbf{E}^i$ may be expanded as \begin{equation*} \mathbf{E}^i(\mathbf{x}) = \sum_{n=1}^\infty \sum_{m=-n}^n \left[ a_n^m \frac{\sqrt{n(n+1)}}{r} j_n(kr) Y_n^m(\mathbf{\hat{x}})\mathbf{\hat{x}} + a_n^m \frac{(rj_n(kr))'}{r} \mathbf{U}_n^m(\mathbf{\hat{x}}) + b_n^m j_n(kr) \mathbf{V}_n^m(\mathbf{\hat{x}}) \right], \; \mathbf{x}\in\mathbb{R}^3. \end{equation*} In the case of a plane wave incident field with propagation direction $\mathbf{d}$ and polarization vector $\mathbf{p}$, these coefficients are given by \begin{equation*} a_n^m = -\frac{4\pi i^{n+1}}{k} \overline{\mathbf{U}_n^m(\mathbf{d})}^T \mathbf{p}, \; b_n^m = 4\pi i^n \overline{\mathbf{V}_n^m(\mathbf{d})}^T \mathbf{p}, \end{equation*} as a result of the Jacobi-Anger expansion (cf. \cite{colton_kress}).
With the coefficients satisfying \eqref{system0}, the auxiliary far field pattern may be constructed by the formula \begin{equation} \label{FFP} \mathbf{E}_{0,\infty}(\mathbf{\hat{x}},\mathbf{d};\mathbf{p}) = -\frac{1}{k} \sum_{n=1}^\infty \frac{1}{i^{n+1}} \sum_{m=-n}^n \left[ \alpha_n^m \sqrt{n(n+1)}\mathbf{V}_n^m(\mathbf{\hat{x}}) + ik\beta_n^m \sqrt{n(n+1)}\mathbf{U}_n^m(\mathbf{\hat{x}}) \right]. \end{equation} Following this same procedure for the modified electromagnetic transmission eigenvalue problem \eqref{mit} implies that $\eta\neq0$ is an eigenvalue if and only if it is a root of one of the determinant functions \begin{subequations} \label{moddet} \begin{align} \tilde{d}_n^{(a)}(\eta) &= \left(1-\gamma^{-1}\right)j_n(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) + k\sqrt{\epsilon} j_n'(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) - k\sqrt{\gamma^{-1}\eta} j_n(k\sqrt{\epsilon}) j_n'(k\sqrt{\gamma\eta}), \label{moddet_a} \\ \tilde{d}_n^{(b)}(\eta) &= (\eta+n\epsilon)j_n(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) + k\eta\sqrt{\epsilon} j_n'(k\sqrt{\epsilon})j_n(k\sqrt{\gamma\eta}) - k\epsilon\sqrt{\gamma\eta} j_n(k\sqrt{\epsilon}) j_n'(k\sqrt{\gamma\eta}). \label{moddet_b} \end{align} \end{subequations} Compared to the standard determinant functions given by \eqref{det}, we see that the only difference is that $(\eta-\epsilon)$ in \eqref{det_b} is replaced with $(\eta+n\epsilon)$ in \eqref{moddet_b}. A computational study using the parameters from Figure \ref{fig_sov1} shows that the roots of these families of equations do not accumulate at the constant value of $\epsilon$. \FloatBarrier \bibliographystyle{siamplain}
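For a quick numerical illustration of the determinant functions above, the following sketch evaluates $\tilde{d}_n^{(b)}$ for real $\eta>0$ and locates its real roots by a sign-change scan refined with Brent's method. The parameter values ($n=1$, $k=1$, $\epsilon=4$, $\gamma=1$) and the search interval are arbitrary choices for the sketch, not the parameters of Figure \ref{fig_sov1}:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def d_tilde_b(eta, n=1, k=1.0, eps=4.0, gamma=1.0):
    """Modified determinant function d~_n^(b) of eq. (moddet_b) for real eta > 0."""
    a = k * np.sqrt(eps)          # argument of j_n from the physical medium
    b = k * np.sqrt(gamma * eta)  # argument of j_n from the modified problem
    jn_a, jn_b = spherical_jn(n, a), spherical_jn(n, b)
    djn_a = spherical_jn(n, a, derivative=True)
    djn_b = spherical_jn(n, b, derivative=True)
    return ((eta + n * eps) * jn_a * jn_b
            + k * eta * np.sqrt(eps) * djn_a * jn_b
            - k * eps * np.sqrt(gamma * eta) * jn_a * djn_b)

def real_roots(f, lo=0.1, hi=100.0, num=2000):
    """Bracket sign changes of f on [lo, hi] and refine each with brentq."""
    grid = np.linspace(lo, hi, num)
    vals = np.array([f(x) for x in grid])
    roots = []
    for i in range(num - 1):
        if vals[i] * vals[i + 1] < 0:
            roots.append(brentq(f, grid[i], grid[i + 1]))
    return roots

etas = real_roots(d_tilde_b)  # real eigenvalue candidates for this (n, k, eps, gamma)
```

Complex-valued $\epsilon$ (or roots off the real axis) would require a contour-based root finder instead of this real bracketing scan.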
\section{Introduction} Miniaturization and advances in WiFi technologies have led to the emergence of Web-enabled devices. Interconnected devices are typically referred to as the \emph{Internet of Things (IoT)}. More formally, the IoT represents the network of \emph{things} that exchange data, essentially enabling a wide variety of applications such as smart homes and smart cities \cite{gubbi2013internet}. A potential application is a crowdsourcing platform whereby IoT devices \emph{crowdsource} services for the benefit of other IoT devices. Crowdsourced services are generally defined as services provided by the crowd to the crowd \cite{brabham2008crowdsourcing}. In \emph{IoT service crowdsourcing}, IoT devices provision services to other nearby IoT devices. We refer to such services as \emph{crowdsourced IoT services}. IoT devices can crowdsource a wide range of service types such as \emph{computing services} \cite{habak2015femto}, \emph{green energy services} \cite{lakhdari2018crowdsourcing}, and \emph{WiFi hotspot services} \cite{Neiat2017caas}. For example, computing services involve IoT devices offering their computational resources (service providers) to other devices. IoT devices with limited computational capabilities (service consumers) may request and consume such services to perform complex tasks that may not be easily achieved otherwise. Crowdsourcing IoT services poses several trust-related challenges. For instance, let us assume a crowdsourcing environment where \emph{computing services} are provided by IoT devices \cite{habak2015femto}. In such an environment, IoT providers offer their computing resources (CPU, memory) to perform processing tasks for other IoT devices. A potential IoT service consumer may have concerns regarding the service's trustworthiness. For example, an untrustworthy service might not protect the privacy of the consumers' data or might deliver unreliable performance. Similarly, an IoT service provider may have concerns regarding their consumers' trustworthiness. Malicious consumers may misuse IoT services by sending malicious software.
Such concerns can be alleviated by ascertaining the trustworthiness of both the service provider and the service consumer. IoT environments, however, exhibit certain characteristics that make assessing trust rather challenging. One crucial challenge is the \emph{dynamic nature} of IoT environments. IoT devices are inherently expected to come and go, and they may not remain available for long periods. For example, wearables like shirts and shoes have limited lifespans as people tend to replace them frequently. Additionally, IoT devices can have different owners at any given time. For instance, an IoT shirt can be worn by different people (e.g., brothers). These dynamic characteristics may lead to unreliable historical records that might result in an inaccurate trust assessment. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{images/motivation_scenario.pdf} \caption{Crowdsourcing WiFi hotspots using IoT devices.} \label{fig:motivation_scenario} \end{figure*} Previous trust management frameworks relied mainly on historical data to evaluate the trustworthiness of service providers \cite{saied2013trust,chen2011trm}. IoT environments are highly dynamic; i.e., new devices are deployed and removed every day \cite{kyriazis2013smart}. New devices do not generally have prior interactions with other IoT devices. Service consumers, as a result, may not be able to assess the trustworthiness of services provided by new devices. Therefore, the majority of existing trust frameworks cannot be applied to such environments due to the lack of historical records. We introduce the concept of \emph{just-in-time memoryless trust} to overcome these challenges. The trustworthiness of the provider is evaluated \emph{without} relying on historical data (memoryless). We propose to leverage the characteristics of an active IoT service session to compute an accurate trust value for the service.
As a result, the assessment would provide an accurate measure of a service's trustworthiness, given a specific service session (just-in-time). We propose a framework that assesses the trustworthiness of IoT services using service-session-related data. Our approach exploits existing IoT devices (bystanders) to assess the trustworthiness of IoT services in their vicinity. We achieve this by using a \emph{collaborative model} where users of the crowdsourcing platform are also expected to participate in assessing the trustworthiness of the services. In that respect, IoT devices that wish to consume IoT services are expected to contribute to the platform by evaluating IoT services for other consumers. It is worth noting that such a model is used in other popular applications, specifically in peer-to-peer applications, such as Skype and Windows 10's Delivery Optimization. For example, Skype users contribute to the platform by acting as relay stations to forward calling packets for other Skype users. The rest of the paper is organized as follows. Section \ref{section:problem_definition} presents preliminaries and the problem definition. Section \ref{section:framework} introduces our just-in-time memoryless trust management framework. Section \ref{section:experiments} discusses the experimental results. Section \ref{section:literature} surveys related work. Section \ref{section:conclusion} concludes the paper and highlights future work. \subsection*{Motivation Scenario} We use the following motivation scenario to illustrate the significance of our work (Fig. \ref{fig:motivation_scenario}). Assume a crowdsourcing environment where IoT devices are used to provide/consume services to/from other IoT devices. The provided services in this scenario are WiFi hotspots. In other words, an IoT service provider shares their device's Internet access using an application like WiFiMapper\footnote{https://www.wifimap.io}. Service providers $B$, $C$, and $D$ in Fig.
\ref{fig:motivation_scenario} use their smartphones or smartwatches to share their Internet with other nearby IoT devices. Service consumer $A$ wishes to consume one of the available nearby services. Service consumer $A$ is presented with the potential service providers $B$, $C$, and $D$. The available providers are unknown to consumer $A$ and, therefore, consumer $A$ has no knowledge about their trustworthiness. Additionally, service providers $B$, $C$, and $D$ do not have historical records that reveal their previous behaviors. Consumer $A$, therefore, fails to ascertain the level of trust of such providers. Service provision/consumption, as a result, does not occur due to the absence of trust. We, therefore, propose a framework that assesses the trustworthiness of the providers \emph{despite the lack of historical records}. It is worth noting that trust management is not limited to WiFi hotspot services. Other IoT services require trust assurance prior to service consumption. For example, assume an IoT crowdsourcing environment whereby IoT devices offer their computing resources (e.g., CPU and memory) to other less-capable IoT devices \cite{habak2015femto}. Suppose an IoT device owner $A$ wishes to use their smartphone to provision computing services to other nearby devices. Let us assume that provider $A$ has not provisioned any services before. Provider $A$, therefore, lacks historical data that can describe their performance. Assume a consumer $B$ who has some compute tasks to perform on their confidential data. Provider $A$ is a potential candidate to receive consumer $B$'s task. Consumer $B$, however, may not use the available computing service since provider $A$ is unknown to them. Additionally, provider $A$ does not have any previous historical records that would allow consumer $B$ to deduce a trust value for them.
\begin{figure*} \centering \includegraphics[width=0.80\textwidth]{images/framework.pdf} \caption{The proposed framework.} \label{fig:framework} \end{figure*} \section{Preliminaries and Problem Definition} \label{section:problem_definition} We model a \emph{crowdsourced IoT service} $S$ provided and consumed by an \emph{IoT device} $d$ as a tuple $<id, l, t, d, o, f, q>$ where: \begin{itemize} \item $id$ is a unique service ID. \item $l$ is the GPS location where the service $S$ is provided. \item $t$ is the time interval at which the service $S$ is provided. It is represented as a tuple $<t_s, t_e>$, where \begin{itemize} \item $t_s$ is the start-time of $S$, \item $t_e$ is the end-time of $S$. \end{itemize} \item $d$ is the IoT device that offers the service $S$ (e.g., a smartphone). \item $o$ is the owner of the IoT device. An owner can be an individual or any corporation or business (e.g., a university or a restaurant). In this paper, we assume that the service provider is the owner of the IoT device (i.e., $sp = o$). \item $f$ is the set of functions offered by $S$ (e.g., providing a WiFi hotspot). \item $q$ is the set of all non-functional parameters of $S$ (e.g., signal strength). \end{itemize} \subsection{Problem Definition} As stated earlier, IoT environments are highly dynamic. As a result, the majority of IoT services lack the historical records that are generally used to ascertain their trustworthiness. It is, therefore, crucial to consider other aspects of IoT services to establish their trustworthiness $\mathcal{T}$. The purpose of our work is to identify a function $\mathcal{F}$ that utilizes the properties of a service's environment $\mathcal{E}_S$ to assess its trust.
In other words: \begin{equation} \mathcal{T} \approx \mathcal{F}(\mathcal{E}_S) \end{equation} \section{Just-in-Time Memoryless Trust Management Framework} \label{section:framework} We introduce a just-in-time memoryless trust management framework for assessing a service's trustworthiness without relying on historical records (see Fig. \ref{fig:framework}). The framework exploits the characteristics of the service session and its surrounding environment to infer the service's trustworthiness. The framework is divided into three phases: (1) IoT service initiation, (2) IoT service monitoring, and (3) trustworthiness assessment. During the IoT service initiation, an IoT service provider decides to offer their services to nearby devices. Nearby IoT devices monitor the performance of the service during the IoT service monitoring phase. A new consumer relies on existing IoT devices to build a just-in-time memoryless trust value for the potential service provider. \subsection{IoT Service Session Initiation} A service provider $S_p$ starts service provisioning by initiating a \emph{service session} $S_{sess}$. A service session is confined to a geographical area and a time duration, e.g., a coffee place between 5:00pm and 7:00pm. Additionally, a service session includes the type of the service being provided, e.g., computing service, WiFi hotspot, etc. The service provider also broadcasts a \emph{promise vector} $\vv{P_S}$ to potential nearby IoT consumers. The promise vector describes the expected behavior of the IoT service. For example, in WiFi hotspot crowdsourcing, a service's trust can be evaluated based on the service's speed, security, and availability. A possible promise vector is $[10mbps, Medium, 90\%]$. The vector indicates that the service's expected speed is 10mbps, its security is medium, and its availability is 90\%. \subsection{Bystanders-Based Trust Assessment} A service session $S_{sess}$ is limited in space and time.
IoT consumers cannot consume the service if they are not within the service session's area and period. For example, IoT consumers at a restaurant cannot consume services from a provider at a coffee shop. Similarly, an IoT service session running between 7:00pm and 8:00pm cannot be consumed between 9:00pm and 10:00pm. Many IoT devices other than IoT consumers may exist within a service's area and period. We refer to such devices as \emph{bystanders}. More formally, we define \emph{IoT service bystanders} as the non-consumer devices that exist within the vicinity of a service session $S_{sess}$ in terms of area and time. We leverage IoT service bystanders to assess the trustworthiness of a service $S$ during its session $S_{sess}$. A set of bystanders $B$ receives the promise vector $\vv{P_S}$ from the service provider $S_p$. Additionally, the bystanders assess the service's performance by invoking dummy tasks periodically. For instance, a bystander for a WiFi hotspot service can download a file using a particular service. Each bystander $b \in B$ generates an \emph{observation vector} $\vv{O_S^b}$ based on their task invocation. The observation vector contains information regarding the actual performance of the service at a specific time. For example, an observation vector $\vv{O_S} = [9mbps, Medium, 80\%]$ for a WiFi hotspot service indicates that the service has a speed of 9mbps, a medium security level, and 80\% availability. An \emph{instantaneous trust} $T_{inst}^b$ is computed by each bystander $b \in B$ using their observation vector $\vv{O_S^b}$ and the IoT service provider's promise vector $\vv{P_S}$. The instantaneous trust reflects the trustworthiness of a particular provider at a time instant from a bystander's perspective.
A bystander $b \in B$ evaluates their instantaneous trust $T_{inst}^b$ using the following equation: \begin{equation} T_{inst}^b = \frac{1}{|\vv{P_S}|}\sum\limits_{i=0}^{|\vv{P_S}|-1} \min\left(1, \frac{\vv{O_S^b}(i)}{\vv{P_S}(i)}\right) \label{eq:t_inst} \end{equation} where $|\vv{P_S}|$ is the number of elements in the vector $\vv{P_S}$, and $\vv{O_S^b}(i)$ and $\vv{P_S}(i)$ are the $i^{th}$ elements in the vectors $\vv{O_S^b}$ and $\vv{P_S}$, respectively. The value of $T_{inst}^b$ ranges between zero and one, one being highly trusted. For example, assume a WiFi hotspot service provider with a promise vector $\vv{P_S} = [10mbps, Medium, 90\%]$, which represents a service with a speed of 10mbps, Medium security, and 90\% availability. In this example, the security level can either be Low, Medium, or High for simplicity. A representative numerical value can then be given to the security level when used in equation \ref{eq:t_inst} (0 = Low, 1 = Medium, and 2 = High). Additionally, assume a bystander that has an observation vector $\vv{O_S} = [9mbps, High, 80\%]$. The instantaneous trust $T_{inst}$ equals 0.93 after applying equation \ref{eq:t_inst}. \subsection{Consumer-Based Trust Assessment} An IoT service provider can offer and provision its services to more than one consumer. We exploit the set of existing consumers $C$ to assess their IoT provider's trust. Similar to IoT service bystanders, IoT consumers receive the provider's promise vector $\vv{P_S}$. Consumers monitor their providers while using the service. Unlike bystanders, consumers can offer a more accurate observation of the IoT service. Consumers perform their monitoring over \emph{periods} rather than single time instances. They, therefore, can capture the behavior of the IoT service over time more accurately and form a better representation for the service's trustworthiness. Each consumer $c \in C$ generates an observation vector $\vv{O_S^c}$. 
The observation vector is generated based on the consumer's experience while using the IoT service. The elements of the vector are similar to that of the bystanders' observation vectors; i.e., each represents one aspect of the service's performance. Each consumer $c \in C$ computes the \emph{accumulated trust} $T_{acc}^c$. We define the accumulated trust $T_{acc}^c$ as the trustworthiness of an IoT service from the perspective of the consumer $c \in C$ over their consumption period. Consumers keep updating their accumulated trust as they use the service. The updated accumulated trust utilizes earlier accumulated trust values to increase the accuracy of the service's trust over time. The consumer $c \in C$ evaluates the accumulated trust $T_{acc}^c$ using the following equation: \begin{equation} T_{acc}^c(t+1) = \alpha T_{acc}^c(t) + (1 - \alpha) T_{inst}^c \label{eq:t_acc} \end{equation} where $T_{acc}^c(t+1)$ is the updated accumulated trust, $T_{acc}^c(t)$ is the current accumulated trust, and $\alpha$ is a weighting factor from zero to one. The $T_{inst}^c$ is the instantaneous trust from consumer $c \in C$'s perspective. The value of $T_{inst}^c$ is computed using equation \ref{eq:t_inst}. 
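To make equations \ref{eq:t_inst} and \ref{eq:t_acc} concrete, the following sketch computes the instantaneous trust for the worked WiFi-hotspot example above (with the security levels encoded numerically as in the text) and then performs one accumulated-trust update; the prior accumulated trust of 0.8 and the weight $\alpha = 0.5$ are arbitrary values chosen for illustration:

```python
# Instantaneous trust (equation for T_inst): average of capped observed/promised ratios.
def instantaneous_trust(observed, promised):
    return sum(min(1.0, o / p) for o, p in zip(observed, promised)) / len(promised)

# Accumulated trust update (equation for T_acc): exponential smoothing of the history.
def accumulated_trust(t_acc_prev, t_inst, alpha):
    return alpha * t_acc_prev + (1.0 - alpha) * t_inst

# Worked example from the text: promise [10mbps, Medium, 90%] with the paper's
# encoding 0 = Low, 1 = Medium, 2 = High, and observation [9mbps, High, 80%].
promise = [10.0, 1.0, 0.90]
observation = [9.0, 2.0, 0.80]
t_inst = instantaneous_trust(observation, promise)  # ~0.93, as in the text

# Hypothetical consumer history: previous accumulated trust 0.8, alpha = 0.5.
t_acc = accumulated_trust(0.8, t_inst, alpha=0.5)
```

Note that the High/Medium ratio exceeds one and is capped by the $\min$ in the formula, which is why the example evaluates to roughly 0.93.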
\begin{figure*} \centering \begin{minipage}{.47\textwidth} \centering \includegraphics[width=0.90\linewidth]{images/results/instantaneous_and_accumulated_trust.pdf} \caption{Accuracy in detecting the instantaneous and accumulated trust by existing consumers and bystanders.} \label{fig:instantaneous_accumulated_trust} \end{minipage}% \hspace{0.5cm} \begin{minipage}{.47\textwidth} \centering \includegraphics[width=0.90\linewidth]{images/results/variable_bystanders_number.pdf} \caption{The effect of bystanders/consumers number on the accuracy of the overall trust.} \label{fig:bystanders_consumers_effect} \end{minipage} \end{figure*} \subsection{Overall IoT Service Trust Assessment} A new consumer relies on IoT service bystanders and existing consumers to evaluate an IoT service's trustworthiness. The accumulated and instantaneous trust values are sent to the new consumer. The new consumer aggregates the trust values to assess the overall trustworthiness of the service using the following equation: \begin{equation} \mathcal{T}_S = \frac{1}{|C| + |B|} \left(\sum\limits_{c \in C} T_{acc}^c + \sum\limits_{b \in B} T_{inst}^b\right) \label{eq:trust_basic} \end{equation} where $|C|$ is the number of existing consumers, and $|B|$ is the number of bystanders. Equation \ref{eq:trust_basic} assesses the trust by weighting instantaneous and accumulated trust values equally. The time at which the instantaneous trust is evaluated should be considered. A \emph{fresher} trust value reflects the service's trustworthiness more accurately. Additionally, the duration that is \emph{covered} by the accumulated trust is crucial. Higher weights should be given to accumulated trust values that cover longer periods. 
We, therefore, formulate the \emph{freshness} $F_{T_{inst}}$ of the instantaneous trust as follows: \begin{equation} F_{T_{inst}^b} = \frac{ts_b}{TS_B} \label{eq:freshness} \end{equation} where $ts_b$ is the timestamp at which the instantaneous trust $T_{inst}^b$ is evaluated, and $TS_B$ is the summation of all timestamps from all bystanders in $B$. The freshness takes a value from zero to one, one indicating a newer trust value. For example, assume three bystanders $A$, $B$, and $C$, which computed the instantaneous trust at different time instances. The three bystanders recorded the timestamp of their observation vectors as an offset from the beginning of the service session. Assume the timestamps are as follows: 2, 8, and 20 minutes for bystanders $A$, $B$, and $C$, respectively. The freshness scores are then 0.07, 0.27, and 0.67 for bystanders $A$, $B$, and $C$, respectively, according to equation \ref{eq:freshness}. Bystander $C$ has the highest freshness score since their computed trust is the newest among the three. Similarly, we evaluate the \emph{coverage} $G_{T_{acc}}$ of the accumulated trust as follows: \begin{equation} G_{T_{acc}^c} = \frac{d_c}{D_C} \label{eq:coverage} \end{equation} where $d_c$ is the duration that is covered by the accumulated trust, and $D_C$ is the summation of all durations covered by all the consumers in $C$. The coverage takes a value between zero and one, where larger values indicate that the accumulated trust covered longer periods. For instance, assume a service that is being consumed by three consumers $A$, $B$, and $C$. Each consumer used the service for a different period. Suppose that consumers $A$, $B$, and $C$ used the service for 45, 20, and 5 minutes, respectively. The coverage is then 0.64, 0.29, and 0.07 for consumers $A$, $B$, and $C$, respectively. Consumer $A$ is awarded the highest coverage since they used the service the longest.
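Both weights in equations \ref{eq:freshness} and \ref{eq:coverage} are shares of a total, so a single helper reproduces the two examples above:

```python
def normalized_weights(values):
    """Freshness (eq. freshness) and coverage (eq. coverage) both divide each
    bystander's timestamp (or consumer's duration) by the sum over the group."""
    total = sum(values)
    return [v / total for v in values]

# Bystanders A, B, C observed at minutes 2, 8, and 20 of the session.
freshness = normalized_weights([2, 8, 20])

# Consumers A, B, C used the service for 45, 20, and 5 minutes.
coverage = normalized_weights([45, 20, 5])
```

Rounded to two decimals, the freshness scores come out as 0.07, 0.27, and 0.67 and the coverage scores as 0.64, 0.29, and 0.07, matching the text.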
We use the freshness and coverage measures in equation \ref{eq:trust_basic} as follows: \begin{equation} \mathcal{T}_S = \beta \sum\limits_{c \in C} G_{T_{acc}^c} T_{acc}^c + (1 - \beta) \sum\limits_{b \in B} F_{T_{inst}^b} T_{inst}^b \label{eq:trust_freshness_coverage} \end{equation} where $\beta$ is a user-defined weighting factor between zero and one. In some scenarios, a bystander/consumer may report inaccurate or wrong trust assessments to the new consumer (e.g., a biased or malicious bystander). We introduce the \emph{credibility} measure to overcome this issue. The basic intuition is that biased/malicious bystanders/consumers are a minority. In other words, the majority of consumers/bystanders behave as expected, while only a subset may produce invalid assessments. We, therefore, compare the trust assessments from different bystanders and consumers. The credibility of each bystander/consumer can then be inferred from such a comparison. For example, assume, for simplicity, four bystanders with four instantaneous trust values: 0.9, 0.85, 1.0, and 0.1. The first three trust values are close to each other, while the last is significantly different. It can be deduced that the fourth bystander may have produced an invalid trust assessment since the majority produced different values. We, therefore, evaluate the credibility of a bystander/consumer $\mathcal{C}_{b|c}$ as follows: \begin{equation} \mathcal{C}_{b|c} = 1 - |T - T_{avg}| \label{eq:credibility} \end{equation} where $T$ is the instantaneous or the accumulated trust and $T_{avg}$ is the average of all trust values from bystanders and consumers. The credibility takes a value between zero and one, one indicating high credibility. Using our earlier example, the credibilities for the four bystanders computed with equation \ref{eq:credibility} are 0.8125, 0.8625, 0.7125, and 0.3875. The first three bystanders received relatively high scores since they provided similar assessments.
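A minimal sketch of the credibility computation in equation \ref{eq:credibility}, reproducing the four-bystander example:

```python
def credibilities(trust_values):
    """Credibility (eq. credibility): one minus the distance to the mean
    assessment, so outlying assessments are down-weighted."""
    avg = sum(trust_values) / len(trust_values)
    return [1.0 - abs(t - avg) for t in trust_values]

# Four bystanders from the text; the mean assessment is 0.7125.
creds = credibilities([0.9, 0.85, 1.0, 0.1])
# Matches the values in the text (up to rounding): 0.8125, 0.8625, 0.7125, 0.3875.
```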
The fourth bystander is given the lowest score since its trust assessment deviates from the majority. We use the credibility of the bystanders/consumers by applying equation \ref{eq:credibility} to equation \ref{eq:trust_freshness_coverage} as follows: \begin{equation} \mathcal{T}_S = \beta \sum\limits_{c \in C} \mathcal{C}_c G_{T_{acc}^c} T_{acc}^c + (1 - \beta) \sum\limits_{b \in B} \mathcal{C}_b F_{T_{inst}^b} T_{inst}^b \label{eq:trust} \end{equation} \begin{figure*} \centering \begin{minipage}{.47\textwidth} \centering \includegraphics[width=0.90\linewidth]{images/results/accuracy.pdf} \caption{Accuracy, precision, and recall for the proposed framework.} \label{fig:experiment_accuracy} \end{minipage}% \hspace{0.5cm} \begin{minipage}{.47\textwidth} \centering \includegraphics[width=0.90\linewidth]{images/results/credibility_vs_no_credibility.pdf} \caption{The effect of credibility on the trust accuracy.} \label{fig:credibility_effect} \end{minipage} \end{figure*} \section{Experiments} \label{section:experiments} We evaluate the effectiveness of the framework in terms of accuracy. More specifically, we examine the accuracy of the just-in-time memoryless trust for IoT services. Additionally, we evaluate the accuracy of the instantaneous and accumulated trust discussed in Section \ref{section:framework}. We run the experiments on a 3.60GHz Intel(R) Core(TM) i7-7700 with 8 GB of RAM. \subsection{Dataset Description} Amazon Mechanical Turk\footnote{https://www.mturk.com/} (MTurk) is used to collect the dataset for our experiments. We divided the MTurk workers into three categories: bystanders, existing consumers, and new consumers. Workers in all categories are asked to assume that they are at a public place, e.g., a restaurant, where other nearby users are sharing their Internet via WiFi hotspots using smartphones and smartwatches. In such an environment, potential consumers can use an application on their smartphones/smartwatches to get a list of nearby WiFi hotspots.
Each provider in the application sets its expected performance, i.e., its promise vector. Each worker is presented with a specific WiFi hotspot, along with its promise vector. Workers in the bystanders' category are asked to assume that they are given some monetary rewards to monitor the performance of service providers. Monitoring is carried out by using the service at specific time instances and recording its actual performance, i.e., the observation vector. The performance of the service at different timestamps is given to the workers. The workers are asked to rate the service based on the given performance. The existing consumers category consists of workers that are treated as current consumers of the presented service. Similar to the bystanders category, workers in this category are presented with a list of performance records of the service, i.e., observation vectors. Each performance record represents the performance of the service during a period rather than at a single timestamp. The workers are asked to rate the service based on the given performance. Workers in the new consumers' category are those who wish to compute the just-in-time memoryless trust. Each worker is presented with two lists of ratings. The first list represents the service's ratings from consumers who use it. The second list contains ratings from bystanders who monitor the service. The workers are asked to consider the ratings from both lists and give an expected rating for the service. All workers are asked to rate the services on a scale from zero to ten, ten being the highest. The data has been collected using the results from a total of 5000 workers for each category. \subsection{Experimental Results} We use \emph{precision}, \emph{recall}, and \emph{accuracy} \cite{Olson2008ADM1795943} to determine the efficiency of our work. We assume that trust can be at one of three levels: \emph{highly trusted}, \emph{moderately trusted}, and \emph{lowly trusted}.
Given a trust level $l$, e.g., moderately trusted, the \emph{precision} for $l$ is defined as the ratio between the number of samples correctly detected as $l$ and the total number of samples detected as $l$: \begin{equation} \label{eq:precision} Precision_l =\frac{|correct_l|}{|detected_l|} \end{equation} The \emph{recall} for $l$ is the ratio between the samples correctly detected as $l$ and the total number of actual $l$ samples: \begin{equation} \label{eq:recall} Recall_l =\frac{|correct_l|}{|actual_l|} \end{equation} The \emph{accuracy} in detecting $l$ is the ratio between the number of samples correctly detected as $l$ plus the number of samples correctly detected as not $l$, and the total number of samples: \begin{equation} \label{eq:accuracy} Accuracy_l =\frac{|correct_l|+|correct\_not_l|}{|samples|} \end{equation} The first experiment set evaluates the overall accuracy of the whole framework, see Fig. \ref{fig:experiment_accuracy}. The framework achieves high accuracy, precision, and recall scores: 88.40\%, 82.78\%, and 82.54\%, respectively. The second experiment evaluates the influence of the credibility measure on the accuracy scores. Fig. \ref{fig:credibility_effect} shows the results for this experiment. The framework scores higher in terms of accuracy (around 88.40\%) when the credibility measure is used. The accuracy of the framework drops significantly when the credibility of the bystanders/consumers is not considered (around 59.03\%). Recall that the credibility measures the truthfulness of the trust values provided by the bystanders/consumers. The measure is used to weight the significance of the trust reported by the bystanders/consumers. The absence of the credibility measure results in treating all reported trust values equally. As a result, biased/malicious bystanders and consumers can greatly disrupt the accuracy of the assessed trust, as evident in Fig. \ref{fig:credibility_effect}.
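The per-level metrics of equations \ref{eq:precision}--\ref{eq:accuracy} can be sketched from a confusion matrix over the three trust levels; the counts below are purely hypothetical and chosen only to illustrate the definitions:

```python
def metrics_for_level(confusion, l):
    """confusion[i][j] = number of samples with actual level i detected as level j.
    Returns (precision, recall, accuracy) for level l per the equations above."""
    total = sum(sum(row) for row in confusion)
    correct = confusion[l][l]                       # |correct_l|
    detected = sum(row[l] for row in confusion)     # |detected_l|
    actual = sum(confusion[l])                      # |actual_l|
    # |correct_not_l|: actual level is not l and detected level is not l.
    correct_not = sum(confusion[i][j]
                      for i in range(len(confusion))
                      for j in range(len(confusion))
                      if i != l and j != l)
    return correct / detected, correct / actual, (correct + correct_not) / total

# Rows/columns: highly, moderately, lowly trusted (hypothetical counts, 30 samples).
cm = [[8, 1, 1],
      [2, 7, 1],
      [0, 1, 9]]
prec, rec, acc = metrics_for_level(cm, 0)
```

For the "highly trusted" level in this made-up matrix, both precision and recall are 8/10 and the accuracy is 26/30.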
The overall trust value of the service is computed based on the instantaneous and accumulated trust. We evaluate the accuracy in assessing such trust values in the next experiment (see Fig. \ref{fig:instantaneous_accumulated_trust}). The accuracy in computing the instantaneous trust is around 85.04\%, whereas a higher accuracy, around 90.31\%, is achieved when evaluating the accumulated trust. The accumulated trust has higher accuracy due to the way it is computed. The instantaneous trust is computed from a single observation vector at a specific timestamp. Computing the accumulated trust, in contrast, utilizes previously computed instantaneous trust values to obtain a more accurate assessment of the service. The last experiment studies the effect of the number of bystanders/consumers on the overall trust of the service. Fig. \ref{fig:bystanders_consumers_effect} shows the results for this experiment. The accuracy of the framework is significantly lower when a small number of bystanders and consumers is considered. For example, the accuracy is around 20\% when only one bystander or consumer is used. The accuracy increases rapidly as the number of bystanders and consumers increases. High accuracy scores (about 82\%) are reached when six bystanders/consumers are considered. A lower number of bystanders/consumers might not represent the actual service trustworthiness accurately. For example, biased/malicious bystanders/consumers have a greater influence when the total number of bystanders/consumers is low. As a result, the computed trust value does not accurately represent the service's actual trustworthiness. \section{Related Work} \label{section:literature} Trust assessment in crowdsourced IoT service environments is fairly new. Most of the proposed approaches can be grouped into two main categories: \emph{previous experiences-based trust evaluation} and \emph{social networks-based trust evaluation}.
\subsection{Measuring Trust Using Previous Experiences} A centralized trust management system (TMS) is proposed to evaluate the trustworthiness of an IoT device based on its past behavior \cite{saied2013trust}. The TMS has four phases: information gathering, entity selection, transaction and evaluation, and learning. The information gathering phase involves collecting data about the services executed by the IoT device. In the entity selection phase, the TMS returns the most trustworthy IoT device for the requested services. The task is performed by the IoT service provider in the transaction and evaluation phase. The requester gives a score to the provider based on the outcome of the executed service. In the learning phase, the TMS learns the credibility of the requesters to weight their scores. While the TMS performed well in the conducted experiments, the centralized design may act as a bottleneck. A trust model is proposed for evaluating the trustworthiness and reputation of nodes in Wireless Sensor Networks (WSNs) \cite{chen2011trm}. The node's reputation is computed based on its performance characteristics: packet delivery, forwarding ratio, and energy consumption. The reputation is later used to evaluate the trustworthiness of the node. The proposed model relies on WSN-specific characteristics (e.g., packet delivery), which makes it unsuitable for other applications. A social IoT network is used to measure the trust between two IoT devices in \cite{nitti2012subjective}. A social IoT network is a type of social network whose nodes are IoT devices. Relationships between IoT devices indicate one or more of the following relations: similar owner, co-location, co-work, social relation, or brand. Each node computes the trust of its friends. The trust is measured based on the individual's and friends' previous experiences.
A trust model is also proposed in \cite{wang2016toward} that uses social IoT to manage the interactions between IoT service consumers and providers. The social IoT network is used to search for candidate service providers. Reputation-based trustworthiness is used to evaluate the service providers using previous interactions. A trust management protocol is proposed in \cite{bao2012dynamic,bao2012trust} to assess the trustworthiness of IoT devices based on honesty, cooperativeness, and community-interest. Honesty is measured using the direct observation of an IoT device (high recommendation discrepancy, delays, etc.). Cooperativeness and community-interest are computed using data from social networks. Common friends between two IoT owners indicate high cooperativeness between their IoT devices. Community-interest depends on the number of common communities between two IoT owners. A framework is proposed for crowdsourcing services to IoT devices based on their mobility and trustworthiness \cite{kantarci2014mobility}. A central authority manages the interactions between the consumer and provider. The trustworthiness of a service provider is computed based on their reputation. When the central authority receives a task request from the consumer, the task is submitted to multiple service providers. The server then detects anomalies among the results returned by the service providers. Service providers with deviating results are marked and their reputation is decreased. The aforementioned approaches use historical data (i.e., previous experiences) to assess trust. However, one key characteristic of crowdsourced IoT service environments is their high dynamism in terms of IoT device deployment. Every day, a large number of IoT services are added. Newly added services do not have previous records. If those previous experiences are missing, the evaluated trust cannot be accurate.
Therefore, these approaches cannot be used to accurately measure the trust of IoT services. \subsection{Measuring Trust Using Social Networks} A social compute cloud framework is proposed in \cite{Caton2014} where users in a social network can share and consume services from other users. The framework leverages the social structure of the network. Relation types between users (e.g., family, colleagues, etc.) are also utilized to determine the level of trust between them. A framework is proposed in \cite{Cao2015} that aims at eliminating the privacy risks accompanying public WiFi hotspots. Social WiFi utilizes social network relationships to match hotspot users to trusted hotspot providers. The proposed framework lacks generality as it can only be used for WiFi hotspot services. An approach for evaluating the trust between users in social networks is presented in \cite{adali2010measuring}. Behavioral interactions are used to indicate the level of trust (i.e., conversations between users and message propagation). A conversation between two users can indicate a higher level of trust if: (1) it happens many times, (2) it lasts for a long duration, and (3) there is a balanced contribution of messages from both users. The message propagation indicates the willingness of a user $B$ to forward a message received from another user $A$. A large number of forwarded messages reflects a higher trust value for the sender. The above work focuses on social network relationships, which is not sufficient to evaluate the trust between the IoT service provider and consumer. For example, a friendship between two users on a social network does not necessarily imply mutual trust between them \cite{sherchan2013survey}. \section{Conclusion} \label{section:conclusion} We presented just-in-time memoryless trust, a trust evaluation approach specifically suited for IoT services. The just-in-time memoryless trust accounts for the high dynamism exhibited in IoT environments.
Such a dynamic nature causes the lack of \emph{historical records}, a crucial cornerstone for computing trustworthiness in the majority of existing trust management frameworks. A novel framework was proposed to measure the trustworthiness of IoT services \emph{without} relying on historical data. The framework exploits session-related data to assess the trustworthiness of IoT services. More specifically, we leverage the experience of IoT bystanders and consumers to build an accurate trust value for a given IoT service. The proposed framework achieved high accuracy scores in our experiments. Future directions include investigating the trustworthiness of the IoT service consumer. \bibliographystyle{IEEEtran}
\subsection{Model-Free Policy Search} In model-free policy search, sampled trajectories are used directly to update the policy parameters. The discussion follows the three main steps of these algorithms: (i) how they {\em explore} the space of policies, (ii) how they {\em evaluate} policies, and (iii) how policies are {\em updated}. \subsubsection{Policy Exploration} Exploring the space of policies implies either sampling the parameter vector the policy depends on, or perturbing the action choice of the policy. Often, the sampling of parameters takes place at the beginning of each episode (in episodic scenarios), and action perturbations are different at each time step, but other options are possible. Stochastic policies can be seen as naturally performing a step-based exploration in action space. Otherwise, the exploration strategy can be modeled as an {\em upper-level policy} $\pi_{\omega}(\theta)$---sampling $\theta$ according to a probability distribution governed by parameter vector $\omega$---while the actual policy $\pi_{\theta}(a|s)$ is referred to as a {\em lower-level policy}. In this setting, the policy search aims at finding the parameter vector $\omega$ that maximizes the expected return given this vector. If $\pi_{\omega}(\theta)$ is a Gaussian distribution (common in robotics), then its covariance matrix can be diagonal---typically in step-based exploration---or full---which leads to more stability, but requires more samples---meaning that the various parameters in $\theta$ can be treated in a correlated manner or not. \subsubsection{Policy Evaluation} Policy evaluation can also be step-based or episode-based. Step-based approaches evaluate each state-action pair. They have low variance and allow crediting several parameter vectors. They can rely on $Q$-value estimates, which can be biased and prone to approximation errors, or Monte-Carlo estimates, which can suffer from high variance. Episode-based approaches evaluate parameters using complete trajectories. They allow more performance criteria than step-based approaches---\textit{e.g.}, minimizing the final distance to the target.
They also allow for more sophisticated exploration strategies, but suffer from noisier estimates and higher variance as the dynamics become more stochastic. \subsubsection{Policy Update} Finally, the policy can be updated in rather different manners. We will discuss approaches relying on gradient ascent, inference-based optimization, information-theoretic ideas, stochastic optimization and path-integral optimal control. {\bf Policy Gradient} (PG) algorithms first require estimating the gradient. Some (episode-based) PG algorithms perform this estimate using a finite difference (FD) method by perturbing the parameter vector. Other algorithms instead exploit the {\em likelihood ratio} trick, which allows estimating the gradient from a single trajectory, but requires a stochastic policy. These can be step-based, as REINFORCE \cite{Williams-ml92} or G(PO)MDP \cite{BaxBar-jmlr01,BaxBarWea-jmlr01}, or episode-based, as PEPG \cite{SehEtAl-nn10}. Policy gradients also include natural gradient algorithms (NPG), i.e., algorithms that try to limit the distance between distributions $P_{\theta}(h)$ and $P_{\theta+\delta\theta}(h)$ using the KL divergence (estimated through the Fisher information matrix (FIM)). In step-based NPGs \cite{BagSch-ijcai03,PetSch-nn08}, using appropriate (``{\em compatible}'') function approximation removes the need to estimate the FIM, but requires estimating the value function, which can be difficult. On the contrary, episodic Natural Actor-Critic (eNAC) \cite{PetSch-nc08} uses complete episodes, and thus only estimates $v(s_1)$. NAC \cite{PetSch-nn08} addresses infinite horizon problems, the lack of episodes leading to the use of temporal difference methods to estimate values. Policy gradient usually applies to randomized policies.
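As an illustration of the likelihood-ratio trick, the following sketch estimates the REINFORCE gradient for a tabular softmax policy. The array layout (an $|\mathbf{S}| \times |\mathbf{A}|$ preference matrix) is an assumption made for the example, not a prescription of the original algorithm.

```python
import numpy as np

def softmax_probs(theta, s):
    """Action probabilities of a tabular softmax policy; theta is |S| x |A|."""
    prefs = theta[s] - theta[s].max()  # shift for numerical stability
    e = np.exp(prefs)
    return e / e.sum()

def reinforce_gradient(episodes, theta, gamma=0.99):
    """Likelihood-ratio (REINFORCE) gradient estimate from sampled episodes.

    episodes: list of trajectories, each a list of (state, action, reward).
    For a softmax policy, grad log pi(a|s) is the indicator of the taken
    action minus the action probabilities; each step's contribution is
    weighted by the episode return."""
    grad = np.zeros_like(theta)
    for episode in episodes:
        ret = sum(gamma**t * r for t, (_, _, r) in enumerate(episode))
        for s, a, _ in episode:
            p = softmax_probs(theta, s)
            grad[s] -= ret * p
            grad[s, a] += ret
    return grad / len(episodes)
```

A gradient ascent step $\theta \leftarrow \theta + \alpha \, \widehat{\nabla}$ with this estimate increases the probability of actions taken in high-return episodes.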
Recent work \cite{SilverLeverHeessDegrisWierstraRiedmiller14,LillicrapHuntPritzelHeessErezTassaSilverWierstra16} has adapted it to deterministic policies with a continuous action space, which can potentially facilitate the gradient estimation. Building on DQN, actor-critic methods have been extended to asynchronous updates with parallel actors and neural networks as approximators \cite{MnihBadiaMirzaGravesLillicrapHarleySilverKavukcuoglu16}. {\bf Inference-based algorithms} avoid the need to set learning rates. They consider that (i) the return $R$ is an observed binary variable ($1$ meaning success),\footnote{Transformations can bring us into this setting.} (ii) the trajectory $h$ is a latent variable, and (iii) one looks for the parameter vector that maximizes the probability of getting a return of $1$. Then, an Expectation-Maximization algorithm can address this Bayesian inference problem. Variational inference can be used in the E-step of EM \cite{Neumann-icml11}, but most approaches rely on Monte-Carlo estimates instead, despite the fact that they perform maximum likelihood estimates over several modes of the reward function (and thus do not distinguish them). These can be episode-based algorithms, such as RWR \cite{PetSch-esann07} (which uses a linear upper-level policy) or CrKR \cite{KobOztPet-rss10} (a kernelized version of RWR, which does not need to specify feature vectors, but cannot model correlations). These can also be step-based algorithms, such as PoWER \cite{KobPet-ml10}, which allows a more structured exploration strategy, and gives more influence to data points with less variance. {\bf Information-theoretic} approaches (see Chapter 2 of Volume 3) try to limit changes in trajectory distributions between two consecutive time steps, which could correspond to degradations rather than improvements of the policy. Natural PGs have the same objective, but need a user-defined learning rate.
Instead, REPS \cite{PetMueAlt-aaai10} combines advantages from NPG (smooth learning) and EM-based algorithms (no learning rate). Episode-based REPS \cite{DanNeuPet-aistats12} learns a higher-level policy while bounding parameter changes by solving a constrained optimization problem. Variants are able to adapt to multiple contexts or learn multiple solutions. Step-based REPS \cite{PetMueAlt-aaai10} solves an infinite horizon problem (rather than an episodic one), optimizing the average reward per time step. It requires enforcing the stationarity of state features, and thus solving another constrained optimization problem. A related recent method, TRPO \cite{SchulmanLevineAbbeelJordanMoritz15}, which notably constrains the changes of $\pi(\cdot \,|\, s)$ instead of those of state-action distributions, proves to work well in practice. {\bf Stochastic Optimization} relies on black-box optimizers, and thus can easily be used for episode-based formulations, i.e., working with an upper-level policy $\pi_{\omega}(\theta)$. Typical examples are CEM \cite{BoeKroManRub-aor05,SziLor-nc05}, CMA-ES \cite{HanMulKou-ec03,HeiIge-joa09}, and NES \cite{WierstraSchaulGlasmachersSunPetersSchmidhuber14}, three evolutionary algorithms that maintain a parametric probability distribution (often Gaussian) $\pi_{\omega}(\theta)$ over the parameter vector. They sample a population of candidates, evaluate them, and use the best ones (weighted) to update the distribution. Many rollouts may be required for evaluation, as exemplified with the game of Tetris \cite{SziLor-nc05}. {\bf Path Integral} (PI) approaches were introduced for optimal control, i.e., to handle non-linear continuous-time systems. They handle squared control costs and arbitrary state-dependent rewards.
{\em Policy Improvement with PIs} (PI$^2$) applies PI theory to optimize Dynamic Movement Primitives (DMPs), i.e., representations of movements with parameterized differential equations, using Monte-Carlo rollouts instead of dynamic programming. \subsection{Model-Based Policy Search} Typical model-based policy-search approaches repeatedly (i) sample real-world trajectories using a fixed policy; (ii) learn a forward model of the dynamics based on these samples (and previous ones); and (iii) optimize this policy using the learned model (generally as a simulator). As can be noted, this process does not explicitly handle the exploration-exploitation trade-off, as policies are not chosen so as to improve the model where this could be appropriate. We now discuss three important dimensions of these approaches: how to learn the model, how to make reliable long-term predictions, and how to perform the policy updates. Model learning often uses probabilistic models. They first allow accounting for uncertainty due to sparse data (at least in some areas) or an inappropriate model class. In robotics, where action and state spaces are continuous, non-parametric probabilistic methods can be used, such as Linearly Weighted Bayesian Regression (LWBR) or Gaussian Processes (GPs), which may suffer from increasing time and memory requirements. But probabilistic models can also be employed to represent stochastic dynamics. An example is that of propositional problems, which are often modeled as Factored MDPs \cite{BouDeaGol-ijcai95}, where the dynamics and rewards are DBNs whose structure is {\em a priori} unknown. A variety of approaches have been proposed, which rely on different representations (such as rule sets, decision trees, Stochastic STRIPS, or PPDDL) \cite{DegSigWui-icml06,PasZetKae-jair07,WalSziDiuLit-uai09,LesZan-ewrl11}. See Chapter 10 of Volume 2. Long-term predictions are usually required to optimize the policy given the current forward model.
While the real world is its own best (unbiased) model, using a learned model has the benefit of allowing to control these predictions. A first approach, similar to paired statistical tests, is to always use the same random initial states and the same sequences of random numbers when evaluating different policies. It has been introduced for policy search in the PEGASUS framework \cite{NgJor-uai00} and drastically reduces the sampling variance. Another approach is, when feasible, to compute a probability distribution over trajectories using deterministic approximations such as linearization \cite{AndMoo-of05}, sigma-point methods (e.g., \citep{JulUhl-ieee04}) or moment-matching. Policy updates can rely on gradient-free optimization (e.g., the Nelder-Mead method or hill-climbing) \citep{BagSch-icra01}, on sampling-based gradients (e.g., finite difference methods), as in model-free approaches, although they require many samples, or on analytical gradients \citep{DeiRas-icml11}, which require the model as well as the policy to be differentiable, scale favorably with the number of parameters, but are computationally involved. \section{Introduction} Reinforcement learning (RL) is a general framework for building autonomous agents (physical or virtual), which are systems that make decisions without human supervision in order to perform a given task.
Examples of such systems abound: expert backgammon player \cite{Tesauro95}, dialogue systems \cite{SinghKearnsLitmanWalker99}, acrobatic helicopter flight \cite{AbbeelCoatesNg10}, human-level video game player \cite{MnihKavukcuogluSilverRusuVenessBellemareGravesRiedmillerFidjelandOstrovskiPetersenBeattieSadikAntonoglouKingKumaranWierstraLeggHassabis15}, go player \cite{SilverHuangMaddisonGuezSifreDriesscheSchrittwieserAntonoglouPanneerschelvamLanctotDielemanGreweNhamKalchbrennerSutskeverLillicrapLeachKavukcuogluGraepelHassabis16} or autonomous driver \cite{BojarskiTestaDworakowskiFirnerFleppGoyalJackelMonfortMullerZhangZhangZhao16}. See also Chapter 11 of Volume 2 and Chapters 10 and 12 of Volume 3. In all those examples, an agent faces a sequential decision-making problem, which can be represented as an interaction loop between an agent and an environment. After observing its current situation, the agent selects an action to perform. As a result, the environment changes its state and provides a numeric reward feedback about the chosen action. In RL, the agent needs to learn how to choose good actions based on its observations and the reward feedback, without necessarily knowing the dynamics of the environment. In this chapter, we focus on the basic setting of RL that assumes a single learning agent with full observability. Some work has investigated the partial observability case (see \cite{Spaan12} for an overview of both the model-based and model-free approaches). The basic setting has also been extended to situations where several agents interact and learn simultaneously (see \cite{BusoniuBabuskaDeSchutter10} for a survey). RL has also been tackled with Bayesian inference techniques, which we do not mention here for space reasons (see \cite{GhavamzadehMannorPineauTamar15} for a survey). In Section~\ref{sec:basics}, we recall the Markov decision process model on which RL is formulated and the RL framework, along with some of their classic solution algorithms. 
We present two families of approaches that can tackle large-sized problems for which function approximation is usually required. The first, which is value-based, is presented in Section~\ref{sec:approx}. It consists in estimating the value function of an optimal policy. The second, called policy search, is presented in Section~\ref{sec:policysearch}. It searches for an optimal policy directly in a policy space. In Section~\ref{sec:extension}, we present some extensions of the standard RL setting, namely extensions to the case of unknown rewards and risk-sensitive RL approaches. Finally, we conclude in Section~\ref{sec:conclusion}. \section{Background for RL} \label{sec:basics} Before presenting the RL framework, we recall the Markov decision process (MDP) model, on which RL is based. See also Chapter 17 of this volume and Chapter 10 of Volume 2. {\em Markov decision process.} MDPs and their multiple variants (e.g., Partially Observable MDP or POMDP) \cite{Puterman94} have been proposed to represent and solve sequential decision-making problems under uncertainty. An MDP is defined as a tuple $\mathcal M = \langle \mathbf{S}, \mathbf{A}, T, R, \gamma, H\rangle$ where $\mathbf{S}$ is a set of states, $\mathbf{A}$ is a set of actions, transition function $T(s, a, s')$ specifies the probability of reaching state $s'$ after performing action $a$ in state $s$, reward function $R(s, a) \in \mathbb R$ yields the immediate reward after performing action $a$ in state $s$, $\gamma \in [0, 1]$ is a discount factor and $H \in \mathbb{N} \cup \{\infty\}$ is the horizon of the problem, which is the number of decisions to be made. An immediate reward, which is a scalar number, measures the value of performing an action in a state. In some problems, it can be randomly generated. In that case, $R(s, a)$ is simply the expectation of the random rewards. In this MDP formulation, the environment is assumed to be stationary. 
Using such an MDP model, a system designer needs to define the tuple $\mathcal M$ such that an optimal policy performs the task s/he wants. Solving an MDP (i.e., {\em planning}) amounts to finding a controller, called a {\em policy}, which specifies which action to take in every state of the environment in order to maximize the expected discounted sum of rewards (standard decision criterion). A policy $\pi$ can be deterministic (i.e., $\pi(s) \in \mathbf{A}$) or randomized (i.e., $\pi(\cdot \,|\, s)$ is a probability distribution over $\mathbf{A}$). It can also be stationary or time-dependent, which is useful in finite-horizon or non-stationary problems. A $t$-step history (also called trajectory, rollout or path) $h = (s_1, a_1, s_2, \ldots, s_{t+1}) \in (\mathbf{S}\times\mathbf{A})^t \times \mathbf{S}$ is a sequence of past states and actions. In the standard case, it is valued by its return defined as $\sum_t \gamma^{t-1}R(s_t, a_t)$. As a policy induces a probability distribution over histories, the {\em value function} $v^\pi : \mathbf{S} \to \mathbb{R}$ of a policy $\pi$ is defined by: \begin{align*} v^\pi_H(s) = \mathbb{E}_{\pi}\big[ \sum_{t=1}^H \gamma^{t-1}R(S_t, A_t) \,|\, S_1 = s \big], \end{align*} where $\mathbb{E}_{\pi}$ is the expectation with respect to the distribution induced by $\pi$ in the MDP, and $S_t$ and $A_t$ are random variables respectively representing a state and an action at a time step $t$. We will drop subscript $H$ if there is no risk of confusion. The value function can be computed recursively. For a deterministic policy $\pi$, we have: \begin{align*} v^\pi_0(s) & = 0,\\ v^\pi_t(s) & = R(s, \pi(s)) + \gamma \sum_{s'\in\mathbf{S}} T(s, \pi(s), s') v^\pi_{t-1}(s'). \end{align*} In a given state, policies can be compared via their value functions. Interestingly, in standard MDPs, there always exists an optimal deterministic policy whose value function is maximum in every state. Its value function is said to be optimal.
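The recursion above translates directly into a short dynamic-programming routine. This sketch assumes tabular arrays $T$ (of shape $|\mathbf{S}|\times|\mathbf{A}|\times|\mathbf{S}|$) and $R$ (of shape $|\mathbf{S}|\times|\mathbf{A}|$), a representation chosen only for the example.

```python
import numpy as np

def evaluate_policy(T, R, pi, gamma, H):
    """Finite-horizon value of a deterministic policy pi by the recursion
    v_t(s) = R(s, pi(s)) + gamma * sum_{s'} T(s, pi(s), s') * v_{t-1}(s'),
    starting from v_0 = 0."""
    n_states = T.shape[0]
    v = np.zeros(n_states)
    for _ in range(H):
        v = np.array([R[s, pi[s]] + gamma * T[s, pi[s]] @ v
                      for s in range(n_states)])
    return v
```

For a single-state, single-action MDP with reward $1$ and $\gamma = 0.5$, three iterations yield $1 + 0.5 + 0.25 = 1.75$, as expected from the discounted sum.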
In the infinite horizon case, when $\gamma<1$, $v^\pi_t$ is guaranteed to converge to $v^\pi$, which is the solution of the {\em Bellman evaluation equations}: \begin{align} v^\pi(s) & = R(s, \pi(s)) + \gamma \sum_{s'\in\mathbf{S}} T(s, \pi(s), s') v^\pi(s'). \label{eq:bellmanevaluation} \end{align} Given $v^\pi$, a better policy can be obtained with the following improvement step: \begin{align} \pi'(s) = \operatorname*{argmax}_{a \in \mathbf{A}} R(s, a) + \gamma \sum_{s'\in\mathbf{S}} T(s, a, s') v^\pi(s'). \label{eq:improvement} \end{align} The policy iteration algorithm consists in alternating between a policy evaluation step (\ref{eq:bellmanevaluation}) and a policy improvement step (\ref{eq:improvement}), which converges to the optimal value function $v^* : \mathbf{S} \to \mathbb R$. Alternatively, the optimal value function $v^*_H : \mathbf{S} \to \mathbb{R}$ can also be iteratively computed for any horizon $H$ by: \begin{align} v^*_0(s) & = 0 \nonumber\\ v^*_t(s) & = \max_{a \in \mathbf{A}}R(s, a) + \gamma \sum_{s'\in\mathbf{S}} T(s, a, s') v^*_{t-1}(s'). \label{eq:vi} \end{align} In the infinite horizon case, when $\gamma<1$, $v^*_t$ is guaranteed to converge to $v^*$, which is the solution of the {\em Bellman optimality equations}: \begin{align} v^*(s) & = \max_{a \in \mathbf{A}}R(s, a) + \gamma \sum_{s'\in\mathbf{S}} T(s, a, s') v^*(s'). \label{eq:bellmanoptimality} \end{align} In that case, (\ref{eq:vi}) leads to the value iteration algorithm. Two other related functions are useful when solving an RL problem: the action-value function $Q^\pi_t(s, a)$ (resp. the optimal action-value function $Q^*_t(s, a)$) specifies the value of choosing an action $a$ in a state $s$ at time step $t$ and assuming policy $\pi$ (resp. an optimal policy) is applied thereafter, i.e., \begin{align*} Q^x_t(s, a) = R(s, a) + \gamma \sum_{s'\in\mathbf{S}} T(s, a, s') v^x_{t-1}(s') \quad \mbox{ where } x \in \{\pi, *\}. 
\end{align*} {\em Reinforcement learning.} In the MDP framework, a complete model of the environment is assumed to be known (via the transition function) and the task to be performed is completely described (via the reward function). The RL setting has been proposed to tackle situations where those assumptions do not hold. During the {\em learning phase}, an RL agent searches for a best policy while interacting with the unknown environment by trial and error. In RL, the standard decision criterion used to compare policies is the same as in the MDP setting. Although the reward function is supposed to be unknown to the agent, the system designer still has to specify it. In RL, value and action-value functions have to be estimated. For $v^\pi$ of a given policy $\pi$, this can be done with the standard TD(0) evaluation algorithm, where the following update is performed after applying $\pi$ in state $s$, yielding reward $r$ and moving to new state $s'$: \begin{align} v^\pi_t(s) = v^\pi_{t-1}(s) - \alpha_t(s) \left( v^\pi_{t-1}(s) - \left(r + \gamma v^\pi_{t-1}(s')\right)\right),\label{eq:td update rule} \end{align} where $\alpha_t(s) \in [0, 1]$ is a learning rate. For $Q^\pi$, the update is as follows, after the agent executed action $a$ in state $s$, received $r$, moved to new state $s'$ and executed action $a'$ (chosen by $\pi$): \begin{align} Q^\pi_t(s, a) = Q^\pi_{t-1}(s, a) - \alpha_t(s, a) \big( Q^\pi_{t-1}(s, a) - \big(r + \gamma Q^\pi_{t-1}(s', a')\big) \big),\label{eq:sarsa update rule} \end{align} where $\alpha_t(s, a) \in [0, 1]$ is a learning rate. This update leads to the SARSA algorithm (named after the variables $s, a, r, s', a'$). In the same way that the policy iteration algorithm alternates between an evaluation step and a policy improvement step, one can use the SARSA evaluation method and combine it with a policy improvement step.
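The TD(0) and SARSA updates can be written as one-step procedures on tabular estimates; the dictionary-based storage below is just a convenient sketch, not a prescribed implementation.

```python
def td0_update(v, s, r, s_next, alpha, gamma):
    """One TD(0) step: move v(s) toward the bootstrapped target
    r + gamma * v(s')."""
    v[s] -= alpha * (v[s] - (r + gamma * v[s_next]))
    return v

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """One on-policy SARSA step: the target uses the action a' actually
    chosen by the current policy in s'."""
    Q[(s, a)] -= alpha * (Q[(s, a)] - (r + gamma * Q[(s_next, a_next)]))
    return Q
```

Repeatedly applying these updates along transitions generated by the policy implements the evaluation step described above.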
In practice, we do not wait for the SARSA evaluation update rule to converge to the actual value of the current policy before making a policy improvement step. We rather continuously behave according to the current estimate of the $Q$-function to generate new transitions. One common choice is to use the current estimate in a softmax (Boltzmann) function of temperature $\tau$ and behave according to a randomized policy: \[\pi_t(a \,|\,s)=\frac{e^{Q_{\theta_t}(s, a)/\tau}}{\sum_b e^{Q_{\theta_t}(s, b)/\tau}}.\] Notice that we chose to use the Bellman evaluation equations to estimate the targets. However, we could also use the Bellman optimality equations in the case of the $Q$-function and replace $r + \gamma Q(s', a')$ by $r + \gamma \max_b Q(s',b)$; this target is only valid when computing the value $Q^*$ of the optimal policy $\pi^*$. This gives rise to the $Q$-learning update rule, which directly computes the value of the optimal policy. It is called an {\em off-policy} algorithm (whereas SARSA is {\em on-policy}) because it computes the value function of another policy than the one that selects the actions and generates the transitions used for the update. The following update is performed after the agent executed action $a$ (e.g., chosen according to the softmax rule) in state $s$, received $r$ and moved to new state $s'$: \begin{align} Q^*_t(s, a) = Q^*_{t-1}(s, a) - \alpha_t(s, a) \big( Q^*_{t-1}(s, a) - (r + \gamma \max_{a'} Q^*_{t-1}(s', a')) \big) .\label{eq:qlearning update rule} \end{align} Updates (\ref{eq:td update rule}), (\ref{eq:sarsa update rule}) and (\ref{eq:qlearning update rule}) can be proved to converge if the learning rates satisfy standard stochastic approximation conditions (i.e., $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$). Besides, for (\ref{eq:sarsa update rule}), the temperature $\tau$ would also need to converge to $0$ while ensuring sufficient exploration in order for SARSA to converge to the optimal $Q$-function.
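The off-policy $Q$-learning update and the softmax (Boltzmann) behavior policy can be sketched as follows; tabular dictionary storage is again only an assumption of the example.

```python
import math
import random

def softmax_action(Q, s, actions, tau):
    """Sample an action from the Boltzmann policy of temperature tau."""
    prefs = [Q[(s, a)] / tau for a in actions]
    m = max(prefs)  # shift for numerical stability
    weights = [math.exp(p - m) for p in prefs]
    total = sum(weights)
    return random.choices(actions, [w / total for w in weights])[0]

def q_learning_update(Q, s, a, r, s_next, actions, alpha, gamma):
    """Off-policy Q-learning step: the target bootstraps on max_b Q(s', b),
    regardless of which action the behavior policy actually takes next."""
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] -= alpha * (Q[(s, a)] - target)
    return Q
```

Because the target maximizes over next actions while behavior follows the softmax policy, the update estimates $Q^*$ rather than the value of the behavior policy.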
In practice, $\alpha_t(s, a)$ is often chosen constant, which also accounts for the case where the environment is non-stationary. These two general frameworks (MDP and RL) have been successfully applied in many different domains. For instance, MDPs or their variants have been used in finance \cite{BauerleRieder11} or logistics \cite{ZhaoChenLeungLai10}. RL has been applied to soccer \cite{BaiWuChen13} or power systems \cite{YuZhang13}, to cite a few. To tackle real-life large-sized problems, MDP and RL have to be completed with other techniques, such as compact representations \cite{BoutilierDeardenGoldszmidt00,GuestrinHauskrechtKveton04,Otterlo09} or function approximation \cite{de-FariasVan-Roy03,GeistPietquin11,MnihKavukcuogluSilverRusuVenessBellemareGravesRiedmillerFidjelandOstrovskiPetersenBeattieSadikAntonoglouKingKumaranWierstraLeggHassabis15}. \section{Value-Based Methods with Function Approximation}\label{sec:approx} In many cases, the state-action space is too large to represent exactly the value function $v^\pi$ or the action-value function $Q^\pi$ of a policy $\pi$. For this reason, function approximation for RL has been studied for a long time, starting with the seminal work of~\citet{BellmanPoly}. In this framework, the functions are parameterized by a vector of $d$ parameters $\bm\theta=[\theta_j]_{j=1}^{d}$, with $\bm\theta \in \Theta \subset \mathbb{R}^d$ (we will always consider column vectors), and the algorithms aim at learning the parameters from data provided in the shape of transitions $\{s_t, a_t, s'_t, r_t\}_{t=1}^N$, where $s'_t$ is the successor state of $s_t$ drawn from $T(s_t, a_t, \cdot)$. We will denote the parameterized versions of the functions as $v_{\bm\theta}$ and $Q_{\bm\theta}$. Popular approximation schemes are linear function approximation and neural networks.
The former gave birth to a large literature in the theoretical domain as it allows studying convergence rates and bounds (although this remains non-trivial). The latter, although already used in the 90's~\cite{Tesauro95}, has seen a recent growth in interest following the Deep Learning successes in supervised learning. The case of neural networks will be addressed in Section~\ref{sec:deep} but we will start with linear function approximation. In this particular case, a set of basis functions $\bm\phi(\cdot) = [\phi_j(\cdot)]_{j=1}^{d}$ has to be defined by the practitioner (or maybe learned through unsupervised learning) so that the value functions can be approximated by: $$v_{\bm\theta}(s) = \sum_j \theta_j \phi_j(s) = \bm\theta^\intercal \bm\phi(s) \quad \text{ or } \quad Q_{\bm\theta}(s, a) = \sum_j \theta_j \phi_j(s, a) = \bm\theta^\intercal \bm\phi(s, a).$$ The vector space defined by the span of $\bm\phi$ is denoted $\Phi$. Notice that the exact case in which the different values of the value functions can be stored in a table (tabular case) is a particular case of linear function approximation. Indeed, if we consider that the state space is finite and small $\left( s =\{s_k\}_{k=1}^{|\mathbf{S}|} \in \mathbf{S} \right)$, then the value function can be represented in a table of $|\mathbf{S}|$ values $\{v_k \,|\, v_k = v(s_k)\}_{k=1}^{|\mathbf{S}|}$ where $|\mathbf{S}|$ is the number of states. This is equivalent to defining a vector of $|\mathbf{S}|$ parameters $\bm{v}=[v_k]_{k=1}^{|\mathbf{S}|}$ and a vector of $|\mathbf{S}|$ basis functions $\bm\delta(s)=[\delta_k(s)]_{k=1}^{|\mathbf{S}|}$ where $\delta_k(s)=1$ if $s = s_k$ and $0$ otherwise. The value function can thus be written $v(s)=\sum_k v_k \delta_k(s) = \bm{v}^\intercal \bm\delta(s)$.
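The tabular-as-linear correspondence can be made concrete; a small sketch with hypothetical state names (our own illustration):

```python
def indicator_features(s, states):
    """One-hot basis: delta_k(s) = 1 iff s == s_k, so the tabular case is linear."""
    return [1.0 if s == s_k else 0.0 for s_k in states]

def linear_value(theta, phi_s):
    """v_theta(s) = theta^T phi(s)."""
    return sum(t * p for t, p in zip(theta, phi_s))
```

With indicator features, each parameter $\theta_k$ is exactly the table entry $v_k$.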
\subsection{Stochastic Gradient Descent Methods} \subsubsection{Bootstrapped Methods}\label{sec:bootstrap} If one wanted to cast the Reinforcement Learning problem into a supervised learning problem (see Chapter 11 of this Volume and Chapter 12 of Volume 2), one could want to fit the parameters to the value function directly. For instance, to evaluate the value of a particular policy $\pi$, one would solve the following regression problem (for some $\ell_p$-norm and distribution $\mu$ over states): $$\bm\theta^* = \operatorname*{argmin}_{\bm\theta} \|v^\pi_{\bm\theta} - \varv^\pi \|_{p, \mu} = \operatorname*{argmin}_{\bm\theta} \|v^\pi_{\bm\theta} - \varv^\pi \|_{p, \mu}^p$$ where $\|\cdot\|_{p, \mu}$ denotes the weighted $\ell_p$-norm defined by $\big(\mathbb{E}_\mu |\cdot|^p\big)^{1/p}$ and $\mathbb{E}_\mu$ is the expectation with respect to $\mu$. Yet, as said before, we usually cannot compute these values everywhere and we usually only have access to some transition samples $\{s_t, a_t, s'_t, r_t\}_{t=1}^N$ generated according to distribution $\mu$. So we could imagine casting the RL problem into the following minimization problem: $$\bm\theta^* = \operatorname*{argmin}_{\bm\theta} \frac{1}{N} \sum_{t=1}^N |v_{\bm\theta}^\pi(s_t) - \varv^\pi(s_t) |^p.$$ This cost function can be minimized by stochastic gradient descent (we will consider an $\ell_2$-norm): \begin{align*} \bm\theta_t &= \bm\theta_{t-1} - \frac{\alpha}{2} \nabla_{\bm\theta_{t-1}} \big(v_{\bm\theta_{t-1}}^\pi(s_t) - \varv^\pi(s_t)\big)^2 \\ & = \bm\theta_{t-1} - \alpha \nabla_{\bm\theta_{t-1}} v_{\bm\theta_{t-1}}^\pi(s_t) \left(v_{\bm\theta_{t-1}}^\pi(s_t) - \varv^\pi(s_t) \right). \end{align*} Of course, it is not possible to apply this update rule as it is since we do not know the actual value $\varv^\pi(s_t)$ of the states we observe in the transitions.
But, from the Bellman evaluation equations (\ref{eq:bellmanevaluation}), we can obtain an estimate by replacing $\varv^\pi(s_t)$ by $r_t + \gamma v^\pi_{\bm\theta_{t-1}}(s'_{t})$. Notice that this replacement uses bootstrapping as we use the current estimate of the target to compute the gradient. We finally obtain the following update rule for evaluating the current policy $\pi$: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \nabla_{\bm\theta_{t-1}} v_{\bm\theta_{t-1}}^\pi(s_t) \left(v_{\bm\theta_{t-1}}^\pi(s_t) - \left(r_t + \gamma v^\pi_{\bm\theta_{t-1}}(s'_{t})\right)\right).$$ In the case of linear function approximation, i.e., $v^\pi_{\bm\theta}(s) = \bm\theta^\intercal \bm\phi(s)$, we obtain: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \bm\phi(s_t) \left(\bm\theta_{t-1}^\intercal \bm\phi(s_t) - \left(r_t + \gamma \bm\theta_{t-1}^\intercal \bm\phi(s'_{t})\right)\right).$$ The same derivation applies to the action-value function, which leads to the SARSA update rule with linear function approximation $Q^\pi_{\bm\theta}(s, a) = \bm\theta^\intercal \bm\phi(s, a)$: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \bm\phi(s_t, a_t) \left( \bm\theta_{t-1}^\intercal \bm\phi(s_t, a_t) - \left(r_t + \gamma \bm\theta_{t-1}^\intercal \bm\phi(s'_{t}, a'_{t})\right)\right).$$ Changing the target as in the Q-learning update, we obtain for $Q^*_{\bm\theta}(s, a) = \bm\theta^\intercal \bm\phi(s, a)$: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \bm\phi(s_t, a_t) \left( \bm\theta_{t-1}^\intercal \bm\phi(s_t, a_t) - \left(r_t + \gamma \max_b \bm\theta_{t-1}^\intercal \bm\phi(s'_{t},b)\right)\right).$$ \subsubsection{Residual Methods}\label{sec:residual} Instead of using the Bellman equations to provide an estimate of the target after deriving the update rule, one could use them directly to define the loss function to be optimized.
We would then obtain the following minimization problem: $$\bm\theta^* = \operatorname*{argmin}_{\bm\theta} \frac{1}{N} \sum_{t=1}^N \big( v_{\bm\theta}^\pi(s_t) - \left(r_t + \gamma v^\pi_{\bm\theta} (s'_{t})\right) \big)^2.$$ This can also be seen as the minimization of the Bellman residual. Indeed the Bellman evaluation equations ($v^\pi(s) = \mathbb{E}_\pi[R(s, A)+\gamma v^\pi(S')]$) can be rewritten as $v^\pi(s) - \mathbb{E}_\pi[R(s, A)+\gamma v^\pi(S')] = 0$. So by minimizing the quantity $v^\pi(s) - \mathbb{E}_\pi[R(s, A)+\gamma v^\pi(S')]$, called the Bellman residual, we reach the objective of evaluating $v^\pi(s)$. Here, we take the observed quantity $r+\gamma v^\pi(s')$ as an unbiased estimate of its expectation. The Bellman residual can also be minimized by stochastic gradient descent as proposed by~\citet{baird1995residual} and the update rule becomes: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \nabla_{\bm\theta_{t-1}} \left( v_{\bm\theta_{t-1}}^\pi(s_t) - \left( r_t + \gamma v_{\bm\theta_{t-1}}^\pi (s'_{t})\right) \right) \left(v_{\bm\theta_{t-1}}^\pi(s_t) - \left(r_t + \gamma v^\pi_{\bm\theta_{t-1}}(s'_{t})\right)\right).$$ In the case of a linear approximation, we obtain: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \left( \bm\phi(s_t) - \gamma \bm\phi(s'_{t}) \right) \left( \bm\theta_{t-1}^\intercal \bm\phi(s_t) - \left(r_t + \gamma \bm\theta_{t-1}^\intercal \bm\phi(s'_{t})\right)\right).$$ This approach, called R-SGD (for residual stochastic gradient descent), has a major flaw as it computes a biased estimate of the value function. Indeed, $v_{\bm\theta}^\pi(s_t)$ and $v_{\bm\theta}^\pi(s'_t)$ are correlated as $s'_t$ is the result of having taken action $a_t$ chosen by $\pi(s_t)$ \cite{Werbos90}.
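The bootstrapped and residual updates with linear features differ only in the direction of the gradient step; a small side-by-side sketch (function names are ours):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bootstrapped_update(theta, phi_s, phi_sn, r, alpha=0.05, gamma=0.9):
    """Semi-gradient TD: the target r + gamma * theta^T phi(s') is held fixed,
    so the gradient only involves phi(s)."""
    delta = dot(theta, phi_s) - (r + gamma * dot(theta, phi_sn))
    return [t - alpha * p * delta for t, p in zip(theta, phi_s)]

def residual_update(theta, phi_s, phi_sn, r, alpha=0.05, gamma=0.9):
    """R-SGD: the gradient also flows through the next-state value, giving the
    direction phi(s) - gamma * phi(s'); this single-sample rule is biased unless
    two independent next states are drawn (Baird's double-sampling fix)."""
    delta = dot(theta, phi_s) - (r + gamma * dot(theta, phi_sn))
    return [t - alpha * (ps - gamma * pn) * delta
            for t, ps, pn in zip(theta, phi_s, phi_sn)]
```

Starting from the same parameters and transition, the two rules move $\bm\theta$ along different directions.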
To address this problem,~\citet{baird1995residual} suggest drawing two different next states $s'_{t}$ and $s''_{t}$ starting from the same state $s_t$ and updating as follows: $$\bm\theta_t = \bm\theta_{t-1} - \alpha \nabla_{\bm\theta_{t-1}} \left( v_{\bm\theta_{t-1}}^\pi(s_t) - \left( r_t + \gamma v_{\bm\theta_{t-1}}^\pi (s'_{t})\right) \right) \left(v_{\bm\theta_{t-1}}^\pi(s_t) - \left(r_t + \gamma v^\pi_{\bm\theta_{t-1}}(s''_{t})\right)\right).$$ Of course, this requires that a generative model or a simulator is available and that transitions can be generated on demand. The same discussion as in the previous section applies to learning an action-value function. For instance, one could solve the following optimization problem to learn the optimal action-value function: \begin{align} \bm\theta^* = \operatorname*{argmin}_{\bm\theta} \frac{1}{N} \sum_{t=1}^N \left( Q_{\bm\theta}^*(s_t, a_t) - \big(r_t + \gamma \max_b Q^*_{\bm\theta} (s'_{t}, b)\big) \right)^2. \label{eq:qres} \end{align} Yet this optimal residual cannot directly be minimized in the case of the $Q$-function as the $\max$ operator is not differentiable. Notice that a sub-gradient method can still be used. \subsection{Least-Squares Methods} Gradient descent was used in the previous section to minimize the empirical norm of either the bootstrapping error or the Bellman residual. As the empirical norm generally relies on the $\ell_2$-norm and linear function approximation is often assumed, another approach is to find the least-squares solution to these problems. Indeed, least squares is a powerful approach as it is a second-order method and offers a closed-form solution to the optimization problem. Although there is no method that explicitly applies least squares to the two aforementioned empirical errors, one can see the fixed-point Kalman Filter (FPKF) algorithm~\cite{choi2006generalized} as a recursive least-squares method applied to the bootstrapping error minimization.
Also, the Gaussian Process Temporal Difference (GPTD)~\cite{GPTD} or the Kalman Temporal Difference (KTD)~\cite{KTD} algorithms can be seen as recursive least-squares methods applied to Bellman residual minimization. We invite the reader to refer to~\citet{geist2013algorithmic} for further discussion on this. Yet, the most popular method inspired by least-squares optimization applies to a different cost function. The Least-Squares Temporal Difference (LSTD) algorithm~\citep{bradtke1996linear} solves: $$\bm\theta^* = \operatorname*{argmin}_{\bm\theta} \frac{1}{N} \sum_{i=1}^N \left(v^\pi_{\bm\theta}(s_i) - v^\pi_{\bm\omega^*}(s_i) \right)^2,$$ where $\bm\omega^* = \operatorname*{argmin}_{\bm\omega} \frac{1}{N} \sum_{i=1}^N \left(v^\pi_{\bm\omega}(s_i) - \left(r_i + \gamma v^\pi_{\bm\theta}(s'_i) \right)\right)^2$ can be understood as a projection on the space $\Phi$ spanned by the family of functions $\phi_j$ used to approximate $v^\pi$. It can be seen as the composition of the Bellman operator and of a projection operator. This cost function is the so-called {\em projected Bellman residual}. When using linear function approximation, this optimization problem admits a closed-form solution: $$\bm\theta^* = \left[\sum_{i=1}^N \bm\phi(s_i) \left[\bm\phi(s_i) - \gamma \bm\phi(s'_i)\right]^\intercal \right]^{-1} \sum_{i=1}^N \bm\phi(s_i) r_i.$$ Note that the projected Bellman residual can also be optimized with a stochastic gradient approach \cite{SuttonMaeiPrecupBhatnagarSilverSzepesvariWiewiora09}. Extensions to non-linear function approximation exist and rely on the kernel trick~\citep{xu2007kernel} or on statistical linearization~\cite{geist2010statistically}. LSTD can be used to learn an approximate $Q$-function as well and can be combined with policy improvement steps into an iterative algorithm, similar to policy iteration, to learn an optimal policy from a dataset of sampled transitions.
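The LSTD closed-form solution above can be computed directly from a batch of transitions; a sketch assuming linear features (the small ridge term `reg` is our addition, to keep the matrix invertible when few samples are available):

```python
import numpy as np

def lstd(transitions, phi, gamma=0.9, reg=1e-8):
    """LSTD closed form: theta = A^{-1} b, with
    A = sum_i phi(s_i) (phi(s_i) - gamma * phi(s'_i))^T and b = sum_i phi(s_i) r_i."""
    d = len(phi(transitions[0][0]))
    A = reg * np.eye(d)
    b = np.zeros(d)
    for s, r, s_next in transitions:
        p, pn = np.asarray(phi(s)), np.asarray(phi(s_next))
        A += np.outer(p, p - gamma * pn)
        b += r * p
    return np.linalg.solve(A, b)
```

On a two-state deterministic chain with one-hot features, this recovers the exact value function from one sample per transition.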
This gives rise to the so-called Least Squares Policy Iteration (LSPI) algorithm \citep{lagoudakis2003least}, which is one of the most popular batch-RL algorithms. \subsection{Iterative Projected Fixed-Point Methods} As we have seen earlier, dynamic programming offers a set of algorithms to compute value functions of a policy in the case the dynamics of the MDP is known. One of these algorithms, Value Iteration, relies on the fact that the Bellman equations define contraction operators when $\gamma<1$. For instance, if we define the Bellman evaluation operator $B^\pi$ such that $B^\pi Q(s, a) = R(s, a) + \gamma \mathbb{E}_{\pi}\big[Q(S', A') \,|\, S = s, A=a\big]$, one can show that iteratively applying $B^\pi$ to a random initialization of $Q$ converges to $Q^\pi$, because $B^\pi$ defines a contraction for which the only fixed point is $Q^\pi$~\citep{Puterman94}. The Bellman optimality operator $B^*$, defined as $B^*Q(s, a) = R(s, a) + \gamma \mathbb{E}\big[\max_b Q(S', b) \,|\, S = s, A = a\big]$, is also a contraction. The same holds for the sampled versions of the Bellman operators. For instance, let us define the sampled optimality operator $\hat{B}^*$ such that $\hat{B}^* Q(s, a) = r + \gamma \max_b Q(s', b)$, where the expectation has been removed (the sampled operator applies to a single transition). Unfortunately, there is no guarantee that this remains a contraction when the value functions are approximated. Indeed, when applying a Bellman operator to an approximate $Q_{\bm\theta}$, the result might not lie in the space spanned by $\bm\theta$. One has thus to project back on the space $\Phi$ spanned by $\bm\phi$ using a projection operator $\Pi_\Phi$, i.e., $\Pi_\Phi f = \operatorname*{argmin}_{\bm\theta} \| \bm\theta^\intercal \bm\phi - f \|_2$.
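The contraction property can be checked numerically; a sketch of the state-value analogue, iterating $v \leftarrow R + \gamma P v$ for a fixed policy (our own illustration, assuming a known transition matrix $P$ and reward vector $R$):

```python
def evaluate_policy(P, R, gamma=0.9, n_iters=300):
    """Repeatedly apply the Bellman evaluation operator v <- R + gamma * P v.
    For gamma < 1 this is a gamma-contraction, so the iterates converge to v^pi
    from any initialization (here v = 0)."""
    n = len(R)
    v = [0.0] * n
    for _ in range(n_iters):
        v = [R[i] + gamma * sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v
```

On a two-state cycle, the iterates converge geometrically (at rate $\gamma$) to the fixed point of the evaluation equations.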
If the composition of $\Pi_\Phi$ and $\hat{B}^\pi$ (or $\hat{B}^*$) is still a contraction, then recursively applying this composition to any initialization of $\bm\theta$ still converges to a good approximate $Q^\pi_{\bm\theta}$ (or $Q^*_{\bm\theta}$). Unfortunately, the exact projection is often impossible to get as it is a regression problem. For instance, one would need to use least-squares methods or stochastic gradient descent to learn the best projection from samples. Therefore the projection operator itself is approximated, resulting in some $\hat{\Pi}_\Phi$ operator. So the iterative projected fixed-point process is defined as: $$ Q_{\bm\theta_t} = \hat{\Pi}_\Phi \hat{B}^\pi Q_{\bm\theta_{t-1}} \quad \text{ or } \quad Q_{\bm\theta_t} = \hat{\Pi}_\Phi \hat{B}^* Q_{\bm\theta_{t-1}}.$$ In practice, the algorithm consists in collecting transitions (e.g., $\{s_i, a_i, r_i, s'_i\}_{i=1}^N $), initializing $\bm\theta_0$ to some random value, computing a regression database by applying the chosen sampled Bellman operator (e.g., $\{\hat{B}^*Q_{\bm\theta_0}(s_i, a_i) = r_i + \gamma \max_b Q_{\bm\theta_0}(s'_i, b)\}_{i=1}^N$), applying a regression algorithm to find the next value of the parameters (e.g., $Q_{\bm\theta_1} = \hat{\Pi}_\Phi \hat{B}^* Q_{\bm\theta_{0}} = \operatorname*{argmin}_{\bm\theta} \frac{1}{N} \sum_{i=1}^N \big(Q_{\bm\theta}(s_i, a_i) - \hat{B}^*Q_{\bm\theta_0}(s_i, a_i) \big)^2$), and iterating. This method finds its roots in early papers on dynamic programming~\citep{samuel1959some,bellman1963polynomial} and convergence properties have been analyzed by~\citet{gordon1995stable}. The most popular implementations use regression trees~\citep{ernst2005tree} or neural networks~\citep{riedmiller2005neural} as regression algorithms and have been applied to many concrete problems such as robotics~\citep{antos2008fitted}.
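The fitted-iteration loop just described can be sketched with a linear least-squares regressor standing in for the regression algorithm (tree- or network-based regressors would replace the `np.linalg.solve` step; the ridge term `reg` is our addition):

```python
import numpy as np

def fitted_q_iteration(transitions, actions, phi, n_iters=50, gamma=0.9, reg=1e-8):
    """Projected fixed-point iteration: targets r + gamma * max_b Q_theta(s', b)
    are computed with the previous parameters, then theta is refit by (ridge)
    least squares, which plays the role of the approximate projection."""
    d = len(phi(transitions[0][0], actions[0]))
    theta = np.zeros(d)
    for _ in range(n_iters):
        X, y = [], []
        for s, a, r, s_next in transitions:
            target = r + gamma * max(np.dot(theta, phi(s_next, b)) for b in actions)
            X.append(phi(s, a))
            y.append(target)
        X, y = np.asarray(X), np.asarray(y)
        theta = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)
    return theta
```

On a toy problem with one non-terminal state and an absorbing state with zero features, the iteration converges to the immediate rewards.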
\subsection{Value-Based Deep Reinforcement Learning} \label{sec:deep} Although the use of Artificial Neural Networks (ANN, see Chapter 12 of Volume 2) in RL is not new~\citep{Tesauro95}, there have been only a few successful attempts to combine RL and ANN in the past. Most notably, before the recent advances in Deep Learning (DL)~\citep{lecun2015deep}, one can identify the work by~\citet{riedmiller2005neural} as the biggest success of ANN as a function approximation framework for RL. There are many reasons for that, which are inherently due to the way ANNs learn and the assumptions that have to be made for both gradient descent and most value-based RL algorithms to converge. Especially, Deep ANNs (DNN) require a tremendous amount of data as they contain a lot of parameters to learn (typically hundreds of thousands to millions). To alleviate this issue, \citet{Tesauro95} trained his network to play backgammon through a self-play procedure. The model learned at iteration $t$ plays against itself to generate data for training the model at iteration $t+1$. It could thus reach super-human performance at the game of backgammon using RL. This very simple and powerful idea was reused in~\citep{SilverHuangMaddisonGuezSifreDriesscheSchrittwieserAntonoglouPanneerschelvamLanctotDielemanGreweNhamKalchbrennerSutskeverLillicrapLeachKavukcuogluGraepelHassabis16} to build the first artificial Go player that consistently defeated a human Go master. Yet, this method relies on the assumption that games can easily be generated on demand (backgammon and Go rules are simple enough even though the game is very complex). In more complex settings, the agent faces an environment for which it does not have access to the dynamics, maybe it cannot start in random states and has to follow trajectories, and it can only get transitions through actual interactions. This causes two major issues for learning with DNNs (in addition to intensive usage of data).
First, gradient descent for training DNNs assumes the data to be independent and identically distributed (i.i.d. assumption). Second, the distribution of the data should remain constant over time. Both these assumptions are normally violated by RL since transitions used to train the algorithms are part of trajectories (so next states are functions of previous states and actions, violating the i.i.d. assumption) and because trajectories are generated by a policy extracted from the current estimate of the value function (learning the value function influences the distribution of the data generated in the future). In addition, we have also seen in Section~\ref{sec:residual} that Bellman residual minimization suffers from the correlation between estimates of value functions of successive states. All these problems make RL unstable~\citep{gordon1995stable}. To alleviate these issues, \citet{MnihKavukcuogluSilverRusuVenessBellemareGravesRiedmillerFidjelandOstrovskiPetersenBeattieSadikAntonoglouKingKumaranWierstraLeggHassabis15} used two tricks that allowed them to reach super-human performance at playing Atari 2600 games from pixels. First, they made use of a biologically inspired mechanism, called experience replay~\citep{lin1992self}, that consists in storing transitions in a Replay Buffer $D$ before using them for learning. Instead of sequentially using these transitions, they are shuffled in the buffer and randomly sampled for training the network (which helps break the correlation between successive samples). The buffer is filled on a first-in-first-out basis so that the distribution of the transitions is nearly stationary (transitions generated by old policies are discarded first). Second, the algorithm is based on asynchronous updates of the network used for generating the trajectories and a slow-learning network. The slow-learning network, called the target network, is updated less often than the network that actually learns from the transitions stored in the replay buffer (the $Q$-network). This way, the update rule of the $Q$-network is built such that the correlation between estimates of $Q(s,a)$ and $Q(s',a')$ is reduced. Indeed, the resulting algorithm (Deep Q-Network or DQN) is inspired by the gradient-descent update on the optimal Bellman residual (\ref{eq:qres}). But instead of using the double-sampling trick mentioned in Section~\ref{sec:residual}, two different estimates of the $Q$-function are used: one according to the target network parameters ($\bm\theta^-$) and the other according to the $Q$-network parameters ($\bm\theta$). The parameters of the $Q$-network are thus computed as: $$\bm\theta^* = \operatorname*{argmin}_{\bm\theta} \sum_{(s_t, a_t, s'_t, r_t) \in D} \left[\left(r_t + \gamma \max_b Q_{\bm\theta^-}(s_t', b)\right) - Q_{\bm\theta}(s_t, a_t) \right]^2.$$ With this approach, the problem of non-differentiability of the $\max$ operator is also solved as the gradient is computed w.r.t. $\bm\theta$ and not $\bm\theta^-$. Once in a while, the target network parameters are updated with the $Q$-network parameters ($\bm\theta^- \leftarrow \bm\theta^* $) and new trajectories are generated according to the policy extracted from $Q_{\bm\theta^-}$ to refill the replay buffer and continue training the $Q$-network. The target network policy is actually a softmax policy based on $Q_{\bm\theta^-}$ (see Section~\ref{sec:bootstrap}). Many improvements have been brought to that method since its publication, such as a prioritized replay mechanism~\citep{schaul2016prioritized} that samples transitions with a larger Bellman residual more often from the replay buffer, or the Double-DQN trick~\citep{van2016deep} used to provide more stable estimates of the $\max$ operator.
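One DQN-style gradient step can be sketched with linear features standing in for the deep network; the frozen parameters $\bm\theta^-$ appear only in the target, so the gradient flows only through $\bm\theta$ (the `done` flag for terminal transitions is our addition):

```python
import numpy as np

def dqn_gradient_step(theta, theta_minus, batch, phi, actions, alpha=0.01, gamma=0.99):
    """One gradient step on the DQN loss over a sampled minibatch: targets
    r + gamma * max_b Q_{theta-}(s', b) use the frozen target parameters."""
    grad = np.zeros_like(theta)
    for s, a, r, s_next, done in batch:
        target = r if done else r + gamma * max(
            np.dot(theta_minus, phi(s_next, b)) for b in actions)
        grad += np.asarray(phi(s, a)) * (np.dot(theta, phi(s, a)) - target)
    return theta - alpha * grad / len(batch)
```

In a full implementation, `batch` would be sampled uniformly (or with priorities) from the replay buffer, and `theta_minus` would be overwritten by `theta` every fixed number of steps.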
\section{Policy-Search Approaches} \label{sec:policysearch} \input{ChapterRL-policy-search} \section{Extensions: Unknown Rewards and Risk-sensitive Criteria} \label{sec:extension} In the previous sections, we recalled different techniques for solving RL problems, with the assumption that policies are compared with the expected cumulated rewards as a decision criterion. However, rewards may not be scalar, known or numeric, and the standard criterion based on expectation may not always be suitable. For instance, multiobjective RL has been proposed to tackle situations where an action is evaluated over several dimensions (e.g., duration, length, power consumption for a navigation problem). The interested reader may refer to \cite{RoijersVamplewWhitesonDazeley13} for a survey and refer to Chapter 16 of this volume for an introduction to multicriteria decision-making. For space reasons, we focus below only on three extensions: reward learning (Section~\ref{sec:rewardlearning}), preference-based RL (Section~\ref{sec:preferencebased}) and risk sensitive RL (Section~\ref{sec:risksensitive}). \subsection{Reward Learning} \label{sec:rewardlearning} From the system designer's point of view, defining the reward function can be viewed as programming the desired behavior in an autonomous agent. A good choice of reward values may accelerate learning \cite{MatignonLaurentLe-Fort-Piat06} while an incorrect choice may lead to unexpected and unwanted behaviors \cite{RandlovAlstrom98}. Thus, designing this function is a hard task (e.g., robotics \cite{ArgallChernovaVelosoBrowning09}, natural language parsers \cite{NeuSzepesvari09} or dialogue systems \cite{El-AsriLarochePietquin12}). When the reward signal is not known, a natural approach is to learn from demonstration. Indeed, in some domains (e.g., autonomous driving), it is much simpler for an expert to demonstrate how to perform a task rather than specify a reward function. 
Such an approach has been called apprenticeship learning \cite{AbbeelNg04}, learning from demonstration \cite{ArgallChernovaVelosoBrowning09}, behavior cloning or imitation learning \cite{HusseinGaberElyanJayne17}. Two families of techniques have been developed to solve such problems. The first group tries to directly learn a good policy from (near) optimal demonstrations \cite{ArgallChernovaVelosoBrowning09,Pomerleau89} while the second, called inverse RL (IRL) \cite{NgRussell00,russell1998learning}, tries to first recover a reward function that explains the demonstrations and then computes an optimal policy from it. The direct methods based on supervised learning usually suffer when the reward function is sparse, and even more so when the dynamics are also perturbed \cite{PiotGeistPietquin13}. As the reward function is generally considered to be a more compact, robust and transferable representation of a task than a policy \cite{AbbeelNg04,russell1998learning}, we only discuss reward learning approaches here. As for many inverse problems, IRL is ill-posed: any constant function is a trivial solution that makes all policies equivalent and therefore optimal. Various solutions were proposed to tackle this degeneracy issue, differing on whether a probabilistic model is assumed or not on the generation of the observations. When the state and/or action spaces are large, the reward function is generally assumed to take a parametric form: $R(s, a) = f_{\bm\theta}(s, a)$ for $f_{\bm\theta}$ a parametric function of $\bm\theta$. One important case, called {\em linear features}, is when $f$ is linear in $\bm\theta$, i.e., $R(s, a) = \sum_i \theta_i \phi_i(s, a)$ where the $\phi_i$ are basis functions. {\bf No generative model assumption.} As underlined by \textcite{NeuSzepesvari09}, many IRL methods can be viewed as finding the reward function $R$ that minimizes a dissimilarity measure between the policy $\pi_R^*$ optimal for $R$ and the expert demonstrations.
Most works assume a linear-feature reward function, with some exceptions that we mention below. \textcite{AbbeelNg04} introduced the important idea of expected feature matching, which says that the expected features of $\pi_R^*$ and those estimated from the demonstrations should be close. Thus, they notably proposed the projection method, which amounts to minimizing the Euclidean distance between those two expected features. \textcite{NeuSzepesvari07} proposed a natural gradient method for minimizing this objective function. \textcite{SyedSchapire08} reformulated the projection method problem as a zero-sum two-player game, with the nice property that the learned policy may perform better than the demonstrated one. \textcite{AbbeelNg04}'s work was extended to the partially observable case \cite{ChoiKim11}. Besides, \textcite{RatliffBagnellZinkevich06} proposed a max-margin approach enforcing that the found solution is better than any other one by at least a margin. Interestingly, the method can learn from multiple MDPs. It was later extended to the non-linear feature case \cite{RatliffBradleyBagnellChestnutt07}. Another technique \cite{KleinGeistPiotPietquin12,PiotGeistPietquin14} consists in learning a classifier based on a linearly parametrized score function to predict the best action for a state given the set of demonstrations. The learned score function can then be interpreted as a value function and can be used to recover a reward function. Traditional IRL methods learn from (near) optimal demonstrations. More recent approaches extend IRL to learn from other types of observations, e.g., a set of (non-necessarily optimal) demonstrations rated by an expert \cite{El-AsriPiotGeistLarochePietquin16,BurchfieldTomasiParr16}, bad demonstrations \cite{SebagAkrourMayeurSchoenauer16} or pairwise comparisons \cite{da-SilvaCostaLima06,WirthNeumann15}.
In the latter case, the interactive setting is investigated with a reliable expert \cite{ChernovaVeloso09} or an unreliable one \cite{WengBusaFeketeHullermeier13}. {\bf Generative model assumption.} Another way to tackle the degeneracy issue is to assume a probabilistic model on how observations are generated. Here, most work assumes that the expert policy is described by Boltzmann distributions, where higher-valued actions are more probable. Two notable exceptions are the work of \textcite{GrollmanBillard11}, which shows how to learn from failed demonstrations using Gaussian mixture models, and the Bayesian approach of \textcite{RamachandranAmir07}, with the assumption that state-action pairs in demonstrations follow such a Boltzmann distribution. This latter approach has been extended to Boltzmann distribution-based expert policies and to multi-task learning \cite{DimitrakakisRothkopf11}, and to account for multiple reward functions \cite{ChoiKim12}. This Bayesian approach has also been extended to interactive settings where the agent can query for an optimal demonstration in a chosen state \cite{LopesMeloMontesano09} or for a pairwise comparison \cite{WilsonFernTadepalli12,AkrourSchoenauerSebag13,AkrourSchoenauerSoupletSebag14}. Without assuming a prior, \textcite{Babes-VromanMarivateSubramanianLittman11} proposed to recover the expert reward function by maximum likelihood. The approach is able to handle the possibility of multiple intentions in the demonstrations. Furthermore, \textcite{NguyenLowJaillet15} suggested an Expectation-Maximization approach to learn from demonstrations induced by locally consistent reward functions.
To tackle the degeneracy issue, \textcite{ZiebartMaasBagnellDey10} argued for the use of the maximum entropy principle, which states that among all solutions that fit the observations, the least informative one (i.e., maximum entropy) should be chosen, with the assumption that a reward function induces a Boltzmann probability distribution over trajectories. When the transition function is not known, \textcite{BoulariasKoberPeters11} extended this approach by proposing to minimize the relative entropy between the probability distribution (over trajectories) induced by a policy and a baseline distribution under an expected feature matching constraint. \textcite{WulfmeierOndruskaPosner15} extended this approach to the case where a deep neural network is used for the representation of the reward function, while \textcite{BogertLinDoshiKulic16} took into account non-observable variables. \subsection{Preference-Based Approaches} \label{sec:preferencebased} Another line of work redefines policy optimality directly based on pairwise comparisons of histories without assuming the existence of a scalar numeric reward function. This notably accounts for situations where reward values and probabilities are not commensurable. In this context, different decision criteria (e.g., quantile \cite{GilbertWeng16}) may be used. One popular decision model (\cite{YueBroderKleinbergJoachims12,FurnkranzHullermeierChengPark12}) is defined as follows: a policy $\pi$ is preferred to another policy $\pi'$ if \begin{align} \mathbb{P}[h^\pi \succsim h^{\pi'}] \ge \mathbb{P}[h^{\pi'} \succsim h^\pi], \label{eq:probabilisticdominance} \end{align} where $\succsim$ is a preorder over histories, $h^\pi$ is a random variable representing the history generated by policy $\pi$ and therefore $\mathbb{P}[h^\pi \succsim h^{\pi'}]$ is the probability that a history generated by $\pi$ is not less preferred than a history generated by $\pi'$. 
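The two probabilities in (\ref{eq:probabilisticdominance}) can be estimated by Monte Carlo from sampled histories; a sketch where the preorder $\succsim$ is passed as a comparison function (our own illustration):

```python
def dominance_probabilities(hists_pi, hists_pi2, succsim):
    """Estimate P[h^pi >= h^pi'] and P[h^pi' >= h^pi] by comparing all pairs of
    sampled histories under the preorder `succsim` (a boolean comparison)."""
    pairs = [(h1, h2) for h1 in hists_pi for h2 in hists_pi2]
    p12 = sum(succsim(h1, h2) for h1, h2 in pairs) / len(pairs)
    p21 = sum(succsim(h2, h1) for h1, h2 in pairs) / len(pairs)
    return p12, p21
```

Policy $\pi$ is then preferred to $\pi'$ when the first estimate is at least the second.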
Based on (\ref{eq:probabilisticdominance}), \textcite{FurnkranzHullermeierChengPark12} proposed a policy iteration algorithm. However, one crucial issue with (\ref{eq:probabilisticdominance}) is that the concept of optimal solution is not well-defined as (\ref{eq:probabilisticdominance}) can lead to preference cycles \cite{GilbertSpanjaardViappianiWeng15}. \textcite{BusaFeketeSzorenyiWengChengHullermeier14} circumvented this problem by refining this decision model with criteria from social choice theory. In \cite{GilbertSpanjaardViappianiWeng15}, the issue was solved by considering mixed solutions: an optimal mixed solution is guaranteed to exist by interpreting it as a Nash equilibrium of a two-player zero-sum game. \textcite{GilbertZanuttiniViappianiWengNicart16} proposed a model-free RL algorithm based on a two-timescale technique to find such a mixed optimal solution. \subsection{Risk-Sensitive Criteria} \label{sec:risksensitive} Taking into account risk is important in decision-making under uncertainty (see Chapter 17 of this volume). The standard criterion based on expectation is risk-neutral. When it is known that a policy will only be used a limited number of times, variability in the obtained rewards should be penalized. Besides, in some hazardous domains, good policies need to absolutely avoid bad or error states. In those two cases, preferences over policies need to be defined to be risk-sensitive. In its simplest form, risk can directly be represented as a probability. For instance, \textcite{GeibelWysotzky05} adopted such an approach and considered MDP problems with two objectives, where the first objective is the standard decision criterion and the second is to minimize the probability of reaching a set of bad states. A more advanced approach is based on risk-sensitive decision criteria \cite{BarberaHammondSeidl99}.
Variants of Expected Utility \cite{Machina88}, which is the standard risk-sensitive criterion, were investigated in two cases: when the utility function is exponential \cite{Borkar10,MoldovanAbbeel12} and when it is quadratic \cite{TamarDi-CastroMannor12,TamarDi-CastroMannor13,Gosavi14}. In the latter case, the criterion amounts to penalizing the standard criterion by the variance of the cumulated reward. While the usual approach is to transform the cumulated reward, \textcite{MihatschNeuneier02} proposed to directly transform the temporal differences during learning. Other approaches consider risk measures \cite{DenuitDhaeneGoovaertsKaasLaeven06} and in particular coherent risk measures \cite{ArtznerDelbaenEberHeath99}. Value-at-Risk (VaR), popular in finance, was considered in \cite{GilbertWeng16}. Policy gradient methods \cite{ChowGhavamzadeh14,TamarGlassnerMannor15} were proposed to optimize Conditional Value-at-Risk (CVaR) and were extended to any coherent risk measure \cite{TamarChowGhavamzadehMannor15}. \textcite{JiangPowell16} proposed dynamic quantile-based risk measures, which encompass VaR and CVaR, and investigated an approximate dynamic programming scheme to optimize them. In risk-constrained problems, the goal is to maximize the expectation of return while bounding a risk measure. For variance-constrained problems, \textcite{PrashanthGhavamzadeh16} proposed an actor-critic algorithm. For CVaR-constrained problems, \textcite{BorkarJain14} proposed a two-timescale stochastic approximation technique, while \textcite{ChowGhavamzadehJansonPavone16} investigated policy gradient and actor-critic methods. One important issue to consider when dealing with risk-sensitive criteria is that the Bellman optimality principle generally does not hold anymore: a sub-policy of an optimal risk-sensitive policy may not be optimal.
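As an illustration of the CVaR criterion mentioned above, its empirical version on sampled returns is a short computation (a sketch; the lower tail is the risky one since higher returns are better):

```python
def empirical_cvar(returns, level=0.1):
    """Empirical Conditional Value-at-Risk at `level`: the mean of the worst
    level-fraction of sampled returns (the lower tail of the distribution)."""
    k = max(1, int(level * len(returns)))
    tail = sorted(returns)[:k]
    return sum(tail) / k
```

Maximizing this quantity instead of the plain average makes the agent focus on its worst outcomes.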
However, in most cases, the Bellman optimality principle may be recovered by considering a state-augmented MDP, where the state includes the rewards cumulated so far \cite{LiuKoenig06}. \section{Conclusion}\label{sec:conclusion} Recently, thanks to a number of success stories, reinforcement learning (RL) has become a very active research area. In this chapter, we recalled the basic setting of RL. Our focus was to present an overview of the main techniques, which can be divided into value-based and policy search methods, for solving large-sized RL problems with function approximation. We also presented some approaches for tackling the issue of unknown rewards that a system designer would encounter in practice and recalled some recent work in RL when risk-sensitivity needs to be taken into account in decision-making. Currently, the sample and computational requirements of RL are still too large for many practical domains (e.g., robotics). Research work is very active on improving RL algorithms along those two dimensions, notably by exploiting the structure of the problem \cite{KulkarniNarasimhanSaeediTenenbaum16} or other a priori knowledge, expressed in temporal logic \cite{WenPapushaTopcu17} for instance, or by reusing previous learning experience with transfer learning \cite{TaylorStone09}, lifelong learning \cite{Bou-AmmarTutunovEaton15}, multi-task learning \cite{WilsonFernRayTadepalli07} or curriculum learning \cite{WuTian17}, to cite a few. Having more efficient RL algorithms is important as it will pave the way to more applications in more realistic domains.
\section{Introduction} In recent years, single-channel speech enhancement methods based on deep learning have been proposed in large numbers, and they can be divided into time domain and frequency
domain methods. Time-domain methods~\cite{segan}~\cite{Hao2019}~\cite{tcnn}~\cite{wavenet_d} typically use the time-domain signal of noisy speech as a direct input to a neural network, and the learning target is the clean speech signal. Frequency-domain methods~\cite{phase_sensitive_bilstm}~\cite{regression_approch}~\cite{se_overview}~\cite{ccrn} typically use the spectral features of noisy speech (magnitude spectrum, phase spectrum, complex spectrum, etc.) as inputs to a neural network, and learning targets are the spectral features of clean speech or some masks (e.g., ideal binary mask~\cite{ibm_as_goal_wang_2005}, ideal ratio mask~\cite{irm}, complex ideal ratio mask~\cite{cIRM}). The frequency-domain methods still dominate the vast majority of current single-channel speech enhancement methods due to the high dimension and the lack of apparent geometric structure for the time domain signal. In the frequency domain, most speech enhancement methods focus on the full-band spectral representation. They take the full-band feature sequence directly as the input of the neural network. Unlike them, this paper proposes a sub-band speech enhancement method. This method uses the magnitude feature sequence within a certain sub-band on the noisy spectrogram as the input of the model. The learning target is the magnitude feature sequence of the corresponding sub-band of the clean speech. This method has some advantages over the full-band methods. \textbf{(1)} For neural networks, the simpler the features to be learned, the easier the optimization of the network. Spectral features are highly patterned features, with low, medium, and high frequencies having significantly different data distributions. The reasonable division of sub-bands is equivalent to introducing a priori knowledge to the model so that the model can focus on learning more stable features on each sub-band. This method is similar to the partition technology in face recognition. 
The human face has a fixed pattern. By hard-segmenting the face into regions (e.g., mouth, eyes), identifying each region independently, and then integrating the results, recognition accuracy can be improved~\cite{face_partition}. \textbf{(2)} Most speech enhancement methods use an L-norm ($\ell_1, \ell_2$) loss function, for which the contribution of each frequency band to the total loss is identical. This may not suit the actual situation, in which speech energy is mainly concentrated in the low and medium frequencies. Correspondingly, the low and medium frequencies play a more important role in the loss. However, high-frequency information is essential for perceptual evaluation. The proposed method optimizes and calculates the loss for each sub-band separately to avoid the problem of applying the L-norm loss to the full-band spectra. Besides, a proper division of sub-bands also prevents the model from losing too much of its ability to capture cross-band information. Some previous works have explored non-full-band methods. In~\cite{li_multichannel_2019}, a narrow-band LSTM is proposed to deal with multichannel speech enhancement. The LSTM takes as input a sequence of TF bins associated with a frequency bin of multichannel noisy speech signals. The output of the model is a sequence of magnitude ratio masks at the same frequency bin. Experimental results show that the narrow-band method has outstanding performance. In~\cite{wei_single-channel_2018}, the spectrogram is divided into different non-uniform sub-bands, and spectral subtraction is performed separately in each sub-band to obtain accurate noise estimates. In~\cite{wang_exploring_2013}, the sub-band feature is used to classify speech or noise, which indicates that the local spectral pattern is informative for discriminating between speech and other signals.
Different sub-bands have very different data distributions, and it is undoubtedly challenging to learn them simultaneously through a single small-scale model. Integrating multiple sub-models is an effective method, but it is troublesome during deployment. In this paper, we extend the sub-band enhancement model to a knowledge distillation framework containing multiple elite-level teacher models and one general-purpose student model for speech enhancement. Knowledge distillation was first proposed in~\cite{hinton2015distilling}. As a compression~\cite{model_compression_distilling} or training technique~\cite{On_the_Efficacy_of_Knowledge_Distillation}~\cite{Towards_Understanding_Knowledge_Distillation}, it has been widely used in machine translation~\cite{nmt_knowledge_distillation}, image object detection~\cite{object_detection_knowledge_distillation}, speech recognition~\cite{danpovey_distilling}~\cite{speech_recogition_2016distilling} and other fields. Knowledge distillation usually uses the category probability labels of a large teacher model together with the original labels as the learning target of a small student model. This gives the small student model learning ability close to that of the large teacher model. In the proposed sub-band knowledge distillation framework, we first train multiple sub-band enhancement models (teacher models). Each of these models specializes in feature mapping within a certain sub-band and performs speech enhancement well in that sub-band. After that, we use these teacher models to guide a sub-band enhancement model (student model) to learn all sub-bands. Finally, the student model's performance is further improved without increasing its number of parameters or computational cost. In order to evaluate the effectiveness and performance of the proposed method, we conducted several experiments and analyses on an open-source data set.
The final experimental results show that the teacher model's guidance dramatically improves the student model's performance. Moreover, the performance of the student model exceeds that of the corresponding full-band model. \section{Method} We use the representation of the speech signal in the short-time Fourier transform (STFT) domain: \begin{equation} X(t,f) = S(t,f) + N(t,f) \end{equation} where $X(t,f)$, $S(t,f)$ and $N(t,f)$ respectively represent the time-frequency (T-F) bin of noisy speech, clean speech and interference noise at time frame $t$ and frequency bin $f$. $t \in Z^+, 1 \leq t \leq T$. $f \in Z^+, 0 \leq f \leq F - 1$. $T$ and $F$ denote the total number of frames and the total number of frequency bins, respectively. For the closed interval $[0, F - 1]$, we take an ordered sequence of equally spaced points: \begin{eqnarray} 0 = f_0 < f_1 < f_2 < ... < f_n = F - 1 \end{eqnarray} We refer to the closed interval $[f_i,f_{(i+1)}]$ as a sub-band, where $0 \leq i \leq n - 1$. $[0, F - 1]$ contains $n$ sub-bands. The frequency width of all sub-bands is fixed at $\lfloor \frac{F}{n} \rfloor$ except for the last one. In this paper, we use the magnitude component of the speech in the STFT domain as the input and training target. We use a single neural model to map all sub-bands: \begin{gather} |\hat{S}(i)| = G(|X(i)|) \\ |X(i)| = \left( \begin{smallmatrix} |X(1, f_i)| & |X(2, f_i)| & \cdots & |X(T, f_i)| \\ |X(1, f_{i} + 1)| & |X(2, f_{i} + 1)| & \cdots &|X(T, f_{i} + 1)| \\ \vdots & \vdots & \ddots & \vdots \\ |X(1, f_{i + 1})| & |X(2,f_{i + 1})| & \cdots & |X(T, f_{i + 1})| \end{smallmatrix} \right) \notag \end{gather} where $|X(i)|, 0 \leq i \leq n - 1$ is the matrix of noisy magnitude features in the $i$-th sub-band, $G( \cdot )$ is the sub-band enhancement model, and $|\hat{S}(i)|$ is the enhanced feature in the $i$-th sub-band with the same dimension as $|X(i)|$.
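The sub-band division above can be sketched in numpy as follows (a minimal illustration; the function name is ours, and for simplicity the last sub-band here absorbs any leftover high-frequency bins, whereas in the experiments below the leftover bins are simply discarded):

```python
import numpy as np

def split_subbands(mag, n):
    """Split a magnitude spectrogram |X| of shape (F, T) into n sub-bands,
    each of fixed width floor(F / n); the last sub-band also takes the
    remaining high-frequency bins."""
    F = mag.shape[0]
    w = F // n
    bands = [mag[i * w:(i + 1) * w] for i in range(n - 1)]
    bands.append(mag[(n - 1) * w:])  # last band absorbs the remainder
    return bands

# Example with F = 161 frequency bins (as in the experiments) and T = 100 frames
mag = np.abs(np.random.randn(161, 100))
bands = split_subbands(mag, n=4)
widths = [b.shape[0] for b in bands]  # sub-band widths along the frequency axis
```

With $F = 161$ and $n = 4$, each of the first three sub-bands has width $\lfloor 161/4 \rfloor = 40$ and the last one has width $41$.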
We treat the divided sub-bands as independent units, and the model can be optimized for each sub-band individually. In an extreme case, we can divide the spectrum into $F$ sub-bands. Since simultaneously handling sub-bands with different data distributions is very challenging for a single model, inspired by~\cite{hinton2015distilling}, we introduce the sub-band enhancement model into a knowledge distillation-based framework. The basic principle of knowledge distillation is that a soft label generated by the large model contains more information than the original target label. In general, if a large model gives a higher probability of the input features for specific categories, it means that those categories are similar to the real ones. We typically use knowledge distillation to distill the knowledge contained in a large-scale model (teacher model) into a smaller-scale model (student model), to compress the model or to enhance performance. In this paper, our knowledge distillation framework contains multiple teacher models for local sub-bands and a student model for all sub-bands. We train these teacher models so that they have excellent performance in their specific sub-bands. During student-model training, we select different teacher models according to the input sub-band to guide the student model, improving the student model's performance without changing its scale. \subsection{Training of the teacher models} In order to build a sub-band knowledge distillation framework, we construct $n$ sub-band enhancement models $G_0 (\cdot), G_1(\cdot), ..., G_{(n-1)} (\cdot)$ to map the $n$ magnitude sub-bands respectively: \begin{gather} |\hat{S}| = \left( \begin{matrix} G_0(|X(0)|) \\ G_1(|X(1)|) \\ \vdots \\ G_{n-1}(|X(n-1)|) \end{matrix} \right) \end{gather} Here, $|\hat{S}|$ is the magnitude spectrum enhanced by the $n$ different sub-band enhancement models.
Each sub-band enhancement model is responsible only for enhancing frequency bands within a fixed range, and each is an expert in the frequency bands that belong to it. We call these $n$ sub-band enhancement models teacher models. Once these teacher models are thoroughly trained, we fix their weights. Subsequently, we use these teacher models to guide a sub-band enhancement model (student model) to better learn the different sub-bands. \subsection{Distillation} When we select any one of the $n$ sub-bands to feed into the student model, we use the corresponding teacher model to guide the training of the student model. The training targets of the student model are the clean speech and the enhanced speech of the teacher model. We define the loss function $L$ as follows. \begin{eqnarray} \label{eq:loss} L=(G_s (|X_i |)-|S_i |)^2 + \alpha (G_s (|X_i |)-G_i (|X_i |) )^2 \end{eqnarray} where $G_s$ is the student sub-band enhancement model, and $G_i$ is the teacher model for the $i$-th sub-band, whose weights are fixed when training the student model. $|S_i|$ is the $i$-th sub-band of the clean speech, and $\alpha$ is a weight representing the proportion of the teacher-related loss in the total loss. It is worth mentioning that, to ensure that the student model can get enough beneficial information from the teacher model, we usually train the teacher model with a model size greater than or equal to that of the student model. \section{Experiment} \subsection{Dataset and metrics} In this paper, we evaluate our proposed method using an open-source pre-mixed data set\footnote[1]{http://dx.doi.org/10.7488/ds/1356} that contains a large number of noises and speakers. This data set includes 30 randomly selected speakers from the Voice Bank Corpus~\cite{voice_bank_corpus}. Each speaker has approximately 400 sentences. Twenty-eight speakers (14 males and 14 females) were used for training, and two speakers (1 male and 1 female) for testing.
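Returning to the distillation step, the loss in Eq.~(\ref{eq:loss}) can be sketched in numpy (a minimal illustration; the function name is ours, and the squared differences are averaged over the sub-band, matching an MSE-style training objective):

```python
import numpy as np

def distill_loss(student_out, clean_sub, teacher_out, alpha=0.1):
    """Sub-band distillation loss: an MSE term against the clean sub-band
    plus an alpha-weighted MSE term against the (frozen) teacher's output."""
    mse_clean = np.mean((student_out - clean_sub) ** 2)
    mse_teacher = np.mean((student_out - teacher_out) ** 2)
    return mse_clean + alpha * mse_teacher

# Sanity check: if the student matches the clean target exactly,
# only the teacher term contributes to the loss.
clean = np.ones((40, 10))          # a 40-bin sub-band over 10 frames
teacher = 0.9 * clean              # stand-in teacher output
loss = distill_loss(clean, clean, teacher, alpha=0.1)
# loss = 0 + 0.1 * (1.0 - 0.9)^2 = 0.001
```

During training, only the student's parameters receive gradients; the teacher's output acts as a fixed soft target.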
The training set of this data set contains a total of 10 different noises (2 synthetic noises and 8 noises selected from the Demand data set~\cite{demand}) and 4 different SNRs (0, 5, 10, and 15 dB). In total, the training set contains 40 different noise conditions (10 noises $\times$ 4 SNRs). Each training speaker has about 10 sentences in each noise condition. The test set contains a total of 5 noise types (selected from the Demand data set~\cite{demand}) not present in the training set and 4 slightly higher SNRs (2.5, 7.5, 12.5, and 17.5 dB) than the training set. In total, the test set contains 20 different noise conditions (5 noises $\times$ 4 SNRs). There are around 20 different sentences in each condition per test speaker. There is no validation set in the original data set to evaluate the training process, so we randomly selected 1000 utterances from the training set as a validation set. In this paper, we use the perceptual evaluation of speech quality (PESQ)~\cite{pesq} and the short-term objective intelligibility (STOI)~\cite{stoi} to evaluate the quality and intelligibility of speech, respectively. They have been used in a large number of speech enhancement experiments. \subsection{Implementation and details} The student model and teacher models use the same model structure. We stacked two bidirectional long short-term memory (LSTM) layers and one fully connected layer. We applied ReLU as an output activation layer. The number of memory cells in each layer of the LSTM is set according to the specific experiment. The sampling rate of all utterances is 16000 Hz. For the input of the model, 161-dimensional magnitude spectral features were extracted by performing STFT using a 320-point Hann window with an overlap of 50\%. We trained the teacher models ($G_0$, $G_1$, ..., $G_{n-1}$) one by one.
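The feature extraction above can be checked with a short numpy sketch (ours, not the authors' code): a 320-point real FFT yields $320/2 + 1 = 161$ frequency bins, and a 50\% overlap corresponds to a hop of 160 samples.

```python
import numpy as np

fs, nperseg, hop = 16000, 320, 160   # 16 kHz audio, 320-point window, 50% overlap
x = np.random.randn(fs)              # one second of stand-in audio
win = np.hanning(nperseg)            # Hann analysis window
frames = [x[i:i + nperseg] * win
          for i in range(0, len(x) - nperseg + 1, hop)]
spec = np.array([np.fft.rfft(f) for f in frames]).T  # shape (161, n_frames)
mag = np.abs(spec)                   # 161-dimensional magnitude features
```

For one second of 16 kHz audio this framing produces 99 frames of 161-dimensional magnitude features.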
The sub-band processed by the $G_0$ model is the lowest one (low frequency) on the spectrogram, and the sub-band processed by the $G_{n-1}$ model is the highest one (high frequency) on the spectrogram. Each teacher model handles its own sub-band with a size of $\lfloor \frac{161}{n} \rfloor$. The few high-frequency bands remaining in the spectrogram are not used. The student model needs to deal with all sub-bands. Before each training epoch starts, we randomly select any one of the $n$ sub-bands as input to the student model. In the inference stage, we use the trained student model to enhance the sub-bands of the noisy speech one by one. The few high-frequency bands remaining in the noisy spectrogram are not enhanced. Then, combining with the phase component of the noisy STFT spectrogram, we use the inverse STFT to convert the enhanced magnitude spectrogram into the time-domain speech waveform. Whether training a student model or a teacher model, we shuffle the data set before each epoch begins. All hyperparameters are the same for all models. We use the mean square error (MSE, $\ell_2$) as the loss function. We use the Adam optimizer~\cite{adam} (decay rates $\beta_{1} = 0.9$, $\beta_{2} = 0.999$) and set the batch size to 600. We set the initial learning rate of the models to a small constant of 0.0002. When the loss of the model on the training set oscillates, we appropriately reduce the learning rate. When the loss of the model on the validation set does not drop noticeably for many epochs, we stop training. According to preliminary experimental results on the validation set, we set $\alpha$ in Equation~(\ref{eq:loss}) to $0.1$. \subsection{Result} \subsubsection{Effect of different sub-band sizes on model performance} In our proposed method, each sub-band requires a teacher model. A question that must be answered is what the proper sub-band size is. We trained three sets of sub-band enhancement models of different scales.
The three sets of models have the same model structure except for the difference in the number of memory cells per layer in the LSTM (256, 512, and 1024). Each set contains four sub-band enhancement models, whose input sub-band sizes are 1, 20, 40, and 80, respectively. In total, we trained 12 sub-band enhancement models. We evaluate the performance of these 12 models on the test set, and the results are listed in Table~\ref{tab:differentSubBandLength}. From Table~\ref{tab:differentSubBandLength}, we notice that regardless of the size of the model and the sub-band size, the enhanced speech quality and intelligibility are noticeably improved compared to the original noisy speech. When the size of the sub-band is 1, the input of the model is a single frequency band. We usually call this kind of model a narrow-band model. From the table, we can note that regardless of the scale of the model, the narrow-band model is the worst among all control groups. This is because the narrow-band model has very little frequency context information, and it cannot exploit feature information across frequency bands. We also notice that as the size of the sub-band increases, the quality and intelligibility of the enhanced speech increase overall. This indicates that a larger sub-band makes it easier for the model to explore local cross-band features. We also notice that when the sub-band size is 40, the quality and intelligibility of the enhanced speech are slightly better than when the sub-band size is 80. This possibly indicates that a sub-band size of 40 is already enough to cover most local features. We will use the sub-band enhancement models with an input size of 40 as our basic models.
\begin{table}[h] \centering \caption{Comparison of the models of different scales under different sizes of sub-bands.} \label{tab:differentSubBandLength} \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{@{}ccccccc@{}} \toprule & \multicolumn{3}{c}{PESQ} & \multicolumn{3}{c}{STOI (\%)} \\ \cmidrule{2-4} \cmidrule{5-7} Sub-band size & 256 & 512 & 1024 & 256 & 512 & 1024 \\ \midrule Noisy & \multicolumn{3}{c}{1.971} & \multicolumn{3}{c}{92.106} \\ 1 & 2.151 & 2.224 & 2.277 & 92.415 & 92.174 & 93.082 \\ 20 & 2.369 & 2.430 & 2.441 & 92.877 & 93.265 & 93.476 \\ 40 & \textbf{2.425} & \textbf{2.497} & 2.512 & \textbf{93.421} & \textbf{93.540} & 93.721 \\ 80 & 2.417 & 2.490 & \textbf{2.513} & 93.243 & 93.471 & \textbf{93.727} \\ \bottomrule \end{tabular} } \end{table} \begin{table*}[t] \centering \caption{The mean square error (MSE, $\ell_2$ loss) of the teacher models and the student models (without the guidance of the teacher models) with different numbers of memory cells per layer in LSTM for each sub-band on the test set. C is the number of memory cells per layer in LSTM.} \label{tab:mse} \begin{tabular}{@{}ccccccccc@{}} \toprule & \multicolumn{4}{c}{Student Model} & \multicolumn{4}{c}{Teacher Model} \\ \cmidrule(l){2-5} \cmidrule(l){6-9} C & 0-40 & 40-80 ($10^{-4}$) & 80-120 ($10^{-4}$) & 120-160 ($10^{-4}$) & 0-40 & 40-80 ($10^{-4}$) & 80-120 ($10^{-4}$) & 120-160 ($10^{-4}$) \\ \midrule 256 & 0.036 & 10.027 & 3.647 & 2.801 & 0.031 & 9.230 & 3.578 & 2.204 \\ 512 & 0.026 & 9.701 & 3.594 & 2.508 & 0.019 & 9.038 & 3.494 & 1.792 \\ 1024 & 0.025 & 9.162 & 3.295 & 2.095 & \textbf{0.013} & \textbf{8.716} & \textbf{3.020} & \textbf{1.634} \\ \bottomrule \end{tabular} \end{table*} \subsubsection{Comparison between teacher models and student models} Table~\ref{tab:mse} comprehensively compares the $\ell_2$ loss between the teacher models and the student models (without the guidance of the teacher models) with different numbers of memory cells per layer in LSTM for each sub-band.
C is the number of memory cells. The left half of the table is the result area of the student models, and the right half is the result area of the teacher models. For example, in the result area of the student models, the value in the first column (0 to 40) of the first row (C is 256) shows the result of the student model with 256 memory cells for the 0 to 40 sub-band. The input to the student model is one of the four sub-bands in the training sample (randomly reselected for each iteration). The training target of the student model is solely the corresponding sub-band of the clean speech. The value of $0.036$ represents the $\ell_2$ loss between the enhanced test sample and the target. In the result area of the teacher models, the first column (0 to 40) of the first row (C is 256) lists the result of the teacher model with 256 memory cells for the 0 to 40 sub-band. The training data is the sub-band with a frequency range of 0 to 40 in each sample, and the target is the corresponding sub-band of the clean speech. The value of $0.031$ is the $\ell_2$ loss between the enhanced test sample and the target. $10^{-4}$ in the table means that the value in the table is multiplied by $10^{-4}$. In total, we trained 3 student models (3 different numbers of memory cells) and 12 teacher models (3 different numbers of memory cells $\times$ 4 sub-bands). Observing the results in the table, we notice some phenomena. \textbf{(1)} It is not surprising that the teacher models trained for specific sub-bands are better than the student model trained for all sub-bands. This shows that although the student model can reduce the pressure of learning by capturing the commonality of each sub-band, the benefits of this commonality are not as good as the integration of multiple dedicated models. \textbf{(2)} We also notice that as the number of memory cells increases, the $\ell_2$ loss of the teacher model and the student model on the test set decreases overall. This is as expected.
\textbf{(3)} In the 0 to 40 sub-band, the $\ell_2$ loss is much higher than in the 40 to 160 range. This is mainly because human speech energy is concentrated primarily in the low-frequency part of the spectrogram, resulting in a complicated data distribution. This also shows that the different frequency bands in the spectrogram contribute differently to the speech quality and intelligibility. \subsubsection{Effectiveness of teacher model guidance} From the above 12 teacher models, we select the 4 teacher models that perform best in the 4 sub-bands, which will be used to guide the training of student models. Considering that we generally distill the knowledge of large models into small models in knowledge distillation, we set up two sets of models. These two sets contain 256 memory cells and 512 memory cells per layer in the LSTM, respectively. Each set includes a full-band model (F), a student model without teacher model guidance (S1), and a student model with teacher model guidance (S2). We list the performance comparison results of these models in Table~\ref{tab:comparisonTeachers}. In Table~\ref{tab:comparisonTeachers}, the S1 model and the S2 model have the same number of parameters. Since the input of the F model is the full-band features, the number of parameters of the F model is higher than that of the S1 model and the S2 model. Whether the number of memory cells is 256 or 512, we can find some similar conclusions. \textbf{(1)} The performance of the S1 model is slightly worse than that of the F model. This may be because full-band input can bring more context information, which can help the model to better capture the features on the spectrogram. However, the performance gap is tiny: the F model is only 0.56 to 0.66\% and 0.29 to 0.30\% higher on PESQ and STOI, respectively. This may be because the S1 model can cover 40 frequency bands at a time, which is sufficient to capture most of the local feature information. On the other hand, S1 has a 7\% reduction in the number of parameters compared to the F model.
Considering that the speech enhancement model is usually deployed offline on hardware, this slight performance degradation is normally acceptable. \textbf{(2)} With the guidance of the elite teacher models, the S2 model achieved better results than S1, which shows that the supervision of the teacher models is effective. It is worth mentioning that this improvement does not increase the number of parameters or the computational cost of the model. We also note that although the S2 model also does not have complete full-band context information, its performance noticeably exceeds that of model F. \begin{table}[] \centering \caption{Effectiveness of the sub-band knowledge distillation framework. C represents the number of memory cells per layer in LSTM. \#Params represents the number of parameters of the model.} \label{tab:comparisonTeachers} \begin{tabular}{ccccc} \toprule Model & C & \#Params (M) & PESQ & STOI (\%) \\ \midrule Noisy & - & - & 1.971 & 92.106 \\ \midrule F & 256 & 2.52 & 2.420 & 93.412 \\ S1 & 256 & 2.21 & 2.404 & 93.133 \\ S2 & 256 & 2.21 & \textbf{2.471} & \textbf{93.751} \\ \midrule F & 512 & 9.23 & 2.511 & 93.817 \\ S1 & 512 & 8.61 & 2.497 & 93.540 \\ S2 & 512 & 8.61 & \textbf{2.563} & \textbf{94.129} \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} In this paper, we propose a sub-band knowledge distillation framework for single-channel speech enhancement. We divide the spectrogram into multiple sub-bands and train an elite-level sub-band enhancement model (teacher model) on each sub-band. These models are then used to guide the training of a general sub-band enhancement model (student model). We evaluated our proposed method on an open-source data set. We found that compared to the full-band model, although the sub-band enhancement model loses the ability to capture global cross-band information, its performance does not drop as significantly as one might expect.
We also found that the guidance of the teacher models further improves the performance of the sub-band enhancement model. Moreover, the performance of the student model exceeds that of the corresponding full-band model. \bibliographystyle{IEEEtran}
\section{Introduction}\label{Intro} It is anticipated that quantum computers will be able to simulate quantum systems exponentially faster than classical computers \cite{QC_gen_1, QC_gen_2, uni_q_s
im}. A promising method to perform this task on emerging noisy intermediate-scale quantum (NISQ) computers \cite{NISQ,google_supreme} is the variational-quantum-eigensolver (VQE) algorithm \cite{UCCSD_0, vqe_2, VQE_accelerated, vqe_google_hf}. The VQE is a hybrid classical-quantum variational algorithm that determines the eigenvalues of a Hamiltonian operator by minimizing its expectation value with respect to a parametrized ansatz state. The VQE can be used to solve the electronic structure problem, determining the ground-state energy eigenvalue of a molecular Hamiltonian \cite{vqe_general, CCSD}. Compared to other purely quantum algorithms for eigenvalue determination, e.g., the quantum phase-estimation algorithm \cite{QC_QI,QPE}, the VQE uses shallower quantum circuits and is more error tolerant \cite{UCCSD_0}, at the expense of requiring a greater number of measurements and more classical processing. Two major challenges in the practical realization of the VQE on NISQ computers are the design of ansatz states and the construction of efficient circuits to create these states. Most ansatz states currently considered by the scientific community correspond to applying a series of fermionic unitary evolutions to an initial reference state, e.g., the Hartree-Fock state. These fermionic unitary evolutions, which we will refer to as ``fermionic excitations'', are exponentials of parametrized single- and double-fermionic-excitation operators. Examples of such fermionic ansatz states include unitary coupled cluster (UCC) ansatz states \cite{UCCSD_0, UCCSD, UCCSD_2, k-upCCGSD}, and states constructed by iterative-VQE algorithms, e.g. the adaptive derivative-assembled pseudo-Trotter (ADAPT) VQE \cite{vqe_adapt, benchmark_adapt_vqe}. The standard circuits for fermionic excitations \cite{vqe_general, UCCSD, vqe_circs, staircase_example_q} rely on ``$CNOT$ staircases'' (see Sec.~\ref{sec:stair_case}), which are inefficient in terms of $CNOT$ gates.
The ability to implement a sufficient number of entangling gates with high enough fidelity is the current bottleneck in NISQ computing \cite{NISQ, CNOT_fidelity, CNOT_fidelity_2}. Therefore, more efficient fermionic-excitation circuits will improve the prospects of near-term realization of the VQE on NISQ computers. In this work, we demonstrate a methodology to construct circuits, optimized in the number of $CNOT$ gates and in the $CNOT$ circuit depth, that perform single and double fermionic excitations. First, we construct simpler excitations that do not account for fermionic anticommutation relations. We call these simplified excitations ``qubit excitations''. Single qubit excitations can be implemented by an exchange-interaction circuit \cite{part_hole,exchange_ansatz, entanglement_generation, VQE_hard_symmetry_preserve}. Second, we use this exchange-interaction circuit as a subcircuit to construct a double qubit excitation circuit. Finally, we modify these qubit-excitation circuits to account for fermionic anticommutation relations. In this way we construct circuits that offer a linear reduction in the number of $CNOT$ gates by factors of $2$ and $8$ per single and double fermionic excitation, respectively, compared to circuits constructed entirely with $CNOT$ staircases. The paper is organised as follows: In Sec. \ref{sec:stair_case}, we review the standard way of using $CNOT$ staircases to construct fermionic-excitation circuits. In Sec. \ref{sec:ncU1}, we outline a method, which is used throughout the paper, to implement multi-qubit-controlled rotations. In Sec. \ref{sec:qubit_exc}, we design circuits that perform single and double qubit excitations. In Sec. \ref{sec:f_exc}, we utilise our qubit excitations to construct circuits that perform fermionic excitations. We summarize our results in Sec. \ref{summary}.
\section{Theory}\label{sec:theory} Throughout this paper we assume the Jordan-Wigner encoding \cite{vqe_general, qubit_mapping}, where the occupancy of the $i^{\mathrm{th}}$ molecular spin orbital is represented by the state of qubit $q_i$. We denote spin orbitals with indices $i,j,k,l$, ordered as $i<j<k<l$. \subsection{Fermionic excitations}\label{sec:stair_case} In this section, we provide a brief description of what a fermionic excitation is, within the context of molecular VQE simulations. We also review the standard method to construct circuits that perform fermionic excitations \cite{vqe_general, UCCSD, vqe_circs, staircase_example_q}. We use the term fermionic excitation to refer to an exponential of $\theta$-parametrized fermionic excitation operators. Single and double fermionic excitation operators are defined, respectively, by the skew-Hermitian operators \begin{equation}\label{eq:s_f_exc_op} T^k_i(\theta) \equiv \theta( a^\dagger_k a_i - a^\dagger_i a_k)\;\;\mathrm{and} \end{equation} \begin{equation}\label{eq:d_f_exc_op} T_{ij}^{kl}(\theta) \equiv \theta( a^\dagger_k a^\dagger_l a_i a_j - a^\dagger_i a^\dagger_j a_k a_l), \end{equation} where $a_i^\dagger $ and $a_i$ denote fermionic creation and annihilation operators, respectively. Single and double fermionic excitations are expressed, respectively, by the unitary operators \begin{equation}\label{eq:s_f_exc_aa} U_{ki}^{\mathrm{(sf)}}(\theta) = e^{ T^{k}_{i}(\theta) }\;\;\mathrm{and} \end{equation} \begin{equation}\label{eq:d_f_exc_aa} U_{klij}^{\mathrm{(df)}}(\theta) = e^{ T^{kl}_{ij}(\theta) }. \end{equation} The operators $a_i$ and $a_i^\dagger $ obey the anticommutation relations \begin{equation} \label{eq:fermAnti} \{a_i,a^\dagger_j\} = \delta_{i,j} \; \; \mathrm{and} \; \; \{a_i, a_j\} = \{a_i^\dagger, a^\dagger_j\} = 0. 
\end{equation} Within the Jordan-Wigner encoding, $a$ and $a^\dagger$ can be written in terms of quantum-gate operators as \begin{equation}\label{eq:a} a_i = Q_i \prod_{r=0}^{i-1} Z_r=\frac{1}{2}(X_i+iY_i)\prod_{r=0}^{i-1} Z_r \;\;\mathrm{and} \end{equation} \begin{equation}\label{eq:a*} a_i^\dagger = Q_i^\dagger \prod_{r=0}^{i-1} Z_r = \frac{1}{2}(X_i-iY_i)\prod_{r=0}^{i-1} Z_r, \end{equation} where the qubit creation and annihilation operators are \begin{equation}\label{eq:Q_Q_0} Q_i^\dagger \equiv \frac{1}{2}(X_i - iY_i) \; \; \mathrm{and} \; \; Q_i \equiv \frac{1}{2}(X_i + iY_i) , \end{equation} respectively. $Q_i$ and $Q_i^{\dagger}$ (discussed further in Sec. \ref{sec:qubit_exc}) act to change the occupancy of orbital $i$. The product of Pauli-$z$ operators, $\prod_{r=0}^{i-1} Z_r$, acts as an exchange-phase factor, accounting for the anti-commutation relations of $a$ and $a^\dagger$. Using Eqs. (\ref{eq:a}) and (\ref{eq:a*}), a single fermionic excitation [Eq. \eqref{eq:s_f_exc_aa}] can be re-expressed in terms of quantum-gate operators: \begin{align} U_{ki}^{\mathrm{(sf)}}(\theta) = &\exp\Big[-i\frac{\theta}{2}(X_i Y_k - Y_i X_k)\prod_{r=i+1}^{k-1}Z_r\Big] \\ = & \exp\Big[-i\frac{\theta}{2}X_i Y_k\prod_{r=i+1}^{k-1}Z_r\Big] \nonumber \\ &\times \exp\Big[i\frac{\theta}{2}Y_i X_k\prod_{r=i+1}^{k-1}Z_r\Big] \label{eq:s_f_exc_0} . \end{align} The exponential $\exp\big[-i\frac{\theta}{2}X_i Y_k\prod_{r=i+1}^{k-1}Z_r\big]$ in Eq. (\ref{eq:s_f_exc_0}), can be implemented by the circuit in Fig. \ref{fig:stair_case}. The two $CNOT$ ``staircases'' together with the $R_z(\theta)$ rotation in between them (Fig. \ref{fig:stair_case}), are referred to as a ``$CNOT$-staircase construction''. This construction implements $U_{stair}\equiv \exp\big[i\theta \prod_{r=i}^{k} Z_r\big]$. The $H$ and $R_x(\pm \frac{\pi}{2})$ gates, on both sides of the $CNOT$ staircase construction in Fig. 
\ref{fig:stair_case}, act to transform the $Z_i$ and $Z_k$ operators in $U_{stair}$ to $X_i$ and $Y_k$ operators, respectively. Similarly, by sandwiching a $CNOT$ staircase construction between single-qubit rotations that transform individual Pauli-$z$ operators to Pauli-$x$ or Pauli-$y$ operators, any exponential of a product of Pauli operators can be constructed. Hence, the single fermionic excitation [Eq. \eqref{eq:s_f_exc_0}] can be implemented by a circuit that contains $2$ $CNOT$ staircase constructions. The full circuit for a single fermionic excitation, constructed using the aforementioned method, is included in appendix \ref{app:f_circs}, Fig. \ref{fig:s_f_exc}. A double fermionic excitation [Eq. \eqref{eq:d_f_exc_aa}] can be re-expressed in terms of quantum-gate operators: \begin{multline}\label{eq:d_f_exc} \ \ \ \ \ \ \ \ \ \ \ U_{klij}^{\mathrm{(df)}}(\theta) = \exp\Big[-i \frac{\theta}{8} (X_i Y_j X_k X_l + Y_i X_j X_k X_l \\ + Y_i Y_j Y_k X_l + Y_i Y_j X_k Y_l - X_i X_j Y_k X_l - X_i X_j X_k Y_l \\ - Y_i X_j Y_k Y_l - X_i Y_j Y_k Y_l ) \prod_{r=i+1}^{j-1} Z_r\prod_{r'=k+1}^{l-1} Z_{r'}\Big]. \ \ \ \ \end{multline} $U_{klij}^{\mathrm{(df)}}(\theta)$ can be implemented by $8$ $CNOT$-staircase constructions. The full circuit for a double fermionic excitation, constructed using the aforementioned method, is included in appendix \ref{app:f_circs}, Fig. \ref{fig:d_f_exc}. 
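As a concrete sanity check of the Jordan-Wigner operators in Eqs. (\ref{eq:a}) and (\ref{eq:a*}), the following minimal numpy sketch (ours, not part of the paper's constructions) builds $a_i$ on a small register and verifies the anticommutation relations of Eq. (\ref{eq:fermAnti}):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Q = (X + 1j * Y) / 2  # qubit annihilation operator, Eq. (eq:Q_Q_0)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators (qubit 0 leftmost)."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(i, n):
    """Jordan-Wigner a_i on n qubits: Q on qubit i, Z string on qubits 0..i-1."""
    return kron_all([Z] * i + [Q] + [I2] * (n - i - 1))

n = 3
a = [jw_annihilation(i, n) for i in range(n)]
for i in range(n):
    for j in range(n):
        # {a_i, a_j^dagger} = delta_ij * I  and  {a_i, a_j} = 0
        assert np.allclose(a[i] @ a[j].conj().T + a[j].conj().T @ a[i],
                           np.eye(2**n) if i == j else 0)
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
```

The $Z$ string to the "left" of qubit $i$ is exactly what promotes the parafermionic qubit operators to genuinely fermionic ones.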
\begin{figure}[h] \ \ \ \ \ \Qcircuit @C=0.9em @R=0.7em { q_k && \gate{R_x(\frac{\pi}{2})} & \ctrl{1} & \qw & \qw & \qw &\qw & \qw &\ctrl{1}& \gate{R_x(-\frac{\pi}{2})} & \qw \\ q_{k-1} && \qw & \targ & \ctrl{1} & \qw & \qw & \qw &\ctrl{1} &\targ & \qw & \qw\\ \vdots &&&& \vdots &&&& \vdots &&& \\ q_{i+1} && \qw & \qw & \targ & \ctrl{1} & \qw & \ctrl{1} & \targ & \qw & \qw & \qw\\ q_{i} && \gate{H} &\qw & \qw & \targ & \gate{R_z(\theta)} & \targ & \qw & \qw & \gate{H} & \qw\\ } \caption{A circuit implementing the exponential $\exp\big[-i\frac{\theta}{2} X_i Y_{k} \prod_{r=i+1}^{k-1} Z_r\big]$.} \label{fig:stair_case} \end{figure} \subsection{Multi-qubit-controlled rotations}\label{sec:ncU1} In this section, we outline a method, used in \cite{POVM}, to construct circuits that implement multi-qubit-controlled rotations. The circuits use no ancilla qubits. Later, we use this method to construct circuits for qubit excitations (see Sec. \ref{sec:qubit_exc}). We denote an $R_y(\theta)$ rotation on a target qubit $q_0$, controlled by the $\{q_1..q_m\}$ qubits being in the state $|1..1\rangle$, as $R_y\big(\theta, \{q_1..q_m\}, q_0\big)$. We then write an $m$-qubit-controlled rotation by decomposing it into two opposite half-way rotations, controlled by $(m-1)$ qubits: \begin{align}\label{cU_1} R_y\big(\theta, \{q_1..q_m\}, q_0\big) = \nonumber \\ CNOT(q_1, q_0) R_y\big(-\frac{\theta}{2}, \{q_2..q_m\}, q_0\big) \nonumber \\ \times CNOT(q_1, q_0)R_y\big(\frac{\theta}{2}, \{q_2..q_m\},q_0\big), \end{align} or equivalently, as \begin{align}\label{cU_2} R_y\big(\theta, \{q_1..q_m\}, q_0\big) = \nonumber \\ R_y\big(\frac{\theta}{2}, \{q_2..q_m\}, q_0\big) CNOT(q_1, q_0) \nonumber \\ \times R_y\big(-\frac{\theta}{2}, \{q_2..q_m\}, q_0\big)CNOT(q_1, q_0). \end{align} By decomposing the controlled rotations further, the overall operation is brought down to $CNOT$s and single-qubit rotations. Directly implementing Eqs. 
(\ref{cU_1}) or (\ref{cU_2}) results in a circuit with $(2^{m+1}-2)$ $CNOT$s. However, for $m>2$, Eqs. (\ref{cU_1}) and (\ref{cU_2}) can be combined alternately to cancel adjacent $CNOT$s (see Fig. \ref{fig:ccU1}), and obtain a circuit with $(2^{m-1}+2)$ $CNOT$s. Controlled $R_z$ and $R_x$ rotations can be obtained by applying additional single-qubit rotations to the target qubit. Equivalently, $CZ$ (controlled-phase) gates can be used instead of $CNOT$ gates in Eqs. (\ref{cU_1}) and (\ref{cU_2}); in some scenarios this can be used to cancel adjacent two-qubit gates (see Sec. \ref{sec:qubit_exc}). \begin{figure}[H] \Qcircuit @C=0.6em @R=1.5em { &q_0 && \gate{R_y(\frac{\theta}{4})} & \targ{} & \gate{R_y(-\frac{\theta}{4})} & \targ{} & \targ{} & \targ{} & \gate{R_y(\frac{\theta}{4})} & \targ{} & \gate{R_y(-\frac{\theta}{4})} & \targ{} & \qw \\ &q_1 && \qw & \ctrl{-1} & \qw & \ctrl{-1} & \qw & \ctrl{-1} & \qw & \ctrl{-1} & \qw & \qw & \qw\\ &q_2 && \qw & \qw & \qw & \qw & \ctrl{-2} & \qw & \qw & \qw & \qw & \ctrl{-2} & \qw\gategroup{1}{4}{2}{7}{1.0em}{..} \gategroup{1}{9}{2}{12}{1.0em}{..} } \caption{A circuit implementing $R_y(\theta, \{q_1,q_2\}, q_0)$. The first half-way rotation $R_y(\frac{\theta}{2}, \{q_1\}, q_0)$ (left dotted rectangle) is implemented as in Eq. \eqref{cU_1}, and the second half-way rotation $R_y(-\frac{\theta}{2}, \{q_1\}, q_0)$ (right dotted rectangle) as in Eq. \eqref{cU_2}. In this way, the two middle $CNOT$s between qubits $q_0$ and $q_1$ can be cancelled.} \label{fig:ccU1} \end{figure} \section{Results}\label{sec:results} In this section we present our main results. We begin by defining qubit excitations. Then, we construct circuits to perform single and double qubit excitations. Finally, we modify these circuits to perform single and double fermionic excitations instead. \subsection{Qubit excitations}\label{sec:qubit_exc} We use the term qubit excitation to refer to the exponential of a parametrized qubit-excitation operator. 
Single- and double-qubit-excitation operators are generated by the qubit annihilation and creation operators, $Q$ and $Q^\dagger$ [Eq. (\ref{eq:Q_Q_0})], and are given by the skew-Hermitian operators \begin{equation}\label{eq:s_q_exc_op} \tilde{T}_i^k(\theta) \equiv \theta(Q^\dagger_k Q_i - Q^\dagger_i Q_k)\;\;\mathrm{and} \end{equation} \begin{equation}\label{eq:d_q_exc_op} \tilde{T}^{kl}_{ij}(\theta) \equiv \theta(Q^\dagger_k Q^\dagger_l Q_i Q_j - Q^\dagger_i Q^\dagger_j Q_k Q_l). \end{equation} The operators $Q$ and $Q^\dagger$ satisfy the relations \begin{multline}\label{eq:Q_Q} \ \ \ \{Q_i, Q_i^\dagger\} = I, \ \ [Q_i, Q^\dagger_j] = 0 \ \text{ if } \ i \neq j , \ \ \mathrm{and}\\ \ \ \ \ \ [Q_i, Q_j] = [Q_i^\dagger, Q^\dagger_j] = 0 \text{ for all }i,j. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{multline} These relations are neither bosonic nor fermionic; some authors have referred to them as parafermionic \cite{parafermions}. Single and double qubit excitations are expressed, respectively, by the unitaries \begin{equation}\label{eq:s_q_exc_QQ} U_{ki}^{\mathrm{(sq)}}(\theta) = e^{\tilde{T}_i^k(\theta)} \;\;\mathrm{and} \end{equation} \begin{equation}\label{eq:d_q_exc_QQ} \ U_{klij}^{\mathrm{(dq)}}(\theta) = e^{\tilde{T}^{kl}_{ij}(\theta)} . \end{equation} These unitary operations are similar to the single and double fermionic excitations [Eqs. \eqref{eq:s_f_exc_aa} and \eqref{eq:d_f_exc_aa}], apart from the absence of the exponentials of products of Pauli-$z$ operators that account for the fermionic anticommutation relations [Eq. \eqref{eq:fermAnti}]. Previously, these operations have been considered in the context of VQE-ansatz construction \cite{UCCSD_qubit, Eff_d_q_exc}. \subsubsection{Circuit for single qubit excitations}\label{sec:s_q_exc_QQ} A single qubit excitation [Eq. 
(\ref{eq:s_q_exc_QQ})] can be re-expressed in terms of quantum-gate operators: \begin{equation}\label{eq:s_exch} U_{ki}^{\mathrm{(sq)}}(\theta) = \exp\Big[-i\frac{\theta}{2}(X_i Y_k - Y_i X_k)\Big] . \end{equation} This unitary is equivalent to an evolution under the exchange interaction, which can be performed by the circuit in Fig. \ref{fig:s_exchange}. We implement the controlled $R_y(\theta)$ rotation in the circuit in Fig. \ref{fig:s_exchange}a as in Eq. (\ref{cU_2}), and apply the circuit identity in Fig. \ref{fig:c_zx} to cancel a $CNOT$. In this way we obtain the explicit circuit, for single qubit excitation, shown in Fig. \ref{fig:s_exchange}b.\footnote{The circuit in Fig. \ref{fig:s_exchange}b is not optimal in the number of $CNOT$ gates. A single qubit excitation, as defined in Eq. (\ref{eq:s_exch}), can be implemented by a circuit with only two $CNOT$s. We use the circuit in Fig. \ref{fig:s_exchange}b because it is used to construct the circuits in Figs. \ref{fig:d_q_exc}, \ref{fig:s_fermi} and \ref{fig:d_fermi} in an optimal way.} \begin{figure}[H] \Qcircuit @C=0.7em @R=1.0em { \textbf{a)} &&&&&&&&&&&&& q_k && \ctrl{2} & \gate{R_y(\theta)} & \ctrl{2} & \qw &\\ &&&&&&&&&&&&&&&&&&&=&&&&&\\ &&&&&&&&&&&&& q_i && \targ{} & \ctrl{-2} & \targ{} & \qw &\\ \\ } \Qcircuit @C=0.5em @R=1.0em { \textbf{b)}&&& q_k \ \ &&\qw&\gate{R_z(\frac{\pi}{2})} & \ctrl{2} &\gate{R_y(\frac{\theta}{2})} & \ctrl{2} & \gate{R_y(-\frac{\theta}{2})} & \ctrl{2} & \qw &\\ &&&&&&&&&&&&&&\\ &&& q_i \ \ && \gate{R_y(-\frac{\pi}{2})} & \gate{R_z(-\frac{\pi}{2})} & \targ &\gate{R_z(-\frac{\pi}{2})} & \targ & \gate{H} & \targ{} & \qw } \caption{\textbf{a)} A circuit performing an exchange-interaction operation equivalent to the single qubit excitation in Eq. (\ref{eq:s_exch}). \textbf{b)} An explicit circuit for \textbf{a}) obtained by implementing the controlled-$R_y(\theta)$ rotation as in Eq. (\ref{cU_2}), and using the circuit identity in Fig. 
\ref{fig:c_zx}.} \label{fig:s_exchange} \end{figure} \begin{figure}[H] \Qcircuit @C=0.75em @R=0.75em { \\ & \targ & \ctrl{1} &\qw & && \gate{R_y(-\frac{\pi}{2})} & \gate{R_z(-\frac{\pi}{2})} & \targ{} & \gate{R_z(\frac{\pi}{2})} & \gate{R_y(\frac{\pi}{2})} & \qw & \\ &&&& =&&&&&&&&\\ & \ctrl{-2} &\ctrl{-2} & \qw &&& \qw & \gate{R_z(\frac{\pi}{2})} & \ctrl{-2} & \qw & \qw & \qw & } \caption{A circuit identity for an operation equivalent to a $CNOT$ followed by a $CZ$.} \label{fig:c_zx} \end{figure} \subsubsection{Circuit for double qubit excitations}\label{sec:d_q_exc_QQ} A double qubit excitation [Eq. (\ref{eq:d_q_exc})] can be re-expressed in terms of quantum-gate operators: \begin{multline}\label{eq:d_q_exc} \ \ \ \ \ \ \ \ \ \ \ U_{klij}^{\mathrm{(dq)}}(\theta) = \exp\Big[-i \frac{\theta}{8} (X_i Y_j X_k X_l + Y_i X_j X_k X_l \\+ Y_i Y_j Y_k X_l + Y_i Y_j X_k Y_l - X_i X_j Y_k X_l - X_i X_j X_k Y_l \\ - Y_i X_j Y_k Y_l - X_i Y_j Y_k Y_l )\Big]. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{multline} Equation \eqref{eq:d_q_exc} is a unitary operation that continuously exchanges the $|1_i1_j0_k0_l\rangle$ and $|0_i0_j1_k1_l\rangle$ states, but does not alter other states. To implement this operation, we use a similar circuit to the one for the single qubit excitation. However, to ensure that the operation exchanges only the states $|1_i1_j0_k0_l\rangle$ and $|0_i0_j1_k1_l\rangle$, it must act nontrivially only if the parities of the qubit pairs $\{q_k q_l\}$ and $\{q_i q_j\}$ are even. To perform this kind of parity-controlled exchange operation, we use the circuit in Fig. \ref{fig:d_q_exc}. The first two $CNOT$s, between qubits $q_k$ and $q_l$, and qubits $q_i$ and $q_j$, encode the parities of the two respective qubit pairs on qubits $q_k$ and $q_i$, respectively. Then qubits $q_k$ and $q_i$ are used as control qubits for a controlled exchange operation, the dotted rectangle in Fig. \ref{fig:d_q_exc}, between qubits $q_l$ and $q_j$. 
The last two $CNOT$s between qubits $q_k$ and $q_l$, and qubits $q_i$ and $q_j$, respectively, reverse the parity-encoding action of the first two $CNOT$s. We implement the controlled $R_y(\theta)$ rotation in Fig. \ref{fig:d_q_exc} with the circuit in Fig. \ref{fig:cccU1}, and use the circuit identity in Fig. \ref{fig:c_zx} to cancel a $CNOT$. In this way, we obtain the explicit circuit for a double qubit excitation, shown in Fig. \ref{fig:d_q_exc_full}. The explicit circuit has a $CNOT$ count of $13$ and a $CNOT$ depth of $11$. The circuit suggested in Refs. \cite{Eff_d_q_exc,Eff_d_q_exc_2}, for the same operation, also has a $CNOT$ count of $13$, but a higher $CNOT$ depth of $13$ \footnote{An anonymous referee has pointed out that another advantage of the circuit in Fig. \ref{fig:d_q_exc_full} is that it reduces the number of long-range $CNOT$s compared to the circuits in Refs. \cite{Eff_d_q_exc,Eff_d_q_exc_2}.}. \ \begin{figure}[h] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Qcircuit @C=1.5em @R=1.5em { q_l && \ctrl{1} & \ctrl{2} & \gate{R_y(\theta)} & \ctrl{2} & \ctrl{1} & \qw &\\ q_k && \targ{} & \qw & \ctrlo{-1} & \qw & \targ{} & \qw &\\ q_j && \ctrl{1} & \targ{} & \ctrl{-1} & \targ{} & \ctrl{1} & \qw &\\ q_i && \targ{} & \qw & \ctrlo{-1} & \qw & \targ{} & \qw \gategroup{1}{4}{4}{6}{2.0em}{..} } \caption{A circuit performing a double qubit excitation [Eq. (\ref{eq:d_q_exc})]. The explicit circuit is given in Fig. 
\ref{fig:d_q_exc_full}.} \label{fig:d_q_exc} \end{figure} \onecolumngrid \begin{figure}[H] \ \ \ \ \ \ \ \ \Qcircuit @C=0.5em @R=1.0em { \\ q_0 && \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{3} & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{2} & \qw & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{3} & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{4})} & \ctrl{2} & \qw & \qw \\ q_1 && \gate{H} & \targ & \qw & \qw & \qw & \targ & \qw & \qw & \qw & \qw & \targ & \qw & \qw & \qw & \targ & \gate{H} & \qw & \qw & \qw \\ q_2 && \qw & \qw & \qw & \qw & \qw & \qw & \gate{H} & \targ & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \targ & \gate{H} & \qw \\ q_3 && \qw & \qw & \gate{H} & \targ & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \targ & \gate{H} & \qw & \qw & \qw & \qw & \qw \\ \ \ } \caption{An explicit circuit implementing the controlled rotation $R_y(\theta, \{q_1,q_2,q_3\}, q_0)$. The circuit is obtained with the method described in Sec. 
\ref{sec:ncU1}.} \label{fig:cccU1} \end{figure} \begin{figure}[H] \Qcircuit @C=0.3em @R=1.0em { &&& \\ q_l \ && \ctrl{1} & \qw & \ctrl{2} & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{3} & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{2} & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{3} & \gate{R_y(\frac{\theta}{8})} & \ctrl{1} & \gate{R_y(-\frac{\theta}{8})} & \ctrl{2} & \gate{R_z(\frac{\pi}{2})} & \qw & \ctrl{1} & \qw \\ q_k \ && \targ{} & \gate{X} & \qw & \gate{H} & \targ & \qw & \qw & \qw & \targ & \qw & \qw & \qw & \targ & \qw & \qw & \qw & \targ & \gate{H} & \qw & \qw & \gate{X} & \targ & \qw & \\ q_j \ && \ctrl{1} & \qw & \targ{} & \qw & \qw & \qw & \qw & \qw & \qw & \gate{H} & \targ & \qw & \qw & \qw & \qw & \qw & \qw & \gate{R_z(-\frac{\pi}{2})} & \targ & \gate{R_z(-\frac{\pi}{2})}& \gate{R_y(-\frac{\pi}{2})} & \ctrl{1} & \qw & \\ q_i \ && \targ{} & \gate{X} & \qw & \qw & \qw & \gate{H} & \targ & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \targ & \gate{H} & \qw & \qw & \qw & \qw &\gate{X} & \targ & \qw & \\ } \caption{An explicit circuit implementing a double qubit excitation [Eq. (\ref{eq:d_q_exc})].} \label{fig:d_q_exc_full} \end{figure} \twocolumngrid \subsection{Efficient Fermionic excitations}\label{sec:f_exc} The quantum-gate operator expressions for fermionic excitations [Eqs. (\ref{eq:s_f_exc_0}) and (\ref{eq:d_f_exc})] differ from those for qubit excitations [Eqs. (\ref{eq:s_exch}) and (\ref{eq:d_q_exc})] only by the additional products of Pauli-$z$ operators in their exponents. These products account for the fermionic anticommutation relations [Eq. \eqref{eq:fermAnti}]: In the single (double) fermionic excitation, the operator product changes the sign before the excitation parameter $\theta$ if the parity of the set of qubits $\{q_{i+1}..q_{k-1}\}$ ($\{q_{i+1}..q_{j-1}q_{k+1}..q_{l-1}\}$) is odd. 
We can re-express the fermionic excitations in terms of qubit excitations: \begin{multline}\label{eq:f_exc_1} U_{ki}^{\mathrm{(sf)}}(\theta) \text{=} \begin{cases} U_{ki}^{\mathrm{(sq)}}(\theta) \text{ if } P(q_{i+1}..q_{k-1}) \text{=0}\\ \\ U_{ki}^{\mathrm{(sq)}}(\text{-}\theta) \text{ if } P(q_{i+1}..q_{k-1}) \text{=1} \end{cases}, \end{multline} \begin{multline}\label{eq:f_exc_2} U_{klij}^{\mathrm{(df)}}(\theta) \text{=} \begin{cases} U_{klij}^{\mathrm{(dq)}}(\theta) \text{ if } P(q_{i+1}..q_{j-1}q_{k+1}..q_{l-1})\text{=0}\\ \\ U_{klij}^{\mathrm{(dq)}}(\text{-}\theta) \text{ if } P(q_{i+1}..q_{j-1} q_{k+1}..q_{l-1})\text{=1} \end{cases}. \end{multline} We can adapt the circuits for qubit excitations in Figs. \ref{fig:s_exchange} and \ref{fig:d_q_exc} to perform fermionic excitations by sandwiching the controlled-$R_y(\theta)$ rotation in each of them between two $CNOT$ staircases [see Figs. \ref{fig:s_fermi} and \ref{fig:d_fermi}]. In this way, the sign before the excitation parameter $\theta$ is changed if the parity of the relevant qubits is odd, in accordance with Eqs. (\ref{eq:f_exc_1}) and (\ref{eq:f_exc_2}). Compared to the standard circuits for fermionic excitations (see appendix \ref{app:f_circs}), the circuits outlined here utilize only $2$ $CNOT$ staircases, instead of $4$ or $16$, per single or double fermionic excitation, respectively. \subsubsection{Circuit for single fermionic excitations} Figure \ref{fig:s_fermi} shows our modified circuit for a single fermionic excitation, based on the circuit in Fig. \ref{fig:s_exchange}a. The parity of $\{q_{i+1}..q_{k-1}\}$ is encoded in qubit $q_{i+1}$ by a $CNOT$ staircase. Conditioned on qubit $q_{i+1}$ being in state $|1\rangle$ (odd parity), the two $CZ$ gates between qubits $q_k$ and $q_{i+1}$ reverse the direction of the $R_y(\theta)$ rotation: $R_y(\theta) \rightarrow R_y(-\theta)$. 
The controlled $R_y(\theta)$ rotation is implemented as in the single-qubit-excitation circuit, and similarly the circuit identity from Fig. \ref{fig:c_zx} is used to cancel a $CNOT$ between qubits $q_k$ and $q_i$. A single fermionic excitation, Eq. \eqref{eq:s_f_exc_0}, involves operations on qubits $q_i$ to $q_k$. We define the total number of qubits involved in the excitation as $n^{(\mathrm{sf})} \equiv k -i +1 $. For $n^{(\mathrm{sf})} \geq 3$, the circuit in Fig. \ref{fig:s_fermi} has a $CNOT$ count of $2n^{(\mathrm{sf})}-1$ and a $CNOT$ depth of $\mathrm{max}[5,2n^{(\mathrm{sf})}-3]$. For $n^{(\mathrm{sf})}=2$, the circuit in Fig. \ref{fig:s_fermi} is equivalent to that for a single qubit excitation (Fig. \ref{fig:s_exchange}b). For comparison, the standard construction (Sec. \ref{sec:stair_case}) for single-fermionic-excitation circuits, yields a circuit (appendix \ref{app:f_circs} Fig. \ref{fig:s_f_exc}) with a $CNOT$ count and a $CNOT$ depth of $4(n^{(\mathrm{sf})} -1)$. \begin{figure}[H] \ \ \ \ \ \Qcircuit @C=1.5em @R=1.0em { q_{k-1} &&& \qw & \ctrl{1} & \qw & \qw & \qw &\qw & \qw &\ctrl{1}& \qw &\\ q_{k-2} &&& \qw & \targ & \ctrl{1} & \qw & \qw &\qw &\ctrl{1} &\targ & \qw &\\ \vdots &&&&& \vdots &&&& \vdots &&& \\ q_{i+1}&&& \qw &\qw & \targ & \ctrl{1} & \qw & \ctrl{1} & \targ & \qw & \qw & \\ q_k &&& \qw &\qw & \ctrl{1} & \ctrl{-1} & \gate{R_y(\theta)} & \ctrl{-1} & \ctrl{1} & \qw &\qw \\ q_i&&& \qw &\qw & \targ & \qw & \ctrl{-1} & \qw & \targ{} & \qw & \qw & } \caption{A circuit performing a single fermionic excitation [Eq. (\ref{eq:s_f_exc_0})]. The vertical dots denote $CNOT$ staircases on qubits $q_{i+1}$ to $q_{k-1}$.} \label{fig:s_fermi} \end{figure} \subsubsection{Circuit for double fermionic excitations} Figure \ref{fig:d_fermi} shows our modified circuit for a double fermionic excitation, based on the circuit in Fig. \ref{fig:d_q_exc}. 
The parity of $\{q_{i+1}..q_{j-1}q_{k+1}..q_{l-1}\}$ is encoded in qubit $q_{i+1}$ by a $CNOT$ staircase. Conditioned on qubit $q_{i+1}$ being in state $|1\rangle$, the two $CZ$ gates between qubits $q_l$ and $q_{i+1}$ reverse the direction of the controlled $R_y(\theta)$ rotation: $R_y(\theta) \rightarrow R_y(-\theta)$. The controlled $R_y(\theta)$ rotation is implemented as in Fig. \ref{fig:cccU1}, and the circuit identity from Fig. \ref{fig:c_zx} is used to cancel a $CNOT$ between qubits $q_l$ and $q_j$. A double fermionic excitation, Eq. \eqref{eq:d_f_exc}, involves operations on qubits $q_i$ to $q_j$ and $q_k$ to $q_l$. We define the total number of qubits participating in the excitation as $n^{(\mathrm{df})} \equiv j -i + l -k + 2$. The circuit in Fig. \ref{fig:d_fermi} has a $CNOT$ count of $2n^{(\mathrm{df})} + 5$ and a $CNOT$ depth of $\mathrm{max}[13, 2n^{(\mathrm{df})} - 1]$ for $n^{(\mathrm{df})} \geq 5$. For $n^{(\mathrm{df})}=4$, the circuit in Fig. \ref{fig:d_fermi} is equivalent to that for a double qubit excitation (Figs. \ref{fig:d_q_exc} and \ref{fig:d_q_exc_full}). For comparison, the standard construction (Sec. \ref{sec:stair_case}) for fermionic-excitation circuits yields a circuit (appendix \ref{app:f_circs}, Fig. \ref{fig:d_f_exc}) with a $CNOT$ count and a $CNOT$ depth of $16(n^{(\mathrm{df})} -1)$. Additionally, using the method of ``balanced trees'', suggested in Ref. \cite{phase_gadget}, one can rearrange the $CNOT$s in the fermionic excitation circuits (Figs. \ref{fig:s_fermi} and \ref{fig:d_fermi}), so that their $CNOT$ depths grow logarithmically rather than linearly. 
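The sign rule of Eq. (\ref{eq:f_exc_1}) can be verified numerically. The sketch below (ours, with a hand-rolled matrix exponential via eigendecomposition) takes $i=0$, $k=2$ with a single intermediate qubit $q_1$, and checks that the fermionic excitation acts as a qubit excitation with $\theta \rightarrow -\theta$ on the odd-parity sector:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):  # qubit order (q_k, q_1, q_i)
    return np.kron(np.kron(a, b), c)

def u_exp(G, theta):
    """exp(-i*theta/2 * G) for Hermitian G, via eigendecomposition."""
    w, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(-1j * theta / 2 * w)) @ V.conj().T

# generators for i=0, k=2 with one intermediate qubit q_1
G_sf = kron3(Y, Z, X) - kron3(X, Z, Y)    # fermionic: Z string on q_1
G_sq = kron3(Y, I2, X) - kron3(X, I2, Y)  # qubit excitation: no Z string

theta = 0.9
U_sf = u_exp(G_sf, theta)

# projectors onto q_1 = |0> (even parity) and q_1 = |1> (odd parity)
P0 = kron3(I2, np.diag([1.0, 0.0]).astype(complex), I2)
P1 = kron3(I2, np.diag([0.0, 1.0]).astype(complex), I2)

# Eq. (f_exc_1): U_sf(theta) = U_sq(+theta) on even parity, U_sq(-theta) on odd
assert np.allclose(U_sf @ P0, u_exp(G_sq, theta) @ P0)
assert np.allclose(U_sf @ P1, u_exp(G_sq, -theta) @ P1)
```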
\begin{figure} \ \ \ \ \ \Qcircuit @C=1.5em @R=1.0em { q_{l-1} && \qw & \ctrl{1} & \qw & \qw & \qw & \qw & \qw &\ctrl{1}& \qw &\\ q_{l-2} && \qw & \targ & \ctrl{1} & \qw & \qw & \qw &\ctrl{1} &\targ & \qw &\\ \vdots &&&& \vdots &&&& \vdots &&& \\ q_{i+1}&& \qw & \qw & \targ & \ctrl{1} & \qw & \ctrl{1} & \targ & \qw \\ q_l &&\qw & \ctrl{1} & \ctrl{2} & \ctrl{-1} & \gate{R_y(\theta)} & \ctrl{-1} & \ctrl{2}& \ctrl{1} & \qw &\\ q_k && \qw &\targ{} & \qw & \qw & \ctrlo{-1} & \qw & \qw & \targ{} & \qw &\\ q_j &&\qw & \ctrl{1} & \targ{} & \qw & \ctrl{-1} & \qw & \targ{} & \ctrl{1} & \qw &\\ q_i &&\qw & \targ{} & \qw & \qw & \ctrlo{-1} & \qw & \qw & \targ{} & \qw & } \caption{A circuit performing a double fermionic excitation [Eq. (\ref{eq:d_f_exc})]. The vertical dots denote $CNOT$ staircases on qubits $q_{i+1}$ to $q_{j-1}$, and $q_{k+1}$ to $q_{l-1}$.} \label{fig:d_fermi} \end{figure} \section{Summary}\label{summary} In this work, we constructed circuits, optimized in terms of $CNOT$ gates, to efficiently perform single and double qubit excitations, i.e., excitations generated by qubit creation and annihilation operators. We then expanded the circuits' functionality to account for fermionic anticommutation relations, and to perform fermionic excitations. Our single-fermionic-excitation circuits use $2 n^{(\mathrm{sf})} -1$ $CNOT$s, and have $CNOT$ depths of $3$ for $n^{(\mathrm{sf})}=2$, and $\mathrm{max}[5,2n^{(\mathrm{sf})}-3]$ for $n^{(\mathrm{sf})} \geq 3$, where $n^{(\mathrm{sf})}$ is the number of orbitals on which the single fermionic excitation acts. Our double-fermionic-excitation circuits use $2 n^{(\mathrm{df})} +5$ $CNOT$s, and have $CNOT$ depths of $11$ for $n^{(\mathrm{df})}=4$, and $\mathrm{max}[13, 2n^{(\mathrm{df})} - 1]$ for $n^{(\mathrm{df})} \geq 5$, where $n^{(\mathrm{df})}$ is the number of orbitals on which the double fermionic excitation acts. 
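The gate-count formulas above can be tabulated directly; a small sketch (helper names are ours) comparing them against the staircase construction:

```python
# CNOT counts of the proposed circuits vs. the standard staircase construction
def proposed_sf(n):   # single fermionic excitation, n = k - i + 1 qubits (n >= 2)
    return 2 * n - 1

def standard_sf(n):   # two staircase constructions: 4(n-1) CNOTs
    return 4 * (n - 1)

def proposed_df(n):   # double fermionic excitation, n >= 4 qubits
    return 2 * n + 5

def standard_df(n):   # eight staircase constructions: 16(n-1) CNOTs
    return 16 * (n - 1)

assert proposed_df(4) == 13           # reduces to the double qubit excitation
for n in range(2, 20):
    assert proposed_sf(n) < standard_sf(n)
for n in range(4, 20):
    assert proposed_df(n) < standard_df(n)

# asymptotic reduction factors of 2 (single) and 8 (double), as stated in the text
print(standard_sf(100) / proposed_sf(100))
print(standard_df(100) / proposed_df(100))
```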
To our knowledge, our circuits are more efficient than all previous methods for implementing double qubit and double fermionic excitations---both in $CNOT$ count and in $CNOT$ depth. Our optimized circuits present a significant step towards implementations of VQE algorithms on NISQ devices. \ \\ \begin{acknowledgements} The authors wish to thank V. Armaos for useful discussions, and an anonymous referee for pointing out the advantage of our methods mentioned in Footnote 2. Y.S.Y. acknowledges financial support from the EPSRC and Hitachi via CASE studentship RG97399. D.R.M.A.-S. was supported by the EPSRC, Lars Hierta's Memorial Foundation and Girton College. \end{acknowledgements} \ \\
\section{Introduction}\label{sec1} Publication bias is one of the most important concerns in systematic reviews and meta-analyses. Part of the issue is that authors and scientific journals are more likely to publish studies with statistically significant results than inconclusive studies\cite{thornton2000}. To reach statistical significance, studies with small sample sizes need to show larger treatment effects, a phenomenon known as the small-study effect\cite{sterne2000}. Thus, we may be forced to analyze selective samples from the population of interest, and the standard meta-analysis techniques may suffer from this selection bias, which is referred to as publication bias. The funnel plot is a simple and widely used graphical tool to check for publication bias; it plots the estimate of the effect size, such as the log-odds ratio or the log-hazard ratio, on the horizontal axis and a related measure of precision, such as the square root of the sample size or the inverse of the standard error, on the vertical axis. One can assess publication bias by visually examining the asymmetry of the funnel plot, and a more formal statistical evaluation of funnel-plot asymmetry can be made with a regression-based test\cite{egger1997,macaskill2001} or a rank-based test\cite{begg1994, schwarzer2007}. The trim-and-fill method is a nonparametric method to adjust for publication bias utilizing funnel-plot asymmetry\cite{duval2000}. All these methods are simple to apply and are widely used in practice. However, publication bias is not the only reason for asymmetry of the funnel plot; Egger et al.\cite{egger1997} listed several potential reasons for funnel-plot asymmetry, including between-study heterogeneity. Thus, detection of publication bias by the funnel plot may be subjective, and the trim-and-fill method may not work well\cite{terrin2003, peters2007}. Alternatively, several sensitivity analysis methods have been developed. 
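For readers unfamiliar with the regression-based asymmetry test of Egger et al., the following minimal numpy sketch (function names are ours; this is the simple unweighted variant, regressing the standard normal deviate on precision) illustrates the idea:

```python
import numpy as np

def egger_regression(theta_hat, se):
    """Egger-type regression: z_i = theta_i / s_i regressed on precision 1 / s_i.
    Returns (intercept, slope); an intercept far from zero suggests
    funnel-plot asymmetry (a small-study effect)."""
    z = np.asarray(theta_hat) / np.asarray(se)
    prec = 1.0 / np.asarray(se)
    slope, intercept = np.polyfit(prec, z, 1)  # polyfit: highest degree first
    return intercept, slope

# deterministic check: build study estimates lying exactly on z = a + b * (1/s)
se = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # hypothetical standard errors
a, b = 1.5, -0.4                           # a != 0 mimics small-study asymmetry
theta_hat = (a + b / se) * se
intercept, slope = egger_regression(theta_hat, se)
assert np.isclose(intercept, a) and np.isclose(slope, b)
```

In practice the intercept is tested against zero with a t-test; the sketch only recovers the regression coefficients.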
Copas and colleagues introduced the Heckman-type parametric selection model, which was originally proposed in the econometrics literature\cite{heckman1979}, to describe a selective publication process in meta-analysis. In this paper, this selection model is referred to as the Copas selection model\cite{copas2000,copas2001}. The sensitivity analysis with the Copas selection model has several advantages over existing alternative sensitivity analysis methods\cite{copas2013,copas2004,henmi2007}; a user-friendly \emph{R} package \emph{metasens} is available for the Copas selection model\cite{copas2000,copas2001}, and its performance has been empirically examined\cite{carpenter2009, schwarzer2010}. Thus, we focus on this selection model in this paper. The Copas selection model consists of a pair of models; one is for the outcome, such as the log-odds ratio or the log-hazard ratio reported in publications, and the other is a model for the latent variable describing a selective publication process. It is difficult to estimate all the parameters in the Copas selection model, and thus a series of papers by Copas and colleagues took a sensitivity analysis approach; some parameters were fixed as sensitivity parameters in a certain range, and the impact on the estimate of interest was studied. However, it is not easy to define an appropriate range for the sensitivity parameters. Prospective registration of study protocols in clinical trial registries is a useful non-statistical approach to addressing publication bias. The International Committee of Medical Journal Editors (ICMJE) initiated a trials-registration policy as a condition for publication in its member journals in July 2005\cite{deangelis2005}, which promoted the development and utilization of clinical trial registries. ClinicalTrials.gov (\url{https://clinicaltrials.gov/ct2/home}) is one of the early examples; it was established in 2000 and is nowadays a very popular registry, with 334,954 registered studies as of April 2019. 
World Health Organization's (WHO) International Clinical Trials Registry Platform (ICTRP) (\url{http://apps.who.int/trialsearch/}) is another well-known clinical trials registry, which was constructed after the initiation of the ICMJE's policy and has been supported by the WHO. The FDA (US Food and Drug Administration) database (\url{https://www.accessdata.fda.gov/scripts/cder/daf/}) contains information about products approved by the FDA for human use in the United States. Besides, the EU clinical trials register (\url{https://www.clinicaltrialsregister.eu/ctr-search/search}) and ISRCTN (\url{https://www.isrctn.com/}) are also well developed and widely used in recent years, as well as many national registries that often also provide lay summaries in the local language\cite{viergever2015,ogino2014}. Baudard et al. \cite{baudard2017} investigated 14 meta-analyses and showed that by incorporating the information from clinical trial registries, the effect size estimates might change by 29\%. This indicates that searching clinical trial registries may help reduce the bias due to selective publication. On the other hand, although clinical trial registries provide potentially useful information about the registered clinical trials (e.g., study design, target sample size, development phase), they are used only as a tool to search for studies\cite{jones2014}. Information on unpublished studies provided by clinical trial registries is not formally incorporated in the analyses. To the best of our knowledge, only two papers have mentioned the use of clinical trial registries in statistical inference. Matsuoka et al. \cite{matsuoka2011} proposed a publication bias detection method utilizing the planned sample size from the WHO ICTRP. Mavridis et al. 
\cite{mavridis2013} utilized the FDA database to construct a prior distribution of parameters in their Bayesian Copas-type model for sensitivity analysis in network meta-analysis. In this paper, we improve statistical inference for the Copas selection model utilizing the information from clinical trial registries. To be specific, utilizing the planned sample sizes of unpublished studies, which are provided by most clinical trial registries, we propose a method that is fitted by the maximum likelihood method. All the unknown parameters can be estimated from data, and statistical inference can be conducted according to the standard maximum likelihood theory. In addition to the confidence interval based on the normal quantile from maximum likelihood theory, we also propose two modified confidence intervals aimed at improving performance when the number of studies is small. Through numerical studies, we observed that our method outperformed existing methods, with smaller biases and coverage probabilities closer to the nominal level. Thus, by utilizing the information on unpublished studies from clinical trial registries, inference based on the Copas selection model can be improved. The remainder of this paper is organized as follows. In Section 2, we introduce two motivating meta-analyses. In Section 3, we briefly review the sensitivity analysis method of Copas and Shi\cite{copas2000,copas2001}, and then introduce our new inference procedure utilizing clinical trial registries. In Section 4, we conduct a series of simulation studies motivated by the tiotropium study to evaluate the performance of our proposed method in comparison to other competitive approaches. In Section 5, we revisit the two studies in Section 2 to illustrate the proposed method. Finally, we discuss some limitations and further research in Section 6. 
\section{Motivating examples}\label{sec2} \subsection{Tiotropium study} Karner et al.\cite{karner2014} conducted a Cochrane review of tiotropium versus placebo in COPD. The primary outcome was COPD exacerbations, defined as a complex of respiratory events or symptoms (new onset or an increase in at least one of cough, sputum, dyspnoea or wheeze) that lasted at least three days and required treatment with antibiotics and/or systemic corticosteroids. In each study, the odds ratio for the event of one or more exacerbations was used to compare the treatment and placebo groups. In the meta-analysis by Karner et al.\cite{karner2014}, 22 studies were analyzed using a random-effects model, and the pooled odds ratio was reported as 0.78 with a 95\% confidence interval of [0.70, 0.87]. To assess the potential risk of publication bias, Karner et al.\cite{karner2014} conducted Egger's test, which gave a p-value of 0.22, and concluded there was no serious concern of publication bias. However, our additional application of Macaskill's regression test\cite{macaskill2001} gave a p-value of 0.06, suggesting funnel-plot asymmetry. Thus, publication bias might be a concern. Karner et al.\cite{karner2014} made efforts to search for studies with results submitted on ClinicalTrials.gov. We conducted a more comprehensive search of multiple registries, including ClinicalTrials.gov, the WHO ICTRP, the EU clinical trials register, the FDA database, and the ISRCTN platform. We used the following search terms: \emph{Tiotropium} or \emph{Spiriva} or \emph{HandiHaler} or \emph{Respimat}. They were the same as those used in the meta-analysis of Karner et al.\cite{karner2014}, and we also restricted the primary completion date to before February 28, 2012, to be consistent with the original meta-analysis. 
We detected 10 related studies on these registries, among which 8 came from ClinicalTrials.gov, 1 from the EU clinical trials register, and 1 from the WHO ICTRP. Two of them reported the outcome measure and were incorporated with the published studies, and the 8 studies without results were deemed unpublished. Finally, we obtained the tiotropium dataset with 24 published studies and 8 unpublished studies with information on sample size only (see Table S1 in Web-appendix A). \subsection{Clopidogrel study} The second example concerns the comparison of high- and standard-maintenance-dose clopidogrel on major adverse cardiovascular/cerebrovascular events (MACE/MACCE) by Chen et al.\cite{chen2013}. The original meta-analysis included 12 studies and reported a pooled odds ratio of 0.60 based on the fixed-effect model with a 95\% CI of [0.43, 0.83], and no significant bias was concluded from Egger's test (p=0.25). However, we reran Macaskill's test, which suggested funnel-plot asymmetry with a p-value of 0.02. We followed the same procedure as in the first example and identified 3 unpublished studies from ClinicalTrials.gov using the following terms: \emph{Clopidogrel} or \emph{Plavix} or \emph{Iscover}, with completion date before August 31, 2013. The resulting dataset is presented in Table S2 in Web-appendix A. \section{Inference procedures for the Copas selection model}\label{sec3} \subsection{Brief review of the sensitivity analysis method by Copas and Shi} In order to evaluate the potential impact of publication bias on estimation of the treatment effect, Copas and Shi\cite{copas2000} introduced a sensitivity analysis method. Suppose we conduct a meta-analysis of \emph{N} studies. Let $\hat{\theta_i}$ be the estimated treatment effect of the $i$th study, such as the log-odds ratio or the log-hazard ratio, and $s_i$ be an estimate of its standard error. 
We assume that $(\hat{\theta_i}, s_i)$ is available for $i=1,2,..., N$. To integrate the treatment effects over the studies, we consider the standard random-effects model \begin{eqnarray} \hat{\theta_i}=\theta_i+\sigma_i\epsilon_i, \ \ \ \theta_i \sim N(\theta,\tau^2), \ \ \ \ \epsilon_i \sim N(0,1), \label{mixed} \end{eqnarray} where $\theta_i$ is the study-specific treatment effect of the \emph{i}th study, which is a random effect following a normal distribution with mean $\theta$ and between-study variance $\tau^2$, and $\sigma_i^2$ is the true within-study variance of the outcome $\hat {\theta_i} $. The random elements $\theta_i$ and $\epsilon_i$ are assumed to be independent. To describe the selective publication process, Copas and Shi \cite{copas2000,copas2001} considered the following model \begin{equation} Y_i=\alpha_0+\alpha_1 / s_i+\delta_i, \ \ \ \delta_i \sim N(0,1), \ \ \ corr(\epsilon_i,\delta_i)=\rho, \label{selection} \end{equation} where $corr(\epsilon_i,\delta_i)$ denotes the correlation coefficient between $\epsilon_i$ and $\delta_i$. The \emph{i}th study is assumed to be published if and only if $Y_i>0$. The term $\alpha_0+\alpha_1 / s_i$ models the small-study effect; that is, studies of high precision are more likely to be published. The correlation coefficient $\rho$ describes the influence of the outcome $\hat{\theta_i}$ on the likelihood of publication. Copas and Shi\cite{copas2000} considered making inference by maximizing the likelihood function conditional on the study being published. 
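As an illustration, a joint draw of $(\hat{\theta_i}, Y_i)$ under the models (\ref{mixed}) and (\ref{selection}) can be sketched in Python (the paper's computations use \emph{R}; the function and variable names here are our own), constructing $\delta_i$ from $\epsilon_i$ so that $corr(\epsilon_i,\delta_i)=\rho$:

```python
import random
from math import sqrt

def simulate_study(theta, tau, rho, a0, a1, sigma_i, rng=random):
    """Draw one study from the random-effects model and the selection model.

    delta_i is built from eps_i so that corr(eps_i, delta_i) = rho.
    Returns (theta_hat_i, published_flag).
    """
    theta_i = rng.gauss(theta, tau)            # study-specific true effect
    eps_i = rng.gauss(0.0, 1.0)
    theta_hat_i = theta_i + sigma_i * eps_i    # observed effect estimate
    delta_i = rho * eps_i + sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
    y_i = a0 + a1 / sigma_i + delta_i          # latent publication propensity
    return theta_hat_i, y_i > 0
```

With $\rho<0$, studies with smaller $\hat{\theta_i}$ are more likely to be published, which is the mechanism that biases a naive pooled estimate computed from published studies only.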
The conditional log-likelihood function is given by \begin{eqnarray} && L_{obs}(\theta,\tau,\rho,\alpha_0,\alpha_1;s_i)=\sum_{i=1}^{N}\left[log f(\hat{\theta_i}|Y_i>0,s_i)\right] \nonumber \\ &&= \sum_{i=1}^{N}\left[-\frac{1}{2}log(\tau^2+\sigma_i^2)-\frac{(\hat{\theta_i}-\theta)^2}{2(\tau^2+\sigma_i^2)}-log\Phi(\alpha_0+\alpha_1/s_i)+log\Phi(v_i)\right], \label{lik_obs} \end{eqnarray} where $f(\cdot|Y_i>0,s_i)$ is the probability density function of $\hat{\theta_i}$ conditional on $Y_i>0$ and $s_i$, $ \Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution, and $v_i=\left\{\alpha_0+\alpha_1/s_i +\rho \sigma_i(\hat{\theta_i}-\theta) / (\tau^2+\sigma_i^2) \right\} / \sqrt {1-\rho^2 \sigma_i^2 / (\tau^2+ \sigma_i^2)}$. In the meta-analysis literature, $s_i$ is usually regarded as equal to $\sigma_i$ and known. This approximation is reasonable at least when the number of subjects is sufficiently large. Copas and Shi\cite{copas2000,copas2001} addressed the underestimation of $\sigma_i$ due to publication bias, and utilized the quantity $var(\theta_i|s_i,Y_i>0)$ to estimate $\sigma_i^2$ in inference for their selection model. As argued by Copas and Shi\cite{copas2000,copas2001}, the log-likelihood function ($\ref{lik_obs}$) may take its maximum over a very flat plateau, and it may then be computationally very challenging to estimate all the parameters simultaneously. Thus Copas and Shi\cite{copas2000,copas2001} proposed fixing the vector $(\alpha_0,\alpha_1)$ as sensitivity parameters and estimating $(\theta,\tau,\rho)$ by maximizing the conditional log-likelihood function ($\ref{lik_obs}$). One can then examine how the estimate $\hat{\theta}$ of the overall treatment effect $\theta$ would be influenced by the assumed selective publication process. Since we do not know the true value of $(\alpha_0,\alpha_1)$, this method should be applied with various choices of the parameter vector $(\alpha_0,\alpha_1)$ over a certain range. 
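A minimal sketch of the conditional log-likelihood (\ref{lik_obs}) in Python (illustrative only; the paper works in \emph{R}, and we take $\sigma_i=s_i$ as in the text):

```python
from math import log, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def copas_shi_obs_loglik(theta, tau, rho, a0, a1, theta_hat, s):
    """Conditional log-likelihood of the published studies, with sigma_i = s_i."""
    total = 0.0
    for th_i, s_i in zip(theta_hat, s):
        var_i = tau**2 + s_i**2
        u_i = a0 + a1 / s_i
        v_i = (u_i + rho * s_i * (th_i - theta) / var_i) / sqrt(1.0 - rho**2 * s_i**2 / var_i)
        total += (-0.5 * log(var_i)
                  - (th_i - theta)**2 / (2.0 * var_i)
                  - log(Phi(u_i)) + log(Phi(v_i)))
    return total
```

When $\rho=0$, the terms $-log\Phi(\alpha_0+\alpha_1/s_i)$ and $log\Phi(v_i)$ cancel and the function reduces to the ordinary random-effects log-likelihood, which is a convenient sanity check for an implementation.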
According to the formula by Copas and Shi\cite{copas2000}, one can translate $(\alpha_0,\alpha_1)$ into the expected number of unpublished studies by $ M=\sum_{i=1}^{N}\left\{1-P(Y_i>0|s_i)\right\}/P(Y_i>0|s_i)$, which is more interpretable. One can then evaluate how biased the estimate of the treatment effect $\theta$ is with a certain number of unpublished studies behind. Although such a consideration provides nice insights into the robustness of meta-analysis results, it remains unclear what range of the number of unpublished studies is sufficient to consider. \subsection{Proposed inference procedure utilizing clinical trial registries} In addition to the \emph{N} published studies, we suppose that there are \emph{M} unpublished studies identified by searching clinical trial registries. Without loss of generality, the first \emph{N} studies are published and the last \emph{M} studies are unpublished. Items required at registration are not necessarily common among clinical trial registries. For example, the FDA database provides the sample size of each experimental group in its summary review PDF document, but other clinical trial registries may not. However, all the clinical trial registries we referred to provide the planned total sample size (not separately by experimental group) of each study. Thus, we suppose the triple $(n_i,\hat{\theta_i},s_i)$ is available for \emph{i}=1,2,...,\emph{N} (published studies), and only $n_i$ is available for \emph{i}=\emph{N}+1,...,\emph{N+M} (unpublished studies). A binary variable $D_i=I(Y_i>0)$ denotes the publication status (published/unpublished) of each study. Instead of the model ($\ref{selection}$), we employ an alternative selection model \begin{eqnarray} Y_i=\alpha_0+\alpha_1\sqrt{n_i}+\delta_i, \ \ \ \delta_i \sim N(0,1), \label{selection2} \end{eqnarray} which was used by Copas\cite{copas1999}. 
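The translation of $(\alpha_0,\alpha_1)$ into an expected number of unpublished studies can be sketched as follows, written here for the selection model (\ref{selection2}), so that $P(Y_i>0)=\Phi(\alpha_0+\alpha_1\sqrt{n_i})$ (a hypothetical helper of our own, not code from the paper):

```python
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def expected_unpublished(a0, a1, sample_sizes):
    """Expected number of unpublished studies behind the published ones:
    M = sum_i (1 - p_i) / p_i with p_i = Phi(a0 + a1 * sqrt(n_i))."""
    probs = (Phi(a0 + a1 * sqrt(n)) for n in sample_sizes)
    return sum((1.0 - p) / p for p in probs)
```

Each published study with publication probability $p_i$ contributes $(1-p_i)/p_i$ expected unpublished counterparts, so a selection process that strongly penalizes small studies inflates $M$ quickly.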
For the models ($\ref{mixed}$) and ($\ref{selection2}$), we introduce the maximum likelihood approach as follows. Considering the likelihood contributions of the published and the unpublished studies, the log-likelihood is given by \begin{align} L_{full}(\theta,\tau,\rho,\alpha_0,\alpha_1)&=\sum_{i=1}^{N}\left[log f(\hat{\theta_i}|Y_i>0;n_i)+log P(Y_i>0;n_i)\right]\nonumber \\&+\sum_{i=N+1}^{N+M}\left[log P(Y_i<0;n_i)\right]\nonumber \\ &=\sum_{i=1}^{N}\left[-\frac{1}{2}log(\tau^2+\sigma_i^2)-\frac{(\hat{\theta_i}-\theta)^2}{2(\tau^2+\sigma_i^2)}+log\Phi(\tilde{v_i})\right]\nonumber \\&+\sum_{i=N+1}^{N+M}\left\{log [1-\Phi(\alpha_0+\alpha_1\sqrt{n_i})]\right\}, \label{lik_full} \end{align} where $\tilde{v_i}=\left\{\alpha_0+\alpha_1\sqrt{n_i} +\rho \sigma_i(\hat{\theta_i}-\theta) / (\tau^2+\sigma_i^2) \right\} / \sqrt {1-\rho^2 \sigma_i^2 / (\tau^2+ \sigma_i^2)}$. Following the treatment often made in the meta-analysis literature, we suppose that $\sigma_i=s_i$. By maximizing the log-likelihood function ($\ref{lik_full}$), we can estimate all the parameters $(\theta,\tau,\rho,\alpha_0,\alpha_1)$ simultaneously, and the resulting maximum likelihood estimator is denoted by $(\hat{\theta},\hat{\tau},\hat{\rho},\hat{\alpha_0},\hat{\alpha_1})$. Maximization of the log-likelihood function can be implemented with standard non-linear optimization techniques; here, we employ the constrained L-BFGS-B optimization method via the nlminb() function in \emph{R} (package stats, version 3.6.2). Statistical inference can be conducted following standard maximum likelihood theory. \section{Simulation study}\label{sec4} \subsection{Methods to be compared} Simulation studies were carried out to evaluate the performance of our new inference procedure. We investigated whether or not the proposed method outperforms the Copas sensitivity analysis method. 
For reference, we also compared it with standard random-effects meta-analysis using restricted maximum likelihood (REML) estimation, which does not account for the selective publication process. In addition to the standard REML confidence interval based on normal quantiles, the Hartung and Knapp method\cite{hartung2001,hartung2001refined} was applied using \emph{t}-quantiles and rescaled standard errors, referred to as REML.KnHa in the following. As outlined in Section 3, the Copas method is a sensitivity analysis approach resulting in multiple estimates under several settings of the number of unpublished studies. For comparison with the other methods, we took the estimate of the treatment effect with the smallest number of unpublished studies for which the p-value of the goodness-of-fit test was larger than 0.1\cite{copas2000}, which is presented in the output of the \emph{Copas} function in the \emph{metasens} package\cite{carpenter2009r}. For the REML, we used the commonly used \emph{metafor} package with the method option set to ``REML'' and test set to ``knha'' to obtain the CI with the Hartung and Knapp method. \subsection{Data generation} We mimicked meta-analyses motivated by the tiotropium study. Suppose each study aimed to compare a treatment group and a control group with respect to a binary outcome. The log-odds ratio was used as the summary measure of the treatment effect between the two groups. Let $\hat{\theta_i}$ in the model ($\ref{mixed}$) denote the empirical log-odds ratio of the $i$th study. We set $\theta=-0.25$ and $\tau$=0.05, 0.15 or 0.3. The total number of studies, including published and unpublished, was set to 15, 25, 50 or 100. We generated datasets according to the models ($\ref{mixed}$) and ($\ref{selection2}$) as follows. At first, we generated $\theta_i \sim N(\theta,\tau^2)$, the true log-odds ratio of the \emph{i}th study. 
Next, we generated individual participant data for the two groups with a log-odds ratio of $\theta_i$. We drew the true event rate of the control group from the uniform distribution \emph{U(0.2,0.9)}, and then set the event rate of the treatment group to satisfy the log-odds ratio of $\theta_i$. Similarly to Kuss\cite{kuss2015}, the total sample size of each study was generated from \emph{LN(5,1)}, the log-normal distribution with location parameter 5 and scale parameter 1, and the minimum sample size was restricted to 20 patients (values below 20 were rounded up to 20). Subjects were allocated to the two groups with probability 0.5. With the generated individual participant data, we calculated an empirical log-odds ratio $\hat{\theta_i}$ and its standard error $s_i$. $Y_i$ in the model ($\ref{selection2}$) was generated according to the conditional distribution \begin{equation} Y_i|\hat{\theta_i}\sim N(\alpha_0+\alpha_1 \sqrt{n_i}+\rho\sigma_i (\hat{\theta_i}-\theta)/(\tau^2+\sigma_i^2),\ 1-\rho^2\sigma_i^2/(\tau^2+\sigma_i^2)) \end{equation} and we set $D_i=I(Y_i>0)$, which enabled us to generate $(\hat{\theta_i},Y_i)$ from the joint distribution defined by the models ($\ref{mixed}$) and ($\ref{selection2}$). The parameters $(\alpha_0,\alpha_1)$ were set based on our consideration of the publication rate of a study with the minimum sample size of 20 and that of a study with a large sample size of 500, denoted by $P_{20}$ and $P_{500}$, respectively. We set $P_{500}=0.99$, reflecting our belief that a study with 500 patients has a sufficiently large probability of being published, and $P_{20}$ was set to 0.1, 0.3 or 0.5, representing different degrees of concern about the publication rate of a small study with a sample size of 20. Thus, according to the model ($\ref{selection2}$), we could derive the parameters $(\alpha_0,\alpha_1)$ by solving the equations $P_{20}=\Phi(\alpha_0+\alpha_1\sqrt{20})$ and $P_{500}=\Phi(\alpha_0+\alpha_1\sqrt{500})$. 
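Solving the two equations above is linear in $(\alpha_0,\alpha_1)$ once the probabilities are mapped through $\Phi^{-1}$; a short Python sketch (a helper of our own, not from the paper's code):

```python
from math import sqrt
from statistics import NormalDist

def selection_params(p_small, p_large, n_small=20, n_large=500):
    """Solve Phi(a0 + a1*sqrt(n)) = p at two sample sizes for (a0, a1)."""
    inv = NormalDist().inv_cdf  # standard normal quantile function
    a1 = (inv(p_large) - inv(p_small)) / (sqrt(n_large) - sqrt(n_small))
    a0 = inv(p_small) - a1 * sqrt(n_small)
    return a0, a1

for p20 in (0.1, 0.3, 0.5):
    a0, a1 = selection_params(p20, 0.99)
    print(round(a0, 2), round(a1, 2))  # -2.18 0.2 / -1.24 0.16 / -0.58 0.13
```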
Then we obtained three pairs of $(\alpha_0,\alpha_1)$, namely (-2.18,0.20), (-1.24,0.16) and (-0.58,0.13), which resulted in 40\%, 27\% and 19\% unpublished studies on average in the simulated meta-analyses. The parameter $\rho$ was set to -0.4 or -0.8, and for each $\rho$ we investigated 3 scenarios of $\tau$ (0.05, 0.15 or 0.3). For each scenario, we generated 1000 simulated meta-analyses. In Figure 1, we present funnel plots of randomly selected simulated meta-analyses under different settings of $(\rho,\tau)$ when the total number of published and unpublished studies was 50. The filled and open circles represent published and unpublished studies, respectively, and we observed that our setting successfully reproduced the selective publication process often observed in practice. The bold vertical line represents the true $\theta$ and the dashed vertical line is the REML estimate with published studies. They were clearly different, and thus publication bias had a certain impact on the simulated data. \subsection{Simulation results} The properties of each method were assessed by evaluating the biases and standard errors of the estimates of $\theta$ and the average lengths and coverage probabilities of two-tailed $95\%$ confidence intervals for $\theta$. Following standard maximum likelihood theory, we constructed the two-tailed 95\% confidence interval for our new proposal as $\hat{\theta}\pm$$Z_{1-\frac{\alpha}{2}}\times$ S.E.$(\hat{\theta})$, where $Z_{1-\frac{\alpha}{2}}$ denotes the $1-\frac{\alpha}{2}$ quantile of the standard normal distribution and S.E.$(\hat{\theta})$ is calculated from the inverse of the Fisher information matrix. The results for this confidence interval are denoted by MLE(N) in the summary tables. All these methods were implemented with non-linear optimization techniques, so we also summarized the number of converged cases (\emph{NOC}) in each scenario. 
For our method, non-convergence was defined as a failure in optimization (e.g., a Hessian matrix that was not negative definite) or unsuccessful convergence according to the convergence indicator in the output of the \emph{nlminb} function. For the Copas sensitivity analysis method, we followed the rule of the \emph{Copas} function\cite{carpenter2009, carpenter2009r}. Results of the simulation studies with ($\alpha_0,\alpha_1$)=(-2.18,0.2) are reported in Table 1 ($\rho=-0.4$) and Table 2 ($\rho=-0.8$), respectively. The datasets for Tables 1 and 2 had around 40\% unpublished studies on average. In almost all the simulated datasets, the log-likelihood was successfully maximized and thus yielded estimates. In all the scenarios, the REML had considerable biases. With $\rho=-0.8$ (Table 2), the biases were larger, reflecting that $\rho=-0.8$ modeled a more selective publication process. Both the Copas method and our new proposal decreased the biases, and ours had the smallest biases in most scenarios. With a larger number of studies (N=50 and 100), the maximum likelihood estimator tended to have negligible biases, and the confidence intervals based on maximum likelihood theory (MLE(N)) had empirical coverage probabilities close to the nominal level of 95\%. With a smaller number of studies (N=15 and 25), our method successfully decreased the biases. However, the coverage probabilities might not be close to the nominal level and were unsatisfactory when between-study heterogeneity was moderate or substantial ($\tau=0.15$ and 0.3). We present the results of the simulation studies with ($\alpha_0,\alpha_1$)=(-1.24,0.16) in Tables S3 and S4, and those with ($\alpha_0,\alpha_1$)=(-0.58,0.13) in Tables S5 and S6 in Web-appendix B. The findings were very similar to those discussed above based on Tables 1 and 2. \subsection{Modified confidence intervals} To improve the coverage probabilities with a smaller number of studies, we introduce two modified confidence intervals. 
As suggested by Follmann and Proschan\cite{follmann1999}, an alternative confidence interval $\hat{\theta}\pm$$t_{(N-1);(1-\frac{\alpha}{2})}\times$ S.E.$(\hat{\theta})$ may be used, where $t_{(N-1);(1-\frac{\alpha}{2})}$ denotes the $1-\frac{\alpha}{2}$ quantile of Student's \emph{t} distribution with $N-1$ degrees of freedom. A further modification was also considered by mimicking the idea of the improved variance estimator proposed by Knapp and Hartung\cite{knapp2003} in the meta-regression context: we modified the standard error of $\hat{\theta}$ by S.E.$(\hat{\theta})^\sharp$=Max(S.E.$(\hat{\theta})_{MLE}$, S.E.$(\hat{\theta})_{REML-KnHa}$), where S.E.$(\hat{\theta})_{REML-KnHa}$ is the standard error estimator of the HK-adjusted REML method, reflecting our belief that the standard error of our proposal should be no less than that of methods not considering the selective publication process, owing to the additional uncertainty from unpublished studies. That is, a more conservative confidence interval was calculated as $\hat{\theta}\pm$$t_{(N-1);(1-\frac{\alpha}{2})}\times$S.E.$(\hat{\theta})^\sharp$. We applied these two confidence intervals to the simulated meta-analyses; the results are also shown in Tables 1 and 2 and Tables S3-S6 in Web-appendix B, referred to as MLE(T) and MLE($SE^\sharp$), respectively. We observed that these two confidence intervals gave a certain improvement. In particular, MLE($SE^\sharp$) had much improved empirical coverage probabilities with \emph{N}=15. \section{Re-analysis of two motivating examples with clinical trial registries}\label{sec5} In this section, we apply our new inference procedure to the two case studies of the tiotropium study and the clopidogrel study introduced in Section 2. 
To present data of the unpublished studies as well as those of the published studies, we propose a modified funnel plot that adds the planned sample sizes of the unpublished studies as horizontal lines crossing the y-axis at $\sqrt{n_i}$. The modified funnel plots for the tiotropium study and the clopidogrel study are presented in Figures 2 and 3, respectively. For both motivating cases, all the unpublished studies are concentrated in the lower part of the funnel plot. To these data, we applied our proposed method, as well as the other related methods used in the simulation section, and the results for the two motivating cases are summarized in Tables 3 and 4, respectively. As presented in Subsection 2.1, for the tiotropium study, we identified 24 published and 8 unpublished studies. By applying the linear mixed-effects model to the log-odds ratios with the REML method, the integrated odds ratio was obtained as 0.768 with a two-tailed 95\% CI of [0.697, 0.847] based on the normal approximation. The Knapp and Hartung modification gave a similar CI of [0.691, 0.854]. According to the sensitivity analysis with the Copas model, as the number of unpublished studies increased, the odds ratio rose to 0.811, corresponding to the case of 13 unpublished studies. The overall treatment effect was significantly different from unity even with 13 unpublished studies. Supposing 8 unpublished studies behind, the same number as identified from the clinical trial registries, the Copas sensitivity analysis method gave a pooled odds ratio of 0.803 with a 95\% CI of [0.717, 0.898]. Our proposed method gave an integrated estimate of 0.787 with two-tailed 95\% confidence intervals of [0.710, 0.873] and [0.706, 0.878] based on the standard normal quantile and the t-quantile, respectively. 
In this case, the standard error from our method was larger than that from REML with the Knapp and Hartung modification, so the modified confidence interval MLE($SE^\sharp$) was the same as MLE(T). In summary, all the methods indicated a statistically significant effect of tiotropium, suggesting that selective publication might not have been much of an issue here. For the clopidogrel study introduced in Subsection 2.2, with only 3 unpublished studies identified from the clinical trial registries, the estimates with and without consideration of publication bias were considerably different. By applying the REML to the 12 published studies, the estimate of the pooled odds ratio was 0.579, with two-tailed 95\% CIs of [0.375, 0.892] and [0.385, 0.871] based on the normal quantile and the Knapp and Hartung modification, respectively. Both supported the conclusion that high-maintenance-dose clopidogrel significantly reduced the incidence of MACE/MACCE in comparison with standard-maintenance-dose clopidogrel. However, the sensitivity analysis with the Copas model showed that when the number of unpublished studies increased to 4, the statistical significance of high-maintenance-dose clopidogrel disappeared, and a change of as much as 39\% in the odds ratio estimate was observed with 6 unpublished studies. Our proposed method gave an integrated estimate of 0.692, and a two-tailed 95\% CI based on the normal approximation of [0.496, 0.967], whose upper bound is very close to 1. The modified CIs introduced in Subsection 4.4 were obtained as [0.476, 1.007] (MLE(T)) and [0.460, 1.041] (MLE($SE^\sharp$)), respectively. These results suggest that the significant effect of high-maintenance-dose clopidogrel might be marginal and should be interpreted with caution. 
\section{Discussion}\label{sec6} By utilizing the information on unpublished studies obtained from clinical trial registries, we proposed a new inference procedure for the Copas selection model, which provides a more objective evaluation of publication bias than the widely used funnel plot and trim-and-fill method\cite{carpenter2009, schwarzer2010}. All the unknown parameters in the Copas selection model can be estimated from data with the proposed method. It resolves the issue of the sensitivity analysis approach that some unknown parameters must be fixed over a certain range, and thus gives a more insightful interpretation. Recently, Ning et al.\cite{ning2017} proposed a method to estimate all the unknown parameters. Whereas their method strongly relies on an imputation based on funnel-plot symmetry, our method does not. Here, we would like to address some issues with clinical trial registries. Firstly, different clinical trial registries may give inconsistent information. A comparison of results between ClinicalTrials.gov and the FDA database by Schwartz et al.\cite{schwartz2016} showed that the planned sample sizes registered on the clinical trial registries were not always consistent with the final sample sizes used for statistical analysis. Besides, as acknowledged by Fleminger et al.\cite{fleminger2018}, the status of studies may be inconsistent between EUCTR and ClinicalTrials.gov: studies registered on EUCTR and marked ``completed'' may be marked ``ongoing'' on ClinicalTrials.gov. Secondly, no clinical trial registry can cover all the related studies\cite{glanville2014,tse2018,adam2018}. Thirdly, it is hard to integrate multiple clinical trial registries automatically, since a unique study identifier shared among registries is not available (e.g., on the World Health Organization's (WHO) International Clinical Trials Registry Platform (ICTRP)). 
Due to these issues with clinical trial registries, a comprehensive search of multiple registries is recommended in practice. Clinical trial registries provide various information other than sample size. ClinicalTrials.gov's search results contain 26 items per record, together with analysis results if the investigator submitted them; the ICTRP contains 40 items in the result file, and the FDA database provides much more detail about the registered clinical trials. As acknowledged by several cross-sectional studies, information such as funding type (government, industry or academic) and region may play important roles in publication status\cite{loder2018}. In addition, information on ongoing studies is also available, although we did not use it in this paper. Further utilization of clinical trial registries deserves more attention. \section*{Acknowledgments} The last author's research was partly supported by Grant-in-Aid for Challenging Exploratory Research (16K12403) and for Scientific Research (16H06299, 18H03208) from the Ministry of Education, Science, Sports and Technology of Japan.
\section{Introduction} Galactic modeling requires considering evolution and an asymmetric potential, as pointed out by \citet{antoja2018}, who revealed many intriguing signals in our Galaxy such as snail shells, arches and ridges. The so-called Galactoseismology \citep{Widrow12, Widrow14} concludes that the potential of the Milky Way is non-equilibrium and non-stationary, as observed in many density or velocity asymmetries in \citet{liuchao2017, wang2018a, wang2018b, wang2018c, wang2019, wang2020a, wang2020b, wang2020c, xu2015, Katz2018, Carrillo2019, Trick2019, Lopez2019, Lopez2020} and references therein, which are significant for understanding the dynamical history of the Milky Way. There is no doubt that we are entering a golden era of Galactoseismology by embracing $Gaia$ \citep{Gaia2018} parallaxes and proper motions. The inspiring snails and ridges imply that the disk is phase mixing from an out-of-equilibrium state \citep{antoja2018}. Quadrupole patterns and phase spirals at different Galactic positions have been revealed by \citet{wangchun2019}, showing that external perturbation by the Sagittarius dwarf galaxy might be their dynamical origin. Time stamps suggested that the snails occurred between 0.5 and 6\,Gyr ago, leading to the consideration that young stars may retain a memory of the interstellar medium \citep{Tian2018}. Phase space snail shells in different cold and hot orbit distributions were also dissected in \citet{Li2020}. Unfortunately, these works using the LAMOST survey \citep{Deng2012, liu2014, cui2012, zhao2012} did not investigate the intriguing ridges in more detail. For the phase mixing patterns and structures, the proposed scenarios are mainly of two types: one invokes external perturbations \citep{antoja2018, Binney2018, Bland-Hawthorn2019, Laporte2019, Minchev2009, Craig2019}, e.g., perturbation by the Sagittarius dwarf galaxy; the other invokes internal dynamics \citep{Khoperskov2019, Barros2020, Quillen2018, Monari2019}, e.g., buckling of the stellar bar accompanied by bending waves without an external intruder. Simulations of the ridges driven by both spiral arms and the Sagittarius perturbation are shown in \citet{Khanna2019}. 
The Outer Lindblad Resonance of the bar could create the prominent ridges \citep{Fragkoudi2019}, which can be compared with the ridge map in \citet{Kawata2018}. Multiple ridges were also found by \citet{Hunt2018} with 2D transient spiral arms. Arches might be the projection of the ridges onto the $V_{R}$, $V_{\phi}$ plane \citep{antoja2018}, and the two are connected \citep{Ramos2018}. Some recent works also show that the ridges could be produced by internal mechanisms alone, such as spirals, without external contributors \citep{Barros2020, Michtchenko2019}. So far, we still lack a clear picture of the ridges, arches and vertical waves, of either their origins or their relations, and whether they arise from internal or external mechanisms, or both, is very unclear. In this work, we focus on the ridge pattern, tracing it with time stamps in a multi-dimensional parameter space, trying to obtain more details of its features and to better constrain its origin. There are other recent works discussing the snails, but relatively few have focused on the ridges, and none with the time-tagging analysis we intend here. The cornerstone Gaia DR2 mission \citep{Gaia2018} has already measured precise proper motions and distances for more than 1.3 billion stars. Gaia data, in combination with the statistical distribution of stellar ages of millions of stars from LAMOST \citep{Deng2012, liu2014, cui2012, zhao2012}, provide a good sample to study the ridge pattern, by which we can track the variation of the feature in different age populations from multiple perspectives and thus advance our understanding without precedent. This paper is organized as follows. In Section 2, we introduce how we select the Main-Sequence-Turn-Off (MSTO) and OB star samples and describe their properties concisely. The results and discussions are presented in Section 3. Finally, we conclude this work in Section 4. 
\section{The Sample Selection} A sample of around 0.93 million Main-Sequence-Turn-Off stars, with a contribution of subgiant stars, from the LAMOST Galactic spectroscopic surveys (including the disk region, the Galactic$-$Anticenter region, etc.) is selected based on their locus in the $T_{eff}-M_{V}$ plane. With the help of LAMOST DR4 spectra and the Kernel Principal Component Analysis (KPCA) method, the accuracy of the radial velocities reaches 5 km s$^{-1}$. The ages are determined by matching with stellar isochrones, using the Yonsei-Yale (Y2) isochrones and a Bayesian algorithm similar to that of \citet{Jorgensen2005}, with the help of $T_{eff}$, $\log g$, [Fe/H] and [$\alpha$/Fe]. Interstellar extinction was derived using the star-pair method, a technique able to determine $E(B-V)$ to an accuracy of 0.01 mag \citep{Yuan2015}; the distance uncertainties range between 10 and 30 per cent. Overall, the sample stars have a median age uncertainty of 34\%, and the typical uncertainties of the stellar parameters $T_{eff}$, $\log g$, [Fe/H] and [$\alpha$/Fe] measured from the LAMOST data are 100 K, 0.1 dex, 0.1 dex and 0.05 dex, respectively \citep{xiang2017a, xiang2017b, xiang2017c}. The OB stars are easily selected in the spectral line indices space of LAMOST, and their distances are from $Gaia$; more details can be found in \citet{liu2019}, and these data were used to unravel some velocity asymmetries in \citet{Cheng2019}. The second data release of the $Gaia$ mission, with unprecedented high-precision proper motions with typical uncertainties of 0.05, 0.2 and 1.2 mas yr$^{-1}$ for stars with $G$-band magnitudes $\leq$ 14, 17 and 20\,mag, respectively, has made it possible to map the Galaxy's kinematics and Galactoseismology with hitherto the largest spatial extent \citep{Gaia2016, Gaia2018}. 
3D velocities are derived by assuming that the Sun is located at $R_{\odot}$ = 8.34 \,kpc \citep{Reid14} and $Z_{\odot}$ = 27 \,pc \citep{Chen01}, and by adopting the \citet{Tian15} solar motion values [$U_{\odot}$, $V_{\odot}$, $W_{\odot}$] = [9.58, 10.52, 7.01] km s$^{-1}$; other solar motions \citep[e.g., ][]{Huang2015} would not change our conclusions. The circular speed of the LSR is adopted as 238 km s$^{-1}$ \citep{Schonrich12}, and Cartesian coordinates are obtained via the coordinate transformation described in $Galpy$ \citep{Bovy2015}. We use LAMOST distances for MSTO stars, based on absolute magnitude and extinction measurements, and Gaia parallaxes or distances only for OB stars. \citet{Lopez2019} have tested the zero-point bias in the parallaxes of Gaia DR2 and suggested that the effect of the systematic parallax error is negligible for this kind of work; with similar tests to \citet{Lopez2019}, we likewise find that the small zero-point bias does not affect our conclusions. In fact, our sample stars are all within 4$-$5 \,kpc of the Sun, with parallaxes larger than 0.2$-$0.25 mas, so the small zero-point bias cannot change our conclusions. We show the MSTO sample in Fig. \ref{tefflog}, which displays the $T_{eff}$ vs. $\log g$ distribution colored by age: most stars have surface gravity larger than 3, and younger stars have higher effective temperature than older ones. Fig. \ref{ridge_NS_mapcount1} shows the star-count distribution in the $R, Z$ plane; counting both sides shows that there are more northern stars than southern stars. In order to build a reliable sample containing stellar astrophysical parameters and precise kinematical information, we apply the following criteria to the LAMOST spectroscopic survey and Gaia catalogs: 1) $|Z|$ $<$ 1.5 \,kpc and 7.5 $<$ $R$ $<$ 12 \,kpc; 2) SNR $>$ 20; 3) age between 0 and 14 \,Gyr; 4) $v_{\phi}$ = [50, 350] km s$^{-1}$; 5) parallax $>$ 0 and relative parallax error $<$ 0.20. 
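The selection criteria and the derived quantities above can be sketched in a few lines of code; the function and variable names below are illustrative (not from the paper's actual pipeline), and the cuts simply transcribe criteria 1)--5):

```python
import numpy as np

# Adopted parameters from the text (illustrative constant names).
R_SUN_KPC = 8.34   # Reid et al. (2014)
V_LSR_KMS = 238.0  # Schoenrich (2012)

def select_sample(R, z, snr, age, v_phi, parallax, parallax_err):
    """Apply the five sample cuts listed in Section 2 (a sketch).

    All inputs are equal-length NumPy arrays; returns a boolean mask.
    """
    return (
        (np.abs(z) < 1.5)
        & (R > 7.5) & (R < 12.0)
        & (snr > 20)
        & (age > 0) & (age < 14.0)
        & (v_phi >= 50.0) & (v_phi <= 350.0)
        & (parallax > 0)
        & (parallax_err / parallax < 0.20)
    )

def angular_momentum(R_kpc, v_phi_kms):
    """Vertical angular momentum L_Z = R * v_phi, in kpc km/s."""
    return R_kpc * v_phi_kms
```

For a star at $R_{\odot}$ = 8.34 \,kpc rotating at the adopted LSR speed of 238 km s$^{-1}$, this gives $L_Z \approx 1985$ \,kpc km s$^{-1}$, within the range of the constant-$L_Z$ curves (1650$-$2080) used in the following figures.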
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{tefflogg.png} \caption{MSTO stellar age distribution in the $T_{eff}$ and $\log g$ plane adopted in this work; younger stars have higher effective temperature than older stars.} \label{tefflog} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{ridge_NS_mapcount1.pdf} \caption{MSTO stellar spatial distribution in the $R$ and $z$ plane adopted in this work; comparing the numbers on both sides, northern stars (296451) outnumber southern stars (238564).} \label{ridge_NS_mapcount1} \end{figure} \section{Results and Discussions} \subsection{Ridge patterns investigation for MSTO stars} In this part, we investigate the ridge pattern in different parameter-space plots. Fig. \ref{msden_vr_feh} shows the density ($f$), radial velocity ($v_{R}$) and vertical velocity ($v_{z}$) distributions in the plane of rotational velocity (y axis) versus radial distance (x axis); the magenta dotted curves represent constant angular momentum $L_{Z}$ = (1650, 1800, 2080) kpc km s$^{-1}$, including the contribution of $V_{LSR}$ \citep{Schonrich12}. The radial distance range displayed here is from 7.5 to 12 \,kpc. There are no clear ridge features in the density pattern, due to selection effects, sample precision, etc. However, the ridge pattern is very prominent and strong in the radial velocity distribution shown in the middle sub-figures, appearing as negative blue strips accompanied by positive red stripes, with patterns and trends similar to some previous works, e.g., \citet{Khanna2019}. In addition, since we detect the ridge signals over the age range of 0 to 14 \,Gyr, the time over which the ridge feature is sensitive to the possible perturbation is 0$-$14 \,Gyr. 
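The magenta curves of constant angular momentum described above are simply $v_{\phi}(R) = L_Z / R$; a minimal sketch of how such curves can be tabulated over the plotted radial range (variable names are illustrative):

```python
import numpy as np

# Constant angular-momentum curves in the (R, v_phi) plane:
# along each magenta curve, v_phi(R) = L_Z / R.
L_Z_VALUES = (1650.0, 1800.0, 2080.0)  # kpc km/s, as in the figure caption

def vphi_of_R(L_Z, R_kpc):
    """Rotational velocity (km/s) at radius R along a curve of constant L_Z."""
    return L_Z / np.asarray(R_kpc, dtype=float)

R_grid = np.linspace(7.5, 12.0, 46)    # radial range used in the figures
curves = {L: vphi_of_R(L, R_grid) for L in L_Z_VALUES}
```

Along the $L_Z$ = 1650 curve, for example, the rotational velocity drops from 220 km s$^{-1}$ at 7.5 kpc to 137.5 km s$^{-1}$ at 12 kpc, which is why the curves slope downward across the maps.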
It is remarkable that the angular momentum per unit mass of the ridge pattern varies with age, when compared with the constant magenta lines in the radial velocity sub-figures shown in the middle panel. As seen in the top panel of the middle figure of Fig. \ref{msden_vr_feh}, the three adopted magenta lines correspond to the three ridges traced by blue and red strips. We define these three strips as ridge A, ridge B and ridge C, from top to bottom. For ridge A, at the top, when the age is less than 6 \,Gyr the general trend of the pattern can be matched with the constant angular momentum line, focusing on the range of 9$-$10.5 \,kpc; but for ages larger than 6 \,Gyr the ridge pattern, especially in the range of 9$-$10.5 \,kpc, deviates from the magenta lines by around 10 km s$^{-1}$ in all population bins. The corresponding errors are small, almost all less than 2$-$4 km s$^{-1}$. For ridge B, the overall trend of the ridge pattern matches the second magenta line in all populations without significant deviations, implying no significant variation with respect to the constant angular momentum line. Ridge C, the lowest one, also matches the third magenta line well and shows no clear variation like that of ridge A, again focusing on the range of 9$-$10.5 \,kpc to guide the eye. We suggest that the relatively variable ridge A ($\tau > $ 6 vs. $\tau < $ 6 \,Gyr) and the relatively invariable ridges B and C represent two kinds of ridges, possibly originating from different physical scenarios, which is helpful for unveiling the origins of the ridges. Thus the current figures show two relatively stable ridges and one variable ridge. In order to see the variation of these ridges more clearly, following \citet{Friske2019}, in Fig. \ref{ridge_Lz_VR} we also use the $L_{Z}$ vs. 
$v_{\phi}$ panel colored by $v_{R}$ to investigate the ridge pattern. There are again three ridge patterns from left to right, colored red, blue and blue, corresponding to the three ridges of Fig. \ref{msden_vr_feh}. The left two ridges are relatively stable, but a shift of the right ridge, located around $L_{Z}$ = 2180 \,kpc km s$^{-1}$, is detected, especially when the age is larger than 3 \,Gyr. Note that here we use a vertical line at $L_{Z}$ = 2180 rather than 2080, because the angular momentum carries larger errors than the radial distance (it combines the velocity and distance errors), so the pattern here differs somewhat from Fig. \ref{msden_vr_feh}; we use a relatively larger value for the right vertical magenta line in order to show the variable pattern on the right clearly and to match the pattern of Fig. \ref{msden_vr_feh}, which does not affect our conclusions. \citet{Friske2019} suggested that an $L_{Z}$ shift of resonances with age is expected, because of the higher energy E (or action J) of older stars. We propose that the shift of this ridge again supports the existence of two kinds of ridges. Turning to the $v_{z}$ pattern in the same plane, shown in the right panel of Fig. \ref{msden_vr_feh}, weak ridge features are observed in the vertical motions, especially around the bottom magenta line; they are not as strong as in the radial velocity, nor as clear as the results of \citet{Khanna2019}, who show a clear pattern in the vertical velocity distribution. The ridge stars in \citet{Khanna2019} mainly consist of mid-plane stars with heights less than 0.2 \,kpc, different from our results here using stars with heights less than 1.5 \,kpc (in order to get more stars and see the ridge pattern in $v_{R}$ clearly). 
When we use similar but more stringent selection conditions, the sample is too small to show a pattern as clear as Fig. \ref{msden_vr_feh}, due to the increasing Poisson noise and observational errors, etc. Stars at all heights contribute to the ridge, but in this case the vertical information might be completely washed out \citep{Khanna2019}. We can test whether the factors mentioned here and in the last paragraph also affect the distributions of [$Fe/H$], [$\alpha/Fe$] and $v_{z}$. As shown in Fig. \ref{msden_alpha_vz}, which displays the [$Fe/H$], [$\alpha/Fe$] (z = [$-$1.5, 1.5] \,kpc) and $v_{z}$ (z = [$-$0.2, 0.2] \,kpc) distributions, there are still weak ridge features in the metallicity and abundance, especially in the top panels of the left and middle figures, where red and blue strips appear in the range of 8$-$10 \,kpc around 220 km s$^{-1}$. Other features are less clear, but some signals are still detectable, e.g., in the third row of the left panel and the second row of the middle figure. Using a narrow range of stars, we detect a weaker signal around the bottom magenta line in the vertical velocity distribution in the right panel of Fig. \ref{msden_alpha_vz}. Summing up, in the $v_{\phi}, R$ plane our sample shows the well-known ridge pattern in radial velocity, accompanied by signals in the [$Fe/H$], [$\alpha/Fe$] and $v_{z}$ distributions, together with a time-tagging analysis not shown in previous works. These are observational evidence that different ridges might have angular momenta that are variable or not with time, showing for the first time that there are possibly two types of ridges with different properties and origins. 
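The quoted $\sim$10 km s$^{-1}$ offset of ridge A from its constant-$L_Z$ line can be quantified, for example, as the median residual of $v_{\phi}$ with respect to $v_{\phi} = L_Z/R$ inside the 9$-$10.5 \,kpc window. The sketch below is purely illustrative and is not the paper's actual measurement procedure:

```python
import numpy as np

def ridge_deviation(R, v_phi, L_Z, R_window=(9.0, 10.5)):
    """Median offset (km/s) of stars' v_phi from a constant-L_Z curve.

    Illustrative only: the ridge membership in the paper is read off the
    (R, v_phi) maps; here we simply compare all stars in the radial
    window against the curve v_phi = L_Z / R.
    """
    in_window = (R >= R_window[0]) & (R <= R_window[1])
    expected = L_Z / R[in_window]
    return float(np.median(v_phi[in_window] - expected))
```

Applied per age bin, such a statistic would make the "deviates for $\tau > 6$ \,Gyr, matches for $\tau < 6$ \,Gyr" distinction between ridge A and ridges B, C quantitative.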
\begin{figure*}[!t] \centering \includegraphics[width=0.307\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_mapcount.pdf} \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_map1.pdf} \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_mapvz15.pdf} \caption{Star distributions in the ($R, v_{\phi}$) plane for LAMOST MSTO stars with Gaia DR2 proper motions, in different age populations. Heat maps of various quantities are shown: the left panel is the density $f$ distribution, the middle one the radial motion $v_{R}$, and the right panel the vertical velocity $v_{z}$. The magenta dotted curves represent constant angular momentum $L_{Z}$ = (1650, 1800, 2080) \,kpc km s$^{-1}$, including the contribution of $V_{LSR}$. The radial distance range is from 7.5 to 12 \,kpc.} \label{msden_vr_feh} \end{figure*} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{ridge_Lz_VR.pdf} \caption{MSTO stellar radial velocity distribution in the $v_{\phi}$ and $L_{Z}$ plane adopted in this work. There are three ridges around $L_{Z}$ = (1650, 1800, 2180) \,kpc km s$^{-1}$, marked by the vertical magenta lines; the left two lines are the same as in Fig. \ref{msden_vr_feh}, while the right line used here differs slightly from the Fig. 
\ref{msden_vr_feh} in order to guide the eye clearly.} \label{ridge_Lz_VR} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_mapfeh.pdf} \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_mapafe.pdf} \includegraphics[width=0.295\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_mapvz02.pdf} \caption{The chemical [$Fe/H$] and $\alpha$-abundance distributions for z = [$-$1.5, 1.5] \,kpc, and the vertical velocity distribution for z = [$-$0.2, 0.2] \,kpc, in the rotational velocity (y) vs. radial distance (x) plane, shown in the left, middle and right panels respectively. Ridges are detected in the left two figures and a weak ridge signal is seen in the right figure.} \label{msden_alpha_vz} \end{figure*} \subsection{Ridge patterns investigation by OB stars} As mentioned in the last section, the ridge pattern has been sensitive to perturbation over 0$-$14 \,Gyr. In order to learn more about its population features, we make full use of different LAMOST samples. We use OB stars \citep{liu2019} to chart the distributions of density and radial velocity in the ($R, v_{\phi}$) plane, displayed in Fig. \ref{obden_vr}. There is an obvious ridge strip colored blue on the right, especially in the radial velocity for R from 9$-$11 \,kpc and $v_{\phi}$ from 170 to 200 km s$^{-1}$; this ridge is similar to one of the MSTO ridges. As a matter of fact, the OB ridges need not correspond exactly to the MSTO ridges, due to population effects. \begin{figure*}[!t] \centering \includegraphics[width=0.46\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{rgdenob.pdf} \includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{rgvrob.pdf} \caption{Star distributions in the ($R, v_{\phi}$) plane for LAMOST OB stars. 
The left panel shows the density distribution and the right panel the mean radial velocity. The ridge pattern, seen especially as the blue strip on the right, is clear. The magenta dotted curves represent constant angular momentum $L_{Z}$, similar to Fig. \ref{msden_vr_feh}.} \label{obden_vr} \end{figure*} \subsection{Discussions} As manifested and implied in \citet{wang2020b} and references therein, we suggest that many mechanisms might be coupled together to cause the complex and abundant vertical asymmetries, with bending and breathing modes accompanied by non-zero mean radial motions and asymmetrical rotation in the disk regions. All of these might be described by one comprehensive dynamical distribution function. In-plane asymmetries and vertical motions are coupled, as shown clearly in \citet{antoja2018, Khanna2019}, but whether they remain coupled at other locations and in other populations is not clear. \citet{antoja2018} used a relatively narrow range in the solar neighborhood to discover the snails in the $z, v_{z}$ plane and the arches, shells and box in the $v_{R}, v_{\phi}, v_{z}$ plane, and thus drew the coupling conclusion; but the coupling of the ridges in the $R, v_{\phi}$ plane, which corresponds to a larger distance range, was not clear in that work. Moreover, we can examine the details of the chemistry of the ridges alone. A relatively clear picture was proposed by \citet{Khanna2019}, who made use of all stars of the GALAH southern sky survey, a test-particle simulation and an N-body simulation to explore the relations between ridges, arches and vertical waves; their sky coverage and tracers differ from ours. Here we provide the time over which the ridge pattern in our sample is sensitive to the perturbations, report the angular momentum variation of a ridge across age populations, and show the ridge distributions on the north and south sides in Fig. \ref{ridgens}, etc., using the LAMOST sky survey and only MSTO and OB stars. 
\citet{Khanna2019} suggested that the ridges, arches and vertical waves are coupled together; they showed that the $v_{R}$ patterns are strongly correlated with each other and that some signals are also detected in [Fe/H], [$\alpha/Fe$] and $v_{z}$, consistent with our main results. Meanwhile, they also pointed out that phase mixing of disrupting spiral arms can generate both the ridges and the arches, noting that, from a theoretical point of view, different ridges could originate from different scenarios; but to unify the coupled planar and vertical motions, a perturbation by an intermediate-mass satellite like Sagittarius is favored. In this work, we detect that the angular momentum of one ridge pattern is relatively variable with age while the other two are relatively stable, indicating that the two kinds of ridges might have different origins, accompanied by the vertical signals. We have also carried out a test in Fig. \ref{ridgens}. The heat maps show the stellar radial velocity distribution in the ($R, v_{\phi}$) plane in different age populations for the full sample (left), the southern stars of the ridge (middle) and the northern stars of the ridge (right). No clear north-south asymmetries appear. By investigating the origin of moving groups and diagonal ridges with the help of simulations of stellar orbits and birthplaces, \citet{Barros2020} pointed out that the diagonal ridges could originate from spiral resonances. No evidence of incomplete phase mixing in the vertical direction of the disk was found in \citet{Michtchenko2019}, whose results can be explained by internal mechanisms without external perturbations. Recently, \citet{Kushniruk2020} investigated the HR 1614 moving group and proposed that several different mechanisms, such as resonances of the bar, spiral structure, phase mixing of dissolving spiral structure and phase mixing due to an external perturbation, should be combined to explain this feature. 
\citet{Laporte2020} investigated the ridges and moving groups and found that a long bar could qualitatively produce the ridge features, noting that both internal and external mechanisms are shaping the Galactic disk. All these works need to explain the dependence of the ridge features on stellar age found in this paper. Our results provide additional constraints on the theoretical models and encourage further theoretical studies to distinguish these scenarios based on the new observational constraints. \begin{figure*}[!t] \centering \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_map1.pdf} \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_map2s.pdf} \includegraphics[width=0.3\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{ridge_NS_map3n.pdf} \caption{Star distributions in the ($R, v_{\phi}$) plane colored by radial velocity. Heat maps of various quantities are shown: the left panel is the radial velocity distribution of the full sample (shown again for comparison), the middle one the southern sample, and the right panel the northern sample. The magenta dotted curves represent constant angular momentum, similar to Fig. \ref{msden_vr_feh}.} \label{ridgens} \end{figure*} \section{Conclusion} In this work, using LAMOST$-$Gaia combined stars, we clearly corroborate the existence of the ridge structure in the radial velocity distribution in the $v_{\phi}, R$ plane. More importantly, from a detailed analysis of the three ridges, evidence for two kinds of ridge patterns with possibly different dynamical origins is revealed for the first time: the angular momentum of one ridge varies across age populations while the others do not, and the two kinds of ridges appear even more clearly in the $v_{R}, L_{Z}$ plane, again implying that different ridges might arise from different physical scenarios. 
Moreover, the ridge patterns also show some features in the [Fe/H], [$\alpha/Fe$] and $v_{z}$ distributions. We further investigate the kinematics of the ridge pattern with different stellar ages and find that the asymmetry has been sensitive to the perturbations over 0$-$14 \,Gyr. With the help of the younger population of OB stars, we also detect ridge signals in the radial velocity distribution. This is the first time-stamped work on the ridges, and it unveils different levels of sensitivity of different stellar populations in their response to the possible dynamical perturbation. These features deserve investigation in more detail, e.g., we will go to larger distances, beyond 12 \,kpc, to characterize them in more dimensions, which is beyond the scope of the current work. \acknowledgements We would like to thank the anonymous referee for his/her very helpful and insightful comments. This work is supported by the National Key Basic R\&D Program of China via 2019YFA0405500. H.F.W. is supported by the LAMOST Fellow project, funded by the China Postdoctoral Science Foundation via grants 2019M653504 and 2020T130563, the Yunnan Province Postdoctoral Directed Culture Foundation, and the Cultivation Project for LAMOST Scientific Payoff and Research Achievement of CAMS-CAS. M.L.C. was supported by grant PGC-2018-102249-B-100 of the Spanish Ministry of Economy and Competitiveness. Y.H. acknowledges the National Natural Science Foundation of China U1531244, 11833006, 11811530289, U1731108, 11803029, and 11903027 and the Yunnan University grants No. C176220100006 and C176220100007. H.W.Z. is supported by the National Natural Science Foundation of China under grant number 11973001. H.F.W. 
is fighting for the plan ``Mapping the Milky Way Disk Population Structures and Galactoseismology (MWDPSG) with large sky surveys" in order to establish a theoretical framework in the future to unify the global picture of the disk structures and origins with a possible comprehensive distribution function. We pay our respects to elders, colleagues and others for comments and suggestions, thanks to all of them. The Guo Shou Jing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by National Astronomical Observatories, Chinese Academy of Sciences. This work has also made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\section{Appendix}\label{s:appendix} For compactness we use the following shorthand notations for the transition and observation functions throughout the next two subsections: $\ensuremath{T}\xspace = \ensuremath{T(s, a, s')}\xspace$, $\ensuremath{\widehat{T}}\xspace = \ensuremath{\widehat{T}(s, a, s')}\xspace$ and $\ensuremath{Z}\xspace = \ensuremath{Z(s, a, o)}\xspace$, $\ensuremath{\widehat{Z}}\xspace = \ensuremath{\widehat{Z}(s, a, o)}\xspace$. Additionally, in Section~\ref{ssec:proofLem4} we use the notations $\ensuremath{T}\xspace_{k} = T(\ensuremath{s}\xspace_k, \ensuremath{a}\xspace, \ensuremath{s'}\xspace)$ and $\ensuremath{\widehat{T}}\xspace_{k} = \widehat{T}(\ensuremath{s}\xspace_k, \ensuremath{a}\xspace, \ensuremath{s'}\xspace)$. \subsection{Proof of Lemma \ref{lem:lem0}}\label{ssec:lemma_0_proof} Consider any $\sigma \in \Gamma$ with its action $\ensuremath{a}\xspace \in \ensuremath{A}\xspace$ and observation strategy $\nu$. Then for any $\ensuremath{s}\xspace\in\ensuremath{S}\xspace$ \begin{align}\label{eq:first_lemma_0} &&&\left | \alpha_{\sigma}(\ensuremath{s}\xspace) - \widehat{\alpha}_{\sigma}(\ensuremath{s}\xspace) \right | \nonumber \\ &&=&\left | \rewFuncComp{\ensuremath{s}\xspace}{\ensuremath{a}\xspace} + \gamma \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \ensuremath{T}\xspace \ensuremath{Z}\xspace \alpha_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace)\ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right. \nonumber\\ &&&\left. 
-\rewFuncComp{\ensuremath{s}\xspace}{\ensuremath{a}\xspace} - \gamma \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \ensuremath{\widehat{T}}\xspace \ensuremath{\widehat{Z}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right | \nonumber \\ &&=&\gamma \left | \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \ensuremath{T}\xspace \ensuremath{Z}\xspace \alpha_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) - \ensuremath{\widehat{T}}\xspace\ensuremath{\widehat{Z}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right | \nonumber \\ &&\leq&\gamma \left ( \left |\ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \ensuremath{T}\xspace \ensuremath{Z}\xspace \left [\alpha_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) -\widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \right ] \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right | \right. \nonumber \\ & && + \left. 
\left |\ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \left [\ensuremath{T}\xspace\ensuremath{Z}\xspace -\ensuremath{\widehat{T}}\xspace \ensuremath{\widehat{Z}}\xspace \right ] \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace\right | \right ) \end{align} Let's have a look at the second term on the right-hand side of \eref{eq:first_lemma_0}, that is \begin{align}\label{eq:first_lemma_1} &&term2(\ensuremath{s}\xspace, \ensuremath{a}\xspace) =& \left | \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \left [\ensuremath{T}\xspace \ensuremath{Z}\xspace - \ensuremath{\widehat{T}}\xspace \ensuremath{\widehat{Z}}\xspace \right ] \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right | \end{align} We can expand this term as follows: \begin{align}\label{eq:first_lemma_2} &&&term2(\ensuremath{s}\xspace, \ensuremath{a}\xspace) \nonumber \\ &&=& \left | \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \left [ \ensuremath{T}\xspace \ensuremath{Z}\xspace - \ensuremath{\widehat{T}}\xspace\ensuremath{Z}\xspace + \ensuremath{\widehat{T}}\xspace\ensuremath{Z}\xspace - \ensuremath{\widehat{T}}\xspace \ensuremath{\widehat{Z}}\xspace\right ] \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right |\nonumber \\ &&\leq& \left | \ensuremath{\int_{s' \in S}}\xspace \left [ \ensuremath{T}\xspace - \ensuremath{\widehat{T}}\xspace\right ] \ensuremath{\int_{o \in O}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace)\ensuremath{Z}\xspace \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace\right | \nonumber \\ & && + \left |\ensuremath{\int_{s' \in 
S}}\xspace \ensuremath{\widehat{T}}\xspace \ensuremath{\int_{o \in O}}\xspace \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace)\left [\ensuremath{Z}\xspace - \ensuremath{\widehat{Z}}\xspace \right ] \ensuremath{d}\xspace\ensuremath{o}\xspace\ensuremath{d}\xspace\ensuremath{s'}\xspace\right | \nonumber \\ &&\leq& \ensuremath{\int_{s' \in S}}\xspace \left |\ensuremath{T}\xspace- \ensuremath{\widehat{T}}\xspace \right | \ensuremath{\int_{o \in O}}\xspace \left | \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \right | \ensuremath{Z}\xspace \ensuremath{d}\xspace\ensuremath{o}\xspace\ensuremath{d}\xspace\ensuremath{s'}\xspace \nonumber \\ & && + \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\widehat{T}}\xspace \ensuremath{\int_{o \in O}}\xspace \left | \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \right | \left |\ensuremath{Z}\xspace - \ensuremath{\widehat{Z}}\xspace \right | \ensuremath{d}\xspace\ensuremath{o}\xspace\ensuremath{d}\xspace\ensuremath{s'}\xspace \end{align} The term $\left | \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \right |$ can be upper-bounded via $\left | \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace)\right | \leq \frac{R_{m}}{1-\gamma}$ for any $\ensuremath{s}\xspace \in \ensuremath{S}\xspace$, which yields \begin{align}\label{eq:first_lemma_3} &&&term2(\ensuremath{s}\xspace, \ensuremath{a}\xspace) \nonumber \\ &&\leq& \frac{R_{m}}{1-\gamma} \left [\ensuremath{\int_{s' \in S}}\xspace \left |\ensuremath{T}\xspace - \ensuremath{\widehat{T}}\xspace \right | \ensuremath{d}\xspace\ensuremath{s'}\xspace + \ensuremath{\int_{s' \in S}}\xspace\ensuremath{\widehat{T}}\xspace\ensuremath{\int_{o \in O}}\xspace\left | \ensuremath{Z}\xspace - \ensuremath{\widehat{Z}}\xspace \right | \ensuremath{d}\xspace\ensuremath{o}\xspace\ensuremath{d}\xspace\ensuremath{s'}\xspace \right ] \end{align} From the definition of the total variation distance, it 
follows that $\ensuremath{\int_{s' \in S}}\xspace \left | \ensuremath{T}\xspace - \ensuremath{\widehat{T}}\xspace\right |\ensuremath{d}\xspace\ensuremath{s'}\xspace = 2D_{TV}^{\ensuremath{s}\xspace, \ensuremath{a}\xspace}(\ensuremath{T}\xspace, \ensuremath{\widehat{T}}\xspace)$ for any given $\ensuremath{s}\xspace \in \ensuremath{S}\xspace$ and $\ensuremath{a}\xspace \in \ensuremath{A}\xspace$ and $\ensuremath{\int_{o \in O}}\xspace \left |\ensuremath{Z}\xspace - \ensuremath{\widehat{Z}}\xspace \right | \ensuremath{d}\xspace\ensuremath{o}\xspace= 2D_{TV}^{\ensuremath{s'}\xspace, \ensuremath{a}\xspace}(\ensuremath{Z}\xspace, \ensuremath{\widehat{Z}}\xspace)$ for any given $\ensuremath{s'}\xspace \in \ensuremath{S}\xspace$. Substituting these equalities into \eref{eq:first_lemma_3} and taking the supremum over the conditionals $\ensuremath{s}\xspace, \ensuremath{s'}\xspace$ and $\ensuremath{a}\xspace$ allows us to upper-bound \eref{eq:first_lemma_3} by \begin{equation} term2(\ensuremath{s}\xspace, \ensuremath{a}\xspace) \leq 2\frac{R_{m}}{1-\gamma} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} \end{equation} Substituting this upper bound into \ref{eq:first_lemma_0} yields \begin{align}\label{eq:lemm2_eq_5} &&&\left | \alpha_{\sigma}(\ensuremath{s}\xspace) - \widehat{\alpha}_{\sigma}(\ensuremath{s}\xspace) \right | \nonumber \\ &&\leq& \gamma\biggl | 2\frac{R_{m}}{1-\gamma} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} \biggr. \nonumber \\ &&&\left. 
+ \ensuremath{\int_{s' \in S}}\xspace\ensuremath{\int_{o \in O}}\xspace \ensuremath{T}\xspace \ensuremath{Z}\xspace \left [\alpha_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) - \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \right] \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right | \nonumber \\ &&\leq& \gamma \biggl ( 2 \frac{R_{m}}{1-\gamma} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} \biggr.\nonumber \\ &&&\left. + \ensuremath{\int_{s' \in S}}\xspace\ensuremath{\int_{o \in O}}\xspace \ensuremath{T}\xspace \ensuremath{Z}\xspace \left |\alpha_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) - \widehat{\alpha}_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \right| \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \right ) \end{align} The last term on the right-hand side of \eref{eq:lemm2_eq_5} is essentially a recursion. Unfolding this recursion yields \begin{equation} \left | \alphaPiS{\ensuremath{s}\xspace} - \alphaPiHatS{\ensuremath{s}\xspace} \right | \leq 2\gamma\frac{R_{m}}{(1-\gamma)^2} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} \end{equation} which is Lemma \ref{lem:lem0}. $\square$ \subsection{Proof of \lref{lem:lemApprox}}\label{ssec:proofLem4} We can write the absolute difference between the SNM\xspace-values conditioned on two states $\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2 \in \ensuremath{S}\xspace_i$ as \begin{align} &&&\left | \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_1) - \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_2) \right | \nonumber \\ &&=& \left | \sup_{\ensuremath{a}\xspace \in \ensuremath{A}\xspace} D_{TV}(\ensuremath{T}\xspace_1, \ensuremath{\widehat{T}}\xspace_1) - \sup_{\ensuremath{a}\xspace \in \ensuremath{A}\xspace} D_{TV}(\ensuremath{T}\xspace_2, \ensuremath{\widehat{T}}\xspace_2) \right | \nonumber \\ &&=&\left |\frac{1}{2} \sup_{\ensuremath{a}\xspace \in 
\ensuremath{A}\xspace} \sup_{\left |f \right | \leq 1} \left | \ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace) \left [\ensuremath{T}\xspace_1 - \ensuremath{\widehat{T}}\xspace_1 \right ] \ensuremath{d}\xspace\ensuremath{s'}\xspace \right| \right. \nonumber \\ &&&\left. - \frac{1}{2} \sup_{\ensuremath{a}\xspace \in \ensuremath{A}\xspace} \sup_{\left |f \right | \leq 1}\left |\ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace) \left [\ensuremath{T}\xspace_2 - \ensuremath{\widehat{T}}\xspace_2 \right ]\ensuremath{d}\xspace\ensuremath{s'}\xspace\right | \right | \end{align} Manipulating the algebra allows us to write \begin{align}\label{e:lem3_eq2} &&&\left | \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_1) - \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_2) \right | \nonumber \\ &&\leq& \frac{1}{2}\sup_{\ensuremath{a}\xspace\in\ensuremath{A}\xspace}\left | \sup_{\left |f \right | \leq 1} \left ( \ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace) \left [\ensuremath{T}\xspace_1 - \ensuremath{T}\xspace_2 \right ] \ensuremath{d}\xspace\ensuremath{s'}\xspace \right. \right.\nonumber \\ &&&\left.\left. + \ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace)\left [\ensuremath{\widehat{T}}\xspace_1 - \ensuremath{\widehat{T}}\xspace_2 \right] \ensuremath{d}\xspace\ensuremath{s'}\xspace \right ) \right |\nonumber \\ &&\leq& \frac{1}{2}\sup_{\ensuremath{a}\xspace \in \ensuremath{A}\xspace} \left (\sup_{\left |f \right | \leq 1}\ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace) \left |\ensuremath{T}\xspace_1 - \ensuremath{T}\xspace_2 \right | \ensuremath{d}\xspace\ensuremath{s'}\xspace \right. \nonumber \\ &&&\left. 
+ \sup_{\left |f \right | \leq 1} \ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace) \left |\ensuremath{\widehat{T}}\xspace_1 - \ensuremath{\widehat{T}}\xspace_2 \right |\ensuremath{d}\xspace\ensuremath{s'}\xspace\right ) \nonumber \\ &&\leq& \frac{1}{2}D_{S}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2)\left [C_{\ensuremath{T}\xspace_i} + C_{\ensuremath{\widehat{T}}\xspace_i} \right ] \end{align} For the last inequality we bound the terms $\left | \ensuremath{T}\xspace_1 - \ensuremath{T}\xspace_2 \right |$ and $\left |\ensuremath{\widehat{T}}\xspace_1 - \ensuremath{\widehat{T}}\xspace_2 \right |$ using \dref{d:partition}. Furthermore, we use the fact that $\sup_{\left |f \right | \leq 1} \ensuremath{\int_{s' \in S}}\xspace f(\ensuremath{s'}\xspace) \ensuremath{d}\xspace\ensuremath{s'}\xspace = 1$, assuming that the state space $\ensuremath{S}\xspace$ is normalized. This concludes the proof of \lref{lem:lemApprox}. $\square$ \section{Introduction} \label{section:introduction} An autonomous robot must be able to compute reliable motion strategies, despite various errors in actuation and in predicting their effects on the robot and its environment, and despite various errors in sensors and sensing. Computing such robust strategies is computationally hard even for a 3-DOFs point robot\ccite{Can87:New},\ccite{Nat88:Complexity}. Conceptually, this problem can be solved in a systematic and principled manner when framed as the Partially Observable Markov Decision Process (POMDP)\ccite{Kae98:Planning}. A POMDP represents the aforementioned errors as probability distribution functions and estimates the state of the system as probability distribution functions called \emph{beliefs}. It then computes the best motion strategy with respect to beliefs rather than single states, thereby accounting for the fact that the actual state is never known due to the above errors.
Although the concept of POMDPs was proposed in the '60s\ccite{Son71:The}, it is only recently that POMDPs have started to become practical for robotics problems (e.g.,\ccite{Hoe19:POMDP,Hor13:Interactive,Tem09:Unmanned}). This advancement is achieved by trading optimality for approximate optimality in exchange for speed and memory. But even then, in general, computing close-to-optimal POMDP solutions for systems with complex dynamics remains difficult. Several general POMDP solvers ---solvers that do not restrict the type of dynamics and sensing model of the system, nor the type of distributions used to represent uncertainty--- can now compute good motion strategies on-line with a 1--10Hz update rate for a number of robotic problems\ccite{Kur13:An,Sil10:Monte,Som13:Despot,Sei15:An}. However, their speed degrades when the robot has complex non-linear dynamics. To compute a good strategy, today's POMDP solvers forward-simulate the effect of many sequences of actions from different beliefs. For problems whose dynamics have no closed-form solutions, a simulation run generally invokes many numerical integrations, and complex dynamics tend to increase the cost of each numerical integration, which in turn significantly increases the total planning cost of these methods. Of course, this cost increases even more for problems that require more or longer simulation runs, such as problems with long planning horizons. Many linearization-based POMDP solvers have been proposed\ccite{Sun15:High,Agh13:Firm,Ber10:LQGMP,Ber12:LQG,Pre10:The}. They rely on many forward simulations from different beliefs too, but use a linearized model of the dynamics and sensing for simulation. Together with linearization, many of these methods assume that beliefs are Gaussian distributions. This assumption improves the speed of simulation further, because the subsequent belief after an action is performed and an observation is perceived can be computed in closed form.
In contrast, the aforementioned general solvers typically represent beliefs as sets of particles and estimate subsequent beliefs using particle filters. Particle filters are particularly expensive when particle trajectories have to be simulated and each simulation run is costly, as is the case for motion planning of systems with complex dynamics. As a result, the linearization-based planners require less time to estimate the effect of performing a sequence of actions from a belief, and therefore can \emph{potentially} find a good strategy faster than the general methods. However, it is known that linearization in control and estimation performs well only when the system's non-linearity is ``weak''\ccite{Li12:Measure}. The question is, what constitutes ``weak'' non-linearity in motion planning under uncertainty? Where will it be useful and where will it be damaging to use linearization (and Gaussian) simplifications? This paper extends our previous work\ccite{Hoe16:Linearization} towards answering the aforementioned questions. Specifically, we propose a measure of non-linearity for stochastic systems, called \emph{Statistical-distance-based Non-linearity Measure\xspace (SNM\xspace)}, to help identify the suitability of linearization in a given problem of motion planning under uncertainty. SNM\xspace is based on the total variation distance between the original dynamics and sensing models and their corresponding linearized models. It is general enough to be applied to any type of motion and sensing errors, and any linearization technique, regardless of the type of approximation of the true beliefs (e.g., with and without Gaussian simplification). We show that the difference between the value of the optimal strategy generated when planning with the original model and that generated when planning with the linearized model can be upper-bounded by a function linear in SNM\xspace.
Furthermore, our experimental results indicate that, compared to recent state-of-the-art non-linearity measures for stochastic systems, SNM\xspace is more sensitive to the effect that obstacles have on the effectiveness of linearization, which is critical for motion planning. To further test the applicability of SNM\xspace in motion planning, we develop a simple on-line planner that uses a local estimate of SNM\xspace to automatically switch between a general planner\ccite{Kur13:An} that uses the original POMDP model and a linearization-based planner (adapted from\ccite{Sun15:High}) that uses the linearized model. Experimental results on a car-like robot with acceleration control, and on 4-DOFs and 6-DOFs manipulators with torque control, indicate that this simple planner can appropriately decide if and when linearization should be used and therefore computes better strategies faster than each of the component planners. \section{Acknowledgements} This work is partially funded by ANU Futures Scheme QCE20102. The early part of this work is funded by a UQ and CSIRO scholarship for Marcus Hoerger. \bibliographystyle{ieeetr} \section{SNM}\label{sec:SNM} Intuitively, our proposed measure SNM\xspace is based on the total variation distance between the effect of performing an action and perceiving an observation under the true dynamics and sensing model, and the effect under the linearized dynamics and sensing model. The total variation distance $D_{TV}$ between two probability measures $\mu$ and $\nu$ over a measurable space $\Omega$ is defined as $D_{TV}(\mu, \nu) = \sup_{E \subseteq \Omega} \left |\mu(E) - \nu(E) \right |$, where the supremum ranges over the measurable subsets of $\Omega$. An alternative expression of $D_{TV}$, which we use throughout the paper, is the functional form $D_{TV}(\mu, \nu) = \frac{1}{2}\sup_{\left |f \right | \leq 1}\left |\int f\ensuremath{d}\xspace\mu - \int f\ensuremath{d}\xspace\nu \right |$.
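For discrete distributions, the supremum in the functional form is attained by $f = \mathrm{sign}(\mu - \nu)$, so $D_{TV}$ reduces to half the $L_1$ distance between the probability vectors. A minimal Python sketch (illustrative only, not part of the formal development):

```python
def tv_distance(mu, nu):
    """Total variation distance between two discrete distributions,
    given as probability vectors over a common finite outcome set.
    The supremum over |f| <= 1 is attained by f = sign(mu - nu),
    which reduces D_TV to half the L1 distance."""
    return 0.5 * sum(abs(p - q) for p, q in zip(mu, nu))

print(tv_distance([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]))  # 0.3 up to float rounding
```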
Formally, SNM\xspace is defined as: \begin{definition} \label{def:mon} Let $\ensuremath{P}\xspace = \ensuremath{\langle S, A, O, T, Z, R, \ensuremath{b_0}\xspace, \gamma \rangle}\xspace$ be the POMDP model of the system and $\ensuremath{\widehat{P}}\xspace = \ensuremath{\langle S, A, O, \widehat{T}, \widehat{Z}, R, \ensuremath{b_0}\xspace, \gamma \rangle}\xspace$ be a linearization of \ensuremath{P}\xspace, where \ensuremath{\widehat{T}}\xspace is a linearization of the transition function \ensuremath{T}\xspace and \ensuremath{\widehat{Z}}\xspace is a linearization of the observation function \ensuremath{Z}\xspace of \ensuremath{P}\xspace, while all other components of \ensuremath{P}\xspace and \ensuremath{\widehat{P}}\xspace are the same. Then, the SNM\xspace (denoted as \ensuremath{\Psi}\xspace) between \ensuremath{P}\xspace and \ensuremath{\widehat{P}}\xspace is $\nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} = \nmT{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} + \nmZ{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace}$, where \begin{align} \nmT{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} &= \sup_{s \in S, a \in A} D_{TV}(\ensuremath{T(s, a, s')}\xspace, \ensuremath{\widehat{T}(s, a, s')}\xspace) \\ \nmZ{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} &= \sup_{s \in S, a \in A} D_{TV}(\ensuremath{Z(s, a, o)}\xspace, \ensuremath{\widehat{Z}(s, a, o)}\xspace) \end{align} \end{definition} Note that SNM\xspace can be applied as both a global and a local measure. In the latter case, the supremum over the state $s$ can be restricted to a subset of \ensuremath{S}\xspace, rather than the entire state space. Furthermore, SNM\xspace is general enough for any approximation to the true dynamics and sensing model, which means that it can be applied to any type of linearization and belief approximation techniques, including those that assume and those that do not assume Gaussian belief simplifications. 
We want to use the measure \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} to bound the difference between the expected total reward received if the system were to run the optimal policy of the true model \ensuremath{P}\xspace and if it were to run the optimal policy of the linearized model \ensuremath{\widehat{P}}\xspace. Note that since our interest is in the actual reward received, the values of these policies are evaluated with respect to the original model \ensuremath{P}\xspace (we assume \ensuremath{P}\xspace is a faithful model of the system). More precisely, we want to show that: \begin{theorem}\label{th:valUpperBound} If \ensuremath{\pi^*}\xspace denotes the optimal policy for \ensuremath{P}\xspace and \ensuremath{\widehat{\pi}^*}\xspace denotes the optimal policy for \ensuremath{\widehat{P}}\xspace, then for any $\ensuremath{b}\xspace\in\mathbb{B}$, \begin{align} &V_{\ensuremath{\pi^*}\xspace}(\ensuremath{b}\xspace) - V_{\ensuremath{\widehat{\pi}^*}\xspace}(\ensuremath{b}\xspace) \leq 4\gamma\frac{R_{m}}{(1-\gamma)^2} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} \nonumber \end{align} where $R_{m} = \max\{\left |R_{min} \right |, R_{max}\}$, \newline $V_{\pi}(b) = R(b, \pi(b)) + \gamma \int_{o \in O}Z(b, \pi(b), o)V_{\pi}(\tau(b, \pi(b), o)) \ensuremath{d}\xspace\ensuremath{o}\xspace$ for any policy $\pi$, and $\tau(b, a, o)$ is the belief transition function as defined in \eref{e:belTrans}. \end{theorem} To prove \thref{th:valUpperBound}, we first assume, without loss of generality, that a policy $\pi$ for a belief \ensuremath{b}\xspace is represented by a conditional plan $\sigma\in\Gamma$, where $\Gamma$ is the set of all conditional plans. $\sigma$ can be specified by a pair $\left \langle \ensuremath{a}\xspace, \nu \right \rangle$, where $\ensuremath{a}\xspace\in\ensuremath{A}\xspace$ is the action of $\sigma$ and $\nu: \ensuremath{O}\xspace \rightarrow \Gamma$ is an observation strategy which maps an observation to a conditional plan $\sigma'\in\Gamma$.
Every $\sigma$ corresponds to an $\alpha$-function $\alpha_{\sigma}: \ensuremath{S}\xspace \rightarrow \mathbb{R}$ which specifies the expected total discounted reward the robot receives when executing $\sigma$ starting from $\ensuremath{s}\xspace\in\ensuremath{S}\xspace$, i.e. \begin{align}\label{eq:alpha_s} &\alpha_{\sigma}(s) = \rewFuncComp{\ensuremath{s}\xspace}{\ensuremath{a}\xspace}\nonumber\\ &+ \gamma \ensuremath{\int_{s' \in S}}\xspace \ensuremath{\int_{o \in O}}\xspace T(\ensuremath{s}\xspace, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) Z(\ensuremath{s'}\xspace, \ensuremath{a}\xspace, \ensuremath{o}\xspace)\alpha_{\nu(\ensuremath{o}\xspace)}(\ensuremath{s'}\xspace) \ensuremath{d}\xspace\ensuremath{o}\xspace \ensuremath{d}\xspace\ensuremath{s'}\xspace \end{align} where $\ensuremath{a}\xspace\in\ensuremath{A}\xspace$ is the action of $\sigma$ and $\alpha_{\nu(\ensuremath{o}\xspace)}$ is the $\alpha$-function corresponding to conditional plan $\nu(\ensuremath{o}\xspace)$. For a given belief \ensuremath{b}\xspace, the value of the policy $\pi$ represented by the conditional plan $\sigma$ is then $V_{\pi}(\ensuremath{b}\xspace) = \int_{\ensuremath{s}\xspace\in\ensuremath{S}\xspace} \ensuremath{b}\xspace(\ensuremath{s}\xspace)\alpha_{\sigma}(\ensuremath{s}\xspace)\ensuremath{d}\xspace\ensuremath{s}\xspace$. Note that \eref{eq:alpha_s} is defined with respect to POMDP \ensuremath{P}\xspace. Analogously we define the linearized $\alpha$-function $\widehat{\alpha}_{\sigma}$ with respect to the linearized POMDP \ensuremath{\widehat{P}}\xspace by replacing the transition and observation functions in \eref{eq:alpha_s} with their linearized versions. 
Now, suppose that for a given belief \ensuremath{b}\xspace, $\sigma^* = \argsup_{\sigma\in\Gamma} \int_{\ensuremath{s}\xspace\in\ensuremath{S}\xspace}\ensuremath{b}\xspace(\ensuremath{s}\xspace)\alpha_{\sigma}(\ensuremath{s}\xspace)\ensuremath{d}\xspace\ensuremath{s}\xspace$ and $\widehat{\sigma}^* = \argsup_{\sigma\in\Gamma}\int_{\ensuremath{s}\xspace\in\ensuremath{S}\xspace}\ensuremath{b}\xspace(\ensuremath{s}\xspace)\widehat{\alpha}_{\sigma}(\ensuremath{s}\xspace)\ensuremath{d}\xspace\ensuremath{s}\xspace$. $\sigma^*$ and $\widehat{\sigma}^*$ represent the policies $\pi^*$ and $\widehat{\pi}^*$ that are optimal at \ensuremath{b}\xspace for POMDP \ensuremath{P}\xspace and \ensuremath{\widehat{P}}\xspace respectively. For any $\ensuremath{s}\xspace\in\ensuremath{S}\xspace$ we have that $\alphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \geq \linAlphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} - \left |\alphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \right |$ and $\linAlphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} \geq \alphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} - \left | \alphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace}\right |$. 
Therefore \begin{align}\label{eq:geq_1} \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \alphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace \geq &\int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \linAlphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace\nonumber \\ & - \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \left | \alphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \right | \ensuremath{d}\xspace\ensuremath{s}\xspace \end{align} and \begin{align}\label{eq:geq_2} \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \linAlphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace \geq &\int_{s \in S}\xspace \belS{\ensuremath{s}\xspace}\alphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace\nonumber \\&- \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace}\left |\alphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} \right |\ensuremath{d}\xspace\ensuremath{s}\xspace \end{align} Since $\widehat{\sigma}^*$ is the optimal conditional plan for POMDP \ensuremath{\widehat{P}}\xspace at \ensuremath{b}\xspace, we also know that \begin{equation}\label{eq:geq_3} \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \linAlphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace \geq \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \linAlphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace \end{equation} From \eref{eq:geq_1}, \eref{eq:geq_2} and \eref{eq:geq_3} it immediately follows that \begin{alignat}{2}\label{eq:geq_4} \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \alphaFunctPolComp{\widehat{\sigma}^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace \geq & 
&&\int_{s \in S}\xspace \belS{\ensuremath{s}\xspace} \alphaFunctPolComp{\sigma^*}{\ensuremath{s}\xspace} \ensuremath{d}\xspace\ensuremath{s}\xspace \nonumber \\ & && - 2 \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace}\sup_{\sigma\in\Gamma}\left |\alphaFunctPolComp{\sigma}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\sigma}{\ensuremath{s}\xspace} \right |\ensuremath{d}\xspace\ensuremath{s}\xspace \nonumber \\ V_{\widehat{\pi}^*}(b) \geq& && V_{\pi^*}(b) \nonumber \\ & &&- 2 \int_{s \in S}\xspace \belS{\ensuremath{s}\xspace}\sup_{\sigma\in\Gamma}\left |\alphaFunctPolComp{\sigma}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\sigma}{\ensuremath{s}\xspace} \right | \ensuremath{d}\xspace\ensuremath{s}\xspace \end{alignat} Before we continue, we first have to show the following Lemma: \begin{lemma}\label{lem:lem0} Let $R_m = \max\{\left |R_{min} \right |, R_{max}\}$, where $R_{min} = \min_{s,a} R(s, a)$ and $R_{max} = \max_{s,a} R(s, a)$. For any conditional plan $\sigma\in\Gamma$ and any $\ensuremath{s}\xspace \in \ensuremath{S}\xspace$, the absolute difference between the original and linearized $\alpha$-functions is upper bounded by \begin{align} \left | \alphaFunctPolComp{\sigma}{\ensuremath{s}\xspace} - \linAlphaFunctPolComp{\sigma}{\ensuremath{s}\xspace} \right | \leq 2\gamma\frac{R_{m}}{(1-\gamma)^2} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace}\nonumber \end{align} \end{lemma} The proof of \lref{lem:lem0} is presented in the Appendix~\ref{ssec:lemma_0_proof}. Using the result of \lref{lem:lem0}, we can now conclude the proof for \thref{th:valUpperBound}. Substituting the upper bound derived in \lref{lem:lem0} into the right-hand side of \eref{eq:geq_4} and re-arranging the terms gives us \begin{equation} V_{\pi^*}(\ensuremath{b}\xspace) - V_{\widehat{\pi}^*}(\ensuremath{b}\xspace) \leq 4\gamma\frac{R_{m}}{(1-\gamma)^2} \nm{\ensuremath{P}\xspace}{\ensuremath{\widehat{P}}\xspace} \end{equation} which is what we are looking for. 
$\square$ \section{Approximating SNM\xspace}\label{ssec:monApprox} Now, the question is how we can compute SNM\xspace sufficiently fast so that this measure can be used as a heuristic during on-line planning to decide when a linearization-based solver will likely yield a good policy and when a general solver should be used. Unfortunately, such a computation is often infeasible when the planning time per step is limited. Therefore, we approximate SNM\xspace off-line and re-use the results during run-time. Here we discuss how to approximate the transition component \ensuremath{\Psi_T}\xspace of SNM\xspace; the same method, however, applies to the observation component \ensuremath{\Psi_Z}\xspace. Let us first rewrite the transition component \ensuremath{\Psi_T}\xspace as \begin{align}\label{e:snmTransCompRe} \ensuremath{\Psi_T}\xspace &= \sup_{\ensuremath{s}\xspace \in \ensuremath{S}\xspace} \ensuremath{\Psi_T}\xspace(s)\nonumber \\&= \sup_{\ensuremath{s}\xspace\in\ensuremath{S}\xspace}\sup_{\ensuremath{a}\xspace\in\ensuremath{A}\xspace}D_{TV}(\ensuremath{T(s, a, s')}\xspace, \ensuremath{\widehat{T}(s, a, s')}\xspace) \end{align} where $\ensuremath{\Psi_T}\xspace(s)$ is the transition component of SNM\xspace, given a particular state. To approximate \ensuremath{\Psi_T}\xspace, we replace \ensuremath{S}\xspace in \eref{e:snmTransCompRe} by a sampled representation of \ensuremath{S}\xspace, which we denote as $\tilde{\ensuremath{S}\xspace}$. The value $\ensuremath{\Psi_T}\xspace(s)$ is then evaluated for each $\ensuremath{s}\xspace\in\tilde{\ensuremath{S}\xspace}$ off-line, and the results are saved in a lookup-table. This lookup-table can then be used during run-time to get a local approximation of \ensuremath{\Psi_T}\xspace around the current belief. The first question that arises is, how do we efficiently sample the state space? A naive approach would be to employ a simple uniform sampling strategy.
However, for large state spaces this is often wasteful, because for motion planning problems, large portions of the state space are often irrelevant since they either cannot be reached from the initial belief or are unlikely to be traversed by the robot during run-time. A better strategy is to consider only the subset of the state space that is reachable from the support set of the initial belief under any policy, denoted as $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$. To sample from $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$, we use a simple but effective method: Assuming deterministic dynamics, we solve the motion planning problem off-line using kinodynamic RRTs and use the nodes in the RRT-trees as a sampled representation of $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$. In principle, any deterministic sampling-based motion planner can be used to generate samples from $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$; in our case, however, RRT is particularly suitable due to its space-filling property\ccite{kuffner2011space}. Note that RRT generates states according to a deterministic transition function only. If required, one could also generate additional samples according to the actual stochastic transition function of the robot. However, in our experiments the state samples generated by RRT were sufficient. The second difficulty in approximating $\ensuremath{\Psi_T}\xspace(s)$ is the computation of the supremum over the action space. Similar to restricting the approximation to a discrete set of states reachable from the initial belief, we can impose a discretization on the action space, which leaves us with a maximization over discrete actions, denoted as $\tilde{\ensuremath{A}\xspace}$.
Using the set $\tilde{A}$, we approximate \eref{e:snmTransCompRe} for each state in $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ ---the sampled set of $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$--- as follows: Given a particular state $\ensuremath{s}\xspace \in \tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ and action $\ensuremath{a}\xspace \in \tilde{\ensuremath{A}\xspace}$, we draw $n$ samples from the original and linearized transition function and construct a multidimensional histogram from both sample sets. In other words, we discretize the distributions that follow from the original and linearized transition function, given a particular state and action. Suppose the histogram consists of $k$ bins. The value $\ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace, \ensuremath{a}\xspace)$ is then approximated as \begin{equation}\label{eq:smmTransCompStAct} \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace, \ensuremath{a}\xspace) \approx \frac{1}{2} \sum_{i = 1}^{k} \left |p_i - \widehat{p}_i \right | \end{equation} where $p_i = \frac{n_i}{\sum_{j=1}^k n_j}$ and $n_i$ is the number of states inside bin $i$ sampled from the original transition function, while $\widehat{p}_i = \frac{\widehat{n}_i}{\sum_{j=1}^k \widehat{n}_j}$ and $\widehat{n}_i$ is the number of states inside bin $i$ sampled from the linearized transition function. The right-hand side of \eref{eq:smmTransCompStAct} is simply the definition of the total variation distance between two discrete distributions. By repeating the above process for each action in $\tilde{A}$ and taking the maximum, we end up with an approximation of $\ensuremath{\Psi_T}\xspace(s)$. This procedure is repeated for every state in the set $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$. As a result we get a lookup-table, assigning each state in $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ an approximated value of $\ensuremath{\Psi_T}\xspace(s)$. 
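The histogram-based approximation of $\ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace, \ensuremath{a}\xspace)$ described above can be sketched as follows (illustrative Python, assuming a one-dimensional state space normalized to $[0,1]$; \texttt{sample\_T} and \texttt{sample\_T\_hat} are placeholder callables that draw a next state from the original and the linearized transition model, respectively, and are not part of our implementation):

```python
import random
from collections import Counter

def approx_psi_T(sample_T, sample_T_hat, s, a, n=10000, k=50):
    """Approximate Psi_T(s, a): draw n next states from the original and
    the linearized transition model, histogram both sample sets into k
    bins over the normalized state space [0, 1], and return half the L1
    distance between the bin frequencies (the total variation distance
    between the two discrete distributions)."""
    def hist(sampler):
        counts = Counter()
        for _ in range(n):
            s_next = sampler(s, a)
            counts[min(k - 1, max(0, int(s_next * k)))] += 1
        return counts
    p, p_hat = hist(sample_T), hist(sample_T_hat)
    return 0.5 * sum(abs(p[i] - p_hat[i]) / n for i in range(k))

# Toy example: a mildly non-linear model vs. its linearization about a = 0
true_model = lambda s, a: s + 0.1 * a + 0.05 * a ** 2 + random.gauss(0, 0.02)
lin_model = lambda s, a: s + 0.1 * a + random.gauss(0, 0.02)
psi = approx_psi_T(true_model, lin_model, s=0.5, a=1.0)
```

Repeating this for every $\ensuremath{a}\xspace \in \tilde{\ensuremath{A}\xspace}$ and taking the maximum yields the approximation of $\ensuremath{\Psi_T}\xspace(s)$.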
During planning, we can use the lookup-table and a sampled representation of a belief \ensuremath{b}\xspace to approximate SNM\xspace at \ensuremath{b}\xspace. Suppose $\tilde{\ensuremath{b}\xspace}$ is the sampled representation of \ensuremath{b}\xspace (e.g., a particle set), then for each state $s \in \tilde{\ensuremath{b}\xspace}$, we take the state $\ensuremath{s}\xspace_{near} \in \tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ that is nearest to $s$, and assign $\ensuremath{\Psi_T}\xspace(s) = \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_{near})$. The maximum SNM value $\max_{s \in \tilde{\ensuremath{b}\xspace}} \ensuremath{\Psi_T}\xspace(s)$ gives us an approximation of the transition component of SNM\xspace with respect to the belief \ensuremath{b}\xspace. Clearly this approximation method assumes that states that are close together should yield similar values for SNM\xspace. At first glance this is a very strong assumption. In the vicinity of obstacles or constraints, states that are close together could potentially yield very different SNM\xspace values. However, we will now show that under mild assumptions, pairs of states that are elements within certain subsets of the state space indeed yield similar SNM\xspace values. Consider a partitioning of the state space into a finite number of local-Lipschitz subsets $S_i$ that are defined as follows: \begin{definition}\label{d:partition} Let \ensuremath{S}\xspace be a metric space with distance metric $D_{\ensuremath{S}\xspace}$. 
$\ensuremath{S}\xspace_i$ is called a local-Lipschitz subset of $\ensuremath{S}\xspace$ if for any $\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2 \in \ensuremath{S}\xspace_i$, any $\ensuremath{s'}\xspace \in \ensuremath{S}\xspace$ and any $\ensuremath{a}\xspace \in \ensuremath{A}\xspace: \left |\ensuremath{T}\xspace(\ensuremath{s}\xspace_1, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) - \ensuremath{T}\xspace(\ensuremath{s}\xspace_2, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) \right | \leq C_{\ensuremath{T}\xspace_i}D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2)$ and $\left |\ensuremath{\widehat{T}}\xspace(\ensuremath{s}\xspace_1, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) - \ensuremath{\widehat{T}}\xspace(\ensuremath{s}\xspace_2, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) \right | \leq C_{\ensuremath{\widehat{T}}\xspace_i}D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2)$, where $C_{\ensuremath{T}\xspace_i} \geq 0$ and $C_{\ensuremath{\widehat{T}}\xspace_i} \geq 0$ are finite local-Lipschitz constants. \end{definition} In other words, $\ensuremath{S}\xspace_i$ are subsets of $\ensuremath{S}\xspace$ in which the original and linearized transition functions are Lipschitz continuous with Lipschitz constants $C_{\ensuremath{T}\xspace_i}$ and $C_{\ensuremath{\widehat{T}}\xspace_i}$. With this definition at hand, we can now show the following lemma: \begin{lemma}\label{lem:lemApprox} Let \ensuremath{S}\xspace be an $n$-dimensional metric space with distance metric $D_{\ensuremath{S}\xspace}$ and assume \ensuremath{S}\xspace is normalized to $\left [0, 1 \right ]^n$.
Furthermore, let $\ensuremath{S}\xspace_i$ be a local-Lipschitz subset of \ensuremath{S}\xspace, then \begin{equation} \left | \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_1) - \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_2) \right | \leq \frac{1}{2} \sqrt{n} D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2) \left [C_{\ensuremath{T}\xspace_i} + C_{\ensuremath{\widehat{T}}\xspace_i} \right ] \nonumber \end{equation} for any $\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2 \in S_i$. \label{l:subsetLipschitz} \end{lemma} The proof of this lemma is presented in \appref{ssec:proofLem4}. This lemma indicates that the difference between the SNM\xspace values for two states from the same local-Lipschitz subset $\ensuremath{S}\xspace_i$ depends only on the distance $D_{\ensuremath{S}\xspace}$ between them, since $C_{\ensuremath{T}\xspace_i}$ and $C_{\ensuremath{\widehat{T}}\xspace_i}$ are constant for each subset $\ensuremath{S}\xspace_i$. Thus, as the distance between two states converges towards zero, the SNM\xspace value difference converges towards zero as well. This implies that we can approximate SNM\xspace for a sparse, sampled representation of $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$ and re-use these approximations on-line with a small error, without requiring an explicit representation of the $\ensuremath{S}\xspace_i$ subsets. \section{SNM-Planner: An Application of SNM\xspace for Planning} \label{sec:method} SNM-Planner\xspace is an on-line planner that uses SNM\xspace as a heuristic to decide whether a general or a linearization-based POMDP solver should be used to compute the policy from the current belief. The general solver used is Adaptive Belief Tree (ABT)\ccite{Kur13:An}, while the linearization-based solver is Modified High-Frequency Replanning (MHFR), an adaptation of HFR\ccite{Sun15:High}.
HFR is designed for chance-constrained POMDPs, i.e., it explicitly minimizes the collision probability, while MHFR is a POMDP solver where the objective is to maximize the expected total reward. An overview of SNM-Planner\xspace is shown in \aref{alg:smnd}. During run-time, at each planning step, SNM-Planner\xspace computes a local approximation of SNM\xspace around the current belief $\ensuremath{b}\xspace_i$ (line 5). If this value is smaller than a given threshold, SNM-Planner\xspace uses MHFR to compute a policy from the current belief, whereas ABT is used when the value exceeds the threshold (lines 8-12). The robot then executes an action according to the computed policy (line 13) and receives an observation (line 14). Based on the executed action and perceived observation, we update the belief (line 15). SNM-Planner\xspace represents beliefs as sets of particles and updates the belief using a SIR particle filter\ccite{arulampalam2002tutorial}. Note that MHFR assumes that beliefs are multivariate Gaussian distributions. Therefore, in case MHFR is used for the policy computation, we compute the first two moments (mean and covariance) of the particle set to obtain a multivariate Gaussian approximation of the current belief. The process then repeats from the updated belief until the robot has entered a terminal state (we assume that we know when the robot enters a terminal state) or until a maximum number of steps is reached. In the following two subsections we provide a brief overview of the two component planners ABT and MHFR. \begin{algorithm} \caption{SNM-Planner\xspace (initial belief \ensuremath{b_0}\xspace, SNM\xspace threshold $\mu$, max. planning time per step $t$, max.
number of steps $N$)}\label{alg:smnd} \begin{algorithmic}[1] \State InitializeABT(\ensuremath{P}\xspace) \State InitializeMHFR(\ensuremath{P}\xspace) \State $i=0$, $\ensuremath{b}\xspace_i = \ensuremath{b_0}\xspace$, terminal = False \While{terminal is False and $i < N$} \State $\widehat{\Psi} =\ $approximateSNM($\ensuremath{b}\xspace_i$) \State $t_p = t - t_a$ \Comment $t_{a}$ is the time the algorithm takes to approximate SNM\xspace \If{$\widehat{\Psi} < \mu$} \State $a =\ $MHFR($\ensuremath{b}\xspace_i$, $t_p$) \Else \State $a =\ $ABT($\ensuremath{b}\xspace_i$, $t_p$) \EndIf \State terminal = executeAction($a$) \State $o =\ $getObservation() \State $b_{i+1} = \tau(\ensuremath{b}\xspace_i, a, o)$ \State $i = i + 1$ \EndWhile \end{algorithmic} \end{algorithm} \subsection{Adaptive Belief Tree (ABT)}\label{ssec:ABT} ABT is a general and anytime on-line POMDP solver based on Monte Carlo Tree Search (MCTS). ABT updates (rather than recomputes) its policy at each planning step. To update the policy for the current belief, ABT iteratively constructs and maintains a belief tree, a tree whose nodes are beliefs and whose edges are pairs of actions and observations. ABT evaluates sequences of actions by sampling episodes, that is, sequences of state--action--observation--reward tuples, starting from the current belief. Details of ABT can be found in\ccite{Kur13:An}. \subsection{Modified High-Frequency Replanning (MHFR)}\label{ssec:MHFR} The main difference between HFR and MHFR is that HFR is designed for chance-constrained POMDPs, i.e., it explicitly minimizes the collision probability, while MHFR is a POMDP solver whose objective is to maximize the expected total reward. Similar to HFR, MHFR approximates the current belief by a multivariate Gaussian distribution. To compute the policy from the current belief, MHFR samples a set of trajectories from the mean of the current belief to a goal state using multiple instances of RRTs\ccite{kuffner2011space} in parallel.
It then computes the expected total discounted reward of each trajectory by tracking the beliefs around the trajectory using a Kalman Filter, assuming maximum-likelihood observations. The policy then becomes the first action of the trajectory with the highest expected total discounted reward. After executing the action and perceiving an observation, MHFR updates the belief using an Extended Kalman Filter. The process then repeats from the updated belief. To increase efficiency, MHFR additionally adjusts the previous trajectory with the highest expected total discounted reward to start from the mean of the updated belief and adds this trajectory to the set of sampled trajectories. More details on HFR and precise derivations of the method are available in\ccite{Sun15:High}. \section{Background and Related Work}\label{sec:relWork} \subsection{Background} In this paper, we consider motion planning problems, in which a robot must move from a given initial state to a state in the goal region while avoiding obstacles. The robot operates inside deterministic, bounded, and perfectly known 2D or 3D environments populated by static obstacles. The robot's transition and observation models are uncertain and defined as follows. Let $\ensuremath{S}\xspace \subset \mathbb{R}^n$ be the bounded $n$-dimensional state space, $\ensuremath{A}\xspace \subset \mathbb{R}^d$ the bounded $d$-dimensional control space, and $\ensuremath{O}\xspace \subset \mathbb{R}^l$ the bounded $l$-dimensional observation space of the robot. The state of the robot evolves according to a discrete-time non-linear function, which we model in the general form $\ensuremath{s}\xspace_{t+1} = f(\ensuremath{s}\xspace_t, \ensuremath{a}\xspace_t, v_t)$ where $\ensuremath{s}\xspace_t \in \ensuremath{S}\xspace$ is the state of the robot at time $t$, $\ensuremath{a}\xspace_t \in \ensuremath{A}\xspace$ is the control input at time $t$, and $v_t \in \mathbb{R}^d$ is a random transition error.
At each time step $t$, the robot perceives imperfect information regarding its current state according to a non-linear stochastic function of the form $\ensuremath{o}\xspace_t = h(\ensuremath{s}\xspace_t, w_t)$, where $\ensuremath{o}\xspace_t \in \ensuremath{O}\xspace$ is the observation at time $t$ and $w_t \in \mathbb{R}^l$ is a random observation error. This class of motion planning problems under uncertainty can naturally be formulated as a Partially Observable Markov Decision Process (POMDP). Formally, a POMDP is a tuple \ensuremath{\langle S, A, O, T, Z, R, \ensuremath{b_0}\xspace, \gamma \rangle}\xspace, where \ensuremath{S}\xspace, \ensuremath{A}\xspace and \ensuremath{O}\xspace are the state, action, and observation spaces of the robot. $T$ is a conditional probability function $T(\ensuremath{s}\xspace, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) = p(\ensuremath{s'}\xspace \,|\, \ensuremath{s}\xspace, \ensuremath{a}\xspace)$ (where $\ensuremath{s}\xspace, \ensuremath{s'}\xspace \in \ensuremath{S}\xspace$ and $\ensuremath{a}\xspace \in \ensuremath{A}\xspace$) that models the uncertainty in the effect of performing actions, while $Z(\ensuremath{s}\xspace, \ensuremath{a}\xspace, \ensuremath{o}\xspace) = p(\ensuremath{o}\xspace | \ensuremath{s}\xspace, \ensuremath{a}\xspace)$ (where $\ensuremath{o}\xspace\in\ensuremath{O}\xspace$) is a conditional probability function that models the uncertainty in perceiving observations. $R(\ensuremath{s}\xspace, \ensuremath{a}\xspace)$ is a reward function, which encodes the planning objective. \ensuremath{b_0}\xspace is the initial belief, capturing the uncertainty in the robot's initial state, and $\gamma \in (0, 1)$ is a discount factor. At each time-step, a POMDP agent is at a state $s \in \ensuremath{S}\xspace$, takes an action $a \in \ensuremath{A}\xspace$, perceives an observation $o \in \ensuremath{O}\xspace$, receives a reward based on the reward function $R(s, a)$, and moves to the next state.
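A POMDP of this form can be treated purely as a generative model by sampling-based solvers. The following sketch (Python; the names and structure are illustrative and not taken from ABT, MHFR, or SNM-Planner\xspace) pairs such a generative model with the kind of SIR particle-filter belief update that SNM-Planner\xspace uses:

```python
import random
from dataclasses import dataclass
from typing import Any, Callable

# Generative-model view of the POMDP tuple <S, A, O, T, Z, R, b0, gamma>.
# T is given as a sampler and Z as a likelihood; both are illustrative
# stand-ins, not the actual planner interfaces.
@dataclass
class GenerativePOMDP:
    sample_transition: Callable[[Any, Any], Any]      # s' ~ T(s, a, .)
    obs_likelihood: Callable[[Any, Any, Any], float]  # Z(s', a, o)
    reward: Callable[[Any, Any], float]               # R(s, a)
    gamma: float                                      # discount factor in (0, 1)

def sir_belief_update(pomdp, particles, a, o):
    """One SIR particle-filter step approximating the belief update:
    propagate every particle through T, weight by Z, then resample."""
    propagated = [pomdp.sample_transition(s, a) for s in particles]
    weights = [pomdp.obs_likelihood(sp, a, o) for sp in propagated]
    total = sum(weights)
    if total == 0.0:  # observation inconsistent with all particles
        return propagated
    return random.choices(propagated,
                          weights=[w / total for w in weights],
                          k=len(particles))
```

With a 0/1 observation likelihood this degenerates to rejection filtering; in practice the weights come from the observation density $Z$.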
Due to uncertainty in the results of actions and sensing, the agent never knows its exact state and therefore estimates its state as a probability distribution, called a belief. The solution to the POMDP problem is an optimal policy (denoted as \ensuremath{\pi^*}\xspace), which is a mapping $\ensuremath{\pi^*}\xspace: \mathbb{B} \rightarrow \ensuremath{A}\xspace$ from beliefs ($\mathbb{B}$ denotes the set of all beliefs, called the belief space) to actions that maximizes the expected total reward the robot receives, i.e., \begin{align} &V^*(b) = \nonumber\\ &\max_{a \in \ensuremath{A}\xspace} \left(R(b, a) + \gamma \int_{o \in \ensuremath{O}\xspace} p(o | b, a) V^*(\tau(b, a, o)) \, \ensuremath{d}\xspace\ensuremath{o}\xspace\right) \end{align} where $\tau(b, a, o)$ computes the updated belief estimate after the robot performs action $a \in \ensuremath{A}\xspace$ and perceives $o \in \ensuremath{O}\xspace$ from belief $b$, and is defined as: \begin{align}\label{e:belTrans} b'(s') &= \tau(b, a, o)(s') \nonumber \\&= \eta \, Z(s', a, o) \int_{s \in \ensuremath{S}\xspace} T(s, a, s') b(s) ds \end{align} For the motion planning problems considered in this work, we define the spaces $S$, $A$, and $O$ to be the same as those of the robotic system (for simplicity, we use the same notation). The transition $T$ represents the dynamics model $f$, while $Z$ represents the sensing model $h$. The reward function represents the task's objective, for example, a high reward for goal states and a large negative reward for states that cause the robot to collide with the obstacles. The initial belief \ensuremath{b_0}\xspace represents the uncertainty about the starting state of the robot. \subsection{Related Work on Non-Linearity Measures} Linearization is a common practice in solving non-linear control and estimation problems. It is known that linearization performs well only when the system's non-linearity is ``weak''\ccite{Li12:Measure}.
To identify the effectiveness of linearization in solving non-linear problems, a number of non-linearity measures have been proposed in the control and information fusion community. Many of these measures (e.g.\ccite{Bat80:Relative,Bea60:Confidence,Ema93:A}) have been designed for deterministic systems. For instance,\ccite{Bat80:Relative} proposed a measure derived from the curvature of the non-linear function. The work in\ccite{Bea60:Confidence,Ema93:A} computes a measure based on the distance between the non-linear function and its nearest linearization. A brief survey of non-linearity measures for deterministic systems is available in\ccite{Li12:Measure}. Non-linearity measures for stochastic systems have also been proposed. For instance,\ccite{Li12:Measure} extends the measures in\ccite{Bea60:Confidence,Ema93:A} to be based on the average distance between the non-linear function that models the motion and sensing of the system and the set of all possible linearizations of the function. Another example is\ccite{Dun13:Nonlinearity}, which proposes a measure based on the distance between a distribution over states and its Gaussian approximation, called the Measure of Non-Gaussianity\xspace (MoNG\xspace), rather than on the non-linear function itself. Assuming a passive stochastic system, this measure computes the negentropy between a transformed belief and its Gaussian approximation. The results indicate that this measure is more suitable for measuring the non-linearity of stochastic systems, as it takes into account the effect that non-linear transformations have on the shape of the transformed beliefs. This advancement is encouraging, and we will use MoNG\xspace as a comparator of SNM\xspace. However, for this purpose, MoNG\xspace must be modified, since we consider non-passive problems in this work. The exact modifications we made can be found in Section~\ref{ssec:mong}.
Despite the various non-linearity measures that have been proposed, most are not designed to take the effect of obstacles on the non-linearity of the system into account. Except for MoNG\xspace, all of the aforementioned non-linearity measures will have difficulties in reflecting these effects, even when they are embedded in the motion and sensing models. For instance, curvature-based measures require the non-linear function to be twice continuously differentiable, but the presence of obstacles is very likely to break the differentiability of the motion model. Furthermore, the effect of obstacles is likely to violate the additive Gaussian error assumption required, for instance, by\ccite{Li12:Measure}. Although MoNG\xspace can potentially take the effect of obstacles into account, it is not designed to. In the presence of obstacles, beliefs have support only in the valid region of the state space, and therefore computing the difference between beliefs and their Gaussian approximations is likely to underestimate the effect of obstacles. SNM\xspace is designed to address these issues. Instead of building upon existing non-linearity measures, SNM\xspace adopts approaches commonly used for the sensitivity analysis\ccite{Mas12:Loss,Mul97:Does} of Markov Decision Processes (MDPs)---a special class of POMDPs in which the observation model is perfect, and therefore the system is fully observable. These approaches use statistical distance measures between the original transition dynamics and their perturbed versions. Linearized dynamics can be viewed as a special case of perturbed dynamics, and hence this statistical distance measure can be applied as a non-linearity measure, too. We do need to extend these analyses, as they are generally defined for discrete state spaces and only with respect to the transition models (an MDP assumes the state of the system is fully observable).
Nevertheless, such extensions are feasible, and the generality of this measure could help identify the effectiveness of linearization in motion planning under uncertainty problems. \section{Experiments and Results} The purpose of our experiments is two-fold: to test the applicability of SNM\xspace to motion planning under uncertainty problems and to test SNM-Planner\xspace. For our first objective, we compare SNM\xspace with a modified version of the Measure of Non-Gaussianity (MoNG\xspace)\ccite{Dun13:Nonlinearity}. Details on this measure are in Section~\ref{ssec:mong}. We evaluate both measures using two robotic systems, a car-like robot with 2$^{nd}$-order dynamics and a torque-controlled 4DOFs manipulator, where both robots are subject to increasing uncertainties and increasing numbers of obstacles in the operating environment. Furthermore, we test both measures when the robots are subject to highly non-linear collision dynamics and different observation models. Details on the robot models are presented in Section~\ref{ssec:robot_models}, whereas the evaluation experiments are presented in Section~\ref{ssec:testing_snm}. To test SNM-Planner\xspace, we compare it with ABT and MHFR on three problem scenarios, including a torque-controlled 7DOFs manipulator operating inside a 3D office environment. Additionally, we test how sensitive SNM-Planner\xspace is to the choice of the SNM\xspace-threshold. The results for these experiments are presented in Section~\ref{ssec:testing_snm-planner}. All problem environments are modelled within the OPPT framework\ccite{hoerger2018software}. The solvers are implemented in C++. For the parallel construction of the RRTs in MHFR, we utilize 8 CPU cores throughout the experiments. All parameters are set based on preliminary runs over the possible parameter space; the parameters that generate the best results are then chosen to generate the experimental results.
\subsection{Measure of Non-Gaussianity\xspace}\label{ssec:mong} The Measure of Non-Gaussianity (MoNG\xspace) proposed in\ccite{Dun13:Nonlinearity} is based on the negentropy between the PDF of a random variable and its Gaussian approximation. Consider an $n$-dimensional random variable $X$ distributed according to PDF $p(x)$. Furthermore, let \lin{X} be a Gaussian approximation of $X$ with PDF $\widehat{p}(x)$, such that $\lin{X} \sim N(\mu , \Sigma_x)$, where $\mu$ and $\Sigma_x$ are the first two moments of $p(x)$. The negentropy between $p$ and \lin{p} (denoted as \J{p}{\lin{p}}) is then defined as \begin{equation} \J{p}{\lin{p}} = H(\lin{p}) - H(p) \end{equation} where \begin{equation}\label{e:entropies} \begin{split} \entr{\lin{p}} &= \frac{1}{2} \ln \left [(2 \pi e)^n \left |\det(\Sigma_x) \right | \right ] \\ \entr{p} &= - \int p(x) \ln p(x) dx \end{split} \end{equation} are the differential entropies of $p$ and $\lin{p}$ respectively. A (multivariate) Gaussian distribution has the largest differential entropy amongst all distributions with equal first two moments; therefore, \J{p}{\lin{p}} is always non-negative. In practice, since the PDF $p(x)$ is not known exactly in all but the simplest cases, \entr{p} has to be approximated. In\ccite{Dun13:Nonlinearity} this measure was originally used to assess the non-linearity of passive systems. Therefore, in order to achieve comparability with SNM\xspace, we need to extend the Non-Gaussianity measure to general active stochastic systems of the form $s_{t+1} = f(s_t, a_t, v_t)$. We do this by evaluating the non-Gaussianity of the distributions that follow from the transition function \ensuremath{T(s, a, s')}\xspace given state $s$ and action $a$. In particular, for a given $s$ and $a$, we can find a Gaussian approximation of \ensuremath{T(s, a, s')}\xspace (denoted by \ensuremath{T_G(s, a, s')}\xspace) by calculating the first two moments of the distribution that follows from \ensuremath{T(s, a, s')}\xspace.
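In one dimension, this negentropy can be estimated from samples; the sketch below (Python, illustrative function names, not the paper's implementation) computes the Gaussian term in closed form, as in the first equation of \eref{e:entropies}, and approximates $\entr{p}$ with a histogram estimator:

```python
import math
from collections import Counter

def gaussian_entropy(var):
    """Closed-form differential entropy of a 1D Gaussian: 0.5*ln(2*pi*e*var)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

def histogram_entropy(samples, bin_width):
    """Histogram estimate of the differential entropy H(p) from samples."""
    counts = Counter(int(x // bin_width) for x in samples)
    n = len(samples)
    h = 0.0
    for c in counts.values():
        p_bin = c / n
        h -= p_bin * math.log(p_bin / bin_width)  # p_bin / bin_width ~ density
    return h

def negentropy(samples, bin_width=0.1):
    """J(p, p_hat) = H(p_hat) - H(p), with p_hat the moment-matched Gaussian."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return gaussian_entropy(var) - histogram_entropy(samples, bin_width)
```

A strongly bimodal sample yields a large negentropy, while near-Gaussian or near-uniform samples give much smaller values (up to histogram bias).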
Using this Gaussian approximation, we define the Measure of Non-Gaussianity\xspace as \begin{align} &MoNG\xspace(\ensuremath{T}\xspace, \ensuremath{T_G}\xspace) = \nonumber\\ &\sup_{\ensuremath{s}\xspace \in \ensuremath{S}\xspace, \ensuremath{a}\xspace \in \ensuremath{A}\xspace} \left [H(\ensuremath{T_G(s, a, s')}\xspace) - H(\ensuremath{T(s, a, s')}\xspace)\right ] \end{align} Similarly, we can compute the Measure of Non-Gaussianity\xspace for the observation function: \begin{align} &MoNG\xspace(\ensuremath{Z}\xspace, \ensuremath{Z_G}\xspace) = \nonumber\\ &\sup_{\ensuremath{s}\xspace \in \ensuremath{S}\xspace, \ensuremath{a}\xspace \in \ensuremath{A}\xspace} \left [H(\ensuremath{Z_G(s, a, o)}\xspace) - H(\ensuremath{Z(s, a, o)}\xspace) \right ] \end{align} where \ensuremath{Z_G}\xspace is a Gaussian approximation of \ensuremath{Z}\xspace. In order to approximate the entropies \entr{\ensuremath{T(s, a, s')}\xspace} and \entr{\ensuremath{Z(s, a, o)}\xspace}, we use a histogram-based approach similar to the one discussed in Section~\ref{ssec:monApprox}. The entropy terms for the Gaussian approximations can be computed in closed form, according to the first equation in \eref{e:entropies}\ccite{Ahmed89Entropy}. \subsection{Robot Models}\label{ssec:robot_models} \subsubsection{4DOFs-Manipulator. }\label{sssec:4DOFManipulator} The 4DOFs-manipulator consists of 4 links connected by 4 torque-controlled revolute joints. The first joint is connected to a static base. In all problem scenarios the manipulator must move from a known initial state to a state where the end-effector lies inside a goal region located in the workspace of the robot, while avoiding collisions with the obstacles that populate the environment. The state of the manipulator is defined as $\ensuremath{s}\xspace=(\theta, \dot{\theta}) \in \mathbb{R}^{8}$, where $\theta$ is the vector of joint angles, and $\dot{\theta}$ the vector of joint velocities.
Both joint angles and joint velocities are subject to linear constraints: The joint angles are constrained by $(-3.14, 3.14)rad$, whereas the joint velocities are constrained by $(6,\allowbreak 2,\allowbreak 2,\allowbreak 2)rad/s$ in each direction. Each link of the robot has a mass of $1kg$. The control inputs of the manipulator are the joint torques, where the maximum joint torques are $(20,\allowbreak 20,\allowbreak 10,\allowbreak 5)Nm$ in each direction. Since ABT assumes a discrete action space, we discretize the joint torques for each joint using the maximum torque in each direction, which leads to 16 actions. The dynamics of the manipulator is defined using the well-known Newton-Euler formalism\ccite{spong06:RobotModelling}. For both manipulators we assume that the input torque for each joint is affected by zero-mean additive Gaussian noise. Note, however, that even though the error is Gaussian, the beliefs will in general not be Gaussian, due to the non-linearities of the motion dynamics. Since the transition dynamics for this robot are quite complex, we assume that the joint torques are applied for 0.1s and we use the ODE physics engine\ccite{drumwright2010:extending} for the numerical integration of the dynamics, where the discretization (\textrm{i.e.}\ $\delta t$) of the integrator is set to $\delta t = 0.004s$. The robot is equipped with two sensors: The first sensor measures the position of the end-effector inside the robot's workspace, whereas the second sensor measures the joint velocities. Consider a function $g: \mathbb{R}^{8} \rightarrow \mathbb{R}^3$ that maps the state of the robot to an end-effector position inside the workspace; the observation model is then defined as \begin{equation}\label{eq:obs4DOF} o = [g(s), \dot{\theta}]^T + w \end{equation} where $w$ is an error term drawn from a zero-mean multivariate Gaussian distribution with covariance matrix $\Sigma_w$.
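The observation model \eref{eq:obs4DOF} can be sketched as follows (Python; the forward-kinematics map $g$ is passed in as a function, and the noise is simplified to an isotropic Gaussian with standard deviation $\sigma_w$ instead of a full covariance matrix; all names are illustrative):

```python
import random

def observe(theta, theta_dot, fk, sigma_w):
    """Observation model of eq. (obs4DOF): the end-effector position g(s)
    stacked with the joint velocities, plus zero-mean Gaussian noise.
    fk: joint angles -> end-effector (x, y, z); sigma_w: noise std-dev
    (isotropic here for simplicity; the model above uses Sigma_w)."""
    clean = list(fk(theta)) + list(theta_dot)
    return [x + random.gauss(0.0, sigma_w) for x in clean]
```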
The initial state of the robot is a state where the joint angles and velocities are zero. When the robot performs an action with which it collides with an obstacle, it enters a terminal state and receives a penalty of -500. When it reaches the goal area, it also enters a terminal state, but receives a reward of 1,000. To encourage the robot to reach the goal area quickly, it receives a small penalty of -1 for every other action. \subsubsection{7DOFs Kuka iiwa manipulator. }\label{sssec:7DOFManipulator} The 7DOFs Kuka iiwa manipulator is very similar to the 4DOFs-manipulator, except that it consists of 7 links connected via 7 revolute joints. We set the POMDP model to be similar to that of the 4DOFs-manipulator, but expand it to handle 7DOFs. For this robot, the joint velocities are constrained by $(3.92,\allowbreak 2.91,\allowbreak 2.53,\allowbreak 2.23,\allowbreak 2.23,\allowbreak 2.23,\allowbreak 1.0)rad/s$ in each direction and the link masses are $(4,\allowbreak 4,\allowbreak 3,\allowbreak 2.7,\allowbreak 1.7,\allowbreak 1.8,\allowbreak 0.3)kg$. Additionally, the torque limits of the joints are $(25,\allowbreak 20,\allowbreak 10,\allowbreak 10,\allowbreak 5,\allowbreak 5,\allowbreak 0.5)Nm$ in each direction. For ABT we use the same discretization of the joint torques as in the 4DOFs-manipulator case, \textrm{i.e.} we use the maximum torque per joint in each direction, resulting in 128 actions. Similarly to the 4DOFs-manipulator, we assume that the input torques are applied for 0.1s and we use the ODE physics engine with an integration step size of 0.004s to simulate the transition dynamics. The observation and reward models are the same as for the 4DOFs-manipulator. The initial joint velocities are all zero, and all joint angles are zero except for the second joint, whose initial angle is $-1.5rad$. \fref{f:scenarioCompStudy}(c) shows the Kuka manipulator operating inside an office scenario. \subsubsection{Car-like robot.
}\label{sssec:SecOrderCar. } A nonholonomic car-like robot of size ($0.12\times0.07\times0.01$) drives on a flat $xy$-plane inside a 3D environment populated by obstacles. The robot must drive from a known start state to a position inside a goal region without colliding with any of the obstacles. The state of the robot at time $t$ is defined as a 4D vector $s_t = (x_t, y_t, \theta_t, \upsilon_t) \in \mathbb{R}^4$, where $x_t, y_t \in [-1, 1]$ are the coordinates of the center of the robot on the $xy$-plane, $\theta_t \in [-3.14, 3.14]rad$ is the orientation, and $\upsilon_t \in [-0.2, 0.2]$ is the linear velocity of the robot. The initial state of the robot is $(-0.7, -0.7, 1.57rad, 0)$, while the goal region is centered at $(0.7, 0.7)$ with radius $0.1$. The control input at time $t$, $a_t = (\alpha_t, \phi_t)$, is a 2D real vector consisting of the acceleration $\alpha_t \in [-1, 1]$ and the steering wheel angle $\phi_t \in [-1rad, 1rad]$. The robot's dynamics is subject to control noise $v_t = (\tilde{\alpha}_t, \tilde{\phi}_t) \sim N(0, \Sigma_v)$. The robot's transition model is \begin{equation}\label{eq:carDynamics} s_{t+1} = f(s_t, a_t, v_t) = \begin{bmatrix} x_t + \Delta t \upsilon_t \cos \theta_t\\ y_t + \Delta t \upsilon_t \sin \theta_t\\ \theta_t + \Delta t \tan(\phi_t + \tilde{\phi}_t) / 0.11\\ \upsilon_t + \Delta t (\alpha_t + \tilde{\alpha}_t) \end{bmatrix} \end{equation} where $\Delta t = 0.3s$ is the duration of a time step and the value $0.11$ is the distance between the front and rear axles of the wheels. This robot is equipped with two types of sensors: a localization sensor that receives signals from two beacons located at $(\hat{x}_1, \hat{y}_1)$ and $(\hat{x}_2, \hat{y}_2)$, and a velocity sensor mounted on the car.
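The transition model \eref{eq:carDynamics} maps directly to code; a minimal sketch (Python; constants taken from the values above, function name illustrative):

```python
import math

DT = 0.3          # time step duration Delta t (from the model above)
AXLE_DIST = 0.11  # distance between the front and rear axles

def car_step(state, action, noise):
    """Transition model of eq. (carDynamics).
    state = (x, y, theta, v), action = (alpha, phi),
    noise = (alpha_err, phi_err) ~ N(0, Sigma_v)."""
    x, y, theta, v = state
    alpha, phi = action
    alpha_err, phi_err = noise
    return (x + DT * v * math.cos(theta),
            y + DT * v * math.sin(theta),
            theta + DT * math.tan(phi + phi_err) / AXLE_DIST,
            v + DT * (alpha + alpha_err))
```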
With these two sensors the observation model is defined as \begin{equation}\label{e:obstFunctCarMazeAdditive} o_t = \begin{bmatrix} \frac{1}{((x_t - \hat{x}_1)^2 + (y_t - \hat{y}_1)^2 + 1)}\\ \frac{1}{((x_t - \hat{x}_2)^2 + (y_t - \hat{y}_2)^2 + 1)}\\ \upsilon_t \end{bmatrix} + w_t \end{equation} where $w_t$ is an error vector drawn from a zero-mean multivariate Gaussian distribution with covariance matrix $\Sigma_w$. Similar to the manipulators described above, the robot receives a penalty of -500 when it collides with an obstacle, a reward of 1,000 when reaching the goal area, and a small penalty of -1 for any other action. \subsection{Testing SNM\xspace}\label{ssec:testing_snm} In this set of experiments we want to understand the performance of SNM\xspace compared to MoNG\xspace in various scenarios. In particular, we are interested in the effect of increasing uncertainties and the effect that obstacles have on the effectiveness of SNM\xspace, and whether these results are consistent with the performance of a general solver relative to a linearization-based solver. Additionally, we want to see how highly non-linear collision dynamics and different observation models -- one with additive and one with non-additive Gaussian noise -- affect our measure. For the experiments with increasing motion and sensing errors, recall from Section~\ref{ssec:robot_models} that the control errors are drawn from zero-mean multivariate Gaussian distributions with covariance matrices $\Sigma_v$. We define the control error (denoted as \ensuremath{e_T}\xspace) to be the standard deviation of these Gaussian distributions, such that $\Sigma_v = \ensuremath{e_T}\xspace^2 \times \mathds{1}$. Similarly, for the covariance matrices of the zero-mean multivariate Gaussian sensing errors, we define the observation error as \ensuremath{e_Z}\xspace, such that $\Sigma_w = \ensuremath{e_Z}\xspace^2 \times \mathds{1}$.
Note that during all the experiments, we use normalized spaces, which means that the error vectors affect the normalized action and observation vectors. For SNM\xspace and MoNG\xspace we first generated 100,000 state samples for each scenario, and computed a lookup table for each error value off-line, as discussed in Section~\ref{ssec:monApprox}. Then, during run-time we calculated the average approximated SNM\xspace and MoNG\xspace values. \subsubsection{Effects of increasing uncertainties in cluttered environments. }\label{ssec:increasingly_uncertain} \begin{figure*} \centering \begin{tabular}{c@{\hspace*{5pt}}c@{\hspace*{5pt}}c@{\hspace*{5pt}}} \includegraphics[height=4cm]{DubinMaze} & \includegraphics[height=4cm]{4DOFFactory1-cropped} & \includegraphics[height=4cm]{KukaOfficeEnvironment} \\ (a) Maze & (b) Factory & (c) KukaOffice \end{tabular} \caption{Test scenarios for the different robots. The objects colored black and gray are obstacles, while the green sphere is the goal region. (a) The Maze scenario for the car-like robot. The blue squares represent the beacons, while the orange square at the bottom left represents the initial state. (b) The 4DOFs-manipulator scenario. (c) The KukaOffice scenario.} \label{f:scenarioCompStudy} \end{figure*} To investigate the effect of increasing control and observation errors on SNM\xspace, MoNG\xspace, and the two solvers ABT and MHFR in cluttered environments, we ran a set of experiments where the 4DOFs-manipulator and the car-like robot operate in empty environments and environments with obstacles, with increasing values of \ensuremath{e_T}\xspace and \ensuremath{e_Z}\xspace, ranging between $0.001$ and $0.075$. The environments with obstacles are the Maze and Factory environments shown in \fref{f:scenarioCompStudy}(a) and (b).
For each scenario and each control-sensing error value (we set $\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$), we ran 100 simulation runs using ABT and MHFR respectively with a planning time of 2s per step. The average values for SNM\xspace and MoNG\xspace and the relative value differences between ABT and MHFR in the empty environments are presented in \tref{t:measureCompareEmpty}. The results show that for both scenarios SNM\xspace and MoNG\xspace are sensitive to increasing transition and observation errors. This resonates well with the relative value difference between ABT and MHFR. The more interesting question now is: how sensitive are both measures to obstacles in the environment? \tref{t:measureCompareClutter}(a) and (b) show the results for the Factory and the Maze scenario, respectively. It is evident that SNM\xspace increases significantly compared to the empty environments, whereas MoNG\xspace is almost unaffected. Overall, obstacles increase the relative value difference between ABT and MHFR, except for large uncertainties in the Maze scenario. This indicates that MHFR suffers more from the additional non-linearities that obstacles introduce. SNM\xspace is able to capture these effects well. An interesting remark regarding the results for the Maze scenario in \tref{t:measureCompareClutter}(b) is that the relative value difference actually decreases for large uncertainties. The reason for this can be seen in \fref{f:relValCarMaze}. As the uncertainties increase, the problem becomes so difficult that both solvers fail to compute a reasonable policy within the given planning time. However, MHFR clearly suffers from these large uncertainties earlier than ABT does.
\begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\textbf{(a) Empty environment 4DOFs-manipulator}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.207 & 0.548 & 0.0110 \\ \hline 0.0195 & 0.213 & 0.557 & 0.0346 \\ \hline 0.038 & 0.243 & 0.603 & 0.0385 \\ \hline 0.057 & 0.254 & 0.617 & 0.0437 \\ \hline 0.075 & 0.313 & 0.686 & 0.0470 \\ \hline \hline \multicolumn{4}{|c|}{\textbf{(b) Empty environment Car-like robot}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.169 & 0.473 & 0.1426 \\ \hline 0.0195 & 0.213 & 0.479 & 0.1793 \\ \hline 0.038 & 0.295 & 0.458 & 0.1747 \\ \hline 0.057 & 0.350 & 0.476 & 0.1839 \\ \hline 0.075 & 0.395 & 0.446 & 0.2641 \\ \hline \end{tabular}} \caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator (a) and the car-like robot (b) operating inside empty environments.} \label{t:measureCompareEmpty} \end{table} \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\textbf{(a) Factory environment}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.293 & 0.539 & 0.0892 \\ \hline 0.0195 & 0.351 & 0.567 & 0.1801 \\ \hline 0.038 & 0.470 & 0.621 & 0.5818 \\ \hline 0.057 & 0.502 & 0.637 & 0.7161 \\ \hline 0.075 & 0.602 & 0.641 & 1.4286 \\ \hline \hline \multicolumn{4}{|c|}{\textbf{(b) Maze environment}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace =
\ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.215 & 0.482 & 0.2293 \\ \hline 0.0195 & 0.343 & 0.483 & 1.4473 \\ \hline 0.038 & 0.470 & 0.491 & 1.1686 \\ \hline 0.057 & 0.481 & 0.497 & 0.0985 \\ \hline 0.075 & 0.555 & 0.502 & 0.0040 \\ \hline \end{tabular}} \caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator operating inside the Factory environment (a) and the car-like robot operating inside the Maze environment (b).} \label{t:measureCompareClutter} \end{table} \begin{figure} \centering \includegraphics[width=0.485\textwidth]{resDubinMaze2} \caption{The average total discounted rewards achieved by ABT and MHFR in the Maze scenario, as the uncertainties increase. Vertical bars are the 95\% confidence intervals.} \label{f:relValCarMaze} \end{figure} \subsubsection{Effects of increasingly cluttered environments. } To investigate the effects of increasingly cluttered environments on both measures, we ran a set of experiments in which the Car-like robot and the 4DOFs-manipulator operate inside environments with an increasing number of randomly distributed obstacles. For this, we generated test scenarios with 5, 10, 15, 20, 25 and 30 obstacles that are uniformly distributed across the environment. For each of these test scenarios, we randomly generated 100 environments. \fref{f:randObstacles}(a) and (b) show two example environments with 30 obstacles for the Car-like robot and the 4DOFs-manipulator. For this set of experiments, we do not take collision dynamics into account. The control and observation errors are fixed to $\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace = 0.038$, which corresponds to the median of the uncertainty values. \tref{t:increasingObstacles} presents the results for SNM\xspace, MoNG\xspace and the relative value difference between ABT and MHFR for the 4DOFs-manipulator (a) and the car-like robot (b).
From these results it is clear that, as the environments become increasingly cluttered, the advantage of ABT over MHFR increases, indicating that the obstacles have a significant effect on the Gaussian belief assumption of MHFR. Additionally, SNM\xspace is clearly more sensitive to those effects compared to MoNG\xspace, whose values remain virtually unaffected by the clutter in the environments. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.20\textwidth]{DubinRandom30-cropped} & \includegraphics[width=0.20\textwidth]{4DOFRandom30-cropped} \\ (a) Car-like robot & (b) 4DOFs-manipulator \end{tabular} \caption{Two example scenarios for the Car-like robot (a) and the 4DOFs-manipulator (b) with 30 randomly distributed obstacles.} \label{f:randObstacles} \end{figure} \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\textbf{(a) 4DOFs-manipulator with increasing number of obstacles}} \\ \hline \hline Num obstacles & SNM\xspace & MoNG\xspace & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 5 & 0.359 & 0.650 & 0.0276 \\ \hline 10 & 0.449 & 0.643 & 0.0683 \\ \hline 15 & 0.514 & 0.673 & 0.2163 \\ \hline 20 & 0.527 & 0.683 & 0.2272 \\ \hline 25 & 0.651 & 0.690 & 0.2675 \\ \hline 30 & 0.698 & 0.672 & 0.3108 \\ \hline \hline \multicolumn{4}{|c|}{\textbf{(b) Car-like robot with increasing number of obstacles}} \\ \hline \hline Num obstacles & SNM\xspace & MoNG\xspace & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 5 & 0.327 & 0.459 & 0.0826 \\ \hline 10 & 0.387 & 0.473 & 0.1602 \\ \hline 15 & 0.446 & 0.482 & 0.1846 \\ \hline 20 & 0.468 & 0.494 & 0.4813 \\ \hline 25 & 0.529 & 0.489 & 0.5788 \\ \hline 30 & 0.685 & 0.508 & 0.7884 \\ \hline \end{tabular}} \caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator (a) and the car-like robot (b) operating inside environments with increasing
numbers of obstacles.} \label{t:increasingObstacles} \end{table} \subsubsection{Effects of collision dynamics. } Intuitively, collision dynamics are highly non-linear effects. Here we investigate SNM\xspace's capability in capturing these effects compared to MoNG\xspace. For this, the robots are allowed to collide with the obstacles. In other words, colliding states are not terminal and the dynamic effects of collisions are reflected in the transition model. For the 4DOFs-manipulator these collisions are modeled as additional constraints (contact points) that are resolved by applying ``correcting velocities'' to the colliding bodies in the opposite direction of the contact normals. For the Car-like robot, we modify the transition model \eref{eq:carDynamics} to consider collision dynamics such that \begin{equation}\label{eq:car_coll_transition} s_{t+1} = \begin{cases} f_{coll}(s_t, a_t, v_t) & \text{ if } f(s_t, a_t, v_t)\ \text{collides} \\ f(s_t, a_t, v_t) & \text{ else } \end{cases} \end{equation} where \begin{equation}\label{eq:car_coll_funct} f_{coll}(s_t, a_t, v_t) = \left [x_t, y_t, \theta_t, -3v_t \right ]^T. \end{equation} This transition function causes the robot to slightly ``bounce'' off obstacles upon collision. There are two interesting remarks regarding this transition function: The first one is that \eref{eq:car_coll_funct} is deterministic. In other words, a collision causes an immediate reduction of the uncertainty regarding the state of the robot. Second, while the collision effects \eref{eq:car_coll_funct} are linear, \eref{eq:car_coll_transition} is not smooth since the collision dynamics induce discontinuities when the robot operates in the vicinity of obstacles.
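The collision-aware transition above can be sketched in a few lines of Python; the nominal dynamics `f` and the environment-specific collision predicate `collides` are placeholders, and reading $v_t$ in $f_{coll}$ as the velocity component of the state $s_t = [x_t, y_t, \theta_t, v_t]^T$ is our assumption:

```python
import numpy as np

def f_coll(s):
    """Collision response (eq:car_coll_funct): keep the pose (x, y, theta)
    and replace the velocity with -3 * v, producing a small deterministic
    "bounce". We assume the state layout s = [x, y, theta, v]."""
    x, y, theta, v = s
    return np.array([x, y, theta, -3.0 * v])

def transition(s, a, noise, f, collides):
    """Collision-aware transition (eq:car_coll_transition): use the nominal
    dynamics f unless the propagated state collides, in which case the
    deterministic bounce response is applied instead."""
    s_next = f(s, a, noise)
    return f_coll(s) if collides(s_next) else s_next
```

The `if` branch is what makes the overall map discontinuous near obstacles even though $f_{coll}$ itself is linear.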
\begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\textbf{(a) Maze environment with collision dynamics}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.425 & 0.490 & 0.3807 \\ \hline 0.0195 & 0.576 & 0.505 & 7.0765 \\ \hline 0.038 & 0.636 & 0.542 & 8.6847 \\ \hline 0.057 & 0.740 & 0.569 & 2.0194 \\ \hline 0.075 & 0.776 & 0.611 & 1.7971 \\ \hline \hline \multicolumn{4}{|c|}{\textbf{(b) Factory environment with collision dynamics}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.492 & 0.639 & 0.07141 \\ \hline 0.0195 & 0.621 & 0.621 & 0.4007 \\ \hline 0.038 & 0.725 & 0.738 & 0.6699 \\ \hline 0.057 & 0.829 & 0.742 & 1.0990 \\ \hline 0.075 & 0.889 & 0.798 & 1.7100 \\ \hline \end{tabular}} \caption{Average values of SNM, MoNG and the relative value difference between ABT and MHFR for the 4DOFs-manipulator operating inside the Factory environment (a) and the car-like robot operating inside the Maze environment (b) while being subject to collision dynamics.} \label{t:measureCompareColl} \end{table} \tref{t:measureCompareColl} shows SNM\xspace, MoNG\xspace and the relative value difference between ABT and MHFR for the 4DOFs-manipulator operating inside the Factory environment (a) and the car-like robot operating inside the Maze environment (b) while being subject to collision dynamics. It can be seen that the additional non-linear effects are captured well by SNM\xspace.
Interestingly, compared to the results in \tref{t:measureCompareClutter}(a), where the 4DOFs-manipulator operates in the same environment without collision dynamics, MoNG\xspace captures the effects of collision dynamics as well, which indicates that collision dynamics have a large effect on the Gaussian assumption made by MHFR. Looking at the relative value difference between ABT and MHFR confirms this. MHFR suffers more from the increased non-linearity of the problems caused by collision dynamics compared to ABT. This effect worsens as the uncertainty increases, which is a clear indication that the problem becomes increasingly non-linear with larger uncertainties. Looking at the results for the car-like robot operating in the Maze scenario presents a similar picture. Comparing the results in \tref{t:measureCompareColl}(b), where collision dynamics are taken into account, to \tref{t:measureCompareClutter}(b) shows that collision dynamics have a significant effect on both SNM\xspace and MoNG\xspace. \subsubsection{Effects of non-linear observation functions with non-additive errors. }\label{ss:observationComparison} In the previous experiments we assumed that the observation functions are non-linear functions with additive Gaussian noise, a special class of non-linear observation functions. This class of observation functions has some interesting implications: First of all, the resulting observation distribution remains Gaussian. This in turn means that MoNG\xspace for the observation function evaluates to zero. Second, linearizing the observation function results in a Gaussian distribution with the same mean but different covariance. We therefore expect that the observation component of SNM\xspace remains small, even for large uncertainties.
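The first implication above is easy to verify numerically: with additive errors $o_t = g(s_t) + w_t$ the conditional observation distribution is exactly Gaussian, whereas pushing the noise through the non-linearity, $o_t = g(s_t + w_t)$, generally destroys Gaussianity. A small Monte-Carlo check (the quadratic $g$ is purely illustrative, and sample skewness stands in for a full non-Gaussianity measure):

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda s: s**2                      # illustrative non-linear sensor map
s0, sigma_w, n = 0.7, 0.3, 200_000
w = rng.normal(0.0, sigma_w, n)

o_additive = g(s0) + w                  # Gaussian: noise added after g
o_nonadditive = g(s0 + w)               # non-Gaussian: noise inside g

def skewness(x):
    """Sample skewness; zero (up to Monte-Carlo error) for a Gaussian."""
    x = x - x.mean()
    return float((x**3).mean() / x.std()**3)

print(abs(skewness(o_additive)))        # close to 0
print(abs(skewness(o_nonadditive)))     # clearly non-zero
```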
To investigate how SNM\xspace reacts to non-linear observation functions with non-additive noise, we ran a set of experiments for the 4DOFs-manipulator operating inside the Factory environment and the car-like robot operating inside the Maze environment where we replaced both observation functions with non-linear functions with non-additive noise. For the 4DOFs-manipulator we replaced the observation function defined in \eref{eq:obs4DOF} with \begin{equation}\label{e:4DOFObsNonAdditive} o_t = g(s_t + w_t) \end{equation} where $w_t \sim N(0, \Sigma_w)$. In other words, the manipulator only has access to a sensor that measures the position of the end-effector in the workspace. For the car-like robot we use the following observation function: \begin{equation}\label{e:obstFunctCarMazeNonAdditive} o_t = \begin{bmatrix} \frac{1}{((x_t + w_t^1 - \hat{x}_1)^2 + (y_t + w_t^2 - \hat{y}_1)^2 + 1)}\\ \frac{1}{((x_t + w_t^1 - \hat{x}_2)^2 + (y_t + w_t^2 - \hat{y}_2)^2 + 1)}\\ v_t + w_t^3 \end{bmatrix} \end{equation} where $\left (w_t^1, w_t^2, w_t^3 \right )^T \sim N(0, \Sigma_w)$. For both robots, we set $\ensuremath{e_T}\xspace = 0.038$.
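For concreteness, the non-additive observation model \eref{e:obstFunctCarMazeNonAdditive} can be written as a small function; passing the beacon positions $(\hat{x}_i, \hat{y}_i)$ in as a parameter is our choice:

```python
import numpy as np

def car_observation(s, w, beacons):
    """Non-additive observation model (e:obstFunctCarMazeNonAdditive).

    s = [x, y, theta, v] is the state and w = (w1, w2, w3) ~ N(0, Sigma_w)
    enters inside the non-linearity. Each beacon reading lies in (0, 1]
    and decays with the squared distance to the beacon."""
    x, y, _, v = s
    w1, w2, w3 = w
    readings = [1.0 / ((x + w1 - bx)**2 + (y + w2 - by)**2 + 1.0)
                for (bx, by) in beacons]
    return np.array(readings + [v + w3])
```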
\begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{\textbf{(a) Factory environment with additive observation errors}} \\ \hline \hline $\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline \textbf{SNM\xspace} & 0.001 & 0.004 & 0.013 & 0.036 & 0.047 \\ \hline \textbf{MoNG} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \hline \multicolumn{6}{|c|}{\textbf{(b) Factory environment with non-additive observation errors}} \\ \hline \hline $\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline \textbf{SNM\xspace} & 0.012 & 0.087 & 0.173 & 0.234 & 0.317 \\ \hline \textbf{MoNG} & 0.0 & 0.047 & 0.094 & 0.136 & 0.173 \\ \hline \end{tabular}} \caption{Comparison between the observation component of SNM\xspace and MoNG\xspace for the 4DOFs-manipulator operating inside the Factory environment with observation function \eref{eq:obs4DOF} (a) and \eref{e:4DOFObsNonAdditive} (b) as the observation errors increase.} \label{t:compAdditive4DOF} \end{table} \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{\textbf{(a) Maze environment with additive observation errors}} \\ \hline \hline $\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline \textbf{SNM\xspace} & 0.002 & 0.012 & 0.037 & 0.048 & 0.060 \\ \hline \textbf{MoNG} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \hline \multicolumn{6}{|c|}{\textbf{(b) Maze environment with non-additive observation errors}} \\ \hline \hline $\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline \textbf{SNM\xspace} & 0.083 & 0.086 & 0.101 & 0.198 & 0.207 \\ \hline \textbf{MoNG} & 0.0 & 0.012 & 0.032 & 0.053 & 0.075 \\ \hline \end{tabular}} \caption{Comparison between the observation component of SNM\xspace and MoNG\xspace for the car-like robot operating inside the Maze environment with observation function
\eref{e:obstFunctCarMazeAdditive}(a) and observation function \eref{e:obstFunctCarMazeNonAdditive}(b) as the observation errors increase.} \label{t:compAdditiveCar} \end{table} \tref{t:compAdditive4DOF} shows the values for the observation components of SNM\xspace and MoNG\xspace for the 4DOFs-manipulator operating inside the Factory environment as the observation errors increase. As expected, for additive Gaussian errors, MoNG\xspace is zero, whereas SNM\xspace is small but measurable. This shows that SNM\xspace is able to capture the difference in variance between the original and linearized observation functions. For non-additive errors, the observation distribution is non-Gaussian; accordingly, both measures increase as the observation errors increase. Interestingly, for both measures the observation components yield significantly smaller values compared to the transition components. This indicates that the non-linearity of the problem stems mostly from the transition function. For the car-like robot operating inside the Maze environment we see a similar picture. For the observation function with additive Gaussian errors, \tref{t:compAdditiveCar}(a) shows that MoNG\xspace remains zero for all values of \ensuremath{e_Z}\xspace, whereas SNM\xspace yields a small but measurable value. Again, both measures increase significantly in the non-additive error case in \tref{t:compAdditiveCar}(b). The question now is: how do ABT and MHFR perform in both scenarios when observation functions with non-additive Gaussian errors are used? \tref{t:measureCompareNonAdditive}(a) shows this relative value difference for the 4DOFs-manipulator operating inside the Factory environment. It can be seen that as the errors increase, the relative value difference between ABT and MHFR increases significantly, compared to the relative value difference shown in \tref{t:measureCompareClutter}(a), where an observation function with additive errors is used.
Similarly, for the car-like robot operating inside the Maze scenario using the observation function with non-additive errors, the relative value difference shown in \tref{t:measureCompareNonAdditive}(b) between the two solvers is much larger compared to \tref{t:measureCompareClutter}(b). This is in line with our intuition that non-Gaussian observation functions are more challenging for linearization-based solvers. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\textbf{(a) Factory environment with non-additive observation errors}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.012 & 0.0 & 0.06992 \\ \hline 0.0195 & 0.0878 & 0.0476 & 0.43861 \\ \hline 0.038 & 0.1732 & 0.0941 & 0.89720 \\ \hline 0.057 & 0.2347 & 0.1363 & 1.46063 \\ \hline 0.075 & 0.3178 & 0.1740 & 8.34832 \\ \hline \hline \multicolumn{4}{|c|}{\textbf{(b) Maze environment with non-additive observation errors}} \\ \hline \hline \textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline 0.001 & 0.0837 & 0.0 & -0.12451 \\ \hline 0.0195 & 0.0868 & 0.0121 & 0.33872 \\ \hline 0.038 & 0.1017 & 0.0321 & 1.41429 \\ \hline 0.057 & 0.1983 & 0.0531 & 8.70111 \\ \hline 0.075 & 0.2072 & 0.0758 & 0.95132 \\ \hline \end{tabular}} \caption{Average values of SNM, MoNG and the relative value difference between ABT and MHFR for the 4DOFs-manipulator operating inside the Factory environment (a) and the car-like robot operating inside the Maze environment (b) with non-additive observation errors.} \label{t:measureCompareNonAdditive} \end{table} \subsection{Testing SNM-Planner\xspace}\label{ssec:testing_snm-planner} In this set of experiments we want to
test the performance of SNM-Planner\xspace in comparison with the two component planners ABT and MHFR. To this end we tested SNM-Planner\xspace on three problem scenarios: The Maze scenario for the car-like robot shown in \fref{f:scenarioCompStudy}(a) and the Factory scenario for the 4DOFs-manipulator. Additionally, we tested SNM-Planner\xspace on a scenario in which the Kuka iiwa robot operates inside an office environment, as shown in \fref{f:scenarioCompStudy}(b). Similarly to the Factory scenario, the robot has to reach a goal area while avoiding collisions with the obstacles. The planning time per step is 8s in this scenario. For the SNM\xspace-threshold we chose 0.5. Here we set $\ensuremath{e_T}\xspace=\ensuremath{e_Z}\xspace=0.038$. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \textbf{Planner} & \textbf{Car-like robot} & \textbf{4DOFs-manipulator} & \textbf{Kuka iiwa} \\ \hline \hline ABT & -150.54 $\pm$ 40.6 & 801.78 $\pm$ 25.7 & 498.33 $\pm$ 30.6 \\ \hline MHFR & -314.25 $\pm$ 31.4 & 345.82 $\pm$ 60.8 & -163.21 $\pm$ 29.6 \\ \hline SNM-Planner\xspace & \textbf{14.68 $\pm$ 46.3} & \textbf{833.17 $\pm$ 13.4} & \textbf{620.67 $\pm$ 35.7} \\ \hline \end{tabular}} \caption{Average total discounted reward and $\pm$ 95\% confidence interval over 1,000 simulation runs. The proportion of ABT being used in the Maze, Factory and Office scenarios is 37.85\%, 56.43\% and 42.33\%, respectively.} \label{t:snmPlannerResults} \end{table} The results in \tref{t:snmPlannerResults} indicate that SNM-Planner\xspace is able to approximately identify when it is beneficial to use a linearization-based solver and when a general solver should be used. In all three scenarios, SNM-Planner\xspace outperforms the two component planners. In the Maze scenario, the difference between SNM-Planner\xspace and the component planners is significant.
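The switching rule underlying SNM-Planner\xspace can be summarised in a few lines; `estimate_local_snm` stands in for the planner's local approximation of SNM\xspace around the support of the current belief (a placeholder, not the paper's actual estimator):

```python
def select_solver(belief, estimate_local_snm, threshold=0.5):
    """Pick the component solver for the next planning step: the general
    solver (ABT) when the local non-linearity estimate exceeds the
    threshold, the linearization-based solver (MHFR) otherwise. Small
    thresholds therefore favor ABT and large thresholds favor MHFR."""
    return "ABT" if estimate_local_snm(belief) > threshold else "MHFR"
```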
The reason is that MHFR is well suited to compute a long-term strategy, as it constructs nominal trajectories from the current state estimate all the way to the goal, whereas the planning horizon of ABT is limited by the depth of the search tree. However, in the proximity of obstacles, the Gaussian belief assumption of MHFR is no longer valid, and careful planning is required to avoid collisions with the obstacles. In general ABT handles these situations better than MHFR. SNM-Planner\xspace combines the benefits of both planners and alleviates their shortcomings. \fref{f:DubinSNMSamples} shows state samples for which the SNM\xspace-values exceed the given threshold of 0.5. It is obvious that many of these samples are clustered around obstacles. In other words, when the support set of the current belief (i.e. the subset of the state space that is covered by the belief particles) lies in open areas, MHFR is used to drive the robot towards the goal, whereas in the proximity of obstacles, ABT is used to compute a strategy that avoids collisions with the obstacles. A similar behavior was observed in the KukaOffice environment. During the early planning steps, when the robot operates in the open area, MHFR is well suited to drive the end-effector towards the goal area, but near the narrow passage at the back of the table, ABT in general computes better motion strategies. Again, SNM-Planner\xspace combines both strategies to compute better motion strategies than each of the component planners alone. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{DubinSNMSamples} \caption{State samples in the Maze scenario for which the approximated SNM\xspace value exceeds the chosen threshold of 0.5.} \label{f:DubinSNMSamples} \end{figure} \subsubsection{Sensitivity of SNM-Planner\xspace. } In this experiment we test how sensitive the performance of SNM-Planner\xspace is to the choice of the SNM\xspace-threshold.
Recall that SNM-Planner\xspace uses this threshold to decide, based on a local approximation of SNM\xspace, which solver to use for the policy computation. For small thresholds SNM-Planner\xspace favors ABT, whereas for large thresholds MHFR is favored. For this experiment we test SNM-Planner\xspace on the Factory problem (\fref{f:scenarioCompStudy}(b)) with multiple values for the SNM\xspace-threshold, ranging from 0.1 to 0.9. For each threshold value we estimate the average total expected discounted reward achieved by SNM-Planner\xspace using 1,000 simulation runs. Here we set $\ensuremath{e_T}\xspace=\ensuremath{e_Z}\xspace=0.038$. \tref{t:SNMPlannerSensitivity} summarizes the results. It can be seen that the choice of the threshold can affect the performance of SNM-Planner\xspace, particularly for values that are on either side of the spectrum (very small values or very large values) where SNM-Planner\xspace favors only one of the component solvers. However, between the threshold values of 0.2 and 0.5 the results are fairly consistent, which indicates that there is a range of SNM\xspace-threshold values for which SNM-Planner\xspace performs well. \begin{table}[htb] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|} \hline \textbf{SNM-Threshold} & \textbf{Avg. total discounted reward} & \textbf{\% ABT used}\\ \hline \hline 0.1 & 789.43 $\pm$ 18.4 & 100.0 \\ \hline 0.2 & 794.69 $\pm$ 15.3 & 95.3 \\ \hline 0.3 & 801.82 $\pm$ 14.2 & 89.8 \\ \hline 0.4 & 834.32 $\pm$ 13.3 & 65.2\\ \hline 0.5 & 833.17 $\pm$ 13.4 & 59.6\\ \hline 0.6 & 725.71 $\pm$ 19.6 & 42.7 \\ \hline 0.7 & 622.39 $\pm$ 18.5 & 30.6 \\ \hline 0.8 & 561.02 $\pm$ 29.4 & 21.5 \\ \hline 0.9 & 401.79 $\pm$ 39.6 & 7.8 \\ \hline \end{tabular}} \caption{Average total discounted reward and 95\% confidence intervals of SNM-Planner\xspace on the Factory problem for varying SNM\xspace-threshold values. The average is collected over 1,000 simulation runs.
The last column shows the percentage of ABT being used as the component solver.\vspace{-9pt}} \label{t:SNMPlannerSensitivity} \end{table} \vspace{-9pt} \section{Summary and Future Work}\label{sec:discussion} This paper presents our preliminary work in identifying the suitability of linearization for motion planning under uncertainty. To this end, we present a general measure of non-linearity, called Statistical-distance-based Non-linearity Measure\xspace (SNM\xspace), which is based on the distance between the distributions that represent the system's motion--sensing model and its linearized version. Comparison studies with one of the state-of-the-art non-linearity measures indicate that SNM\xspace is better suited to taking obstacles into account when measuring the effectiveness of linearization. We also propose a simple on-line planner that uses a local estimate of SNM\xspace to select whether to use a general POMDP solver or a linearization-based solver for robot motion planning under uncertainty. Experimental results indicate that our simple planner can appropriately decide where linearization should be used and generates motion strategies that are comparable to or better than those of each component planner. Future work abounds. For instance, the question of a better measure remains open. The total variation distance relies on computing a maximization, which is often difficult to estimate. Statistical distance functions that rely on expectations exist and can be computed faster. How suitable are these functions as a non-linearity measure? Furthermore, our upper bound result is relatively loose and can only be applied as a sufficient condition to identify if linearization will perform well. It would be useful to find a tighter bound that remains general enough for the various linearization and distribution approximation methods in robotics.
\section{Introduction}\label{sec:bsu-intro} Spatial structures and spatial algorithms are sometimes overshadowed by geostatistics and geochemical analysis in geology and stratigraphic modelling despite playing a no-less important role. In this paper, \textit{surface warping} is concerned with computational methods for mesh surface manipulation that automatically increase spatial fidelity \cite{guglielmino20133d}.\footnote{This should not be confused with alternative meanings in geomorphology for instance where \textit{surface warping} is attributed to cross-bending stresses resulting from torsional forces in the study of the mechanics of geologic structures \cite{mead1920notes}.} Specifically, it focuses on reshaping and correcting inaccuracies in an existing surface to maximize its agreement with observed data; in particular, geochemical assays sampled from drilled holes in an ore deposit. For decades, displacement field estimation (e.g. \cite{krishnamurthy1995optical}\cite{chang1997simultaneous}\cite{mark1997post}\cite{farneback2001very}\cite{secker2003lifting}) and surface editing techniques (e.g. \cite{nealen2006physically}\cite{sorkine2004laplacian}) have flourished in the computer vision and computer graphics community. The idea of directly manipulating mesh surface vertices can be traced back to Allan, Wyvill and Witten \cite{allan1989methodology} amongst others in a contemporary context, where the concepts of region of influence and movement constraints (in terms of decay function, bound and anchored vertices) are discussed. In specific disciplines such as video coding, dense motion field (optical flow) estimation and segmentation have, for instance, been attempted using deformable mesh motion models and MAP estimators. The a priori distribution of the estimates can be modelled by a coupled Markov random field to account for both spatial smoothness and temporal continuity along the estimated motion trajectories \cite{stiller1994object}. Despite these similarities, the \textit{surface and boundary alignment} problem considered in this paper differs in some significant ways.
Although displacement estimation remains a central theme, the observations rely on \textit{geochemistry} rather than photogrammetry (or interferometry in the case of strain tensor estimation from geodetic and satellite deformation measurements \cite{guglielmino20133d}) and these observations are sparse, spatially irregular and noisy in comparison. For the surface and boundary alignment problem, mesh processing techniques \cite{botsch2010polygon} and domain knowledge \cite{dewaele-18} can play a crucial role. To the authors' knowledge, \textit{displacement field estimation} has not been utilized previously in 3D surface-based modelling of geological structures \cite{caumon2009surface} or exploited in practice to improve the efficacy of models in mining. This paper aims to bridge the gap that exists between a model and the latest data acquired from a mine. The Bayesian approach connects a model (or belief) with evidence from geochemical observations (the reality). Displacement estimation helps identify discrepancies; surface warping then corrects spatial inaccuracies in the surface-based representation of the relevant geological structures. In the literature, research has progressed largely in two separate streams: (1) advances in computational techniques that manipulate surfaces using dense uniformly sampled data, where warping is used to process visual information or register anatomical changes via MRI \cite{thompson2000warping} for instance; (2) generating 3D subsurface models using extensive field data and different modalities; a variety of techniques (including the use of contours and differential geometry) are summarized in \cite{leung-19subsurface} (Sect.\,1.1) and \cite{caumon2009surface}.
In the first research stream, examples include (a) energy constrained mesh deformation using subspace gradient domain techniques \cite{huang2006subspace} where the targeted application is computer animation; and (b) post-rendering 3D warping which compensates for occlusion-related artifacts due to viewpoint variation by compositing interpolants from multiple reference frames \cite{mark1997post}. In computer graphics, methods for constructing extrinsically smooth and globally optimal directional fields have been considered by Jakob et al.\,\cite{jakob2015instant}, Huang and Ju \cite{huang2016extrinsically} and Kn{\"o}ppel et al.\,\cite{knoppel2013globally} where the problem is treated as a discrete optimization or sparse eigenvalue problem. An example from the second stream is the work of Olierook et al.\,\cite{olierook2021bayesian} which uses a Bayesian approach to fuse lithostratigraphic field observations with aeromagnetic and gravity measurements to build 3D geological models without structural data. Such a model is fit for purpose for mineral exploration; however, the grade control requirements encountered in a mine production setting are generally more stringent and demand far greater precision and vertical resolution. In \cite{calcagno2008geological}, Calcagno et al.\ use cokriging \cite{ver1998constructing} to create a continuous 3D potential field to describe the geometry of the geology. The model input consists of the location of geological interfaces and orientation data. Subsequently, structural information such as the location and orientation of boundaries is extracted from isosurfaces and the gradient of the interpolated potential function. The state of the art \cite{maccormack-19} generally considers modelling as an open-loop process where the input data is complete and a static model is to be produced. In practice, data is usually harvested bench-by-bench in a piecemeal manner in open-pit mining.
Thus, there is a strong incentive in utilizing newly acquired data to improve both surface definitions and the existing model, to further understand the subterranean deposit below the current bench for mine planning and operational guidance. The missing link is a synergy between geochemical data, surface representation and the model --- and whether grade prediction performance can benefit from an incremental update strategy. These novel issues have not received much attention in earth and spatial science. In terms of where this paradigm fits, based on Nealen's survey article \cite{nealen2006physically}, a new category called geochemistry-based Bayesian deformable surface (GC-BDS) model is proposed to characterize the warping approach presented in this paper. \subsection{Motivation}\label{sec:motivation} To understand the importance of surface and boundary alignment, we first consider the implications of working with an inaccurate (or misleading) surface and its flow-on effects on subsequent modelling processes before formulating a Bayesian framework for surface warping with a view of improving surface integrity. As motivation, we illustrate how starting with a bad surface --- one that is not representative of the true geological boundary --- can impact on the block structure and inferencing ability of a grade estimation model. \begin{figure*}[!htb] \centering \includegraphics[width=140mm]{fig-warping-motivation1.pdf} \caption{Computational pipeline for the orebody grade estimation task. Block model spatial restructuring and inferencing both suffer from the negative effects of a surface that misrepresents the true geological boundary. 
The question marks serve as a reminder that the grade values of all the blocks need to be estimated.} \label{fig:warping-motivation1} \end{figure*} Figure~\ref{fig:warping-motivation1} illustrates the computational pipeline for a typical grade estimation problem where the main objectives are (i) obtain a compact block-based representation of the orebody with good boundary localization properties; (ii) estimate the grade value for chemical components of interest, predicting these values especially at locations where no measurements are available using geostatistical or machine learning techniques. The first objective requires changing the spatial structure of the model, partitioning the blocks as necessary (down to some predetermined, acceptable minimum block size) to closely follow the location and approximate the curvature of the geological boundary which the surface seeks to represent. In Fig.~\ref{fig:warping-motivation1}(top middle), we have a situation where the existing surface misrepresents the location of the actual boundary [which is not directly observable].\footnote{For illustrative purposes, Fig.~\ref{fig:warping-motivation1} shows a situation where the modelled boundary is grossly misplaced. In practice, the discrepancies that exist between the actual and initial modelled boundary are often explained by local differences due to inadequate sampling.} Consequently, the spatial restructuring algorithm \cite{leung2020mos} produces a block structure that is not faithful to the underlying boundary through no fault of its own. For the second objective, using a Gaussian Process (GP) learning and inferencing approach [described in Appendix~\ref{sect:inferencing-for-grade-estimation}], spatial variations (covariance functions) are learnt using samples drawn from an incorrect geological domain structure.
A major consequence, depicted in Fig.~\ref{fig:warping-motivation1}(bottom right), is that the inferenced values for the blocks are biased near the actual boundary due to under-estimation or over-estimation of the chemistry values, and the blocks are not properly decomposed to follow the shape of the boundary. These two factors lead to smearing in the predicted chemistry across the actual boundary which has practical implications for ore extraction and material blending in a production setting. \begin{figure*}[!htb] \centering \includegraphics[width=140mm]{fig-warping-motivation2.pdf} \caption{Rectification of the boundary through surface warping improves the spatial structure and inferencing ability of the grade block model.} \label{fig:warping-motivation2} \end{figure*} The main proposition of this paper is a Bayesian approach that corrects inaccuracies in a mesh surface through spatial warping which thus increases the positional integrity of the underlying boundary that it seeks to represent. Conceptually, the problem involves estimating the displacements to bring an existing surface into alignment with the true boundary based on the observed sample chemistry. The aim is to capture more precisely the shape and location of local features; this is depicted in Fig.~\ref{fig:warping-motivation2}(top left). The existing block structure is modified adaptively to align itself with the new warped surface. In Fig.~\ref{fig:warping-motivation2}(top center), the new block structure evidently follows the curvature of the new surface; however, some of the preexisting (subdivided) blocks from the earlier misplaced boundary still remain. A block merging algorithm proposed in \cite{leung2020mos} is used to coalesce fragmented blocks to produce the consolidated block structure shown in Fig.~\ref{fig:warping-motivation2}(top right). The GP inferencing procedure is applied once again to the new block structure.
The result shown in Fig.~\ref{fig:warping-motivation2}(bottom right) is free of blending or smearing artefacts. When the surface is accurate, the resultant model exhibits a clear contrast in the predicted grade values on either side of the actual boundary. Improving the reliability of these estimates ultimately enables better decision making, planning and ore extraction. To summarise, an ill-placed boundary can impact the block structure and grade estimation in significant ways. The boundary localization properties deteriorate when a block model is partitioned by an inaccurate surface. Incorrect geological domain classification resulting from a misleading surface can cause smearing to occur during inferencing where compositions are over-estimated or under-estimated near the actual boundary. \subsection{Contributions}\label{sect:contributions} In Section~\ref{sect:surface-warping}, the surface warping problem is formulated in a Bayesian framework and the MAP (maximum a posteriori) solution for node displacements estimation is presented; this serves to maximize the agreement between surfaces and geochemical observations from blast hole samples. In Section~\ref{sect:boundary-warping-fusion-workflow-validation}, local improvements are visually highlighted. An objective measure called \textit{r\textsubscript{2} cdf error score} is proposed and used for model validation to demonstrate an improvement in grade estimation performance resulting from spatial warping. \section{Surface warping}\label{sect:surface-warping} Surface warping may be framed as a Bayesian parametric inference problem where the objective is to estimate the required displacements (or spatial corrections) $\boldsymbol{\theta}\equiv \mathbf{d}\in\mathbb{R}^3$ to maximize the positional integrity of geological boundaries given a set of observations. 
In general terms, the observations consist of the location $\mathbf{x}\in\mathbb{R}^3$ and spatial extent $\boldsymbol{\delta}=[0,0,h]^T\in\mathbb{R}^3$ of the measurements, as well as the composition $\mathbf{c}\in\mathbb{R}^K$ of the sample determined by chemical assays. The prior information available is a reference geological structure $\mathcal{G}$ that determines the geozone classification ($g\in\mathbb{Z}$) of a sample, which indicates the geological domain to which it belongs. This prior embeds knowledge about the stratigraphic structure of the modelled region which is considered a faithful (unbiased) representation of the ground truth at large scales, but inaccurate at smaller scales ($\sim$5--20m) due to sparse sampling and local variation. Accordingly, applying Bayes' rule, the problem may be formulated as \begin{align} \argmax_{\mathbf{d}} P(\mathbf{d}\!\mid\! \mathbf{c},\mathbf{s})\label{eq:surface-warping-problem} \end{align} where spatial information is contained in $\mathbf{s}=[\mathbf{x},\boldsymbol{\delta}]^T\in\mathbb{R}^6$ and \begin{align} P(\mathbf{d}\!\mid\! \mathbf{c},\mathbf{s})&\propto P(\mathbf{c}\!\mid\!\mathbf{d},\mathbf{s})\, P(\mathbf{d}\!\mid\!\mathbf{s}) \label{eq:surface-warping-bayes-formula1}\\ &= \left(\sum_{g} P(\mathbf{c},g\!\mid\! \mathbf{d},\mathbf{s})\right) P(\mathbf{d}\!\mid\!\mathbf{s}) \label{eq:surface-warping-bayes-formula2}\\ &= \left(\sum_{g} P(\mathbf{c}\!\mid\! g,\mathbf{d},\mathbf{s}) P(g\!\mid\! \mathbf{d},\mathbf{s})\right) P(\mathbf{d}\!\mid\!\mathbf{s}) \label{eq:surface-warping-bayes-formula3}\\ &= \sum_{g} P(\mathbf{c}\!\mid\! g,\mathbf{d},\mathbf{s}) P(g\!\mid\! \mathbf{d},\mathbf{s}) P(\mathbf{d}\!\mid\!\mathbf{s}) \label{eq:surface-warping-bayes-formula4}\\ &\approx \sum_{g} P(\mathbf{c}\!\mid\! g) P(g, \mathbf{d}\!\mid\!
\mathbf{s})\label{eq:surface-warping-bayes-formula5} \end{align} Marginalization, conditional probabilities and conditional independence,\footnote{In our experience, using $P(\mathbf{c}\!\mid\! g,\mathbf{d},\mathbf{s})$ does not further increase performance since spatial attributes are already modelled by $P(g,\mathbf{d}\!\mid\!\mathbf{s})$.} $\mathbf{c}\perp\!\!\!\perp (\mathbf{d},\mathbf{s})\mid g$, are used in (\ref{eq:surface-warping-bayes-formula2}), (\ref{eq:surface-warping-bayes-formula3}) and (\ref{eq:surface-warping-bayes-formula5}), respectively. The final expression in (\ref{eq:surface-warping-bayes-formula5}) offers a clear interpretation for the posterior $P(\mathbf{d}\!\mid\! \mathbf{c},\mathbf{s})$. Specifically, $P(\mathbf{c}\!\mid\! g)$ denotes the likelihood of observing the chemical composition $\mathbf{c}$ in geozone $g$, whereas $P(g, \mathbf{d}\!\mid\! \mathbf{s})$ represents a spatial prior that considers the displacement and geozone likelihood given the sample location. This Bayesian network is described by the graphical model shown in Fig.~\ref{fig:graphical-model}. Even for this simple structure, there is tremendous scope for ingenuity. The following describes one possible implementation and reflects on some practical issues. \begin{figure*}[!htb] \centering \includegraphics[width=85mm]{fig-graphical-model.pdf} \caption{Graphical model for surface warping. The graph expresses the conditional dependence structure between the random variables: \textbf{c}=observed chemistry, \textbf{g}=geozone, \textbf{d}=displacement and \textbf{s}=spatial properties. The arrows represent conditional dependence of a target node on a source node.} \label{fig:graphical-model} \end{figure*} For readers new to geostatistics, a common pitfall is an attempt to model $P(\mathbf{c}\!\mid\! g)$ directly using raw chemical assay measurements and an affinity measure such as the Mahalanobis distance, $d^2(\mathbf{c}^\text{sample},\mathbf{c}^\text{class})$.
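To make the factorization concrete, the sum in (\ref{eq:surface-warping-bayes-formula5}) can be evaluated as a matrix--vector product over geozones, once for each candidate displacement. The following sketch is purely illustrative: the geozone count, probability values and candidate displacements are hypothetical, not values from this study.

```python
import numpy as np

# P(c|g): likelihood of the observed chemistry under each of 3 geozones
p_c_given_g = np.array([0.05, 0.80, 0.15])

# P(g,d|s): spatial prior for 3 candidate displacements (rows) x 3 geozones
p_gd_given_s = np.array([
    [0.9, 0.0, 0.1],   # d1 = stay put: sample remains mostly in geozone 0
    [0.1, 0.8, 0.1],   # d2 = small downward shift, largely into geozone 1
    [0.0, 0.9, 0.1],   # d3 = larger shift, fully inside geozone 1
])
candidates = np.array([[0.0, 0.0, 0.0],
                       [0.0, 0.0, -2.0],
                       [0.0, 0.0, -5.0]])

# Unnormalized posterior P(d|c,s) = sum_g P(c|g) P(g,d|s), one value per candidate
posterior = p_gd_given_s @ p_c_given_g
d_best = candidates[np.argmax(posterior)]
```

Here the third candidate wins because it places the sample entirely within the geozone most compatible with its chemistry; normalizing the posterior is unnecessary since only the argmax is required.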
As Filzmoser et al.\cite{filzmoser-hron-08} have pointed out, statistical analysis performed on compositional data\footnote{A key characteristic of compositional data is that they lie in the Aitchison Simplex \cite{filzmoser-hron-08}, which means the data is not Gaussian (or even symmetrically) distributed. An increase in a key component may cause another to decrease and the components sum to a constant.} without resorting to log-ratio transformation may lead to invalid or dubious interpretations \cite{garrett-17}. Even when the Mahalanobis distance metric can be legitimately applied following isometric log-ratio transformation \cite{greenacre-19}, it may still be inappropriate, as errors of the same magnitude may have different levels of significance depending on the grading thresholds. In grade estimation work, samples are routinely categorized based on composition. Thus, instead of $P(\mathbf{c}\!\mid\! g)$, a likelihood probability mass function $L(y(\mathbf{c})\!\mid\! g)$ is used by Lowe in \cite{rtgi-rtcma2018} where $y(\mathbf{c})$ represents a categorical label. These labels generally correspond to mineralogical groupings or `destination tags' since the excavated materials will be sorted eventually based on chemical and material properties.\footnote{For downstream ore processing, it is useful to know whether the material is hard or friable, lumpy or fine, viscous or powdery. Typically, 6 to 12 categorical labels, $y(\mathbf{c})$, are used. For our purpose, we focus more on sample chemistry. The criteria for HG (high grade) and LGA (low grade aluminous) iron ore, for instance, might be set at Fe $\ge$ 60\% and ($50\%\le \text{Fe} < 60\%$, Al\textsubscript{2}O\textsubscript{3} $\ge 3\%$), respectively. These parameters vary depending on the deposit and geozone.} In practice, there is a finite number of mineralogical groupings and geozones. Hence, $L(y(\mathbf{c})\!\mid\! 
g)$ is computed from a table with dimensions ($N_\text{class},N_\text{geozone}$) constructed using frequency counts applied to assay samples collected from exploration drillings. Here, $\mathbf{c}$ is observed, $y(\mathbf{c}): \mathbb{R}^K\rightarrow \mathbb{Z}$ is a deterministic mapping and $g$ is known. For the prior $P(g, \mathbf{d}\!\mid\! \mathbf{s}(\mathbf{x},\boldsymbol{\delta}))$, a proxy function $R(\mathbf{x}+\mathbf{d}, g, \boldsymbol{\delta})$ is used to compute the geozone and displacement likelihood in \cite{rtgi-rtcma2018}. This utilizes the a priori geological structure $\mathcal{G}$ to assess the feasibility of displacement $\mathbf{d}$. In particular, $R(\mathbf{x}+\mathbf{d}, g, \boldsymbol{\delta})$ determines the amount of overlap, $r\in[0,1]$, between geozone $g$ and an interval observation $\boldsymbol{\delta}$ of length $h$ at the proposed location $(\mathbf{x}+\mathbf{d})$. Alternatively, spatial correlation may be modelled using autocorrelation functions and random field simulation as demonstrated in \cite{gong2020stratigraphic} in regions where stratigraphic association can be reasonably inferred from dense drill hole samples. It is clear that a number of strategies can be used to find the optimal displacement in a hierarchical search space. For instance, a conditional random field may be used to impose connectivity and regularization constraints. For simplicity and speed, a set of candidate displacement points $\{\mathbf{d}_i\}$ may be chosen from a regular 3-D lattice in the vicinity of $\mathbf{x}$, viz., $\mathcal{L}_{\mathbf{x}}$. Assuming $\lvert \mathcal{L}_{\mathbf{x}}\rvert = N_\text{displacement}$ for all $\mathbf{x}$, the posterior would result in a table of size $(N_\text{sample},N_\text{displacement})$. The maximum a posteriori (MAP) estimate is given by (\ref{eq:surface-warping-max-posterior-estimate}); in the event of a tie, the solution with minimum $\lVert\mathbf{d}\rVert$ is chosen.
\begin{align} \mathbf{d}_\text{MAP}(\mathbf{x})&=\argmax_{\mathbf{d}} L(\mathbf{d}\!\mid\! y(\mathbf{c}),\mathbf{s}(\mathbf{x},\boldsymbol{\delta}))\notag\\ &= \argmax_{\mathbf{d}} \sum_g L(y(\mathbf{c})\!\mid\! g)R(\mathbf{x}+\mathbf{d}, g, \boldsymbol{\delta})\label{eq:surface-warping-max-posterior-estimate} \end{align} Diffusion flow techniques (based on the discrete Laplace--Beltrami operator \cite{botsch2010polygon}) may be applied to manifold surfaces to obtain a coherent displacement field where $\mathbf{d}_\text{MAP}(\mathbf{x})$ varies smoothly. However, dithering is often a simpler alternative. The solution is obtained as an aggregate average over a small neighbourhood, $\mathbf{x}+\boldsymbol{\epsilon}_i\in\mathcal{N}_\mathbf{x}$,\footnote{Another option is to treat $\mathcal{N}_\mathbf{x}$ as the barycentric cell or mixed Voronoi cell \cite{botsch2010polygon} in the 1-ring neighbourhood of $\mathbf{x}$.} with higher weights $w(\mathbf{x}_i)$ given to nearby estimates and to displacements aligned with the surface normal $\mathbf{n}_\mathbf{x}$. \begin{align} \mathbf{d}'_\text{MAP}(\mathbf{x})=\sum_{\mathbf{x}_i\in\mathcal{N}_\mathbf{x}} w(\mathbf{x}_i)\cdot \mathbf{d}_\text{MAP}(\mathbf{x}_i)\label{eq:surface-warping-map-aggregate} \end{align} For instance, setting $w(\mathbf{x}_i)$ to $\text{proximity}(\mathbf{x},\mathbf{x}_i)\times\left|\left<\mathbf{n}_\mathbf{x},\mathbf{d}_\text{MAP}(\mathbf{x}_i)\right>\right|$ provides local smoothing and discourages movement parallel\footnote{Tangential movements do not effectively compensate for displacement errors which are perpendicular to the surface.} to the surface, which is not productive.\footnote{In our implementation, IDW (inverse distance weights) are used for proximity($\mathbf{x},\mathbf{x}_i$). Another option is to use the normalized exponential (softmax) function $\frac{e^{-\beta z_i}}{\sum_{i\in\mathcal{N}}e^{-\beta z_i}}$ where $z_i=\lVert\mathbf{x}-\mathbf{x}_i\rVert$.
For the direction penalty term, $1 - (1 - {\cos}^2\theta)^2$ is used in place of $\left|\left<\mathbf{n}_\mathbf{x},\mathbf{d}_\text{MAP}(\mathbf{x}_i)\right>\right|$, where $\cos\theta = \left<\mathbf{n}_\mathbf{x},\mathbf{d}_\text{MAP}(\mathbf{x}_i)\right>$.} In instances where stratigraphic forward modelling (SFM) hints are available, $\mathbf{n}_\mathbf{x}$ may be guided instead by directional projections based on the deposition and evolution of sedimentary facies within a stratigraphic framework \cite{huang2015recent}. Given a set of mesh surface vertices $\mathbf{x}_q\in\mathcal{S}$, their corrected positions after surface warping are given by \vspace{-1mm}\begin{align} \mathbf{x}'_q=\mathbf{x}_q-\mathbf{d}'_\text{MAP}(\mathbf{x}_q).\label{eq:surface-warping-vertices-correction} \end{align} Equation (\ref{eq:surface-warping-vertices-correction}) simply applies the displacement-error corrections to surface vertices. Computing this expression usually requires spatial interpolation as $\mathbf{d}_\text{MAP}(\mathbf{x})$ is initially evaluated at sparse locations where assay information (geochemical evidence) is available. In areas where spatial resolution is low, mesh surface triangles may be subdivided to increase point density. The classification function, $y(\mathbf{c}): \mathbb{R}^K\rightarrow \mathbb{Z}$, and more generally $P(\mathbf{c}\!\mid\! g)$, may be learned \cite{leung2021warpml} using supervised or unsupervised techniques when the rules for destination tags (mineralogical grouping) are inadequate or unavailable. \subsection{Algorithm}\label{sect:spatial-warping-algorithm} The Bayesian surface warping algorithm may be summarised in a series of steps. The description here is immediately followed by an explanation and illustration of the key steps in Sec.~\ref{sect:illustration}.
\begin{enumerate} \item[]\hspace{-9mm}Given a categorical mapping from chemical assays to material types, $y(\mathbf{c}): \mathbb{R}^K\rightarrow \mathbb{Z}$, \item Compute $L(y(\mathbf{c})\!\mid\! g)\in\mathbb{R}^{N_\text{class}\times N_\text{geozone}}$ using training samples $\{(\mathbf{c}_j, g_j)\}_j$ from exploration holes\label{algo:step1} \item[]\hspace{-9mm}For each blast hole sample $i=1,\ldots, N_\text{sample}$: \item Assign categorical label $y(\mathbf{c}_i)\in\{1,\ldots,N_\text{class}\}$ to each sample\label{algo:step2} \item Compute $L(y(\mathbf{c}_i)\!\mid\! g)$ across all $N_\text{geozone}$ geozones\label{algo:step3} \item Compute geozone-displacement likelihood, $R(\mathbf{x}_i+\mathbf{d}_i, g,\boldsymbol{\delta}_i)$\label{algo:step4} \item[]\hspace{-9mm}For each surface vertex $\mathbf{x}_q$ and candidate displacement vector $\mathbf{d}_k$ from $\mathcal{L}_{\mathbf{x}_q}, k\in\{1,\ldots,N_\text{displacement}\}$: \item Interpolate the displacement field\label{algo:step5} \begin{enumerate} \item Find the $M$ nearest\footnote{Alternatively, use samples inside a local geodesic ball, e.g. within the 1-ring neighbourhood \cite{botsch2010polygon}.} samples to $\mathbf{x}_q$\label{algo:step5a} \item Compute $L_{m,k}\equiv L(\mathbf{d}_{m,k}\!\mid\! y(\mathbf{c}_m),\mathbf{s}_m(\mathbf{x}_m,\boldsymbol{\delta}_m)) = \sum_g L(y(\mathbf{c}_m)\!\mid\! g)R(\mathbf{x}_m+\mathbf{d}_{m,k}, g, \boldsymbol{\delta}_m)$\\to obtain a table where $m\in\{1,\ldots,M\}$ and $k\in\{1,\ldots, N_\text{displacement}\}$.\label{algo:step5b} \item Normalize each row s.t. $\max_k L(\mathbf{d}_{m,k}\!\mid\!
y(\mathbf{c}_m),\mathbf{s}_m(\mathbf{x}_m,\boldsymbol{\delta}_m)) = 1$ for each $m$\label{algo:step5c} \item Compute weights incorporating proximity and directional preference:\\$w_{m,k}(\mathbf{x}_q) \propto \text{proximity}(\mathbf{x}_q,\mathbf{x}_m)\times\left|\left<\mathbf{n}_{\mathbf{x}_q},\mathbf{d}_{m,k}\right>\right|$ and $\sum w_{m,k} = 1$\label{algo:step5d} \item Compute $\overline{L}(\mathbf{d}_{k}\!\mid\! y(\mathbf{c}_m),\mathbf{s}_m) =\sum_m w_{m,k} L_{m,k}$\label{algo:step5e} \item Let $\mathbf{d}_\text{MAP}(\mathbf{x}_q)=\mathbf{d}_{k^*}$ where $k^*=\argmax_k \overline{L}(\mathbf{d}_{k}\!\mid\! y(\mathbf{c}_m),\mathbf{s}_m)$\label{algo:step5f} \end{enumerate} \item Apply smoothing to the MAP displacement estimates. For example,\label{algo:step6} \begin{enumerate} \item Compute inverse distance weights $w_{p,q}$ for neighbour points $\mathbf{x}_p\in\mathcal{N}_{\mathbf{x}_q}$ s.t. $\sum w_{p,q}=1$ \item Compute $\mathbf{d}'_\text{MAP}(\mathbf{x}_q)=\sum_{p} w_{p,q}\cdot \mathbf{d}_\text{MAP}(\mathbf{x}_p)$ \end{enumerate} \item Apply a correction to each surface vertex to minimize the discrepancy\label{algo:step7} \begin{enumerate} \item Update $\mathbf{x}'_q\leftarrow \mathbf{x}_q-\mathbf{d}'_\text{MAP}(\mathbf{x}_q)$\label{algo:step7a} \end{enumerate} \item Post-processing step: resolve conflicts, e.g. any surface patch intersection that may arise. \end{enumerate} \subsection{Illustration}\label{sect:illustration} Step \ref{algo:step1} of the algorithm presented in Sec.~\ref{sect:spatial-warping-algorithm} produces a table for $L(y(\mathbf{c})\mid g)$. The example shown in Fig.~\ref{fig:dest-tags-observation-likelihood-given-geozone} has size $N_\text{geozone}\!=\!24$, $N_\text{class}\!=\!7$. The exact definitions and geochemical mapping, $y(\mathbf{c})$, used will vary depending on the site.
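Step \ref{algo:step1} amounts to a per-geozone frequency count. A minimal sketch follows, using hypothetical labels and a deliberately small table ($N_\text{class}=3$, $N_\text{geozone}=2$) rather than the site values:

```python
import numpy as np

n_class, n_geozone = 3, 2
# Training pairs (y(c_j), g_j) from exploration-hole samples (hypothetical)
pairs = [(0, 0), (0, 0), (1, 0), (0, 1), (2, 1), (2, 1), (2, 1)]

# Frequency counts: rows = class labels, columns = geozones
counts = np.zeros((n_class, n_geozone))
for y, g in pairs:
    counts[y, g] += 1.0

# Normalize each column so that L[:, g] is a pmf over class labels given g
L = counts / counts.sum(axis=0, keepdims=True)
```

At lookup time (steps \ref{algo:step2}--\ref{algo:step3}), evaluating $L(y(\mathbf{c}_i)\mid g)$ is then a constant-time table access.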
The main point to convey is the probabilistic association between chemistry and geozone; thus, in general, an assay sample with destination tag $y(\mathbf{c})$ has a plausible connection with multiple geozones. For instance, the HG tag has strong affinity with mineralized geozones (\textit{M}) but its association with hydrated domains (\textit{H}) cannot be discounted based on chemical evidence alone. This multivalent proposition is emphasized in Fig.~\ref{fig:chemistry-likelihood-latex} where each dot corresponds to a single blasthole assay observation $y(\mathbf{c}_i)$ (computed in step \ref{algo:step2}) and a given location is often lit up in multiple geozones across different panels. This represents a spatial description of the evaluation in step \ref{algo:step3}. The panels in Fig.~\ref{fig:chemistry-likelihood-latex} reveal spatial correlation with certain orebody structures and varying response (probability) associated with the dolerite dyke (\textit{D}) in row 2 and mineralized geozones (\textit{M}) in row 3. \begin{table}[!htb] \setlength\tabcolsep{0pt} \begin{center} \scriptsize \begin{tabular}{c} \includegraphics[width=95mm,trim={0mm 0mm 0mm 0mm},clip]{fig-dest-tags-observation-likelihood-given-geozone.pdf}\\ Destination definitions --- HG (high grade): $\text{Fe}\ge 60$ and $\text{Al}_2\text{O}_3 < 6$, BL (blended): $\text{Fe}\in [55,60)$, LG (low grade): $\text{Fe}< 55$;\\ +S (siliceous: $\text{Al}_2\text{O}_3 < 3$), +A (aluminous: $\text{Al}_2\text{O}_3 \in[3,6)$), W\textsubscript{1} (waste: $\text{Fe}< 50$ and $\text{Al}_2\text{O}_3 < 6$), W\textsubscript{2,3} (waste: $\text{Al}_2\text{O}_3 \ge 6$)\\ Geozone definitions: \textit{H}=hydrated, \textit{M}=mineralized, \textit{U}=unmineralized, \textit{C}=canga, \textit{B}=detrital, \textit{D}=dolerite.
\end{tabular} \end{center} \captionof{figure}{Likelihood table of destination tag (geochemical mapping) given geozone, $L(y(\mathbf{c})\mid g)$} \label{fig:dest-tags-observation-likelihood-given-geozone} \end{table} \begin{figure*}[!htb] \centering \includegraphics[width=125mm]{fig-chemistry-likelihood-latex.pdf} \caption{Evaluation of $L(y(\mathbf{c}_i)\mid g)$ across geozones for many samples $\mathbf{c}_i$ within a spatial region of interest} \label{fig:chemistry-likelihood-latex} \end{figure*} Step~\ref{algo:step4} of the algorithm starts incorporating spatial information to disambiguate between geozone candidates whose chemistry is consistent with the observed assay sample. In concert, step~\ref{algo:step5} performs displacement estimation to minimize geochemical discrepancies with respect to the existing boundary represented by the surface. Fig.~\ref{fig:geochemistry-spatial-likelihood} provides a motivating example for both endeavours. First of all, if an assay sample is chemically consistent with the geozone it is currently situated in, there is no need for any spatial correction. A correction is only required when the observed chemistry is incongruent with the geochemical characteristics of the assumed domain at the measured location. \begin{figure*}[!htb] \centering \includegraphics[width=140mm]{fig-geochemistry-spatial-likelihood.pdf} \caption{Displacement estimation using $L(y(\textbf{c}_i)\mid g) R(\mathbf{x}_i+\mathbf{d}_i,g,\boldsymbol{\delta}_i)$} \label{fig:geochemistry-spatial-likelihood} \end{figure*} Figure~\ref{fig:geochemistry-spatial-likelihood}(a) shows an out-of-place (low grade Fe) observation within a mineralized domain. This is reflected numerically by $L(y(\mathbf{c}_i)\mid G_1) \ll L(y(\mathbf{c}_i)\mid G_2)$, where $G_1\in M$ and $G_2\in U$ belong to mineralized and unmineralized domains, respectively. Fig.~\ref{fig:geochemistry-spatial-likelihood}(b) elaborates on step~\ref{algo:step5b}: $L_{m,k}\equiv L(\mathbf{d}_{m,k}\!\mid\!
y(\mathbf{c}_m),\mathbf{s}_m(\mathbf{x}_m,\boldsymbol{\delta}_m))$ for just one sample ($m=1$) and considers three displacement hypotheses $H_k$ with displacement vectors $\mathbf{d}_k\equiv\mathbf{d}_{m,k}, k\in\{1,...,K\}$ where $K=3$. The geozone-displacement likelihood $R(\mathbf{x}_m+\mathbf{d}_{m,k}, g, \boldsymbol{\delta}_m)$ is determined by the fraction of sample interval overlap with the a priori geozone structure. The example in Fig.~\ref{fig:geochemistry-spatial-likelihood}(b) shows that the displacement hypothesis $H_2$ has the greatest numerical support. Intuitively, it represents the translation required to move the low grade sample completely into the unmineralized domain, $G_2$. Although hypothesis $H_3$ also provides a feasible solution, the displacement is greater and therefore it is not preferred. Based on these principles, Fig.~\ref{fig:geochemistry-spatial-likelihood}(c) shows the optimal displacement estimated for $M=2$ samples. Put simply, step~\ref{algo:step5e} roughly corresponds to Fig.~\ref{fig:geochemistry-spatial-likelihood}(d) where the displacement computed for a surface vertex $\mathbf{x}_q$ is essentially a weighted average of the estimated displacement from its $M$ nearest neighbours. Step~\ref{algo:step6} typically produces a locally smooth displacement error field. For an arbitrary vertex $\mathbf{x}_q$, the optimal solution, $\mathbf{d}'_\text{MAP}(\mathbf{x}_q)$, appears as a peak (see asterisk) in the parameter space in Fig.~\ref{fig:displacement-error-extrapolated}. Finally, the surface vertices are adjusted to complete step~\ref{algo:step7}; this may be seen as additional smoothing or extrapolation on top of Fig.~\ref{fig:geochemistry-spatial-likelihood}(d).\footnote{In step~\ref{algo:step7a}, the update equation $\mathbf{x}'_q\leftarrow \mathbf{x}_q-\mathbf{d}'_\text{MAP}(\mathbf{x}_q)$ contains a minus sign because the displacement error estimation process considers moving samples around the initial boundary.
In practice, we need to do the opposite, viz. move the boundary with respect to the samples which remain fixed at their measured locations.} \begin{figure*}[!htb] \centering \includegraphics[width=65mm]{fig-displacement-error-extrapolated.pdf} \caption{The likelihood associated with each displacement vector in the parameter space. The optimal solution (see asterisk) represents the consensus amongst sample estimates $\mathbf{d}_\text{MAP}(\mathbf{x}_p)$ in the neighborhood of $\mathbf{x}_q$, $\mathcal{N}_{\mathbf{x}_q}$} \label{fig:displacement-error-extrapolated} \end{figure*} \subsection{Complexity}\label{sect:complexity} The main computational complexity lies in step~\ref{algo:step5} of the algorithm. Looking at $L_{m,k} = \sum_g L(y(\mathbf{c}_m)\!\mid\! g)R(\mathbf{x}_m+\mathbf{d}_{m,k}, g, \boldsymbol{\delta}_m)$, since $L(y(\mathbf{c}_m)\!\mid\! g)$ amounts to an $O(1)$ lookup operation, the per-sample complexity comes from $R(\mathbf{x}_m+\mathbf{d}_{m,k}, g, \boldsymbol{\delta}_m)$ where sample $m$ is fixed and $k\in\{1,\ldots,N_\text{displacement}\}$ varies over the displacement search space. Assuming a rectilinear, uniformly quantized search space over a lattice $\mathcal{L}\subset\mathbb{R}^3$ and $n$ grid points in each linear dimension, an exhaustive brute-force search (see Fig.~\ref{fig:complexity}a) is $O(n^3)$. \begin{figure*}[!htb] \centering \includegraphics[width=140mm]{fig-complexity.pdf} \caption{Displacement search space exploration strategies: (a) brute-force, (b) subject to autoregressive or motion predictive modelling constraints, (c) hierarchical approach.} \label{fig:complexity} \end{figure*} Depending on the surface geometry and sampling density, if the displacement estimation process is amenable to autoregressive or motion predictive modelling \cite{elnagar1998motion,kim2014robust}, complexity drops to $O(n^2)$. 
As an example, if the optimal displacements at three points in the vicinity of $\mathbf{x}_m$ are represented by the vertices of the triangle in the solution space in Fig.~\ref{fig:complexity}(b) and the cost function is locally convex, then the search may be constrained to the triangular region $\Omega\subset\mathbb{R}^2$. Although this paper does not mandate a specific implementation, it is worth noting that a hierarchical search strategy (see $k$-step successive refinement in Fig.~\ref{fig:complexity}c or \cite{konrad2005}) has an approximate complexity of $O(\log n)$ since a $(2^3+1)$-point search is conducted $k$ times, where $k\approx \log_2 n$. \section{Performance evaluation}\label{sect:boundary-warping-fusion-workflow-validation} The benefits of spatial warping are first demonstrated; this is followed by results from a large-scale validation experiment. \begin{figure*}[!htb] \centering \includegraphics[width=140mm]{fig-surface-warping.pdf} \caption{Surface warping performed on a mineralization base surface. (a) original mesh surface, (b) warped surface, (c) map showing where the surface has risen and sunk after warping. (d)--(e) Assays rendered above the original (resp. warped) surface are predominantly high-grade; (f) displacement field obtained through surface warping (an equivalent high resolution display is shown in Fig.~\ref{fig:displacement-field})} \label{fig:surface-warping} \end{figure*} \subsection{Local corrections due to spatial warping}\label{sect:local-correction-spatial-warping} Figure~\ref{fig:surface-warping} provides an overview of the surface warping result for a mineralization base surface where high grade material ideally sits above the boundary. The bottom panels illustrate the displacement field and highlight changes in elevation by superimposing the surfaces before and after warping.
The right-hand panels in the top and middle rows show the mineral (Fe) grade of the assay samples situated above the original and warped surfaces. \begin{figure*}[!htb] \centering \includegraphics[width=125mm]{fig-displacement-field.pdf} \caption{Changes to the surface: (a) signed distance visualization obtained from (b) vertex displacement field; (c)--(d) magnified view of the displacement field in two sub-regions.} \label{fig:displacement-field} \end{figure*} Figure~\ref{fig:displacement-field}(a) shows the topographic changes to the warped surface. It conveys the same information as Fig.~\ref{fig:surface-warping}(c) albeit at higher resolution. This signed distance visualization is generated using a multiscale point cloud comparison algorithm \cite{lague2013accurate} which takes the original and warped surface vertices as input and displays elevation differences as a colour map. Figure~\ref{fig:displacement-field}(b) and insets show the raw displacement field obtained directly from the warping process as a quiver plot. \begin{figure*}[!htb] \centering \includegraphics[height=78mm,width=125mm]{fig-surface-warping-differences-magnified.pdf} \caption{Fe grade of assay samples situated above the (top) original and (bottom) warped `min\_base' surface.} \label{fig:surface-warping-differences-magnified} \end{figure*} Figure~\ref{fig:surface-warping-differences-magnified} examines the effects of surface warping in more detail. It provides a magnified view of the rectangular region shown in Fig.~\ref{fig:surface-warping}(d) and (e). As expected, areas indicated by arrows have contracted after surface warping. This results in the correct behaviour whereby non-mineralized samples have vanished below the mineralization boundary, which is implicitly represented by the warped surface. The white lines indicate inclusive behaviour (or areas of expansion) where mineralized samples have risen above the mineralization boundary following surface warping.
Overall, local delineation between high-grade and low-grade material has improved. The surface has effectively been pushed down to include more mineralized (red) samples and lifted up to exclude more low-grade (yellow) samples. Although the assay samples are coloured by iron grade alone in this illustration, it is worth bearing in mind that ore grade is assessed in practice as a function of multiple chemical components and this often includes Al\textsubscript{2}O\textsubscript{3} and other trace elements. \begin{figure*}[!htb] \centering \includegraphics[width=125mm]{fig-surface-sections-with-assays.pdf} \caption{Surface cross-sections taken at (a) $400\pm 5$m and (b) $440\pm 5$m show better delineation between waste and high-grade samples after warping.} \label{fig:surface-sections-with-assays} \end{figure*} Figure~\ref{fig:surface-sections-with-assays} presents an alternative view. Surface cross-sections (the shell being visualized as black ribbons) are shown together with assay samples (coloured by grade) at two different elevations. Local changes are indicated by the arrows. The original surface is inaccurate at (a.1) as it excludes certain high-grade samples whereas in (a.2), (a.3) and (b.4) waste samples are included inside the boundary. Evidently, the warped surface provides better delineation between waste and high-grade samples as the boundary encircles the waste; this is especially noticeable at (b.4) in the warped surface. These observations can be verified quantitatively in Table~\ref{tab:samples-above-below-original-and-warped-surface} which demonstrates more effective separation of high grade and waste samples with warping.\footnote{Note that the dominant samples (HG/BL above the surface, and LG/W below the surface) do not quite reach 100\%. In part, this is due to some assay samples being taken from hole intervals that span across mineralized and non-mineralized geozones. 
Other surfaces that further delineate HG and W materials are not considered for the purpose of this evaluation.} \begin{table}[h] \begin{center} \small \setlength\tabcolsep{4pt} \caption{Assay samples above and below the original and warped surfaces, categorised by High grade (HG), Blended (BL) siliceous and aluminous, Low grade (LG) and Waste (W).}\label{tab:samples-above-below-original-and-warped-surface} \begin{tabular}{|l|p{12mm}|p{6mm}p{6mm}|p{6mm}p{6mm}|p{6mm}p{6mm}p{6mm}|c|c|}\hline \multicolumn{11}{|c|}{Samples located above surface}\\ \hline Surface & HG & BLS & BLA & LGS & LGA & W\textsubscript{1} & W\textsubscript{2} & W\textsubscript{3} & $\frac{\text{(HG+BL)}}{\text{(HG+BL+LG+W)}}$ & $\frac{\text{(HG+BL)}}{\text{(HG+BL+W)}}$ \\ \hline \textbf{original} & 12809 & 1609 & 3127 & 714 & 884 & 2588 & 1899 & 113 & 73.8\% & 79.2\% \\ \textbf{warped} & 13421 & 1730 & 3286 & 710 & 975 & 2194 & 1968 & 145 & 75.5\% & 81.0\% \\ \hline change & ++++++ & + & ++ & & + & -\,-\,-\,- & + & & +1.7\% & +1.8\% \\ \hline \multicolumn{11}{|c|}{Samples located below surface}\\ \hline Surface & HG & BLS & BLA & LGS & LGA & W\textsubscript{1} & W\textsubscript{2} & W\textsubscript{3} & $\frac{\text{(LG+W)}}{\text{(HG+BL+LG+W)}}$ & $\frac{\text{(W)}}{\text{(HG+W)}}$ \\ \hline \textbf{original} & 1396 & 723 & 553 & 513 & 558 & 3874 & 876 & 223 & 69.3\% & 78.0\% \\ \textbf{warped} & 783 & 602 & 394 & 518 & 468 & 4268 & 806 & 191 & 77.8\% & 87.1\% \\ \hline change & -\,-\,-\,-\,-\,- & - & -\,- & & - & ++++ & - & & +8.5\% & +9.1\% \\ \hline \end{tabular} \end{center} \end{table} \subsection{Validation experiment}\label{sect:validation-experiment} To provide an objective evaluation, an end-to-end validation procedure known as r\textsubscript{2} spatial reconciliation is applied to assess the potential benefits of the proposed scheme where surface warping, block model restructuring and interval GP grade inferencing are applied, relative to a baseline resource model where none of these 
are used. The comparison requires computing r\textsubscript{2} values or ratios of (grade-block average)/(model predicted value) for each respective model and a chemical of interest, where a ``grade-block'' is an industry term that refers to regions with fairly constant composition and a typical volume ranging from 625\,m\textsuperscript{3} to 145,000\,m\textsuperscript{3}. These grade-blocks are marked with a destination tag for mining excavation purposes based on material types and/or the estimated grades. The grade-block averages (for Fe, SiO\textsubscript{2} etc.) are computed by geologists using the blast hole assays contained within the grade-block boundaries. The corresponding model predictions are volume-weighted averages of the GP inferenced mean grade values calculated over all blocks (perhaps numbered in the tens, hundreds or thousands) that intersect with each grade-block.\footnote{Fair sampling --- whether the number of samples taken is adequate and representative of the geology --- is an important consideration from the viewpoints of reliability and performance evaluation. In reality, suboptimal sampling does occur, particularly in low-grade regions where the cost of extra sampling outweighs the benefit of knowing more about a waste zone with zero profit potential. In any event, the grade-block averages are as close to the ground truth as one can possibly attain.} Table~\ref{tab:r2-reconciliation-raw-data} shows the raw data for 5 grade-blocks and r\textsubscript{2} values computed for the proposed and reference model. Each grade-block is identified by the pit, bench and destination-tag. Only a snippet of a large table is shown. The dest-tags\footnote{HG = high grade, WH = waste/hydrated, BGA = blended aluminous, LGS = low grade siliceous.} represent a classification based on average grade-block composition.
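The r\textsubscript{2} computation described above can be sketched for a single grade-block and one analyte. The block volumes, GP means and grade-block average below are hypothetical values chosen for illustration only:

```python
# Geologist-computed grade-block Fe average from contained blast hole assays
gb_average_fe = 63.6

# Model blocks intersecting this grade-block: (intersection volume m^3, GP mean)
blocks = [(500.0, 64.1), (250.0, 62.9), (125.0, 63.0)]

# Volume-weighted average of the GP inferenced means over intersecting blocks
total_volume = sum(v for v, _ in blocks)
model_average_fe = sum(v * m for v, m in blocks) / total_volume

# r2 ratio: values near 1 indicate good spatial reconciliation
r2_fe = gb_average_fe / model_average_fe
```

In this toy case the volume-weighted model average works out to 63.6, so r\textsubscript{2} is close to 1, indicating good agreement between the model prediction and the grade-block average.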
For validation, we will consider two pits (A and B) from a Pilbara iron ore mine and five benches, each of height 10m, with a base elevation from 70m to 110m in 10m increments. In Table~\ref{tab:r2-reconciliation-raw-data}, bench 90 extends from a height of 90m to 100m. Accordingly, the grade-blocks used for evaluation are restricted to this z-interval. Model performance will be evaluated in `intra-bench' and `predictive' modes. The former, indicated by RL\textsubscript{90} for instance, allows data down to a minimum elevation of 90m to be used during modelling (incl. GP training). Evaluation of an RL\textsubscript{90} model on bench 90 indicates how well a model interpolates the assay data. The latter, indicated by RL\textsubscript{100}, permits only data down to 100m to be used during modelling. Evaluation of an RL\textsubscript{100} model on unseen data from bench 90 focuses on a model's look-ahead (generalization and prediction) capability, viz. how well it vertically extrapolates the assay data. These differences are illustrated in Fig.~\ref{fig-intra-vs-inter-prediction}. \begin{figure*}[!htb] \centering \includegraphics[width=140mm]{fig-intra-vs-inter-prediction.pdf} \caption{Model evaluation inferencing modes.
(Left) bench-below\,/\,forward prediction, (right) intra-bench estimation mode.} \label{fig-intra-vs-inter-prediction} \end{figure*} \begin{table}[h] \begin{center} \small \setlength\tabcolsep{4pt} \caption{r\textsubscript{2} spatial reconciliation: excerpt of raw data showing the grade-block (gb) and model predicted averages and normalized tonnage associated with 5 grade blocks.}\label{tab:r2-reconciliation-raw-data} \begin{tabular}{|l|r|p{7mm}p{9mm}|p{8mm}|p{7mm}p{9mm}|p{8mm}|p{7mm}p{9mm}|p{8mm}|}\hline \begin{tabular}{@{}l@{}}Pit\,/\,bench\,/\\ blast\#\,/\,dest-tag\end{tabular} & tonne\% & \begin{tabular}{@{}c@{}}Fe\\ (gb)\end{tabular} & \begin{tabular}{@{}c@{}}Fe\\ (model)\end{tabular} & \begin{tabular}{@{}c@{}}\textbf{Fe}\\ \textbf{r\textsubscript{2}}\end{tabular} & \begin{tabular}{@{}c@{}}SiO\textsubscript{2}\\ (gb)\end{tabular} & \begin{tabular}{@{}c@{}}SiO\textsubscript{2}\\ (model)\end{tabular} & \begin{tabular}{@{}c@{}}\textbf{SiO\textsubscript{2}}\\ \textbf{r\textsubscript{2}}\end{tabular} & \begin{tabular}{@{}c@{}}Al\textsubscript{2}O\textsubscript{3}\\ (gb)\end{tabular} & \begin{tabular}{@{}c@{}}Al\textsubscript{2}O\textsubscript{3}\\ (model)\end{tabular} & \begin{tabular}{@{}c@{}}\textbf{Al\textsubscript{2}O\textsubscript{3}}\\ \textbf{r\textsubscript{2}}\end{tabular}\\ \hline \multicolumn{11}{|c|}{Proposed model}\\ \hline A\,/\,90\,/\,1\,/\,HG13 & 1.47028 & 63.558 & 63.679 & 0.998 & 2.281 &\ 2.219 & 1.027 & 1.977 & 1.892 & 1.045\\ A\,/\,90\,/\,1\,/\,WH10 & 0.90878 & 47.969 & 54.935 & 0.873 & 25.786 & 14.664 & 1.758 & 1.762 & 2.022 & 0.871\\ A\,/\,90\,/\,3\,/\,HG23 & 1.25085 & 62.237 & 59.237 & 1.050 & 3.834 &\ 7.195 & 0.532 & 2.011 & 1.758 & 1.143\\ A\,/\,90\,/\,399\,/\,BGA9 & 0.73250 & 56.618 & 55.832 & 1.014 & 8.807 &\ 9.748 & 0.903 & 3.034 & 3.228 & 0.940\\ A\,/\,90\,/\,5\,/\,LGS40 & 0.95260 & 54.531 & 52.953 & 1.029 & 9.440 & 10.383 & 0.909 & 6.853 & 7.581 & 0.904\\ \hline \multicolumn{11}{|c|}{Reference model}\\ \hline A\,/\,90\,/\,1\,/\,HG13 & 
1.53145 & 63.558 & 62.942 & 1.009 & 2.281 &\ 2.640 & 0.863 & 1.977 & 2.098 & 0.942\\ A\,/\,90\,/\,1\,/\,WH10 & 0.89634 & 47.969 & 56.863 & 0.843 & 25.786 & 11.948 & 2.158 & 1.762 & 2.048 & 0.860\\ A\,/\,90\,/\,3\,/\,HG23 & 1.13243 & 62.237 & 63.299 & 0.983 & 3.834 &\ 2.434 & 1.575 & 2.011 & 2.061 & 0.975 \\ A\,/\,90\,/\,399\,/\,BGA9 & 0.63901 & 56.618 & 51.399 & 1.101 & 8.807 & 16.487 & 0.534 & 3.034 & 3.301 & 0.919 \\ A\,/\,90\,/\,5\,/\,LGS40 & 0.94149 & 54.531 & 56.785 & 0.960 & 9.440 &\ 8.183 & 1.153 & 6.853 & 5.719 & 1.198 \\ \hline \end{tabular} \end{center} \end{table} In order to convey useful information for large-scale performance evaluation, we propose using an r\textsubscript{2} error score. First, the r\textsubscript{2} values associated with bench $z$ and an RL\textsubscript{\textit{h}} model (where $h=z$ in intra-bench mode, or $h=z+10$ in predictive mode) are sorted in increasing order and cumulative tonnage percentages are computed. This produces an r\textsubscript{2} cumulative distribution function (cdf), an example of which is shown in Fig.~\ref{fig-obk-reconciliation-r2-cdf}(a). An $r_2$ value less than 1 indicates over-estimation by the model; conversely, a value greater than 1 indicates under-estimation w.r.t. the grade-blocks. In a perfect scenario where there is zero discrepancy between the grade-blocks and model predicted values, the cdf curve becomes a step function that transitions from 0\% to 100\% at an r\textsubscript{2} value of 1. Hence, adding the area below the curve for $r_2 < 1$ to the area above the curve for $r_2 \ge 1$ provides an aggregate error measure (a performance statistic) of a model for a given pit, bench and chemical. The graphs also give qualitative insight. For instance, the left-leaning red curve in Fig.~\ref{fig-obk-reconciliation-r2-cdf}(f) provides evidence of bias, viz. the reference model tends to over-estimate the Al\textsubscript{2}O\textsubscript{3} grade in pit B for bench 90.
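For a weighted empirical distribution, the area below the cdf for $r_2<1$ plus the area above it for $r_2\ge 1$ equals the tonnage-weighted mean of $|r_2-1|$, which gives a compact way to sketch the score. The sketch below is illustrative (unit-normalized; the scores tabulated in this paper appear to be expressed on a percentage-tonnage scale):

```python
# Sketch of the r2 error score under an assumed data layout. For a weighted
# empirical distribution, (area below the cdf for r2 < 1) plus (area above
# the cdf for r2 >= 1) equals the tonnage-weighted mean of |r2 - 1|.

def r2_error_score(r2_values, tonnages):
    total = sum(tonnages)
    return sum((t / total) * abs(r - 1.0) for r, t in zip(r2_values, tonnages))

# Fe r2 values and tonnage percentages taken from the proposed-model rows of
# the raw-data table, for illustration:
score = r2_error_score([0.998, 0.873, 1.050, 1.014, 1.029],
                       [1.47028, 0.90878, 1.25085, 0.73250, 0.95260])
```

A perfect model (all $r_2=1$) yields a score of zero, matching the step-function cdf described above.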
\begin{figure*}[!t] \centering \includegraphics[width=140mm]{fig-obk-reconciliation.pdf} \caption{Evaluation of orebody grade estimation RL\textsubscript{90} model using grade blocks from two pits and bench 90. The r\textsubscript{2} cumulative distribution functions associated with pit A (top) and pit B (bottom) are shown from left to right for Fe, SiO\textsubscript{2} and Al\textsubscript{2}O\textsubscript{3}.} \label{fig-obk-reconciliation-r2-cdf} \end{figure*} \begin{table}[h] \begin{center} \footnotesize \setlength\tabcolsep{4pt} \caption{r\textsubscript{2} spatial reconciliation error statistics.}\label{tab:r2-reconciliation-stats} \begin{tabular}{|c|c|ccc|ccc|ccc|ccc|}\hline & & \multicolumn{6}{c|}{Pit A} & \multicolumn{6}{c|}{Pit B}\\ & &\multicolumn{3}{c}{Proposed model} & \multicolumn{3}{c|}{Reference}& \multicolumn{3}{c}{Proposed model} & \multicolumn{3}{c|}{Reference}\\ \hline Bench & Model & Fe & SiO\textsubscript{2} & Al\textsubscript{2}O\textsubscript{3} & Fe & SiO\textsubscript{2} & Al\textsubscript{2}O\textsubscript{3} & Fe & SiO\textsubscript{2} & Al\textsubscript{2}O\textsubscript{3} & Fe & SiO\textsubscript{2} & Al\textsubscript{2}O\textsubscript{3}\\ \hline \multicolumn{2}{c}{} &\multicolumn{12}{c}{Intra-bench estimation performance}\\ \hline 110 & RL\textsubscript{110} & \textbf{4.271} & \textbf{18.202} & \textbf{12.059} & 7.761 & 34.531 & 19.995 & \textbf{3.351} & \textbf{20.905} & \textbf{17.516} & 4.069 & 21.737 & 20.087\\ 100 & RL\textsubscript{100} & \textbf{5.411} & \textbf{28.101} & \textbf{24.784} & 8.548 & 46.368 & 29.377 & \textbf{2.093} & \textbf{15.670} & \textbf{11.505} & 3.282 & 22.298 & 19.057\\ 90 & RL\textsubscript{90} & \textbf{5.948} & \textbf{26.851} & \textbf{14.468} & 11.776 & 51.499 & 41.703 & \textbf{2.411} & \textbf{16.817} & \textbf{10.962} & 5.365 & 32.322 & 22.423\\ 80 & RL\textsubscript{80} & \textbf{4.693} & \textbf{16.172} & \textbf{21.967} & 12.182 & 60.436 & 27.069 & \textbf{6.777} & \textbf{22.986} &
\textbf{18.957} & 10.461 & 39.828 & 29.820\\ 70 & RL\textsubscript{70} & \textbf{9.420} & \textbf{19.646} & \textbf{25.036} & 11.135 & 49.092 & 53.319 & \textbf{9.416} & \textbf{24.409} & \textbf{44.289} & 9.688 & 39.862 & 77.711 \\ \hline $\mu_\text{g}$ & & \textbf{5.711} & \textbf{21.280} & \textbf{18.828} & 10.117 & 47.611 & 31.816 & \textbf{4.042} & \textbf{19.862} & \textbf{17.933} & 5.919 & 30.140 & 28.823\\ \hline \multicolumn{2}{c}{} &\multicolumn{12}{c}{Bench-below prediction performance}\\ \hline 100 & RL\textsubscript{110} & \textbf{6.707} & \textbf{41.035} & 34.372 & 8.548 & 46.372 & 29.381 & 4.136 & 20.800 & 25.621 & 3.282 & 19.060 & 22.301 \\ 90 & RL\textsubscript{100} & \textbf{9.832} & \textbf{41.214} & \textbf{26.107} & 11.776 & 51.495 & 38.455 & \textbf{4.100} & \textbf{29.440} & \textbf{17.766} & 5.365 & 32.322 & 22.425 \\ 80 & RL\textsubscript{90} & \textbf{7.336} & \textbf{24.695} & \textbf{21.196} & 12.182 & 60.437 & 27.067 & \textbf{8.794} & \textbf{29.516} & \textbf{27.742} & 10.461 & 39.821 & 29.817 \\ 70 & RL\textsubscript{80} & \textbf{8.069} & \textbf{28.985} & \textbf{42.950} & 11.135 & 49.097 & 53.319 & \textbf{9.605} & \textbf{25.907} & \textbf{63.121} & 9.688 & 39.859 & 77.786 \\ \hline $\mu_\text{g}$ & & \textbf{7.904} & \textbf{33.170} & \textbf{30.064} & 10.810 & 51.594 & 35.734 & \textbf{6.152} & \textbf{27.558} & \textbf{28.362} & 6.500 & 32.705 & 31.554\\ \hline \multicolumn{14}{c}{$\mu_\text{g}$ denotes the geometric mean} \end{tabular} \end{center} \end{table} The two pits combined contain over 400 grade-blocks and a volume in excess of $3\times10^6$ m\textsuperscript{3}. The summary statistics for intra-bench estimation and bench-below prediction are shown in Table~\ref{tab:r2-reconciliation-stats}. The main observation is that the proposed model outperforms the reference model. 
With few exceptions, the r\textsubscript{2} error scores are consistently lower for both intra-bench and bench-below prediction for Fe, SiO\textsubscript{2} and Al\textsubscript{2}O\textsubscript{3}. Generally, the error scores are higher for bench-below prediction; the performance gap reflects the relative difficulty of the problem. These findings suggest that the use of surface warping, block model spatial restructuring and interval GP inference can increase accuracy in grade estimation. In particular, improving the alignment of mesh surfaces with respect to the underlying boundaries (using the observed geochemistry from samples to enforce consistency) can make a real difference and improve orebody modelling outcomes. \subsection{Discussion}\label{sect:discussion} Although the results have been analyzed in the context of improving grade models, there is also intrinsic value in updating geological boundaries via surface warping using geochemical assay data. The ability to better characterize the lithological contacts (e.g. transition between mineralized and unmineralized domains) improves the mine geologist's ability to understand ore genesis events, particularly fluid flow within major structures in a banded iron formation (BIF) hosted iron ore system \cite{hagemann2016bif,perring2020new}. Shale boundaries\footnote{Shale is essentially a clay mineral with elevated SiO\textsubscript{2} and Al\textsubscript{2}O\textsubscript{3} content.}, along with folds and faults, play a significant role in controlling mineralization as fluid pathways are restricted by impermeable layers such as shale bands. From a modelling perspective, warped surfaces have been used successfully to modify the block model structure in \cite{leung2020mos} to improve the delineation between domains for grade estimation. Commercially available geo-modelling software packages often employ implicit modelling techniques \cite{cowan2003practical} to extract a boundary as an iso-surface from a manifold.
Recent work by Renaudeau et al.\,even handles discontinuities \cite{renaudeau2019implicit}. A recognized problem with this process is that it generally assumes the entire dataset is available and models everything at once. In contrast, our proposal honors the geologist's interpretations and uses the mesh surfaces prepared by experts as starting points. The warping procedure can be applied iteratively as more production data (for instance, blasthole assays from a single bench) become available to rectify surfaces incrementally. This flexible approach works well with sparse data and periodic updates, underscoring the progressive nature of open-pit mining. More accurate knowledge about these geological boundaries can be exploited to improve decision making and process efficiency. It facilitates dynamic mine planning and presents options on whether to excavate, postpone or abandon operations in an area. Then, there is the fleet management and task scheduling aspect which coordinates the movement of mining equipment \cite{samavati2019improvements} such as diggers, excavators and haul trucks, directing them when and where to go to achieve flow targets \cite{seiler2020flow}. Knowing where an ore/waste transition might occur allows drill-charge-and-blast operations to be optimized with respect to placement and depth. The directional information conveyed by the surfaces also provides guidance for adaptive sampling \cite{ahsan-15} which maximizes utility and minimizes cost. This allows specific areas to be targeted where the boundary transition is most uncertain. As Fig.~\ref{fig-changes-propagating-to-bench-below} shows, surface warping can project information to an adjacent bench or the bench below; this provides, to some extent, a forward stratigraphic modelling capability, allowing the drilling density and assay sampling rate to be adjusted if needed. Geological risks \cite{benndorf2013stochastic} can be mitigated with an improved grade model.
These risks include identifying sticky or hazardous material, as well as grade control in general, such as finding waste instead of ore in a variable grade block. \begin{figure*}[!t] \centering \includegraphics[width=85mm]{fig-changes-propagating-to-bench-below.pdf} \caption{Surface warping can propagate directional information about a boundary to the bench below} \label{fig-changes-propagating-to-bench-below} \end{figure*} \section{Conclusion}\label{sect:conclusion} This paper described the importance of having an accurate surface for grade estimation in mining. As motivation, it was shown that an inaccurate surface --- one that fails to capture the location and shape of the underlying geological boundary --- can impact the block structure and inferencing ability of the resultant grade estimation model which in turn can lead to smearing and misleading interpretations. The main contribution was a Bayesian formulation of the surface warping problem which seeks to maximize the agreement between the surface and observed data. The objective was to reshape the surface where possible to provide a clear delineation between ore and waste material that is consistent with the observations. This involved estimating and applying the optimal displacement to the mesh vertices based on spatial and compositional analysis. The maximum a posteriori (MAP) solution considered the chemistry observation likelihood in a given geozone and incorporated an a priori spatial structure which allows the likelihood of a displacement estimate to be computed using geological domain knowledge. The results showed that locally, the mineralized and non-mineralized samples are better separated by the warped surface. For end-to-end performance evaluation which encompasses surface warping, block model spatial restructuring, and grade estimation based on GP inferencing, the \textit{r\textsubscript{2} reconciliation error score} was proposed. 
This provided a grade model validation metric that is useful irrespective of the actual algorithms\,/\,processes deployed in the system components. Our experiments showed the r\textsubscript{2} error scores were consistently and significantly lower with surface warping when the estimated grades for chemicals of interest (Fe, SiO\textsubscript{2} and Al\textsubscript{2}O\textsubscript{3}) were compared with over 400 grade-blocks for two large pits, having a total volume in excess of $3\times 10^6$ m\textsuperscript{3}. This demonstrated the value of implementing the displacement estimation framework for surface warping\,/\,boundary rectification, and spatial algorithms more generally, in a mining automation context. \section{Authorship statement} The first two authors contributed to the core content of this paper. Alexander Lowe studied surface warping (mesh vertex displacement estimation) as a maximum likelihood problem. He investigated different strategies and provided a concrete software implementation. Raymond Leung reformulated the problem in a Bayesian framework, developed the block model spatial restructuring algorithms, and proposed using the r\textsubscript{2} CDF error score as a validation measure. He conceptualized this paper and wrote the manuscript in consultation with other authors. Anna Chlingaryan and Arman Melkumyan devised the covariance functions and mathematical framework for grade estimation and GP inference using interval data (see Appendix). John Zigman conducted the validation experiments. John and Raymond performed data analysis and interpreted the results. \begin{appendix} \section{Appendix: Gaussian Processes --- inferencing for grade estimation}\label{sect:inferencing-for-grade-estimation} Inferencing refers to the task of predicting the grade value for certain chemicals of interest at locations where direct assay measurements are unavailable.
In this exposition, a Gaussian Process (GP) is used to provide a probabilistic model of the grade functions given a set of data; this allows both the mean and the uncertainty associated with the compositional percentage of various chemicals such as Fe, SiO\textsubscript{2} etc.\,to be estimated. Inferencing is generally preceded by a training phase which optimizes the hyper-parameters that describe the Gaussian Process. Mathematically, a GP is an infinite collection of random variables, any finite number of which has a joint Gaussian distribution. Machine learning using GPs consists of two steps: training and inference. For training, simulated annealing and gradient descent procedures are used to optimize the hyper-parameters to create a probabilistic model that best represents the training data. Specifically, the GP hyper-parameters include \textit{length scales} that describe the rate of change in composition with space, and \textit{noise variance} that describes the amount of noise present in the data. A training set $\mathcal{T}=(X,\mathbf{y})$ consists of a matrix of training samples $X=[x_1,x_2,\ldots,x_N]^T\in\mathbb{R}^{N\times D}$ and a corresponding target vector $\mathbf{y}=[y_1,y_2,\ldots,y_N]\in\mathbb{R}^N$. Here, $N$ represents the number of training samples available for a geozone. Each $x_i\in\mathbb{R}^D$ denotes an observation (the spatial coordinates where an assay sample is taken) and the associated value $y_i\in\mathbb{R}$ denotes a chemical's compositional percentage. The objective is to compute the predictive distribution $f(x_*)$ at various test points $x_*$. Formally, a GP model places a multivariate Gaussian distribution over the space of function variables $f(x)$, mapping input to output spaces. A GP can also be considered a stochastic process that is fully specified by its mean function $m(x)$ and covariance function $k(x,x')$.
To completely describe the standard regression model, we assume Gaussian noise $\varepsilon$ with variance $\sigma_\text{n}^2$, so that $y=f(x)+\varepsilon$. With a training set $(X, f, y)=(\{x_i\}, \{f_i\}, \{y_i\})_{i=1:N}$ and test set $(X_*, f_*, y_*)=(\{x_{*i}\}, \{f_{*i}\}, \{y_{*i}\})_{i=1:N}$ where $\{y_i\}$ are observed and $\{y_{*i}\}$ are unknown and $m(x)=0$, the joint distribution becomes \begin{align} \begin{bmatrix}y \\ f_*\end{bmatrix}\sim \mathcal{N}\left(0,\begin{bmatrix}K(X,X)+\sigma_\text{n}^2 I & K(X,X_*)\\K(X_*,X)& K(X_*,X_*)\end{bmatrix}\right) \label{eq:gp-joint-distribution} \end{align} In Equation~(\ref{eq:gp-joint-distribution}), $K$ is the covariance matrix computed between all the points in the respective sets. Thus, the matrix element $K_{i,j*}\equiv K(X_i,X_{*j})$, for instance, is obtained by applying the kernel to the locations of samples $x_i$ and $x_{*j}$ from the training and test sets, respectively. By conditioning on the observed training points, the predictive distribution for new points can be obtained as: \begin{align} p(f_*\!\mid\! X_*,X,y)=\mathcal{N}(\mu, cov(f_*))\label{eq:gp-predictive-dist} \end{align} where the posterior mean is \begin{align} \mu&=K(X_*,X)\left[K(X,X)+\sigma_\text{n}^2I\right]^{-1} y\label{eq:gp-predictive-mean} \end{align} and the posterior covariance is \begin{align} cov(f_*)&=K(X_*,X_*) - K(X_*,X)\left[K(X,X)+\sigma_\text{n}^2 I\right]^{-1} K(X,X_*)\label{eq:gp-predictive-covariance} \end{align} Here, $\mathcal{N}(\mu, cov(f_*))$ denotes a multivariate Gaussian distribution with mean $\mu$ and posterior covariance $cov(f_*)$ at the estimated locations. Learning a GP model is equivalent to learning the hyper-parameters of the covariance function from a data set.
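The linear algebra in Equations~(\ref{eq:gp-predictive-mean}) and (\ref{eq:gp-predictive-covariance}) can be sketched numerically. The example below is a hedged illustration assuming a squared-exponential kernel and two training points, so that $[K(X,X)+\sigma_\text{n}^2 I]^{-1}$ can be written in closed form; the interval kernels actually used for averaged assay data are more elaborate and are described in the cited works.

```python
import math

# Minimal sketch of GP prediction (posterior mean and variance) with a
# squared-exponential kernel and two 1-D training points. Illustrative only.

def k(x, xp, ell=1.0):
    return math.exp(-0.5 * (x - xp) ** 2 / ell ** 2)

def gp_predict_1d(x1, y1, x2, y2, x_star, noise=0.01):
    # K(X,X) + sigma_n^2 I and its closed-form 2x2 inverse
    a, b = k(x1, x1) + noise, k(x1, x2)
    c, d = k(x2, x1), k(x2, x2) + noise
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    ks = (k(x_star, x1), k(x_star, x2))                  # K(X_*, X)
    w1 = ks[0] * inv[0][0] + ks[1] * inv[1][0]           # K(X_*,X)[K + s^2 I]^{-1}
    w2 = ks[0] * inv[0][1] + ks[1] * inv[1][1]
    mu = w1 * y1 + w2 * y2                               # posterior mean
    var = k(x_star, x_star) - (w1 * ks[0] + w2 * ks[1])  # posterior variance
    return mu, var

# At a training location the posterior mean stays close to the observed value
# and the posterior variance collapses towards the noise level.
mu, var = gp_predict_1d(0.0, 60.0, 1.0, 62.0, 1.0)
```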
In a Bayesian framework, this can be performed by maximising the log of the marginal likelihood \cite{williams2006gaussian} with respect to the hyper-parameters $\theta$: \begin{align} \log p(y\mid X,\theta)=-\frac{1}{2}y^T\left[K(X,X)+\sigma_\text{n}^2 I\right]^{-1}y -\frac{1}{2}\log\left|K(X,X)+\sigma_\text{n}^2 I\right|-\frac{N}{2}\log 2\pi\label{eq:gp-nlml} \end{align} The marginal likelihood is a non-convex function; thus, only local maxima can be obtained. It has three terms (from left to right) that represent the data fit, a complexity penalty (reflecting the Occam's razor principle) and a normalization constant. In this standard framework, the main contribution is the design of new kernels \cite{rtgi-rtcma2014} that deal with not only point-based observations, but also \textit{interval} observations where $y_i$ represents an average assay value measured over some interval in drilled holes. This enables data fusion between exploration and blast hole assays, taking into consideration their respective supports. The GP covariance functions for interval data are described mathematically in \cite{rtgi-rtcma2014}. Relevant works that underpin this theory can be found in \cite{jewbali2011apcom} and \cite{rasmussen2003bayesian}. \end{appendix} \vspace{3mm} \section*{Acknowledgment} This work was supported by the Australian Centre for Field Robotics and the Rio Tinto Centre for Mine Automation. \bibliographystyle{unsrt}
\section{Introduction} \blfootnote{* First authors contributed equally.} Automated analysis of microscopic images heavily relies on classification or segmentation of objects in the image. Starting from a robust and precise segmentation algorithm, downstream analysis will subsequently be more accurate and reliable. Deep learning (DL) approaches nowadays deliver state-of-the-art performance in nearly all computer vision tasks (\cite{russakovsky2015imagenet}). In medical imaging, or more specifically in computational pathology (CP), DL plays an important role in tackling a wide range of tasks. Despite their success, DL methods have a major drawback: their data-hungry nature. If they are not provided with sufficient data, they can easily over-fit on the training data, leading to poor performance on new, unseen data. In computational pathology, most models are trained on datasets that are acquired from only a small sample of the whole data distribution. These models would fail if applied to a new distribution (e.g.\ new tissue types or data coming from a different center). Hence, one needs to collect annotations from the new distribution and add them to the training set to overcome false predictions. Obtaining annotations as targets for training deep supervised models is time consuming, labour-intensive and sometimes requires expert knowledge, particularly for segmentation tasks where dense annotation is required. It is worth mentioning that, in terms of performance, semi-supervised and weakly supervised methods still lag far behind fully supervised methods (\cite{taghanaki2019deep}). Therefore, if one needs to build a robust and applicable segmentation algorithm, supervised methods are the preferred option. In CP, fully automatic approaches which do not require user interaction have been extensively applied to histology images for segmentation of different objects (e.g.\ cells, nuclei, glands, etc.), where DL models have shown state-of-the-art performance (\cite{sirinukunwattana2017gland, kumar2019multi, graham2019hover, koohbanani2019nuclear, pinckaers2019neural, graham2019mild, chen2016dcan, gamper2020pannuke, zhou2019cia}).
Semi-automatic (interactive) segmentation approaches, which require the user to provide an input to the system, bring several advantages over fully automated approaches: 1) owing to the supervisory signal serving as a prior to the model, interactive models lead to better performance; 2) possible mistakes can be recovered by user interactions; 3) interactive models are less sensitive to domain shift since the supervisory signal can compensate for variations across domains; in other words, interactive models are more generalizable; and 4) the selective nature of interactive models gives the user the flexibility to choose arbitrary instances of objects in the visual field (e.g.\ selecting one nucleus for segmentation out of hundreds of nuclei in the ROI). Owing to their generalizability, these models can also serve as an annotation tool to facilitate and speed up annotation collection. These annotations can then be used to train a fully automatic method for extracting the relevant features for the task at hand. For example, delineating the boundaries of all nuclei, glands or any other objects of interest is highly labour-intensive and time consuming. To be more specific, assuming that annotating one nucleus takes 10\,s, a visual field containing 100 nuclei takes about 17 minutes to annotate. To this end, among interactive models, approaches that require minimal user interaction are of high importance, as they not only minimize user effort but also speed up the process. In this paper, concentrating on keeping user interactions as minimal as possible, we propose a unified CNN-based framework for interactive annotation of important microscopic objects at three different levels (nuclei, cells, and glands). Our model accepts minimal user interaction, which makes it suitable for collecting annotations in the histology domain.
\begin{figure}[h] \centering \includegraphics[width =0.9\columnwidth]{validation.pdf} \caption{\textbf{NuClick interactive segmentation} of objects in histopathological images with different levels of complexity: nuclei (first row), cells (second row), and glands (third row). The solid stroke line around each object outlines the ground truth boundary for that object, the overlaid transparent mask is the segmentation region predicted by NuClick, and points or squiggles indicate the provided guiding signal for interactive segmentation.} \label{fig:validation} \end{figure} \section{Related Works} \subsection{Weakly Supervised Signals for Segmentation} Numerous methods have been proposed in the literature that utilise weak labels as supervisory signals. In these methods, the supervisory signal serves as an incomplete (weak) ground truth segmentation in the model output. Therefore, a desirable weakly supervised model would be one that generalizes well on the partial supervisory signals and outputs a more complete segmentation of the desired object. These methods are not considered interactive segmentation methods and are particularly useful when access to full image segmentation labels is limited. For instance, \cite{yoo2019pseudoedgenet} and \cite{qu2019weakly} introduced weakly supervised nucleus segmentation models which are trained on nucleus centroid points instead of full segmentation masks. Several other works used image-level labels (\cite{pathak2014fully, kolesnikov2016seed, pathak2015constrained, wei2018revisiting}), boxes (\cite{khoreva2017simple}), noisy web labels (\cite{jin2017webly, ahmed2014semantic}), point-clicks (\cite{bearman2016s, bell2015material, chen2018tap, wang2014touchcut}), and squiggles (\cite{lin2016scribblesup, xu2015learning}) as weak labels to supervise their segmentation models.
Our model is analogous to the methods proposed by \cite{bearman2016s} and \cite{lin2016scribblesup}, with the difference that we use points and squiggles as auxiliary guiding signals in the input of our model. Our model is fully supervised and we will show how this additional information can be used to further improve the accuracy of segmentation networks on histology images. \subsection{Interactive segmentation} \label{Sec-inter} Interactive segmentation of objects has been studied for over a decade now. In many works (\cite{bai2009geodesic, batra2011interactively, boykov2001interactive, rother2004grabcut, cheng2015densecut, gulshan2010geodesic, shankar2015video, mortensen1998interactive, cagnoni1999genetic, de2004interactive, wang2018interactive, li2018interactive}) object segmentation is formulated as energy minimization on a graph defined over objects. In a recent unsupervised approach proposed by \cite{papadopoulos2017extreme}, the annotator clicks on four extreme points (left-most, right-most, top and bottom pixels); an edge detection algorithm is then applied to the whole image to extract boundaries, and the shortest path between two neighboring extreme points is chosen as the boundary of the object. The area within the boundaries is considered foreground and the region outside the extreme points is considered background for the appearance model. Grabcut (\cite{rother2004grabcut}) and Graphcut (\cite{kwatra2003graphcut}) are classic interactive segmentation models, which segment objects by gradually updating the appearance model. These models require the user to mark both background and foreground regions. Although they use extensive guiding signals, they would fail if the object has blurred or complex boundaries.
In recent years, CNN models have been extensively used for interactive segmentation (\cite{xu2017deep, xu2016deep, agustsson2019interactive, papadopoulos2017extreme, maninis2018deep, ling2019fast, castrejon2017annotating, acuna2018efficient, wang2019object}). A well-known example is DEXTR (\cite{maninis2018deep}), which utilizes extreme points as an auxiliary input to the network. First, the annotator clicks four points at the extreme positions of the object; then a heat map channel (a Gaussian map for each point, with the points at the centers of the Gaussians) is created from these clicks, which is attached to the input and serves as a guiding signal. There are methods in the literature that require the user to draw a bounding box around the desired object. \cite{wang2018interactive} proposed a method for interactive medical image segmentation where an object of interest is selected by drawing a bounding box around it. A deep network is then applied on the cropped image to obtain a segmentation. They also have a refinement step based on Grabcut that takes squiggles from the user to highlight the foreground and background regions. This model is applicable for single-object (organ) segmentation in CT/MRI images, where the organ has a similar appearance and shape in all images. However, this approach is not practical for segmentation of multiple objects (like nuclei) or amorphous objects (like glands) in the histology domain. Some methods combined bounding box annotations with Graph Convolutional Networks (GCNs) to achieve interactive segmentation (\cite{ling2019fast,castrejon2017annotating, acuna2018efficient}). In these methods the selected bounding box is cropped from the image and fed to a GCN to predict a polygon/spline around the object. The polygon surrounding the object can then be adjusted in an iterative manner by refining the deep model. There are also some hybrid methods which are based on level sets (\cite{caselles1997geodesic}).
\cite{acuna2019devil} and \cite{wang2019object} embedded the level set optimization strategy in a deep network to achieve precise boundary prediction from coarse annotations. For some objects such as nuclei, manual selection of four extreme points or drawing a bounding box is still time-consuming, considering that an image of size 512$\times$512 can contain more than 200 nuclei. Moreover, extreme points for objects like glands do not provide sufficient guidance to delineate boundaries, due to the complex shapes and unclear edges of such objects. In this paper, we propose to use a single click or a squiggle as the guiding signal, keeping user interactions simple while providing enough information. Similar to our approach is the work by \cite{sakinis2019interactive}, where the annotator needs to place two pairs of click points inside and outside of the object of interest. However, their method is limited to segmenting a single predefined object, like the prostate in CT images, unlike the multiple object types (nuclei, cells, and glands) in histology images considered in this study, which vary greatly in appearance across cases, organs, sampling/staining methods, and diseases.
Our method is different from these approaches, as they are designed to segment all objects in natural scenes, require the user to label the background region, and missing instances may interfere with the segmentation of desired objects. Besides, these approaches require a high degree of user interaction for each object instance (a minimum of selecting 4 extreme points). In interactive segmentation of nuclei/cells from microscopy images, however, selecting four points for each object is very cumbersome. Furthermore, all the above-mentioned methods are sensitive to the correct selection of extreme points, which can also be confusing for a user aiming to mark a cancerous gland with a complex shape and vague boundaries in a histology image. Another problem with a full-image segmentation method like \cite{agustsson2019interactive} is that it uses a Mask-RCNN backbone for RoI feature extraction, which has difficulty detecting objects of small size such as nuclei. In this paper we propose \textbf{NuClick}, which uses only one point for delineating nuclei and cells and a squiggle for outlining glands. For nucleus and cell segmentation, providing a dot inside the nucleus or cell is fast, easy, and does not require much effort from the user compared to recent methods that rely on bounding boxes around objects. For glands, drawing a squiggle inside the gland is not only easier and more user-friendly for the annotator but also yields more precise annotations compared to other methods. Our method is suitable for anything from single-object to full-image segmentation and is applicable to a wide range of object scales, i.e., from small nuclei to large glands. To avoid interference of neighboring objects in the segmentation of the desired object, a hybrid weighted loss function is incorporated in NuClick training.
This paper is complementary to our previous paper (\cite{jahanifar2019nuclick}), where we showed results of the preliminary version of NuClick and its application to nuclei; here we extend its application to glands and cells. As a result of the current framework, we release two datasets: lymphocyte segmentation in Immunohistochemistry (IHC) images and segmentation masks of white blood cells (WBCs) in blood sample images\footnote{https://github.com/navidstuv/NuClick}. A summary of our contributions is as follows: \begin{itemize} \item We propose the first interactive deep learning framework to facilitate and speed up the collection of reproducible and reliable annotations in the field of computational pathology. \item We propose a deep network model using guiding signals and multi-scale blocks for precise segmentation of microscopic objects across a range of scales. \item We propose a method based on the morphological skeleton for extracting guiding signals from gland masks, capable of identifying holes in objects. \item We incorporate a weighted hybrid loss function in the training process, which helps to avoid interference of neighboring objects when segmenting the desired object. \item We perform various experiments to show the effectiveness and generalizability of NuClick. \item We release two datasets of dense lymphocyte annotations in IHC images and touching white blood cells (WBCs) in blood sample images. \end{itemize} \section{Methodology} \subsection{NuClick framework overview} Unlike previous methods that use a bounding box or at least four points (\cite{maninis2018deep, boykov2001interactive, wu2014milcut, rother2012interactive, papadopoulos2017extreme}) for interactive segmentation, in our proposed interactive segmentation framework a single click inside the desired object is sufficient. We will show that our framework is easily applicable to segmenting different objects at different levels of complexity.
We present a framework applicable to collecting segmentations of nuclei, the smallest visible objects in histology images; cells, which consist of a nucleus and cytoplasm; and glands, which are groups of cells. Within the current framework, minimal human interaction is needed to segment the desired object with high accuracy. The user input for nucleus and cell segmentation is as small as one click, and for glands a simple squiggle suffices. NuClick is a supervised framework based on convolutional neural networks that uses an encoder-decoder network architecture. In the training phase, image patches and guiding signals are fed into the network, so it learns where to delineate objects when a specific guiding signal appears in the input. In the test phase, based on the user-input annotations (clicks or squiggles), image patches and guiding-signal maps are generated and fed into the network. The outputs of all patches are then gathered in a post-processing step to produce the final instance segmentation map. We explain all aspects of this framework in detail in the following subsections. \subsection{Model architecture \& loss} The efficiency of the encoder-decoder design paradigm for segmentation models has been extensively investigated in the literature, and it has been shown that the UNet design paradigm works well for various medical (and natural) image segmentation tasks (\cite{hesamian2019deep, garcia2017review}). Therefore, similar to \cite{jahanifar2019nuclick}, an encoder-decoder architecture with multi-scale and residual blocks has been used for the NuClick models, as depicted in \cref{fig:network architecture}. As our goal is to propose a unified network architecture that segments various objects (nuclei, cells, and glands), it must be capable of recognizing objects at different scales. In order to segment both small and large objects, the network must be able to capture features at various scales.
Therefore, we incorporate multi-scale convolutional blocks (\cite{jahanifar2018segmentation}) throughout the network (with specific design configurations related to the network level). Unlike other network designs (e.g., DeepLab v3 by \cite{chen2017rethinking}) that only use multi-scale \textit{atrous} convolutions in the last low-resolution layer of the encoding path, we use them at three different levels in both the encoding and decoding paths. By doing this, the NuClick network is able to extract relevant semantic multi-scale features from the low-resolution feature maps and to generate fine segmentations by extending the receptive fields of its convolution layers in the high-resolution feature maps of the decoder. The parameter configuration for the residual and multi-scale blocks is shown on each block in \cref{fig:network architecture}. Furthermore, using residual blocks instead of plain convolutional layers enables us to design a deeper network without the risk of vanishing gradients (\cite{he2016deep}). In comparison to \cite{jahanifar2019nuclick}, the network depth has been further increased to better deal with more complex objects like glands. \begin{figure*} [ht!] \includegraphics[width =\textwidth]{architecture_nuclick+.pdf} \caption{Overview of the NuClick network architecture, which consists of Convolutional, Residual, and Multi-Scale convolutional blocks.} \label{fig:network architecture} \end{figure*} The loss function used to train NuClick is a combination of a soft dice loss and a weighted cross-entropy. The dice loss helps to control the class imbalance, and the weighted cross-entropy part penalizes the loss if objects other than the desired object are present in the prediction map.
\begin{equation} \begin{array}{l} \mathcal{L} = 1 - \dfrac{\sum_i p_i g_i + \varepsilon}{\sum_i p_i + \sum_i g_i + \varepsilon}\\[1.5ex] \quad \;\; - \dfrac{1}{n}\sum\limits_{i = 1}^n w_i \left( g_i \log p_i + (1 - g_i)\log (1 - p_i) \right) \end{array} \label{eq:loss} \end{equation} where $n$ is the number of pixels in the image spatial domain; $p_i$, $g_i$, and $w_i$ are the values of the prediction map, the ground-truth mask $\mathbf{G}$, and the weight map $\mathbf{W}$ at pixel $i$, respectively; and $\varepsilon$ is a small constant. Considering that $\mathbf{G}$ has value 1 for the desired (included) objects and 0 otherwise, its complement $\tilde{\mathbf{G}}$ has value 1 for the undesired (excluded) objects in the image and 0 otherwise. The adaptive weight map is then defined as $\mathbf{W} = \alpha^2 \mathbf{G} + \alpha \tilde{\mathbf{G}} + 1$, where $\alpha$ is an adaptive factor defined based on the areas of the included and excluded objects as $\alpha = \max\left\{ \sum \tilde{\mathbf{G}} \big/ \sum \mathbf{G},\, 1 \right\}$. This weighting scheme puts more emphasis on the desired object, to make sure it is completely segmented by the network, while avoiding false segmentation of touching undesired objects. \subsection{Guiding Signals} \subsubsection{Guiding signal for nuclei/cells} When the annotator clicks inside a nucleus, a map to guide the segmentation is created, in which the clicked position is set to one and the rest of the pixels are set to zero; we call this the \textit{inclusion map}.
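For illustration, this loss might be computed per patch as in the following NumPy sketch (the function and variable names are ours, chosen for illustration; the actual training code operates on framework tensors, and note that some soft-dice variants place a factor of 2 in the numerator, whereas this sketch follows the equation above verbatim):

```python
import numpy as np

def hybrid_loss(pred, gt, gt_excluded, eps=1e-6):
    """Soft-dice plus adaptive-weighted cross-entropy for one patch.

    pred:        prediction map with values in [0, 1]
    gt:          ground-truth mask G (1 on the desired object)
    gt_excluded: complement mask (1 on undesired/excluded objects)
    """
    p, g, g_ex = pred.ravel(), gt.ravel(), gt_excluded.ravel()
    # soft-dice term, exactly as written in the loss equation above
    dice = (np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    # adaptive weight map W = a^2*G + a*G~ + 1, a = max(sum(G~)/sum(G), 1)
    alpha = max(np.sum(g_ex) / max(np.sum(g), eps), 1.0)
    w = alpha**2 * g + alpha * g_ex + 1.0
    p_c = np.clip(p, eps, 1.0 - eps)  # avoid log(0)
    ce = -np.mean(w * (g * np.log(p_c) + (1.0 - g) * np.log(1.0 - p_c)))
    return (1.0 - dice) + ce
```

A perfect prediction drives the cross-entropy term to zero, while the adaptive factor $\alpha$ grows whenever excluded objects occupy more area than the desired one, penalizing their leakage into the prediction.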
In most scenarios, when more than one nucleus is clicked by the annotator (if he/she wants to have all nuclei annotated), another map is also created, in which the positions of all clicked nuclei except the desired nucleus/cell are set to one and the rest of the pixels to zero; this is called the \textit{exclusion map}. When only one nucleus is clicked, the exclusion map is all zeros. The inclusion and exclusion maps are concatenated to the RGB image to form a 5-channel input to the network (as illustrated in \cref{fig:network architecture}). The same procedure is used for creating the guiding signals of cells. However, we took some considerations into account during the training phase of NuClick in order to make it robust against guiding-signal variations. In the following paragraphs, we describe these techniques for both the training and testing phases. \paragraph{Training} To construct the inclusion map for training, a point inside the nucleus/cell is randomly chosen, ensuring that the sampled point is at least 2 pixels away from the object boundaries. The exclusion map, on the other hand, is generated based on the centroid locations of the remaining nuclei within the patch. Thereby, the guiding signals for each patch change continuously during training, so the network sees variations of the guiding signals for each specific nucleus and becomes more robust to human errors at test time. In other words, the network learns to work with click points anywhere inside the desired nucleus, so there is no need to click the exact centroid position. \paragraph{Test} At inference time, guiding signals are simply generated based on the positions clicked by the user. For each click point on an image patch, an inclusion map and an exclusion map are generated. The exclusion map is nonzero only if the user clicks on more than one nucleus/cell; otherwise it is all zeros.
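The inclusion/exclusion map construction described above can be sketched as follows (a minimal NumPy illustration; the function names are hypothetical, not NuClick's actual API):

```python
import numpy as np

def make_guiding_maps(shape, clicks, target_idx):
    """Build inclusion/exclusion maps from user clicks.

    shape:      (H, W) of the patch
    clicks:     list of (row, col) click positions, one per nucleus/cell
    target_idx: index of the click belonging to the desired object
    """
    inclusion = np.zeros(shape, dtype=np.float32)
    exclusion = np.zeros(shape, dtype=np.float32)
    for k, (r, c) in enumerate(clicks):
        if k == target_idx:
            inclusion[r, c] = 1.0   # the desired object
        else:
            exclusion[r, c] = 1.0   # all other clicked objects
    return inclusion, exclusion

def make_network_input(rgb_patch, inclusion, exclusion):
    # concatenate along the channel axis -> 5-channel network input
    return np.concatenate(
        [rgb_patch, inclusion[..., None], exclusion[..., None]], axis=-1)
```

With a single click, the exclusion map stays all zeros, matching the behavior described in the text.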
The sizes of the guiding maps for the nucleus and cell segmentation tasks are set to $128\times128$ and $256\times256$, respectively. As test-time augmentation, we can perturb the positions of the clicked points by 2 pixels in a random direction. The importance of the exclusion map shows in cluttered areas where nuclei are packed together: if the user clicks on all nuclei within these areas, the instances are separated clearly. In the experimental section we show the effect of using exclusion maps. \subsubsection{Guiding signal for glands} Unlike nuclei or cells, glands are larger and more complex objects, so a single point does not provide a strong supervisory signal to the network. Therefore, we must choose another type of guiding signal that is informative enough to guide the network yet simple enough for the annotator at inference time. Instead of points, we propose to use squiggles. More precisely, the user provides a squiggle inside the desired gland, which determines its extent and connectivity. \paragraph{Training} Considering $\mathbf{M}$ as the desired ground-truth (GT) mask in the output, an inclusion signal map is randomly generated as follows. First, we apply the Euclidean distance transform to the mask to obtain the distance of each pixel inside the mask to the closest point on the object boundary: \begin{equation} D_{i,j}(\mathbf{M}) = \sqrt{(i - i_b)^2 + (j - j_b)^2}\,, \qquad (i,j) \in \mathbf{M} \end{equation} where $(i_b, j_b)$ is the pixel position on the object boundary closest to the pixel position $(i,j)$. Afterwards, we select a random threshold ($\tau$) to apply to the distance map, generating a new mask that indicates a region inside the original mask: $$ \overline{\mathbf{M}}_{i,j} = \left\{ \begin{array}{ll} 1 & \mbox{if } D_{i,j} > \tau\\ 0 & \mbox{otherwise} \end{array} \right.
$$ The threshold is chosen based on the mean ($\mu$) and standard deviation ($\sigma$) of the distance-transform values, with $\tau$ drawn from the interval $[0, \mu + \sigma]$. Finally, to obtain the guiding signal for glands, the morphological skeleton (\cite{serra1983image}) of the new mask $\overline{\mathbf{M}}$ is constructed. Note that we could have used the morphological skeleton of the original mask as the guiding signal (which would not change throughout the training phase), but that may cause the network to overfit to specific skeleton shapes and prevent it from adjusting well to annotator input. Therefore, by changing the shape of the mask, we change the guiding-signal map during training. \begin{figure*}[ht] \centering \includegraphics[width =0.9\textwidth]{glandGuidingSignal.pdf} \caption{Generating the supervisory signal (inclusion map) for NuClick while training on the gland dataset. The left image is the GT mask of a sample gland and $D(\mathbf{M})$ is the distance transform of that mask. By changing the threshold value ($\tau$), the guiding signal (the skeleton of the new mask $\overline{\mathbf{M}}$, shown in green) also changes.} \label{fig:glandGuid} \end{figure*} An example of constructing the map for a gland is depicted in \cref{fig:glandGuid}. In this figure, the left-hand image represents the GT of the desired gland, on which its corresponding skeleton is overlaid in green. If we used this same mask for training the network, the guiding signal would remain exactly the same across all training epochs. However, with our proposed mask-changing technique, we first calculate the distance transform of the GT, $D(\mathbf{M})$, and then apply a threshold $\tau$ to it to construct a new mask $\overline{\mathbf{M}}$. As can be seen in Fig.
\ref{fig:glandGuid}, by changing the threshold value, the appearance of the new mask changes, which results in different morphological skeletons as well (note the change of the overlaid green lines for different $\tau$ values). This makes the NuClick network robust to the large variation of guiding signals provided by the user during the test phase. The exclusion map for glands is constructed similarly to that for nuclei/cells, i.e., one pixel from each excluded object is set to one and all other pixels are set to zero. \paragraph{Test} When running inference, the user draws squiggles inside the glandular objects. Patches of 512$\times$512 are then extracted from the image based on the bounding box of each squiggle. If the bounding-box height or width is smaller than 512, it is enlarged until both height and width are 512; if the bounding box is larger than 512, the image and the corresponding squiggle map are down-scaled to 512$\times$512. \subsection{Post-processing} After the desired objects are marked by the user, image patches and inclusion and exclusion maps are generated and fed into the network to predict an output segmentation for each patch. The location of each patch is stored in the first step, so it can be used later to build the final instance segmentation map. The first step in post-processing converts the prediction map into an initial segmentation mask by applying a threshold of 0.5. Then small objects (objects with area less than 50 pixels) are removed. Moreover, to remove any extra objects other than the desired nucleus/cell/gland from the mask, the morphological reconstruction operator is used: the inclusion map plays the role of the marker, and the initial segmentation is considered the mask in the morphological reconstruction.
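A simplified NumPy sketch of this post-processing step follows (thresholding followed by marker-based morphological reconstruction; as a simplification, small-object removal is approximated here by a single area check on the reconstructed mask rather than per connected component):

```python
import numpy as np

def geodesic_reconstruct(marker, mask):
    """Morphological reconstruction: iterative geodesic dilation of the
    marker under the mask, using 8-connectivity, in pure NumPy."""
    rec = marker & mask
    h, w = rec.shape
    while True:
        p = np.pad(rec, 1)            # pad with False
        dil = np.zeros_like(rec)
        for di in (-1, 0, 1):         # 3x3 dilation via shifted views
            for dj in (-1, 0, 1):
                dil |= p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
        nxt = dil & mask
        if np.array_equal(nxt, rec):  # converged: marked component filled
            return rec
        rec = nxt

def postprocess(pred, inclusion, min_area=50):
    """Threshold at 0.5, keep only the component touching the click
    (inclusion map as marker), and drop tiny results."""
    seg = pred >= 0.5
    seg = geodesic_reconstruct(inclusion.astype(bool), seg)
    if seg.sum() < min_area:
        seg[:] = False
    return seg
```

The reconstruction retains exactly the connected component containing the clicked pixel, which is why neighboring objects predicted in the same patch do not leak into the final instance mask.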
\section{Setups and Validation Experiments} \subsection{Datasets} \paragraph{Gland datasets} The Gland Segmentation (GlaS) dataset (\cite{sirinukunwattana2017gland}) and the CRAG dataset (\cite{awan2017glandular, graham2019mild}) are used for gland segmentation. The GlaS dataset consists of 165 tiles, 85 of which are for training and 80 for testing. The test images of the GlaS dataset are further split into TestA and TestB. TestA was released to the participants of the GlaS challenge one month before the submission deadline, whereas TestB was released on the final day of the challenge. The CRAG dataset contains a total of 213 images, split into 173 training images and 40 test images with different cancer grades. Both of these datasets are extracted from Hematoxylin and Eosin (H\&E) stained WSIs. \paragraph{Nuclei dataset} The MoNuSeg (\cite{kumar2019multi}) and CPM (\cite{vu2019methods}) datasets, which contain 30 and 32 H\&E images, respectively, have been used for our experiments. 16 images of each of these datasets are used for training. \paragraph{Cell dataset} A dataset of 2689 images consisting of touching white blood cells (WBCs) was synthetically generated for the cell segmentation experiments. To this end, we used a set of 11000 manually segmented non-touching WBCs (the WBC library). The selected cells are from one of the five main categories of WBCs: Neutrophils, Lymphocytes, Eosinophils, Monocytes, or Basophils. The original WBC patches were extracted from scans of peripheral blood samples captured by a CELLNAMA LSO5 slide scanner equipped with an oil-immersion 100x objective lens. However, the synthesized images are designed to mimic the appearance of bone marrow samples; in other words, each synthesized image should contain several (10 to 30) touching WBCs. Therefore, to generate each image, a random number of cells are selected from different categories of the WBC library and added to a microscopic image canvas that contains only red blood cells.
During image generation, each added cell is carefully blended into the image so that its boundary looks seamless and natural. This makes the problem of touching-object segmentation as hard as in real images. It is worth mentioning that each WBC is augmented (deformed, resized, and rotated) before being added to the canvas. Having more than 11000 WBCs and performing cell augmentation during image generation guarantees that the network does not overfit to a specific WBC shape. For all datasets, 20\% of the training images are set aside as the validation set. \subsection{Implementation Details} For our experiments, we used a workstation equipped with an Intel Core i9 CPU, 128GB of RAM, and two GeForce GTX 1080 Ti GPUs. All experiments were done in the Keras framework with TensorFlow backend. For all applications, NuClick is trained for 200 epochs. The Adam optimizer with a learning rate of $3 \times {10^{ - 3}}$ and weight decay of $5 \times {10^{ - 5}}$ was used to train the models. The batch size for nuclei, cells, and glands was set to 256, 64, and 16, respectively. We used multiple augmentations as follows: random horizontal and vertical flips, brightness adjustment, contrast adjustment, sharpness adjustment, hue/saturation adjustment, color-channel shuffling, and adding Gaussian noise (\cite{jahanifar2018segmentation}). \subsection{Metrics} For our validation study we use metrics that have been reported in the literature for cell and gland instance segmentation.
For nuclei and cells we use: AJI (Aggregated Jaccard Index), proposed by \cite{kumar2017dataset}, an instance-based metric that calculates the Jaccard index for each instance and then aggregates them; the Dice coefficient, a metric similar to IoU (Intersection over Union); the Hausdorff distance (\cite{sirinukunwattana2017gland}), the distance between two polygons, calculated per object; Detection Quality (DQ), equivalent to the $F_1$-score computed with IoU-based matching; SQ (Segmentation Quality), the sum of the IoUs of all true positives divided by the number of true positives; and PQ $=$ DQ$\times$SQ (\cite{kirillov2019panoptic}). For AJI and Dice, true and false values are defined at the pixel level, whereas for DQ they are based on the value of the IoU: a prediction is considered a true positive if its IoU is higher than 0.5.\\ \begin{table} \centering \caption{Comparison of the proposed network architecture with other approaches: the MoNuSeg dataset has been used for these experiments.} \label{tArch} \begin{tabular}{lllll} \hline\hline & AJI & Dice & PQ & Haus. \\ \hline UNet & 0.762 & 0.821 & 0.774 & 8.73 \\ FCN & 0.741 & 0.798 & 0.756 & 9.5 \\ SegNet & 0.785 & 0.846 & 0.794 & 8.33 \\ NuClick w/o MS blocks & 0.798 & 0.860 & 0.808 & 6.11 \\ NuClick + 1 MS block & 0.817 & 0.889 & 0.820 & 5.51 \\ NuClick + 2 MS blocks & 0.830 & 0.905 & 0.829 & 4.93 \\ NuClick + 3 MS blocks & 0.834 & 0.912 & \textbf{0.838} & \textbf{4.05} \\ NuClick + 4 MS blocks & \textbf{0.835} & \textbf{0.914} & 0.838 & 4.05 \\ \hline\hline \end{tabular} \end{table} For gland segmentation, we use the F1-score, Dice\textsubscript{Obj}, and the Hausdorff distance (\cite{sirinukunwattana2017gland}). True positives in the F1-score are based on thresholded IoU. Dice\textsubscript{Obj} is the average of Dice values over all objects, and the Hausdorff distance here is the same as that used for nuclei.
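For concreteness, DQ, SQ, and PQ over instance masks can be sketched as follows (a simplified NumPy illustration with greedy IoU matching; not the official evaluation code):

```python
import numpy as np

def panoptic_quality(gt_masks, pred_masks, iou_thresh=0.5):
    """Compute (PQ, DQ, SQ) for lists of boolean instance masks.

    A prediction matches a GT instance when IoU > iou_thresh; for
    thresholds >= 0.5 such a match is unique, so greedy matching works.
    """
    matched_ious, matched_pred = [], set()
    for g in gt_masks:
        for j, p in enumerate(pred_masks):
            if j in matched_pred:
                continue
            inter = np.logical_and(g, p).sum()
            union = np.logical_or(g, p).sum()
            iou = inter / union if union else 0.0
            if iou > iou_thresh:
                matched_ious.append(iou)
                matched_pred.add(j)
                break
    tp = len(matched_ious)
    fp = len(pred_masks) - tp          # unmatched predictions
    fn = len(gt_masks) - tp            # unmatched ground truths
    dq = tp / (tp + 0.5 * fp + 0.5 * fn) if (tp + fp + fn) else 0.0
    sq = float(np.mean(matched_ious)) if matched_ious else 0.0
    return dq * sq, dq, sq
```

Here DQ reduces to the $F_1$-score of the IoU-based matching, and SQ averages the IoUs over the true-positive pairs, so PQ jointly rewards detection and segmentation quality.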
\subsection{Network Selection} In this section, we investigate the effect of multi-scale blocks in the NuClick network and compare its performance with other popular architectures. Ablations of various component choices in the NuClick network architecture are shown in \cref{tArch}. We tested our architecture with up to 4 multi-scale (MS) blocks and observed that adding more than 3 MS blocks does not contribute significantly to the performance. It can be observed that our architecture outperforms three other popular methods (UNet by \cite{ronneberger2015u}, SegNet by \cite{badrinarayanan2017segnet}, and FCN by \cite{long2015fully}). Even with no MS blocks, our model is still better than all baseline models, which shows the positive effect of using residual blocks. We opt for 3 MS blocks in the final NuClick architecture because this offers competitive performance with a smaller network size. \subsection{Validation Experiments} The performance of the NuClick framework for interactive segmentation of nuclei, cells, and glands is reported in \cref{tValNuc,tValCell,tValGland}, respectively. For nuclei and cells, the centroids of the GT masks were used to create the inclusion and exclusion maps, whereas for gland segmentation, the morphological skeletons of the GT masks were utilized. For comparison purposes, the performance of other supervised and unsupervised interactive segmentation methods is included as well.
In \cref{tValNuc,tValCell}, the reported methods are: Region Growing (\cite{adams1994seeded}), which iteratively determines whether the neighbouring pixels of an initial seed point should belong to the initial region (in this experiment, the seed point is the GT mask centroid and the process for each nucleus/cell is run for 30 iterations); Active Contour (\cite{chan2001active}), which iteratively evolves the level set of an initial region based on internal and external forces (the initial contour in this experiment is a circle of radius 3 pixels centered at the GT mask centroid); marker-controlled watershed (\cite{parvati2008image}), based on the watershed algorithm, in which the number of segments and the segmentation output depend on the initial seed points (in this experiment, unlike \cite{parvati2008image}, which generates seed points automatically, we used GT mask centroids as seed points); interactive Fully Convolutional Network--iFCN (\cite{xu2016deep}), a supervised DL-based method that transforms user clicks into distance maps that are concatenated to the RGB channels and fed into a fully convolutional neural network (FCN); and Latent Diversity--LD (\cite{li2018interactive}), which uses two CNNs to generate the final segmentation: the first model takes the image and the distance transforms of two dots (inside and outside the object) to generate several diverse initial segmentation maps, and the second model selects the best segmentation among them. \begin{table} \centering \caption{Performance of different interactive segmentation methods for nuclear segmentation on the validation set of the MoNuSeg dataset} \label{tValNuc} \begin{tabular}{lccccl} \hline\hline Method & AJI & Dice & SQ & PQ & Haus.
\\ \hline Watershed & 0.189 & 0.402 & 0.694 & 0.280 & 125 \\ Region Growing & 0.162 & 0.373 & 0.659 & 0.241 & 95 \\ Active Contour & 0.284 & 0.581 & 0.742 & 0.394 & 67 \\ iFCN & 0.806 & 0.878 & 0.798 & 0.782 & 7.6 \\ LD & 0.821 & 0.898 & 0.815 & 0.807 & 5.8 \\ NuClick & \textbf{0.834 } & \textbf{0.912} & \textbf{0.839} & \textbf{0.838} & \textbf{4.05} \\ \hline\hline \end{tabular} \end{table} \begin{table} \centering \caption{Performance of different interactive segmentation methods for cell segmentation on test set of WBC dataset} \label{tValCell} \begin{tabular}{lccccl} \hline\hline & AJI & Dice & SQ & PQ & Haus. \\ \hline Watershed & 0.153 & 0.351 & 0.431 & 0.148 & 86 \\ Region Growing & 0.145 & 0.322 & 0.414 & 0.129 & 71 \\ Active Contour & 0.219 & 0.491 & 0.522 & 0.198 & 50 \\ iFCN & 0.938 & 0.971 & 0.944 & 0.944 & 9.51 \\ LD & 0.943 & 0.978 & 0.949 & 0.949 & 8.33 \\ NuClick & \textbf{0.954} & \textbf{0.983} & \textbf{0.958} & \textbf{0.958} & \textbf{7.45} \\ \hline\hline \end{tabular} \end{table} \begin{table} \centering \setlength{\tabcolsep}{2.5pt} \caption{Performance of different interactive segmentation methods for gland segmentation on test sets of GLaS dataset} \label{tValGland} \begin{tabular}{lccc|ccc} \hline\hline & \multicolumn{3}{c|}{TestA} & \multicolumn{3}{c}{TestB} \\ \hline & F1 & Dice\textsubscript{Obj} & Haus. & F1 & Dice\textsubscript{Obj} & Haus. 
\\ \hline GrabCut & 0.462 & 0.431 & 290 & 0.447 & 0.412 & 312 \\ Deep GrabCut & 0.886 & 0.827 & 51 & 0.853 & 0.810 & 57 \\ DEXTR & 0.911 & 0.841 & 43 & 0.904 & 0.829 & 49 \\ Mask-RCNN & \multicolumn{1}{l}{0.944} & 0.875 & 35 & \multicolumn{1}{l}{0.919} & 0.856 & 41 \\ BIFseg & \multicolumn{1}{l}{0.958} & 0.889 & 28 & \multicolumn{1}{l}{0.921} & 0.864 & 38 \\ NuClick & \textbf{1.000} & \textbf{0.956} & \textbf{15} & \textbf{1.000} & \textbf{0.951} & \textbf{21} \\ \hline\hline \end{tabular} \end{table} In \cref{tValGland}, the reported methods are: GrabCut (\cite{rother2004grabcut}), which iteratively updates an appearance model within the bounding box provided by the user; Deep GrabCut (\cite{xu2017deep}), which converts the bounding box provided by the user into a distance map that is concatenated to the RGB image as the input of a deep learning model; DEXTR (\cite{maninis2018deep}), a supervised deep learning based method mentioned in \cref{Sec-inter}, which accepts the four extreme points of a gland as input (extreme points are extracted from each object's GT mask); and a Mask-RCNN based approach proposed by \cite{agustsson2019interactive}, where the bounding box is also used as the input to the Mask-RCNN; \cite{agustsson2019interactive} also added an instance-aware loss measured at the pixel level to the Mask-RCNN loss. We also compared our method for gland segmentation with BIFseg (\cite{wang2018interactive}), which requires the user to crop the object of interest by drawing a bounding box around it. The cropped region is then resized and fed into a resolution-preserving CNN to predict the output segmentation. \cite{wang2018interactive} also used a refinement step, which is not included in our implementation. For the GrabCut, Deep GrabCut, BIFseg, and Mask-RCNN approaches, the bounding box for each object is selected based on its GT mask.
For the iFCN and LD methods, the positive point (a point inside the object) is selected as the centroid of each nucleus, and the negative click is a random point outside the desired object. \begin{figure*}[h!] \centering \includegraphics[width =0.95\textwidth]{generalizability.pdf} \caption{\textbf{Generalizability of NuClick:} The first row shows results of NuClick on the CPM dataset for nuclei segmentation (where the network was trained on the MoNuSeg dataset). The second row illustrates two samples of the gland segmentation task from the CRAG dataset, where the model was trained on the GlaS dataset. The solid stroke line around each object outlines the ground-truth boundary for that object, the overlaid transparent mask is the segmentation region predicted by NuClick, and points or squiggles indicate the guiding signal provided for interactive segmentation. (Best viewed in color)} \label{fig:generalize} \end{figure*} Based on \cref{tValNuc}, NuClick achieved an AJI score of 0.834, a Dice value of 0.912, and a PQ value of 0.838, outperforming all other methods for nuclear segmentation on the MoNuSeg dataset. The performance gap between NuClick and the unsupervised methods is very large (for example, in comparison with the Watershed method, NuClick achieves a 0.645 higher AJI). The extremely low evaluation values achieved by the unsupervised methods indicate that they are not suitable for the intricate task of nuclear segmentation, even when fed with GT markers. \cref{tValNuc} also includes iFCN (\cite{xu2016deep}), a deep learning based method trained on clicked dots inside and outside of objects. NuClick performs better than iFCN on all of the AJI, Dice, and PQ metrics, by margins of 2.8\%, 3.4\%, and 5.6\%, respectively, which is a considerable boost. For the other CNN-based method in \cref{tValNuc}, the LD method, NuClick's advantage across all metrics is also evident. The same performance trend can be seen for both the cell and gland segmentation tasks in \cref{tValCell,tValGland}.
For the cell segmentation task, NuClick was able to segment touching WBCs from the synthesized dense blood smear images almost perfectly. Our proposed method achieves AJI, Dice, and PQ values of 0.954, 0.983, and 0.958, respectively, which indicates the remarkable performance of NuClick in cell segmentation. Validation results of our algorithm on the two test sets of the GlaS dataset (TestA and TestB) are reported in \cref{tValGland}, alongside the results of 4 supervised deep learning based algorithms and an unsupervised method (GrabCut). The markers used for GrabCut are the same as those used for NuClick. Based on \cref{tValGland}, our proposed method is able to outperform all other methods for gland segmentation on both the TestA and TestB datasets by a large margin. For TestB, NuClick achieves an F1-score of 1.0, a Dice similarity coefficient of 0.951, and a Hausdorff distance of 21, which, compared to the best performing supervised method (BIFseg), represents improvements of 7.9\%, 8.7\%, and 17 pixels, respectively. The F1-score of 1.0 achieved by the NuClick framework in the gland segmentation experiment indicates that all of the desired objects in all images are segmented well enough. As expected, unsupervised methods like GrabCut perform much worse than supervised methods for gland segmentation. Quantitatively, our proposed framework shows 55.3\% and 53.9\% improvements compared to GrabCut in terms of the F1-score and Dice similarity coefficient. The advantage of NuClick over other methods mainly lies in its squiggle-based guiding signal, which is able to efficiently mark the extent of big, complex, and hollow objects. This is further discussed in \cref{Sec:discussion}. Methods like DEXTR, BIFseg, and Mask-RCNN are not evaluated for interactive nucleus/cell segmentation, because they would be cumbersome to apply in this case.
These methods need four click points on the boundaries of each nucleus/cell (or a bounding box drawn around each of them), which is still labour-intensive as there may be a large number of nuclei/cells within an image. The segmentation quality for three samples is depicted in \cref{fig:validation}. In this figure, the first, second, and third rows show samples drawn from the MoNuSeg, WBC, and GLaS validation sets, respectively. The left column of \cref{fig:validation} shows the original images, and the right column shows the same images with GT boundaries, segmentation masks, and guiding signals (markers) overlaid on them. The guiding signals for nuclei and cell segmentation are simple clicks inside each object (indicated by diamond-shaped points on the images), while for glands (the third row) the guiding signals are squiggles. In all exemplars, the extent of the prediction masks (indicated by the overlaid transparent colored regions) is very close to the GT boundaries (indicated by solid strokes around each object). \section{Discussions} \label{Sec:discussion} In order to gain better insights into the performance and capabilities of NuClick, we designed several evaluation experiments, which are discussed in this section. First we assess the generalizability of the proposed framework, then we discuss how it can adapt to new domains without further training, and after that the reliability of the NuClick output segmentation is studied. Moreover, the sensitivity of the output segmentation to variations in the guiding signals is also addressed in the following subsections. \begin{figure*}[ht!] \centering \includegraphics[width =0.95\textwidth]{adaptability.pdf} \caption{\textbf{Domain adaptability of NuClick:} nuclei from unseen domains (a Pap Smear sample in the first row and an IHC stained sample in the second row) are successfully segmented using NuClick trained on the MoNuSeg dataset.
In all images, the solid stroke line around each object outlines the ground truth boundary for that object (except for the IHC samples, for which ground truth masks are unavailable), the overlaid transparent mask is the segmentation region predicted by NuClick, and points indicate the provided guiding signal for interactive segmentation. (Best viewed in color)} \label{fig:adapt} \end{figure*} \subsection{Generalization study} To show the generalizability of NuClick across unseen datasets, we designed an experiment in which NuClick is trained on the training set of a specific dataset and then evaluated on the validation set of another dataset within the same domain. The availability of several labeled nuclei and gland datasets allows us to demonstrate the generalizability of our proposed framework across different datasets and tasks. To assess the generalizability for nuclei segmentation, two experiments were done. In one experiment, NuClick was trained on the training set of the MoNuSeg dataset and then evaluated on the validation set of the CPM dataset. In the other experiment the process was reversed: the CPM training set was used for training NuClick and the MoNuSeg test set was used for evaluation. The evaluation results of this study are reported in the first two rows of \cref{tGeneralize}. From this table we can conclude that NuClick generalizes well across datasets, because it attains high evaluation metric values when predicting images from a dataset that was not included in its training. For example, when NuClick is trained on the MoNuSeg training set, the Dice and SQ evaluation metrics obtained on the CPM validation set are 0.908 and 0.821, respectively, which are very close to the values reported for the MoNuSeg validation set using the same model, i.e., a Dice of 0.912 and an SQ of 0.839 in \cref{tValNuc}. This closeness for two different datasets using the same model supports our claim about the generalizability of NuClick.
Similarly, to test the generalizability of NuClick for the gland segmentation task, it was trained on one gland dataset and tested on validation images from another gland dataset. As the GlaS test set is divided into testA and testB, when NuClick is trained on CRAG it is tested on testA and testB of GlaS (named GlaSA and GlaSB in \cref{tGeneralize}). The high values of the Dice\textsubscript{Obj} metric and the low Hausdorff distances also support the generalizability of the NuClick framework for the gland segmentation task. To provide visual evidence for this claim, we illustrate two nuclear segmentation samples from the CPM validation set (obtained using a model trained on the MoNuSeg dataset) and two gland segmentation samples from the CRAG validation set (obtained using a model trained on the GLaS dataset) in \cref{fig:generalize}. In all cases NuClick was able to successfully segment the desired objects with high accuracy. In all images of \cref{fig:generalize}, different overlaid colors correspond to different object instances, solid stroke lines indicate GT boundaries, transparent color masks show the predicted segmentation regions, and the point or squiggle markers represent the guiding signals for interactive segmentation. \begin{table} \centering \setlength{\tabcolsep}{2pt} \caption{Results of the generalization study across different datasets for interactive nuclei and gland segmentation} \begin{tabular}{lllcc|cc} \hline\hline & Train & Test & Dice & SQ & Dice\textsubscript{Obj} & Haus.
\\ \hline \multirow{2}{*}{Nuclei} & MoNuSeg & CPM & 0.908 & 0.821 & - & - \\ & CPM & MoNuSeg & 0.892 & 0.811 & - & - \\ \hdashline \multirow{2}{*}{Gland} & GLaS & CRAG & \multicolumn{1}{l}{-} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l}{0.932} & \multicolumn{1}{l}{31} \\ & CRAG & GLaSA & \multicolumn{1}{l}{-} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l}{0.944} & \multicolumn{1}{l}{28} \\ & CRAG & GLaSB & \multicolumn{1}{l}{-} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l}{0.938} & \multicolumn{1}{l}{30} \\ \hline\hline \end{tabular} \label{tGeneralize} \end{table} \subsection{Domain adaptation study} To assess the performance of NuClick on unseen samples from different data domains, we trained it on the MoNuSeg dataset, which contains labeled nuclei from histopathology images, and then used the trained model to segment nuclei in cytology and immunohistochemistry (IHC) samples. In the cytology case, a dataset of 42 FoVs was captured from 10 different Pap Smear samples using a CELLNAMA LSO5 slide scanner and a 20x objective lens. These samples contain overlapping cervical cells, inflammatory cells, mucus, blood cells, and debris. Our desired objects in these images are the nuclei of cervical cells. All nuclei of cervical cells in the available dataset of Pap Smear images were manually segmented with the help of a cytotechnologist. Having the GT segmentation for the nuclei, we can use their centroids to apply NuClick (performing pseudo-interactive segmentation) and also evaluate the results quantitatively, as reported in \cref{tAdapt}. The high values of the evaluation metrics reported in \cref{tAdapt} show how well NuClick performs on images from a new unseen domain like Pap Smear samples. Some visual examples are also provided in \cref{fig:adapt} to support this claim. As illustrated in the first row of \cref{fig:adapt}, NuClick was able to segment touching nuclei (in very dense cervical cell groups) from Pap Smear samples with high precision.
It is able to handle nuclei of different sizes and various background appearances. \begin{table} \centering \caption{Performance of the NuClick framework on segmenting nuclei in images from an unseen domain (Pap Smear)} \begin{tabular}{lccccc} \hline\hline Method & AJI & Dice & SQ & DQ & PQ \\ \hline NuClick & 0.934 & 0.965 & 0.933 & 0.997 & 0.931 \\ \hline\hline \end{tabular} \label{tAdapt} \end{table} For the IHC images, we utilized NuClick to delineate lymphocytes. The dataset we used for this section is a set of 441 patches of size $256\times256$ extracted from the LYON19 dataset. LYON19 is a scientific challenge on lymphocyte detection in images of IHC samples. In this dataset, samples are taken from breast, colon or prostate organs and are stained with an antibody against CD3 or CD8 \cite{lyon19} (lymphocyte membranes appear brownish in the resulting staining). However, the LYON19 challenge organizers did not release any instance segmentation/detection GT alongside the image ROIs. Therefore, we cannot assess the performance of NuClick segmentation on this dataset quantitatively. Nevertheless, the quality of the segmentation is highly satisfactory, based on the results depicted for two random cases in the second row of \cref{fig:adapt}. The example segmentations in \cref{fig:adapt} were obtained from clicks of a non-expert user inside lymphocytes (based on their imperfect assumptions). As shown in \cref{fig:adapt}, NuClick is able to adequately segment touching nuclei even in extremely cluttered areas of images from an unseen domain. These resulting instance masks were actually used to train an automatic nuclei instance segmentation network, SpaNet \cite{koohbanani2019nuclear}, which helped us achieve the first rank in the LYON19 challenge. In other words, we approached the problem of lymphocyte detection as an instance segmentation problem by taking advantage of our own generated nuclei instance segmentation masks \cite{jahanifar2019nuclick}.
This also confirms the reliability of the NuClick-generated prediction masks, which is discussed in more detail in the following subsection. \subsection{Segmentation Reliability Study} An important aspect of an interactive method for collecting segmentations is how reliable the generated segmentation masks are. To check the reliability of the generated masks, we use them for training segmentation models. We can then compare the performance of models trained on the generated masks with the performance of models trained on the GTs. This experiment was done for the nuclear segmentation task, where we trained three well-known segmentation networks (U-Net \cite{ronneberger2015u}, SegNet \cite{badrinarayanan2017segnet}, and FCN8 \cite{long2015fully}) on GT and NuClick-generated masks separately and evaluated the trained models on the validation set. The results of these experiments are reported in \cref{tRelaiable}. Note that when we evaluate the segmentation on the MoNuSeg dataset, the NuClick model that generated the masks was trained on the CPM dataset. Therefore, in that case the NuClick framework did not see any of the MoNuSeg images during its training. As shown in \cref{tRelaiable}, there is a negligible difference between the metrics achieved by models trained on GT masks and those trained on NuClick-generated masks. In one instance, when testing on the MoNuSeg dataset, the Dice and SQ values of the FCN8 model trained on NuClick\textsubscript{CPM} annotations are even 0.01 and 0.006 (insignificantly) higher than those of the model trained on GT annotations, respectively. This might be due to the greater uniformity of the NuClick-generated annotations, which eliminates the negative effect of the inter-annotator variation present in the GT annotations. Therefore, the dense annotations generated by NuClick are reliable enough to use in practice. Considering the cost of manual annotation, it is more efficient to use annotations obtained from NuClick to train models.
\begin{table} \centering \setlength{\tabcolsep}{1pt} \caption{Results of segmentation reliability experiments} \begin{tabular}{lcc|cc|cc|cc} \hline\hline \multirow{3}{*}{} & \multicolumn{4}{c|}{Result on MoNuSeg test set} & \multicolumn{4}{c}{Result on CPM test set} \\ \cline{2-9} & \multicolumn{2}{c|}{GT} & \multicolumn{2}{c|}{NuClick\textsubscript{CPM}} & \multicolumn{2}{c|}{GT} & \multicolumn{2}{c}{NuClick\textsubscript{MoNuSeg}} \\ \cline{2-9} & Dice & SQ & Dice & SQ & Dice & SQ & Dice & SQ \\ \hline Unet & 0.825 & 0.510 & 0.824 & 0.503 & 0.862 & 0.596 & 0.854 & 0.584 \\ SegNet & 0.849 & 0.531 & 0.842 & 0.527 & 0.889 & 0.644 & 0.881 & 0.632 \\ FCN8 & 0.808 & 0.453 & 0.818 & 0.459 & 0.848 & 0.609 & 0.836 & 0.603 \\ \hline\hline \end{tabular} \label{tRelaiable} \end{table} \begin{figure*}[h!] \centering \includegraphics[width =\textwidth]{sensitivityAnalysis2.pdf} \caption{Example results of NuClick, highlighting variations in the user input. The first and second rows show the predictions of NuClick for different positions of clicks inside objects. The third and fourth rows demonstrate the predictions of NuClick in the presence of various shapes of squiggles. The solid stroke line around each object outlines the ground truth boundary for that object, the overlaid transparent mask is the segmentation region predicted by NuClick, and points or squiggles indicate the provided guiding signal for interactive segmentation. (Best viewed in color, zoom in to clearly see boundaries)} \label{fig:sens} \end{figure*} \begin{figure*}[ht!] \includegraphics[width =\textwidth]{extremeCases.pdf} \caption{\textbf{Extreme cases for nuclei and glands:} clumped nuclei in H\&E and IHC images (a-d) and irregular glands/tumor regions in cancerous colon and prostate images (e-h) are shown. In all images, the solid stroke line around each object outlines the ground truth boundary for that object (except for d and e, where the ground truth masks are unavailable), the overlaid transparent mask is the segmentation region predicted by NuClick, and points or squiggles indicate the provided guiding signal for interactive segmentation. (Best viewed in color, zoom in to clearly see boundaries)} \label{fig:extreme} \end{figure*} \subsection{Sensitivity to Guiding Signals} The performance of an interactive segmentation algorithm highly depends on the quality of the user input markers. In other words, an ideal interactive segmentation tool must be as robust as possible against errors in the input annotations. For instance, in nucleus or cell segmentation, an ideal segmentation tool should delineate the boundaries of a nucleus well as long as the user's click falls inside the nucleus region, i.e., the clicked point does not need to be located exactly at the center of the desired nucleus. \begin{table} \centering \caption{Effect of disturbing click positions by an amount $\sigma$ on NuClick outputs for nuclei and cell segmentation} \begin{tabular}{lccc|ccc} \hline\hline & \multicolumn{3}{c|}{Nuclei} & \multicolumn{3}{c}{Cells (WBCs)} \\ \hline $\sigma$ & AJI & Dice & PQ
& AJI & Dice & PQ \\ \hline 1 & 0.834 & 0.912 & 0.838 & 0.954 & 0.983 & 0.958 \\ 3 & 0.834 & 0.911 & 0.837 & 0.954 & 0.983 & 0.958 \\ 5 & 0.832 & 0.911 & 0.835 & 0.953 & 0.983 & 0.957 \\ 10 & 0.821 & 0.903 & 0.822 & 0.953 & 0.982 & 0.957 \\ 20 & - & - & - & 0.950 & 0.979 & 0.955 \\ 50 & - & - & - & 0.935 & 0.961 & 0.943 \\ \hline\hline \end{tabular} \label{tSensitivity} \end{table} To assess the sensitivity of NuClick to variations in the guiding signal, we designed an experiment for the nuclei and cell segmentation applications in which the location of the guiding point in the inclusion map is perturbed by adding a value of ${\sigma}$ to the location of the centroids. We repeat this experiment for different values of $\sigma$ for both the nuclei and cell segmentation applications and report the results in \cref{tSensitivity}. For nuclear segmentation, jittering the location by up to 10 pixels is investigated. Disturbing the click position from the centroid by up to 5 pixels does not considerably degrade the segmentation results. However, when the jittering amount is $\sigma=10$, all evaluation metrics drop by 1\% or more. This reduction does not necessarily imply that NuClick is sensitive to the click positions; the fall in performance may be due to the fact that the radius of some nuclei is less than 10 pixels, so jittering the click position by 10 pixels causes it to fall outside the nucleus region, confusing NuClick when segmenting the desired small nucleus. Even these reduced metrics still compare favorably with the metrics of the other methods reported in \cref{tValNuc}. The same trend can be seen for the cell segmentation task in \cref{tSensitivity}. However, for the cells in our dataset we were able to increase the jittering range (up to 50 pixels) because in the WBC dataset white blood cells have a diameter of at least 80 pixels.
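The centroid-jittering protocol described above can be sketched as follows; this is a hypothetical re-implementation (the uniform offset range and the clipping to image bounds are our assumptions, not the authors' exact procedure):

```python
import random

def jitter_click(cx, cy, sigma, width, height):
    """Displace a centroid click by a random offset of up to `sigma` pixels
    along each axis, clipped to the image bounds (our assumed protocol)."""
    jx = cx + random.randint(-sigma, sigma)
    jy = cy + random.randint(-sigma, sigma)
    return (min(max(jx, 0), width - 1), min(max(jy, 0), height - 1))

random.seed(0)
# Jitter a centroid at (50, 60) by sigma = 10 inside a 256 x 256 patch
clicks = [jitter_click(50, 60, 10, 256, 256) for _ in range(5)]
```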
As one can see, the segmentation results are very robust against the applied distortion of the click position. Changing the click location by 50 pixels causes a considerable drop in performance, which can be attributed to the same reason as in the nuclei case, i.e., the amount of jittering is bigger than the radius of some small cells. Unfortunately, we cannot quantitatively analyze the sensitivity of NuClick to squiggle changes, because such changes are not easily measurable/parameterizable. However, for two example histology images we show the effect of changing the guiding squiggles on the resulting segmentation in \cref{fig:sens}. In this figure, the effect of changing the click position for two examples of nuclei segmentation and two examples of cell segmentation is also visualized. It is obvious from the exemplars in \cref{fig:sens} that NuClick successfully works with different shapes of squiggles as the guiding signal. Squiggles can be short, in the middle or adjacent regions of the desired gland, or they can be long enough to cover the main diameter of the gland. They can be continuous curves covering all sections and indentations of the gland geometry, or separate discrete lines that indicate different sections of a big gland. They can even take arbitrary numeral or letter shapes, like the example in the last row of \cref{fig:sens}. In all cases, it is obvious that NuClick is quite robust against variations in the guiding signals, which is due to the techniques that we incorporated during the training of NuClick (randomizing the inclusion map). It is worth mentioning that we conducted experiments with training NuClick for gland segmentation using extreme points and polygons as guiding signals. Even with a considerable number of points on the gland boundary, or polygons with a large number of vertices (filled or hollow), the network failed to converge during the training phase.
However, we observed that even simple or small squiggles are able to provide enough guiding information for the model to converge fast. We also conducted another experiment to assess the sensitivity of NuClick to the exclusion map. In other words, we want to see whether eliminating the exclusion map has any effect on the segmentation performance of NuClick. To this end, we evaluate the performance of NuClick for nuclei segmentation on the MoNuSeg dataset in the absence of the exclusion map. In this situation, the input to the network has 4 channels (RGB plus the inclusion map). The network is trained from scratch on the MoNuSeg training set with these new considerations and then evaluated on the MoNuSeg validation set. The results of this experiment are reported in \cref{tExclud}. Based on \cref{tExclud}, the performance of NuClick drops significantly when the exclusion map is missing. This is because there are many overlapping nuclei in this dataset, and without the exclusion map the network has no clue about the neighboring nuclei when dealing with a nucleus that belongs to a nuclei clump. \subsection{Extreme Cases} To investigate the effectiveness of NuClick when dealing with extreme cases, outputs of NuClick for images with challenging objects (high grade cancer in different tissue types) are shown in \cref{fig:extreme}. For example, in \cref{fig:extreme}a-c, touching nuclei with unclear edges from patches of cancerous samples have been successfully segmented by NuClick. Additionally, \cref{fig:extreme}d shows a promising segmentation of densely clustered blood cells in a blurred IHC image from another domain (extracted from the LYON19 dataset (\cite{lyon19})). In \cref{fig:extreme}e-f, images of glands with irregular shapes and their overlaid predictions are shown. As long as the squiggle covers the extent of the gland, we can achieve a good segmentation. A noteworthy property of the NuClick framework is its capability to segment objects with holes in them.
In \cref{fig:extreme}e-f, although the margins of the glands are very unclear and some glands have holes in their shape, NuClick can successfully recognize the boundaries of each gland. Further, if the squiggle encompasses a hole, it will be excluded from the final segmentation, whereas if the squiggle covers part of the holes in the middle of glands, they will be included in the segmentation. For instance, in \cref{fig:extreme}g, a complex and relatively large gland is well delineated by NuClick. Note that this gland contains a hole region which belongs to the gland, and it is correctly segmented as part of the gland because the guiding signal covers that part. This is a powerful and very useful property that methods based on extreme points or bounding boxes, like \cite{maninis2018deep} and \cite{wang2018interactive}, do not offer. We also show a cancerous prostate image (extracted from the PANDA dataset (\cite{bulten2020automated})) in \cref{fig:extreme}h, where the tumor regions are outlined by NuClick. Overall, these predictions show the capability of NuClick in providing reasonable annotations in scenarios that are challenging even for humans to annotate. Note that for the images in \cref{fig:extreme}d,h the ground truth segmentation masks are not available; therefore they are not shown. \begin{table} \centering \setlength{\tabcolsep}{2.5pt} \caption{Performance of NuClick on the MoNuSeg dataset with and without the exclusion map} \begin{tabular}{llllll} & AJI & Dice & SQ & DQ & PQ \\ \hline\hline NuClick with ex. map & 0.834 & 0.912 & 0.839 & 0.999 & 0.838 \\ NuClick without ex. map & 0.815 & 0.894 & 0.801 & 0.972 & 0.778 \end{tabular} \label{tExclud} \end{table} \subsection{User Correction} In some cases, the output of the model might not be correct; therefore, there should be a possibility for the user to modify wrong predictions.
This is mostly a matter of interface implementation: when the output is not as good as expected, the user can modify the supervisory signal by extending the squiggles, changing their shape, or moving the click positions. After the modification has been applied, the new supervisory signal is fed to the network to obtain a new segmentation. \section{Conclusions} In this paper, we have presented NuClick, a CNN-based framework for interactive segmentation of objects in histology images. We proposed a simple and robust way of providing user input which minimizes human effort for obtaining dense annotations of nuclei, cells, and glands in histology. We showed that our method is generalizable enough to be used across different datasets and that it can be used even for annotating objects from completely different data distributions. The applicability of NuClick has been shown across 6 datasets, where NuClick obtained state-of-the-art performance in all scenarios. NuClick can also be used for segmenting other objects like nerves and vessels, which are less complex and less heterogeneous compared to glands. We believe that NuClick can be used as a useful plug-in for whole slide annotation programs like ASAP (\cite{litjens2017asap}) or QuPath (\cite{bankhead2017qupath}) to ease the labeling process for large-scale datasets. \bibliographystyle{unsrt}
\section{Introduction} A graph $G(V,E)$, in which $V$ is the vertex set describing the dataset and $E$ is the edge set representing the relationships between data points, can capture the structure of the data distribution based on a metric (for example, Euclidean distance, cosine distance or Kullback-Leibler divergence). As a metric representation, the graph plays a vital role in pattern recognition. In particular, recent graph convolutional networks (GCN) have shown promising results in many applications, for example, human activities \cite{manessi2020dynamic} \cite{zhao2019semantic} \cite{zeng2019graph}, facial action unit detection \cite{liu2020relation}, text classification \cite{yao2019graph}, and node classification \cite{kipf2016semi} \cite{hu2019hierarchical} \cite{lee2018higher} \cite{qin2018spectral}\cite{lin2019structure}. However, the graph structure is fixed in GCN methods, which limits the application of GCN when the graph structure is missing. Furthermore, a fixed graph structure is usually measured by a single metric, which cannot fit the data distribution well. Therefore, graph learning (GL) based on GCN \cite{jiang2019semi} \cite{jiang2019glmnet} \cite{jiang2019unified} \cite{chen2019deep} \cite{hu2019feature} has been proposed for dynamically mining the graph structure of data. Graph learning faces a key question: learning the structural relationships of the data distribution. Existing methods focus on how to update the graph structure with a metric constraint so as to optimize an objective function \cite{du2020low}\cite{Lin2017Dynamic} or a neural network \cite{jiang2019semi} \cite{jiang2019glmnet} \cite{jiang2019unified} \cite{chen2019deep} \cite{hu2019feature}. The metric constraint is usually defined in one of two ways. One way is similarity metric learning, which learns a global graph structure from all data samples. This approach often focuses on representing the differences between classes. The other way assigns different weights to different data points within a neighborhood (as in graph attention networks (GAT) \cite{velivckovic2017graph}) to capture the local graph structure, and tends to describe the differences within classes.
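A fixed graph of the kind mentioned at the start of this section, built once from a single metric such as the Euclidean distance, can be sketched as follows (an illustrative Gaussian kernel; the function name and bandwidth are our own choices, not part of the proposed method):

```python
import math

def affinity_matrix(X, bandwidth=1.0):
    """Dense global graph: A[i][j] = exp(-||x_i - x_j||^2 / bandwidth)."""
    n = len(X)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            A[i][j] = math.exp(-d2 / bandwidth)
    return A

# Nearby points receive weights close to 1, distant points close to 0
X = [[0.0, 0.0], [0.0, 0.1], [3.0, 3.0]]
A = affinity_matrix(X)
```

Because the metric and bandwidth are fixed in advance, such a graph cannot adapt to the data distribution, which is exactly the limitation that graph learning addresses.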
The global and local graphs complement each other for classification. However, existing methods ignore the joint effect of these graphs, as well as the relationship between the global and local graph, in GL-based semi-supervised classification. Therefore, deep graph learning (DGL) is proposed to jointly consider these graph structures for semi-supervised classification. Our main contributions include two points. One is the construction of deep graph learning networks that dynamically capture the global graph by similarity metric learning and the local graph by attention learning. Compared with existing methods, the difference here is the joint consideration of the different graphs to further discover the distribution structure of the data. The other is the fusion of the global and local graph by hierarchical progressive learning for semi-supervised classification. In contrast to existing methods, the difference here is the dynamic mining of the relationship between these graphs to better balance the opposing tendencies of the different graphs toward inter-class and intra-class description. Figure \ref{fig-1} shows the difference between the global and local graph, and the modules of DGL. \begin{figure*}[ht] \begin{center} \includegraphics[width=1\linewidth]{1.png} \end{center} \vspace{-0.2in} \caption{The illustration of deep graph learning for mining the global and local graph.} \label{fig-1} \end{figure*} \section{Related Works} \label{rws} Graph learning tries to automatically construct the graph structure from data. Compared with fixed similarity metrics, GL can dynamically assign the neighbors of each data point and automatically compute the weights between data points. Therefore, GL can obtain better accuracy than a fixed graph description based on similarity metrics \cite{kang2019robust}. According to the learning framework, recent GL methods can be divided into two categories: methods based on non-neural networks and methods based on neural networks.
The first category comprises methods based on non-neural networks, which attempt to build an optimization function based on a graph generation hypothesis. For example, in terms of the completeness hypothesis, self-expressiveness methods \cite{huang2019auto} \cite{liu2013robust} \cite{kang2018self} regard the linear coefficient matrix between data points as the graph matrix, achieving impressive performance in clustering and semi-supervised learning; in accordance with the Laplacian graph spectrum, graph learning based on spectral constraints \cite{kumar2019unified} complements the relationships between data by incorporating prior structural knowledge; on the basis of sparse sampling theory, sparse graph learning \cite{hu2019multi} \cite{chen2019adaptive}\cite{pei2019graph} captures few graph connections by adjusting a sparsity parameter to improve classification performance. The strength of these methods lies in the relevance between graph generation and its constraints, and in parameterizing the graph generation process to dynamically control graph learning. However, because the model is usually fixed by a specific function, it is difficult to mine graph structure information from the raw data through iterative boosting. The second category comprises approaches based on neural networks, which often simulate the interaction between graph edges and nodes to propagate graph structure information by GCN \cite{jiang2019semi}. In these networks, there are two types of methods for dynamically computing the graph structure. The first type aggregates node and edge information to update the weights between nodes layer by layer.
For example, the hierarchical graph convolutional network (H-GCN) \cite{hu2019hierarchical} repeatedly aggregates similar nodes into hypernodes and combines one- or two-hop neighborhood information to enlarge the receptive field of each node for encoding graph structure information; the edge-labeling graph neural network (EGNN) \cite{kim2019edge} \cite{gong2018exploiting} updates the graph weights by iteratively aggregating the node representations and the edge labels, directly exploiting both intra-cluster similarity and inter-cluster dissimilarity. The second type of method computes a similarity metric between pairwise nodes in some layer. For instance, the graph learning-convolutional network (GLCN) \cite{jiang2019semi} optimizes the graph structure by learning a transformation of feature differences; dimension-wise separable graph convolution (DSGC) \cite{li2019attributed} uses the relationships among node attributes to complement node relations for representation learning via a covariance metric; graph learning neural networks (GLNNs) \cite{gao2019exploring} iteratively explore graph optimization from both data and tasks via a graph Laplacian regularizer; deep iterative and adaptive learning for graph neural networks (DIAL-GNN) \cite{chen2019deep} treats graph structure learning as a dynamic cosine similarity metric learning problem. These methods mostly consider either the global structure from all data samples (the second type) or the local structure from neighboring data (the first type). However, the hierarchical progressive relationship between the global and local graph is ignored. From the above discussion, methods based on non-neural networks show a clearer causal relationship between the graph structure and the specific optimization function, while methods based on neural networks demonstrate a stronger ability to learn the graph structure under an uncertain optimization objective.
This makes the latter more suitable for further mining the graph structure. Moreover, a similarity metric on pairwise nodes in the graph usually connects directly with the raw data, which makes it easy to fit its distribution. Therefore, our proposed method focuses on graph learning based on GCN to find the hierarchical progressive relationship between the global and the local graph. \section{Deep graph learning} Deep graph learning (DGL) consists of three modules: the similarity metric learning module (S-module), the attention learning module (A-module) and the fusion learning module (F-module), shown in figure \ref{fig-2}. The similarity metric learning module computes the graph structure to dynamically update the global structural relationships based on the raw or transformed data. The attention learning module reassigns the weights of the neighbors of each data point to find the significant local structure based on the global structure. The fusion learning module integrates the node representations based on the different graph structures for semi-supervised classification.
\begin{figure*}[htp] \begin{center} \includegraphics[width=1\linewidth]{2.png} \end{center} \vspace{-0.2in} \caption{The network framework of deep graph learning, which contains the similarity metric learning module (S-module), the attention learning module (A-module) and the fusion learning module (F-module). $X_{N \times D}$, $X_{N \times D_{0}}^{0}$, $X_{N \times D_{1}}^{1}$, $X_{N \times D_{2}}^{2}$, $X_{N \times D_{3}}^{3}$ and $X_{N \times D_{4}}^{4}$ are the node representations of each layer (the superscript of a node representation is the serial number of the layer, and the subscript shows the dimension of the representation); $A_{N\times N}^{0}$ and $A_{N\times N}^{1}$ are the adjacency matrices of the different layers (the superscript of an adjacency matrix is the serial number of the layer, and the subscript shows its dimension); $Loss_{G}$ is the graph learning loss; $Loss_{C}$ is the classification loss; $Loss$ is the total loss of the whole network.} \label{fig-2} \end{figure*} \subsection{Similarity metric learning module} Given a data matrix $X\in R^{N\times D}$ ($N$ is the number of samples, and $D$ is the dimension of each sample), let $X$ be the node representation of a graph $G$. We expect to learn $G$ from $X$ for semi-supervised classification. In this module, three types of layers are used to stack the network structure. \textbf{The first type of layer} is a linear projection layer that reduces the dimension of the raw data features. Because the dimension of the raw data often leads to a high computational complexity, a dimension-reducing linear transformation is implemented in this layer. \begin{align} \label{fun-1} \begin{aligned} &X^{0}_{N\times D_{0}}=XP, \end{aligned} \end{align} here, $P\in R^{D \times D_{0}}$ is the linear transformation matrix, and $X^{0}_{N\times D_{0}}$ stands for the output of the linear projection layer.
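As a quick toy illustration (not the training code), the projection in Eq. \ref{fun-1} is a single matrix product; the sizes and random data below are hypothetical placeholders, and $P$ is trained in the actual network:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, D0 = 6, 20, 5            # hypothetical sizes: 6 nodes, features reduced from 20 to 5 dims
X = rng.normal(size=(N, D))    # raw node representation X
P = rng.normal(size=(D, D0))   # trainable linear transformation matrix P

X0 = X @ P                     # X^0 = X P
assert X0.shape == (N, D0)
```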
\textbf{The second type of layer} is a graph learning layer that computes the weight between pairs of nodes. The adjacency relationship $A^{l}_{N\times N}(i,j)$ ($i$ and $j$ are the indices of different node representations in graph $G$; $l$ is the serial number of the layer) describes this relationship weight and is defined as follows. \begin{align} \label{fun-2} \begin{aligned} A^{l}_{N\times N}(i,j)= &\frac{A(i,j)\exp(ReLU((\alpha^{l})^{T}|x_{i}^{l}-x_{j}^{l}|))}{\sum^{N}_{j}A(i,j)\exp(ReLU((\alpha^{l})^{T}|x_{i}^{l}-x_{j}^{l}|))}, \end{aligned} \end{align} here, $A$ is the normalized adjacency matrix from the initial data source; if $A$ is not available, $A(i,j)=1$. $ReLU(f)=\max(0,f)$ ($f$ is any variable or matrix) ensures the nonnegativity of $A^{l}_{N\times N}(i,j)$. $x_{i}^{l}\in R^{D_{l} \times 1}$ and $x_{j}^{l}\in R^{D_{l} \times 1}$ are the transposes of different rows of the input $X^{l}_{N\times D_{l}}$ of the current layer. Equation \ref{fun-2} normalizes each row of $A^{l}_{N\times N}$. $\alpha^{l}\in R^{D_{l} \times 1}$ is a weight parameter vector that measures the significance of the relationship between nodes. Graph learning mainly trains the network to learn $\alpha^{l}$ ($l\in\{0,1\}$). \textbf{The third type of layer} is a graph convolution layer that propagates information over the graph. Following GCN \cite{kipf2016semi}, we define the graph convolution layer as follows. \begin{align} \label{fun-3} \begin{aligned} &X_{N \times D_{l+1}}^{l+1}=ReLU(\hat{D^{l}}^{-1/2}\hat{A^{l}}\hat{D^{l}}^{-1/2}X_{N \times D_{l}}^{l}W^{l}), \end{aligned} \end{align} here, $\hat{A^{l}}=I_{N\times N}+A^{l}_{N\times N}$ ($I_{N\times N}\in R^{N\times N}$ is the identity matrix); $\hat{D^{l}}(i,i)=\sum_{j}A^{l}_{N\times N}(i,j)$; $W^{l}\in R^{D^{l}\times D^{l+1}}$ is the trainable weight matrix of the current layer.
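A minimal numpy sketch of Eqs. \ref{fun-2} and \ref{fun-3}, one graph learning layer followed by one graph convolution layer. The sizes and random inputs are placeholders; in DGL, $\alpha^{l}$ and $W^{l}$ are learned by training:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def graph_learning_layer(Xl, alpha, A):
    """Eq. (2)-style layer: masked, row-normalized pairwise weights."""
    diff = np.abs(Xl[:, None, :] - Xl[None, :, :])      # |x_i - x_j|, shape (N, N, D_l)
    scores = A * np.exp(relu(diff @ alpha))             # A(i,j) * exp(ReLU(alpha^T |x_i - x_j|))
    return scores / scores.sum(axis=1, keepdims=True)   # normalize each row

def graph_conv_layer(Xl, Al, W):
    """Eq. (3)-style propagation with self-loops and degree normalization."""
    A_hat = np.eye(len(Al)) + Al                        # A_hat = I + A^l
    d = Al.sum(axis=1)                                  # D_hat(i,i) as defined in the text
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ Xl @ W)

rng = np.random.default_rng(0)
N, Dl, Dl1 = 5, 4, 3
Xl = rng.normal(size=(N, Dl))
A = np.ones((N, N))                                     # no initial graph: A(i,j) = 1
Al = graph_learning_layer(Xl, rng.normal(size=Dl), A)
Xl1 = graph_conv_layer(Xl, Al, rng.normal(size=(Dl, Dl1)))
assert np.allclose(Al.sum(axis=1), 1.0) and Xl1.shape == (N, Dl1)
```

Note that because the exponential is strictly positive, the learned rows of $A^{l}$ always sum to one even without an initial graph.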
The similarity metric learning module stacks these three types of layers from input to output: one linear projection layer, followed by two pairs of a graph learning layer and a graph convolution layer. In particular, stacking the graph learning layer and graph convolution layer twice yields a deep network that mines the global graph structure of node representations at different scales. \subsection{Attention learning module} In the whole network, the global structure generated by the similarity metric learning module can initially build local structure information in the neighborhood of each node representation. However, this local structure information comes only from the pairwise relevance between the current node and all other nodes, and weakens the discrimination of the importance of nodes within the neighborhood of the current node. Therefore, we construct an attention learning module that aggregates neighborhood information to further capture the local structure based on the sparsely constrained neighborhood of the global structure (we call this process hierarchical progressive learning). The original GAT \cite{velivckovic2017graph} can only process binary weights between pairs of node representations; that is, its attention mechanism is built on node neighborhoods weighted by binary values. In contrast, the weights of the learned graph are real-valued, which helps determine each node's neighborhood while incorporating the sparsity constraints of the global graph structure. Therefore, the operation of the attention mechanism is defined as follows. \begin{align} \label{fun-4} \begin{aligned} &X_{N \times D_{l+1}}^{l+1}=ReLU(\beta^{l}X_{N \times D_{l}}^{l}W^{l}), \end{aligned} \end{align} here, $\beta^{l}\in R^{N \times N}$ is the attention coefficient matrix, in which any entry $\beta^{l}(i,j)$ is directly related to $X_{N \times D_{l}}^{l}(i,:)$, $X_{N \times D_{l}}^{l}(j,:)$ and $A_{N\times N}^{l}(i,j)$.
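A numpy sketch of one such attention layer, combining Eq. \ref{fun-4} with the coefficient construction that the text defines next (Eqs. \ref{fun-5} and \ref{fun-6}). The sizes and random inputs are placeholders; the split of $\gamma$ into two halves simply evaluates $\gamma^{T}[h_{i}\|h_{j}]$ as two inner products:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def graph_attention_layer(Xl, W, gamma, Al):
    """One attention layer over the real-valued learned graph A^l."""
    H = Xl @ W                                    # X^l W^l, shape (N, D_{l+1})
    d = H.shape[1]
    # gamma^T [h_i || h_j] decomposes into two inner products over the halves of gamma
    scores = (H @ gamma[:d])[:, None] + (H @ gamma[d:])[None, :]
    beta_hat = np.exp(relu(scores)) * Al          # mask by the learned graph weights
    beta = beta_hat / beta_hat.sum(axis=1, keepdims=True)   # row-normalize
    return relu(beta @ H)                         # X^{l+1} = ReLU(beta^l X^l W^l)

rng = np.random.default_rng(1)
N, Dl, Dl1 = 5, 4, 3
Xl = rng.normal(size=(N, Dl))
Al = rng.random(size=(N, N)) + 0.1                # real-valued positive graph weights
out = graph_attention_layer(Xl, rng.normal(size=(Dl, Dl1)), rng.normal(size=2 * Dl1), Al)
assert out.shape == (N, Dl1) and (out >= 0).all()
```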
Therefore, we define $\beta^{l}(i,j)$ by information aggregation over the graph as follows. \begin{align} \label{fun-5} \begin{aligned} &\hat{\beta}^{l}(i,j)=\exp{(ReLU(\gamma^{T}[X_{N \times D_{l}}^{l}(i,:)W^{l}\|X_{N \times D_{l}}^{l}(j,:)W^{l}]))}A_{N\times N}^{l}(i,j), \end{aligned} \end{align} \begin{align} \label{fun-6} \begin{aligned} &\beta^{l}(i,j)=\hat{\beta}^{l}(i,j)/\sum^{N}_{k}\hat{\beta}^{l}(i,k), \end{aligned} \end{align} here, $\|$ is the concatenation operator that forms a column vector; $\gamma\in R^{2D_{l+1}\times 1}$ is the aggregation weight, which is shared across the dimensions of all pairwise node aggregations. In the attention learning module, we process the different scales of information from the global graph structure with two graph attention layers to further mine the local graph structure, which provides a credible basis for describing intra-class structure. \subsection{Fusion learning module} The fusion learning module includes two parts: a fusion learning layer for the different node representations, and the loss function for training the network. \textbf{The first part} is the fusion learning layer, which handles the different dimensions of the node representations and the weight balance between the different modules (the similarity metric learning module and the attention learning module). From figure \ref{fig-2}, the inputs of this module are $X_{N \times D_{2}}^{2}$, the output of the graph convolution layer, and $X_{N \times D_{3}}^{3}$ and $X_{N \times D_{4}}^{4}$, the outputs of the two graph attention layers. Because the network performs classification, we unify the output dimensions of the different modules ($D_{2}=D_{3}=D_{4}=C$, where $C$ is the number of classes). Therefore, we define the fusion learning layer as follows.
\begin{align} \label{fun-7} \begin{aligned} &Z=Softmax(\eta_{1}X_{N \times D_{2}}^{2}+\eta_{2}X_{N \times D_{3}}^{3}+\eta_{3}X_{N \times D_{4}}^{4}), \end{aligned} \end{align} here $\eta=[\eta_{1},\eta_{2},\eta_{3}]$ is the fusion coefficient vector, which encodes the importance of the different node representations. \textbf{The second part} is the definition of the loss function, which determines the tendency of the network learning. The total loss $Loss$ contains the classification loss $Loss_{C}$ and the graph loss $Loss_{G}$. In semi-supervised classification, we construct the classification loss on the labeled data with the cross-entropy loss, which evaluates the error between the predicted labels $Z$ and the real labels $Y$. Therefore, $Loss_{C}$ is defined as follows. \begin{align} \label{fun-8} \begin{aligned} &Loss_{C}=-\sum_{k\in S}\sum_{c=1}^{C}Y_{kc}\ln Z_{kc}, \end{aligned} \end{align} here, $S$ is the labeled data set; $Y_{kc}$ indicates that the $k$th labeled sample belongs to the $c$th class; $Z_{kc}$ is the prediction of the $k$th labeled sample for the $c$th class. In graph learning, we compute the adjacency matrices $A_{N\times N}^{0}$ and $A_{N\times N}^{1}$, which describe the graph at different scales. To constrain the properties (sparsity and consistency) of these adjacency matrices, we define the graph loss $Loss_{G}$ as follows. \begin{align} \label{fun-9} \begin{aligned} Loss_{G}=&\lambda_{1}(\mathrm{tr}(X^{T}_{N\times D}(I-A_{N\times N}^{0})X_{N\times D})+\mathrm{tr}(X^{T}_{N\times D}(I-A_{N\times N}^{1})X_{N\times D}))\\ &+\lambda_{2}(\|A_{N\times N}^{0}\|_{F}^{2}+\|A_{N\times N}^{1}\|_{F}^{2})+\lambda_{3}\|A_{N\times N}^{0}-A_{N\times N}^{1}\|_{F}^{2}, \end{aligned} \end{align} here, the first term enforces that $X_{N\times D}$ matches the topology of the graph through a graph Laplacian regularizer (the trace reduces it to a scalar); the second term guarantees the sparsity of the adjacency matrices; the third term ensures the consistency between the adjacency matrices.
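The two loss terms can be sketched as follows; this is a toy check rather than the training loop. We take the trace of the Laplacian term so it is a scalar, the $\lambda$ values follow the settings used later in the experiments, and all data below are random placeholders:

```python
import numpy as np

def classification_loss(Z, Y, labeled):
    """Cross-entropy over the labeled set S: -sum Y_kc * ln Z_kc."""
    return -np.sum(Y[labeled] * np.log(Z[labeled]))

def graph_loss(X, A0, A1, lam1=0.1, lam2=0.01, lam3=0.001):
    """Graph loss: Laplacian smoothness + sparsity + consistency terms."""
    I = np.eye(len(X))
    smooth = np.trace(X.T @ (I - A0) @ X) + np.trace(X.T @ (I - A1) @ X)
    sparse = np.sum(A0 ** 2) + np.sum(A1 ** 2)           # squared Frobenius norms
    consistent = np.sum((A0 - A1) ** 2)
    return lam1 * smooth + lam2 * sparse + lam3 * consistent

rng = np.random.default_rng(2)
N, D, C = 6, 4, 3
X = rng.normal(size=(N, D))
A0 = rng.random(size=(N, N)); A0 /= A0.sum(axis=1, keepdims=True)  # row-stochastic graphs
A1 = rng.random(size=(N, N)); A1 /= A1.sum(axis=1, keepdims=True)
logits = rng.normal(size=(N, C))
Z = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)     # softmax predictions
Y = np.eye(C)[rng.integers(0, C, size=N)]                          # one-hot labels
total = classification_loss(Z, Y, labeled=[0, 1]) + graph_loss(X, A0, A1)
assert np.isfinite(total)
```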
Therefore, the total loss $Loss$ is the sum of $Loss_{C}$ and $Loss_{G}$. \begin{align} \label{fun-10} \begin{aligned} Loss=Loss_{C}+Loss_{G}. \end{aligned} \end{align} \section{Experiment} \subsection{Datasets} To evaluate the proposed DGL method, we carry out experiments on one generated dataset and five benchmark datasets, which include three paper-citation network datasets (Cora, Citeseer and Pubmed \cite{sen2008collective}) and two image datasets (MNIST \cite{lecun1998gradient} and Cifar10 \cite{krizhevsky2009learning}). The synthesized dataset contains 4 classes with 1000 randomly synthesized samples each, i.e., 4000 samples in total. In the experiments, the samples of each class are divided into four groups, with 1/100/899, 2/100/898, 3/100/897 and 4/100/896 samples for the training/validation/testing sets. Table \ref{tab-1} shows the details. The Cora dataset includes 7 classes with 2708 grouped publications as nodes, each represented by a one-hot vector in terms of the presence or absence of a word in the learned dictionary, together with their link-relationship graph. The Citeseer dataset contains 6 classes with 3327 scientific papers described in the same way as the Cora dataset, together with their undirected graph. The Pubmed dataset has 3 classes with 19717 diabetes-related publications represented by term frequency-inverse document frequency (TF-IDF) \cite{wu2019comprehensive} features, together with their relevance graph. On these datasets, the experiments follow the configuration of the previous work \cite{kipf2016semi}: we select 500 samples for validation and 1000 samples for testing. Table \ref{tab-1} shows the specific information of these datasets. The Cifar10 dataset has 10 classes consisting of 50000 natural images \cite{krizhevsky2009learning}; the size of each RGB image is $32\times 32$. We select 10000 images (1000 images per class) to evaluate the proposed DGL, and use Resnet-20 \cite{he2016deep} to extract a feature for each image.
The MNIST dataset contains 10 classes of hand-written digits. We also select 10000 images (1000 images per class) to assess the proposed DGL. Each image feature is a 784-dimensional vector generated from the gray image. Table \ref{tab-1} demonstrates the statistics of these datasets. \begin{table*}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Dataset statistics and the extracted features in the experiments.} \label{tab-1} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{0.8cm}p{1.0cm}p{1.2cm}p{1.0cm}p{1.5cm}p{1.2cm}p{0.8cm}} \hline \bfseries Datasets & \bfseries \tabincell{l}{Classes \\number} & \bfseries \tabincell{l}{Training\\Number} &\bfseries \tabincell{l}{Validating\\Number}& \bfseries \tabincell{l}{Testing\\Number} & \bfseries \tabincell{l}{Total number \\of images} & \bfseries \tabincell{l}{Feature\\dimension}& \bfseries \tabincell{l}{Initial\\graph} \\ \hline \hline \tabincell{l}{Generated\\ data}&$4$ &$4\sim16$& $400$ &$3596\sim3584$& $4000$ &200&No\\ \hline Cora &$7$ &$140$& $500$ &$1000$& $2708$ &1433&Yes\\ \hline Citeseer&$6$ &$120$& $500$ &$1000$& $3327$ &3703&Yes\\ \hline Pubmed &$3$ &$59$& $500$ &$1000$& $19717$ &500&Yes\\ \hline Cifar10 &$10$ &$1000\sim8000$& $1000$ &$8000\sim1000$& $50000$ &128& No\\ \hline MNIST &$10$ &$1000\sim8000$& $1000$ &$8000\sim1000$& $60000$ &784& No\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Experimental configuration} In the experiments, we set $D_{0}=70$, $D_{1}=30$ and $D_{2}=D_{3}=D_{4}$, which equals the number of classes. The maximum number of training episodes of the proposed DGL is $200$. The parameters $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$ are set to $0.1$, $0.01$ and $0.001$, respectively. On the Cifar10 and MNIST datasets, we select 8 groups of data with different training-validating-testing splits (1000-1000-8000, 2000-1000-7000, 3000-1000-6000, 4000-1000-5000, 5000-1000-4000, 6000-1000-3000, 7000-1000-2000 and 8000-1000-1000).
For the different datasets, the validation set is mainly used to optimize hyper-parameters, which include the dropout rate for all layers, the number of hidden units and the learning rate. \subsection{Generated data experiment} To observe the generated data, we reduce the multi-dimensional data to two dimensions for visualization by t-SNE \cite{maaten2008visualizing}. Figure \ref{fig-21} shows the distribution of the generated data in two dimensions, together with the experimental results of four methods: the proposed DGL, GLSGCN, GLGCN \cite{jiang2019semi} and GLGAT. GLSGCN and GLGAT are constructed as extensions of the graph learning method in section \ref{extending}. Although only a few data points are labeled, DGL can still learn the structural distribution of the data and obtain promising results. Therefore, we conduct the following experiments to further evaluate the proposed DGL on the real datasets. \begin{figure*}[ht] \begin{center} \includegraphics[width=1\linewidth]{3.png} \end{center} \vspace{-0.2in} \caption{The structure distribution of the two-dimensional generated data in (a) and the contrast experiment of the graph learning methods in (b).} \label{fig-21} \end{figure*} \subsection{Comparison with baseline approaches} \label{baseline} In this section, we implement the proposed DGL and the baseline methods, which are GCN \cite{kipf2016semi}, GAT \cite{velivckovic2017graph}, simplifying graph convolutional networks (SGCN) \cite{wu2019simplifying} and GLGCN \cite{jiang2019semi}. GCN constructs the basic architecture of the graph representation and classification model with a localized first-order approximation of spectral graph convolutions. GAT learns different weights for different nodes in a neighborhood to build an attention mechanism over local data. SGCN eliminates the redundant complexity and computation of GCN by removing the nonlinear units and collapsing the operations between layers.
GLGCN combines graph learning and graph convolution to optimize the global graph structure. Compared with these methods, DGL not only mines the global graph structure with graph learning layers at different scales, but also captures the local graph structure with graph attention layers at different scales. Furthermore, DGL integrates the node representations from the different graph structures with a fusion learning layer. Table \ref{tab-2} shows that DGL achieves the best performance among these methods. The experimental results in parentheses for GCN, GAT and GLGCN come from the literature \cite{jiang2019semi}, while those for SGCN stem from the literature \cite{wu2019simplifying}. \begin{table}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Comparison of the proposed DGL method with baseline methods (GCN, GAT, SGCN and GLGCN) for semi-supervised classification, average per-class accuracy (\%) is reported based on the same data configurations in the different datasets. The results in parentheses come from the corresponding literature.
All methods use the initial graph to compute the model.} \label{tab-2} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{2.5cm}p{2.5cm}p{2.5cm}} \hline \bfseries Method &\bfseries Cora &\bfseries Citeseer &\bfseries Pubmed \\ \hline \hline GCN\cite{kipf2016semi} & $81.1\pm0.4 (82.9)$ &$71.0\pm0.2 (70.9)$ & $78.9\pm0.5 (77.9)$ \\ \hline GAT\cite{velivckovic2017graph} & $81.4\pm0.8 (83.2)$ &$71.8\pm0.3 (71.0)$ & $78.1\pm0.4 (78.0)$ \\ \hline SGCN\cite{wu2019simplifying} & $82.3\pm0.5 (81.0)$ &$71.4\pm0.3(71.9)$ & $78.3\pm0.2(78.9)$ \\ \hline GLGCN\cite{jiang2019semi} & $82.2\pm0.7(85.5)$ &$72.0\pm0.2(72.0)$ & $78.3\pm0.1(78.3)$ \\ \hline\hline DGL & $\textbf{84.8}\pm0.7$ &$\textbf{74.2}\pm0.5$ & $\textbf{80.2}\pm0.2$ \\ \hline \end{tabular} \end{center} \end{table} \subsection{Comparing with State-of-the-arts} \label{sota} Graph learning with neural networks shows promising results for semi-supervised classification. In section \ref{rws}, we summarize the graph-learning methods based on neural networks and identify their bias toward either the global or the local graph structure. Therefore, we construct a new graph-learning method based on neural networks to further mine the graph structure and balance the bias of these methods. We compare the proposed DGL with H-GCN \cite{hu2019hierarchical}, GLNNs \cite{gao2019exploring}, DIAL-GNN \cite{chen2019deep} and GLGCN \cite{jiang2019semi}. The differences among these methods are detailed in section \ref{rws}. Table \ref{tab-3} shows the best performance of the different methods: GLGCN on Cora, and DGL on Citeseer and Pubmed. These methods obtain comparable performance on these datasets. To further contrast the difference between GLGCN and the proposed DGL, we carry out the graph learning experiment in the following section.
\begin{table}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Comparison of the proposed DGL method with state-of-the-art methods (H-GCN, GLNNs, DIAL-GNN and GLGCN) for semi-supervised classification, average per-class accuracy (\%) is reported based on the same data configurations in the different datasets. The results in parentheses come from the corresponding literature. All methods use the initial graph to compute the model.} \label{tab-3} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{2.5cm}p{2.5cm}p{2.5cm}} \hline \bfseries Method &\bfseries Cora &\bfseries Citeseer &\bfseries Pubmed \\ \hline \hline H-GCN\cite{hu2019hierarchical} & $(84.5\pm0.5)$ &$(72.8\pm0.5)$ & $(79.8\pm0.4)$ \\ \hline GLNNs\cite{gao2019exploring} & $(83.4)$ &$(72.4)$ & $(76.7)$ \\ \hline DIAL-GNN\cite{chen2019deep} & $(84.5\pm 0.3)$ &$(74.1\pm0.2)$ & $Null$ \\ \hline GLGCN\cite{jiang2019semi} & $\textbf{(85.5)}$ &$(72.0)$ & $(78.3)$ \\ \hline\hline DGL & $84.8\pm0.7$ &$\textbf{74.2}\pm0.5$ & $\textbf{80.2}\pm0.2$ \\ \hline \end{tabular} \end{center} \end{table} \subsection{Comparing with the extended graph learning methods} \label{extending} In this section, we involve four methods: GLGCN \cite{jiang2019semi}, the proposed DGL and two extended methods (graph learning based on SGCN (GLSGCN) and graph learning based on GAT (GLGAT)). We use the basic idea of GLGCN to construct GLSGCN and GLGAT. \textbf{GLSGCN} includes a linear projection layer, which reduces the dimension of the original data to $70$, a graph learning layer, and subsequent layers that are the same as in SGCN \cite{wu2019simplifying}. \textbf{GLGAT} likewise adds a linear projection layer to reduce the dimension of the data and a graph learning layer, with the other layers configured as in GAT \cite{velivckovic2017graph}.
In these experiments, none of the citation datasets use the initial graph, and the graph structure is learned from the original data by the different methods: GLSGCN and GLGCN tend to capture the global structure; GLGAT shallowly mines the global and local structure; the proposed DGL deeply considers both structures for semi-supervised classification. Table \ref{tab-4} demonstrates that the performance of the proposed DGL is better than that of the other graph learning methods. It indicates that deep mining and fusion of the different structures can significantly improve the performance of semi-supervised classification. GLSGCN shows worse results than the other methods on the Cora and Citeseer datasets, while it achieves results close to the other methods on the Pubmed dataset. The main reason is that the simplified structure of GLSGCN negatively influences graph structure learning when there are more categories. Table \ref{tab-5} shows the experimental results on the MNIST image dataset. For the different training sets, DGL outperforms the other graph learning methods. The same holds for Cifar10 in Table \ref{tab-6}. For all methods, increasing the training data is not a sufficient condition for better performance, because of the random data selection. \begin{table}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Comparison of the proposed DGL method with the related graph learning methods (GLGCN, GLSGCN, GLGAT and DGL) for semi-supervised classification, average per-class accuracy (\%) is reported based on the same data configurations in the citation datasets (Cora, Citeseer and Pubmed).
None of the methods use the initial graph to compute the model.} \label{tab-4} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{2.5cm}p{2.5cm}p{2.5cm}} \hline \bfseries Method &\bfseries Cora &\bfseries Citeseer &\bfseries Pubmed \\ \hline \hline GLSGCN & $55.9\pm0.6$ &$49.6\pm0.3$ & $74.8\pm0.5$ \\ \hline GLGCN\cite{jiang2019semi} & $60.1\pm 0.3$ &$64.6\pm 0.2$ & $73.3\pm 0.5$ \\ \hline GLGAT & $63.1\pm 0.4$ &$65.5\pm0.2$ & $75.3\pm 0.2$ \\ \hline\hline DGL & $\textbf{65.3}\pm0.3$ &$\textbf{68.9}\pm0.4$ & $\textbf{76.9}\pm0.5$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Comparison of the proposed DGL method with the related graph learning methods (GLGCN, GLSGCN, GLGAT and DGL) for semi-supervised classification, average per-class accuracy (\%) is reported for the different training/validation/testing splits on the MNIST image dataset. The initial graph is not available for computing the model.} \label{tab-5} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{2.1cm}p{2.1cm}p{2.1cm}p{2.1cm}} \hline \bfseries Method &\bfseries \tabincell{l}{MNIST\\1000/1000/8000} &\bfseries \tabincell{l}{MNIST\\2000/1000/7000} &\bfseries \tabincell{l}{MNIST\\3000/1000/6000} &\bfseries\tabincell{l}{MNIST\\4000/1000/5000} \\ \hline \hline GLSGCN & $37.7\pm0.2$ &$38.7\pm0.4$ & $39.5\pm0.1$ & $39.6\pm0.2$\\ \hline GLGCN\cite{jiang2019semi} & $84.9\pm 0.4$ &$85.9\pm 0.2$ & $85.2\pm 0.3$& $88.0\pm 0.2$ \\ \hline GLGAT & $86.3\pm 0.5$ &$89.9\pm0.2$ & $89.7\pm 0.4$ & $89.2\pm 0.6$\\ \hline\hline DGL & $\textbf{89.1}\pm0.6$ &$\textbf{91.4}\pm0.2$ & $\textbf{91.1}\pm0.3$ & $\textbf{92.4}\pm0.5$\\ \hline \bfseries Method &\bfseries\tabincell{l}{MNIST\\5000/1000/4000} &\bfseries\tabincell{l}{MNIST\\6000/1000/3000} &\bfseries\tabincell{l}{MNIST\\7000/1000/2000} &\bfseries\tabincell{l}{MNIST\\8000/1000/1000} \\ \hline
\hline GLSGCN & $39.4\pm0.3$ &$39.3\pm0.4$ & $38.9\pm0.3$ & $42.7\pm0.5$\\ \hline GLGCN\cite{jiang2019semi} & $87.9\pm 0.4$ &$86.4\pm 0.2$ & $88.0\pm 0.5$ & $88.9\pm 0.7$\\ \hline GLGAT & $89.7\pm 0.3$ &$89.1\pm0.7$ & $89.6\pm 0.4$ & $90.2\pm 0.5$\\ \hline\hline DGL & $\textbf{91.1}\pm0.5$ &$\textbf{91.3}\pm0.2$ & $\textbf{91.6}\pm0.6$ & $\textbf{92.4}\pm0.4$\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Comparison of the proposed DGL method with the related graph learning methods (GLGCN, GLSGCN, GLGAT and DGL) for semi-supervised classification, average per-class accuracy (\%) is reported for the different training/validation/testing splits on the Cifar10 image dataset. The initial graph is not available for computing the model.} \label{tab-6} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{2.1cm}p{2.1cm}p{2.1cm}p{2.1cm}} \hline \bfseries Method &\bfseries \tabincell{l}{Cifar10\\1000/1000/8000} &\bfseries \tabincell{l}{Cifar10\\2000/1000/7000} &\bfseries \tabincell{l}{Cifar10\\3000/1000/6000} &\bfseries\tabincell{l}{Cifar10\\4000/1000/5000} \\ \hline \hline GLSGCN & $63.5\pm0.4$ &$66.4\pm0.3$ & $71.5\pm0.5$ & $72.6\pm0.2$\\ \hline GLGCN\cite{jiang2019semi} & $84.2\pm 0.2$ &$79.7\pm 0.5$ & $81.1\pm 0.8$& $86.8\pm 0.4$ \\ \hline GLGAT & $86.5\pm 0.8$ &$87.4\pm0.5$ & $87.5\pm 0.6$ & $88.0\pm 0.3$\\ \hline\hline DGL & $\textbf{87.5}\pm0.5$ &$\textbf{88.8}\pm0.3$ & $\textbf{88.8}\pm0.6$ & $\textbf{88.8}\pm0.4$\\ \hline \bfseries Method &\bfseries\tabincell{l}{Cifar10\\5000/1000/4000} &\bfseries\tabincell{l}{Cifar10\\6000/1000/3000} &\bfseries\tabincell{l}{Cifar10\\7000/1000/2000} &\bfseries\tabincell{l}{Cifar10\\8000/1000/1000} \\ \hline \hline GLSGCN & $63.7\pm0.5$ &$73.3\pm0.3$ & $75.5\pm0.6$ & $71.0\pm0.3$\\ \hline GLGCN\cite{jiang2019semi} & $83.7\pm 0.9$ &$80.0\pm 0.5$ & $84.5\pm 0.7$ & $80.0\pm 0.7$\\ \hline GLGAT & $85.2\pm 0.5$
&$86.3\pm0.4$ & $87.5\pm 0.6$ & $87.0\pm 0.3$\\ \hline\hline DGL & $\textbf{87.0}\pm0.2$ &$\textbf{88.6}\pm0.5$ & $\textbf{89.0}\pm0.4$ & $\textbf{89.0}\pm0.3$\\ \hline \end{tabular} \end{center} \end{table} \subsection{Ablation experiments} \label{ablation} In this section, we delete some parts from DGL to analyze the function of the different components. In the proposed DGL, 'deep' has two meanings. One is the information mining from the global structure to the local structure (from the S-module to the A-module of DGL in figure \ref{fig-2}). Therefore, we delete the A-module to simulate the situation without local structure, which is called \textbf{DGL-non-local}. The other is the metric learning of the convolution information at different scales (the two graph learning layers of DGL in figure \ref{fig-2}). Consequently, we delete the second graph learning layer to imitate shallow metric learning, which is called \textbf{DGL-shallow-metric}. If DGL does not consider the local graph structure and only cares about the metric learning of single-layer information, DGL degrades to GLGCN. So the intrinsic difference between DGL and GLGCN is the deep mining and learning of graph structure information. Table \ref{tab-7} shows that the performance of DGL is superior to that of the other methods. Specifically, local graph structure mining by the attention mechanism complements global structure capture by metric learning, so the performance of DGL-shallow-metric is better than that of GLGCN. Deep metric learning obtains richer structure information from the node representations at different scales, hence the classification accuracy of DGL-non-local outperforms that of GLGCN.
The performance of DGL-shallow-metric is obviously better than that of DGL-non-local, which demonstrates that hierarchical progressive learning from the global structure to the local structure has a more positive effect than metric learning over node representations at different scales. Furthermore, when both factors are considered in constructing DGL, DGL obtains promising results for semi-supervised classification. \begin{table}[!ht] \small \renewcommand{\arraystretch}{1.0} \caption{Comparison of the proposed DGL method with GLGCN and the ablated methods (DGL-non-local and DGL-shallow-metric) for semi-supervised classification, average per-class accuracy (\%) is reported based on the different datasets. The initial graph is not available for computing the model.} \label{tab-7} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{lp{2.5cm}p{2.5cm}p{2.5cm}} \hline \bfseries Method &\bfseries Cora &\bfseries Citeseer &\bfseries Pubmed \\ \hline \hline GLGCN\cite{jiang2019semi} & $60.1\pm 0.3$ &$64.6\pm 0.2$ & $73.3\pm 0.5$ \\ \hline DGL-non-local & $62.5\pm0.5$ &$65.9\pm0.2$ & $75.4\pm0.3$ \\ \hline DGL-shallow-metric & $63.7\pm 0.2$ &$66.2\pm0.5$ & $75.8\pm 0.4$ \\ \hline\hline DGL & $\textbf{65.3}\pm0.3$ &$\textbf{68.9}\pm0.4$ & $\textbf{76.9}\pm0.5$ \\ \hline \bfseries Method &\bfseries \tabincell{l}{MNIST\\1000/1000/8000} &\bfseries\tabincell{l}{Cifar10\\1000/1000/8000} \\ \hline \hline GLGCN\cite{jiang2019semi} & $84.9\pm 0.4$ &$84.2\pm 0.2$ \\ \hline DGL-non-local & $85.7\pm0.3$ &$85.2\pm0.5$ \\ \hline DGL-shallow-metric & $87.6\pm 0.6$ &$86.9\pm0.3$ \\ \hline\hline DGL & $\textbf{89.1}\pm0.6$ &$\textbf{87.5}\pm0.5$ \\ \hline \end{tabular} \end{center} \end{table} \subsection{Graph learning visualization} To directly observe the graph learning process, we reduce the multi-dimensional node data to two dimensions for visualization by t-SNE \cite{maaten2008visualizing}.
We show the node data distribution at different episodes (1, 50, 100, 150) on the Cifar10 image dataset, for which the training/validation/testing data numbers are set to $1000/1000/8000$. Figure \ref{fig-3} shows the various structure distributions at the different learning stages. At episode 1, the data distribution presents a hybrid state of the classes; at episode 50, a few categories can be separated from all classes; at episode 100, more categories can subsequently be separated from the whole set of classes; at episode 150, most categories can be distinguished from each other. We observe that the global and local structure distributions gradually show the aggregation state of the classes. Figure \ref{fig-4} shows how the loss changes as the episodes increase for DGL and GLGCN. The training and testing losses of DGL are clearly lower than those of GLGCN, which shows that the DGL model obtains better performance than the GLGCN model in training and testing for semi-supervised classification. \begin{figure*}[ht] \begin{center} \includegraphics[width=1\linewidth]{4.png} \end{center} \vspace{-0.2in} \caption{The various structure distributions at the different learning stages of DGL on the Cifar10 dataset. (a) is the structure distribution at episode $1$, (b) at episode $50$, (c) at episode $100$ and (d) at episode $150$. The horizontal and vertical axes stand for the different dimensions of the data.} \label{fig-3} \end{figure*} \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\linewidth]{5.png} \end{center} \vspace{-0.2in} \caption{The loss of DGL and GLGCN during training and testing on the Cifar10 dataset.} \label{fig-4} \end{figure*} \subsection{Experimental results analysis} \label{analysis} In the experiments, eleven methods are utilized to evaluate the different aspects of the proposed DGL. These methods can be divided into four groups with different purposes.
The first group includes four baseline methods (GCN \cite{kipf2016semi}, GAT \cite{velivckovic2017graph}, SGCN \cite{wu2019simplifying} and GLGCN \cite{jiang2019semi} in section \ref{baseline}) for understanding the motivation of the proposed DGL. The second group contains four state-of-the-art methods (H-GCN \cite{hu2019hierarchical}, GLNNs \cite{gao2019exploring}, DIAL-GNN \cite{chen2019deep} and GLGCN \cite{jiang2019semi} in section \ref{sota}) for analyzing the advantages and disadvantages of these graph learning methods relative to the proposed DGL. The third group explores two methods (GLSGCN and GLGAT in section \ref{extending}) based on the main idea of GLGCN \cite{jiang2019semi} for extending graph learning methods based on GCN \cite{kipf2016semi}. The fourth group exploits two methods (DGL-non-local and DGL-shallow-metric in section \ref{ablation}) for examining the function of the different components in the proposed DGL. According to the above experiments, we can make the following observations. \begin{itemize} \item DGL outperforms the baseline approaches GCN \cite{kipf2016semi}, GAT \cite{velivckovic2017graph}, SGCN \cite{wu2019simplifying} and GLGCN \cite{jiang2019semi} in section \ref{baseline}. GCN \cite{kipf2016semi} propagates node information over the statically global graph structure for capturing the data distribution relationship and the node representation. GAT \cite{velivckovic2017graph} assigns weights to the neighborhood of each data node to learn the local graph structure. SGCN \cite{wu2019simplifying} simplifies the network architecture based on the statically global graph structure while approximating the results of GCN. GLGCN \cite{jiang2019semi} extracts the global graph structure from the original data during network learning, constructing the basic framework of graph learning based on GCN.
DGL can not only dynamically mine the global and local graph structure to balance their effect on the information propagation, but also simultaneously encode the node representations of the different scale outputs to improve the performance of semi-supervised classification. \item The graph learning methods based on GCN (GLGCN \cite{jiang2019semi} and the proposed DGL) show a clear performance improvement over the non-graph-learning methods (GCN \cite{kipf2016semi}, GAT \cite{velivckovic2017graph} and SGCN \cite{wu2019simplifying}). The main reason is that the graph learning methods can dynamically generate the graph structure by a parameterized interaction computation, while the non-graph-learning methods depend on a static graph structure throughout network learning regardless of the change in each layer. Therefore, the graph learning methods can better fit the distribution of the transformed data in each layer, enhancing the performance of semi-supervised classification. \item In the state-of-the-art graph learning methods based on neural networks (H-GCN \cite{hu2019hierarchical}, GLNNs \cite{gao2019exploring}, DIAL-GNN \cite{chen2019deep} and GLGCN \cite{jiang2019semi} in section \ref{sota}), the global or local graph structure can be described and mined by hierarchical aggregation or metric learning. The proposed DGL comprehensively considers the global and local graph structure and encodes their propagation relationship to improve the performance of the network model. Therefore, DGL obtains the best performance on the Citeseer and Pubmed datasets and close to the best performance on the Cora dataset among these state-of-the-art methods. \item The extended graph learning methods (GLSGCN and GLGAT in section \ref{extending}) combine the main idea of GLGCN \cite{jiang2019semi} with GAT \cite{velivckovic2017graph} or SGCN \cite{wu2019simplifying} to probe the adaptability of the graph learning method.
GLSGCN performs worse than GLGCN \cite{jiang2019semi}, while GLGAT performs better than GLGCN \cite{jiang2019semi}. This shows that nonlinear unit layers have a stronger learning ability for dynamically generating the graph structure. DGL outperforms GLSGCN and GLGAT, which demonstrates that metric learning at different scales (from the global to the local graph structure and across the different layers) contributes to the construction of the graph learning model. \item The proposed DGL method can drop its different components to form the different ablation methods (DGL-non-local and DGL-shallow-metric in section \ref{ablation}). The DGL-non-local method emphasizes the global graph structure learning from the node representations of different scales, while DGL-shallow-metric focuses on the balanced learning between the global and the local graph structure in a single layer. The performance of DGL-shallow-metric is superior to that of DGL-non-local, which indicates that the deep mining from the global graph structure to the local graph structure has a more pronounced effect than deep metric learning from the different scale outputs. However, when the two factors are considered simultaneously, the resulting DGL obtains promising results for semi-supervised classification. \item In the extended graph learning experiment, the different graph learning methods show similar results as the training/validation/testing split changes. This reveals that the graph learning process can compensate for the insufficient number of training samples and improve the generalization of the model. Therefore, in Tables \ref{tab-5} and \ref{tab-6}, this situation occurs in the experimental results of the different graph learning methods. \end{itemize} \section{Conclusion} We have presented the deep graph learning (DGL) method to address the integrated learning of the global and local graph structure for improving semi-supervised classification.
The proposed DGL not only uses a graph learning layer and a graph attention layer for hierarchical progressive graph structure mining, but also adopts two graph learning layers for deeply capturing the global graph structure information from node representations of different scales. Furthermore, DGL can balance the difference between the global and local graph structure to find richer data relationships, and fuse the node representations of the different layers to enhance semi-supervised classification. Finally, DGL can automatically generate the graph structure during network learning and dynamically encode the various information of the different layers. Experimental results and analysis show that the proposed DGL method is promising for node classification on the Citeseer, Cora, Pubmed, MNIST and Cifar10 datasets. \section{Acknowledgements} The authors would like to thank the anonymous reviewers for their insightful comments that helped improve the quality of this paper. This work was supported by NSFC (Program No. 61771386, Program No. 61671376 and Program No. 61671374).
\section{INTRODUCTION} \label{s:intro} The discovery of gravitational waves (GWs) from the coalescence of binary black holes (BHs) by the LIGO ground-based laser interferometers (Abbott et al. 2016b
) was one of the biggest scientific discoveries at the beginning of the 21st century and gave powerful impetus to the development of multi-wavelength observations of cosmic transients. Multi-wavelength observations of the first binary neutron star (NS) coalescence GW170817 (Abbott et al. 2016a) ushered in a new era of astronomical observations -- multi-messenger astronomy. At present, there is detailed information about ten coalescing binary BHs discovered by the LIGO/Virgo Collaboration during the first and second scientific runs O1 and O2 (LIGO/Virgo Collaboration 2018). More than 30 coalescing binary BHs, several NS + NS binary candidates, and several NS + BH binary candidates were discovered in the ongoing O3 observations by the LVC Collaboration (see the online catalog https://gracedb.ligo.org/latest/). No reliable detection of the accompanying electromagnetic (EM) radiation from new coalescences in the O3 data has been reported so far. Obviously, the information about the GW coalescence source obtained from EM observations significantly complements the information obtained from an analysis of the GW signal properties. For example, the latest analysis of the GW data by the LVC Collaboration (LIGO Collaboration et al. 2019) does not rule out the possibility that one of the GW170817 binary components could be a low-mass BH, although an analysis of the EM radiation from the accompanying gamma-ray burst GRB 170817A argues for the formation of a supermassive neutron star as a result of the coalescence and, hence, for the model of two coalescing NSs in the source GW170817 (Gill et al. 2019; Piro et al. 2019). Clearly, studying the generation mechanisms and parameters of the EM radiation during the coalescence of binary relativistic stars remains a topical problem. In this paper, we address NS + BH binary systems. Such binaries are of interest on their own because a magnetized NS before the coalescence can be a radio pulsar.
An analysis of the pulse arrival time (timing) for such a pulsar in a strong BH gravitational field could provide unique information about the spacetime structure near the BH. Binary radio pulsars with BHs have been studied previously (see, e.g., Lipunov et al. 1994; Pfahl et al. 2005), but they have not yet been detected. As relativistic numerical calculations show, the result of the NS–BH coalescence in a binary system depends significantly on the component mass ratio $q=M_{BH}/M_{NS}>1$ and the NS equation of state (Shibata and Taniguchi 2011; Shibata and Hotokezaka 2019). The neutron star can be disrupted by tidal forces before the coalescence or be swallowed by the BH entirely. A key criterion is the ratio of the tidal NS radius $R_t\sim R_{NS}q^{1/3}$ to the radius of the innermost stable circular orbit around the BH $R_{ISCO}$. The tidal radius depends on the mass ratio and the NS equation of state, while the radius of the innermost stable circular orbit is determined by the BH mass and angular momentum. At $R_t>R_{ISCO}$ a disk-shaped or crescent-shaped structure is formed around the BH with a possible sub-relativistic dynamic jet as a result of the coalescence (Kyutoku et al. 2015; Shibata and Hotokezaka 2019), which is favorable for the emergence of the subsequent optical kilonova afterglow (Kawaguchi et al. 2016; Metzger 2019). After the coalescence, the BH acquires an additional angular momentum and physical conditions arise for the formation of a relativistic jet, for example, by the Blandford–Znajek (BZ) EM mechanism (Blandford and Znajek 1977), and the generation of a short gamma-ray burst (GRB) (Nakar 2007). 
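As an illustrative sketch of this criterion, the snippet below compares the Newtonian tidal-radius estimate $R_t \sim R_{NS}q^{1/3}$ with the Kerr ISCO radius from the standard Bardeen, Press \& Teukolsky (1972) formula for prograde orbits; the NS mass, radius, mass ratio, and BH spin values are placeholder choices for illustration, not results of this paper.

```python
import math

G = 6.674e-8     # gravitational constant, cgs
C = 2.998e10     # speed of light, cm/s
MSUN = 1.989e33  # solar mass, g

def r_isco(m_bh_msun: float, a: float) -> float:
    """Prograde Kerr ISCO radius in cm (Bardeen, Press & Teukolsky 1972)."""
    m_cm = G * m_bh_msun * MSUN / C**2  # gravitational radius GM/c^2
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = math.sqrt(3 * a**2 + z1**2)
    return m_cm * (3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2)))

def tidal_radius(r_ns_cm: float, q: float) -> float:
    """Newtonian tidal radius R_t ~ R_NS q^(1/3)."""
    return r_ns_cm * q**(1/3)

# placeholder example: M_NS = 1.4 Msun, R_NS = 12 km, q = 2
m_ns, r_ns, q = 1.4, 1.2e6, 2.0
for a in (0.0, 0.9):
    disrupted = tidal_radius(r_ns, q) > r_isco(q * m_ns, a)
    print(f"a* = {a}: NS disrupted outside the ISCO: {disrupted}")
```

For these placeholder numbers a nonspinning BH swallows the star whole, while a rapidly spinning one (smaller $R_{ISCO}$) allows disruption, in line with the qualitative picture above.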
In our recent paper (Postnov and Kuranov 2019), we performed model calculations of the angular momenta of coalescing binary BHs for various initial spins of the components (co-aligned and randomly oriented spins), BH formation models (without any additional fallback of the stellar envelope during the collapse onto the CO core and with this fallback), and various common envelope efficiencies $\alpha_{CE}$ (the ratio of the binding energy of the stellar core and envelope after the main sequence to the orbital energy of the binary before the beginning of the common envelope stage) by the population synthesis method. In these calculations we used the standard scenario for the evolution of massive binary stars (Postnov and Yungelson 2014) supplemented by the treatment of the evolution of stellar core rotation with allowance made for the effective core–envelope coupling proposed in Postnov et al. (2016). The calculations were performed for various initial chemical compositions (metallicities) of stars by taking into account the time evolution of the metallicity and the star formation rate in the Universe (for details, see Postnov and Kuranov (2019)). The technique of these calculations was applied to NS+BH binaries, which allowed the coalescence rate ${\cal R}$ (in yr$^{-1}$ Gpc$^{-3}$) and the detection rate ${\cal D}$ (in yr$^{-1}$) at the sensitivity level of the operating GW interferometers to be calculated (Postnov et al. 2019). In the present paper, the mass ratios $q=M_{BH}/M_{NS}$ and the BH spins before the coalescence of NS+BH binaries obtained in our calculations (Postnov et al. 2019) are used as input parameters to determine the mass of the remnant disk around the BH $M_{d}$ and the BH spin (characterized by the dimensionless Kerr parameter $a^*=J_{BH}/(GM_{BH}^2/c^2)$, $J_{BH}$ is the BH angular momentum, $G$ is the gravitational constant, and $c$ is the speed of light) after the coalescence. 
The disk mass depends significantly on the NS compactness (mass-to-radius ratio $M_{NS}/R_{NS}$), which is defined by the NS equation of state. The effects of the equation of state are parameterized by the tidal deformability $\Lambda=(2/3)k_2[(c^2/G)(R_{NS}/M_{NS})]^5$ (Damour et al. 2012) ($k_2$ is the Love number). This parameter is constrained from the GW observations of the source GW170817 (Abbott et al. 2019). In turn, the disk mass and the BH spin determine the possible kinetic energy of the relativistic jet launched by the BZ mechanism. The kinetic energy of the jet $\Delta E_{BZ}$ may be considered as an upper limit for the isotropic energy of a short GRB $\Delta E_{iso}$. For coalescing binaries with a large mass ratio $q$, in which the NS is swallowed by the BH without disruption, the NS magnetic field and the BH spin after the coalescence $a^*$ are used to calculate the possible BH electric charge in the NS magnetic field (Wald 1974). In addition, if the NS before the coalescence was at the radio pulsar stage, then the medium around the coalescing NS+BH binary could be filled with a relativistic lepton plasma. We also consider the mechanism for the conversion of gravitational waves into electromagnetic ones in such a magnetized relativistic plasma. \section{COALESCENCE AND DETECTION RATES OF NS+BH BINARIES WITH GW INTERFEROMETERS} \label{s:rates} Figure 1 shows the results of our population synthesis calculations of the NS+BH binary coalescence rate density and the detection rates by the LIGO/Virgo ground-based laser GW interferometers. The parameters of the NS and BH formation, the evolution of massive binary systems, and the technique of calculations are described in detail in Postnov and Kuranov (2019) and Postnov et al. (2019). At the coalescence phase, the GW signal amplitude is determined by the chirp mass of the binary system, which for two point masses $M_1$ and $M_2$ is ${\cal M}=(M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$. 
The detection horizon $D_h$ of coalescing binary systems with a chirp mass ${\cal M}=1.22 M_\odot$, corresponding to two neutron stars with a canonical mass of 1.4 $M_\odot$, in the current LIGO/Virgo observations is taken to be 120 Mpc. The detection horizon depends on the binary chirp mass as $D_h\sim {\cal M}^{5/6}$ (LIGO Collaboration et al. 2010). The distribution of parameters for binaries that can be recorded by the ground-based GW interferometers within the detection horizon $D_h$ is important from the viewpoint of comparison with observations. At a given sensitivity level of the GW detectors, $D_h$ is determined mainly by the chirp mass. Thus, for each chirp mass there is a limiting distance (redshift) to which the binary coalescence rate density ${\cal R}_{NSBH}$ calculated by the population synthesis method should be integrated. In turn, the coalescence rate per unit comoving volume ${\cal R}_{NSBH}(t)$ is a convolution of the coalescence rate ${\cal R}_\delta(t)$, calculated for an instantaneous star formation burst, with the star formation rate and the stellar metallicity as functions of time: ${\cal R}_X(t)=\int^t SFR(t-\tau){\cal R}_\delta(\tau)d\tau$. The time evolution of the metallicity of the stellar population and the star formation rate were calculated using the formulas given in Postnov and Kuranov (2019). It can be seen from Fig. 1 that the expected detection rate of NS+BH binaries is approximately several events per year, which is consistent with the existing detection statistics of such binaries (for the online information about the detected events, see https://gracedb.ligo.org/latest/).
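To make the scaling $D_h\sim {\cal M}^{5/6}$ concrete, the following sketch evaluates the chirp mass and rescales the 120 Mpc NS+NS horizon quoted above to a 5+1.4 $M_\odot$ NS+BH binary; the 5 $M_\odot$ BH mass is taken from the figure captions of this paper, and the result is only the leading-order scaling estimate.

```python
def chirp_mass(m1: float, m2: float) -> float:
    """Chirp mass (m1 m2)^(3/5) / (m1 + m2)^(1/5), in the units of m1, m2."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def horizon(mc: float, mc_ref: float = chirp_mass(1.4, 1.4),
            d_ref_mpc: float = 120.0) -> float:
    """Detection horizon scaled as D_h ~ Mc^(5/6) from the 1.4+1.4 Msun reference."""
    return d_ref_mpc * (mc / mc_ref)**(5 / 6)

mc_nsns = chirp_mass(1.4, 1.4)  # ~1.22 Msun, as quoted in the text
mc_nsbh = chirp_mass(5.0, 1.4)  # ~2.22 Msun
print(round(mc_nsns, 2), round(mc_nsbh, 2), round(horizon(mc_nsbh)))  # 1.22 2.22 198
```

The resulting $\sim$200 Mpc horizon for 5+1.4 $M_\odot$ is consistent with the vertical dashed line shown in Fig. 1 lying beyond the NS+NS horizon.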
\begin{figure} \includegraphics[width=0.65\columnwidth]{merger_rate_bhns.pdf} \includegraphics[width=0.65\columnwidth]{merger_det_bhns.pdf} \caption{ Upper panel: NS+BH coalescence rate density ${\cal R}_{NSBH}$ (per year per cubic Gpc) versus cosmological redshift $z$ for various values of the parameter $\alpha_\mathrm{CE}$ (common envelope efficiency) with allowance made for the evolution of the mean star formation rate and the stellar metallicity in the Universe. The upper and lower boundaries (dashed lines) correspond to $\alpha_\mathrm{CE}=4$ and 0.5, respectively; the solid line corresponds to $\alpha_\mathrm{CE}=1$. The vertical dashed line indicates the LIGO/Virgo O3 detection horizon for coalescing binary systems with masses of 5+1.4 $M_\odot$. Bottom panel: Number of NS+BH coalescences per year (the integral of the coalescence rate per unit volume to the distance corresponding to a given $z$) versus limiting redshift (detection horizon $D_h$) with allowance made for the star formation history in the Universe for common envelope efficiencies $\alpha_\mathrm{CE}=0.5,\, 1,\, 4$. The dashed curve indicates the expected number of events per year detected by the LIGO/Virgo O3 interferometers for the averaged orientation of the binary orbits relative to the line of sight ${\cal R}_\mathrm{BHNS}\sim 1-3$ yr$^{-1}$. The vertical dashed line indicates the LIGO/Virgo O3 detection horizon for coalescing binary systems with masses of 5+1.4 $M_\odot$. } \end{figure} \section{COALESCENCES WITH NS TIDAL DISRUPTION} \label{s:tidal} The NS+BH binary coalescences in which the NS is tidally disrupted are most interesting from the standpoint of observational manifestations in the EM range. As was said in the Introduction, the NS tidal disruption occurs mostly in binaries with a small component mass ratio $q=M_{BH}/M_{NS}\lesssim 3$ and depends on the NS equation of state (tidal deformability $\Lambda$).
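For orientation, the tidal deformability defined in the Introduction, $\Lambda=(2/3)k_2[(c^2/G)(R_{NS}/M_{NS})]^5$, can be evaluated directly; the Love number $k_2=0.1$ and radius $R_{NS}=12$ km used below are illustrative placeholder values, not fits from this paper.

```python
G = 6.674e-8     # gravitational constant, cgs
C = 2.998e10     # speed of light, cm/s
MSUN = 1.989e33  # solar mass, g

def tidal_deformability(k2: float, m_ns_msun: float, r_ns_cm: float) -> float:
    """Lambda = (2/3) k2 [(c^2/G)(R/M)]^5 = (2/3) k2 / compactness^5."""
    compactness = G * m_ns_msun * MSUN / (r_ns_cm * C**2)
    return (2.0 / 3.0) * k2 / compactness**5

lam = tidal_deformability(k2=0.1, m_ns_msun=1.4, r_ns_cm=1.2e6)
print(round(lam))  # a few hundred for these placeholder inputs
```

The steep fifth-power dependence on compactness is what makes the disk masses below so sensitive to the equation of state.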
\subsection{Remnant Disks around the BH} \label{ss:disk} To estimate the mass of the baryonic disk (torus) around the BH left after the coalescence, we will use the fit to the numerical data with allowance made for the NS equation of state that has been proposed recently (Zappa et al. 2019). In turn, the fitting formulas in this paper use the results from Jimenez-Forteza et al. (2017), where the radiated GW energy, the mass and spin of the BH remaining after the coalescence of binary BHs with different mass ratios are calculated. The derived mass ratio distributions of NS+BH binaries are presented in Fig. 2. The upper three rows in this and the following figures present the simulation results for stellar metallicities in the ranges $Z>0.01$, $0.01>Z>0.001$ and $0.001>Z>0.0001$ (from top to bottom), while the lower row presents the convolution with the time evolution of the metallicity. The left and right columns in the figure present the calculations, respectively, for the standard common envelope efficiency $\alpha_{CE}=1$ and $\alpha_{CE}=4$ corresponding to a smaller degree of approach of the binary components in the common envelope. Note that the reduced common envelope efficiency $\alpha_{CE}=4$ corresponds better to the treatment of the binary dynamics in the common envelope based on the angular momentum conservation law (the so-called $\gamma$-formalism; see Nelemans and Tout 2005) and is required to explain the properties of the population of symbiotic X-ray binary systems in the Galaxy (Yungelson et al. 2019). It can be seen from Fig. 2 that in $\sim 10-20\%$ of the coalescences the component mass ratio is small enough for the formation of a remnant disk around the BH after the coalescence. The BH spin after the coalescence of NS+BH binaries is entirely determined by the initial BH spin $a^*$ and the mass ratio $q$ and depends weakly on the NS equation of state. The distribution in initial BH spins (before the coalescence) calculated by Postnov et al.
(2019) is indicated in Fig. 3 by the dotted line, while the spins after the coalescence $a^*_f$ are indicated by the solid line. The BH spins after the coalescence are seen to be concentrated near $a^*_f\sim 0.5$, while the fraction of rapidly rotating BHs after the coalescence is small. The resulting mass of the disk around the BH after NS tidal disruption is shown in Fig. 4 for various values of the NS tidal deformability $\Lambda$ in the wide range from 100 to 2000 spanning a broad spectrum of possible NS equations of state (LIGO Collaboration et al. 2019). We see a strong dependence of the disk mass on the NS equation of state -- astrophysically interesting disk masses $M_d > 0.05 M_\odot$ are obtained only at large $\Lambda > 300$ corresponding to small compactness $M_{NS}/R_{NS}$ (stiff equations of state). Note that the constraints on the parameter $\Lambda$ from the GW observations of GW170817 lie within a wide range, but significant tidal deformations with $\Lambda\gtrsim 1600$ are highly unlikely (Abbott et al. 2019; LIGO Collaboration et al. 2019). An independent analysis with the involvement of other constraints gives $\Lambda = 390^{+280}_{-210}$ for the mass $M_{NS} = 1.4 M_\odot$ (Jiang et al. 2019). \begin{figure*} \includegraphics[width= \columnwidth]{q_o3.pdf} \caption{Cumulative distribution of the mass ratio in coalescing NS+BH binaries that can be recorded at the sensitivity level of the LIGO/Virgo GW detectors in the observing run O3. The upper three rows present the results for various stellar metallicities. The fourth row presents the result with allowance made for the time evolution of the mean star formation rate and the stellar metallicity in galaxies. The left and right columns present the calculations for the common envelope parameters $\alpha_{CE}=1$ and 4, respectively.
} \label{fig:qO3} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{abh_o3.pdf} \caption{BH spins before ($a^*$, dotted line) and after the coalescence ($a^*_f$, solid line) in NS+BH binaries. The dependence on the NS equation of state is indistinguishable. The upper three rows present the results for various stellar metallicities $Z$. The fourth row presents the convolution with allowance made for the time evolution of the mean star formation rate and the stellar metallicity in galaxies. The left and right columns present the calculations for the common envelope parameters $\alpha_{CE} = 1$ and 4, respectively.} \label{fig:spins} \end{figure*} \subsection{Jet Kinetic Energy} The minimum isotropic kinetic energy of the relativistic jet required to produce a GRB is estimated from observations as $\Delta E_{K,min}\simeq 10^{48}$ erg (Soderberg et al. 2006). An analysis of the observations of short GRBs shows that the mean conversion efficiency of the kinetic energy of the relativistic jet into gamma-ray emission is $\eta=E_{\gamma,iso}/(E_{\gamma,iso}+E_{K,iso})\sim 0.4$ (with a large scatter among individual sources) (Fong et al. 2015). This allows the kinetic energy of the jet to be used to estimate the possible power of the short GRB produced by it. To be specific, consider the Blandford–Znajek process as a possible physical mechanism of a short GRB (Nakar 2007). The energy of the BZ jet is determined by the magnetic field around the BH and its spin, $L_{BZ}\sim \Phi^2\Omega_H^{*2} f(\Omega_H^*)$, where $\Phi$ is the magnetic flux through the BH ergosphere, $\Omega_H^*=(1/2)a^*/(1+\sqrt{1-a^{*2}})$ is the dimensionless angular velocity of rotation on the BH horizon, and $f(\Omega_H^*)\approx 1+1.38 \Omega_H^{*2}-9.2\Omega_H^{*4}$ is the correction function that can be derived by fitting the numerical calculations (Nakar 2007).
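The spin dependence of the BZ power can be sketched numerically. The snippet below evaluates $\Omega_H^*$ and the fitted correction $f(\Omega_H^*)$ given above, together with the jet-energy normalization $\Delta E_{K,iso}\approx 0.015\,M_d c^2\,\Omega_H^{*2}f(\Omega_H^*)$ adopted in the text (Barbieri et al. 2019); the disk mass and spin values are placeholder choices.

```python
import math

C = 2.998e10     # speed of light, cm/s
MSUN = 1.989e33  # solar mass, g

def omega_h(a: float) -> float:
    """Dimensionless horizon angular velocity Omega_H* = (a/2)/(1+sqrt(1-a^2))."""
    return 0.5 * a / (1 + math.sqrt(1 - a * a))

def f_corr(w: float) -> float:
    """Fitted BZ correction f(Omega_H*) ~ 1 + 1.38 w^2 - 9.2 w^4 (Nakar 2007)."""
    return 1 + 1.38 * w**2 - 9.2 * w**4

def jet_energy_erg(m_disk_msun: float, a: float) -> float:
    """Delta E_K,iso ~ 0.015 M_d c^2 Omega_H*^2 f(Omega_H*)."""
    w = omega_h(a)
    return 0.015 * m_disk_msun * MSUN * C**2 * w**2 * f_corr(w)

# placeholder values: M_d = 0.1 Msun, spin near the a*_f ~ 0.5 peak of Fig. 3
print(f"{jet_energy_erg(0.1, 0.5):.1e} erg")  # ~5e49 erg
```

For these placeholder inputs the jet energy is well above the $10^{48}$ erg minimum quoted above, illustrating why even moderate disk masses can be astrophysically interesting.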
The magnetic field is an uncertain parameter, but it can be eliminated by assuming the balance between the magnetic and dynamic pressures in the disk around the BH. In this case, $L_{BZ}\sim \dot M c^2\Omega_H^{*2} f(\Omega_H^*)$. Assuming the accretion rate to be $\dot M=M_d/\Delta t$, where $\Delta t$ is the accretion time, the kinetic energy of the BZ jet is found to be $\Delta E_{K,iso}\approx 0.015 M_dc^2\Omega_H^{*2}f(\Omega_H^*)$ (we use the numerical coefficient justified in Barbieri et al. 2019). Figure 5 presents the cumulative distributions of the kinetic energy of the BZ jet $\Delta E_{K,iso}$ for coalescing NS+BH binary systems within the detection horizon $D_h (O3)$ at the current phase of O3 observations with the LIGO/Virgo GW interferometers with allowance made for the evolution of the metallicity $Z$ and the mean star formation rate $SFR$ in galaxies for two common envelope parameters. We see a strong dependence on the NS equation of state (more energetic jets are obtained at greater values of the tidal deformability $\Lambda$) and on the degree of approach of the binary components at the common envelope stage. Less efficient common envelopes (a large parameter $\alpha_{CE}$) lead to a noticeably larger percentage of events with astrophysically interesting EM energy release. \begin{figure*} \includegraphics[width = \columnwidth]{mdisk_o3.pdf} \caption{(Color online) Cumulative distributions of the mass of the remnant disk around the BH after NS tidal disruption. The color lines in the inset indicate the NS tidal deformability $\Lambda$ parameterizing the various NS equations of state. The upper three rows present the results for various stellar metallicities. The fourth row presents the result with allowance made for the time evolution of the mean star formation rate and the stellar metallicity in galaxies.
The left and right columns present the calculations for the common envelope parameters $\alpha_{CE}=1$ and $\alpha_{CE}=4$, respectively.} \label{fig:mdO3} \end{figure*} \begin{figure*} \includegraphics[width= \columnwidth]{ejet_o3.pdf} \caption{ (Color online) Same as Fig. 4, but for the kinetic energy of the jet launched by the Blandford–Znajek mechanism from the remnant disk around the rotating BH after the coalescence of NS+BH binaries.} \label{fig:EO3} \end{figure*} \section{COALESCENCES WITHOUT NS TIDAL DISRUPTION} \label{s:plunge} The NS+BH coalescences in binaries with a large mass ratio $q$ occurring without tidal disruption can also be of interest from the viewpoint of the appearance of accompanying EM radiation. The NS should have a magnetic field; in most cases the NS+BH coalescence occurs within a fairly short time after formation, so that the NS magnetic field has no time to decay. Several physical mechanisms for the generation of EM radiation associated with electrodynamic processes in the vicinity of a BH coalescing with a magnetized NS (see, e.g., Zhang 2016; Levin et al. 2018; Zhang 2019; Dai 2019; and references therein) or with the fundamental graviton-to-photon conversion in a magnetic field (Dolgov and Postnov 2017) are possible in this case. \subsection{Electric Charge of a Rotating BH} The spin and orbital motion of an electrically charged BH in a coalescing binary system initiate time-varying electric dipole and magnetic dipole moments in the binary that give rise to EM radiation (for the estimates and discussion, see Dai 2019), whose power and energetics for rapidly rotating BHs can be sufficient for the explanation of short EM transients (for example, fast radio bursts (FRBs); see Popov et al. 2018).
A rotating BH in an external magnetic field can acquire an electric charge with a maximum value of $Q_W=(2G/c^3)JB$, where $J=a^*GM_{BH}^2/c$ is the angular momentum of the rotating BH and $B$ is the magnetic field strength (Wald 1974). This mechanism has also been recently discussed for the estimates of a possible EM radiation pulse already after the NS+BH coalescence (Zhong et al. 2019). It is convenient to normalize the charge of a rotating Reissner–Nordstr\"om BH to the characteristic value of $Q_{RN}=2\sqrt{G} M_{BH}\approx 10^{30}(M_{BH}/M_\odot)$ (emu) corresponding to the equality of the Schwarzschild radius to the Reissner–Nordstr\"om radius: $\tilde q_W=Q_W/Q_{RN}$. \begin{figure*} \includegraphics[width= \columnwidth]{mu_o3.pdf} \caption{(Color online) Same as Fig. 4, but for the combination of the BH spin before the coalescence and the NS magnetic field $a^{*2}b_s$ determining the maximum intrinsic magnetic dipole moment of a Wald-charged BH $\mu_{W,max}$ (Eq. (2)).} \label{fig:muO3} \end{figure*} \begin{figure*} \includegraphics[width= \columnwidth]{l_o3.pdf} \caption{(Color online) Same as Fig. 4, but for the maximum energy release from a charged rotating BH after the coalescence $L_{W,max}$ (Eq. (3)).} \label{fig:LWO3} \end{figure*} In natural units $\hbar=c=1$, Newton's gravitational constant is written via the Planck mass $G=1/m_{Pl}^2$, $m_{Pl}\approx 10^{19}$~GeV, the electric charge is dimensionless, and the electron charge is expressed via the fine structure constant $\alpha = 1/137$ as $e^2=4\pi \alpha$. It is also convenient to make the magnetic field dimensionless by normalizing it to the critical (Schwinger) one $b=B/B_{cr}$, where $B_{cr}=m_e^2/e\approx 4.41\times 10^{13}$~G ($m_e\approx 511$~keV is the electron rest mass). The specific Wald BH charge is then \beq{e:qw} \tilde q_W=\frac{a^*b}{\sqrt{4\pi\alpha}}\myfrac{m_e}{m_{Pl}}^2\myfrac{M_{BH}}{m_{Pl}}\approx 10^{-6}a^*b\myfrac{M_{BH}}{M_\odot}\,.
\end{equation} This formula is written for a uniform magnetic field. Note that for a NS dipole field $b(R)=b_s(R_{NS}/R)^3$, where $b_s$ is the surface field, it follows from (1) that at the tidal radius $R_t\sim R_{NS}q^{1/3}$ the specific Wald charge does not depend on the BH mass: $\tilde q_W\sim 10^{-6}a^*b_s(M_{NS}/M_\odot)$. For the electric dipole and magnetic dipole radiation associated with the orbital motion of a charged BH before the coalescence, the EM radiation power is proportional to the square of the NS magnetic field $b$ and the square of the BH spin, $\sim a^{*2} b^2$. In the case of magnetic dipole radiation from the rotating charged BH itself acquiring the Wald charge at the stage when the charged NS approaches the BH before the coalescence, it is proportional to the square of the intrinsic magnetic moment $\mu_W^2\sim a^{*4} b^2$ (Dai 2019). Therefore, the Wald charge can be important only for rapidly rotating BHs with $a^*\gtrsim 0.5$. The number of such binaries before the coalescence is extremely small (see the dashed curve in Fig. 3). If the NS is swallowed by the BH without being disrupted, the maximum Wald charge can be estimated from the magnetic field at the Schwarzschild BH radius $R\sim R_g=2GM_{BH}/c^2$ before the coalescence: $\tilde q_{W,max}\sim 3\times 10^{-5} a^*b_s (R_{NS}/10\hbox{km})^3(M_{BH}/M_\odot)^{-2}$. In this case, the maximum intrinsic magnetic dipole moment of the charged BH will be determined only by the BH spin $a^*$, NS magnetic field $b_s$, and NS radius: \beq{e:mumax} \mu_{W,max}=\frac{J_{BH}Q_{W,max}}{M_{BH}c}\approx 5\times 10^{30} a^{*2}b_s\myfrac{R_{NS}}{10\hbox{km}}^3\,. 
\end{equation} The maximum energy release from a charged rotating single BH with a magnetic moment $\mu_{W,max}$ can be estimated from the magnetic dipole formula $L_{W,max}\sim \mu_{W,max}^2\Omega_H^4$, where $\Omega_H^f\sim (a^{*}_f/2)(c^3/GM_{BH}^f)$ is the angular velocity of the horizon for a BH with mass $M_{BH}^f$ after the coalescence. Substituting $\mu_{W,max}$ from (2), we find \beq{e:Lwmax} L_{W,max}\sim 10^{42}[\hbox{erg/s}] a^{*4}b_s^2a^{*4}_f\frac{(R_{NS}/10\hbox{km})^6}{(M_{BH}^f/10 M_\odot)^4}\,. \end{equation} This estimate is comparable in magnitude to the estimates of the possible EM energy release from charged BHs at the pre-coalescence (in-spiraling) stage (Zhang 2019; Dai 2019). Given the relative smallness of the BH spins before ($a^*\sim 0.2$, the dotted curve in Fig. 3) and after ($a^*_f\sim 0.6$, the solid curve in Fig. 3) the coalescence as well as the low (poorly known) Poynting flux-to-EM radiation conversion efficiency, there can be astrophysically interesting EM energy release in this process only for rapidly rotating low-mass BHs in a pair with strongly magnetized NSs with $b_s\sim 1$, whose number is extremely small. \subsection{Conversion of Gravitational Waves in a Relativistic Plasma in a Magnetic Field} At the final stages before the coalescence of a binary magnetized NS+BH, conditions for the additional appearance of EM radiation due to the coherent conversion of gravitational waves into electromagnetic ones in a magnetic field arise in the binary. For a vacuum this mechanism was first considered by Gertsenshtein (1962). In the presence of a surrounding plasma the effect in cosmological applications was considered by Dolgov and Ejlli (2012), while for the case of conversion in a nonrelativistic plasma with a magnetic field around astrophysical GW sources, coalescing binary NSs and BHs, it was considered by Dolgov and Postnov (2017).
In the latter paper it was emphasized that since the plasma frequency in the interstellar medium with density $n_e\sim 1$~cm$^{-3}$, $\Omega_e=60\sqrt{n_e}$ kHz, is much greater than the frequency of the GWs from coalescing binary systems (100--200 Hz), the GW-to-EM conversion in the plasma is dissipative in nature and is determined by the imaginary part of the dielectric permittivity. The EM wave damping amplitude $A_j$ was shown to be related to the amplitude $h_j$ of a GW propagating with frequency $\omega$ perpendicularly to an external magnetic field $B$ by the relation (in natural units $\hbar=c=k_B=1$) \begin{equation} \label{eq:10} A_j\approx \frac{\kappa b\omega a_e}{\Omega_e} h_j\,, \end{equation} where $a_e=\sqrt{T_e/(e^2 n_e)}$ is the Debye radius of electrons, $T_e$ is the electron temperature, and $\kappa^2=16\pi/m_{Pl}^2$ is the coupling constant. The fraction of the energy released in the GW that dissipates into the thermal energy of the plasma, \beq{eq:11} K_{nr} = \left(\frac{\kappa B\omega a_e}{\Omega_e}\right)^2\approx 10^{-46}\left(\frac{\omega}{\Omega_e}\right)^2\left(\frac{a_e}{\hbox{cm}}\right)^2\left(\frac{B}{1\ \hbox{G}}\right)^2 \end{equation} is very small and is interesting only for superstrong magnetic fields. It may well be that part of this energy can be reprocessed into high-frequency radio emission (Marklund et al. 2000). During the coalescence of magnetized NSs with BHs, the magnetosphere of a NS with a magnetic field $B_{NS}$ spinning with a frequency $\omega_{NS}=2\pi/P_{NS}$ is filled with an ultrarelativistic plasma with a density no less than the Goldreich--Julian charge density $n_{GJ}=(\omega_{NS}B_{NS})/(2\pi e)$. For radio pulsars $n_e=\lambda n_{GJ}$, where $\lambda\sim 10^4-10^5$ is the multiplicity factor of the pairs created near the NS surface (Beskin 2018).
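As a quick numerical illustration of how small the nonrelativistic conversion efficiency $K_{nr}$ quoted above is, the scaling can be evaluated directly. The parameter values below (a 150 Hz GW, interstellar plasma with $n_e=1$~cm$^{-3}$, a 1 cm Debye radius, a 1 G field) are fiducial assumptions for illustration, not results from the paper.

```python
import math

def plasma_freq_kHz(n_e):
    """Electron plasma frequency, Omega_e ~ 60*sqrt(n_e) kHz (n_e in cm^-3)."""
    return 60.0 * math.sqrt(n_e)

def K_nr(omega_Hz, Omega_e_Hz, a_e_cm, B_G):
    """Nonrelativistic GW-to-EM conversion efficiency,
    K_nr ~ 1e-46 (omega/Omega_e)^2 (a_e/cm)^2 (B/G)^2."""
    return 1e-46 * (omega_Hz / Omega_e_Hz) ** 2 * a_e_cm ** 2 * B_G ** 2

# Interstellar medium: Omega_e ~ 60 kHz, far above the 100-200 Hz GW
# frequencies of coalescing binaries, so the conversion is dissipative.
Omega_e = plasma_freq_kHz(1.0) * 1e3   # Hz
K = K_nr(150.0, Omega_e, a_e_cm=1.0, B_G=1.0)
print(f"K_nr ~ {K:.1e}")               # vanishingly small for ISM parameters
```

The $(\omega/\Omega_e)^2$ suppression alone contributes roughly six orders of magnitude for interstellar parameters, which is why only superstrong fields are of interest here.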
In a relativistic collisional plasma with a plasma frequency $\Omega_{rel}^2=\frac{4\pi e^2 n_e}{3 T_e}$, $T_e\sim \gamma m_e$ ($\gamma$ is the electron Lorentz factor), we can obtain (Postnov and Simkin 2019) \begin{equation} \label{eq:17} A_j \approx \frac{\kappa b\omega }{\Omega^2_{rel}}h_j\sim\frac{\kappa b\omega\,\gamma m_e}{n_e}\,h_j\,. \end{equation} If the plasma flows along open magnetic field lines (as in a pulsar), then for a dipole field $B(R)\sim B_s(R_{NS}/R)^3$. In a magnetic flux tube the magnetic flux is conserved, $\Phi=B(R)S(R)=const$; given the continuity equation $n_e B(R)S(R)=const$, we then find that for an ultrarelativistic collisional plasma with $T_e\sim \gamma m_e$ the conversion efficiency does not depend on the NS magnetic field, but is determined only by the lepton Lorentz factor, the NS spin period, and the pair multiplicity with respect to the Goldreich--Julian density $\lambda$: \begin{eqnarray} \label{eq:25} K_{rel}&=&\left(\frac{\kappa b\omega}{\Omega^2_{rel}}\right)^2 \nonumber \\ &\approx& 10^{-35}\left(\frac{\omega }{100\ \hbox{rad}\, \hbox{s}^{-1}}\right)^2\left(\frac{P_{NS}}{1\ \hbox{s}}\right)^2\left(\frac{\lambda }{10^5}\right)^{-2}\left(\frac{\gamma }{10^5}\right)^2\,. \end{eqnarray} Clearly, the effect is weak for the densities of a relativistic pulsar plasma with $\lambda\sim 10^4-10^5$. However, the relativistic plasma density near a coalescing NS+BH binary is unknown and, therefore, the density $n_{GJ}$ can serve only as an approximate lower limit. The upper bound of the GW-to-EM conversion efficiency in a relativistic plasma around a coalescing NS+BH binary is then $K_{rel}\lesssim 10^{-25}(P_{NS}/1\hbox{s})^2(\gamma/10^5)^2(n_e/n_{GJ})^{-2}$, i.e., for a GW pulse energy during the coalescence of $M_\odot c^2\sim 2\times 10^{54}$ erg, up to $\sim 10^{38}$ erg can be additionally reprocessed into the thermal energy of the plasma.
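The relativistic efficiency $K_{rel}$ of the last equation can likewise be sketched numerically. The function below simply encodes the quoted scaling with its fiducial normalizations ($\omega=100$ rad s$^{-1}$, $P_{NS}=1$ s, $\lambda=10^5$, $\gamma=10^5$), so it illustrates the parameter dependence rather than providing an independent calculation.

```python
def K_rel(omega=100.0, P_ns=1.0, lam=1e5, gamma=1e5):
    """Relativistic GW-to-EM conversion efficiency, eq. (6):
    K_rel ~ 1e-35 (omega/100 rad/s)^2 (P_NS/1 s)^2 (lam/1e5)^-2 (gamma/1e5)^2."""
    return (1e-35 * (omega / 100.0) ** 2 * P_ns ** 2
            * (lam / 1e5) ** -2 * (gamma / 1e5) ** 2)

# At the fiducial pulsar-plasma parameters the effect is tiny:
print(K_rel())            # 1e-35
# Lowering the pair multiplicity toward the Goldreich-Julian density
# (lam -> 1) raises K_rel by ten orders of magnitude, recovering the
# quoted upper bound:
print(K_rel(lam=1.0))     # 1e-25
```

The quadratic dependence on $\gamma$ and the inverse-square dependence on $\lambda$ make the uncertainty in the plasma density the dominant uncertainty in the reprocessed energy.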
\section{CONCLUSION} \label{s:conclusion} In this paper we analyzed the various physical mechanisms that could give rise to an EM pulse accompanying the coalescence of magnetized NS+BH binaries. At the time of writing this paper (mid-October 2019), the LIGO/Virgo detectors recorded several such binaries, but no EM radiation from them has been detected so far. Based on a series of population synthesis calculations (Postnov and Kuranov 2019; Postnov et al. 2019), we constructed the distributions of the NS+BH binary coalescence rate density and the expected detection rate of such binaries in the current LIGO/Virgo O3 observations by taking into account the evolution of the stellar metallicity and the star formation rate in the Universe (Fig. 1). The derived distributions of coalescing NS+BH binaries in component mass ratio $q=M_{BH}/M_{NS}>1$, magnetic fields $b_s=B_{NS}/B_{cr}$, $B_{cr}=4.14\times 10^{13}$~G, NS spin period, and BH angular momentum (dimensionless spin $a^*=J_{BH}/(GM^2_{BH}/c)$) before the coalescence were used to estimate the mass of the remnant disk around the BH, $M_d$ (Fig. 4), and the BH spin after the coalescence, $a^*_f$ (Fig. 3). We estimated the masses of the remnant baryonic disks, the BH masses and spins after the coalescence based on interpolation of the relativistic numerical calculations by Zappa et al. (2019) and Jimenez-Forteza et al. (2017), taking into account the NS equation of state that was specified by the dimensionless tidal deformability $\Lambda$. Assuming the magnetic field in the remnant disk around a rotating BH to be dynamically balanced, we estimated the kinetic energy of the relativistic jet launched by the BZ mechanism (Fig. 5), which depends strongly on the NS equation of state through the tidal deformability $\Lambda$. We separately studied the NS+BH coalescences with a large mass ratio $q$ occurring without NS tidal disruption.
For such binaries we constructed the distributions of the maximum possible BH magnetic dipole moment before the coalescence $\mu_{W,max}$ acquired due to the Wald BH charge (Wald 1974; Levin et al. 2018) (Fig. 6) and corresponding to the maximum EM magnetic dipole luminosity of such a charged BH $L_{W,max}$ (Fig. 7). Even for the most optimistic parameters our estimates of the EM luminosity from a Wald-charged BH in coalescing NS+BH binaries are considerably smaller than those expected from binaries with a smaller mass ratio, where the formation of remnant disks and relativistic jets is possible. We additionally considered the conversion of gravitational waves in a magnetic field with a relativistic plasma that may surround a NS+BH binary at the pre-coalescence stage. This mechanism was shown to convert no more than $\sim 10^{38}-10^{39}$ erg into additional plasma heating even under the most favorable conditions (a large lepton Lorentz factor $\gamma \sim 10^5$ and a low plasma density of the order of the Goldreich-Julian one). Our general conclusion is that noticeable EM phenomena from coalescing NS+BH binaries might be expected from a small fraction of the coalescences in which the NS is tidally disrupted and a remnant disk is formed around the rotating BH. The fraction of such events depends on the mass ratio $q$ and the NS equation of state. At the expected coalescence rate of several events per year in the current LIGO/Virgo O3 GW observations, the chances to see a weak EM signal are slim. The more exotic physical mechanisms (the Wald electric charge of a rotating BH or the fundamental conversion of gravitational waves into electromagnetic ones in a magnetized plasma surrounding a coalescing NS+BH binary) are much less efficient. At the current detector sensitivity level EM phenomena from coalescing NS+BH binaries in various ranges might be expected only from the nearest events (at distances of tens of Mpc). 
The detection of an EM counterpart from NS+BH can potentially give physically rich information about the equation of state and the magnetic field of neutron stars. \textbf{Acknowledgements.} The work of K.A. Postnov was supported by RSF grant no. 19-12-00229. The work of A.G. Kuranov was supported by the Scientific School of the Moscow State University ``Physics of Stars, Relativistic Objects, and Galaxies.''
\section*{References}
1. B. P. Abbott, R. Abbott, T. D. Abbott, et al., Phys. Rev. Lett. 116, 241103 (2016a).
2. B. P. Abbott, R. Abbott, T. D. Abbott, et al., Phys. Rev. Lett. 116, 061102 (2016b).
3. B. P. Abbott, R. Abbott, T. D. Abbott, LIGO Sci. Collab., and Virgo Collab., Phys. Rev. X 9, 011001 (2019).
4. C. Barbieri, O. S. Salafia, A. Perego, M. Colpi, and G. Ghirlanda, arXiv:1908.08822 (2019).
5. V. S. Beskin, Phys. Usp. 61, 353 (2018).
6. R. D. Blandford and R. L. Znajek, Mon. Not. R. Astron. Soc. 179, 433 (1977).
7. Z. G. Dai, Astrophys. J. Lett. 873, L13 (2019).
8. T. Damour, A. Nagar, and L. Villain, Phys. Rev. D 85, 123007 (2012).
9. A. D. Dolgov and D. Ejlli, J. Cosmol. Astropart. Phys. 2012 (12), 003 (2012).
10. A. Dolgov and K. Postnov, J. Cosmol. Astropart. Phys. 2017 (9), 018 (2017).
11. W. Fong, E. Berger, R. Margutti, and B. A. Zauderer, Astrophys. J. 815, 102 (2015).
12. M. Gertsenshtein, Sov. Phys. JETP 14, 84 (1962).
13. R. Gill, A. Nathanail, and L. Rezzolla, Astrophys. J. 876, 139 (2019).
14. J.-L. Jiang, S.-P. Tang, D.-S. Shao, et al., arXiv:1909.06944 (2019).
15. X. Jimenez-Forteza, D. Keitel, S. Husa, et al., Phys. Rev. D 95, 064024 (2017).
16. K. Kawaguchi, K. Kyutoku, M. Shibata, and M. Tanaka, Astrophys. J. 825, 52 (2016).
17. K. Kyutoku, K. Ioka, H. Okawa, M. Shibata, and K. Taniguchi, Phys. Rev. D 92, 044028 (2015).
18. J. Levin, D. J. D'Orazio, and S. Garcia-Saenz, Phys. Rev. D 98, 123002 (2018).
19. LIGO/Virgo Sci. Collab., arXiv e-prints (2018).
20. The LIGO Sci. Collab., the Virgo Collab., J. Abadie, et al., arXiv:1003.2481 (2010).
21. The LIGO Sci. Collab., the Virgo Collab., et al., arXiv:1908.01012 (2019).
22. V. M. Lipunov, K. A. Postnov, M. E. Prokhorov, and E. Y. Osminkin, Astrophys. J. 423, L121 (1994).
23. M. Marklund, G. Brodin, and P. K. S. Dunsby, Astrophys. J. 536, 875 (2000).
24. B. D. Metzger, arXiv:1910.01617 (2019).
25. E. Nakar, Phys. Rep. 442, 166 (2007).
26. G. Nelemans and C. A. Tout, Mon. Not. R. Astron. Soc. 356, 753 (2005).
27. E. Pfahl, P. Podsiadlowski, and S. Rappaport, Astrophys. J. 628, 343 (2005).
28. L. Piro, E. Troja, B. Zhang, et al., Mon. Not. R. Astron. Soc. 483, 1912 (2019).
29. S. B. Popov, K. A. Postnov, and M. S. Pshirkov, Phys. Usp. 61, 965 (2018).
30. K. Postnov, A. Kuranov, and N. Mitichkin, Phys. Usp. 62, 1153 (2019); arXiv:1907.04218 (2019).
31. K. A. Postnov and A. G. Kuranov, Mon. Not. R. Astron. Soc. 483, 3288 (2019).
32. K. A. Postnov and I. V. Simkin, Journal of Physics: Conf. Ser. 1390, 012086 (2019).
33. K. A. Postnov and L. R. Yungelson, Liv. Rev. Relativ. 17, 3 (2014).
34. K. A. Postnov, A. G. Kuranov, D. A. Kolesnikov, S. B. Popov, and N. K. Porayko, Mon. Not. R. Astron. Soc. 463, 1642 (2016).
35. M. Shibata and K. Hotokezaka, Ann. Rev. Nucl. Part. Sci. 69 (2019).
36. M. Shibata and K. Taniguchi, Liv. Rev. Relativ. 14, 6 (2011).
37. A. M. Soderberg, S. R. Kulkarni, E. Nakar, et al., Nature (London, U.K.) 442 (7106), 1014 (2006).
38. A. Tchekhovskoy, R. Narayan, and J. C. McKinney, Astrophys. J. 711, 50 (2010).
39. R. M. Wald, Phys. Rev. D 10, 1680 (1974).
40. L. R. Yungelson, A. G. Kuranov, and K. A. Postnov, Mon. Not. R. Astron. Soc. 485, 851 (2019).
41. F. Zappa, S. Bernuzzi, F. Pannarale, M. Mapelli, and N. Giacobbo, Phys. Rev. Lett. 123, 041102 (2019).
42. B. Zhang, Astrophys. J. Lett. 827, L31 (2016).
43. B. Zhang, Astrophys. J. Lett. 873, L9 (2019).
44. S.-Q. Zhong, Z.-G. Dai, and C.-M. Deng, arXiv:1909.00494 (2019).
\label{lastpage} \end{document}
\section{Introduction} \label{sec:intro} Due to the high plasma $\beta$ in the photosphere and chromosphere, non-magnetic forces should be taken into account when we study these layers. If we neglect
dynamics and plasma flow, then the resulting static state of the system can be described by the so-called magnetohydrostatic (MHS) assumption, which is determined by the balance of the Lorentz force, the pressure gradient and the gravity force, together with the solenoidal condition of the magnetic field. The 3D MHS equations can be linearized under some assumptions. Analytical solutions for the linear MHS (LMHS) model have been developed in many papers \citep[e.g.,][]{l85,l91,o85,n97,pn00,nw19}. These solutions have been used for a number of specific applications to the Sun \citep[e.g.,][]{bg91,zh93,ads98,adm99,p00,rwi08,gfm13,wnn15,wnn17} and stars \citep{la08,mgn16}. A more challenging problem is to solve the MHS equations in the generic case, which can be done only numerically. A few methods have been developed. The magnetohydrodynamic (MHD) relaxation method was introduced to derive the MHS solution by using an ``evolution technique'' \citep{mm94,jmm97,zwd13,zwd16,mki19}. The Grad-Rubin iteration, which is well known in the calculation of the nonlinear force-free field (NLFFF), was extended by \cite{gw13} and \cite{gbb16} to solve the MHS equations. \cite{wi03} and \cite{wn06,wnr07} developed the optimization method to treat the MHS equations. We recently extended this method by introducing the gravity force \citep{zw18}. In addition, our new algorithm ensures that the resulting plasma pressure and density are positive definite. More recently, we further tested our code with the radiative MHD simulation of a solar flare \citep{zw19}. This challenging test (solar flares are intrinsically non-static and even non-stationary) provides a solid foundation for the application of the code to real observations. It is worth noting that, besides the traditional NLFFF models, a new type of NLFFF model \citep{msd12,a13,a16} has been introduced recently. This alternative approach uses the line-of-sight magnetogram to constrain the potential field.
With the help of a forward-fitting method the angle between the magnetic field and coronal or chromospheric loops is minimized. In this model, the assumption of a force-free photosphere is not used either. This works well in the corona, but may not be the ideal way to describe the non-force-free chromospheric field. In this paper, we apply our code, for the first time, to a vector magnetogram obtained by the IMaX instrument on the \textsc{Sunrise} balloon-borne solar observatory. Since the field-of-view (FOV) of IMaX is limited to part of the active region, an SDO/HMI vector magnetogram is used to cover the unobserved parts. The organization of the paper is as follows. In Section \ref{sec:method}, we describe the dataset used in this paper. In Section \ref{sec:results}, we analyze the results and compare them with other models. Conclusions and perspectives are presented in Section \ref{sec:conclusion}. \section{Magnetohydrostatic Extrapolation and its application to AR11768} \label{sec:method} The MHS extrapolation computes the magnetic field, plasma pressure and density consistently with the help of an optimization principle. The algorithm was described and tested in detail in \cite{zw18,zw19}. The primary dataset used in this work was recorded by the vector magnetograph IMaX \citep{mda11} onboard the \textsc{Sunrise} balloon-borne solar observatory \citep{bgs11,bss11,ggb11} during its second flight, referred to as \textsc{Sunrise} II \citep{srb17}. The IMaX data have a pixel size of 40 km and a FOV of $936\times936$ pixels ($50''\times 50''$). This FOV is limited to part of AR11768. The data were inverted by \cite{krs17} using the SPINOR inversion code \citep{fsf00} that builds on STOPRO routines \citep{s87}. A one-component atmospheric model with three optical depth nodes (at $\log\tau=-2.5, -0.9, 0$) for the temperature and a height-independent magnetic field is applied.
The effect of the inversion (not the same inversion used here) on the extrapolation result was studied by \cite{wys10} and found to be minor. The 180$^\circ$ ambiguity is removed with an acute angle method \citep{mlb06} which minimizes the angle with, in this case, the corresponding HMI vector magnetogram. We note that there is a square pattern in the transverse field of IMaX (Fig.~\ref{fig:magnetogram}). It originates from disambiguating IMaX with HMI data in which a square pattern also exists (seen in the transverse field). Essentially, the square pattern is the IMaX noise that is modulated to match the HMI spatial resolution by the disambiguation. Although this noise appears in big mosaics (big compared to the IMaX resolution), the size of each mosaic is still very small compared to the active region. Therefore the effect they have on the extrapolation is expected to be similar to the effect of normal noise, which has been studied by \cite{zw18}. In that paper we found, based on a quantitative assessment of the extrapolation, the influence of random noise (at the 20\% level) on the magnetic field to be less than 4\%. As pointed out by \cite{dsb09}, it is necessary to have flux balance within the FOV and to capture the magnetic connectivity in order to find unique solutions. We have therefore embedded the IMaX data into the SDO/HMI vector magnetogram \citep{ptc12,ssb12} taken closest in time to the analyzed IMaX magnetogram. Fig.~\ref{fig:magnetogram} shows both the IMaX and the combined vector magnetograms. \begin{figure} \includegraphics[width=\hsize]{magnetogram.pdf} \caption{Top: IMaX measurements at 23:48 UT embedded in the vector magnetogram of HMI (observed at 23:48 UT). The outlined region with a clearly visible higher resolution is the IMaX FOV.
Bottom: vector magnetogram of IMaX.} \label{fig:magnetogram} \end{figure} \begin{figure} \includegraphics[width=\hsize]{flowchart.pdf} \caption{Schematic flow chart of the MHS code applied to a vector magnetogram.} \label{fig:flowchart} \end{figure} \begin{figure} \includegraphics[width=\hsize]{temperature_1D.pdf} \caption{Temperature profile of the gravity-stratified atmosphere employed as the initial condition.} \label{fig:temperature_1D} \end{figure} Fig.~\ref{fig:flowchart} depicts the flow chart of our code applied to the combined magnetogram. It comprises the following steps. First, compute a NLFFF \citep{w04} with a preprocessed magnetogram \citep[the net Lorentz force and torque are removed within the error margin of the measurement by using a minimization principle to make the data compatible with the force-free assumption]{wis06} and a gravity-stratified atmosphere along field lines with a 1D temperature profile (Fig.~\ref{fig:temperature_1D}). The pressure on the bottom boundary is computed using $p=p_{quiet}-\frac{1}{2}{B_z}^2$, where $p_{quiet}$ is the pressure in the quiet region. The bottom density is determined by assuming a uniform temperature of 6000~K on the photosphere. Second, carry out a further optimization to achieve an MHS equilibrium with the original magnetogram. Note that although a temperature profile is given in the initial state to relate the gas pressure and density, the initial temperature profile is no longer a restriction during the pressure and density optimization. There are two options to input the bottom magnetic boundary. One is to replace the preprocessed magnetogram with the original magnetogram directly, which is used in this study. The other is to change the magnetic field gradually from the preprocessed values to the original values. The side and top boundaries are fixed to their initial values, which are potential fields.
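The bottom-boundary initialization described in the first step can be sketched as follows, in the normalized units of the code (so that the magnetic pressure is $B_z^2/2$). The specific field values and the simple isothermal ideal-gas closure $\rho\propto p/T$ are assumptions for illustration only.

```python
def bottom_boundary(bz_list, p_quiet, T=1.0):
    """Return (pressure, density) lists on the photospheric boundary.

    p   = p_quiet - Bz^2/2   (gas pressure reduced by magnetic pressure)
    rho = p / T              (isothermal ideal gas, normalized units)
    """
    p = [p_quiet - 0.5 * bz ** 2 for bz in bz_list]
    if min(p) <= 0.0:
        raise ValueError("p_quiet too low: negative gas pressure in strong field")
    rho = [pi / T for pi in p]
    return p, rho

# Toy 1D magnetogram: quiet pixel, moderate field, strong flux concentration.
bz = [0.0, 0.5, 1.2]                  # normalized Bz
p, rho = bottom_boundary(bz, p_quiet=1.0)
# Pressure and density are depleted in the strong-field pixels, consistent
# with the sunspot-like depletions discussed in the analysis section.
```

The guard against negative pressure mirrors the requirement, built into the full algorithm, that the resulting plasma pressure and density stay positive definite.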
The computation is performed in a $2336\times1824\times128$ box with a 40 km grid spacing in both the horizontal and vertical directions. All of the following analyses are restricted to a $936\times936\times128$ box above the IMaX FOV (unless otherwise stated). \section{Analysis of the extrapolation results} \label{sec:results} \subsection{Solution consistency} Both the magnetogram and the disequilibrium of the initial atmosphere (due to the nonuniform gas pressure and density in the photosphere) drive the evolution of the system to an MHS state. Fig.~\ref{fig:residual_force} illustrates the compensation of forces in the initial state and in the final equilibrium. The horizontal components of the forces are shown in panels (a) and (c), while the vertical components are illustrated in panels (b) and (d). In the initial state, we find that the residual force is greater than the Lorentz force (see panels (a) and (b)), so that this state is obviously far from an MHS equilibrium. As mentioned before, this motivates the further optimization. We see, in the final solution, that the Lorentz force is balanced effectively by the pressure gradient and vertically also by gravity (see panels (c) and (d)). On average, the residual force is 43\% of the Lorentz force in the transverse direction, while the percentage is only 24\% in the vertical direction below 2 Mm. These residual forces are not close to 0. The main contributions come from the photosphere and from regions with very high or very low $\beta$. The inadequate boundary conditions of the plasma as well as the noise in the measured magnetic field prevent the MHS extrapolation from balancing the forces well on the photosphere. In very high $\beta$ regions ($\beta > 10$) the Lorentz force is too weak to act against the plasma forces, which can result in a ratio of the residual force over the Lorentz force that is much larger than 1.
In very low $\beta$ regions ($\beta < 0.1$) the plasma forces are too weak to act against the Lorentz force, which can result in a ratio that is close to 1. We recompute the ratios excluding the bottom boundary and only in regions where $0.1 \leq \beta \leq 10$. The numbers are 23\% and 7.6\% for the transverse and vertical directions, respectively, which means that the major part of the Lorentz force is balanced. Although these two numbers are still not very small, they are acceptable considering that we have embedded two data sets in each other (recorded by different instruments, obtained by different inversion techniques, etc.) when producing the magnetogram that provides the bottom boundary. \begin{figure} \includegraphics[width=\hsize]{residual_force.pdf} \caption{Planar average of forces as a function of height in the initial unrelaxed state (a)(b) and in the MHS equilibrium (c)(d). Panels (a) and (c) show the transverse direction, while panels (b) and (d) illustrate the forces in the vertical direction.} \label{fig:residual_force} \end{figure} \subsection{Plasma} \begin{figure} \includegraphics[width=\hsize]{pressure_density_cut.pdf} \caption{(a)-(f) Gas pressure and density at heights of 0 Mm, 0.72 Mm and 1.44 Mm. (g)(h) Pressure and density in the plane y=18.7 Mm (black line in panel (a)). The red contours in panel (a) outline the locations at which the magnetic field strength is 1000 G. The two white ``X'' symbols in panel (a) indicate the intersections of the 1000 G contour and the black line. They are also the seed points of the two white lines in panel (g). The red rectangle in panel (c) specifies the FOV of Fig.~\ref{fig:squeeze}. Uniformly selected magnetic field lines in panel (c) range from 600 km to 1400 km in height.} \label{fig:pressure_density_cut} \end{figure} Fig.~\ref{fig:pressure_density_cut} shows the plasma distributions from different perspectives.
From both top and side views, we see clearly that the plasma pressure and density are reduced in strong field regions to keep the force balance. This is consistent with sunspot observations. Similar results were also found both in the LMHS models \citep{ads98,adm99,wnn15,wnn17} and in the previous tests of our MHS code \citep{zw18,zw19}. However, we also find that, at some parts of the active region edges, plasma pressure and density are enhanced. This has never been reported in previous extrapolations. Fig.~\ref{fig:squeeze} depicts selected Lorentz force vectors in a FOV that is outlined by the red rectangle in Fig.~\ref{fig:pressure_density_cut} (c). We see that the regions with an enhanced gas pressure, which are encircled by blue ellipses, are surrounded by a net inward flux of the Lorentz force. As a result, the plasma is squeezed together. It is also worth noting that the fibril-like plasma pattern traces the magnetic field in Fig.~\ref{fig:pressure_density_cut} (c) and Fig.~\ref{fig:plasma_compare} (a). Such localized concentrations reflect the nonlinear nature of the MHS system, and cannot be reproduced by the LMHS model (see Fig.~\ref{fig:plasma_compare}). \begin{figure} \includegraphics[width=\hsize]{squeeze.pdf} \caption{Two-dimensional Lorentz force vectors overplotted on the gas pressure at a height of 1.44 Mm. The plotted FOV is outlined by the red rectangle in Fig.~\ref{fig:pressure_density_cut} (c). Blue ellipses encircle four typical regions where the gas pressure is enhanced.} \label{fig:squeeze} \end{figure} According to the model of plasma $\beta$ over an active region developed by \cite{g01}, the magnetic field dominates the plasma above a height of $1{-}2$ Mm (see Fig.~3 in \cite{g01}). Fig.~\ref{fig:LorentzForce} (a) shows that $\beta(z)$ of the MHS equilibrium lies within the $\beta$ range illustrated by \cite{g01}. A high plasma $\beta$ is a necessary, but not a sufficient, condition for the magnetic field to be non-force-free.
To see whether the magnetic field is non-force-free in the high $\beta$ region, the current-weighted sine of the angle between the magnetic field and the electric current density \citep{sdm06} is computed: \begin{equation} \sigma=\left.\sum_i{\frac{|\mathbf J_i\times \mathbf B_i|}{B_i}} \right/ \sum_i{J_i}. \label{eq:cwsin} \end{equation} As shown in Fig.~\ref{fig:LorentzForce} (b), $\sigma$ decreases rapidly from 0.7 at the photosphere to less than 0.1 above 2.0 Mm. Since the effective vertical resolution of the solution is roughly the same as the horizontal resolution of the magnetogram, the high resolution IMaX data allow us to study this non-force-free layer in detail. Coarser data (e.g., from HMI), with only a few grid points to resolve this layer, meet with difficulties when focusing on the lower atmosphere. Fig.~\ref{fig:LorentzForce} (c) shows the Lorentz force distribution at a height of 0.4 Mm. We see that strong Lorentz forces are mainly located at the edges of strong-field features, where the plasma $\beta$ drops precipitously. This is a natural result caused by the strong plasma forces at the edges. The large plasma differences at the edges of magnetic features in the lower atmosphere are routinely observed. \begin{figure} \centering \includegraphics[width=\hsize]{plasma_compare.pdf} \caption{Pressure, density and magnetic field strength of the MHS model (a)(b)(c) and the LMHS model (d)(e)(f) at a height of 0.72 Mm. Uniformly selected magnetic field lines in panel (a) range from 600 km to 1400 km in height.} \label{fig:plasma_compare} \end{figure} \begin{figure} \includegraphics[width=\hsize]{LorentzForce.pdf} \caption{Planar averages of the plasma $\beta$ (a) and of the current-weighted sine $\sigma$ (b) of the angle between the magnetic field and the current, as functions of height.
(c) The magnitude of the Lorentz force at a height of 0.4 Mm.} \label{fig:LorentzForce} \end{figure} \subsection{Relation between plasma pressure, current density and photospheric brightness} Fig.~\ref{fig:pre_sufi300} shows the mapping of the plasma and current density onto an image acquired by \textsc{Sunrise}/SuFI. Note that the extrapolation data are cut according to the SuFI FOV of $15''\times 38''$. Bright points are clearly seen in the inter-granular lanes in panel (c). They are typically regarded as nearly vertical slender flux tubes with kG magnetic fields \citep{s93,nts08,rdb14}. The lateral radiative inflow makes the tubes hot and bright \citep{s76}, making them visible as bright points at wavelengths sensitive to temperature \citep{ssb03,rsm10}. \begin{figure} \centering \includegraphics[width=\hsize]{pre_sufi300.pdf} \caption{(a) Gas pressure and (b) current density at a height of 80 km. (c) SuFI 3000~\AA\ image. (d) IMaX magnetogram. Strong pressure regions are outlined by yellow ellipses.} \label{fig:pre_sufi300} \end{figure} Fig.~\ref{fig:pre_sufi300} (a) and (b) show that regions of high plasma pressure and strong electric current density coincide. Most of them are located near the edges of magnetic flux tubes. These flux tubes, which appear as photospheric bright points at SuFI 3000~\AA\ (see Fig.~\ref{fig:pre_sufi300} (c)), are often accompanied by high plasma pressure (below 400 km) and electric currents around them. At the edges of the flux tubes, the plasma and magnetic field interact, which leads to the high current and the co-spatial high plasma pressure. Such a high current density around magnetic bright points is also reminiscent of the electric current sheets expected to bound flux tubes. It is worth noting that many bright points do not have corresponding enhancements in the electric current. This may be related to the local dynamics. It must also be kept in mind that the MHS model does not take radiation into account.
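For reference, the current-weighted sine $\sigma$ defined earlier in this section is straightforward to evaluate on a discrete grid. The following is a minimal sketch with toy $\mathbf J$ and $\mathbf B$ vectors (not the actual extrapolation data).

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)

def cw_sin(J, B):
    """Current-weighted sine of the J-B angle over a set of grid points:
    sigma = sum_i |J_i x B_i|/B_i  /  sum_i J_i."""
    num = sum(norm(cross(j, b)) / norm(b) for j, b in zip(J, B))
    den = sum(norm(j) for j in J)
    return num / den

# Sanity checks: J parallel to B gives sigma = 0 (force-free layer);
# J perpendicular to B gives sigma = 1 (maximally non-force-free).
B = [(0.0, 0.0, 1.0)] * 4
print(cw_sin([(0.0, 0.0, 2.0)] * 4, B))   # 0.0
print(cw_sin([(1.0, 0.0, 0.0)] * 4, B))   # 1.0
```

Weighting by the current magnitude ensures that grid points with negligible current do not bias the force-freeness measure.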
\subsection{Comparisons of magnetic fields and chromospheric fibrils} The SuFI instrument provides diffraction-limited images at 3968~\AA\ with contributions from both the photosphere and the low-to-mid chromosphere \citep{jss17}. The observed slender fibrils at this wavelength are the dominant structures (see Fig.~\ref{fig:fl_sufi397} (a)) in the SuFI FOV. It is generally believed that long fibrils in the chromosphere outline the magnetic field in this layer \citep{ds11,jyr11,spl13,lcr15,zwd16}. We plot field lines within the sub-volume spanning the 600-1400 km height range, which implies that low field lines ($\leq$ 600 km) are ignored. Seed points of the field lines are uniformly selected in the photosphere. Fig.~\ref{fig:fl_sufi397} shows that most fibrils trace the field lines nicely. The similar field line patterns of the different extrapolation models imply that plasma forces have a limited impact on the large scale magnetic field of this potential-like active region, at least in the height range of 600-1400 km. \begin{figure} \centering \includegraphics[width=\hsize]{fl_sufi397.pdf} \caption{Magnetic field lines of different models within the heights [600, 1400] km overplotted on the image observed in the Ca II H core line with a 1.1~\AA\ wide filter. The field lines are rooted in the same set of seed points in all panels.} \label{fig:fl_sufi397} \end{figure} In order to quantitatively show the degree of agreement between fibrils and field lines, we compute the angle $\theta$ between the fibrils and the plane-of-the-sky component of the magnetic field. The computation is not carried out on every pixel of the image. Instead, to show more clearly the discrepancy among the different models, we focus on the regions where large differences between the magnetic vectors of the MHS model and the NLFFF model appear (see contours in Fig.~\ref{fig:fl_sufi397}). We have tried to identify all fibrils in the regions of interest. For each fibril, one point is picked for the statistics.
Then the method introduced by \citet{ifw08} is used to estimate the orientation of the fibril. The key point of the method is to use the gradient and the Hessian matrix of the image intensity to determine the orientation. Then $\theta$ is computed vertically within the 600-1400 km height range. The smallest value is taken as the discrepancy between the fibril and the magnetic vector. For a total of 26 points picked (hence 26 fibrils identified), the averages of $\theta$ for the MHS model, the NLFFF model and the LMHS model are $11.8^\circ,\ 15.7^\circ$ and $20.9^\circ$, respectively. There are 18 points (nearly 70\%) at which the MHS model's $\theta$ is smaller than those of the other two models. \begin{figure} \centering \includegraphics[width=0.5\hsize]{vec_sufi397.pdf} \caption{Contours show where the average of the angles between the MHS and NLFFF magnetic vectors over the 600-1400 km height range equals $5^\circ$. Red vectors depict the orientations of the fibrils. Each vector starts from the point at which $\theta$ has the largest value along the fibril.} \label{fig:vec_sufi397} \end{figure} \section{Conclusion and perspectives} \label{sec:conclusion} In this work we apply, for the first time, a nonlinear MHS code to model the solar lower atmosphere starting from a real magnetogram. A combined vector magnetogram covering the whole active region is used as the boundary input. An MHS equilibrium is constructed in which the Lorentz forces are effectively balanced by the plasma forces. Pressure and density depletions take place in the strong field regions, together with enhancements at the active region edges. In the low $\beta$ layers, the fibril-like plasma structures clearly outline the magnetic field lines. A thin non-force-free layer is resolved within about 50 grid points in the z direction. High plasma pressure and co-spatial high electric currents appear around the bright points prominent in SuFI 3000~\AA\ images.
Although similar general patterns of the magnetic fields are found in the different types of extrapolations (MHS, linear MHS, nonlinear force-free fields), a quantitative comparison implies that the magnetic field vectors of the MHS model are more aligned with the orientation of the chromospheric fibrils observed by SuFI at 3968~\AA. As mentioned before, the gas pressure and density are optimized independently. That means the initial isothermal temperature profile cannot be kept. An equation of state is needed to close the MHS equations. However, the present version of the code deals with pressure and density separately. Future studies may focus on the extrapolation under more constraints from observations. The MHS extrapolation converges at a much lower speed than the NLFFF extrapolation. In this application with $2336\times1824\times128$ grid points, the MHS code runs for about 55 hours on two 18-core Intel Xeon Gold 6150 CPUs, while the NLFFF code takes only 1 hour. Considering the thinness of the non-force-free layer, a future extrapolation could consist of a time-consuming MHS model in the lower atmosphere and a much faster NLFFF model in the higher atmosphere, using the model below as the boundary input. Such a combination would be a superior alternative to the NLFFF model with a preprocessed magnetogram \citep{wis06}. Last but not least, the MHS equilibrium, as computed by the new model, can serve as the initial condition for time-dependent data-driven MHD simulations \citep{jwf16,gxk19}. \begin{acknowledgements} We acknowledge constructive suggestions of the referee and valuable discussions with B. Inhester and T. Riethm\"{u}ller. The German contribution to \textsc{Sunrise} and its reflight was funded by the Max Planck Foundation, the Strategic Innovations Fund of the President of the Max Planck Society (MPG), DLR, and private donations by supporting members of the Max Planck Society, which is gratefully acknowledged.
This project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 695075) and has been supported by the BK21 plus program through the National Research Foundation (NRF) funded by the Ministry of Education of Korea. This work was also supported by DFG-grant WI 3211/4-1. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The Sun is a magnetically active star, filling the interplanetary space with a stream of charged particles called the solar wind \citep[see e.g. a recent review by][]{Verscharen2019}. The solar wind properties are far from being homogeneous, with strong variations in temperature, density, or interplanetary magnetic field observed in connection with various phenomena of solar activity. The main drivers of strong disturbances of the solar wind are coronal mass ejections (CMEs), the fast--slow solar wind interaction on the borders of corotating interaction regions, and fast-wind outflows from coronal holes. Solar-wind disturbances may ultimately interact with Earth's magnetosphere, thereby triggering geomagnetic activity. As first proposed by \citet{Dungey1961}, the dynamic pressure exerted by the solar wind on the magnetosphere can trigger magnetic reconnection, opening dayside dipolar geomagnetic field lines. The solar wind then transports this magnetic field to the nightside, forming a long tail behind the Earth. This transfer of magnetic flux and the resulting reconfiguration of the magnetosphere eventually lead to nightside magnetic reconnection, returning flux to the dayside in various phenomenological response modes that depend on the disturbance level \citep{Dungey1961, Kepko2015}. However, a common characteristic of all such response modes is the formation of a current wedge system \citep{Kepko2015, McPherron&Chu17:ssr}. A fraction of the tail current along geomagnetic field lines is then temporarily diverted through the ionosphere, allowing a closure of the current wedge and causing perturbations in the auroral zone and at middle latitudes \citep{McPherron&Chu17:ssr}. Both substorms and geomagnetic storms give rise to a current wedge, plasma sheet inward convection by inductive electric fields, and energetic particle injections \citep{Ganushkina2017, Kepko2015, McPherron&Chu17:ssr, Thomsen2004}. However, the current wedge generally has a more limited temporal extent during substorms than during storms, which frequently last for days \citep{Ganushkina2017, Kepko2015}.
Substorms are one of the key dynamical processes occurring during storms, but isolated substorms also occur outside storms \citep{Viljanen2006, Turnbull2009}. During storms (mainly caused by strong interactions between CMEs and the magnetosphere), a stronger buildup of the inner ring current (a westward current of ions roughly $\sim2-4$ Earth radii above the equator) is provided by a deeper inward transport of charged particles from the plasma sheet, leading to a significant and prolonged decrease of the geomagnetic field \citep{Ganushkina2017}. All these ionospheric and magnetospheric currents, and the related field-aligned currents, can cause important geomagnetic field variations during periods of rapidly evolving solar wind dynamic pressure \citep{Gonzalez94, Lakhina2016, McPherron&Chu17:ssr, Kappenman2005, tsurutani2009brief}. This realization has led to the traditional concept of disturbed days: days of smooth and regular geomagnetic field variations have been called {\it quiet days}, whereas days of stronger and irregular variations have been called {\it disturbed days} \citep{ChapmanBartels1940}. Geomagnetically induced currents (GICs) in the ground are due to strong variations $dH/dt$ of the horizontal component $H$ of the geomagnetic field over typical time scales of $\sim 10-1000$ seconds during disturbed days \citep{Carter2015, Kappenman2003, Kataoka2008, Pokhrel2018,Zhang2016}. Substorms generally produce the largest $dH/dt$ at high and mid-latitudes during periods of fast solar wind and have caused many of the major GIC impacts during large storms -- e.g., the Quebec voltage collapse on 13 March 1989 was triggered by a first substorm, while two later substorms tripped out transformers in the UK \citep{Boteler2019}. $dH/dt$ was generally found to be about a factor of two smaller during non-storm substorms than during storm-related substorms, possibly due to an additional input from ring current variations during storms \citep{Viljanen2006, Turnbull2009}.
Other important sources of $dH/dt$ during geomagnetic storms include sudden commencements (the shock compression of the magnetosphere when a fast CME impacts it at the start of a storm, leading to an increase of Chapman-Ferraro currents at the dayside magnetopause; e.g., see \citealt{Kikuchi2001}) and rapid variations of the ring current, through its role in the generation of Region 2 field-aligned currents \citep{Ganushkina2017}. Sudden commencements have a large $dH/dt$ because of their shock-like nature, while rapid increases of ring current energy density following large-scale injection or inward convection of energetic charged particles coming from the plasma sheet can also produce large $dH/dt$ \citep{Kappenman2003, Kappenman2005, Kataoka2008}. GICs propagate through conducting regions in the ground and water, but also in grounded conductors. The presence of GICs in the electric power grid can cause various kinds of damage. GICs are quasi-DC currents that can lead to half-cycle saturation and drive the transformer response into a non-linear regime. This poses a risk for transformers by producing high pulses of magnetizing current, local heating (and vibration) within the transformer \citep{Gaunt2014}, and AC harmonics that propagate out into the power network, where they can disrupt the operations of various devices \citep{Kappenman2007, Molinski2002}. In particular, the propagation of harmonics in the power grid during half-cycle saturation can distort the electrical current waveform, eventually triggering a detrimental reaction of protective relays connecting power lines, or leading to a disruption of other devices attached to these lines. GICs identified by fast variations of the geomagnetic field have been linked with various power grid failures \citep{schrijver2013disturbances}, eventually leading to power grid disruptions \citep{Kappenman2007, pirjola2000geomagnetically, Pulkkinen2017, schrijver2013disturbances}.
Although high latitude regions are more at risk from GICs, middle and low latitude regions may also be impacted by significant GICs \citep{bailey2017, Carter2015, gaunt2007, Lotz2017, Marshall2012, Torta2012, Tozzi2019, Wang2015, Watari2009, Zhang2016, Zois2013}. A first study of anomalies in the Czech power grid as a function of geomagnetic activity (defined by the $K$ index computed from the measurements of the Earth's magnetic field at a local magnetometer station near Budkov -- e.g., see \citealt{Mayaud1980,McPherron&Chu17:ssr}) has already identified some statistically significant increases of the rate of anomalies around month-long periods of higher geomagnetic activity than nearby periods of lower activity \citep{Vybostokova2018}. Nevertheless, the relationship between geomagnetic events and anomalies still remained somewhat loose. Accordingly, the main goal of the present paper is to better ascertain the existence of a tight relationship between power grid anomalies and geomagnetic storms, on the basis of the same data set. We shall discuss the physical mechanisms by which GICs may cause anomalies in power lines and transformers, and show that our statistical results are suggestive of a causal relationship based on those mechanisms. We shall also address the important and unanswered question of the time delay between moderate to large geomagnetic storms with minimum $Dst<-40$ nT \citep{Gonzalez94} and the actual occurrences of anomalies. For that purpose, we shall use Superposed Epoch Analysis to investigate the relative occurrence of GIC effects in the Czech power grid during disturbed days as compared with quiet days. Such disturbed days will be categorized using different time-integrated parameters of geomagnetic activity, related to the magnitude of temporal variations of the horizontal component of the geomagnetic field, which can induce detrimental currents in power lines. 
\section{Data sets} In this study, we searched for a causal relation between two types of time series: the first series describes the daily anomaly rates in the Czech electric power-distribution grid, and the second serves as a proxy of disturbed days for the estimation of geomagnetically induced currents. \subsection{Logs of Anomalies} The Czech Republic is a mid-latitude country (around $\sim 50^\circ$ geographic latitude and $\sim 45^\circ$ corrected geomagnetic latitude), where the effects of solar/geomagnetic activity on ground-based infrastructures are expected to be moderate at most. The modelled amplitudes of GICs during the Halloween storms in late October 2003 reached 1-minute peaks of about 60~A\footnote{Smi\v{c}kov\'a, A., Geomagnetically Induced Currents in the Czech Power Grid, BSc. thesis (supervisor \v{S}vanda, M.), Faculty of Electrical Engineering, Czech Technical University, 2019, available online \url{http://hdl.handle.net/10467/84988}.}. The country is elongated in the east--west direction (about 500 km long), whereas in the south--north direction it extends about 280~km from border to border. The backbone of the electric power network is operated by the national operator \v{C}EPS, a.s., which maintains the very-high-voltage (400~kV and 220~kV) transmission network, and connects the Czech Republic with neighbouring countries. \v{C}EPS also maintains the key transformers and electrical substations in the transmission network. The area of the state is then split into three regions, where the electricity distribution is under the responsibility of the distribution operators. The southern part is maintained by E.ON Distribuce, a.s., the northern part by \v{C}EZ Distribuce, a.s., and the capital city of Prague is maintained by PREdistribuce, a.s.
All three distributors not only maintain very-high-voltage (110 kV) and high-voltage (22 kV) power lines, but also connect the consumers via the low-voltage (400 V) electric power transmission network. All four above-mentioned power companies have agreed to provide us their maintenance logs. The datasets used in this study are exactly the same datasets already used in the study by \cite{Vybostokova2018}. Thus, we refer the reader to section 3.2 of this previous paper for a more detailed description of the datasets. By mutual non-disclosure agreement with the data providers, the datasets were anonymised (by removing the information about the power-company name, and also by changing the calendar date to a day number) and must be presented as such. The total time span is 12 years, but the span of individual maintenance logs provided by the operators is shorter, varying between 6 and 10 years. We only briefly recall that the obtained logs were cleaned of events that were obviously not related to variations of geomagnetic activity. From these logs, we keep only the dates when the events occurred and did not consider any other details. These inhomogeneous datasets (the log entries were provided by different individuals with varying levels of detail and quality in the event descriptions) were split into twelve subsets D1--D12, which were investigated separately. Each sub-dataset was selected so that it contained only events occurring on devices of a similar type and/or with the same voltage level and recorded by the same operating company. The dataset descriptions are briefly summarised in Table~\ref{tab:datasets}. \begin{table}[ht] \caption{Datasets analysed in this study.
This is a reduced version of Table~1 in \cite{Vybostokova2018}.} \label{tab:datasets} \centering \begin{tabular}{l|lll} {\bf Dataset} & {\bf Voltage level} & {\bf Type} & {\bf Span}\\ {\bf ID} & {\bf } & {\bf } & {\bf } \\ \hline D1 & very high voltage & equipment: transformers, & 9 years\\ & & electrical substations& \\ D2 & high voltage & equipment & 6 years \\ D3 & very high voltage & equipment & 6 years\\ D4 & high and low voltage & power lines & 7 years\\ D5 & high and low voltage & equipment and power lines & 7 years\\ D6 & high and low voltage & equipment & 7 years \\ D7 & very high voltage & power lines & 10 years \\ D8 & high voltage & transformers & 10 years \\ D9 & very high voltage & transformers& 10 years \\ D10 & very high and high voltage & electrical substations& 10 years \\ D11 & very high voltage & power lines& 10 years \\ D12 & high voltage & power lines & 10 years \\ \end{tabular} \end{table} \subsection{Geomagnetic Indices and Parameters used for GIC Estimation} Various parameters have been considered to estimate the effects of geomagnetic activity on power grids \citep{schrijver2013disturbances}. GICs are due to strong variations $dH/dt$ over typical time scales of $\sim 10-1000$ seconds \citep{Kappenman2003}. There are two sources of such large $dH/dt$ at low and middle latitudes: (i) sudden impulses (SI), caused by the shock preceding a fast CME and called sudden commencements (SC) when they are followed by a storm, and (ii) the growth and decay of the ring current during a magnetic storm. Substorm-related disturbances are mostly limited to high and middle latitudes, whereas disturbances caused by ring current changes mainly affect middle and low latitudes. Statistically, periods of stronger cumulative effects of GICs in a power grid are therefore expected to correspond to {\it disturbed days} of elevated geomagnetic activity \citep{ChapmanBartels1940}.
In the present study, we shall use various cumulative (time-integrated) parameters based on different magnetic indices to categorize such disturbed days, and we shall investigate the relative occurrence of GIC effects during such disturbed days as compared with quiet days. An appropriate quantity to estimate GICs at low latitudes is $d(\textit{SYM-H})/dt$, which directly provides a (longitudinally averaged) measure of the 1-minute $dH/dt$ due to ring current variations that drive GICs there \citep{Carter2015, Kappenman2003, Zhang2016}. Indeed, the $\textit{SYM-H}$ index is essentially similar to the hourly $Dst$ storm time index, but measured on 1-minute time scales -- that is, it provides the disturbance $\Delta H = H - H_{\rm quiet}$ of the horizontal component of the magnetic field as compared to its quiet-time level, longitudinally averaged based on ground measurements at different low-latitude magnetometer stations \citep{Mayaud1980}. Several studies have demonstrated the existence of significant correlations between GICs or electric grid failures and times of large $d(\textit{SYM-H})/dt$ at low to middle latitudes during geomagnetic storms, although $d(\textit{SYM-H})/dt$ is often inappropriate during strong substorms \citep{Carter2015, Wang2015, Zhang2016}. \cite{Carter2015} have further shown that the actual $dH/dt$ at middle latitudes due to SI/SCs can be a factor $\sim 2-3$ larger on the dayside than $d(\textit{SYM-H})/dt$, potentially allowing GIC effects even during geomagnetic events with relatively small $d(\textit{SYM-H})/dt$. We checked that $dH/dt$ at the Czech magnetometer station of Budkov can also sometimes be $>2-3$ times larger than $d(\textit{SYM-H})/dt$ during SI/SCs. \cite{Viljanen2014} have noticed the presence of a European region of low underground conductivity stretching from France through the Czech Republic to Hungary that could favor significant GICs at middle latitudes.
\cite{Gil2019} have shown the presence of GICs during a few selected storms in Poland, while \cite{Tozzi2019} have found that non-negligible GICs could exist even down to northern Italy. \cite{Wang2015} have further emphasized that cumulative GICs in a nuclear plant transformer during a long-duration geomagnetic event could sometimes be more harmful than short events, due to the longer cumulative time of transformer heating. Accordingly, we consider here the $Int(d(\textit{SYM-H})/dt)$ parameter to categorize disturbed days of expected significant GIC impacts on power grids. $Int(d(\textit{SYM-H})/dt)$ is calculated over each day, as the sum of all 1-minute $\vert d(\textit{SYM-H})/dt\vert$ values (in nT/min) obtained during times when $\textit{SYM-H}$ remains smaller than some threshold. The selected threshold (varying from $-50$ nT to $-25$ nT) should ensure that only geomagnetic storm periods are considered \citep{Gonzalez94}. This $Int(d(\textit{SYM-H})/dt)$ parameter makes it possible, in principle, to take into account the immediate effects on power grids caused by large individual $\vert dH/dt\vert$ due to ring current variations, as well as the more delayed, cumulative effects potentially caused by prolonged periods of moderate to significant $\vert dH/dt\vert$ levels \citep{Carter2015, Wang2015, Zhang2016} -- although large individual $\vert dH/dt\vert$ during strong substorms will require other indices, such as $AE$ or $ap$, to be taken into account (see below). Other works have suggested that the mean or cumulative $Dst$ during storm main phase should be good indicators of long-duration GICs, because larger and steeper decreases of $Dst$ correspond to stronger disturbances that should generally lead to larger $dH/dt$ at the relevant shorter time scales of $\sim 10-1000$ seconds \citep{Balan2014, Balan2016, Lotz2017}.
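The daily bookkeeping behind $Int(d(\textit{SYM-H})/dt)$ described above can be sketched in a few lines of Python. This is only an illustration of the definition, not the actual processing pipeline; the function name, array layout and default threshold are our own assumptions.

```python
import numpy as np

def int_dsymh_dt(symh, threshold=-30.0):
    """Daily Int(d(SYM-H)/dt) in nT: for each day, sum the 1-minute
    |d(SYM-H)/dt| values over the minutes where SYM-H stays below
    `threshold`.  `symh` is a 1-minute series whose length is a
    multiple of 1440 (minutes per day)."""
    dsym = np.abs(np.diff(symh, prepend=symh[0]))  # |d(SYM-H)/dt| in nT/min
    active = symh < threshold                      # storm-time minutes only
    return (dsym * active).reshape(-1, 1440).sum(axis=1)
```

Days on which $\textit{SYM-H}$ never drops below the threshold thus automatically get a zero level, as required by the event selection described below.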
Using observations in South Africa (at middle corrected geomagnetic latitudes $\sim 36^\circ-42^\circ$ not much lower than in the Czech Republic), \cite{Lotz2017} have demonstrated the existence of a linear relationship between the sum of induced electric fields recorded in the ground during geomagnetic storms and the integral of $\textit{SYM-H}$ (or $Dst$) values, suggesting that the cumulative $\textit{SYM-H}$ or $Dst$ could be used as good proxies for cumulated induced electric fields at middle corrected geomagnetic latitudes (although ring current effects are likely more important for GICs in South Africa than in the Czech Republic, where a more balanced mixture of ring current and substorm effects is present). They also noted that some effects might be present as long as $\textit{SYM-H}$ remained below $-20$ nT. Therefore, we also consider the $IntDst$ parameter to categorize disturbed days of expected significant GICs in the Czech Republic \cite[e.g., see][]{Mourenas2018}. $IntDst$ (in nT$\cdot$hr) is calculated as a sum of hourly $\vert Dst\vert$ values. This summation starts when $Dst$ first becomes smaller than a threshold (taken between $-50$ nT and $-25$ nT as before) chosen to ensure that only storm periods are considered, and this summation ends when $Dst$ reaches its minimum value over the next 24 hours. Each $IntDst$ value is then assigned to the starting day of a given summation, with all integration periods strictly separated by construction. As a result, $IntDst$ is generally measured during storm main phase, where the effects on GICs are likely stronger \citep{Balan2014, Balan2016}, to provide a complementary metric to the $Int(d(\textit{SYM-H})/dt)$ metric calculated over each whole day without any consideration of storm phase. 
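As an illustration of the $IntDst$ bookkeeping just described, a minimal Python sketch follows. The event-separation detail (requiring recovery above the threshold before a new summation may start) and the function name are our own assumptions, intended only to mimic the "strictly separated" construction mentioned above.

```python
import numpy as np

def int_dst(dst, threshold=-30.0):
    """IntDst in nT*hr from an hourly Dst series: when Dst first drops
    below `threshold`, sum |Dst| until Dst reaches its minimum within
    the next 24 hours, and assign the sum to the starting day."""
    out, i, n = {}, 0, len(dst)
    while i < n:
        if dst[i] < threshold:
            end = i + int(np.argmin(dst[i:i + 24]))  # hour of minimum Dst
            day = i // 24                            # starting day of the event
            out[day] = out.get(day, 0.0) + float(np.sum(np.abs(dst[i:end + 1])))
            i = end + 1
            while i < n and dst[i] < threshold:      # wait for recovery, keeping
                i += 1                               # integration periods separated
        else:
            i += 1
    return out
```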
While ring current variations during storms can be quantified by the $Dst$ and $\textit{SYM-H}$ indices, the magnetic indices that provide a measure of magnetospheric and ionospheric current variations observed during strong substorms are $AE$, $AL$, $Kp$, or $ap$ \citep{Kamide1996, Mayaud1980, Mourenas2020, Thomsen2004}. The $ap$ index (like its logarithmic equivalent $Kp$) provides a global measure of the range of magnetic field variations at middle latitudes over 3-hour time scales, obtained by averaging measurements from different mid-latitude magnetometer stations spread in longitude \citep{Mayaud1980, Thomsen2004}. In contrast, the range indices $AE$ and $AL$ are measured at higher magnetic latitudes $>60^\circ$ inside the auroral region \citep{Mayaud1980, Kamide&Rostoker04}, and $AE$ saturates at high geomagnetic activity $am>150$ (with $am$ a mid-latitude index similar to $ap$) because the auroral oval then expands equatorward of the magnetometer stations measuring it \citep{Lockwood2019, Thomsen2004}. Therefore, $ap$ is probably more appropriate than $AE$ for quantifying the strength of time-integrated geomagnetic disturbances at middle (sub-auroral) geomagnetic latitudes \citep{Thomsen2004, Mourenas2020}. Although $ap$ cannot provide an accurate ranking or quantification of the maximum $dH/dt$ values reached during the most disturbed events, due to its intrinsic saturation at $Kp=9$ and its coarse 3-hour time resolution, it may still provide rough estimates during less extreme events with $Kp\sim3-7$ \citep{Kappenman2005}. Therefore, it is worth examining whether some time-integrated measure of $ap$ could still be used to simply categorize disturbed/quiet days of expected stronger occurrence/absence of GIC effects at middle latitudes, during a large series of medium (most frequent) to strong (more rare) time-integrated $ap$ events spread over 6 to 10 years.
Accordingly, we shall consider in section~\ref{sect:intAP} a third parameter of geomagnetic activity, $IntAp$, corresponding to the daily maximum level of the integral of 3-hourly $ap$ values over a continuously active period of $ap\geq 15$ nT \citep{Mourenas2019, Mourenas2020}. This should make it possible to categorize disturbed days that include contributions to GICs from both (storm-time) ring current variations and strong substorms, usefully complementing the $Int(d(\textit{SYM-H})/dt)$ and $IntDst$ parameters. Indeed, $IntAp$ provides a rough estimate of the effects at middle latitudes of significant time-integrated $dH/dt$ disturbances due to substorms, which often do not reach the low latitudes where $\textit{SYM-H}$ and $Dst$ are measured. In addition, we shall consider a fourth parameter, called $IntAE$, which is based on the high-latitude $AE$ auroral electrojet index \citep{Mayaud1980}. $IntAE$ is the daily maximum level of the integral of $AE$ calculated over the same period of continuously high $ap\geq15$ nT as $IntAp$ (generally corresponding to $AE>200$ nT), to ensure that the corresponding substorm-related magnetic disturbances effectively reach middle latitudes \citep{Mourenas2019, Mourenas2020}. $IntAE$ provides a measure of cumulative substorm-related disturbances, corresponding to continuous periods of auroral current variations roughly similar to High-Intensity Long-Duration Continuous $AE$ Activity (HILDCAA) events \citep{Tsurutani06}. \begin{figure} \centering \resizebox{0.9\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure1.pdf}}} \caption{Upper panel: $\textit{SYM-H}$, $Dst$, $Ap$, and $AE$ indices during the 13-15 February 2011 geomagnetic event. Bottom panel: corresponding $Int(d(\textit{SYM-H})/dt)$ (in nT), and $IntDst$, $IntAp$, and $IntAE$ (in nT$\cdot$hr) cumulative parameters, calculated using thresholds $\textit{SYM-H}\leq-30$ nT, $Dst\leq-30$~nT, or $ap\geq15$ nT.
} \label{fig:2011-02-13} \end{figure} These four cumulative metrics of disturbed days are displayed in Figure \ref{fig:2011-02-13} together with 1-min $\textit{SYM-H}$ and $AE$, hourly $Dst$, and 3-hourly $ap$, during a moderate geomagnetic storm on 14-15 February 2011 that reached a minimum $\textit{SYM-H}=-49$ nT and a minimum $Dst=-40$ nT on 14 February, with strong substorms (identified by peaks in $AE$ and $ap$) during storm sudden commencement and main phase, and with a very weak secondary minimum of $Dst$ reaching $-30$ nT on 15 February at 17 UT during a burst of $AE$ activity. \section{Methods} In the present follow-up study to the work by \cite{Vybostokova2018}, we search for a tighter relationship between power grid anomalies and geomagnetic storms, based on the same datasets of anomalies in the Czech power grid. We also address the important and as yet unanswered question of the time delay between geomagnetic events and the occurrences of anomalies. Our working hypothesis is that {\it disturbed days} of high geomagnetic activity should cause an increase in daily rates of anomalies in the power distribution network as compared with {\it quiet days}. Accordingly, the daily anomaly rates should sharply peak within a few days (with some delay) after such disturbed days, and then decrease back to normal levels. This corresponds to a rapid response to GICs induced by substorms and storms, as observed for a few selected events -- e.g., see \cite{Gil2019, Wang2015}. Unfortunately, in a mid-latitude country such as the Czech Republic, the effects of geomagnetic activity are expected to be weak. Consequently, an investigation of individual, moderate geomagnetic events is not expected to reveal a significant increase of anomalies, because such anomalies induced by geomagnetic activity (via GICs) will generally remain hidden among many other anomalies caused by various other effects. 
It is therefore imperative in our statistical analysis to find a way to reduce the importance of anomalies caused by other effects. Note that our data series cover 6 to 10 years, each subset providing records of anomaly rates occurring during many separated disturbed days of high geomagnetic activity. Therefore, a feasible approach is to average over all these different events. The corresponding methodology is the \emph{Superposed Epoch Analysis}, widely used in astrophysics. A Superposed Epoch Analysis \citep[SEA;][]{Chree1913} is a statistical technique used to reveal either periodicities within a time sequence, or to find a correlation between two time series. In the latter case, the method proceeds in several steps. \begin{enumerate} \item In the reference time series, occurrences of the repeated events are defined as key times (or epochs). \item Subsets are extracted from the second time series within some range around each key time. \item Subsets from each time series are superposed, synchronized at the same key time (Day 0), and averaged, allowing inter-comparisons. \end{enumerate} This methodology is known to efficiently enhance the ``signal'' (related variations in both series) with respect to ``noise'' (unrelated variations in both series), because the noise adds up incoherently, whereas the signal is reinforced by the superposition. Thus, we performed the SEA of geomagnetic activity defined by $Int(d(\textit{SYM-H})/dt)$ or $IntDst$ parameters. A range of event thresholds $\textit{SYM-H}$ (or $Dst$) $<-25$ nT to $-50$ nT was considered, to keep only periods corresponding to weak to large geomagnetic storms \citep{Gonzalez94} and to allow for the determination of the best thresholds on event strength. Other days were assigned a zero level of $Int(d(\textit{SYM-H})/dt)$ or $IntDst$.
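The three SEA steps above can be sketched as follows (a minimal illustration only; the daily resolution, the restriction to full windows, and the function name are our own assumptions):

```python
import numpy as np

def superposed_epoch(series, key_days, window=(-5, 5)):
    """Superposed Epoch Analysis: extract windows of `series` (one value
    per day) around each key day, superpose them synchronized at Day 0,
    and average over events; returns one mean per epoch day."""
    lo, hi = window
    stacks = [series[k + lo:k + hi + 1]
              for k in key_days
              if k + lo >= 0 and k + hi < len(series)]  # keep full windows only
    return np.mean(stacks, axis=0)
```

Averaging over many epochs is what suppresses the incoherent "noise" while reinforcing the "signal" synchronized at Day 0.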
An important further requirement was that the 5-day period immediately preceding the start of a geomagnetic storm (Day 0 in the SEA) contained a zero level of the considered geomagnetic activity parameter (that is, all such quiet days must have $IntDst=0$ or $Int(d(\textit{SYM-H})/dt)=0$). This rather strict constraint should allow us to better quantify the effect of geomagnetic storms on the power grid during {\it disturbed days} as compared with {\it quiet days}, at the expense of a slight reduction of the number of considered events. In a second step, we analyzed these SEAs in more detail to determine as accurately as possible the time delay (after the start of a storm) that corresponds to the statistically most significant increase of anomalies, for each type of power grid equipment. \section{Results of Superposed Epoch Analysis} A Superposed Epoch Analysis was performed based on $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ parameters, considering successively thresholds $Dst<-25$ nT, $-30$ nT, $-40$ nT, and $-50$ nT, or $\textit{SYM-H}<-25$ nT, $-30$ nT, $-40$ nT, and $-50$ nT, to explore the dependence of power grid anomalies on the minimum strength of geomagnetic storms. The numbers of epochs considered in the SEAs of each reference series are given in Table~\ref{tab:epochs}. \begin{table}[] \caption{The number of epochs considered in SEAs for various reference series.
} \centering \begin{tabular}{ccl} {\bfseries Reference series} & {\bfseries Threshold} & {\bfseries \# of epochs}\\ \hline $IntDst$ & $-50$~nT & 138 \\ $IntDst$ & $-40$~nT & 172 \\ $IntDst$ & $-30$~nT & 221 \\ $IntDst$ & $-25$~nT & 222 \\ $Int(d(\textit{SYM-H})/dt)$ & $-50$~nT & 154 \\ $Int(d(\textit{SYM-H})/dt)$ & $-40$~nT & 191 \\ $Int(d(\textit{SYM-H})/dt)$ & $-30$~nT & 218 \\ $Int(d(\textit{SYM-H})/dt)$ & $-25$~nT & 231 \end{tabular} \label{tab:epochs} \end{table} \begin{figure} \centering \resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure2a.pdf}}} \resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure2b.pdf}}} \caption{Plots of epoch-superposed daily numbers of anomalies in the D12 series, considering $IntDst$ (in nT$\cdot$hr, left) and $Int(d(\textit{SYM-H})/dt)$ (in nT, right) for different upper thresholds on $Dst$ and $\textit{SYM-H}$. Solid lines indicate the superposed anomaly rates (upper row) or geomagnetic activity in the reference time series (lower row) during Days~$-1$ to $+5$ from the epoch (Day~0), whereas dashed lines show the same quantities for the remaining days. Error bars show the one-standard-deviation half-widths. } \label{fig:SEA4} \end{figure} The SEAs obtained for $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ both show a clear peak of geomagnetic activity at Day 0 and a sharp decrease on Day~1 for $IntDst$ or on Day~2 for $Int(d(\textit{SYM-H})/dt)$. The later decrease for $Int(d(\textit{SYM-H})/dt)$ is due to the presence of significant $d(\textit{SYM-H})/dt$ variations during the recovery phase of many storms stretching over at least 2 consecutive days, whereas $IntDst$ is generally calculated only during storm main phase. Fig. \ref{fig:SEA4} shows the SEAs obtained for the D12 series (power lines). Similar trends are found for other datasets concerning power lines.
All the figures corresponding to the different series D1 to D12 are available in the online supplement as Figs.~\ref{fig:D1seas}-\ref{fig:D12seas}. \subsection{Storm Effects: 5-day Periods After/Before Day 0} \label{sect:5daysafterbefore} Next, we compared the period of 5 {\it disturbed days} immediately following Day 0 (the day of peak storm activity) with the 5-day period immediately preceding Day 0 -- a preceding period of {\it quiet days} especially selected to have zero $IntDst$ or $Int(d(\textit{SYM-H})/dt)$ levels. This allows us to directly check the impact of {\it disturbed days} of geomagnetic storms on power grid anomalies, as compared with {\it quiet days}. For the two time intervals, we summed the total number of registered anomalies in the superposed series for each data subset and computed the statistical significance of the differences using the standard binomial statistical test. We tested the null hypothesis that the number of anomalies recorded over quiet days is not different from the number of anomalies recorded over disturbed days, that is, the null hypothesis that the probability of recording anomalies is the same during quiet and disturbed days. Should the resulting $p$-value be smaller than the selected statistical threshold (usually 0.05 for single-bin tests), we reject the null hypothesis and conclude that the recorded differences are indeed statistically significant. \begin{table}[] \caption{Comparison of the number of power grid anomalies in the 5-day period prior to Day~0 ($N_{-}$) and in the 5-day period after Day~0 ($N_{+}$), together with $p$-values of the statistical significance of the differences. These values are given for different reference series involved in SEAs with varying thresholds. 
} \centering $IntDst$ \begin{tabular}{l|lll|lll|lll|lll} {\bfseries ID} & \multicolumn{3}{c|}{$<-25$~nT} & \multicolumn{3}{c|}{$<-30$~nT} & \multicolumn{3}{c|}{$<-40$~nT} & \multicolumn{3}{c}{$<-50$~nT}\\ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$\\ \hline D1 & 60 & 59 & 1.0 & 54 & 52 & 0.92 & 35 & 33 & 0.90 & 29.0 & 36.0 & 0.46\\ D2 & 100 & 115 & 0.34 & 109 & 137 & 0.08 & 94 & 112 & 0.24 & 82 & 94 & 0.41\\ D3 & 17 & 17 & 1.0 & 20 & 23 & 0.76 & 16 & 22 & 0.42 & 18 & 12 & 0.36\\ D4 & 58 & 38 & 0.05 & 52 & 43 & 0.41 & 45 & 46 & 1.0 & 38 & 40 & 0.91 \\ D5 & 86 & 75 & 0.43 & 91 & 84 & 0.65 & 83 & 82 & 1.0 & 71 & 68 & 0.87\\ D6 & 30 & 36 & 0.54 & 40 & 39 & 1.0 & 38& 37 & 1.0 & 34& 31& 0.80\\ D7 & 134 & 132 & 0.95 & 143 & 137 & 0.77 & 115 & 120 & 0.79 & 98 & 105 & 0.67\\ D8 & 968 & 955 & 0.78 & 892 & 922 & 0.50 & 710 & 760 & 0.20 & 562 & 586 & 0.50\\ D9 & 105 & 102 & 0.89 & 95 & 112 & 0.27 & 70 & 67 & 0.86 & 44 & 53 & 0.42\\ D10 & 14292 & 14338 & 0.79 & 13245 & 13477 & 0.16 & 10791 & 11047 & 0.08 & 8601 & 8764 & 0.22\\ D11 & 415 & 494 & 0.01 & 403 & 476 & 0.02 & 302 & 387 & $<0.01$ & 247 & 297 & 0.04\\ D12 & 11366 & 12118 & $<0.01$ & 10787 & 11748 & $<0.01$ & 8965 & 9421 & $<0.01$ & 7242 & 7606& $<0.01$ \end{tabular} \vskip5mm $Int(d(\textit{SYM-H})/dt)$ \begin{tabular}{l|lll|lll|lll|lll} {\bfseries ID} & \multicolumn{3}{c|}{$<-25$~nT} & \multicolumn{3}{c|}{$<-30$~nT} & \multicolumn{3}{c|}{$<-40$~nT} & \multicolumn{3}{c}{$<-50$~nT}\\ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$\\ \hline D1 & 59 & 56 & 0.85 & 59 & 58 & 1.0 & 43 & 47 & 0.75 & 32 & 37 & 0.63\\ D2 & 98 & 98 & 1.0 & 104 & 110 & 0.73 & 101 & 121 & 0.20 & 93 & 107 & 0.36\\ D3 & 20 & 15 & 0.50 & 20 & 16 & 0.62 & 15 & 20 & 0.50 & 17 & 18 & 1.0 \\ D4 & 53 & 36 & 0.09 & 51 & 37 & 0.17 & 43 & 45 & 0.92 & 46 & 49 & 0.84\\ D5 & 79 & 66 & 0.32 & 83 & 70 & 0.33 & 80 & 78 & 0.94 & 83 & 77 & 0.69\\ D6 & 
29 & 28 & 1.0 & 35 & 31 & 0.71 & 38 & 33 & 0.64 & 38 & 29 & 0.33\\ D7 & 115 & 118 & 0.90 & 137 & 127 & 0.58 & 116 & 122 & 0.75 & 119 & 118 & 1.0\\ D8 & 964 & 936 & 0.54 & 1005 & 964 & 0.37 & 784 & 790 & 0.90 & 635 & 667 & 0.39\\ D9 & 98 & 101 & 0.89 & 107 & 102 & 0.78 & 80 & 93 & 0.36 & 58 & 74 & 0.19\\ D10 & 14220 & 14061 & 0.35 & 14594 & 14518 & 0.66 & 11951 & 11877 & 0.64 & 9702 & 9854 & 0.28\\ D11 & 408 & 450 & 0.16 & 420 & 473 & 0.08 & 334 & 415 & $<0.01$ & 300 & 323 & 0.38\\ D12 & 11273 & 11798 & $<0.01$ & 11675 & 12305 & $<0.01$ & 9669 & 10162 & $<0.01$ & 8385 & 8714 & 0.01 \end{tabular} \label{tab:pvalues} \end{table} The results, summarized in Table~\ref{tab:pvalues}, reveal a clear increase of anomalies during the period of 5 {\it disturbed days} following Day 0 as compared with the period of 5 {\it quiet days} preceding Day 0, for the two series D11 and D12 corresponding to power lines. The number of anomalies increases by 5\% for D12 and by 30\% for D11, with corresponding $p$-values always statistically significant ($<0.05$), for thresholds $<-30$ nT or $<-40$ nT -- except for $Int(d(\textit{SYM-H})/dt)$ and D11 for a threshold $<-30$ nT. Lower or higher thresholds usually lead to less statistically significant increases of anomalies, although not always -- e.g. for D11 and $IntDst$, the $<-25$ nT threshold gives a higher statistical significance. This means that moderate events with minimum $Dst$ or $\textit{SYM-H}$ near $-40$ nT often have a statistically detectable impact on anomaly rates, whereas weaker events do not. The same thresholds also lead to the highest peaks of anomalies after Day 0 in many other series. 
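The before/after comparison behind Table~\ref{tab:pvalues} amounts to an exact two-sided binomial test against an equal-split null hypothesis. A minimal sketch is given below (in practice a library routine such as scipy.stats.binomtest would be used; the counts are taken from the D11 row at the $<-40$~nT $IntDst$ threshold):

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: add up the probabilities of every
    outcome no more likely than the observed count k under Bin(n, p)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    total = sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12))
    return min(total, 1.0)

# D11, IntDst threshold < -40 nT: 302 anomalies in the quiet 5-day window
# versus 387 in the disturbed one; under H0 each anomaly falls in either
# window with probability 1/2.
p_val = binom_two_sided(387, 302 + 389 - 2)
print(p_val < 0.01)   # consistent with the p < 0.01 entry in the table
```

The tolerance factor guards against floating-point ties between symmetric tails of the distribution.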
Finally, for D11 and D12, the $<-40$ nT thresholds lead to the smallest $p$-values ($<0.01$) for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$, as well as to the smallest $p$-values $<0.1-0.2$ for D8 and D10 when considering $IntDst$, and to the smallest or second smallest $p$-values $<0.2-0.36$ for D2 and D9 when considering $Int(d(\textit{SYM-H})/dt)$. Therefore, the thresholds $\textit{SYM-H}<-40$ nT and $Dst<-40$ nT are probably the most appropriate to detect statistically significant increases of anomalies related to geomagnetic storms. The weaker significance of results for higher thresholds $<-25$ nT agrees with previous observations from \cite{Lotz2017} that weaker events have little effect on induced electric fields. However, moderate $Dst$ or $\textit{SYM-H}$ geomagnetic disturbances in the range $-40$ nT to $-50$~nT are found to still have some impact on power lines. The weaker significance of results for lower thresholds $<-50$ nT is likely due to a combination of two different effects: (i) storms start slightly later when using a threshold $<-50$ nT than for higher thresholds $<-40$ nT or $<-30$ nT, meaning that the 5-day period preceding Day 0 can actually contain significant $dH/dt$ geomagnetic activity leading to some anomalies, and (ii) the $<-50$ nT threshold corresponds to a 30\% to 40\% smaller number of events than the $<-30$ nT threshold, decreasing the sensitivity of the SEA to a potential slight increase of anomalies due to storms. A detailed inspection of the SEAs of D12 lends further credence to the impact of geomagnetic storms on power lines. Indeed, for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$, the peaks of anomalies in the few days following Day 0 reach the highest daily levels of anomalies of the whole 21-day SEAs for $<-30$ nT to $<-50$ nT thresholds, the main increases of anomalies occurring from Day $+0$ to Day $+3$. 
For D11 and thresholds $<-30$ nT to $<-40$ nT, the 4-day period following Day 0 also has the highest number of anomalies of the whole 21-day SEA, while the 5-day interval preceding Day 0 has the lowest average number of anomalies of the whole SEA. \subsection{Storm Effects: 3-day Periods Before/After Day 0 with Time Lags} \label{sect:3days} \begin{figure} \centering \includegraphics[width=\textwidth]{figure3a.pdf} \includegraphics[width=\textwidth]{figure3b.pdf} \caption{(left) Maps for D12 of increases (or decreases) of the number of anomalies as a function of the middle day of the first (abscissa) and second (ordinate) considered 3-day periods. (right) Maps of the corresponding $p$-values. The upper row is computed for the $IntDst$ reference series, whereas the lower row corresponds to the $Int(d(\textit{SYM-H})/dt)$ reference series. The $p$-values are evaluated only if there is an increase of anomaly rates in the second 3-day period as compared to the first 3-day period. Note the logarithmic scale of the plotted $p$-values: $p=0.0055$ (the adopted level of statistical significance for individual bins) corresponds to $\log p=-2.26$. Statistically significant bins are indicated by white dots. Blank bins are indicated by the white colour. } \label{fig:pvalues12} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figure4a.pdf} \includegraphics[width=\textwidth]{figure4b.pdf} \caption{(left) Maps for D8 of increases (or decreases) of the number of anomalies as a function of the middle day of the first (abscissa) and second (ordinate) considered 3-day periods. (right) Maps of the corresponding $p$-values. The upper row is computed for the $IntDst$ reference series, whereas the lower row corresponds to the $Int(d(\textit{SYM-H})/dt)$ reference series. The $p$-values are evaluated only if there is an increase of anomaly rates in the second 3-day period as compared to the first 3-day period. 
Note the logarithmic scale of the plotted $p$-values: $p=0.0055$ (the adopted level of statistical significance for individual bins) corresponds to $\log p=-2.26$. Statistically significant bins are highlighted by white dots. Blank bins are indicated by the white colour.} \label{fig:pvalues8} \end{figure} Next, we examined in more detail the SEAs performed based on $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ parameters for thresholds $Dst<-40$ nT and $\textit{SYM-H}<-40$ nT. We considered two shorter 3-day periods, located before and after Day 0. We varied the time lag between them and calculated (as before for 5-day periods) the statistical significance of the difference in anomaly rates between these two periods. Considering shorter 3-day periods should help to determine more precisely the (statistically most significant) time delay between the start of a geomagnetic storm and the related increase of the number of anomalies. Fig. \ref{fig:pvalues12} for D12, Fig. \ref{fig:pvalues8} for D8, and Figs.~\ref{fig:D1ps}--\ref{fig:D12ps} in the online supplement for all other datasets, show two-dimensional maps of the increases (or decreases) of the number of anomalies as a function of the middle day of the first and second 3-day periods, together with maps of the corresponding $p$-values computed only for increases. Let us examine these maps of $p$-values. For consistency with the procedure of estimation of the statistical significance adopted in Section~\ref{sect:5daysafterbefore}, we need to compare the number of anomalies over the same 5-day periods after and before Day 0. Accordingly, we must only consider the bins (representing 3-day periods) comprised between Days $-4$ and $-2$ (actually covering Days $-5$ to $-1$) for the period before Day 0, and the bins comprised between Days $+2$ and $+4$ (actually covering Days $+1$ to $+5$) for the period following Day 0. There are $3\times 3 = 9$ such bins. 
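The scan over pairs of 3-day windows, together with the per-bin corrected threshold $0.05/9$, can be sketched on a toy superposed series (all numbers below are illustrative, not taken from the real logs):

```python
def window_sum(daily, center, half=1):
    """Total anomalies in the 3-day window centred on `center`; day
    indices of the 21-day superposed series run from -10 to +10."""
    return sum(daily[center + d + 10] for d in range(-half, half + 1))

def lag_scan(daily, before=(-4, -3, -2), after=(2, 3, 4)):
    """Anomaly increase for every pair of 3-day windows before/after
    Day 0, as in the two-dimensional p-value maps."""
    return {(b, a): window_sum(daily, a) - window_sum(daily, b)
            for b in before for a in after}

per_bin_alpha = 0.05 / 9   # 9 bins tested -> corrected threshold ~0.0055

# toy series: flat background of 10 anomalies/day plus a storm-related
# bump of +4 on Days +1 to +3 (hypothetical numbers)
daily = [10] * 21
for d in (1, 2, 3):
    daily[d + 10] += 4
grid = lag_scan(daily)
print(max(grid.values()))   # largest increase, for the window on Day +2
```

Each entry of `grid` would then be passed to the same binomial test as before, and compared against `per_bin_alpha` rather than 0.05.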
Finding only one bin with a $p$-value $\sim 0.05$ (corresponding to a 5\% probability of obtaining an increase of anomalies by chance) among 9 bins is no longer as statistically significant as before. Therefore, an individual bin (representing 3-day periods) is hereafter required to have a smaller $p$-value $\leq 0.05/9= 0.0055$ to be considered statistically significant. In the case of the D12 dataset (power lines), there are six bins with $p$-values $< 0.0055$ for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ in the considered square of $3\times 3$ bins centered on $(-3,+3)$ in Fig.~\ref{fig:pvalues12}, corresponding to a statistically significant increase of anomalies. A significant increase of anomalies is already observed over final 3-day periods centered on Day $+1$, as compared with initial 3-day periods centered on Days $-3$ and $-2$, indicating an immediate effect of geomagnetic storms on power lines. In the case of D8 (transformers), however, the three bins corresponding to increases of anomalies with the smallest $p$-values are found in Fig. \ref{fig:pvalues8} for final 3-day periods centered on Days $+3$ to $+4$, as compared with initial 3-day periods centered on Days $-1$ to $0$. Therefore, there is a clear time delay of $\sim 2-3$ days between a variation of $IntDst$ or $Int(d(\textit{SYM-H})/dt)$ and the corresponding variation of the number of anomalies in the D8 dataset. In such a situation, it is more appropriate to consider for D8 the square of $3\times 3 =9$ bins centered on $(-1,+3)$ in Fig. \ref{fig:pvalues8}. Inside this domain, one bin has a $p$-value $=0.0045 < 0.0055$ for $IntDst$ in Fig. \ref{fig:pvalues8}, indicating a statistically significant {\it delayed} increase of anomalies for D8. Overall, the results displayed in Figs. 
\ref{fig:pvalues12}-\ref{fig:pvalues8} and in Figs.~\ref{fig:D1ps}--\ref{fig:D12ps} therefore confirm the preceding results obtained for 5-day periods, but they further allow us to determine the optimal time delays before a statistically significant increase of anomalies in different power grid equipment. Most strikingly, a statistically highly significant increase of anomalies is found for D11--D12 (power lines) for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ only $\sim 0-1$ day after Day 0, and as compared with all the preceding 3-day periods without storm activity (i.e., with $IntDst=0$ or $Int(d(\textit{SYM-H})/dt)=0$). Some less significant increases are also found for D4 (power lines, as D11--D12) for $IntDst$. Such results imply an immediate effect of geomagnetic storms on power lines, already on Days 0 to $+1$. This looks quite realistic, because any effect of GICs on power lines (due to harmonics-related current waveform distortion leading to a detrimental reaction of protective relays or other devices connected to these lines) is likely to occur almost immediately. Furthermore, Fig. \ref{fig:pvalues8} reveals the presence of a statistically significant {\it delayed} increase of anomalies for D8 (high voltage transformers) following geomagnetic storms when considering $IntDst$ (an increase is also present for $Int(d(\textit{SYM-H})/dt)$ but somewhat less significant), with a delay of $\sim 3$ days after Day 0. This strongly suggests the presence of some delayed effects of storm-time geomagnetic activity on transformers (note also that the lowest rates of anomalies are observed here on Days $-2$ to $0$, similarly corresponding to a delayed effect of the previous days of zero storm activity). Transformers may indeed be affected by GICs but still continue to operate for a while -- typically for a few days -- before actual problems ultimately show up and are registered in logs \citep[e.g.,][]{Wang2015}. 
\subsection{Ring and Auroral Currents Effects: $IntAp$ parameter} \label{sect:intAP} Since both ring current variations during storms and other (mainly auroral) current variations during strong substorms may produce significant GICs, we further performed similar SEAs for the $IntAp$ parameter, which (despite its own limitations, see section 2.2 and \cite{Kappenman2005}) is expected to roughly take into account the effects of both kinds of disturbances -- whereas $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ only correspond to storm periods. However, due to the relatively low threshold $ap\geq 15$ (equivalent to $Kp\geq 3$) of integration used to calculate daily $IntAp$ levels, this new data series contained many more events (notably, many isolated substorms, sometimes outside of storms) than the previous $IntDst$ (storm) data set. As a result, requiring as before a 5-day period prior to events with $IntAp=0$ led to only a weak $IntAp$ maximum on Day 0, with a preceding $IntAp$ peak on Days $-10$ to $-5$ of comparable magnitude. Therefore, we changed our selection procedure, to consider only events with a peak $IntAp>1000$ nT$\cdot$hr and such that no similar event was present in the preceding 5 days. The resulting SEAs displayed in Fig.~\ref{fig:INTAP} show that this new selection procedure produces a large peak $IntAp\sim 1400$ nT$\cdot$hr on Day 0 in the SEAs, with much lower levels on all 10 previous days, especially between Days $-6$ to $-2$. The daily number of anomalies is found to increase by a statistically very significant amount during the 5-day period following Day 0 as compared to the 5-day period preceding Day 0, for series D11 and D12 in Fig. \ref{fig:INTAP}, with corresponding $p$-values 0.03 and 0.007, respectively. There is a remarkable simultaneity between the peak of $IntAp$ and the peak of anomalies in the two SEAs with at most one day of delay. 
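The event-selection rule adopted for the $IntAp$ analysis (a peak above 1000~nT$\cdot$hr with no comparable event in the preceding 5 days) can be sketched as follows, on an illustrative toy series:

```python
def select_intap_events(int_ap, peak=1000.0, lookback=5):
    """Days whose IntAp exceeds `peak`, provided no comparable event
    occurred in the preceding `lookback` days."""
    return [d for d in range(lookback, len(int_ap))
            if int_ap[d] > peak
            and all(int_ap[d - k] <= peak for k in range(1, lookback + 1))]

# toy series: strong events on days 20, 23 and 40; day 23 is rejected
# because it follows the day-20 event within 5 days
int_ap = [0.0] * 60
int_ap[20], int_ap[23], int_ap[40] = 1400.0, 1200.0, 1500.0
print(select_intap_events(int_ap))   # → [20, 40]
```

The surviving days serve as the Day-0 epochs of the $IntAp$ superposed epoch analysis.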
Moreover, such peaks of daily anomalies on Days 0 or $+1$ are consistently larger than all other daily values in the full 21-day SEAs. Such results therefore demonstrate the likely presence of nearly immediate effects of both storm-related and substorm-related geomagnetic disturbances on GICs and power lines (D11--D12) in the Czech power network. This is certainly due to the major impact of strong substorms on GICs, both during and outside geomagnetic storms. \begin{figure} \centering \resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure5a.pdf}}} \resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure5b.pdf}}} \caption{Plots of epoch-superposed subsets D11 (left) and D12 (right) of variations of the daily number of anomalies as a function of time, considering the $IntAp$ parameter. Solid lines indicate mean superposed daily rates of anomalies (upper row) or geomagnetic activity $IntAp$ (in nT$\cdot$hr) in the reference time series (bottom row) during Days $-1$ to $+5$ from the epoch (Day 0), whereas dashed lines show the same quantities for the remaining days. Error bars show the one-standard-deviation half-widths.} \label{fig:INTAP} \end{figure} There are also detectable increases of daily anomalies between 5-day periods before/after Day 0 for D8 (transformers, with a delay of $\sim 2$ days) and for D4 (power lines, immediate), but they are not statistically significant, with $p$-values $\simeq 0.25$ (see SEAs for all D1 to D12 series provided in Figs.~\ref{fig:D1intAP}--\ref{fig:D12intAP} in the online supplement). Besides, there is a statistically significant increase of anomalies for D10 (high and very high voltage electrical substations) with a $p$-value of 0.006, with a first peak of anomalies at Day $+1$ but a much delayed higher peak on Days $+4$ and $+5$. 
While power lines react immediately to GICs, high and very high voltage electrical substations, which comprise busbars, capacitors, or transformers, may indeed be affected but still continue to operate without registered problems until the cumulative damage reaches a sufficient level. A time lag of 3--5 days does not seem wholly unrealistic in this respect \citep{Kappenman2007, Wang2015}. It is worth noting that our previous analysis based on $IntDst$ did not show a statistically significant impact of storms for D10 (although the smallest $p$-value reached 0.08 in Table \ref{tab:pvalues}), contrary to the present analysis based on $IntAp$. This suggests that prolonged 2-3 day periods of repeated non-storm-time substorms or solar wind sudden impulses (SIs), taken into account by $IntAp$ but not by $IntDst$, could have a noticeable effect on some electrical substations. \subsection{Auroral Current Effects: $IntAE$ parameter} Next, we performed similar SEAs for the $IntAE$ parameter that provides a measure of cumulated high-latitude auroral current variations. An increased hourly auroral electrojet index $AE>150-250$ nT is one of the dominant manifestations of substorms, and many substorm studies rely on $AE$ to estimate the intensity of substorms, although $AE$ is not a specific measure of substorms \citep{Kamide1996, Tsurutani2004}. We compared the period of 5 {\it disturbed days} (with daily $IntAE>150$ nT$\cdot$hr) immediately following Day 0 (the day of peak $IntAE$) with the 5-day period immediately preceding Day 0 -- a preceding period of {\it nearly quiet days} (with daily $IntAE<30$ nT$\cdot$hr) especially selected to have such nearly zero $IntAE$ levels. This way, we can check the impact of {\it disturbed days} of strong $AE$ activity (often corresponding to substorms, occurring both during and outside storms) on power grid anomalies, as compared with {\it quiet days}. 
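The day classification behind this $IntAE$ comparison (five disturbed days with daily $IntAE>150$~nT$\cdot$hr after Day 0 and five nearly quiet days with $IntAE<30$~nT$\cdot$hr before) can be sketched as an epoch-selection rule; the series below is a toy example, not real index data:

```python
def intae_epochs(int_ae, hi=150.0, lo=30.0, win=5):
    """Day-0 candidates: the `win` days after exceed `hi` (disturbed)
    while the `win` days before stay below `lo` (nearly quiet)."""
    return [d for d in range(win, len(int_ae) - win)
            if all(int_ae[d + k] > hi for k in range(1, win + 1))
            and all(int_ae[d - k] < lo for k in range(1, win + 1))]

# toy IntAE series: five disturbed days right after day 12, quiet elsewhere
int_ae = [0.0] * 30
for k in range(1, 6):
    int_ae[12 + k] = 300.0
print(intae_epochs(int_ae))   # → [12]
```

Anomaly counts over the two 5-day windows around each selected epoch are then compared with the same binomial test as in the previous subsections.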
As before, we also considered shorter 3-day periods to help determine the best time lags between increases of anomalies and Day 0. \begin{figure} \centering \resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure6.pdf}}} \caption{Epoch-superposed daily numbers of anomalies in the D11 series, considering the $IntAE$ parameter. Solid lines indicate superposed anomaly rates (upper panel) or $IntAE$ (in nT$\cdot$hr) in the reference time series (lower panel) during Days~$-1$ to $+5$ from the epoch (Day~0), whereas dashed lines show the same quantities for the remaining days. Error bars show one-standard-deviation half-widths. } \label{fig:IntAE} \end{figure} All the corresponding plots are given in Figs.~\ref{fig:D1intAE}--\ref{fig:D12intAE} in the online supplement. In general, these results mostly agree with the $IntAp$ results. However, they are somewhat less statistically significant than the results obtained with all the preceding metrics, except for the D11-D12 (power lines) series. For D11, we find a statistically significant 15\% increase in the total number of anomalies after/before Day 0, with a $p$-value of 0.034 (see Fig. \ref{fig:IntAE}), while for D12 (power lines) the increase of anomalies is only 2.6\%, with a barely significant $p=0.055$. An important point is that these results based on $IntAE$ confirm the impact on power lines of auroral electrojet disturbances, often related to substorms. Nevertheless, these results also suggest that the $IntDst$, $IntAp$, or $Int(d(\textit{SYM-H})/dt)$ metrics may be slightly more appropriate than $IntAE$ for categorizing disturbed days leading to GIC effects at middle latitudes in the Czech power grid. 
This could stem from the fact that the stations measuring $AE$ lie at higher latitudes than those used for the mid-latitude $ap$ index: $IntAE$ may either take into account weak substorms that actually do not strongly affect middle latitudes, or it may under-estimate mid-latitude disturbances produced by large substorms \citep{Lockwood2019, Thomsen2004}. Alternatively, there could be some significant impacts of ring current variations on GICs at mid-latitudes, not taken into account in $IntAE$. \section{Discussion} In the SEAs, $\approx 5-10$\% increases of the number of anomalies were often observed during the 5 most disturbed days as compared with the preceding 5 consecutive quiet days. However, it is important to note that such increases of anomalies were present during only the 5 most disturbed days among the 21-day total duration of each SEA. It is also unclear whether there was any statistically significant increase of anomalies caused by the much weaker geomagnetic activity present during other days that did not fulfill the criteria for our SEA analysis. It is thus difficult to obtain a credible estimate of the total fraction of anomalies that could be directly related to geomagnetic effects. In our previous study \citep{Vybostokova2018}, the corresponding total number of anomalies attributable to variations of geomagnetic activity was also estimated as 1--4\%. Such values are consistent with results from a previous study of the impact of solar activity on the US electric power transmission network in 1992--2010, which showed that $\sim$ 4\% of the corresponding anomalies were likely attributable to strong geomagnetic activity and GICs \citep{schrijver2013disturbances}. We also considered different parameter series, namely cumulative $IntDst$, $IntAp$, and $IntAE$ parameters integrated over the preceding 5 or 10 days, to evaluate the effects of a longer exposure to GICs on power-grid devices. The corresponding superposed epoch analysis did not yield statistically significant results. 
Without a proper event selection procedure and without an integration limit, the SEAs were dominated by weak events, during which the effects were probably weak and did not emerge from the average rates of anomalies due to causes other than geomagnetic activity. SEAs were further performed separately for weak, moderate, and strong events, but this did not significantly improve the results. The most promising results in terms of magnitude of increase of anomalies during stronger activity were for D8, D10, and D12 for $IntDst$ (with lags of 1--3 days), and D8 and D11 for $IntAE$. Based on our analysis, it turns out that geomagnetic disturbances affected mostly the datasets registering anomalies on power lines. It is interesting to note that most of the power lines in D7, D11, and D12 are power lines with distances between grounding points of the order of tens of kilometers. We also found significant delayed effects in the D8 dataset of high-voltage transformers. Although significant effects were observed in D4 during strong storms (see Fig.~S40), the distances between grounding points are of the order of hundreds of meters in this case, that is, much shorter than for the other power-line datasets. The topology of the network in D4 is also far more complex than in the other power-line datasets. It is unlikely that GICs induced in the D4 network could be responsible for the observed increase of anomaly rate after Day 0 in the corresponding SEA. Nevertheless, some detrimental currents could have entered the D4 network from nearby connected networks of other power companies and caused operational anomalies during strong events. \section{Conclusions} As noted by \cite{schrijver2013disturbances}, the selection of an appropriate geomagnetic parameter is very important when searching for correlations between anomalies recorded in human infrastructures and variations of geomagnetic activity. 
Here, we have presented results obtained by considering four different and complementary parameters of cumulative geomagnetic activity, namely the different storm-time $Int(d(\textit{SYM-H})/dt)$ and $IntDst$ low-latitude metrics tracking mainly ring current variations, the high-latitude $IntAE$ metric mainly tracking auroral current variations, and the mid-latitude $IntAp$ metric tracking both ring and auroral current variations -- all of which were integrated over geomagnetically disturbed periods. This allowed us to compare the cumulated number of anomalies observed in the Czech power grid during the corresponding disturbed days of high geomagnetic activity with the number of anomalies recorded during quiet days. At the considered middle geomagnetic latitudes, our statistical analysis of $\sim10$ years of data has shown that space weather-related events affected mostly long power lines (D11, D12), probably due to a distortion of the electrical current waveform that eventually triggered a detrimental reaction of protective relays or disrupted other connected devices. However, significant and slightly more delayed (by $\sim1-2$ days) effects were also observed in high-voltage transformers. Both substorm-related disturbances and magnetic storms were found to have statistically significant impacts on the power grid network, since the four considered measures of disturbed days ($IntDst$, $Int(d(\textit{SYM-H})/dt)$, $IntAp$, and $IntAE$) led to more or less similar results -- although $IntAE$ was slightly less efficient. In addition, we found that considering moderate thresholds (neither too large nor too small) on time-integrated geomagnetic activity quantified by $IntDst$, $Int(d(\textit{SYM-H})/dt)$, or $IntAp$, produced the most statistically significant increases in anomaly rates, suggesting a non-negligible impact of moderate disturbances. 
These results are therefore consistent with a major impact of substorms, either inside or outside storms, on GICs at middle latitudes, together with a possible additional impact of ring current variations during storms. It is worth noting that our study showed that in the 5-day period following the commencement of geomagnetic activity there is an approximately 5--10\% increase in the recorded power line and transformer anomalies in the Czech power grid, probably related to geomagnetic activity and GICs. Such values are consistent with previous results concerning the US power grid \citep{schrijver2013disturbances}. \cite{schrijver2014assessing} further found that for the US network, the 5\% stormiest days were apparently the most dangerous, with a 20\% increase of grid-related anomalies as compared to quiet periods. We similarly found that the days with a minimum $Dst<-50$ nT (roughly representing the $\approx 8$\% stormiest days, see \citealt{Gonzalez94}) probably had the strongest impact in the Czech power grid, leading to immediate or slightly delayed $\sim 5-20$\% increases of anomalies as compared to quiet periods. \begin{acknowledgements} M.\v{S} was supported by the institute research project RVO:67985815. We are grateful to power grid data providers for giving us an opportunity to exploit their logs of anomalies, namely to P.~Spurn\'y (\v{C}EPS), J.~Bro\v{z} and J.~Bu\v{r}i\v{c} (\v{C}EZ Distribuce), R.~Hanu\v{s} (PREdistribuce), and D.~Mezera and R.~B\'il\'y (E.ON Distribuce). The maintenance logs are considered strictly private by the power companies and are provided under non-disclosure agreements. We gratefully acknowledge the World Data Center in Kyoto and the Space Physics Data Facility (SPDF) at NASA Goddard Space Flight Center for the OMNI data at \url{http://omniweb.gsfc.nasa.gov} of $Dst$ and $\textit{SYM-H}$ geomagnetic indices used in this paper. 
{\bf Author contributions:} DM designed the study and provided processed geomagnetic data, K\v{Z} and TV wrote the processing code as part of their student projects under the supervision of M\v{S}. M\v{S} performed the analysis. DM and M\v{S} interpreted the data and wrote the manuscript. All authors contributed to the final version of the paper. \end{acknowledgements}
\section{Introduction} A main goal of this paper is to develop a methodology for pricing European option contracts on electricity and oil futures prices. The approach is based on Fourier expansions and
implements models that capture specific stylized features of the underlying assets, such as stochastic volatility and random jumps. In particular, we consider a switching time-changed Levy process as an alternative to the BSM model and implement a pricing algorithm based on the expansion of its characteristic function as considered in \cite{Fourier}.\\ We compare the prices we obtain with those obtained via a computationally costly, but accurate, Monte Carlo method and study sensitivities with respect to relevant parameters, e.g. maturity, strike price and initial price. Moreover, we contrast prices under the regime switching model to those given by the Black-Scholes equation and show that the prices agree when the switching model is reduced to the Black-Scholes model.\\ We rely on the Esscher transformation, see \cite{EsscherTransform}, to obtain an equivalent martingale measure (EMM) and to work in a risk-neutral setting. To calibrate parameters, we use market option prices and minimize the mean squared error. To estimate parameters, we implement the method of moments, minimum distance estimation and maximum likelihood estimation. For simplicity, an Expectation-Maximization (EM) algorithm is not considered.\\ Although most of these elements have been previously implemented, to the best of our knowledge, the combination consisting of the selected model class, the pricing methodology, the risk-neutral framework and the estimation/calibration approach has not been studied before in energy markets or elsewhere. It is worth noting that non-switching models with Levy noises have been introduced in \cite{benth1}. On the other hand, results for switching Levy models and their characteristic functions can be found in \cite{Switching}.\\ Many financial time series, including commodity futures, seem to exhibit dramatic breaks in their behaviour, for example in the event of political changes or financial crises. 
Different intervals sharing similar characteristics can be grouped together under a single regime. Regime switching Levy models can capture such behaviour: under such a model, the process switches randomly between different Levy processes according to an unobservable Markov chain. The regime switching time-changed Levy process is a pure jump process which captures two key features of the market: the existence of regimes and price jumps.\\ The organization of the paper is the following: Section \ref{modelsAndCharacteristicFunctions} introduces the regime switching time-changed Levy model; we then derive its characteristic function under Gamma and Inverse Gaussian subordinators. In Section \ref{pricingMethods} we discuss Monte Carlo and Fourier-Cosine pricing methods. For Monte Carlo, we develop an algorithm to simulate trajectories of the process, as well as to price European call options by simulating many regime switching processes simultaneously. In Section \ref{estimationAndOutput} we use calibration and various estimation methods to obtain values of the model parameters from option quotes and historical prices of oil and electricity commodities. \section{Model, contract and characteristic function} \label{modelsAndCharacteristicFunctions} Let $(\Omega ,\mathcal{A}, (\mathcal{F}_{t})_{t \geq 0}, P)$ be a filtered probability space verifying the usual conditions. For a stochastic process $(X_t)_{t \geq 0}$ defined on the filtered space above, the functions $\varphi_{X_{t}}(u)$ and $\Psi_X(u)=\frac{1}{t} \log \varphi_{X_{t}}(u)$ denote its characteristic function and characteristic exponent, respectively. When the process has stationary and independent increments the latter does not depend on $t$.
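As a quick numerical illustration of these definitions (not part of the model itself), the characteristic exponent of a drifted Brownian motion $X_t=\mu t+\sigma B_t$, whose characteristic function $\varphi_{X_t}(u)=\exp\big(t(\mathrm{i}\mu u-\tfrac{1}{2}\sigma^2u^2)\big)$ is known in closed form, can be checked to be independent of $t$; all parameter values below are arbitrary:

```python
import cmath

def phi_bm(u, t, mu=0.05, sigma=0.2):
    """Characteristic function of X_t = mu*t + sigma*B_t."""
    return cmath.exp(t * (1j * mu * u - 0.5 * sigma**2 * u**2))

def char_exponent(u, t, mu=0.05, sigma=0.2):
    """Psi_X(u) = (1/t) * log(phi_{X_t}(u))."""
    return cmath.log(phi_bm(u, t, mu, sigma)) / t

# Stationary independent increments: the exponent does not depend on t
assert abs(char_exponent(1.3, t=0.5) - char_exponent(1.3, t=2.0)) < 1e-12
```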
$A^T$ denotes the transpose of a matrix $A$ unless specified differently.\\ Let the process $\{S_t\}_{t\geq 0}$ represent the price of the underlying asset at any time $t>0$, so that $X_t=\log S_t$ represents the logarithm of the prices.\\ We define a continuous-time Markov chain $\{s_t\}_{0\leq t\leq T}$, with state space $E=\{1,2\}$, driving the changes between regimes. The switching times are described by a sequence of independent random variables $(\tau_j)_{j \in \mathbb{N}}$ in such a way that: \begin{equation*} \lim_{t\rightarrow\tau^-_k}s_t\neq s_{\tau_k}\:\:\:\: \text{for all}\:\:\:\:k\in\mathbb{N}. \end{equation*} The infinitesimal generator matrix of the continuous-time Markov chain is given by \begin{equation} Q=\begin{bmatrix}\label{Q} -\lambda_{12} & \lambda_{12}\\ \lambda_{21} & -\lambda_{21} \end{bmatrix}. \end{equation} Hence: \begin{eqnarray*} P \{s_{t+h}=2\mid s_t=1 \} &=& \lambda_{12}h+o(h)\\ P \{s_{t+h}=1\mid s_t=2 \} &=& \lambda_{21}h+o(h). \end{eqnarray*} Next, consider a collection of independent subordinators $L^j=\{L^j_t\}_{t \geq 0}$ for $j\in E$, where each subordinator $L^j$ is also independent of a standard Brownian motion $B=\{B_t\}_{t\geq 0}$. Each subordinator is characterized by two parameters $\alpha_j, \beta_j >0$ which change between the states of the (non-observable) Markov chain. Each process $L^j$ is a pure jump process, while $B$ has almost surely continuous paths.\\ We define the collection of time-changed Levy processes $Y^j=\{Y_t^j\}_{t \geq 0}$, where \begin{equation*}\label{time_changed_brownian_motion} Y_t^j=\mu_j L_t^j+\sigma_j B_{L_t^j}, \end{equation*} with $\mu_j\in\mathbb{R}$ and $\sigma_j>0$.\\ There exists a natural economic interpretation of a time-changed process.
Energy markets alternate at random times between calm and frenzied periods.\\ We now define the regime switching time-changed Levy process $Z=\{Z_t\}_{t \geq 0}$ as: \begin{equation} \label{switch} Z_t\equiv Y_t^{s_t}\:\:\:\:\text{where}\:\:\:\: Y^{s_t}_t=\mu_{s_t}L^{s_t}_t+\sigma_{s_t}B_{L^{s_t}_t} \end{equation} or, in differential terms: \begin{equation*} dZ_t\equiv dY_t^{s_t}\:\:\:\:\text{where}\:\:\:\: dY^{s_t}_t=\mu_{s_t}dL^{s_t}_t+\sigma_{s_t}dB_{L^{s_t}_t}. \end{equation*} The regime switching time-changed Levy process $Z$ is assumed to be the log-price process of the underlying asset, and the stochastic process of the asset price itself $\{S_t\}_{t \geq 0}$ is defined as: \begin{equation*}\label{stockpriceprocess} S_t=S_0\exp(Z_t). \end{equation*} For simplicity we assume that the process always starts in state 1 with probability 1.\\ Following \cite{Switching}, for the process $Z$ defined in equation (\ref{switch}), along with a Markov chain with the infinitesimal generator matrix $Q$ defined in equation (\ref{Q}), the characteristic function is given by: \begin{equation}\label{eq:charfunc} \varphi_{Z_t}(u)=\exp(i u y_0)~ \mathbb{E}_{\mathcal{Q}}\{[1, 1] \exp(t \Phi(u))[1,0]^T \} \end{equation} where $y_0= \log S_0$ and $\Phi(u)$ is the matrix: \begin{equation*} \Phi(u)=\left( \begin{array}{cc} -\lambda_{12}+ \Psi_{L^{1}}(\mu_1 u +\frac{1}{2}i \sigma^2_1 u^2) & \lambda_{12}\\ \lambda_{21} & -\lambda_{21}+ \Psi_{L^{2}}(\mu_2 u +\frac{1}{2}i \sigma^2_2 u^2) \\ \end{array} \right). \end{equation*} Notice that, conditionally on $s_t=j$, the characteristic function of $Z_t$ is: \begin{equation*} \varphi_{Z_t}(u)=\varphi_{L^j_t}(\mu_j u +\frac{1}{2}i \sigma^2_j u^2).
\end{equation*} To compute the matrix exponential $e^{\Phi(u)}$ we use a \textit{scaling and squaring algorithm}, see \cite{expm2}, based on the following approximation: \begin{equation*} e^{\Phi(u)}=(e^{2^{-s}\Phi(u)})^{2^s}\approx r_m(2^{-s}\Phi(u))^{2^s}, \end{equation*} where $r_m(x)$ is the $[m/m]$ Pad\'e approximant of $e^x$ and the nonnegative integers $m$ and $s$ are chosen in such a way as to achieve minimum error at minimal cost. A table of errors as a function of $s$ and $m$ is given in \cite{expm1}. The $[k/m]$ Pad\'e approximant for the exponential function is: \begin{equation*} r_{km}(x)=p_{km}(x)/q_{km}(x) \end{equation*} where: \begin{equation*} p_{km}(x)=\sum_{j=0}^k \frac{(k+m-j)!k!}{(k+m)!(k-j)!}\frac{x^j}{j!},\:\:\: q_{km}(x)=\sum_{j=0}^m\frac{(k+m-j)!m!}{(k+m)!(m-j)!}\frac{(-x)^j}{j!}. \end{equation*} The discounted price process is denoted $\Tilde{S}=(\Tilde{S}_t)_{t \geq 0}$, where $\Tilde{S}_t:=\exp(-rt)S_t$. Under an EMM $\mathcal{Q}$, the discounted price process $\Tilde{S}$ is a martingale if and only if the following equation is satisfied: \begin{equation}\label{martingale_condition} \Psi_{Z}(-\text{i})=r. \end{equation} See \cite{Martingale} for details. \begin{example}\textit{Case of Inverse Gaussian and Gamma subordinators.}\\ Inverse Gaussian and Gamma subordinators have been studied in \cite{ig} and \cite{gamma}.\\ When the subordinator $L^j$ is an Inverse Gaussian process with shape parameter $\alpha_j$ and rate parameter $\beta_j$, we have: \begin{equation}\label{eq:tc_ig} \Psi_{Z^j}(u)=-\alpha_j \Big(\sqrt{2 \big(-\text{i}\mu_j u +\tfrac{1}{2} \sigma^2_j u^2\big)+\beta_j^2}-\beta_j\Big),\quad\alpha_j>0,\:\beta_j>0,\: j=1,2. \end{equation} In Figure \ref{fig:subim2} the real and imaginary parts of the characteristic function of $Z=\{Z_t\}_{t\geq 0}$ with an Inverse Gaussian subordinator are shown.
Parameters are obtained from the estimation procedures for future oil prices explained in Section \ref{estimationAndOutput}. A similar result is obtained under a Gamma subordinator. \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{Real_Characteristic_InverseGaussian_Switching.png}}} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{Imaginary_Characteristic_InverseGaussian_Switching.png}}} \caption{The real part of the function $\varphi_{Z_t}(u)$ under an IG subordinator (top) and its imaginary part (bottom). Parameters are obtained from the estimation procedures for future oil prices explained in Section \ref{estimationAndOutput}. }\label{fig:subim2} \end{center} \end{figure} When the subordinator $L^j$ is a Gamma process with shape parameter $\alpha_j$ and rate parameter $\beta_j$, we have: \begin{equation}\label{eq:tc_gamma} \Psi_{Z^j}(u)=-\alpha_j \log \Big(1+\frac{-\text{i}\mu_j u +\frac{1}{2} \sigma^2_j u^2}{\beta_j}\Big),\quad\alpha_j>0,\:\beta_j>0,\: j=1,2. \end{equation} For the discounted prices to be a martingale, equation (\ref{martingale_condition}) requires: \begin{align}\nonumber \psi^j_{Gamma}(-\text{i})&=t^{-1}\log\Bigg[\Bigg(1+\frac{-\text{i}\mu_j u+\frac{(\sigma_j)^2u^2}{2}}{\beta_j}\Bigg)^{-\alpha_jt}\Bigg]\Bigg|_{u=-\text{i}}=r \nonumber \\ \nonumber \end{align} leading to: \begin{equation*}\label{} \mu_j= \beta_j\Big(1-\exp\Big(-\frac{r}{\alpha_j}\Big)\Big)-\frac{(\sigma_j)^2}{2}. \end{equation*} When the process $Z^j$ is a time-changed process subordinated by an Inverse Gaussian process with parameters $\alpha_j, \beta_j$, the characteristic function is given by equation (\ref{eq:tc_ig}) and therefore for each state $j\in E$ we solve for $\mu_j$: \begin{align} \psi^j_{IG}(-\text{i})&=t^{-1}\log\big[\exp(-\alpha_j t\Big(\sqrt{2(-\text{i}\mu_j u+\frac{(\sigma_j)^2u^2}{2})+(\beta_j)^2}-\beta_j\Big))\big]\big|_{u=-\text{i}}=r. \nonumber \end{align} This leads to: \begin{eqnarray*}\label{mu^j_IG}
\mu_j&=&\frac{1}{2}\Big[(\beta_j)^2-\Big(\beta_j-\frac{r}{\alpha_j}\Big)^2\Big]-\frac{\sigma^2_j}{2}. \end{eqnarray*} Holding all the other parameters constant, this drift verifies equation (\ref{martingale_condition}). The value of $\mu_j$ is such that the $j$-th discounted price process, when the subordinator is an Inverse Gaussian process, is a martingale. \end{example} \section{Pricing European options under switching Levy models } \label{pricingMethods} We now turn to pricing. Consider a European call contract with maturity $T$ and strike price $K$. Its payoff, written in terms of the log-returns, is: \begin{equation*}\label{} h(Z_T)=(S_0 e^{Z_T}-K)_+=K (\exp(x_0+Z_T)-1)_+ \end{equation*} where $x_0:=\log(S_{0}/K)$.\\ The price of the contract at a time $t<T$ and $x= \log (\frac{S_t}{K})$ is denoted $v(x,t)$ and verifies: \begin{equation}\label{eq:Fourier_COS_equation} v(x,t)=e^{-r\Delta t}~\mathbb{E}_{\theta}[v(y,T)]=e^{-r\Delta t}\int_\mathbb{R} v(y,T)f_{Z_T}(y|x)dy. \end{equation} Notice that $v(y,T)$ is the payoff at maturity time $T$ and $y:=\log(S_T/K)$.\\ The value $\Delta t=T-t$ is the time to maturity and $r$ is the risk-neutral interest rate. The function $f_{Z_T}(y|x)$ is the probability density function (p.d.f.) of $Z_T$ given the value $x=\log(S_{t}/K)$.\\ $\mathbb{E}_{\theta}$ is the expectation operator with respect to an EMM $\mathcal{Q}^{\theta}, \theta \in \mathbb{R}$, which is determined by an Esscher transform of the historic measure $P$. See \cite{EsscherTransform} for a rationale in terms of a utility-maximization criterion. \\ For a stochastic process $(X_t)_{t \geq 0}$ the latter is defined as: \begin{equation}\label{eq:esscher} \frac{d \mathcal{Q}^{\theta}_t}{d P_t}=\exp(\theta X_t-t ~l_X(\theta)),\; 0 \leq t \leq T,\; \theta \in \mathbb{R} \end{equation} where $P_t$ and $\mathcal{Q}^{\theta}_t$ are the respective restrictions of $P$ and $\mathcal{Q}^{\theta}$ to the $\sigma$-algebra $\mathcal{F}_t$.
We define by $\varphi^{\theta}_{X_t}$, $\Psi^{\theta}_{X_t}$ and $ l^{\theta}_X(u)$ respectively the characteristic function, characteristic exponent and moment generating function of a process $(X_t)_{t \geq 0}$ under the probability $\mathcal{Q}^{\theta}$ obtained by an Esscher transformation as given in equation (\ref{eq:esscher}). \\ We follow a pricing approach via Fourier-Cosine series expansion of the p.d.f. $f_{Z_T}$. The method has been proposed in \cite{Fourier}. \\ The solution to (\ref{eq:Fourier_COS_equation}) is obtained by expanding the p.d.f. $f_{Z_T}(\cdot|x)$ in terms of a Fourier basis under the Gamma and Inverse Gaussian subordinators introduced in the previous section.\\ The domain of integration is truncated to a finite interval $[a,b]$ for the purposes of numerical integration; for a discussion about selecting the truncation interval and its associated error, see \cite{convergence_COS}. \\ The Fourier-cosine expansion of $f_{Z_T}$ is given by: \begin{equation}\label{function_ab} f_{Z_T}(y|x)=\sum_{k=0}^{\infty}A_k(x)\cos \Big(k\pi\frac{y-a}{b-a}\Big), \end{equation} where the first term of the summation is weighted by one-half.\\ The coefficients of the Fourier expansion, denoted by $A_k(x)$, are approximated by: \begin{equation*} A_k(x)=\frac{2}{b-a}\text{Re}\Big\{\varphi_{Z_T} \Big(\frac{k\pi}{b-a}\,\Big|\,x \Big) \exp\Big(-\text{i} k\pi\frac{a}{b-a}\Big)\Big\}. \end{equation*} Hence, substituting (\ref{function_ab}) into equation (\ref{eq:Fourier_COS_equation}) we have: \begin{equation*} v(x,t)=\frac{1}{2}(b-a)e^{-r\Delta t}\sum_{k=0}^\infty A_k(x)V_k, \end{equation*} where $V_k$ are the Fourier coefficients of $v(y,T)$, given by: \begin{equation*}\label{eq:V_k} V_k:=\frac{2}{b-a}\int_a^b v(y,T) \cos\Big(k\pi\frac{y-a}{b-a}\Big)dy. \end{equation*} In particular, for a European call option we have: \begin{equation*} V^{Call}_k=\frac{2 K}{b-a}\int_0^b (e^y-1)\cos\Big(k\pi\frac{y-a}{b-a}\Big)dy.
\end{equation*} For a European put option a similar expression, denoted $V^{Put}_k$, is found. Finally, truncating the infinite series to $N$ terms, we obtain: \begin{equation*}\label{Characteristic} v(x,t)\approx e^{-r\Delta t} \sum_{0 \leq k < N} \text{Re} \Big\{\varphi_{Z_T} \Big(\frac{k\pi}{b-a}\,\Big|\,x\Big) e^{-\text{i} k\pi\frac{a}{b-a}} \Big\}V_k. \end{equation*} The characteristic function of the log-return at maturity $Z_T$ is given by equation (\ref{eq:charfunc}) under Inverse Gaussian and Gamma subordinators. Since the Fourier-cosine series of an entire function converges exponentially, $N$ need not be large to obtain good approximations. For European call options, it is found that the price is inaccurate and extremely sensitive to the value of $b$. On the other hand, it is also found that for large values of $b$, $V^{Call}_k$ diverges, while $V^{Put}_k$ converges quickly and varies little with changes in the left end $a$ of the truncation interval. We therefore rely on the put-call parity, which allows for the computation of the European call option using the put option. We summarize the calculations as follows:\\ \textbf{Algorithm} \begin{enumerate} \item Initialization: choose appropriate boundary points $a,b$, the number of terms $N$ in the series expansion and the contract specifications (i.e. interest rate $r$, initial stock price $S_0$, strike price $K$, with $x:=\log(S_0/K)$, and time to maturity $\Delta t$). \item Initialize an $N\times 1$ array of payoffs $v^{Put}$. \item For $k=0$ to $N-1$: define the $k$-th element of $v^{Put}$ to be:\\ $v^{Put}(k)=e^{-r\Delta t}\text{Re}\Big\{\varphi_{Z_{\Delta t}}(k\pi/(b-a); x) \exp(-\text{i} k\pi a/(b-a))\Big\}V^{Put}_k$. \item $v^{Put}_{final}=\frac{1}{2}v^{Put}(0)+\sum_{k=1}^{N-1}v^{Put}(k)$\quad\quad (summation) \item $v^{Call}_{final}=v^{Put}_{final}+S_0-Ke^{-r\Delta t}$\quad\quad (put-call parity) \end{enumerate} The algorithm is implemented on a desktop computer using MATLAB.
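As an illustrative cross-check of the algorithm (not the full switching model), the sketch below prices a European put via the COS expansion under plain Black-Scholes log-return dynamics, where $\varphi_{Z_{\Delta t}}(u)=\exp\big(\mathrm{i}u(r-\tfrac{1}{2}\sigma^2)\Delta t-\tfrac{1}{2}\sigma^2u^2\Delta t\big)$ is known in closed form, and recovers the call by put-call parity; the coefficients $V_k^{Put}$ are computed analytically and the truncation interval $[a,b]$ is chosen from the first two cumulants:

```python
import math, cmath

def phi_bs(u, r, sigma, dt):
    """Characteristic function of the log-return log(S_T/S_0) under Black-Scholes."""
    return cmath.exp(1j * u * (r - 0.5 * sigma**2) * dt - 0.5 * sigma**2 * u**2 * dt)

def cos_put(S0, K, r, sigma, dt, N=256, L=10.0):
    """COS price of a European put; V_k^{Put} computed analytically on [a, b]."""
    c1 = (r - 0.5 * sigma**2) * dt   # first cumulant of the log-return
    c2 = sigma**2 * dt               # second cumulant
    a, b = c1 - L * math.sqrt(c2), c1 + L * math.sqrt(c2)
    x = math.log(S0 / K)
    total = 0.0
    for k in range(N):
        w = k * math.pi / (b - a)
        # chi = int_a^0 e^y cos(w(y-a)) dy,  psi = int_a^0 cos(w(y-a)) dy
        chi = (math.cos(w * (0 - a)) - math.exp(a)
               + w * math.sin(w * (0 - a))) / (1.0 + w**2)
        psi = -a if k == 0 else math.sin(w * (0 - a)) / w
        Vk = 2.0 / (b - a) * K * (psi - chi)
        term = (phi_bs(w, r, sigma, dt) * cmath.exp(1j * w * (x - a))).real * Vk
        total += 0.5 * term if k == 0 else term   # first term weighted by one-half
    return math.exp(-r * dt) * total

def cos_call(S0, K, r, sigma, dt):
    """Call price via put-call parity, as in the algorithm above."""
    return cos_put(S0, K, r, sigma, dt) + S0 - K * math.exp(-r * dt)
```

With $S_0=K=100$, $r=0.05$, $\sigma=0.2$ and $\Delta t=1$, the result agrees with the Black-Scholes closed-form call price to well below a basis point.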
We compare the European call payoff and running time under Monte Carlo simulation and Fourier-Cosine pricing in Table \ref{monte carlo COS comparison} for different maturities and strike prices. \begin{table}[h!] \centering \caption{Comparison of European call option payoffs using Monte Carlo simulation and Fourier-Cosine pricing, as well as their computational times.} \begin{tabular}{|c|c|c|c|c|} \hline $(T,K)$ & Monte Carlo & Running time (sec.) & Fourier-Cosine & Running time (sec.) \\ [0.5ex] \hline $(1,1)$ & 18.9554 & 9.31 &19.0401&0.0234 \\ \hline $(2,1)$ & 19.9612 & 17.54 & 20.3456&0.1433\\ \hline $(1,2)$ & 17.9942 & 10.01 &18.2164&0.1339\\ \hline $(2,2)$& 19.0166& 11.40 & 18.5523&0.193 \\ \hline \end{tabular} \label{monte carlo COS comparison} \end{table} In the parametric set considered (the next section discusses how to estimate the parameters), the Fourier-cosine method is on average 100 times faster than a standard Monte Carlo approach, while producing a similar level of precision.\\ On the other hand, it is found that the difference between pricing European call options using Monte Carlo and Fourier-Cosine pricing remains constant for different strike prices. The error, however, grows linearly for increasing maturity times.\\ We now describe the simulation procedure employed to obtain the Monte Carlo prices.\\ It is well known that the Markov chain $\{ s_t \}_{t \geq 0}$ spends independent and exponentially distributed random times between regime transitions. The $k$-th switching time is determined by: \begin{equation*} \tau_k=\sum_{j=1}^k \Delta \tau_j \end{equation*} where $\Delta\tau_j$ are the times between regime changes. The parameters of the exponential random variables alternate between $\lambda_{12}$ and $\lambda_{21}$. \\ The stochastic differential equation is discretized using the Euler-Maruyama method.
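The alternating-exponential construction of the switching times can be sketched as follows (a minimal illustration; the rates and the horizon are arbitrary):

```python
import random

def switching_times(lam12, lam21, T, start_state=1, rng=None):
    """Simulate the regime-switch times of the two-state chain on [0, T].

    Holding times in state 1 are Exp(lam12), in state 2 Exp(lam21).
    Returns the list of switch times tau_1 < tau_2 < ... <= T.
    """
    rng = rng or random.Random(0)
    t, state, taus = 0.0, start_state, []
    while True:
        rate = lam12 if state == 1 else lam21
        t += rng.expovariate(rate)
        if t > T:
            return taus
        taus.append(t)
        state = 3 - state  # toggle between states 1 and 2

taus = switching_times(5.0, 2.0, T=1.0)
assert all(a < b for a, b in zip(taus, taus[1:]))
```

The empirical mean holding times converge to $1/\lambda_{12}$ and $1/\lambda_{21}$ over long horizons, which is an easy sanity check on the simulator.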
Hence, if at time $t$ the process $Z$ is under regime $j\in E$, the increment of the process $Z$ during the time interval $[t, t+\Delta t)$ is given by: \begin{equation*} \Delta Z^j_t=\mu_{j}\Delta L^{j}_t+\sigma_j\sqrt{\Delta L^j_t}\,N(0,1), \end{equation*} where $N(0,1)$ denotes a standard normal random variable. Notice that, for small $\Delta t$, the process remains under regime $j$ with a probability close to one.\\ \begin{figure}[H] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{Levy_trajectory2.png}}}\hspace{5pt} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{Switching2.png}}}\hspace{5pt} \caption{Trajectories of a switching time-changed model. Parameters: $T=1$, $\mu_1=0.01$, $\mu_2=-0.1$, $\sigma_1=1$, $\sigma_2=5$, $\alpha_1=\alpha_2=0.1$, $\beta_1=0.1$, $\beta_2=10$, $\lambda_{12}=5$, $\lambda_{21}=2$.}\label{traj 2} \end{center} \end{figure} Figure \ref{traj 2}(a) shows a single realization of a switching time-changed Levy process (top) under an IG subordinator, as well as the underlying Markov chain (bottom). Figure \ref{traj 2}(b) shows trajectories under a Gamma subordinator.\\ We devise an algorithm which computes the payoff of a European call option by simulating $m$ independent realizations of the process $Z$ simultaneously and then estimating the price according to: \begin{align*} v(x,T) & \simeq \exp(-rT)\frac{1}{m}\sum_{j=1}^m (S_0 \exp(Z_{j,T})-K)_+ \end{align*} where $Z_{j,T}$ is the $j$-th simulated log-price at maturity.\\ \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{ConfidenceIntervalGamma.png}}}\hspace{5pt} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{ConfidenceInterval_IG.png}}}\hspace{5pt} \caption{Monte Carlo price and its confidence interval vs the number of simulations under Gamma (top) and IG (bottom) subordinators.
Parameters are the same as in the previous figure, except $\beta_1=0.1,\:\: \beta_2=0.01$.}\label{fig:confidence IG} \end{center} \end{figure} We can also assess the price using confidence intervals, which provide a range of values that is likely to contain the population mean. The endpoints of the confidence interval are given by: \begin{equation*} \bar{h}\pm z_{0.95}\frac{s}{\sqrt{m}}, \end{equation*} where $\bar{h}$ is the sample mean of the simulated payoffs, $s$ is the sample standard deviation, $m$ is the sample size and $z_{0.95}$ is the $95\%$ percentile of the standard normal distribution.\\ The width of the confidence interval decreases as the number of simulations grows; however, in the case of the Inverse Gaussian subordinator, the interval is larger at each simulation because Inverse Gaussian random variables have a larger variance than Gamma random variables when $\beta<1$.\\ At $m=10^4$, the confidence interval when the subordinator is Inverse Gaussian is $ [18.7353, 19.1168]$. When the subordinator is a Gamma process, the confidence interval is $[ 17.8768, 17.9136]$.\\ \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{SurfLevyMtrixfigure.png}}}\hspace{5pt} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{SurfLevyMtrixfigure_IG.png}}}\hspace{5pt} \caption{ European call payoff as a function of $T$ and $K$ under two different subordinators with risk-neutral drift. The other parameters are identical for each figure: $\sigma_1=0.03, \sigma_2=0.7, \alpha_1=\alpha_2=0.1, \beta_1=1, \beta_2=1.2, \lambda_{12}=2.5, \lambda_{21}=1$. For an IG subordinator $\mu_1=0.3204, \mu_2=0.6450$. For a Gamma subordinator $\mu_1=-0.2316, \mu_2=0.0541$.}\label{fig:image2} \end{center} \end{figure} Figure \ref{fig:image2} shows the behaviour of the price of a European call option for a regime switching time-changed Levy process model under Inverse Gaussian (top) and Gamma (bottom) subordinators.
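The confidence-interval estimate just described can be sketched as follows; for a self-contained runnable example, the payoff sample below is generated under plain Black-Scholes dynamics rather than the full switching model, and the two-sided 95\% interval uses the 97.5\% normal quantile $z\approx 1.96$:

```python
import math, random

def mc_price_ci(S0, K, r, sigma, T, m=100_000, z=1.96, seed=0):
    """Monte Carlo price of a European call with a normal-approximation CI."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    payoffs = []
    for _ in range(m):
        # Black-Scholes log-return at maturity (stand-in for Z_{j,T})
        z_T = (r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * rng.gauss(0.0, 1.0)
        payoffs.append(disc * max(S0 * math.exp(z_T) - K, 0.0))
    mean = sum(payoffs) / m
    var = sum((p - mean) ** 2 for p in payoffs) / (m - 1)
    half = z * math.sqrt(var / m)          # z * s / sqrt(m)
    return mean - half, mean + half
```

The interval width shrinks like $1/\sqrt{m}$, consistent with the behaviour seen in Figure \ref{fig:confidence IG}.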
Both payoffs are monotone in $T$ and $K$: as $T$ increases, the expected payoff increases, while for $K\gg S_0$ the probability that $S_T\geq K$ is very small and the payoff is therefore close to $0$.\\ \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{MonteCarlo_alpha_parameters.png}}}\hspace{5pt} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{MonteCarlo_beta_parameters.png}}}\hspace{5pt} \caption{Payoff as a function of the parameters $\alpha_1, \alpha_2$ (a) and of the parameters $\beta_1, \beta_2$ (b).}\label{fig:G_b1_b2} \end{center} \end{figure} \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{MonteCarlo_alpha2_beta2_parameters2.png}}}\hspace{5pt} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{MonteCarlo_lambda1_lambda2_parameters.png}}}\hspace{5pt} \caption{ Payoff as a function of the parameters $\alpha_2, \beta_2$ (a) and of the parameters $\lambda_{12}, \lambda_{21}$ (b).}\label{fig:image3} \end{center} \end{figure} In each figure the parameters are held constant, except the ones indicated in the graph: $\mu_1=\mu_2=0.01,\:\: \alpha_1=\alpha_2=\beta_1=\beta_2=0.1,\:\: \sigma_1=\sigma_2=0.01,\:\lambda_{12}=\lambda_{21}=0.25,\: r=0.04,\: T=1,\: S_0=20\:$ and $ K=1.$ Setting $\lambda_{12}=\lambda_{21}$ implies the process spends, on average, an equal amount of time in each regime.
The only exception is Figure \ref{fig:G_b1_b2}, where $\lambda_{12}=1000$ and $\lambda_{21}=1/10$, so that the process spends a majority of the time in the second regime; this makes the process an approximation to a time-changed Levy process.\\ For changes in $\beta_1, \beta_2,$ the payoff approaches an asymptote because Gamma and Inverse Gaussian random variables both have mean $\alpha/\beta,$ which approaches infinity as $\beta\rightarrow 0^+.$ In Figure \ref{fig:image3}, changes in the price with respect to the intensity parameters and the parameters of the underlying subordinators are shown. \begin{table}[h!] \centering \caption{European call option payoff comparison between the Black-Scholes formula and Monte Carlo simulation of the reduced switching Levy process at different parameters}\label{bshpri} \begin{tabular}[H]{|c|c|c|} \hline Parameters & Reduced switching Levy model&Black-Scholes formula \\ $(T,K,r,\sigma)$& $(\#\:\: \text{simulations}\:\: N=10^6)$& \\ \hline (1,1,0.04,0.5) & 19.0463 &19.0392\\ \hline (3,1,0.1,1)& 19.2955 & 19.3139 \\ \hline (2,30,0.5,0.001) &8.963608 & 8.96361\\ \hline \end{tabular} \end{table} Notice that we can reduce the regime switching Levy process to the Black-Scholes model by defining the subordinator $L_t=t$ and setting the parameters equal across both regimes. Setting the drift to $\mu_1=r-\frac{1}{2}\sigma^2_1$, so that the discounted Black-Scholes price process is a martingale, we find that the resulting price is consistent with the price given by the Black-Scholes formula. See Table \ref{bshpri}. \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{oilWTIpriceprocess.png}}} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{logWTIplot.png}}}\hspace{5pt} \caption{Daily price series (left) and log-returns (right) of WTI futures in NYMEX.
Source: Bloomberg Terminal, April 2018. }\label{fig:Estimation1} \end{center} \end{figure} \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{Electricity.png}}} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{diffElectricity.png}}}\hspace{5pt} \caption{Price series and log-returns for the Ontario daily average electricity spot price. Source: Bloomberg Terminal, April 2018.}\label{fig:Estimation2} \end{center} \end{figure} \section{Parameter estimation and numerical pricing} \label{estimationAndOutput} We implement two approaches for fitting the parameters of the underlying model to historical financial data: calibration and estimation, depending on whether option prices or the underlying electricity and oil prices are considered. In the calibration approach, the parameters are fitted by minimizing the quadratic error between the prices obtained numerically and option quotes. The option quotes are taken from Bloomberg's database for a variety of strike prices and times to maturity. The parameters are fitted using European call option quotes of West Texas Intermediate (WTI) crude oil. \\ In the parameter estimation, we implement the following techniques based on historical prices of oil and electricity: method of moments, minimum distance method and maximum likelihood estimation, combined with empirical estimation of the switching parameters. Specifically, we use daily historical NYMEX WTI crude oil futures (11-16-2012 to 06-05-2018) and IESO Ontario (Canada) Zone 24H electricity average spot prices (06-06-2008 to 06-05-2018). Spikes and stochastic volatility are observed in the series. Similar phenomena have been reported in other electricity markets, see \cite{benth1, cartea}. We assume that there are 250 trading days in a year, with each trading day corresponding to $\Delta t=1/250$ of unit time.
\\ Figures \ref{fig:Estimation1} and \ref{fig:Estimation2} plot the historic WTI oil futures and average Ontario electricity prices, as well as their log-returns. Electricity spot prices sometimes move below zero, implying a surplus of electricity produced during low demand. Because electricity produced by power suppliers must be consumed immediately, the supplier pays wholesale customers to buy the surplus energy, see \cite{Electricity_Regime}. All negative prices are arbitrarily modified to CAD $\$ 0.01$ for estimation purposes.\\ We also compare the empirical density function of the log-returns of each commodity to the normal distribution with the same mean and variance parameters as the historical log-return process. To this end we implement a kernel smoothing technique. Hence, the estimated p.d.f. is: \begin{equation*}\label{eq:kerel} \hat{f}(x; \theta)=\frac{1}{nh}\sum_{j=1}^n K\Big(\frac{x-z_j}{h}\Big), \end{equation*} where the function $K(\cdot)$ is the Gaussian kernel (not to be confused with the strike price).\\ As the p.d.f. $f(x_k;\theta)$ of the log-returns is not available in closed form, we simulate the model under a parametric set $\theta$ to get the empirical p.d.f. $\hat{f}(x_k;\theta)$ using a kernel smoothing technique, see for example \cite{kernel}, and continue the optimization procedure. \\ The value of $h$ is chosen to equal Silverman's rule-of-thumb bandwidth $h=1.06 \sigma n^{-1/5}$, where $\sigma$ is the standard deviation of the log-return series, see \cite{MaxLikelihood}. See Figure \ref{fig:image5} for the empirical p.d.f.'s corresponding to WTI futures (left) and Ontario electricity log-return prices (right).\\ \begin{figure}[htb!] \begin{center} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{ksdensity_diffoil.png}}} \subfigure[]{ \resizebox*{7cm}{!}{\includegraphics{ksdensity_diffElectricity.png}}}\hspace{5pt} \caption{Empirical density functions vs.
normal densities for oil log-returns (left) and electricity log-returns (right).}\label{fig:image5} \end{center} \end{figure} The kurtosis of the WTI futures and Ontario electricity log-return series are 6.34 and 23.947 respectively, much larger than that of the normal distribution, suggesting the presence of heavier tails and extreme behavior. Significant negative skewness is also reported for both series, respectively -0.1314 and -0.1377.\\ In our model the parameters are described by vectors $ \theta^j=(\mu_j,\sigma_j,\alpha_j,\beta_j)$ for $j=1,2,$ while $\lambda_{12}$ and $\lambda_{21}$ are the parameters of the hidden Markov chain. Hence the times the chain remains in regime $j$ are independent and exponentially distributed with mean $\lambda^j=\frac{1}{\lambda_{jk}}$, with $k=(j\bmod 2)+1$. We set $\Theta^j\subset\mathbb{R}^4$ to be the set of all feasible parameters for $\theta^j$. We assume that the two sets of parameters belong to different parameter spaces, i.e. $\Theta^1\neq\Theta^2.$ The two parameters of the subordinator and the diffusion coefficient are required to be positive, therefore we add the natural constraints $\sigma_j>0, \alpha_j>0, \beta_j>0.$\\ We assume that the duration of the $j$-th observed historic regime equals its expected value $\lambda^j.$ For each commodity, the $j$-th holding-rate parameter is given by: \begin{equation*} \lambda^j=\frac{\text{total number of days in regime}\:\: j }{\text{ number of occurrences of regime}\:\: j} \end{equation*} where it is assumed that there are 250 trading days every year.\\ By simple inspection of the log-return data of oil futures, we set the process to be in regime one between 11-16-2012 and 11-16-2014 as well as between 02-06-2017 and 06-05-2018; otherwise, we assume that the process is in regime two.
In the case of the log-returns of electricity spot prices, we set the process to be in regime two whenever the absolute value of the log-returns exceeds 3, and in regime one otherwise. \\ Table \ref{tab:lambda} shows the average time and daily standard deviation in the two regimes of the WTI futures and Ontario electricity series. We also include the standard deviation of the log-returns within each regime; the different orders of magnitude between regimes justify the use of a switching model.\\ \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline Commodity& $\hat{\lambda}^1$&$\hat{\lambda}^2$ & St. Dev. (regime 1)& St. Dev. (regime 2)\\ \hline Oil&0.900&3.80&0.0052&0.0148\\ \hline Electricity & 0.2618&0.0081&0.6020&6.1086 \\ \hline \end{tabular} \caption{Holding-rate parameter estimates for each commodity, as well as the standard deviation in each regime} \label{tab:lambda} \end{table} Having located the regime changes and thereby estimated the values of $\lambda^1, \lambda^2$, the historic log-returns are separated into two data sets, one containing the data points for each regime.\\ To calibrate the parameters within each regime we minimize the \textit{mean square error} between the numeric option payoffs and European call option quotes. When the option is out of the money the option price is obtained using a Monte Carlo approach, because the Fourier-Cosine method exhibits significant error.\\ Thus, the objective function in regime $j$ is \begin{align*} J(\theta^j)&=\sqrt{\frac{1}{n}\sum_{T,K}(V(T,K; \theta^j)-\hat{V}(T,K))^2},\quad j\in\{1,2\}, \end{align*} where $V(T,K; \theta^j)$ denotes the model price and $\hat{V}(T,K)$ the option quotes, taken over a range of strike prices $K$ and maturity times $T$.\\ The optimal parameter is $\hat{\theta}^j=\operatorname*{arg\,min}_{\theta^j\in\Theta^j} J(\theta^j)$.
It is calculated using a gradient descent method.\\ The stopping criterion is a step tolerance, taken to be equal to $10^{-10}$: a lower bound on the size of the step $\|\theta^t-\theta^{t-1}\|_2$. The solver stops if the stopping criterion is reached, or if the maximum number of iterations (fixed to 1000 steps) is exceeded. Different initial starting points are found to give similar parameter estimates. Table \ref{tab:calibration} gives the calibrated parameters in the cases where the subordinator is a Gamma process or an Inverse Gaussian process. As expected, the volatility is higher in regime two for both subordinators. \begin{table}[h!] \centering \caption{Parameter calibration using a mean square error criterion} \begin{tabular}{|c|c|c|c|c|} \hline Commodity (subordinator) & $\hat{\mu}$ & $\hat{\sigma}$ & $\hat{\alpha}$ & $\hat{\beta}$\\ \hline Oil log-return Regime 1 (Gamma)& -0.03387 & 0.0030& 2.640710 & 1.007e-8\\ \hline Oil log-return Regime 2 (Gamma)& -0.01445& 1.116184& 2.56567e-5 & 10.32441 \\ \hline Oil log-return Regime 1 (Inverse Gaussian) & -0.04976& 0.130011& 0.24788 & 92.6926\\ \hline Oil log-return Regime 2 (Inverse Gaussian) & -0.04950& 0.515891& 8.531e-4& 8.43091 \\ \hline \end{tabular} \label{tab:calibration} \end{table} To choose the initial set of parameters we use the \textit{Method of Moments}. Theoretical moments are computed from the characteristic function of the model under both subordinators considered.
Matching empirical and theoretical moments up to fourth order leads to the following non-linear system of equations, in the case of a model under an Inverse Gaussian subordinator: \begin{align*} {\hat\mu}_1&={\alpha} {\mu} {\Delta t}/{\beta}\\ \hat{\mu}_2 &={\alpha} {\Delta t} ({\mu}^{2} + {\beta}^{2} {\sigma}^{2} + {\alpha} {\beta} {\mu}^{2} {\Delta t})/{\beta}^{3}\\ \hat{\mu}_3&={\alpha} {\mu} {\Delta t} ({3} {\mu}^{2} + {3} {\beta}^{2} {\sigma}^{2} + {3} {\alpha} {\beta} {\mu}^{2} {\Delta t} + {\alpha}^{2} {\beta}^{2} {\mu}^{2} {\Delta t}^{2} + {3} {\alpha} {\beta}^{3} {\sigma}^{2} {\Delta t})/{\beta}^{5}\\ \hat{\mu}_4&=({\alpha} {\Delta t} ({15} {\mu}^{4} + {3} {\beta}^{4} {\sigma}^{4} + {18} {\beta}^{2} {\mu}^{2} {\sigma}^{2} + {15} {\alpha} {\beta} {\mu}^{4} {\Delta t} + \\ &\quad\quad{6} {\alpha}^{2} {\beta}^{2} {\mu}^{4} {\Delta t}^{2} + {\alpha}^{3} {\beta}^{3} {\mu}^{4} {\Delta t}^{3} + {3} {\alpha} {\beta}^{5} {\sigma}^{4} {\Delta t} + {6} {\alpha}^{2} {\beta}^{4} {\mu}^{2} {\sigma}^{2} {\Delta t}^{2} + {18} {\alpha} {\beta}^{3} {\mu}^{2} {\sigma}^{2} {\Delta t}))/{\beta}^{7}, \end{align*} where ${\hat\mu}_k$ is the empirical $k$-th moment.\\ The system of equations is solved separately for each regime using the function \verb|fsolve| in MATLAB, which is based on a trust-region algorithm. The results are summarized in Table \ref{tab:method_moment IG}.\\ A similar result is obtained for the model under a Gamma subordinator.\\ \begin{table}[h!] 
\centering \caption{Parameter Estimation using Method of Moments under Inverse Gaussian Subordinator} \begin{tabular}{|c|c|c|c|c|} \hline Commodity & $\hat{\mu}$ & $\hat{\sigma}$ & $\hat{\alpha}$ & $\hat{\beta}$\\ \hline Oil log-return Regime 1& 0.1624 & 0.7213 & 0.3238 & 1.6971\\ \hline Oil log-return Regime 2 & -0.0354 & 1.3402 & 0.0400 & 1.9584\\ \hline Electricity log-return Regime 1 & 0.1111 & 3.1233 & 28.4386 & 3.4862 \\ \hline Electricity log-return Regime 2 & -3.7405 & 19.5346 & 0.0132 & 0.3539\\ \hline \end{tabular} \label{tab:method_moment IG} \end{table} The method encounters difficulties in finding a global minimum when the empirical moments are calculated from the electricity log-return prices. Changing the initial starting points yields varying results, which indicates the presence of local minima. Despite these shortcomings, the method of moments can provide an initial solution for a \textit{Minimum Distance} approach based on the distance between the theoretical and empirical characteristic functions of the log-returns, the latter defined as: \begin{equation*}\label{eq:CHAR} \hat{\varphi}_{Z^j_{\Delta t}}(u)=\frac{1}{n}\sum_{k=1}^n\exp(\text{i} u z_k) \end{equation*} for a sample $z_1, z_2, \ldots, z_n$ of $n$ log-returns of the underlying series.
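The empirical characteristic function above is straightforward to evaluate numerically. A minimal sketch follows, using a standard-normal sample as a stand-in for the log-returns, whose exact characteristic function $e^{-u^2/2}$ provides a sanity check:

```python
import numpy as np

def empirical_cf(u, z):
    """Empirical characteristic function (1/n) * sum_k exp(i*u*z_k),
    evaluated on an array of frequencies u."""
    z = np.asarray(z)
    return np.exp(1j * np.outer(u, z)).mean(axis=1)

# Sanity check against the known CF of a standard normal, exp(-u^2/2).
rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
u = np.array([0.0, 0.5, 1.0])
phi_hat = empirical_cf(u, z)
```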
See \cite{MinimumDistanceEstimates2} for details.\\ The objective function under regime $j$ is defined by: \begin{equation*} ||\varphi_{Z^j_{\Delta t}}(u; \theta)- \hat{\varphi}_{Z^j_{\Delta t}}(u)||_2 :=\Bigg(\int_{-\infty}^{\infty} |\varphi_{Z^j_{\Delta t}}(u; \theta)- \hat{\varphi}_{Z^j_{\Delta t}}(u)|^2 w(u)\,du\Bigg)^{1/2}, \end{equation*} where $w$ is the weight function $w(u)=(1/\sqrt{2\pi})\exp(-u^2/2)$.\\ Then, $\hat{\theta}$ is the minimum distance estimate of $\theta$ if \begin{equation*} ||\varphi_{Z^j_{\Delta t}}(u; \hat{\theta})- \hat{\varphi}_{Z^j_{\Delta t}}(u)||_2 =\inf_{\theta\in\Theta}\{ ||\varphi_{Z^j_{\Delta t}}(u; \theta)- \hat{\varphi}_{Z^j_{\Delta t}}(u)||_2 \}. \end{equation*} Again, we apply the algorithm to each regime separately.\\ The integral is computed numerically using a global adaptive quadrature algorithm: the interval of integration is subdivided and the integration takes place on each subinterval; intervals are further subdivided whenever the algorithm determines that the integral has not been computed to sufficient accuracy. \begin{table}[h!] 
\centering \caption{Parameter Estimation using Minimum Distance Method under Inverse Gaussian subordinator} \begin{tabular}{|c|c|c|c|c|} \hline Commodity & $\hat{\mu}$ & $\hat{\sigma}$ & $\hat{\alpha}$ & $\hat{\beta}$\\ \hline Oil log-return Regime 1& 0.01736& 0.11675& 31.648& 8.0554\\ \hline Oil log-return Regime 2 & -0.4956& 2.0078& 2.2260& 10.141\\ \hline Electricity log-return Regime 1 &0.00813& 2.0139& 67.456 & 0.00154 \\ \hline Electricity log-return Regime 2 & 5.7435 & 4.48714& 76.004 & 6.871e-4\\ \hline \end{tabular} \label{tab:Min dist method_IG} \end{table} Table \ref{tab:Min dist method_IG} shows the estimated model parameters under both regimes for the two series of underlying assets.\\ Given a random sample $z=(z_1, \ldots, z_n)$ of a random variable $Z$ with density function $f(z;\theta)$ under the real-world measure, with unknown parameters $\theta$, maximum likelihood estimation (MLE) estimates the vector-valued parameter $\theta$ of the model by maximizing the log-likelihood function: \begin{equation*} l(\theta;z)=\sum_{k=1}^n \log f(z_k;\theta);\:\:\:\: \theta\in\Theta, \end{equation*} with respect to $\theta$. The value of $\theta$ is constrained to $\Theta\subset\mathbb{R}^4$, the space of all feasible values of the parameters. The maximum likelihood estimator is given by: \begin{equation*}\label{maxlikelihoodeq} \hat{\theta}=\arg\max_{\theta\in\Theta}l(\theta;z). \end{equation*} Tables \ref{mlegamma} and \ref{mleig} show the maximum likelihood estimates for both regimes and for the models under Gamma and Inverse Gaussian subordinators. \begin{table}[h!] 
\centering \caption{Parameter Estimation using Maximum Likelihood Method under Gamma subordinator}\label{mlegamma} \begin{tabular}{|c|c|c|c|c|} \hline Commodity & $\hat{\mu}$ & $\hat{\sigma}$ & $\hat{\alpha}$ & $\hat{\beta}$\\ \hline Oil log-return Regime 1& 0.0023& 0.0431& 42.928& 11.9960\\ \hline Oil log-return Regime 2 & -0.372& 0.52851 & 17.3008& 88.556\\ \hline Electricity log-return Regime 1 & 5.844e-3 & 1.5002 & 93.271 & 2.1903\\ \hline Electricity log-return Regime 2 & -0.0148& 7.543& 90.5900& 0.01770 \\ \hline \end{tabular} \end{table} \begin{table}[h!] \centering \caption{Parameter Estimation using Maximum Likelihood Method under Inverse Gaussian subordinator}\label{mleig} \begin{tabular}{|c|c|c|c|c|} \hline Commodity & $\hat{\mu}$ & $\hat{\sigma}$ & $\hat{\alpha}$ & $\hat{\beta}$\\ \hline Oil log-return Regime 1&-0.4883& 0.5058& 0.64603& 63.709 \\ \hline Oil log-return Regime 2 & 0.1201 & 2.9707 & 0.00014 & 9.993\\ \hline Electricity log-return Regime 1 &-0.1781& 0.25873& 0.91878& 20.0860\\ \hline Electricity log-return Regime 2 & -0.0191& 4.9752 & 5.512e-5& 11.016\\ \hline \end{tabular} \end{table} In each case, the values of the volatility $\sigma$ are higher in the second regime, justifying the use of a regime-switching model. In nearly every method, the value of $|\mu|$ is very small, as expected, since the long-term deterministic contribution to the process should be near zero.\\ In choosing constraints, we set the lower bound of $\sigma$, $\alpha$ and $\beta$ to a small number $\epsilon=10^{-6}$. We set the upper bound of $\sigma$ to 5, as the diffusion coefficient is expected to be smaller than 1, and for $\alpha$ and $\beta$ we set the upper bound to 100, since the expected value of both Inverse Gaussian and Gamma random variables depends on the ratio $\alpha/\beta$ rather than on any particular value of $\alpha$ or $\beta$. The drift $\mu$ is expected to be small, so in most cases it was constrained to the set $[-1,1]$.
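The bounded likelihood maximization can be sketched as follows. This is a minimal illustration only: a Gaussian density stands in for the regime density of the subordinated model (which would be evaluated numerically in practice), and the bounds mirror the constraints just described.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(theta, z):
    """Negative log-likelihood -l(theta; z) = -sum_k log f(z_k; theta).
    A Gaussian density is a stand-in for the regime density of the
    time-changed model."""
    mu, sigma = theta
    return -np.sum(norm.logpdf(z, loc=mu, scale=sigma))

# Synthetic "log-returns" with known parameters for a sanity check.
rng = np.random.default_rng(1)
z = rng.normal(loc=-0.01, scale=0.5, size=50_000)

eps = 1e-6
res = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(z,),
               bounds=[(-1.0, 1.0), (eps, 5.0)],  # drift in [-1,1], sigma in (eps, 5]
               method="L-BFGS-B")
mu_hat, sigma_hat = res.x
```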
\section{Conclusions} We price European-style options with oil and electricity prices as underlying assets under a regime-switching, time-changed L\'evy noise. These are realistic models that allow us to incorporate stylized features of the price dynamics. Our findings show that, under this framework, a pricing method based on a Fourier-cosine expansion is efficient and accurate when compared with a standard Monte Carlo approach.\\ In addition, we address the problem of parameter estimation and calibration. To this end we applied several methods based on both historical data and risk-neutral quotes. \section{Acknowledgments} This research is supported by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} The beyond-mean-field description of a weakly interacting Bose quantum fluid, initially introduced by Bogoliubov, relies on small collective excitations on top of a time-independent condensate \cite{Bogoliubov1947, Pitaevskii2016}. These excitations are described as non-interacting quasi-particles with a specific energy spectrum: sound-like at low momenta and free-particle-like at large momenta. Due to these respectively linear and parabolic dependences at low and large momenta, a system exhibiting this type of dispersion satisfies the Landau criterion for superfluidity \cite{leggett2001bose}, which is a benchmark for the system to behave as a superfluid, one of the most striking manifestations of quantum many-body physics. In optics, a growing community focuses on quantum-fluid physics with light in non-linear media \cite{carusotto2013quantum}. For example, Bose-Einstein condensation has been observed both in exciton-polariton \cite{kasprzak2006bose} and dye-filled \cite{klaers2010bose} microcavities. Initially proposed by Pomeau and Rica \cite{Pomeau1993} and neglected experimentally for a long time, paraxial fluids of light present exciting perspectives for studying quantum-fluid physics \cite{carusotto2014superfluid,noh2016quantum}. In this approach, photons acquire an effective mass as a consequence of the paraxial approximation, while effective repulsive photon-photon interactions are mediated by the optical non-linearity of the medium in which they propagate. Experimental implementations rely on the propagation of an intense laser beam within a negative third-order, Kerr non-linear medium such as photorefractive crystals \cite{wan2007dispersive,michel2018superfluid}, thermo-optic media \cite{vocke2016role, Elazar2013}, and hot atomic vapors \cite{vsantic2018nonequilibrium, Fontaine2018}. In this 2D + 1 geometry, the system is two-dimensional in the transverse direction and the propagation coordinate is analogous to an effective time. 
Recently, two experiments have measured the dispersion relation of weak-amplitude excitations on top of a paraxial fluid of light with two complementary approaches \cite{Vocke2015, Fontaine2018} following a proposal of Ref. \cite{carusotto2014superfluid}. The evolution of these elementary excitations is described by the Bogoliubov theory, revealing the rich analogy existing between non-linear photonics and quantum condensed matter physics. While this analogy is now well established, theoretical works \cite{Larre2017,Ferreira2018} have questioned the presence and the impact of interferences between counter-propagating Bogoliubov excitations in paraxial fluids of light. In this paper, we present the first experimental evidence of these interferences and we demonstrate their dramatic impact on the reconstruction of the dispersion relation and on the identification of superfluidity of light. Moreover, we propose an interpretation of these interferences as a stimulated analogue of the Sakharov oscillations of cosmology~\cite{sakharov1966initial,cosmobook}, recently observed in an atomic condensate~\cite{hung2013cosmology}. Finally, we show that this effect is robust across several experimental systems used for paraxial fluids of light by numerically taking into account the photon absorption, the finite size of the fluid, the saturation and the non-locality of the photon-photon interactions. Because all these corrections only marginally impact the observed behavior, our work opens the way to novel experimental techniques for probing paraxial fluids of light, based on the observation of Bogoliubov-excitation interferences. For example, we propose that extracting the contrast of constructive interference fringes in the output plane as a function of the probe parameters will give access to the efficiency with which phonons can be excited, also known as the static structure factor \cite{shammass2012phonon}. 
\section{Paraxial fluid of light} \label{TheoreticalGrounds} We consider a monochromatic beam of light propagating along the positive-$z$ direction in a $\smash{\chi^{(3)}}$ non-linear medium. In the paraxial and scalar approximations, the evolution of the slowly varying envelope $\mathcal{E}(\mathbf{r}_{\perp},z)$ of the complex electric field $E(\mathbf{r}_{\perp},z)=\mathcal{E}(\mathbf{r}_{\perp},z)e^{i(k_0z-\omega t)}$ is known to obey the non-linear Schr\"{o}dinger equation (NLSE) of non-linear optics \cite{Boyd2008}: \begin{equation} i\frac{\partial\mathcal{E}}{\partial z}=\bigg({-}\frac{1}{2k_0}\boldsymbol{\nabla}_{\perp}^{2}-\frac{3k_0\chi^{(3)}}{8n^{2}}|\mathcal{E}|^{2}-\frac{i\alpha}{2}\bigg)\mathcal{E}. \label{NLSE} \end{equation} In this equation, $k_0=n\omega/c$ is the laser propagation constant in the medium with $n$ the linear refractive index, $\omega$ the laser angular frequency, and $c$ the vacuum speed of light, $\boldsymbol{\nabla}_{\perp}$ is the gradient with respect to the transverse coordinates $\mathbf{r}_{\perp}=(x,y)$, and $\alpha\geqslant0$ is the absorption coefficient describing photon losses. Except for the last term describing losses, \eqref{NLSE} is formally analogous to the Gross-Pitaevskii equation (GPE) describing the temporal evolution of the macroscopic wavefunction of an atomic Bose-Einstein condensate (BEC) in two dimensions \cite{Pitaevskii2016}. In the right-hand side in particular, the Laplacian term mimics the kinetic-energy term with a mass corresponding to the laser propagation constant $k_0$. In addition, the $\smash{\chi^{(3)}}$ contribution is analogous to the contact-interaction potential with an interaction parameter $g$ proportional to the Kerr susceptibility: $ g=-3k_0\chi^{(3)}/(8n^{2})$. In the following, we consider the non-linearity to be self-defocusing ($\smash{\chi^{(3)}}<0$) so that the effective photon-photon interactions are repulsive ($g>0$). 
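As a minimal numerical illustration of the NLSE above (our sketch, with illustrative parameters and a 1D transverse section for brevity), a standard split-step Fourier scheme propagates the envelope. For a uniform field the kinetic step is trivial and only the non-linear phase accumulates, matching the known plane-wave solution $\sqrt{\rho_0}\,e^{-ig\rho_0 z}$:

```python
import numpy as np

def split_step_nlse(E0, dz, nz, k0, g, alpha, dx):
    """Symmetric split-step Fourier integrator for the 1D-transverse NLSE
    i dE/dz = -(1/(2 k0)) d2E/dx2 + g |E|^2 E - (i alpha / 2) E,
    with g = -3 k0 chi3 / (8 n^2) > 0 for a defocusing medium."""
    E = np.asarray(E0, dtype=complex)
    kx = 2 * np.pi * np.fft.fftfreq(E.size, d=dx)
    half_kin = np.exp(-1j * kx**2 / (2 * k0) * dz / 2)   # kinetic half-step
    for _ in range(nz):
        E = np.fft.ifft(half_kin * np.fft.fft(E))
        E *= np.exp(-1j * g * np.abs(E)**2 * dz - alpha * dz / 2)  # nonlinearity + losses
        E = np.fft.ifft(half_kin * np.fft.fft(E))
    return E

# Flat-top check: a uniform lossless beam only accumulates the phase -g*rho0*z.
rho0, k0, g = 2.0, 1.0, 0.5
E_out = split_step_nlse(np.sqrt(rho0) * np.ones(64), dz=0.01, nz=100,
                        k0=k0, g=g, alpha=0.0, dx=0.1)
```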
In this analogy, the fluid density $\rho(\mathbf{r}_{\perp},z)$ is directly proportional to the field intensity $I(\mathbf{r}_{\perp},z)$ according to $ \rho(\mathbf{r}_{\perp},z)=|\mathcal{E}(\mathbf{r}_{\perp},z)|^{2}=2I(\mathbf{r}_{\perp},z)/(c\epsilon_{0}n) $, where $\epsilon_{0}$ denotes the vacuum permittivity. However, while the GPE describes the evolution of a condensate wavefunction for a matter quantum fluid \textit{in time}, the NLSE describes how the electric-field envelope $\mathcal{E}(\mathbf{r}_{\perp},z)$ of the light beam propagates \textit{in space}, along the $z$ axis. Therefore, the propagation coordinate $z$ is equivalent to an effective time in the NLSE. As a consequence, every transverse plane (spanned by $\mathbf{r}_{\perp}$) along the propagation axis $z$ can be regarded as a snapshot of the evolution of the two-dimensional paraxial fluid of light (see Fig.~\ref{fig:manip}). The role of the physical time $t$ as a third spatial coordinate for propagating light was highlighted in~\cite{Larre2015}. These features are however not relevant in the monochromatic excitation case under investigation here. \begin{figure}[t!] \centering \includegraphics[width=6.5cm]{manip.pdf} \caption{Paraxial fluid of light. In the paraxial and scalar approximations, a laser beam propagates along the $z$ axis in a $\smash{\chi^{(3)}}$ non-linear medium according to the effective Gross-Pitaevskii equation (\ref{NLSE}). The field profile on each transverse $\mathbf{r}_{\perp}=(x,y)$ plane along the propagation direction $z$ is equivalent to a snapshot of the evolution of the paraxial fluid of light.} \label{fig:manip} \end{figure} In the following theoretical description of the paraxial fluid of light and of its Bogoliubov excitations, we disregard the effect of photon losses by taking $\alpha=0$. This approach has the advantage of shining light on the general features without harming the generality of our conclusions. 
A complete theory including photon losses will be presented later in Fig.~\ref{fig:ShiftVersus}, showing no qualitative change. In the ideal lossless case, we assume that the beam maintains a wide flat-top and $z$-independent intensity profile all along its propagation, so that the corresponding solution of \eqref{NLSE} reads \begin{equation} \label{FlatTopBeam} \mathcal{E}_{0}(z)=\sqrt{\rho_{0}}e^{-ik_0\Delta nz}. \end{equation} In this equation, $\rho_{0}$ is the density of the homogeneous paraxial fluid of light and $\Delta n=g\rho_{0}/k_0$ is the change of refractive index induced by the optical non-linearity. A small departure from the uniform and stationary configuration (\ref{FlatTopBeam}) is described by a solution of \eqref{NLSE} of the form \begin{equation} \mathcal{E}(\mathbf{r}_{\perp},z)=\mathcal{E}_{0}(z)+\delta\mathcal{E}(\mathbf{r_{\perp}},z), \end{equation} where $|\delta\mathcal{E}(\mathbf{r_{\perp}},z)|\ll|\mathcal{E}_{0}(z)|$. Such an expansion depicts weak-amplitude fluctuations (for example intensity fluctuations) on top of the homogeneous (i.e., $\mathbf{r}_{\perp}$-independent) background defined in \eqref{FlatTopBeam}. The complex field $\delta\mathcal{E}(\mathbf{r}_{\perp},z)$, which is solution of the linearized version of \eqref{NLSE}, can be decomposed following the Bogoliubov approach \cite{Pitaevskii2016} as a linear superposition of plane waves counter-propagating in the transverse $\mathbf{r}_{\perp}$ plane with opposite wavevectors $\pm\mathbf{k}_{\perp}$ and oscillating in the effective time $z$ at the same angular frequency $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$: \begin{align} \notag \delta\mathcal{E}(\mathbf{r}_{\perp},z)&\left.=e^{-ik_0\Delta nz} \int\frac{d^{2}\mathbf{k}_{\perp}}{(2\pi)^{2}}\Big\{u(\mathbf{k}_{\perp})b_{\mathbf{k}_{\perp}}e^{i[\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}-\Omega_{\mathrm{B}}(\mathbf{k}_{\perp}) z]}\right. 
\\ \label{Fluctuation} &\left.\hphantom{=}+v^{\ast}(\mathbf{k}_{\perp})b^{\ast}_{\mathbf{k}_{\perp}}e^{-i[\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}-\Omega_{\mathrm{B}}(\mathbf{k}_{\perp}) z]}\Big\}.\right. \end{align} In this expression, the complex amplitudes of the plane waves with wavevectors $\mathbf{k}_{\perp}$ and $-\mathbf{k}_{\perp}$ are respectively denoted by $u(\mathbf{k}_{\perp})b_{\mathbf{k}_{\perp}}$ and $v^{\ast}(\mathbf{k}_{\perp})b_{\mathbf{k}_{\perp}}^{\ast}$, where $b_{\mathbf{k}_{\perp}}$ is chosen to be homogeneous to a voltage times a length so that $u(\mathbf{k}_{\perp})$ and $v(\mathbf{k}_{\perp})$ are by construction dimensionless. The latter satisfy the eigenvalue problem \cite{Pitaevskii2016} \begin{equation} \label{BdG} \begin{split} &\mathcal{L}(\mathbf{k}_{\perp}) \renewcommand\arraystretch{1} \begin{bmatrix} u(\mathbf{k}_{\perp}) \\ v(\mathbf{k}_{\perp}) \end{bmatrix} =\Omega_{\rm B}(\mathbf{k}_{\perp}) \renewcommand\arraystretch{1} \begin{bmatrix} u(\mathbf{k}_{\perp}) \\ v(\mathbf{k}_{\perp}) \end{bmatrix} ,\quad\text{where} \\ &\mathcal{L}(\mathbf{k}_{\perp})= \renewcommand\arraystretch{1} \begin{bmatrix} k_{\perp}^{2}/(2k_{0})+k_{0}\Delta n & \!\!\! k_{0}\Delta n \\ -k_{0}\Delta n & \!\!\! -k_{\perp}^{2}/(2k_{0})-k_{0}\Delta n \end{bmatrix}, \end{split} \end{equation} with the wavenumber $k_{\perp}=|\mathbf{k}_{\perp}|$. Without loss of generality, we take $u(\mathbf{k}_{\perp})$ and $v(\mathbf{k}_{\perp})$ to be real. Setting the normalization condition $u^{2}(\mathbf{k}_{\perp})-v^{2}(\mathbf{k}_{\perp})=1$, we get the dispersion relation \begin{align} \label{DispersionRelation} \Omega_{\mathrm{B}}(\mathbf{k}_{\perp})&=\sqrt{\frac{k_{\perp}^{2}}{2k_0}\bigg(\frac{k_{\perp}^{2}}{2k_0}+2k_0\Delta n\bigg)}\quad\text{and} \\ \label{BogAmplitudes} u(\mathbf{k}_{\perp})\pm v(\mathbf{k}_{\perp})&=\bigg[\frac{k_{\perp}^{2}}{2k_0}\bigg/\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})\bigg]^{\pm\frac12}. 
\end{align} Equation (\ref{DispersionRelation}) is the optical analog of the Bogoliubov excitation spectrum of a homogeneous two-dimensional atomic BEC at rest and \eqref{BogAmplitudes} gives the $\mathbf{k}_{\perp}$ dependence of the Bogoliubov amplitudes $u(\mathbf{k}_{\perp})$ and $v(\mathbf{k}_{\perp})$. Here, the linear combinations $u+v$ and $u-v$ respectively correspond to the density and phase amplitudes of the Bogoliubov collective wave in wavevector space. From \eqref{DispersionRelation}, we can extract the peculiar behavior of the Bogoliubov dispersion relation $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$, which is linear (sound-like) at small $\mathbf{k}_{\perp}$ and parabolic (free-particle-like) at large $\mathbf{k}_{\perp}$: \begin{equation} \Omega_{\mathrm{B}}(\mathbf{k}_{\perp})\simeq \begin{cases} c_{\mathrm{s}}k_{\perp} & \text{when} \quad k_{\perp}\xi\ll1 \\ \displaystyle{\frac{k_{\perp}^{2}}{2k_0} + k_0 \Delta n} & \text{when} \quad k_{\perp}\xi\gg1. \end{cases} \end{equation} These asymptotic behaviors define the optical analogs of the Bogoliubov sound velocity, $c_{\mathrm{s}}=\sqrt{\Delta n}$, and of the healing length, $\xi=1/(k_0\sqrt{\Delta n})=1/(k_0c_{\mathrm{s}})$, of atomic BECs. In the present optical context, $c_{\mathrm{s}}$ is by construction dimensionless, as it corresponds to the propagation angle with respect to the $z$ axis. The peculiar refraction properties corresponding to the constant $c_{\mathrm{s}}$ in the $k_{\perp}\xi \to 0$ limit were highlighted in~\cite{Fontaine2018}. In the large $k_{\perp}\xi$ limit, the shift in $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$ is simply linked to the nonlinear refractive index change. 
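The two asymptotic regimes can be verified numerically from the dispersion relation above; this short sketch uses illustrative values of $k_0$ and $\Delta n$, not experimental parameters:

```python
import numpy as np

def omega_B(k, k0, dn):
    """Bogoliubov dispersion relation Omega_B(k) of the paraxial fluid."""
    return np.sqrt(k**2 / (2 * k0) * (k**2 / (2 * k0) + 2 * k0 * dn))

k0, dn = 1.0e7, 1.0e-5          # illustrative values only
cs = np.sqrt(dn)                # sound velocity (a dimensionless angle here)
xi = 1.0 / (k0 * cs)            # healing length

k_small = 1e-3 / xi             # k*xi << 1: sound-like branch, Omega ~ cs*k
k_large = 1e3 / xi              # k*xi >> 1: free-particle branch + k0*dn shift
```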
\section{Extracting the Bogoliubov dispersion relation from the phase velocity} The phase velocity $v_{\mathrm{ph}}(\mathbf{k}_{\perp})$ of a Bogoliubov plane wave with wavevector $\mathbf{k}_{\perp}$ is related to the Bogoliubov dispersion relation $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$ through \begin{equation} v_{\mathrm{ph}}(\mathbf{k}_{\perp})=\frac{\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})}{k_{\perp}}. \end{equation} Therefore, it is expected that we can directly reconstruct $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$ from the measurement of $v_{\mathrm{ph}}(\mathbf{k}_{\perp})$, which can be assessed from the measurement of the distance \begin{equation} S(\mathbf{k}_{\perp})=v_{\mathrm{ph}}(\mathbf{k}_{\perp})L \end{equation} that the Bogoliubov excitation travels in the transverse plane between the effective times $z=0$ and $z=L$, where $L$ stands for the length of the non-linear medium. In the experimental configuration initially proposed in Ref. \cite{Vocke2015} and studied here, $S(\mathbf{k}_{\perp})$ corresponds to the transverse displacement of a weak interference pattern obtained by overlapping a large-intensity flat-top background with a low-intensity probe, slightly tilted by an angle $\theta_{\mathrm{i}}$ with the $z$ axis along which the background propagates. These two beams come from the same laser, have the same frequency and polarization, and thus interfere, producing a small fluctuation $\delta\mathcal{E}(\mathbf{r}_{\perp},z)$ on top of the background envelope $\mathcal{E}_{0}(z)$ in the non-linear medium. The norm $k_{\perp}=(k_0/n)\sin\theta_{\mathrm{i}}$ of the transverse wavevector of the incident probe is controlled by changing $\theta_{\mathrm{i}}$, which must be small enough so that the whole optical system falls into the paraxial limit $k_{\perp}\ll k_0$ considered here. 
After propagation inside the medium of length $L$, the background (``bg'') and the probe (``p'') have accumulated different phases $\Phi_{\mathrm{bg}}$ and $\Phi_{\mathrm{p}}(\mathbf{k}_{\perp})$. According to \eqref{Fluctuation}, the latter depends on the Bogoliubov dispersion relation $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$. The difference \begin{equation} \Phi(\mathbf{k}_{\perp})=\Phi_{\mathrm{p}}(\mathbf{k}_{\perp})-\Phi_{\mathrm{bg}} \label{eqshift} \end{equation} between these two phases is responsible for an interference pattern in the transverse plane, shifted by \begin{equation} \label{S-Phi} S(\mathbf{k}_{\perp})=\frac{\Phi(\mathbf{k}_{\perp})}{k_{\perp}}. \end{equation} Experimentally, it is possible to have access to \begin{equation} \label{DeltaS} \Delta S(\mathbf{k}_{\perp})=S_{\mathrm{NL}}(\mathbf{k}_{\perp})-S_{\mathrm{L}}(\mathbf{k}_{\perp}), \end{equation} the relative deviation between the fringes patterns obtained at high and low background intensity, that is, in the non-linear (``NL'') regime and the linear (``L'') one, respectively. This quantity can be, at first, estimated in a geometrical approach, as detailed below. \subsection{Geometrical approach} In the linear regime, simple geometry yields the following expressions for the phases accumulated by the background and the probe beams: \begin{align} \Phi_{\mathrm{bg},\mathrm{L}}&=k_0L\quad\text{and} \\ \Phi_{\mathrm{p},\mathrm{L}}(\mathbf{k}_{\perp})&=k_0\sqrt{L^{2}+L^{2}\tan^{2}\theta_{\mathrm{r}}}\simeq k_0L\bigg(1+\frac{\theta_{\mathrm{r}}^{2}}{2}\bigg), \end{align} where $\theta_{\mathrm{r}}\simeq\theta_{i}/n$ is the refraction angle of the probe at the entrance of the medium. Using $k_{\perp}\simeq(k_0/n)\theta_{\mathrm{i}}$, we then obtain \begin{equation} \label{LPPhaseDiff} \Phi_{\mathrm{L}}(\mathbf{k}_{\perp})=\frac{k_{\perp}^{2}}{2k_0}L. 
\end{equation} In a geometrical approach, the same formula is assumed to hold in the non-linear regime provided the free-particle dispersion relation $k_{\perp}^{2}/(2k_0)$ is replaced with the Bogoliubov spectrum (\ref{DispersionRelation}) and we obtain using \eqref{eqshift}: \begin{equation} \Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})=\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L. \label{HPPhaseDiff} \end{equation} In the light of Eqs.~(\ref{S-Phi}) and (\ref{DeltaS}), this geometric approach then leads to \begin{equation} \Delta S(\mathbf{k}_{\perp})=\frac{k_{\perp}}{2k_0}\left[\sqrt{1+\Delta n\bigg(\frac{2k_0}{k_{\perp}}\bigg)^{2}}-1\right]L. \label{ShiftFaccio} \end{equation} This expression states that $\Delta S(\mathbf{k}_{\perp})$ saturates to a constant value proportional to the Bogoliubov speed of sound in the deep phonon regime: \begin{equation} \label{SaturationFaccio} \Delta S(\mathbf{k}_{\perp})\underset{k_{\perp}\xi\ll1}{\simeq}\sqrt{\Delta n}L=c_{\mathrm{s}}L. \end{equation} This approach has been experimentally implemented for non-local photon fluids \cite{Vocke2015}. In particular, \eqref{SaturationFaccio} suggests that the displacement $\Delta S(\mathbf{k}_{\perp})$ tends at small $k_{\perp}$ towards the intuitive geometric value given by the product of the sound velocity $c_{\mathrm{s}}$ and the effective time $L$. Surprisingly, this geometric approach differs drastically from the results of the full theory in~\cite{Larre2017,Ferreira2018}, which predicts instead a linear increase of $\Delta S(\mathbf{k}_{\perp})$ with $\Lambda =2\pi/k_{\perp}$ at small $k_{\perp}$, even in the limit of weak interactions ($\Delta n \to 0$). In the following, we explain the physical origin of this correction and show that interferences between the counter-propagating Bogoliubov collective excitations are responsible for the disagreement between \cite{Vocke2015} and \cite{Larre2017,Ferreira2018} in the sonic regime ($k_{\perp}\xi\ll1$). 
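The saturation predicted by the geometric model can be checked numerically; in this sketch, $k_0$, $\Delta n$ and $L$ are illustrative values, not the experimental ones:

```python
import numpy as np

def delta_S_geometric(k, k0, dn, L):
    """Fringe displacement Delta S(k) predicted by the geometric model."""
    return k / (2 * k0) * (np.sqrt(1 + dn * (2 * k0 / k)**2) - 1) * L

k0, dn, L = 1.0e7, 1.0e-5, 0.07     # illustrative values only
cs = np.sqrt(dn)
k_phonon = 1e-3 * k0 * cs           # deep phonon regime, k*xi = 1e-3
shift = delta_S_geometric(k_phonon, k0, dn, L)   # expected: ~ cs * L
```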
\subsection{Theoretical model including the interferences between counter-propagating Bogoliubov excitations} \label{SubSec:ActualTheory} Let us first introduce this effect qualitatively before deriving the full analytical solution. When the background and the probe enter the non-linear medium, both experience a sudden jump of the $\smash{\chi^{(3)}}$ susceptibility, analogous to a quantum quench of the interactions \cite{Larre2015, Larre2016, Larre2018}. This generates a conjugate beam, due to the boundary condition on the electric field amplitude at the interface. The conjugate field oscillates at the same frequency as the background and the probe, and propagates in the transverse direction with a wavevector $-\mathbf{k}_{\perp}$ opposite to that of the incident probe. In optics, this third-order non-linear wave-mixing process is known as degenerate four-wave mixing \cite{glorieux2010double,glorieux2012generation,agha2011time}. Interestingly, the interferences between the two counter-propagating Bogoliubov excitations (the probe and the conjugate within the medium), neglected in the geometrical model~\cite{Vocke2015}, take place continuously all along their propagation in the non-linear medium. Since the pump, probe and conjugate have the same frequency, they fulfill the phase-matching condition only when they are co-propagating, that is, when $k_{\perp}=0$. This can be seen by evaluating the ratio of the conjugate Bogoliubov amplitude $v(\mathbf{k}_{\perp})$ to the probe amplitude $u(\mathbf{k}_{\perp})$ using \eqref{BogAmplitudes}. In the free-particle regime $k_{\perp}\xi\gg1$, this ratio is small as it scales as $1/(k_{\perp}^{2}\xi^{2})$. In this limit, the impact of the interferences between the conjugate and the probe can be safely neglected and \eqref{HPPhaseDiff} is valid, as shown in the next section. 
However, in the phonon regime $k_{\perp}\xi\ll1$, $|v(\mathbf{k}_{\perp})/u(\mathbf{k}_{\perp})|\simeq 1$ and a full model taking into account the interferences between the counter-propagating Bogoliubov excitations gives drastically different results from the geometric approach detailed above. In the case of a finite diameter probe mode (not a plane wave), the geometric model of \eqref{ShiftFaccio} is recovered when the length $L$ of the medium is long enough for the probe and conjugate wavepackets to get spatially separated during the propagation~\cite{carusotto2014superfluid}. In this limit, the distance between the wavepacket centers gives access to the group velocity~\cite{Fontaine2018}, while the position of the fringes within the wavepackets gives access to the phase velocity. For realistic parameters, this requires impractically long samples. In the following, we derive an exact expression for the relative phase $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})=\Phi_{\mathrm{p},\mathrm{NL}}(\mathbf{k}_{\perp})-\Phi_{\mathrm{bg},\mathrm{NL}}$ accumulated by the probe with respect to the background after propagation through the medium. We use an approach similar to the quantum optics input-output formalism \cite{reynaud1992quantum,courty1992generalized} with a description of the medium given by the Bogoliubov theory \cite{Larre2017}. \noindent In air (i.e. $\smash{z<0}$ and $\smash{z>L}$) the envelope of the electric field including the background and its fluctuations may be expanded as \begin{equation} \mathcal{E}_{\mathrm{air}}(\mathbf{r}_{\perp},z)=\sqrt{\rho_{\mathrm{air}}(z)}e^{i\Phi_{\mathrm{air}}(z)}+e^{i\Phi_{\mathrm{air}}(z)}\int\frac{d^{2}\mathbf{k}_{\perp}}{(2\pi)^{2}}a_{\mathbf{k}_{\perp}}(z)e^{i\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}}. \label{eqAir} \end{equation} In this equation, $\rho_{\mathrm{air}}(z)$ and $\Phi_{\mathrm{air}}(z)$ are the density and the phase of the homogeneous background in air. 
Due to the conservation of energy at $z=0$ and $z=L$, the densities are related by $\rho_{\mathrm{air}}(z<0)=\rho_{\mathrm{air}}(z>L)=n\rho_{0}$, while the phases are $\Phi_{\mathrm{air}}(z<0)=0$, and $\Phi_{\mathrm{air}}(z>L)=-k_0\Delta nL$. In \eqref{eqAir}, $a_{\mathbf{k}_{\perp}}(z)$ denotes the Fourier amplitude of the fluctuations superimposed on the background in air. In our experiment, only one $\mathbf{k}_{\perp}$ component (corresponding to the probe mode for $z=0^-$) is injected into the medium and all the other modes are set to zero. Using the sign convention adopted in \eqref{Fluctuation}, the phase difference $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ can then be expressed as follows: \begin{equation} \label{TruePhiNL} \Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})=-\mathrm{arg}\bigg[\frac{a_{\mathbf{k}_{\perp}}(L^+)}{a_{\mathbf{k}_{\perp}}(0^-)}\bigg]. \end{equation} To derive the input/output relation between $a_{\mathbf{k}_{\perp}}(L^+)$ and $a_{\mathbf{k}_{\perp}}(0^-)$ we need to use both energy conservation at the interfaces and the Bogoliubov formalism for the evolution within the medium. In a first step, when entering the medium at $z=0$, the probe of amplitude $a_{\mathbf{k}_{\perp}}(0^-)$ transforms, by energy conservation, into $a_{\mathbf{k}_{\perp}}(0^+)=\sqrt{n}\,a_{\mathbf{k}_{\perp}}(0^-)$. Then, in analogy to the quantum formalism of dilute Bose gases, we can consider the term $a_{\mathbf{k}_{\perp}}(0^+)$ to be equivalent to the annihilation operator for the weakly interacting particles. Following the Bogoliubov approach, it can be connected to the non-interacting Bogoliubov operators $b_{\mathbf{k}_{\perp}}$ using the transformation: \begin{align} \label{ContinuityAirMedium} \renewcommand\arraystretch{1} \begin{bmatrix} a_{\mathbf{k}_{\perp}}(0^+) \\ a_{-\mathbf{k}_{\perp}}^{\ast}(0^+) \end{bmatrix} = \renewcommand\arraystretch{1} \begin{bmatrix} u(\mathbf{k}_{\perp}) & \!\!\! v(\mathbf{k}_{\perp}) \\ v(\mathbf{k}_{\perp}) & \!\!\! 
u(\mathbf{k}_{\perp}) \end{bmatrix} \renewcommand\arraystretch{1} \begin{bmatrix} b_{\mathbf{k}_{\perp}} \\ b_{-\mathbf{k}_{\perp}}^{\ast} \end{bmatrix}. \end{align} Thereafter, the counter-propagating Bogoliubov excitations evolve along the optical axis, accumulating a propagation phase provided by the Bogoliubov dispersion relation $\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})$. This evolution is analogous to those observed after a quench in atomic BEC, and leads to synchronized phases between the counter-propagating phonon modes ($b_{\mathbf{k}_{\perp}}$ and $b_{-\mathbf{k}_{\perp}}^{\ast}$). Interestingly, this synchronized effect is at the origin of Sakharov oscillations \cite{hung2013cosmology}. We obtain: \begin{align} \label{ContinuityMediumAir} \renewcommand\arraystretch{1} \begin{bmatrix} a_{\mathbf{k}_{\perp}}(L^-) \\ a_{-\mathbf{k}_{\perp}}^{\ast}(L^-) \end{bmatrix} \renewcommand\arraystretch{1} \begin{bmatrix} u(\mathbf{k}_{\perp})e^{-i\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L} & \!\!\! v(\mathbf{k}_{\perp})e^{i\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L} \\ v(\mathbf{k}_{\perp})e^{-i\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L} & \!\!\! u(\mathbf{k}_{\perp})e^{i\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L} \end{bmatrix} \renewcommand\arraystretch{1} \begin{bmatrix} b_{\mathbf{k}_{\perp}} \\ b_{-\mathbf{k}_{\perp}}^{\ast} \end{bmatrix}. \end{align} We can then invert \eqref{ContinuityAirMedium} using the Bogoliubov normalization $u^{2}(\mathbf{k}_{\perp})-v^{2}(\mathbf{k}_{\perp})=1$ and inject the result into the right-hand side of \eqref{ContinuityMediumAir}. 
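This composition can be verified numerically. The sketch below (with illustrative values of $k_0$, $\Delta n$, $L$ and $k_{\perp}$, not the experimental ones) builds the two transformation matrices, inverts the first and checks the closed forms of $U(\mathbf{k}_{\perp})$ and $V(\mathbf{k}_{\perp})$ given in \eqref{U} and \eqref{V}:

```python
import numpy as np

# Illustrative parameters (assumed, not the experimental values).
k0 = 8.0e6               # wavenumber [1/m]
dn = 1.0e-5              # non-linear index change Delta n
L = 7.5e-2               # medium length [m]
kp = 2 * np.pi / 50e-6   # transverse wavenumber k_perp [1/m]

eps = kp**2 / (2 * k0)                      # kinetic term k_perp^2 / (2 k0)
Om = np.sqrt(eps * (eps + 2 * k0 * dn))     # Bogoliubov dispersion Omega_B
u = 0.5 * (np.sqrt(eps / Om) + np.sqrt(Om / eps))
v = 0.5 * (np.sqrt(eps / Om) - np.sqrt(Om / eps))
assert abs(u**2 - v**2 - 1) < 1e-10         # Bogoliubov normalization

M_in = np.array([[u, v], [v, u]], dtype=complex)   # Eq. (ContinuityAirMedium)
M_out = np.array([[u * np.exp(-1j * Om * L), v * np.exp(1j * Om * L)],
                  [v * np.exp(-1j * Om * L), u * np.exp(1j * Om * L)]])
M = M_out @ np.linalg.inv(M_in)             # net matrix between the interfaces

U = u**2 * np.exp(-1j * Om * L) - v**2 * np.exp(1j * Om * L)
V = -2j * u * v * np.sin(Om * L)
assert abs(M[0, 0] - U) < 1e-10 and abs(M[1, 0] - V) < 1e-10
```

The off-diagonal element $M_{01}$ is likewise found to equal $V^{\ast}(-\mathbf{k}_{\perp})$, consistent with \eqref{InputOutputRelation}.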
Finally, taking into account energy conservation at the medium output $a_{\mathbf{k}_{\perp}}(L^-)=\sqrt{n}\,a_{\mathbf{k}_{\perp}}(L^+)$, we obtain the following input-output relations: \begin{equation} \label{InputOutputRelation} \renewcommand\arraystretch{1} \begin{bmatrix} a_{\mathbf{k}_{\perp}}(L^+) \\ a_{-\mathbf{k}_{\perp}}^{\ast}(L^+) \end{bmatrix} = \renewcommand\arraystretch{1} \begin{bmatrix} U(\mathbf{k}_{\perp}) & \!\!\! V^{\ast}(-\mathbf{k}_{\perp}) \\ V(\mathbf{k}_{\perp}) & \!\!\! U^{\ast}(-\mathbf{k}_{\perp}) \\ \end{bmatrix} \renewcommand\arraystretch{1} \begin{bmatrix} a_{\mathbf{k}_{\perp}}(0^-) \\ a_{-\mathbf{k}_{\perp}}^{\ast}(0^-) \end{bmatrix} , \end{equation} where $U(\mathbf{k}_{\perp})$ and $V(\mathbf{k}_{\perp})$ are defined by \begin{align} \label{U} U(\mathbf{k}_{\perp})&=u^{2}(\mathbf{k}_{\perp})e^{-i\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L}-v^{2}(\mathbf{k}_{\perp})e^{i\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L}\quad\text{and} \\ \label{V} V(\mathbf{k}_{\perp})&=-2iu(\mathbf{k}_{\perp})v(\mathbf{k}_{\perp})\sin[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp})L]. \end{align} In our configuration, right before the medium entrance ($z=0^-$), the mode with wavevector $-\mathbf{k}_{\perp}$ has a zero amplitude (i.e., the conjugate mode is seeded by vacuum) and therefore we set $a_{-\mathbf{k}_{\perp}}^{\ast}(0^-)=0$ in \eqref{InputOutputRelation}. 
As a result, we eventually come to the simple relation \begin{equation} \label{ProbeProbe} a_{\mathbf{k}_{\perp}}(L^+)=U(\mathbf{k}_{\perp})a_{\mathbf{k}_{\perp}}(0^-), \end{equation} from which we can simplify \eqref{TruePhiNL} and obtain the $\Omega_{\rm B}(\mathbf{k}_{\perp})$ dependence of $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ from Eqs.~(\ref{DispersionRelation}), (\ref{BogAmplitudes}), and (\ref{U}): \begin{equation} \label{HPPhaseLossless} \begin{split} \Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})&=-\mathrm{arg}[U(\mathbf{k}_{\perp})] \\ &=\Omega_B(\mathbf{k}_{\perp}) L -\mathrm{arg}[u^2(\mathbf{k}_{\perp})-v^2(\mathbf{k}_{\perp}) e^{2i\Omega_B(\mathbf{k}_{\perp}) L}] \\ &=\arctan\bigg\{\frac{[k_{\perp}^{2}/(2k_{0})]^{2}+\Omega_{\rm B}(\mathbf{k}_{\perp})^{2}}{k_{\perp}^{2}/k_{0}\times\Omega_{\rm B}(\mathbf{k}_{\perp})}\tan[\Omega_{\rm B}(\mathbf{k}_{\perp})L]\bigg\}. \end{split} \end{equation} \noindent The second expression of \eqref{HPPhaseLossless} allows for a direct understanding of the role of the interferences between Bogoliubov phonon excitations in the correction to \eqref{ShiftFaccio}. In the free-particle regime ($k_{\perp}\xi\gg1$), the $v^2$ term in the second expression of \eqref{HPPhaseLossless} is negligible. This is equivalent to saying that the phase-matching condition is not fulfilled and that the four-wave-mixing process is inefficient at creating the conjugate mode. Since $u(\mathbf{k}_{\perp})$ is real, we get \begin{equation} \Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})\underset{k_{\perp}\xi\gg1}{\simeq} \Omega_{\rm B}(\mathbf{k}_{\perp}) L. \end{equation} \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{ThPhase_Shift_2.pdf} \caption{(a) Non-linear phase shift $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ and (b) relative fringe displacement $\Delta S(\Lambda)$ as functions of, respectively, the Bogoliubov wavenumber $k_{\perp}=|\mathbf{k}_{\perp}|$ and the Bogoliubov wavelength $\Lambda = 2 \pi / k_{\perp}$ for different values of the optical non-linearity $\Delta n$. (a) The phase shift $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ generally follows $\Omega_{\rm B}(\mathbf{k_{\perp}}) L$ on average (dashed lines), except for small $k_{\perp}$, where it saturates at a non-zero value for all values of $\Delta n \neq 0$. According to \eqref{LinearTrend}, the ${k}_{\perp}=0$ limit $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp}=0)$ is a growing function of $\Delta n$ and tends towards $\pi/2$ for large $\Delta n$. (b) The staircase structure of $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ translates into oscillations in $\Delta S(\Lambda)$, most visible for large interactions. At long $\Lambda$, $\Delta S(\Lambda)$ increases linearly according to \eqref{ShiftLinearTrend}. This trend is present even for weak interactions ($\Delta n \to 0$). Solid lines correspond to the full model and the black dashed-dotted line shows for comparison the displacement obtained from the geometric approach of \eqref{ShiftFaccio} with $\Delta n = 10^{-4}$.} \label{fig:ThPhaseShift} \end{figure} \noindent This limit exactly corresponds to the geometric model of \eqref{HPPhaseDiff} and correctly describes the transverse fringe displacement $\Delta S(\mathbf{k}_{\perp})$. However, this approximation is only valid in the parabolic dispersion limit at large momenta $k_{\perp}\xi\gg1$.
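As a sanity check of \eqref{HPPhaseLossless}, the following sketch (with assumed illustrative parameters, not the experimental ones) evaluates $\Phi_{\mathrm{NL}}=-\mathrm{arg}\,U$ and verifies that it reduces to the bare propagation phase $\Omega_{\rm B}L$ (modulo $2\pi$) in the free-particle regime:

```python
import numpy as np

# Minimal sanity check of Eq. (HPPhaseLossless); k0, Delta n and L are
# assumed illustrative values, NOT the experimental parameters.
k0, dn, L = 8.0e6, 1.0e-5, 7.5e-2

def phi_and_omega(kp):
    eps = kp**2 / (2 * k0)                    # k_perp^2 / (2 k0)
    Om = np.sqrt(eps * (eps + 2 * k0 * dn))   # Bogoliubov dispersion
    u = 0.5 * (np.sqrt(eps / Om) + np.sqrt(Om / eps))
    v = 0.5 * (np.sqrt(eps / Om) - np.sqrt(Om / eps))
    U = u**2 * np.exp(-1j * Om * L) - v**2 * np.exp(1j * Om * L)
    return -np.angle(U), Om                   # Phi_NL = -arg U

# Free-particle regime: v^2 is negligible, so Phi_NL reduces to the bare
# propagation phase Omega_B * L (modulo 2*pi).
phi, Om = phi_and_omega(2 * np.pi / 10e-6)    # short Bogoliubov wavelength
assert abs(np.angle(np.exp(1j * (phi - Om * L)))) < 1e-2
```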
In the phonon regime ($k_{\perp} \xi \ll 1$) where superfluidity is manifest, there are fundamental differences between the predictions of the geometrical approach and the full model, because the $v^2$ term cannot be neglected anymore in the second expression of \eqref{HPPhaseLossless}. This interference term leads to a correction to \eqref{HPPhaseDiff}, which we can expand analytically in the limit $k_{\perp}\xi\ll1$ to obtain, at leading order, \begin{equation} \Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})\underset{k_{\perp}\xi\ll1}{=}\arctan(2k_{0}\Delta nL)+O(k_{\perp}^{2}\xi^{2}). \label{LinearTrend} \end{equation} An essential feature of \eqref{LinearTrend} is that the non-linear phase difference $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ at the medium output converges towards a constant non-zero value for small $\mathbf{k}_{\perp}$. This can be clearly seen in Fig.~\ref{fig:ThPhaseShift}(a) for realistic experimental parameters. This non-zero value holds for any strength of the interactions $\Delta n$, and is therefore a general feature of paraxial fluids of light and a direct consequence of the interferences between Bogoliubov phonons. At large $\Delta n$, this offset saturates towards $\pi/2$. In between these two asymptotic limits, we also observe numerically a smooth staircase structure, which follows on average the trend of the geometric prediction \eqref{ShiftFaccio} in the large-$k_{\perp}$ limit (dashed lines in Fig.~\ref{fig:ThPhaseShift}(a)). This staircase structure becomes more and more visible as the optical non-linearity $\Delta n$ increases. This effect is less robust than the non-zero value of $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ in the small-$k_{\perp}$ limit described previously and does not hold for weak interactions $\Delta n$ (see the green curve of Fig.~\ref{fig:ThPhaseShift}(a)).
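The saturation of $\Phi_{\mathrm{NL}}$ at small $k_{\perp}$ can also be checked directly on \eqref{HPPhaseLossless}, without the expansion: in the following sketch (assumed illustrative parameters), the phase evaluated at two wavenumbers deep in the phonon regime is constant and non-zero:

```python
import numpy as np

# Saturation of Phi_NL deep in the phonon regime, evaluated directly from
# Eq. (HPPhaseLossless); parameters are assumed illustrative values.
k0, dn, L = 8.0e6, 1.0e-5, 7.5e-2

def phi_nl(kp):
    eps = kp**2 / (2 * k0)
    Om = np.sqrt(eps * (eps + 2 * k0 * dn))
    u = 0.5 * (np.sqrt(eps / Om) + np.sqrt(Om / eps))
    v = 0.5 * (np.sqrt(eps / Om) - np.sqrt(Om / eps))
    return -np.angle(u**2 * np.exp(-1j * Om * L) - v**2 * np.exp(1j * Om * L))

# Two wavenumbers well below the healing scale give the same non-zero phase:
# the k_perp -> 0 offset discussed in the text.
assert abs(phi_nl(300.0) - phi_nl(100.0)) < 1e-2
assert phi_nl(100.0) > 1.0
```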
Therefore, in order to evidence the presence of interferences between the Bogoliubov phonons, we will focus our attention on the phase difference at small $k_{\perp}$ by looking at the displacement $\Delta S$ as a function of $\Lambda = 2\pi/k_{\perp}$. In Fig.~\ref{fig:ThPhaseShift}(b), we present this displacement $\Delta S$ (accessible experimentally) as a function of the density modulation wavelength $\Lambda$. Because of the staircase structure of $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$, the displacement $\Delta S(\Lambda)$ oscillates at short $\Lambda$. Once again, this effect disappears for weak interactions $\Delta n$ (green curve of Fig.~\ref{fig:ThPhaseShift}(b)). On the contrary, the linear increase of $\Delta S(\Lambda)$ when $\Lambda\gg \xi $ is always present for all $\Delta n$ and can be computed from \eqref{LinearTrend} as: \begin{equation} \Delta S(\Lambda)\underset{\Lambda/\xi\gg1}{\simeq}\frac{\arctan(2k_{0}\Delta nL)}{2\pi}\Lambda. \label{ShiftLinearTrend} \end{equation} This expression significantly contrasts with \eqref{SaturationFaccio}, obtained within the geometrical approach. For comparison, the total displacement (\eqref{ShiftFaccio}) predicted by the geometric method is plotted for $\Delta n=10^{-4}$ as a black dashed-dotted line in Fig.~\ref{fig:ThPhaseShift}(b). As expected, the two descriptions match in the free-particle regime, but the linear increase (\eqref{ShiftLinearTrend}) at long $\Lambda$ is only present in the full model and is not predicted by \eqref{ShiftFaccio}. In the next section, we explore this configuration experimentally in a hot atomic vapor to compare and verify the predictions of the two models. We will show evidence of the interference between the counter-propagating Bogoliubov collective excitations at small ${k}_{\perp}$ and of their role in the measurement of the dispersion relation following \eqref{ShiftLinearTrend}.
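The contrast between \eqref{ShiftLinearTrend} and the geometric prediction can be illustrated numerically: with the assumed parameters below, the geometric displacement $\Omega_{\rm B}L/k_{\perp}$ saturates at long wavelength while the full-model displacement $\Phi_{\mathrm{NL}}/k_{\perp}$ keeps growing linearly with $\Lambda$:

```python
import numpy as np

# Geometric vs full-model displacement at long wavelength; k0, Delta n and L
# are assumed illustrative values, not the experimental ones.
k0, dn, L = 8.0e6, 1.0e-5, 7.5e-2

def omega_b(kp):
    eps = kp**2 / (2 * k0)
    return np.sqrt(eps * (eps + 2 * k0 * dn))

def shift_geo(lam):
    """Geometric displacement: phase velocity times effective time L."""
    kp = 2 * np.pi / lam
    return omega_b(kp) * L / kp

def shift_full(lam):
    """Full-model displacement Phi_NL / k_perp, from Eq. (HPPhaseLossless)."""
    kp = 2 * np.pi / lam
    eps = kp**2 / (2 * k0)
    Om = omega_b(kp)
    u = 0.5 * (np.sqrt(eps / Om) + np.sqrt(Om / eps))
    v = 0.5 * (np.sqrt(eps / Om) - np.sqrt(Om / eps))
    U = u**2 * np.exp(-1j * Om * L) - v**2 * np.exp(1j * Om * L)
    return -np.angle(U) / kp

# Quadrupling Lambda leaves the geometric displacement at its c_s*L plateau,
# while the full-model displacement grows by the same factor of four.
assert abs(shift_geo(8e-2) / shift_geo(2e-2) - 1) < 0.05
assert abs(shift_full(8e-2) / shift_full(2e-2) - 4.0) < 0.2
```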
\section{Experimental evidence of interferences between Bogoliubov excitations} \begin{figure}[ht] \centering \includegraphics[width=0.96\columnwidth]{ShiftExp.pdf} \caption{(a) Experimental setup. A laser is shaped with two cylindrical lenses (CL). It is split and recombined (BS1) within an unbalanced Mach-Zehnder (MZ) interferometer to create a low-contrast fringe pattern. Two sets of fringes (low and high intensity) are vertically shifted using a 90:10 beam-splitter (BS2) before going into an atomic vapor cell. The cell output is imaged on a camera after filtering in the Fourier space (FS). (b-c) Background-subtracted images reveal the small-amplitude density modulation which propagates on a low- (b) and a high-intensity (c) background fluid. The blue and red points in (d) are obtained by integrating the intensity in between the white dashed lines in (b) and (c) respectively. We first filter out the high-frequency noise (dashed lines) and then normalize the envelopes (solid lines). The shift is computed by measuring the nearest peak-to-peak distance between the solid lines. (e) Fourier-space image obtained by inserting a microscope objective (MO).} \label{fig:ShiftExp} \end{figure} \subsection{Experimental setup} Our experimental setup is sketched in figure~\ref{fig:ShiftExp}(a). A continuous-wave laser field at $780$~nm is elongated in the $x$ direction using a set of two cylindrical lenses. This cylindrical telescope is slightly defocused in order to loosely focus the beam onto the medium input facet. In this plane, the minor-axis width $\omega_{0,y}$ (radius at $1/e^{2}$) is $500~\mu$m while the major-axis width $\omega_{0,x}$ is $1$~cm. The Rayleigh length associated with $\omega_{0,y}$ is much longer than the cell length ($L=7.5$~cm). The cell is filled with an isotopically pure $^{85}$Rb vapor heated up to $400$~K.
The laser frequency is 2.6 GHz red-detuned with respect to the $F=3 \rightarrow F'$ transition of the $^{85}$Rb $D_{2}$ line, which ensures a linear refractive index close to 1 and a transmission larger than 60~$\%$. The weak intensity modulation pattern is created using an unbalanced Mach-Zehnder interferometer. The beam is then split in two with a $90\!:\!10$ ($R\!:\!T$) beam splitter and recombined with a vertical shift in order to have simultaneously a weak intensity modulation evolving on top of a high intensity beam forming the photon fluid. The medium exit plane is imaged on a CMOS camera with a $4f$ telescope. By inserting a microscope objective on the beam path, we can image the momentum distribution (inset (e) of figure~\ref{fig:ShiftExp}). Spatial Fourier filtering using a razor blade is conducted in this plane to filter out the conjugate beam that blurs the fringe pattern. As sketched in figure~\ref{fig:ShiftExp}, we perform the experiment simultaneously in two regimes: (i) low fluid density and (ii) high fluid density. The low density fluid corresponds to the case of a negligible non-linearity and provides a reference ($\Phi_{\mathrm{L}}$) for the fringe displacement. Comparing both patterns, we observe the fringe displacement and measure $\Delta S$. \subsection{Data analysis and results} \begin{figure}[] \centering \includegraphics[width=0.8\columnwidth]{MainShiftRes7.pdf} \caption{(a) Displacement $\Delta S$ as a function of the modulation wavelength $\Lambda$ for a fluid intensity of $1.3$~W.cm$^{-2}$. The laser is 2.6 GHz red-detuned with respect to the $F=3 \rightarrow F'$ transition of the $^{85}$Rb $D_{2}$ line and the cell length is 7.5~cm. The experimental data (blue circles) are fitted with the full theory (blue line) for $\Delta n = 1.3 \times 10^{-6}$. For comparison, the displacement obtained using~\eqref{ShiftFaccio} has been plotted (black dashed-dotted line).
(b) Slope of the asymptotic linear increase of $\Delta S$ at large $\Lambda$ as a function of the fluid intensity. (c) $\Delta n$ extracted with eq.~\eqref{LinearTrend} from the slope of $\Delta S$ at large $\Lambda$ as a function of the fluid intensity. The linear scaling of $\Delta n$ with $I$ confirms that we are not saturating the non-linearity, i.e. $\Delta n =n_2 I$ with $n_2=1\times10^{-10}$~m$^2$/W. } \label{fig:MainShiftRes} \end{figure} After removing the background intensity distribution to keep only the small density modulation on top of it, typical interference patterns obtained at the medium output plane are shown in figures~\ref{fig:ShiftExp}(b-c). The displacement $\Delta S$ between the fringes of the low intensity reference (b) and of the high intensity fluid (c) is clearly visible. We can note that the fringes are slightly bent in (c) because the intensity profile along the vertical axis is Gaussian and therefore the non-linear phase shift accumulated during the propagation depends on $y$. In order to avoid errors during the data analysis, we average the intensity profile over the central region in between the white dashed lines in figure~\ref{fig:ShiftExp}(b-c). After averaging, the resulting profiles are plotted in the (d) panel of Fig.~\ref{fig:ShiftExp}: the blue points are for the low intensity reference (b) while the red ones are for the high intensity non-linear case (c). The high frequency noise is filtered out and we remove the envelopes using a cubic spline interpolation method to normalize them and obtain the blue and red solid curves. The relative displacement is computed by averaging over several fringes the distance (black arrows) between the nearest maxima in the low intensity reference and in the high intensity case. In figure~\ref{fig:MainShiftRes}(a), we present the experimentally measured $\Delta S$ as a function of the modulation wavelength $\Lambda$. The probe power is chosen to ensure a modulation depth of less than $5\%$.
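The peak-to-peak analysis described above can be sketched on synthetic profiles (the smoothing and spline-normalization steps are omitted here; the fringe period, displacement and sampling are arbitrary choices, not the measured ones):

```python
import numpy as np

# Synthetic sketch of the peak-to-peak fringe analysis (assumed profiles,
# not data): two fringe patterns with a known relative displacement.
x = np.linspace(0.0, 2000e-6, 4000)        # transverse coordinate [m]
lam = 200e-6                               # fringe period [m]
delta_s = 35e-6                            # imposed displacement [m]
ref = 1 + 0.05 * np.cos(2 * np.pi * x / lam)               # low-intensity reference
high = 1 + 0.05 * np.cos(2 * np.pi * (x - delta_s) / lam)  # high-intensity fluid

def peak_positions(profile):
    """Positions of the local maxima of a smooth fringe profile."""
    i = np.where((profile[1:-1] >= profile[:-2]) &
                 (profile[1:-1] > profile[2:]))[0] + 1
    return x[i]

# For each reference peak, take the distance to the nearest peak of the
# shifted pattern and average over the fringes (the black arrows in (d)).
p_ref, p_high = peak_positions(ref), peak_positions(high)
shifts = [np.min(np.abs(p_high - p)) for p in p_ref]
measured = np.mean(shifts)
assert abs(measured - delta_s) < 2e-6      # recovers the imposed displacement
```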
The full model is shown as a blue solid line. For comparison, the geometrical model computed with~\eqref{ShiftFaccio} is plotted as a black dashed-dotted line. These experimental results are clear evidence that the geometrical model fails to describe the displacement $\Delta S$ at large $\Lambda$. Indeed, at large $\Lambda$, we observe a clear signature of the linear increase of $\Delta S$, as predicted by the full model. By including the interferences between elementary Bogoliubov excitations, the full model also allows us to predict the value of the slope as a function of the non-linear refractive index change $\Delta n$. To verify the consistency of our model, we repeated the measurement of $\Delta S$ for various field intensities $I$ and estimated $\Delta n$ from the theoretical predictions, using \eqref{ShiftLinearTrend}. An intriguing feature of this equation is the non-linear behavior of the phase shift and its saturation at large interaction $\Delta n$ (figure~\ref{fig:MainShiftRes}(b)). However, as visible in figure~\ref{fig:MainShiftRes}(c), the value of $\Delta n$ extracted from \eqref{ShiftLinearTrend} depends linearly on the background intensity $I$, as expected for a Kerr medium, which validates our experimental approach. \section{Interferences between Bogoliubov waves} \subsection{Is this interference effect robust with respect to corrections to the lossless local Kerr model?} Several non-linear media have been proposed and implemented for fluid of light experiments, including atomic vapors \cite{vsantic2018nonequilibrium, fontaine2019attenuation}, methanol \cite{vocke2016role}, photo-refractive crystals \cite{wan2007dispersive, michel2018superfluid, boughdad2019anisotropic} and nematic liquid crystals \cite{Ferreira2018}. In these systems, the microscopic origin of the light-matter interaction strongly differs and can impact the properties of these fluids of light.
To verify that these variations do not significantly change the long wavelength behavior of $\Delta S$, we numerically studied the dependence on four key parameters: (i) the losses $\alpha$, (ii) the width of the pump beam $w_{0,y}$, (iii) the non-locality and (iv) the saturation of the non-linear response. All the simulations have been performed using a second-order split-step method on the 2D non-linear Schr\"{o}dinger equation and a common set of parameters. The background intensity is set to $\rho_0=2.5 \times 10^{5}$ W/m$^{2}$, the linear index to $n=1$, and the non-linear index to $n_2=4 \times 10^{-11}$ m$^{2}$/W. In the lossless situation, the non-linear change of refractive index is equal to $\Delta n=1.0 \times 10^{-5}$. The simulation results are presented in figure~\ref{fig:ShiftVersus}. \begin{figure}[ht!] \centering \includegraphics[width=0.88\columnwidth]{ShiftVersus5.pdf} \caption{Numerical simulations (symbols) and analytical solutions (solid and dashed lines) of $\Delta S$ as a function of $\Lambda$. (a) Different cell transmissions $t$. The simulations and the theoretical model agree as long as the transmission remains large ($t > 0.5$). (b) Different background widths $w_{0,y}$. (c) Different non-local transport length scales $l_{b}$. The oscillations are smoothed by non-locality. In our system $l_b<10~\mu$m. (d) Different saturation intensities $\mathcal{I}_s$ of the non-linear Kerr interaction. Once again, the oscillations are smoothed by the saturation of the medium. For all the simulations, $\Delta n=1.0 \times 10^{-5}$.} \label{fig:ShiftVersus} \end{figure} In figure~\ref{fig:ShiftVersus}(a), the displacement $\Delta S$ is plotted for different cell transmissions $t = \exp(-\alpha L)$. The colored points stem from numerical simulations whereas the theoretical curves are plotted as black solid lines. A full derivation of the analytical model is given in the Supplementary Materials.
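For reference, a minimal one-dimensional version of the second-order (Strang) split-step scheme used for these simulations may be sketched as follows; the 1D reduction, the generic defocusing non-linearity $k_0 n_2 |\mathcal{E}|^2$ and the step sizes are simplifications of the actual 2D simulations:

```python
import numpy as np

# Minimal 1D second-order (Strang) split-step sketch of the lossless NLSE
#   i dE/dz = -(1/2k0) d^2E/dx^2 + k0*n2*|E|^2 E.
# Parameters and the 1D reduction are simplifying assumptions.
k0, n2 = 8.0e6, 4.0e-11
N, Lx = 256, 4e-3
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)      # angular spatial frequencies

rho0 = 2.5e5                                       # background intensity [W/m^2]
E = np.sqrt(rho0) * (1 + 0.01 * np.cos(2 * np.pi * x / 500e-6))  # weak modulation

dz, nz = 1e-4, 750                                 # step [m]; total z = 7.5 cm
kin = np.exp(-1j * kx**2 / (2 * k0) * dz / 2)      # half kinetic step (Fourier)
power0 = np.sum(np.abs(E)**2)
for _ in range(nz):
    E = np.fft.ifft(kin * np.fft.fft(E))                  # kinetic half-step
    E = E * np.exp(-1j * k0 * n2 * np.abs(E)**2 * dz)     # full non-linear step
    E = np.fft.ifft(kin * np.fft.fft(E))                  # kinetic half-step

# Lossless propagation conserves the total power.
assert abs(np.sum(np.abs(E)**2) / power0 - 1) < 1e-8
```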
Absorption smooths out the oscillations at small $\Lambda$, similarly to a reduction of the non-linear interactions $\Delta n$ as seen in Fig.~\ref{fig:ThPhaseShift}. However, the long-$\Lambda$ limit is qualitatively unchanged from the lossless case. The analytical predictions (dashed lines) give an accurate estimation of the long-$\Lambda$ slope for transmissions larger than 0.5. In figure~\ref{fig:ShiftVersus}(b), the effect of the finite beam width $w_{0,y}$ on the displacement $\Delta S$ is studied. We notice a reduction of the amplitude of the displacement oscillations when $w_{0,y}$ decreases. But as for absorption, this effect does not affect the general shape of the displacement curve and its large-$\Lambda$ linear trend. This can be understood intuitively: for a smaller beam width $w_{0,y}$, the Kerr self-defocusing effect increases and therefore the background density spreads faster in the transverse plane along the propagation. This results in a decrease of the beam intensity on the major axis during the propagation and a consequent reduction of the effective interaction $\Delta n$. In figure~\ref{fig:ShiftVersus}(c), the impact of non-locality is reported. The non-linear phase shift formula~\eqref{HPPhaseDiff} has been generalized using the non-local dispersion relation to take the ballistic transport of excited atoms into account in the theory (see Supplementary Materials for details). The theoretical predictions are plotted as black solid lines and match the simulations perfectly. The main effect here is more subtle than those of the losses or of the finite width of the beam. The slope of the linear trend at high $\Lambda$ remains unchanged but a significant modification of the displacement in the oscillating part is observed. This effect becomes significant for non-local ballistic length scales $l_{b}$ much longer than the typical ones of atomic vapors (typically, $l_{b} \approx 8~\mu$m at 400 K).
The situation is very different in the thermo-optic media considered in~\cite{Vocke2015}, where the non-local length is on the order of 100~$\mu$m \cite{vocke2016role} and is thus able to significantly modify the behavior of the displacement at small $\Lambda$. Finally, in figure~\ref{fig:ShiftVersus}(d), we have studied the impact of a saturation of the non-linearity. The interaction strength $\Delta n$ is replaced by $\Delta n\times \frac{1}{1+I/\mathcal{I}_s}$, where $\mathcal{I}_s$ is the saturation intensity. This model reproduces the saturation observed in atomic media and photorefractive crystals. Compared to losses, the finite beam width, and non-locality, the effect of saturation on the displacement is the most important, as it not only attenuates the oscillations at small $\Lambda$ but also modifies the slope at large $\Lambda$. This correction is a consequence of the sound velocity being reduced by a larger factor, to $c_s\times\frac{1}{(1+I/\mathcal{I}_s)^2}$. Nevertheless, saturation does not change the qualitative large-$\Lambda$ behavior of the displacement and, in particular, does not lead to the constant large-$\Lambda$ value predicted by the geometrical approach. All these simulations confirm that the corrections to the ideal lossless model are able to modify the behavior of $\Delta S$ at small $\Lambda$, but do not affect the linear trend at large $\Lambda$. The impact of the interferences between Bogoliubov modes is therefore robust and can thus be envisioned as a novel tool to probe the dispersion and the static structure factor of the photon fluid, in a similar way to what was done with atomic BECs \cite{shammass2012phonon}. In the last part of this work, we propose an explanation for the robustness of these interferences and for their importance in understanding the superfluid behavior, based on a universal mechanism known as the Sakharov oscillations \cite{sakharov1966initial,hung2013cosmology}.
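The effect of saturation on the large-$\Lambda$ slope can be illustrated by inserting the saturated interaction $\Delta n/(1+I/\mathcal{I}_s)$ into \eqref{ShiftLinearTrend} (the parameter values below are illustrative assumptions):

```python
import numpy as np

# Large-Lambda slope of Delta S with a saturated interaction strength;
# k0, L and Delta n are assumed illustrative values.
k0, L, dn = 8.0e6, 7.5e-2, 1.0e-5

def slope(dn_eff):
    """Slope of the linear trend, Eq. (ShiftLinearTrend)."""
    return np.arctan(2 * k0 * dn_eff * L) / (2 * np.pi)

I_over_Is = np.array([0.0, 0.5, 2.0])    # background intensity in units of I_s
slopes = slope(dn / (1 + I_over_Is))     # saturated interaction Delta n_eff
# Saturation lowers the slope but the linear trend itself survives.
assert np.all(np.diff(slopes) < 0) and np.all(slopes > 0)
```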
\vspace{-0.5cm} \subsection{Stimulated Sakharov-like oscillations} The Bogoliubov excitation (\eqref{Fluctuation}) generated at the entrance of the non-linear medium consists of a superposition of counter-propagating plane waves in the $\mathbf{r}_{\perp}$ plane with opposite wavevectors $\mathbf{k}_{\perp}$ and $-\mathbf{k}_{\perp}$. These Bogoliubov components are simultaneously generated at the medium entrance and oscillate at the respective angular frequencies $\Omega_{\rm B}(\mathbf{k}_{\perp})$ and $-\Omega_{\rm B}(\mathbf{k}_{\perp})$ along the propagation axis, which is analogous to time. As a consequence, at a given effective time $z$, these components will have acquired a relative phase difference of $2\Omega_{\rm B}(\mathbf{k}_{\perp})z$. Interestingly, this behavior is very similar to the one predicted for the Sakharov oscillations in cosmology \cite{sakharov1966initial,cosmobook} and can be understood in terms of the interference between the counter-propagating phonons that are spontaneously generated after a quantum quench \cite{hung2013cosmology, Martone2018, Robertson2017}. Here, we draw the analogy and consider our experimental observations as a stimulated analogue of the Sakharov-like oscillations, obtained by seeding phonons in the $+\mathbf{k}_{\perp}$ mode. In paraxial fluids of light experiments, we only have access to the intensity at $z=L$ and not inside the medium. Therefore, we have solved numerically the non-linear Schr\"{o}dinger equation (\eqref{NLSE}) and computed the intensity of the total electric field inside the non-linear medium at every transverse plane along the $z$ axis to evidence the stimulated Sakharov-like oscillations in this optical system. In figure~\ref{fig:Contrast}, we present the intensity profiles along $x$ both in the phonon regime ($k_{\perp}\xi<1$) in panel (a) and in the free-particle regime ($k_{\perp}\xi\geq1$) in panel (b). In this figure, the background fluid density has been subtracted.
Two remarkable observations can be highlighted in figure~\ref{fig:Contrast}. First, constructive (maximum contrast) or destructive (minimum contrast) interferences between the counter-propagating Bogoliubov waves are clearly visible in the transverse direction. Along $z$, constructive interferences are located at $\Omega_{\rm B}(\mathbf{k}_{\perp}) z = p \pi$, with integer $p\geqslant1$. Conversely, when $\Omega_{\rm B}(\mathbf{k}_{\perp}) z = (p+1/2) \pi$, the Bogoliubov modes interfere destructively and the contrast is minimum. Interestingly, these interference patterns can also be observed at a fixed effective time (e.g. $z=L$) by changing the value of $\mathbf{k}_{\perp}$. In figure~\ref{fig:Contrast}(c), we have extracted the dispersion relation using this approach. At fixed $z=L$, we report each value of $\mathbf{k}_{\perp}$ leading to a visibility maximum, as we know that $\Omega_{\rm B}(\mathbf{k}_{\perp}) = p \pi/L$ (green diamonds in figure~\ref{fig:Contrast}(c)). To increase the resolution of the reconstruction, we can apply the same procedure to the visibility minima (black circles in figure~\ref{fig:Contrast}(c)) and obtain a sampling of the dispersion for $\Omega_{\rm B}(\mathbf{k}_{\perp}) = (p+1/2) \pi/L$. Secondly, we can notice that the reduction in the contrast of the interference fringes observed when the two Bogoliubov components destructively interfere is more pronounced at low (Fig.~\ref{fig:Contrast}(a)) than at large (Fig.~\ref{fig:Contrast}(b)) wavevectors. This effect is not present in the spontaneous Sakharov oscillations triggered by zero-point fluctuations \cite{hung2013cosmology} and is a direct consequence of the stimulation of the process by the classical incident field in the $+\mathbf{k}_{\perp}$ mode.
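This sampling of the dispersion relation amounts to inverting $\Omega_{\rm B}(\mathbf{k}_{\perp})$ at the discrete values $p\pi/L$ and $(p+1/2)\pi/L$; a minimal sketch with assumed parameters:

```python
import numpy as np

# Reconstruction of the dispersion from visibility extrema at z = L: a contrast
# maximum at k_p implies Omega_B(k_p) = p*pi/L, a minimum (p + 1/2)*pi/L.
# k0, Delta n and L are assumed illustrative values.
k0, dn, L = 8.0e6, 1.0e-5, 7.5e-2

def omega_b(kp):
    eps = kp**2 / (2 * k0)
    return np.sqrt(eps * (eps + 2 * k0 * dn))

def k_of_omega(Om):
    """Invert Omega_B: solve eps^2 + 2*k0*dn*eps - Om^2 = 0, eps = k^2/(2k0)."""
    eps = -k0 * dn + np.sqrt((k0 * dn)**2 + Om**2)
    return np.sqrt(2 * k0 * eps)

p = np.arange(1, 6)
k_max = k_of_omega(p * np.pi / L)             # wavenumbers of visibility maxima
k_min = k_of_omega((p + 0.5) * np.pi / L)     # wavenumbers of visibility minima
assert np.allclose(omega_b(k_max) * L, p * np.pi)
assert np.allclose(omega_b(k_min) * L, (p + 0.5) * np.pi)
```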
Indeed, by seeding the process we break the symmetry between the $+\mathbf{k}_{\perp}$ and $-\mathbf{k}_{\perp}$ modes, so the visibility reduction can be understood by comparing $|u^2(\mathbf{k}_{\perp})|$ and $|v^2(\mathbf{k}_{\perp})|$ using \eqref{DispersionRelation}. When $k_{\perp}\xi\gg 1$, $|v^2(\mathbf{k}_{\perp})|$ becomes small compared to $|u^2(\mathbf{k}_{\perp})|$ and therefore the interference contrast is reduced. As a consequence, we can see in figure~\ref{fig:Contrast} that the trajectory of a bright fringe (black dashed line) is much less deformed with respect to the speed of sound propagation (blue solid line) for $k_{\perp}\xi=1$ (panel (b)) than for $k_{\perp}\xi= 0.5$ (panel (a)), where a staircase-like structure is apparent. This exemplifies once again why the geometrical approach is a good approximation only in the free-particle regime ($k_{\perp}\xi> 1$). \begin{figure}[t!] \centering \includegraphics[width=0.95\columnwidth]{IAlongZ4light2.pdf} \caption{Evolution along the $z$ axis of the transverse field intensity in a given $y$ plane for (a) $k_{\perp}\xi=0.5$ and (b) $k_{\perp}\xi=1$. The background intensity is subtracted in both images. The black dashed curves follow the center of a bright fringe. The blue solid line is the trajectory of a Bogoliubov mode at the speed of sound. (c) Visibility of the interference fringes at the output plane $z=L$ as a function of $k_{\perp}$ (solid blue line). Visibility maxima are shown by green diamonds and minima by black circles. The dispersion relation (solid red line -- right axis) is reconstructed using a sampling based on the position of the maxima ($\Omega_{\rm B}(\mathbf{k}_{\perp}) = p \pi/L$) and minima ($\Omega_{\rm B}(\mathbf{k}_{\perp}) = (p+1/2) \pi/L$). Here $\Delta n=1.0 \times 10^{-5}$.} \label{fig:Contrast} \end{figure} \section*{Conclusion} In this work, we have studied the Bogoliubov excitations of a photon superfluid.
We have experimentally demonstrated a previously undetected phenomenon whereby the displacement of plane-wave excitations in the fluid does not tend to the geometric prediction, namely the product of the sound velocity $c_s$ and the effective time $L$, but keeps growing linearly with the excitation wavelength. This is shown to be a direct consequence of the interference between counter-propagating Bogoliubov modes, which are generated at an interaction quench and had so far only been observed in atomic superfluids \cite{cheneau2012light}. These interferences can also be interpreted as stimulated Sakharov oscillations \cite{hung2013cosmology}, i.e. an analogue of the fluctuations imprinted in the primordial Universe and visible as oscillations in the cosmic microwave background power spectrum \cite{sakharov1966initial,cosmobook}. These results show that these interferences are an essential element for an accurate description of the dynamics of excitations in superfluids of light. They bring a novel understanding of superfluidity for paraxial fluids of light and open exciting perspectives for studying quantum effects in these systems, including quantum depletion and the entanglement of phonons in Sakharov oscillations. \vspace{-0.5cm} \section*{Funding Information} This work has received funding from the French ANR grant (C-FLigHT 138678, QFL) and from the European Union’s Horizon 2020 Research and Innovation Program under grant agreement No 820392 (PhoQuS). QG and AB thank the Institut Universitaire de France (IUF) for support. IC acknowledges support from the Provincia Autonoma di Trento. \section*{Supplementary Materials} \subsection*{Experimental alignment procedure} In order to accurately measure the displacement $\Delta S$, one needs to precisely align the reference beam with respect to the high power one.
The alignment procedure is as follows: \begin{itemize} \item[(1)] We first make sure that both background beams (probe off) roughly propagate with the same transverse wave-vector and are correctly positioned one above the other (their respective centers should lie on the same vertical axis). \item[(2)] We then switch the probe beam on. The next step is to align the interference fringes of the lower and upper interference patterns. We start by removing the cell and making sure that bright fringes on the bottom face bright fringes on the top. Of course, by doing so, the optical axes of the lower and upper beams are not parallel anymore. We then switch to $k$-space, bring the backgrounds back to the initial position ($k_{\perp} = 0$) and repeat this procedure iteratively (beam walking). We finally check that for every transverse wave-vector $k_{\perp}$ the interference fringes remain aligned before putting the cell back on the beam path. \end{itemize} \subsection*{Photon absorption} Photon absorption is described in \eqref{NLSE} by the term proportional to $\alpha\geqslant0$. When $\alpha\neq0$, the $\mathbf{r}_{\perp}$-independent electric-field envelope $\mathcal{E}_{0}$ and its linearized fluctuations $\delta\mathcal{E}$ acquire the following $z$-dependences: \begin{align} \label{BackgroundAbsorption} \mathcal{E}_{0}(z)&\left.=\sqrt{\rho_{0}}e^{-\alpha z/2-ik_{0}\Delta n(1-e^{-\alpha z})/\alpha},\right. \\ \notag \delta\mathcal{E}(\mathbf{r}_{\perp},z)&\left.=e^{-ik_{0}\Delta n(1-e^{-\alpha z})/\alpha}\right. \\ \notag &\left.\hphantom{=}\times\int\frac{d^{2}\mathbf{k}_{\perp}}{(2\pi)^{2}}\Big\{u(\mathbf{k}_{\perp},z)b_{\mathbf{k}_{\perp}}e^{i[\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}-\int_{0}^{z}dz'\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},z')]}\right.
\\ \label{FluctuationsAbsorption} &\left.\hphantom{=}+v^{\ast}(\mathbf{k}_{\perp},z)b_{\mathbf{k}_{\perp}}^{\ast}e^{-i[\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}-\int_{0}^{z}dz'\Omega_{\mathrm{B}}^{\ast}(\mathbf{k}_{\perp},z')]}\Big\}.\right. \end{align} In Eqs.~(\ref{BackgroundAbsorption}) and (\ref{FluctuationsAbsorption}), $\rho_{0}$ is the density of the paraxial fluid of light at $z=0$ and $\Delta n=g\rho_{0}/k_{0}$ is the corresponding non-linearity. We treat the $z$-dependence of the Bogoliubov spectrum $\Omega_{\mathrm{B}}$ and of the Bogoliubov amplitudes $u$ and $v$ in the adiabatic-evolution approximation \cite{born1928beweis, Larre2017}. Searching for real-valued $u$ and $v$ such that $u^{2}-v^{2}=1$ for all $z$, this gives \begin{align} \label{BogoliubovSpectrumAbsorption} \Omega_{\mathrm{B}}(\mathbf{k}_{\perp},z)&=\sqrt{\frac{k_{\perp}^{2}}{2k_{0}}\bigg(\frac{k_{\perp}^{2}}{2k_{0}}+2k_{0}\Delta ne^{-\alpha z}\bigg)}-\frac{i\alpha}{2}, \\ \label{BogoliubovAmplitudesAbsorption} u(\mathbf{k}_{\perp},z)\pm v(\mathbf{k}_{\perp},z)&=\bigg\{\frac{k_{\perp}^{2}}{2k_{0}}\bigg/\mathrm{Re}[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},z)]\bigg\}^{\pm\frac{1}{2}}. \end{align} All the observables computed in this paper rely on the input-output relation (\ref{InputOutputRelation}), which also holds when $\alpha\neq0$ provided (\ref{U}) and (\ref{V}) are respectively replaced with \begin{align} \notag U(\mathbf{k}_{\perp})&\left.=u(\mathbf{k}_{\perp},0)u(\mathbf{k}_{\perp},L)e^{-i\int_{0}^{L}dz\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},z)}\right. \\ \label{UAbsorption} &\left.\hphantom{=}-v(\mathbf{k}_{\perp},0)v(\mathbf{k}_{\perp},L)e^{i\int_{0}^{L}dz\Omega_{\mathrm{B}}^{\ast}(\mathbf{k}_{\perp},z)},\right. \\ \notag V(\mathbf{k}_{\perp})&\left.=u(\mathbf{k}_{\perp},0)v(\mathbf{k}_{\perp},L)e^{-i\int_{0}^{L}dz\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},z)}\right. 
\\ \label{VAbsorption} &\left.\hphantom{=}-v(\mathbf{k}_{\perp},0)u(\mathbf{k}_{\perp},L)e^{i\int_{0}^{L}dz\Omega_{\mathrm{B}}^{\ast}(\mathbf{k}_{\perp},z)}.\right. \end{align} For example, the non-linear phase $\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})$ expected for $\alpha\neq0$ reads \begin{align} \notag &\left.\Phi_{\mathrm{NL}}(\mathbf{k}_{\perp})\right. \\ \notag &\left.\quad=\arctan\!\bigg(\frac{[k_{\perp}^{2}/(2k_{0})]^{2}+\mathrm{Re}[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},0)]\mathrm{Re}[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},L)]}{k_{\perp}^{2}/(2k_{0})\times\{\mathrm{Re}[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},0)]+\mathrm{Re}[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},L)]\}}\right. \\ \label{NLPhaseAbsorption} &\left.\quad\hphantom{=}\times\tan\!\bigg\{\int_{0}^{L}dz\mathrm{Re}[\Omega_{\mathrm{B}}(\mathbf{k}_{\perp},z)]\bigg\}\bigg),\right. \end{align} from which we infer the following linear trend of the transverse displacement $\Delta S(\Lambda)$ in the long-wavelength, superfluid regime: \begin{equation} \label{LinearTrendAbsorption} \Delta S(\Lambda)\simeq\frac{1}{2\pi}\arctan\!\bigg(2k_{0}\Delta nL\times\frac{2}{\alpha L}\frac{1-e^{-\alpha L/2}}{1+e^{-\alpha L/2}}\bigg)\Lambda. \end{equation} \subsection*{Non-locality model} So far, we have assumed that the non-linear change of refractive index $\Delta n(\mathbf{r}_{\perp})$ at a given position $\mathbf{r}_{\perp}$ in the transverse plane only depends on the laser intensity at this point, $\propto|\mathcal{E}(\mathbf{r}_{\perp})|^{2}$, and not on the intensity nearby. However, such a local dielectric response may not correctly describe hot atomic vapors, in which the ballistic transport of excited atoms on large length scales induces non-locality \cite{Skupin2007}. Indeed, the coherence between the ground and excited states of an atom, from which the medium non-linear response arises, is more likely to be transported away in hot vapors, as the atomic motion is more significant at large temperatures. 
Following \cite{Skupin2007}, we can express the non-local non-linear change of refractive index $\Delta n^{\rm nl}(\mathbf{r}_{\perp})$ as follows: \begin{equation} \Delta n^{\rm nl}(\mathbf{r}_{\perp}) = n_{2} \int d^{2}\mathbf{r}_{\perp}' G_{\rm b} (\mathbf{r}_{\perp}-\mathbf{r}'_{\perp}) |\mathcal{E}(\mathbf{r}'_{\perp})|^{2}, \end{equation} where $G_{\rm b}$ stands for the steady-state ballistic response function. By using the convolution theorem, we can then easily rewrite the Bogoliubov dispersion relation (\ref{DispersionRelation}) in the non-local case: \begin{equation} \label{NLDispRelation} \Omega_{\rm B}^{\mathrm{nl}}(\mathbf{k}_{\perp}) = \sqrt{\frac{k_{\perp}^{2}}{2 k_{0}} \bigg[ \frac{k_{\perp}^{2}}{2 k_{0}} + 2 k_{0} |n_{2}| \rho_{0} \widetilde{G}_{\rm b}(\mathbf{k}_{\perp}) \bigg]}, \end{equation} where $\widetilde{G}_{\rm b}$ is the Fourier transform of $G_{\rm b}$. By introducing the ballistic transport length scale $\ell_{\rm b} = u\tau$---where $u = \sqrt{2 k_{\rm B} T/ m}$ is the most probable speed of the atoms in the transverse plane (at the vapor temperature $T$) and $\tau = 2/\gamma$ is the characteristic decoherence time ---and by calling $\mathrm{erfc}$ the complementary error function, $\widetilde{G}_{\rm b}$ can be written in the following way: \begin{equation} \widetilde{G}_{\rm b}(\mathbf{k}_{\perp}) = \sqrt{\pi} \frac{e^{1/(k_{\perp}\ell_{\rm b})^2}}{k_{\perp} \ell_{\rm b}} \mathrm{erfc}\bigg(\frac{1}{k_{\perp}\ell_{\rm b}}\bigg). \end{equation} The solid lines in Fig.~\ref{fig:ShiftVersus}(c) have been obtained by plugging \eqref{NLDispRelation} into \eqref{HPPhaseLossless}. In the experiment, the vapor temperature was 400 K, leading to a non-local ballistic length $\ell_{\rm b}$ of about 8 $\mu$m. As can be seen in Fig.~\ref{fig:ShiftVersus}(c), non-local effects do not significantly affect the shift $\Delta S$ for such a small value of $\ell_{\rm b}$.
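The non-local dispersion relation \eqref{NLDispRelation} is straightforward to evaluate numerically. The following Python sketch (not part of the paper; parameter values are illustrative) computes $\widetilde{G}_{\rm b}$ and $\Omega_{\rm B}^{\mathrm{nl}}$, using the scaled complementary error function $\mathrm{erfcx}(x)=e^{x^{2}}\mathrm{erfc}(x)$ to avoid overflow when $k_{\perp}\ell_{\rm b}$ is small:

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x)

def G_b_tilde(k_perp, ell_b):
    """Fourier transform of the ballistic response function G_b."""
    x = 1.0 / (k_perp * ell_b)
    return np.sqrt(np.pi) * x * erfcx(x)  # tends to 1 in the local limit k*ell_b -> 0

def omega_B_nl(k_perp, k0, delta_n, ell_b):
    """Non-local Bogoliubov dispersion of Eq. (NLDispRelation)."""
    kin = k_perp**2 / (2.0 * k0)
    return np.sqrt(kin * (kin + 2.0 * k0 * delta_n * G_b_tilde(k_perp, ell_b)))

# illustrative values: lambda_0 = 780 nm, |n_2| rho_0 = 1e-6, ell_b = 8 microns
k0 = 2 * np.pi / 780e-9
print(omega_B_nl(np.array([1e4, 5e4]), k0, 1e-6, 8e-6))
```

In the local limit $\ell_{\rm b}\to0$, $\widetilde{G}_{\rm b}\to1$ and the sketch reduces to the usual Bogoliubov dispersion \eqref{DispersionRelation}; at finite $\ell_{\rm b}$, non-locality weakens the interaction term at large $k_{\perp}$.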
\section*{Summary} \begin{enumerate} \item Ecologists use distance sampling to estimate the abundance of plants and animals while correcting for undetected individuals. By design, data collection is simplified by requiring only that the distances from a transect to the detected individuals be recorded. Compared to traditional design-based methods that require restrictive assumptions and limit the use of distance sampling data, model-based approaches enable broader applications such as spatial prediction, inferring species-habitat relationships, unbiased estimation from preferentially sampled transects, and integration into multi-type data models. Unfortunately, model-based approaches require the exact location of each detected individual in order to incorporate environmental and habitat characteristics as predictor variables. \item Using a hierarchical specification, we modified model-based methods for distance sampling data by including a probability distribution that accounts for the location uncertainty generated when only the distances are recorded. We tested and demonstrated our method using a simulation experiment and by modeling the habitat use of Dickcissels (\textit{Spiza americana}) using distance sampling data collected from the Konza Prairie in Kansas, USA. \item Our results showed that ignoring location uncertainty can result in biased coefficient estimates and predictions. However, accounting for location uncertainty remedies the issue and results in reliable inference and prediction. \item Location uncertainty is difficult to eliminate when collecting some types of ecological data. Like other types of measurement error, hierarchical models can accommodate the data collection process, thereby enabling reliable inference. Our approach is a significant advancement for the analysis of distance sampling data because it remedies the deleterious effects of location uncertainty and requires only that distances be recorded.
In turn, this enables historical distance sampling data sets to be compatible with modern data collection and modeling practices.\bigskip{} \end{enumerate} \begin{doublespace} \noindent \textbf{\textit{\emph{Key-words: }}}ecological fallacy, hierarchical model, integrated population model, point process, resource selection, species distribution model \vspace{-1cm} \end{doublespace} \section*{Introduction} Distance sampling has been widely used for nearly half a century to estimate the abundance of plants and animals. This method involves one or more observers recording the distances from point or line transects to detected individuals (\citealt{burnham1980estimation,buckland2001introduction}). Early statistical methods for the analysis of distance sampling data used design-based estimators that accounted for errors in detection. This resulted in a hybrid analysis, involving model-based methods used to account for errors in detection and design-based methods used to estimate abundance (\citealt{buckland2016model}). More recently, fully model-based approaches have been developed to enable spatial prediction, statistical inference regarding species-habitat relationships, and unbiased estimation of abundance from point and line transects that are placed non-randomly (\citealt{stoyan1982remark,hogmander1991random,hedley2004spatial,johnson2010model,miller2013spatial}). Current areas of research include data assimilation, fusion, integration, or reconciliation, requiring the development of joint models that combine multiple types of data. Such recent developments include integrated species distribution models that incorporate distance sampling data and presence-only data collected by citizen scientists (\citeauthor{Fletcher2019} \textit{in press}). Spatially-explicit models that link abundance to environmental and habitat characteristics are used in many areas of ecological research.
For example, presence-absence, count, and presence-only data enable spatial prediction of species distributions (\citealt{elith2009species,hefleyHSDM}). Other examples include integrated species distribution models used to predict abundance and occupancy with higher accuracy by combining multiple types of data (e.g., \citealt{williams2017integrated}; \citeauthor{Fletcher2019} \textit{in press}). A common theme among these approaches is that the location of the individual is conceptualized as a point in geographic space where environmental conditions and habitat characteristics are measured (\citealt{hefleyHSDM,kery2015applied,milller2019}). Those location-specific conditions and characteristics (hereafter ``predictor variables'') are used to specify an intensity function, which enables statistical inference regarding species-habitat relationships and provides estimates of abundance and occupancy that can be mapped at any spatial resolution. This framework relies on characterizing the distribution of abundance as a spatial point process, which is the same approach used to develop models for distance sampling data (\citealt{stoyan1982remark,hogmander1991random,hedley2004spatial,johnson2010model,miller2013spatial}). Often ecologists do not have the exact locations of individuals. For example, exact locations are unrecoverable from distance sampling data collected along line transects unless auxiliary information such as the location of the observer at the time of detection is recorded. Regardless of the mechanisms that obscure the exact locations, this uncertainty limits the usefulness of the data because values of the predictor variables cannot be obtained. For example, if a distance sampling data set does not include the exact locations, then analysis is restricted to models that include only the predictor variables that are constant for all individuals detected from a given transect (\citealt{johnson2010model,buckland2016model}).
In practice, this leads to model misspecification and lower predictive accuracy. Sometimes researchers attempt to mitigate this problem by using surrogate predictor variables, such as the habitat characteristics at a convenient location (e.g., the center of the transect) or the average value of the predictor variable calculated from an area within an arbitrary distance of the transect line or point. Use of surrogate predictor variables can also bias parameter estimates and predictions. In some cases, the bias can invert the inferred relationship between predictor variables and abundance (\citealt{hefley2014correction,hefley2017bias,walker2018bias}; \citeauthor{walker2019bias} \textit{under revision}). To eliminate these issues, we present a model-based approach for distance sampling data that can be used when the location of individuals is uncertain. Our approach enables the same inference as model-based approaches requiring exact locations and can be incorporated into integrated data models that are based on an underlying point process (e.g., \citeauthor{Fletcher2019} \textit{in press}). Our approach relies on a hierarchical modeling framework, but results in relatively simple marginalized distributions that can be used for efficient Bayesian or likelihood-based estimation and inference. Using simulated data, we evaluate the ability of our method to account for location uncertainty and compare it to existing approaches commonly used in practice. Finally, we demonstrate our method using line transect data to estimate habitat use of a grassland bird species, the Dickcissel. \begin{spacing}{1.9} \section*{Materials and methods} \end{spacing} \begin{spacing}{1.9} \subsection*{MODEL-BASED DISTANCE SAMPLING} \end{spacing} \noindent A common practice when constructing statistical models is to choose a probability distribution that matches important characteristics of the data.
For example, if the data are counts, then a statistical model that assumes a Poisson distribution might be used. Counts must be non-negative integers and the Poisson distribution is capable of generating non-negative integers (i.e., the support of the data and probability distribution match). As a result, a statistical model that is constructed using a Poisson distribution has the potential to have generated the observed data. Adhering to this principle results in generative statistical models that capture important characteristics of the process (e.g., predicted counts that are always $\geq0$), which can be important for interpretation and model checking (\citealt{conn2018guide,gelfand2018bayesian}). When constructing a statistical model for distance sampling data, probability distributions should match the following characteristics of the data: 1) the number and locations of individuals are random variables; and 2) the locations of individuals exist in continuous geographic space. In what follows, we use the term \textquotedblleft continuous geographic space\textquotedblright{} to describe spatial areas that are defined as polygons and contain an infinite number of possible locations (points) within the boundary of each polygon. Researchers have developed model-based approaches for the spatial analysis of distance sampling data (e.g., \citealt{miller2013spatial,buckland2016model}), but existing approaches do not account for location uncertainty except in the case where the distribution of individuals is spatially uniform (e.g., \citealt{borchers2015unifying}). Our approach builds upon one of the most common model-based methods for the spatial analysis of distance sampling data that uses an inhomogeneous Poisson point process (IPPP) distribution, which allows for heterogeneity in the spatial distribution of individuals.
In what follows, we review previously developed modeling approaches based on the IPPP distribution and then extend this model to account for location uncertainty. The IPPP distribution describes the random number and locations of individuals within a continuous geographic space. The IPPP is constructed by assuming the spatial distribution of individuals is explained by an intensity function, $\lambda(\mathbf{s})$, where $\mathbf{s}$ is a vector that contains the coordinates of a single location contained within the study area $\mathcal{S}$. The intensity function is commonly specified as \begin{equation} \text{log}(\lambda(\mathbf{s}))=\beta_{0}+\boldsymbol{\mathbf{x}}(\mathbf{s})^{'}\boldsymbol{\beta}\:, \end{equation} where $\beta_{0}$ is the intercept, $\boldsymbol{\mathbf{x}}(\mathbf{s})$ is a $p\times1$ vector that contains predictor variables at location $\mathbf{s}$, and $\boldsymbol{\beta}\equiv(\beta_{1},...,\beta_{p})^{'}$ is a vector of regression coefficients. Estimating the regression coefficients from distance sampling data enables inference regarding species-habitat relationships. An important property of the IPPP distribution is that estimates of abundance can be obtained for any geographic region that is contained within the study area. More precisely, for any sub-region $\mathcal{A}$ contained within the study area $\mathcal{S}$, an estimate of abundance is \begin{equation} \bar{\lambda}=\intop_{\mathcal{A}}\lambda(\mathbf{s})d\mathbf{s}\:, \end{equation} which is referred to as the integrated intensity function. Clearly, accurate estimation of abundance requires reliable estimation of the intensity function ($\lambda(\mathbf{s})$), which depends on the intercept ($\beta_{0}$), regression coefficients ($\boldsymbol{\beta}$), and predictor variables ($\boldsymbol{\mathbf{x}}(\mathbf{s})$).
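To make Eqs. 1\textendash 2 concrete, the sketch below (illustrative, not the authors' R code from the appendices) evaluates a single-predictor log-linear intensity on the unit square, with the hypothetical predictor $x(\mathbf{s})$ taken to be the first coordinate of $\mathbf{s}$, and approximates the integrated intensity by a Riemann sum over a regular grid:

```python
import numpy as np

def intensity(x, beta0, beta1):
    # Eq. 1 in the single-predictor case: log(lambda(s)) = beta0 + beta1 * x(s)
    return np.exp(beta0 + beta1 * x)

# regular grid of points covering the unit-square study area S
n = 400
s1, s2 = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
x_s = s1  # hypothetical predictor: x(s) = first coordinate of s

# Eq. 2 with A = S: expected abundance, approximated as |A| * mean(lambda)
lam = intensity(x_s, beta0=9.0, beta1=1.0)
abundance = 1.0 * lam.mean()
```

For this particular intensity, Eq. 2 has the closed form $e^{\beta_{0}}(e^{\beta_{1}}-1)$, which the grid approximation reproduces to well under one percent.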
As with traditional distance sampling methods, the IPPP can incorporate a detection function, which we denote $q(\mathbf{s})$, where $q(\cdot)$ is the usual detection function (e.g., half-normal function) that depends on the location $\mathbf{s}$ by way of the distance between an individual and the point or line transect. Employing the notation from above, the probability distribution function for the IPPP is \begin{equation} [\mathbf{z}_{1},\mathbf{z}_{2},...,\mathbf{z}_{n},n|\lambda(\mathbf{s}),q(\mathbf{s})]=e^{-\intop_{\mathcal{S}}\lambda(\mathbf{s})q(\mathbf{s})d\mathbf{s}}\prod_{i=1}^{n}\lambda(\mathbf{z}_{i})q(\mathbf{z}_{i})\:, \end{equation} where $\mathbf{z}_{1},\mathbf{z}_{2},...,\mathbf{z}_{n}$ are the coordinate vectors (i.e., exact locations) of the $n$ detected individuals (\citealt{johnson2010model}). The product of $\lambda(\mathbf{s})$ and $q(\mathbf{s})$ is referred to as the thinned intensity function (\citealt{cressie1993statistics}, p. 689). The bracket notation $[\cdot]$, used on the left hand side of Eq. 3, represents a probability distribution. Using bracket notation, $[y,z]$ is a joint distribution where $y$ and $z$ are the random variables, and $[y|z]$ is a conditional distribution where $y$ is the random variable given $z$. The marginal distribution of $y$ can be obtained by ``integrating out'' $z$ (i.e., $[y]=\int[y,z]dz$). When expressed as a likelihood function, Eq. 3 can be used to estimate parameters associated with the intensity and detection functions. For example, using Eq. 3 as a likelihood function facilitates estimation of the regression coefficients, $\boldsymbol{\beta}$, from Eq. 1 and enables inference regarding species-habitat relationships. Evaluation of the likelihood function, however, requires the exact locations of all $n$ detected individuals.
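As a concrete (hypothetical) illustration of using Eq. 3 as a likelihood, the sketch below evaluates the IPPP log-likelihood for a single predictor with a half-normal detection function, approximating the integral over $\mathcal{S}$ by a sum over quadrature points; all names and inputs are illustrative:

```python
import numpy as np

def half_normal(dist, sigma):
    # detection function q(s), depending on s only through the transect distance
    return np.exp(-(dist / sigma) ** 2)

def ippp_loglik(beta0, beta1, sigma, det_x, det_dist, grid_x, grid_dist, cell_area):
    """Log of Eq. 3: log thinned intensity summed over the n detections,
    minus the quadrature approximation of the integral of lambda(s)q(s) over S."""
    log_lam_det = beta0 + beta1 * det_x + np.log(half_normal(det_dist, sigma))
    integral = cell_area * np.sum(np.exp(beta0 + beta1 * grid_x)
                                  * half_normal(grid_dist, sigma))
    return np.sum(log_lam_det) - integral
```

Here `det_x` and `det_dist` hold the predictor value and transect distance at each exact detected location $\mathbf{z}_{i}$, while `grid_x` and `grid_dist` hold the same quantities at the quadrature points; maximizing this function over $(\beta_{0},\beta_{1},\sigma)$ is the estimation step described in the text.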
\vspace{0.66cm} \begin{spacing}{1.9} \subsection*{ACCOUNTING FOR UNKNOWN LOCATION} \end{spacing} \noindent Parameter estimation and statistical inference using the IPPP distribution requires the exact location of each detected individual because the likelihood function from Eq. 3 assumes that $\mathbf{z}_{i}$ is recorded. When collecting distance sampling data, the exact location of each detected individual is usually not recorded, which generates location uncertainty. Below, we extend the IPPP distribution so that the model can be implemented when the location of individuals is uncertain. Our extension could be viewed as a special case of the unified approach of \citet{borchers2015unifying}; however, the authors did not present how models that involve nonuniform distributions of plants and animals, such as the IPPP, might be implemented. In what follows, we show in detail how to implement such models. Using distance sampling data collected from a line transect, an individual's exact location is an unknown point that lies on one of two lines parallel to the transect at a perpendicular distance that is equal to the recorded distance. Similarly, for point transects, the location of an individual is an unknown point on the perimeter of a circle centered on the transect with a radius that is equal to the recorded distance. If the exact location is unknown but lies on a line or the perimeter of a circle, a model that is based on the IPPP distribution and accounts for location uncertainty is \begin{equation} [\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{n},n|\lambda(\mathbf{s}),q(\mathbf{s})]=e^{-\intop_{\mathcal{S}}\lambda(\mathbf{s})q(\mathbf{s})d\mathbf{s}}\prod_{i=1}^{n}{\displaystyle |\mathcal{L}_{i}|}^{-1}\intop_{\mathcal{L}_{i}}\lambda(\mathbf{z}_{i})q(\mathbf{z}_{i})d\mathbf{z}_{i}\: \end{equation} where the modification involves replacing the product of $\lambda(\mathbf{z}_{i})$ and $q(\mathbf{z}_{i})$ in Eq. 3 with an integral. In Eq.
4, the random variables are the coordinate vectors $\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{n}$ and the number of detected individuals $n$. As in Eq. 3, $\mathbf{z}_{i}$ is the exact coordinate of the $i^{\text{th}}$ individual, which is integrated out of the joint distribution to obtain Eq. 4. Knowing the distance and transect of detection determines the limits of integration in Eq. 4, where $\mathcal{L}_{i}$ denotes the parallel lines (or circle perimeter) and $|\mathcal{L}_{i}|$ is the length of the lines (or circle perimeter). Conceptually, $\mathbf{y}_{i}$ can be thought of as the coordinate where the $i^{\text{th}}$ individual was ``recorded'', which is different than the true location of the individual $\mathbf{z}_{i}$ because the ``recorded'' location is a uniformly distributed point on $\mathcal{L}_{i}$ (see Appendix S1 for more detail).\vspace{0.66cm} \subsection*{ACCOUNTING FOR MEASUREMENT ERROR IN DISTANCES} In many cases the distance between the transect and the detected individuals may be recorded with error, which introduces another source of location uncertainty. For example, in the data illustration that follows, the distances recorded for individual birds close to the transect line were almost certainly recorded with greater accuracy than those detected further from the line. To account for error in distances, we construct a hierarchical model where the observed random variable, $\mathbf{y}_{i}$, is the ``recorded'' location, which depends on the exact location $\mathbf{z}_{i}$ (see Appendix S1 for more detail).
A hierarchical model that accounts for both types of location uncertainty is \begin{equation} [\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{n},n|\boldsymbol{\theta},\lambda(\mathbf{s}),q(\mathbf{s})]=e^{-\intop_{\mathcal{S}}\lambda(\mathbf{s})q(\mathbf{s})d\mathbf{s}}\prod_{i=1}^{n}{\displaystyle |\mathcal{L}_{i}|}^{-1}\intop_{\mathcal{S}}[d(\mathbf{y}_{i},t_{i})|d(\mathbf{z}_{i},t_{i}),\boldsymbol{\theta}]\lambda(\mathbf{z}_{i})q(\mathbf{z}_{i})d\mathbf{z}_{i}\:. \end{equation} In the equation above, $\boldsymbol{\theta}\equiv(\theta_{1},\theta_{2},...\theta_{m})^{'}$ is a vector of unknown parameters of $[d(\mathbf{y}_{i},t_{i})|d(\mathbf{z}_{i},t_{i}),\boldsymbol{\theta}]$, which is a probability distribution that describes the recorded distances $d(\mathbf{y}_{i},t_{i})$ given the true distances $d(\mathbf{z}_{i},t_{i})$. The function $d(\cdot,\cdot)$ returns the perpendicular distance between a coordinate vector and the transect, $t_{i}$, where the $i^{\text{th}}$ individual was detected. We refrain from specifying the functional form of $[d(\mathbf{y}_{i},t_{i})|d(\mathbf{z}_{i},t_{i}),\boldsymbol{\theta}]$ because this portion of our model is general and any appropriate distribution can be chosen, as shown in the data illustration. \subsection*{MODEL IMPLEMENTATION} For all models, estimating the parameters associated with the intensity ($\lambda(\mathbf{s})$) and detection ($q(\mathbf{s})$) functions requires evaluation of the integrals in the likelihood. In nearly all situations, solving the integrals will require numerical integration using approximations such as Monte Carlo or numerical quadrature (\citealt{BB2L}).
Using numerical quadrature involves the approximation \begin{equation} \intop_{\mathcal{A}}f(\mathbf{s})d\mathbf{s}\approx\frac{|\mathcal{A}|}{Q}{\displaystyle \sum_{q=1}^{Q}}f(\mathbf{s}_{q})\:, \end{equation} where $f(\mathbf{s})$ is an unspecified function, $\mathcal{A}$ is an arbitrary region (or line) with area (or length) $|\mathcal{A}|$, and $Q$ is the number of (equally spaced) points partitioning the polygon (or line). The function $f(\mathbf{s})$ is specified based on the integral. For example, Eqs. 3\textendash 5 contain the integral $\intop_{\mathcal{S}}\lambda(\mathbf{s})q(\mathbf{s})d\mathbf{s}$, which could be approximated by defining $f(\mathbf{s})\equiv\lambda(\mathbf{s})q(\mathbf{s}).$ Accounting for location uncertainty requires a hierarchical model because the probability distributions in Eqs. 4 and 5 were constructed by conditioning on the exact locations, which are random variables that follow an IPPP distribution (see Appendix S1). Regardless of the inferential paradigm, the models are challenging to fit to distance sampling data because each detected individual has a latent (unobserved) true coordinate vector, which results in $2n$ additional parameters. For example, the true coordinate vectors could be estimated by fully specifying a Bayesian model and obtaining samples from the marginal posterior using a Markov chain Monte Carlo (MCMC) algorithm. This approach, however, requires sampling from a high-dimensional posterior distribution. Such an approach is challenging because the MCMC algorithm can be difficult to tune and requires multiple evaluations of the likelihood function, each involving a quadrature approximation. Estimates of the true coordinate vectors are of little interest because most studies use distance sampling data to obtain predicted densities and infer species-habitat relationships.
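For a point transect, the inner integral of Eq. 4 runs over a circle of radius equal to the recorded distance, so the quadrature reduces to an average of the integrand over $Q$ equally spaced points on that circle (the factor $|\mathcal{L}_{i}|^{-1}$ cancels the circle length). A hypothetical Python sketch, with illustrative names:

```python
import numpy as np

def marginal_term(center, dist, beta0, beta1, x_fun, q_fun, Q=256):
    """|L_i|^{-1} * integral over the circle L_i of lambda(z)q(z) dz (Eq. 4),
    approximated by the mean of the integrand at Q equally spaced points."""
    theta = np.linspace(0.0, 2.0 * np.pi, Q, endpoint=False)
    z = center + dist * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    lam = np.exp(beta0 + beta1 * x_fun(z))  # single-predictor version of Eq. 1
    return np.mean(lam) * q_fun(dist)       # q depends on z only through dist,
                                            # so it is constant on the circle
```

Here `x_fun` maps candidate locations to predictor values (e.g., a raster lookup) and `q_fun` is the detection function; the product of these per-detection terms with the exponential term of Eq. 4 gives the marginal likelihood to be maximized.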
As a result, the true coordinate vectors can be treated as ``nuisance parameters'' and removed by integrating the joint likelihood as we did in Eqs. 4\textendash 5 (\citealt{borchers2015unifying}). The integrated likelihood has $2n$ fewer parameters and the remaining parameters can be estimated using maximum likelihood estimation or by sampling from the posterior distribution using techniques such as MCMC (see Appendix S1). In both the simulation experiment and data example that follow, we estimate all parameters by maximizing the appropriate likelihood using the Nelder-Mead algorithm in R (\citealt{nelder1965simplex,TeamR}; see Appendix S2 and S3). For all model parameters, we obtain approximate variances by inverting the Hessian matrix and construct Wald-type confidence intervals (CIs; \citealt{pawitan2001all}). To obtain CIs for derived quantities, we use percentiles of the empirical distribution obtained from a bootstrapping approach outlined by \citet{lele2010estimability} based on the results of \citet{harris1989predictive}.\vspace{0.66cm} \begin{spacing}{1.9} \subsection*{SIMULATION EXPERIMENT} \end{spacing} \begin{doublespace} \noindent We conducted a simulation experiment to evaluate the influence of location uncertainty on model-based distance sampling methods and to test the efficacy of our new approach. We expect that standard approaches will result in biased parameter estimates, which may obscure species-habitat relationships or, in the worst case, result in misleading conclusions. Conversely, we expect that our proposed method that accounts for location uncertainty will result in unbiased parameter estimates (in the asymptotic sense). We expect that accounting for location uncertainty will result in parameter estimates that are more variable and have estimates of uncertainty that are appropriately inflated (e.g., CIs will be wider) when compared to estimates obtained from exact locations.
\end{doublespace} We simulated the exact location of individuals from an IPPP distribution on the unit square with a single predictor variable ($x(\mathbf{s})$) and specified the intensity function as $\text{log}(\lambda(\mathbf{s}))=\beta_{0}+\beta_{1}$$x(\mathbf{s})$ with $\beta_{0}=9$ and $\beta_{1}=1$ (Fig. 1). We evaluated two scenarios by varying the location of 16 point transects. In the first scenario, we placed point transects in poorer quality ``habitat'' (i.e., at lower values of $x(\mathbf{s})$; Fig. 1a), whereas in the second scenario we randomly placed the transects but restricted transect placement so that detection of the same individual from multiple transects was not possible (Fig. 1b). We designed the first and second scenarios to evaluate how location uncertainty influences parameter estimates when transects are placed based on convenience and under a randomized design, respectively. We simulated the detection of each individual using a Bernoulli distribution by calculating the probability of detection for each individual using a truncated half-normal detection function specified as $p_{i}=e^{-\left(\frac{d_{i}}{0.025}\right)^{2}}I(d_{i}<0.06),$ where $p_{i}$ is the probability of detection for the $i^{\text{th}}$ individual that occurs at a distance $d_{i}$ from the point transect (Fig. 1). We simulated 250 data sets for each scenario and fit four models to each data set. For the first model, we used the exact locations of the individuals and fit Eq. 3 to the simulated data. This is the ideal situation because the generating process used to simulate the data matches the generating process specified by the statistical model. Thus, we expected unbiased parameter estimates with the narrowest CIs under model 1. For the second and third models, the ``available'' data included only the distances from the point transects to the locations of the detected individuals. Because the exact locations of the individuals were unavailable, we fit Eq.
3 using two different surrogate predictors, which included: 1) the value of $x(\mathbf{s})$ at the point transect where the individual was detected (model 2); and 2) the average of $x(\mathbf{s})$ within a distance of $0.06$ of the transect where the individual was detected (model 3). The distance of $0.06$ was chosen to correspond to the value used to truncate the half-normal detection function, which would be unknown in practice. Finally, our fourth model uses the same data as the second and third models, but instead accounts for location uncertainty using Eq. 4. For each of the two scenarios and four models, we assessed the reliability of the inferred species-habitat relationship by comparing the true value of $\beta_{1}$ to the estimated value and the coverage probability of the 95\% CIs. We assessed efficiency by calculating the average length of the 95\% CI for $\hat{\beta}_{1}$ that was obtained from the model that accounts for location uncertainty (i.e., model 4) divided by the average length of the 95\% CI for $\hat{\beta}_{1}$ obtained from fitting the IPPP distribution using data with exact locations (i.e., model 1). In Appendix S2, we provide a tutorial with R code to implement the simulation and reproduce Fig. 1 and Table 1.\vspace{0.66cm} \begin{spacing}{1.9} \subsection*{FIELD-COLLECTED AVIAN DATA} \end{spacing} \noindent Distance sampling data from 137 bird species were collected over a 29-year period from 1981 to 2009 as part of the Long Term Ecological Research Program at the Konza Prairie Biological Station (KPBS; Fig. 2). The KPBS is a tallgrass prairie site located in northeastern Kansas, USA and is experimentally managed under varied grazing and fire regimes (\citealt{knapp1998grassland}). We used data from a single species, Dickcissels (\textit{Spiza americana}), which are the most common grassland-breeding species at KPBS.
Both male and female Dickcissels perch conspicuously from the tops of vegetation and males vocalize frequently (\citealt{temple2002}). For our analysis, we used observations of Dickcissels collected by a single observer over the period of May 27, 1981 to June 26, 1981. This resulted in 106 individuals detected on 11 of the 14 transects at perpendicular distances ranging from $0\,\text{m}$ to $61\,\text{m}$. A full description of the data is provided in \citet{zimmerman1993birds} and \citet{KONZADATA}. We illustrate our method using elevation as a predictor variable (Fig. 2). Elevation within the KPBS is available from a digital elevation model that has a cell resolution of $2\,\text{m}\times2\,\text{m}$ (\citealt{KONZAElev}). Based on previous research, we expect that the abundance of Dickcissels should be greater at higher elevations when compared to lower elevations (\citealt{zimmerman1993birds}). Given our distance sampling data, it is not possible to reconstruct the exact location of each detected individual; therefore, we are unable to obtain the elevation at the location of each detected Dickcissel. We implemented four models that included: a) the standard distance sampling model (Eq. 3) using a surrogate predictor, which was the average elevation along the transect where the individual was detected (model a); b) a model that accounts for location uncertainty assuming that distances are recorded without error (i.e., Eq. 4; model b); c) a model that accounts for location uncertainty and distance mismeasurement that follows a truncated normal distribution (model c); and d) a model that accounts for location uncertainty and distance mismeasurement that follows a Laplace distribution (model d). For models c and d, which accounted for error in the recorded distances, we assumed that the variance of the normal and Laplace distributions was zero on the transect line, but increased linearly at an unknown rate as the distance between the individual bird and transect increased (see Fig. 3c 
for an example). We truncated the normal and Laplace distributions at distances below $0\,\text{m}$ and above $150\,\text{m}$ to increase computational efficiency of the quadrature approximation. Because the maximum recorded distance in our data set was $61\,\text{m}$, this truncation has negligible influence on our results. Depending on the specifics of the study design, it is easy to incorporate different specifications such as a constant variance model or alternative probability distributions (e.g., a Poisson distribution for distances that are rounded to the nearest meter). In Appendix S3, we include additional details associated with the field-collected avian data analysis along with a tutorial and R code to implement the models and reproduce Table 2 and Figs. 2 and 3.\vspace{0.66cm} \section*{Results} \subsection*{SIMULATION EXPERIMENT} When the exact location of each detected individual was available, the standard IPPP model in Eq. 3 (model 1) performed as expected in that estimates of $\beta_{1}$ appeared to be unbiased and the 95\% CIs covered the true value with probabilities between 0.93\textendash 0.96 (Fig. 1; Table 1). In contrast, when location uncertainty was not accounted for (models 2 and 3), the estimated regression coefficients were biased (Fig. 1) and coverage probabilities of the 95\% CIs were $\leq0.38$ (Table 1). When the transect locations were placed based on convenience and the surrogate predictor variable was obtained from the center of the transect (model 2), the bias was particularly large and resulted in negative estimates of the regression coefficients for most data sets even though the true value was $\beta_{1}=1$ (Fig. 1a). Our method (model 4), which accounted for location uncertainty, yielded apparently unbiased estimates of $\beta_{1}$ for both scenarios and produced coverage probabilities between $0.95$\textendash $0.98$ (Fig. 1; Table 1). 
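For readers who want to reproduce the detection step of the simulation experiment, the truncated half-normal detection function can be sketched in a few lines. This is an illustrative Python translation, not the paper's implementation (which is the R code in Appendix S2); the sample size and distance range below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_prob(d, sigma=0.025, w=0.06):
    """Truncated half-normal detection function from the simulation:
    p_i = exp(-(d_i / 0.025)^2) * I(d_i < 0.06)."""
    d = np.asarray(d, dtype=float)
    return np.exp(-(d / sigma) ** 2) * (d < w)

# Distances of 1000 hypothetical individuals from a point transect.
d = rng.uniform(0.0, 0.1, size=1000)

# Bernoulli detection trial for each individual (the simulation's detection step).
p = detection_prob(d)
detected = rng.random(d.size) < p
```

Detection is certain on the transect ($p=1$ at $d=0$) and impossible beyond the truncation distance of $0.06$.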
The 95\% CIs were $1.49$\textendash $1.50$ times longer when the exact location was unknown (Table 1). These results demonstrate that our proposed method efficiently accounted for location uncertainty and resulted in parameter estimates that are about 50\% less precise than parameter estimates obtained when the exact location is known. This shows that the loss of information resulting from location uncertainty could be ameliorated by collecting more data.\vspace{0.66cm} \begin{spacing}{1.9} \subsection*{FIELD-COLLECTED AVIAN DATA} \end{spacing} All three models that accounted for location uncertainty (models b\textendash d) had similar Akaike information criterion scores that were $>811$ points lower than that of model a, which used Eq. 3 and the average elevation along the transect as the predictor (Table 2). This indicates that accounting for location uncertainty improved the fit of the model to the data. Predicted Dickcissel abundance at higher elevations was greater when location uncertainty was accounted for in models b\textendash d (Table 2; Fig. 3a). This difference in predictions was caused by the regression coefficient estimates, which were 27\% larger for the three models that accounted for location uncertainty (models b\textendash d) when compared to the model that used the surrogate predictor (model a; Table 2).\textcolor{black}{{} The comparisons of the estimated relationship between elevation and abundance, between distance and the probability of detection, and between the true and recorded distance are visualized in Fig. 3.} \section*{Discussion} This study demonstrates that location uncertainty, when unaccounted for, can result in biased parameter estimates and unreliable inference regarding species-habitat relationships. 
Within a broader context, location uncertainty manifests as an ecological fallacy, namely that the inferred relationship from aggregated data could differ when compared to individual-level data (\citealp{ecologicalfallacy,cressie2011statistics}, p. 197). Point process models using exact locations of individuals target inference about the habitat and environmental preferences of individuals, whereas ignoring location uncertainty and using transect-wide surrogate predictors results only in inference about how abundance varies among the transects. Individual-level inference is invariant to changes in the spatial scale of the analysis, whereas transect-level inference is scale specific. Our study demonstrates that spatial model-based approaches for traditional distance sampling data can provide reliable individual-level inference, even when the exact locations of individuals are unknown. This is a significant advancement because our approach enables individual-level inference, but does not require auxiliary data that may be difficult or impossible to obtain for historical data sets (e.g., \citealt{borchers2015unifying}). Our approach also offers insight into best practices for future distance sampling study design when recording the exact location of individuals may be difficult or infeasible. Our simulation results suggest that, given a desired statistical power or level of precision for parameter estimates, there is a tradeoff between sample size (i.e., the number of detected individuals) and location accuracy. The deleterious effects of collecting distance sampling data without recording the exact location can be remediated by simply collecting more data and selecting an appropriate model. For example, in both scenarios of our simulation experiment, the same precision of parameter estimates can be achieved by either detecting $n$ individuals and recording their exact locations, or by detecting $\approx1.5n$ individuals and recording only their distances. 
Although the sample size calculations from our simulation results are not generalizable to future data collection efforts, study-specific power analyses could be conducted to determine the tradeoff between the two data collection approaches. \vspace{0.66cm} \begin{spacing}{1.9} \subsection*{PRACTICAL GUIDANCE FOR DATA ANALYSIS} \end{spacing} \noindent Location uncertainty is ubiquitous in all sources of spatially referenced data because it is impossible to measure and record a location with infinite precision. Despite the presence of location uncertainty in all sources of data, accounting for it may be time consuming because the models are tailored to the specifics of the study and usually must be fit using ``custom built'' computer code (\citealt{BB2L}). For some data sets, accounting for location uncertainty will be required; for others, it may not be possible or beneficial. Prior to fitting models to distance sampling data, we urge researchers to consider the seven questions below to determine the possible impact of location uncertainty on study outcomes. \begin{enumerate} \item \textit{Does the predictor variable exhibit fine-scale spatial variability?} If so, the predictor variable is likely to change when moving a short distance from one location to another. In this case, the surrogate predictor variable (e.g., elevation at the transect centroid) will likely differ from the predictor variable at the locations of the individuals. Whenever the surrogate predictor variable included in a model differs from the value of the predictor at the exact locations, there is the potential for bias. The larger the difference between the value of the surrogate variable and the value at the exact location, the more important it will be to account for location uncertainty. 
\item \textit{Is the placement of the transects related to the spatial structure of the predictor variable?} For example, the transects may be placed along roads within larger areas of homogeneous habitat. In this case, a surrogate predictor, such as the percentage of grassland within $100\,\text{m}$, may be strongly influenced by the location of the transects, creating a potential for bias to occur. Accounting for location uncertainty may be needed. \item \textit{Are the spatial scales of the predictor variables known?} In many cases, environmental characteristics within an area surrounding the locations of the individuals are used to determine the influence of landscape-level processes (e.g., percentage of grassland within $100\,\text{m}$). These approaches use a buffer or kernel that is typically centered at the exact location of the individual. The predictor variable is calculated by integrating the kernel and point-level predictor variable over the study area (e.g., \citealt{heaton2011kernel,heaton2011spatial,heaton2012kernel,bradter2013identifying,chandler2016estimating,miguet2017quantify,stuber2017bayesian}). Because the buffer or kernel is centered at the exact location of the individuals, accounting for location uncertainty is likely to influence the inference. \item \textit{Is the spatial resolution of the predictor variables too coarse?} In some situations, the spatial resolution of the predictor variable will only be available at a coarse grain. For example, WorldClim provides a set of climate variables that are predictions available on a $1\times1\,\text{km}$ grid (\citealt{hijmans2005very}). The transects from our data example are all $\leq1589\,\text{m}$ in length. For a given transect, most detected individuals would be assigned the same value of the predictor variables from WorldClim because most of the individual birds occur within a single $1\times1\,\text{km}$ grid cell. 
If the goal of the study is to relate abundance to climatic variables using WorldClim, then in such a case, the researcher would experience minimal or no gain from accounting for location uncertainty. \item \textit{Are spatial data for the predictor variables available over the entire study area?} In some situations, the predictor variables will not be available at every location within the study area. For example, many studies that collect distance sampling data on animals also collect detailed information on vegetation at a feasible number of locations within the study area. It is tempting to use vegetation measurements, collected at the location that is thought to be closest to the individual animal, as predictor variables. This approach presents two challenges: 1) if the vegetation at the location of the individual is different from that at the location where the measurements were taken, the predictor variables may result in biased coefficient estimates; and 2) fitting point process models to data requires a continuous surface of the predictor variable over the entire study area. In this situation, we recommend first building a statistical model that can predict the vegetation measurements as a continuous surface over the study area. This is equivalent to building a custom high-resolution ``raster layer'' using the vegetation data. Developing auxiliary models for predictor variables that are measured in the field is a common technique used in spatial statistics to ameliorate the problem of misaligned data (\citealt{gotway2002combining,gelfand2010misaligned}; \citeauthor{Pacifici2019} \textit{in press}). Once the predictor variable is available as a continuous surface or high-resolution raster layer, accounting for location uncertainty is likely to be beneficial. 
\item \textit{Is the location uncertain for only a portion of the observations?} There may be situations where only a portion of the observations have uncertain locations (e.g., \citealt{hefley2014correction,hefley2017bias}). If the number of observations with uncertain locations is small (e.g., $<5\%$ of the observations), these could be removed from the data set and perhaps cause only minor changes in inference. If the number of observations with uncertain locations is larger, then we recommend constructing models that integrate both sources of data (e.g., by combining portions of the likelihood in Eq. 3 and Eq. 4). Similar approaches could be applied to situations where different sources of data result in the magnitude of the location uncertainty being variable. For example, the mismeasurement of distances in our historic Dickcissel distance sampling data is likely to be larger than in more recent surveys in the same data set because researchers adopted the use of laser rangefinders. \item \textit{Do the predictor variables contain errors?} In some situations, the predictor variables contain errors. For example, researchers may include modeled climatic predictor variables, but the predicted climate at any given location is different from the true conditions. In this situation, the error in the predictors may mask or exacerbate the effect of location uncertainty. This problem is well studied in the statistics literature, where it is known as ``errors-in-variables'' (\citealt{carroll2006measurement}). In addition to location uncertainty, errors-in-variables can be accounted for by using a hierarchical modeling framework (e.g. \citealt{foster2012uncertainty,stoklosa2015climate,hefleyHSDM}).\vspace{0.66cm} \end{enumerate} \begin{spacing}{1.9} \subsection*{CONCLUSION} \end{spacing} \noindent Ecological data are messy in ways that result in many potential biases. For example, failing to account for detection may result in biased estimates of abundance. 
Ecologists have focused intensively on some sources of bias (e.g., detection) while paying little attention to other sources of bias such as location uncertainty. Thus, there is a shortage of tools available for less studied sources of bias, leading researchers to rely on ad hoc and untested procedures. We recommend avoiding untested procedures because location uncertainty can create complex biases that are difficult to anticipate and understand (e.g., \citealp{montgomery2011implications,hefley2014correction,brost2015animal,mitchell2016sensitivity,hefley2017bias,gerber2018accounting,walker2018bias}; \citeauthor{walker2019bias} \textit{under revision}). Model-based approaches have provided a wealth of tools to address biases in many types of ecological data. In situations where location uncertainty is a concern, model-based approaches like those proposed in our study will enable researchers to make reliable ecological inference and accurate predictions. \noindent \vspace{0.66cm} \section*{Acknowledgements} We thank all individuals, including Elmer Finck, John Zimmerman, and Brett Sandercock, who contributed to the distance sampling data used in our study. The material in this study is based upon research supported by the National Science Foundation (DEB-1754491 and DEB-1440484 under the LTER program).\vspace{0.66cm} \section*{Data accessibility} The field-collected avian data are available from \citet{KONZADATA}. Additional files required to reproduce the results of this study (e.g. shapefiles of transects) are archived in the Dryad Digital Repository (\citealt{hefley2019dryaddata}). \renewcommand{\refname}{\vspace{0.1cm}\hspace{-0cm}\selectfont\large References}\setlength{\bibsep}{0pt} \bibliographystyle{apa}
\section{Introduction} \label{sec:intro} Several prior works on depth estimation from monocular images do not work with raw images straight out of the camera, which are distorted and unrectified. Instead, these raw images are corrected with image processing techniques before being fed into the network to predict an undistorted, rectified depth map. We propose a novel fully differentiable architecture that allows estimating the distorted, unrectified depth directly from a raw image without the need for any pre-processing techniques to correct the image. This pipeline in turn saves time and computation power, and can be used directly in real-time scenarios without any prerequisites. The parameters which define distortion vary slightly with the environment in which the camera is present. Our model exploits the fact that the amount of distortion in an image is more or less fixed as long as we use the same camera. This allows us to pre-define a transformation flowfield for undistorting the image and to use it in the training pipeline. Image rectification on-the-fly with prerecorded camera parameters adds an overhead of around 75 msec per second of footage for a typical 30 FPS (frames per second) camera. Our proposed model for unrectified distorted images has no overhead at inference when compared to models based on rectified images, as it uses a neural network with the same number of trainable parameters. Image rectification also leads to information loss from the raw image: the higher the camera distortion, the higher the loss of pixel information from the raw data. In the KITTI dataset, due to the distortion effect of the camera, the images have been rectified and cropped to 1242 $\times$ 375 pixels, such that the size of the rectified undistorted images is smaller than the raw unrectified distorted image size of 1392 $\times$ 512 pixels. Image rectification in KITTI resulted in a pixel loss of 10.77\% and 27.34\% along the width and height of the image, respectively. This pixel loss becomes more prominent at higher resolutions. 
We address the above issues by proposing the following contributions: \begin{figure} \centering \resizebox{0.5\textwidth}{!}{ \input{fig/att_map/att_map.tex}} \caption{Visualization of self-attention maps. Left: query locations marked with a red dot on the unrectified input image. Right: attention maps for those query locations overlaid on the unrectified input image, with arrows summarizing the most-attended regions. Rather than attending to spatially adjacent pixels, the proposed network learns to attend to pixels with similar color and texture. } \label{fig:att} \end{figure} \begin{enumerate} \itemsep -0.5em \item Introduced an end-to-end fully differentiable novel pipeline to tackle monocular depth estimation on distorted and unrectified images. \item Proposed a novel self-attention depth network to handle long-range dependencies in the feature map, leading to sharper depth maps. \item Incorporated instance normalisation throughout the model architecture to handle style variation in the input data. \end{enumerate} Some of the prior self-supervised monocular depth estimation works are briefly reviewed here. Zhou \textit{et al}., \cite{zhou2017unsupervised} proposed an unsupervised depth estimation end-to-end framework from video sequences where view synthesis acts as the supervisory signal. Mahjourian \textit{et al}., \cite{mahjourian2018unsupervised} introduced a 3D loss to enforce consistency of the estimated 3D point cloud. Yin \textit{et al}., \cite{yin2018geonet} jointly learned monocular depth, optical flow and ego motion from videos in an unsupervised manner. Luo \textit{et al}., \cite{luo2018every} also jointly learned monocular depth, optical flow and ego motion from video, but used these results to produce a motion mask, an occlusion mask, and 3D motion maps for the rigid background and dynamic objects. 
Godard \textit{et al}., \cite{godard2019digging} achieved state-of-the-art results on the rectified KITTI dataset with a minimum reprojection loss, full-resolution multi-scale sampling and a mask that ignores training pixels that violate camera motion assumptions. \setlength{\belowcaptionskip}{-1pt} \begin{figure*}[ht] \centering \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{fig/att/att_encoderv3.pdf} \caption{} \label{fig:input} \end{subfigure} \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{fig/att/att_blockv2.pdf} \caption{} \label{fig:output} \end{subfigure} \caption{(a) Proposed self-attention depth estimation network. (b) Self-attention block. $\bigotimes$ denotes matrix multiplication.} \label{fig:net} \end{figure*} \section{Synthesis of unrectified image} All the prior works on monocular depth estimation have used rectified images during training to estimate depth. However, the raw images from a camera are unrectified; hence, we enable an efficient way to train the depth estimation network on unrectified images. Rectification is required on images like stereo pairs to correct the error in rotation and translation between cameras. Image distortion is prevalent and is caused by varying magnification in relation to the angular distance to the optical axis. Due to imperfect alignment of the lens or sensor, distortion may be decentered, but that is rarely an issue in current cameras. Usually, these images are fixed beforehand, as distortion would affect the projection geometry from one view to another. Distortion of a typical $90^\circ$ FOV (field of view) camera can be modeled with a few intrinsic parameters, where $k_1,k_2,k_3,p_1,p_2$ are the known distortion parameters. We can obtain the undistorted image coordinate $\hat{c}_{t-1}^{undist}$ at time $t-1$ from the distorted image coordinate $c^{dist}_{t}$ at time $t$ by Equ.~\ref{eq:1}. 
\begin{align} \label{eq:1} \hat{c}_{t-1}^{undist} \approx \mathbf{K} \mathbf{T}_{t \rightarrow t-1}D_{t}\mathbf{K}^{-1}\xi\{c^{dist}_{t}\} \end{align} where $\mathbf{K}$ is the camera intrinsic $3 \times 3$ matrix that transforms camera coordinates to image coordinates, comprising the focal lengths $f_x, f_y$, the principal point offsets $x_0, y_0$ along the x and y directions and the axis skew $s$, as shown in Equ.~\ref{eq:2}; $\mathbf{T}_{t \rightarrow t-1}$ is the $3 \times 4$ camera transformation matrix from the target view frame at $t$ to the source view frame at $t-1$, consisting of a $3 \times 3$ rotation matrix $\mathbf{R}$ and a $3 \times 1$ translation vector $\mathbf{t}$, as shown in Equ.~\ref{eq:3}; $D_t$ is the per-pixel depth predicted by the network; and $\xi$ is the undistortion function with pre-computed distortion parameters. \begin{align} \label{eq:2} \mathbf{K} = \left[ \begin{array}{ccc} f_x & s & x_0\\ 0 & f_y & y_0\\ 0 & 0 & 1 \end{array} \right] \end{align} \begin{align} \label{eq:3} \mathbf{T} = [ \mathbf{R} \, |\, \mathbf{t}] = \left[ \begin{array}{ccc|c} r_{1,1} & r_{1,2} & r_{1,3} & t_1 \\ r_{2,1} & r_{2,2} & r_{2,3} & t_2 \\ r_{3,1} & r_{3,2} & r_{3,3} & t_3 \end{array} \right] \end{align} The per-pixel depth $D_t$ of the frame at time $t$ and the camera transformation $\mathbf{T}_{t \rightarrow t-1}$ from the frame at time $t$ to $t-1$ are estimated by the self-attention depth and pose networks, respectively, as shown in Fig.~\ref{fig:net}. The pose network \cite{zhou2017unsupervised} takes three adjacent distorted image frames at times $t-1$, $t$, $t+1$ and predicts the camera transformations $\mathbf{T}_{t \rightarrow t-1}$ and $\mathbf{T}_{t \rightarrow t+1}$. The self-attention depth network takes a single distorted image frame $I^{dist}_{t}$ at time $t$ to predict the per-pixel depth $D_t$. During inference, only the self-attention depth network is used to predict the depth map of an input image. 
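The full chain in Equ.~\ref{eq:1} (undistort, back-project with $\mathbf{K}^{-1}$ and the predicted depth, transform by $\mathbf{T}_{t \rightarrow t-1}$, and re-project with $\mathbf{K}$) can be sketched numerically as follows. This is an illustrative NumPy snippet in which the intrinsics and the undistortion lookup \texttt{xi} are hypothetical placeholders, not values used in the paper.

```python
import numpy as np

# Illustrative camera intrinsics (hypothetical values, not KITTI's).
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 256.0],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)

def project_to_source(c_dist, depth, T, xi):
    """Project a distorted pixel of frame t into frame t-1 following Equ. 1:
    c_{t-1} ~ K T (D K^{-1} xi{c_t}), up to the homogeneous scale."""
    x_u, y_u = xi(c_dist)                              # undistortion lookup (Equ. 5)
    cam = depth * (K_inv @ np.array([x_u, y_u, 1.0]))  # back-projection (Equ. 8)
    proj = K @ (T @ np.append(cam, 1.0))               # rigid transform + re-projection (Equ. 9)
    return proj[:2] / proj[2]

# Sanity check: an identity pose with an identity undistortion lookup
# maps a pixel back to itself.
T_id = np.hstack([np.eye(3), np.zeros((3, 1))])
p = project_to_source((100.0, 50.0), depth=10.0, T=T_id, xi=lambda c: c)
```

In the training pipeline this per-pixel mapping is applied densely, with \texttt{xi} replaced by the pre-computed undistortion map.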
\subsection{Image coordinates to World coordinates} \label{sec:2.1} With the intrinsic matrix $\mathbf{K}$, the estimated depth $D_t$ and the undistortion function $\xi$, we can project distorted image coordinates to world coordinates as shown in Equ.~\ref{eq:4}. \begin{align} \label{eq:4} \left[\begin{array}{c} X \\ Y \\ Z \end{array}\right] \approx D_{t}\mathbf{K}^{-1}\xi \Bigg\{ \left[\begin{array}{c} x^{dist}_t \\ y^{dist}_t \\ 1 \end{array}\right]\Bigg\} \end{align} where $(x^{dist}_t, y^{dist}_t)$ are unrectified image coordinates and $(X, Y, Z)$ are world coordinates. The undistortion function $\xi$ is used to pre-compute the undistortion map which is used in the training pipeline, as shown in Equ.~\ref{eq:5}. \begin{align} \label{eq:5} \left[\begin{array}{c} x^{undist}_t \\ y^{undist}_t \\ \end{array}\right] \approx \xi \Bigg\{ \left[\begin{array}{c} x^{dist}_t \\ y^{dist}_t \\ \end{array}\right]\Bigg\} \end{align} The mathematical formulations of the radial and tangential distortions used in the undistortion function $\xi$ are shown below in Equ.~\ref{eq:6} and Equ.~\ref{eq:7}. \begin{align} \label{eq:6} \begin{gathered} \hat{x}^{dist}_t \approx x^{undist}_t(1+k_1r^2+k_2r^4+k_3r^6) \\ \hat{y}^{dist}_t \approx y^{undist}_t(1+k_1r^2+k_2r^4+k_3r^6) \end{gathered} \end{align} Here $x^{undist}_t$ and $y^{undist}_t$ denote the undistorted pixel coordinates, $\hat{x}^{dist}_t$, $\hat{y}^{dist}_t$ denote the radially distorted pixel coordinates and $r^2={x^{undist}_t}^2+{y^{undist}_t}^2$. Tangential distortion caused by lens misalignment can be formulated with the distortion parameters $p_1$ and $p_2$ along with radial distortion, as shown in Equ.~\ref{eq:7}. 
\begin{align} \label{eq:7} \begin{gathered} x^{dist}_t \approx \hat{x}^{dist}_t + 2p_1x^{undist}_ty^{undist}_t + p_2(r^2+2{x^{undist}_t}^2) \\ y^{dist}_t \approx \hat{y}^{dist}_t + p_1(r^2+2{y^{undist}_t}^2) + 2p_2x^{undist}_ty^{undist}_t \end{gathered} \end{align} $x^{dist}_t$ and $y^{dist}_t$ are the resultant image coordinates after radial and tangential distortion. The distortion mapping in Equ.~\ref{eq:5} is used to inverse map distorted image coordinates to undistorted image coordinates. These undistorted image coordinates can be projected to world coordinates by multiplication with the inverse camera intrinsic matrix $\mathbf{K}^{-1}$ and the estimated per-pixel depth $D_t$, as shown in Equ.~\ref{eq:8}. \begin{align} \label{eq:8} \left[\begin{array}{c} X \\ Y \\ Z \end{array}\right] \approx D_{t}\mathbf{K}^{-1} \left[\begin{array}{c} x^{undist}_t \\ y^{undist}_t \\ 1 \end{array}\right] \end{align} \subsection{World coordinates to Image coordinates} \label{sec:2.2} The estimated transformation matrix $\mathbf{T}_{t \rightarrow t-1}$ from the image frame at time $t$ to $t-1$, along with the intrinsic matrix $\mathbf{K}$, is used to transform world coordinates to image coordinates of the frame at time $t-1$, as shown in Equ.~\ref{eq:9}. \begin{align} \label{eq:9} \left[\begin{array}{c} x^{undist}_{t-1} \\ y^{undist}_{t-1} \\ 1 \end{array}\right] = \left[\begin{array}{c} x/z \\ y/z \\ 1 \end{array}\right] \equiv \left[\begin{array}{c} x \\ y \\ z \end{array}\right] \approx \mathbf{K}\mathbf{T}_{t \rightarrow t-1} \left[\begin{array}{c} X \\ Y \\ Z \\ 1 \end{array}\right] \end{align} \subsection{View Synthesis} The image coordinate projection from frame $t$ to $t-1$, as discussed in subsections \ref{sec:2.1} and \ref{sec:2.2}, can only project distorted image coordinates at frame $t$ to unrectified image coordinates at frame $t-1$, as shown in Equ.~\ref{eq:1}. 
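The forward distortion model underlying the pre-computed map can be written directly. The following is a minimal sketch in the standard radial-tangential (Brown-Conrady) form, which includes the factor of 2 in the $p_1$ tangential term; the coefficient values are hypothetical.

```python
def distort(x_u, y_u, k1, k2, k3, p1, p2):
    """Map undistorted normalized coordinates to distorted ones:
    radial term (cf. Equ. 6) plus tangential term (cf. Equ. 7),
    in the standard radial-tangential formulation."""
    r2 = x_u ** 2 + y_u ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_r, y_r = x_u * radial, y_u * radial                       # radial part
    x_d = x_r + 2 * p1 * x_u * y_u + p2 * (r2 + 2 * x_u ** 2)   # tangential part
    y_d = y_r + p1 * (r2 + 2 * y_u ** 2) + 2 * p2 * x_u * y_u
    return x_d, y_d

# With all coefficients zero the mapping is the identity; inverting this
# mapping on the pixel grid yields the pre-computed undistortion map.
identity = distort(0.3, -0.2, 0, 0, 0, 0, 0)
```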
For view synthesis, we propose a trick with the image undistortion function $\varphi$, using the pre-computed distortion map to undistort the source image $I^{dist}_{t-1}$ at time $t-1$, as shown in Equ.~\ref{eq:10}. It should be noted that this undistortion operation is performed only in the training phase. \begin{align} \label{eq:10} I^{undist}_{t-1} = \varphi \{ I^{dist}_{t-1} \} \end{align} This undistorted image $I^{undist}_{t-1}$ can be used to sample pixel values at the projected undistorted image coordinates $x^{undist}_{t-1}, y^{undist}_{t-1}$ to synthesize the distorted image $\hat{I}^{dist}_{t}$ at time $t$, as shown in Equ.~\ref{eq:11}, where $\big \langle \big \rangle$ is the bilinear sampler. For simplicity, only the synthesis of the distorted image $\hat{I}^{dist}_{t}$ at time $t$ from the distorted image $I^{dist}_{t-1}$ at time $t-1$ is shown, but the proposed method also synthesizes $I^{dist}_{t}$ from the image $I^{dist}_{t+1}$ at time $t+1$. \begin{align} \label{eq:11} \hat{I}^{dist}_{t} = I^{undist}_{t-1} \big \langle x^{undist}_{t-1}, y^{undist}_{t-1} \big \rangle \end{align} \section{Reconstruction loss} The depth network takes one distorted unrectified image $I^{dist}_{t}$ at time $t$; $\hat{I}^{dist}_{t}$ is reconstructed by inverse warping the nearby distorted views $I^{dist}_{t-1}$ and $I^{dist}_{t+1}$ with the estimated depth $D_{t}$. The reconstruction loss $L^s$ at scale $s$ of our network consists of two parts: a photometric loss $L_{p}$ and a smoothness loss $L_{s}$. We follow \cite{godard2017unsupervised} in using a mixture of L1 and SSIM $S(.)$ with $\alpha_1=0.85$ and $\alpha_2=0.15$ for the reconstruction loss supervising the networks, as shown in Equ.~\ref{eq:12}. \begin{align} \label{eq:12} L_{p}^{t-1} = \eta_{t-1}\Bigg(\alpha_1 \frac{1-S(I_t^{dist},\hat{I}_{t}^{dist})}{2} + \alpha_2 |I_t^{dist}-\hat{I}_{t}^{dist}|\Bigg) \end{align} Here $\eta_{t-1}$ stands for the auto mask \cite{godard2019digging}, which is used to mask out the pixels which do not change across time steps. 
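The photometric term in Equ.~\ref{eq:12} can be sketched as follows. This is a deliberately simplified NumPy illustration: a single global SSIM statistic stands in for the windowed SSIM used in practice, and the auto mask $\eta$ is reduced to a plain multiplier.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single global SSIM statistic (practical implementations of S(.)
    use local windows; this is a simplified stand-in)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def photometric_loss(i_t, i_hat, mask=1.0, a1=0.85, a2=0.15):
    """Masked mixture of (1 - SSIM)/2 and L1, following Equ. 12."""
    l1 = np.abs(i_t - i_hat).mean()
    return mask * (a1 * (1.0 - ssim_global(i_t, i_hat)) / 2.0 + a2 * l1)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# A perfect reconstruction incurs (near) zero loss; a noisy one does not.
loss_perfect = photometric_loss(img, img)
loss_noisy = photometric_loss(
    img, np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0))
```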
The auto mask removes static objects, like a car moving at the same speed as the camera, and far-away regions, like the sky, since they do not contribute to the loss. We also use an edge-aware smoothness loss, as shown in Equ.~\ref{eq:13}. \begin{align} \label{eq:13} L_{s}^{t-1} = |\bigtriangledown_u D_{t}| e^{-|\bigtriangledown_u I_{t}^{dist}|} + |\bigtriangledown_v D_{t}| e^{-|\bigtriangledown_v I_{t}^{dist}|} \end{align} \begin{table*}[t] \vspace{2mm} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|c|cccc|ccc} Approach & Input &\multicolumn{4}{c|}{Lower is better} & \multicolumn{3}{c}{Higher is better}\\ \cline{3-9} & & Abs Rel & Sq Rel & RMSE & RMSE log & $\delta < 1.25$ & $\delta < 1.25^2$ & $\delta < 1.25^3$ \\ \hline Zhou~\cite{zhou2018unsupervised} & rectified & 0.176 & 1.532 & 6.129 & 0.244 & 0.758 & 0.921 & 0.971 \\ Mahjourian~\cite{mahjourian2018unsupervised} & rectified & 0.134 & 0.983 & 5.501 & 0.203 & 0.827 & 0.921 & 0.981 \\ GeoNet~\cite{yin2018geonet} & rectified & 0.132 & 0.994 & 5.240 & 0.193 & 0.833 & 0.953 & 0.985 \\ EPC++~\cite{luo2018every} & rectified & 0.120 & 0.789 & 4.755 & 0.177 & 0.856 & 0.961 & 0.987 \\ Monodepth2~\cite{godard2019digging} & rectified & 0.090 & 0.545 & 3.942 & 0.137 & 0.914 & 0.983 & 0.995 \\ \hline Ours (resnet18 w/o attention) & unrectified & 0.1481 & 1.2579 & 5.805 & 0.221 & 0.817 & 0.941 & 0.978 \\ Ours (resnet50 w/o attention) & unrectified & 0.1254 & 1.0812 & 5.407 & 0.197 & 0.866 & 0.956 & 0.983 \\ \hline Ours (resnet18 w/ attention BN) & unrectified & 0.1305 & 1.0462 & 5.501 & 0.195 & 0.838 & 0.95 & 0.983 \\ Ours (resnet50 w/ attention BN) & unrectified & 0.1158 & 0.7714 & 4.865 & 0.175 & 0.861 & 0.961 & 0.988 \\ \hline Ours (resnet18 w/ attention IN) & unrectified & 0.1253 & 0.9420 & 5.367 & 0.187 & 0.846 & 0.954 & 0.985 \\ Ours (resnet50 w/ attention IN) & unrectified & 0.1067 & 0.6866 & 4.585 & 0.163 & 0.879 & 0.968 & 0.991 \\ \end{tabular} } \caption{Quantitative result comparison on the KITTI improved ground truth 
eigen split. All state-of-the-art methods predict depth from rectified images, unlike the proposed method, which predicts depth (capped at 80\,m) from unrectified distorted images. BN: Batch normalization; IN: Instance normalization.} \label{tab:qual} \end{table*} \begin{figure*}[!ht] \centering \resizebox{\textwidth}{!}{ \input{fig/quali/qual.tex}} \caption{Qualitative result comparison on KITTI unsynced+unrectified dataset. Figure best viewed in color.} \label{fig:qual} \end{figure*} \section{Self-Attention network} Most depth estimation networks, like \cite{godard2019digging,garg2016unsupervised}, use convolutions to capture local information. However, it is a well-known fact that the receptive field of convolutions is quite small. Inspired by \cite{zhang2018self}, we introduce self-attention at different stages within the ResNet encoder \cite{he2016deep} of the depth estimation network to handle long-range relations. This enables us to incorporate contextual information into the high-dimensional feature maps, which constrains the network to produce more robust features and results in much improved depth estimation. To the best of our knowledge, our architecture is the first to incorporate contextual information at different layers of the network for the purpose of unsupervised depth estimation. To illustrate this, we visualize attention maps for a few samples in Fig.~\ref{fig:att}: the network learns to attend to pixels belonging to the same attribute or texture for a particular query pixel. \section{Handling variation in style} We believe that a major issue in training such models is the high variation in style of the input images, which prevents the model from reaching its full potential. Therefore, taking inspiration from \cite{huang2017arbitrary}, we normalise feature maps after each convolutional layer.
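This per-layer, per-image feature normalisation corresponds to instance normalisation; a minimal sketch, assuming feature maps of shape (N, C, H, W):

```python
import numpy as np

def instance_norm(feat, eps=1e-5):
    # feat: (N, C, H, W). Normalise each (sample, channel) map independently,
    # removing the per-image "style" statistics (spatial mean and variance)
    # while leaving the spatial "content" structure intact.
    mu = feat.mean(axis=(2, 3), keepdims=True)
    var = feat.var(axis=(2, 3), keepdims=True)
    return (feat - mu) / np.sqrt(var + eps)
```

Unlike batch normalisation, the statistics here never mix information across images in a batch, which is what lets the model discard image-specific style.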
This simple procedure normalises the style of every training image and thus facilitates training, as the model discards style information and focuses only on content to infer depth. \section{Experiments and results} We use the KITTI raw (unsynced+unrectified) dataset \cite{geiger2013vision} for training. We extracted 652 of the 697 images in the eigen test split to evaluate unrectified depth against the improved KITTI ground-truth depth maps; all our quantitative results are reported on this subset, as shown in Table~\ref{tab:qual}. Our depth estimation method is evaluated using the metrics of \cite{eigen2014depth} for a fair comparison with other state-of-the-art methods on rectified images. Note that, as there is no previous benchmark for depth estimation on unrectified images, we compare our results against those obtained on rectified images. To evaluate a predicted unrectified depth map, we undistort it and compare it to the rectified ground truth. The main limitation of the proposed method is its transferability to other datasets captured with a different camera. As distortion varies from one camera to another, direct transfer would be difficult; fine-tuning remains viable as long as the projection model holds. The projection used in the proposed method will fail under extreme distortion, such as wide-angle lenses. \section{Conclusion} We proposed a novel self-attention network for learning monocular depth from unrectified video frames. Our attention framework captures long-distance contextual information, which results in much sharper depth maps. We also handle style variance in the training distribution using instance normalisation. Finally, we evaluated unrectified depth estimation against state-of-the-art methods on rectified depth estimation and achieved comparable results.
\subsection*{Acknowledgments} We wish to thank Ping Xi for many helpful comments, especially his suggestion on the presentation of the proof of \eqref{e.M2}. We are grateful to Nick Katz and Zhiyong Zheng for encouragement. We thank the referee for useful comments. We acknowledge the support of Princeton University, where part of this work was done during a visit. \end{document}
\section{Introduction} Judging similarity between any pair of stimuli is an ambiguous problem: deciding what counts as similar is subjective and sensitive to context \cite{Medin1993}. Nevertheless, people are relatively consistent in making similarity judgments, which is perhaps explained in part by the biases they develop towards emphasizing some stimulus features over others (\textit{e.g.,} shape over size, color, or material; \citeNP{Diesendruck2003}). Understanding the features (and the weights upon them) that people employ when evaluating the similarity of complex stimuli like images remains an open problem. Deep neural networks have been demonstrated to be predictive of multiple aspects of human visual perception in visuoperceptual tasks \cite<\textit{e.g.,}>{Lake2015,Kubilius2016}. This utility has led to their increasing use as proxies of human cognition to understand mechanisms underlying cognitive processes or as proofs-of-concept to establish the possibility of a certain cognitive strategy \cite{Kriegeskorte2015, Cichy2019}. For example, \citeA{Sanders2020} show that CNNs can be trained using multidimensional scaling representations to derive psychological representations of images. In other work, \citeA{Peterson2017} show correspondences between similarities in convolutional neural net (CNN) representations and human similarity judgments for natural images. They find that, while out-of-the-box CNN representations are only partially reflective of human psychological representations, they can be adapted to support a more fine-grained correspondence. The success of CNNs in predicting human similarity judgments suggests that they might also be helpful in identifying the features which inform those judgments. However, CNN representations are high-dimensional, potentially redundant, and likely include psychologically irrelevant stimulus information. An important question, given their increasing use in cognitive modeling, is how many relevant features/dimensions they really contribute and what the nature of those features might be.
In this work, we propose similarity-driven dimensionality reduction (SimDR), a method inspired by the earlier work of \citeA{Rumelhart1993} that obtains low-dimensional projections of CNN image representations which best capture human similarity judgments. Surprisingly, our method reveals that human similarity judgments remain well-preserved with up to two orders of magnitude fewer dimensions than used in previous work. This suggests that the dimensionality of psychological representations is considerably lower than that of the full set of CNN features. We further probe the individual dimensions of these representations that capture concepts essential to judging similarity, and find that most of them are interpretable. In particular, we show that broad categories are given more importance by our model than the finer ones captured in subsequent dimensions, in line with the hierarchical structure oft-found to characterize human cognition \cite{Cohen2000, Rogers2004SemanticCA}. \section{Method} \citeA{Peterson2017} show that the final representation layer of a CNN can be adapted to better predict human similarity judgments. The size of the final representation in CNNs is typically of the order of $10^3$, which makes interpretation difficult. To serve our purpose of leveraging CNN representations to understand human similarity judgments, we require representations that are interpretable and can give us insights into the actual cognitive task. \citeA{Rumelhart1993} constructed a connectionist model to mimic human similarity judgments. The model takes two stimuli as input and outputs a similarity judgment. The hidden layer is of lower dimensionality than the input, resulting in a compressed representation. Extending this idea to modern CNNs, our method (SimDR) reduces the CNN representations of images to a low-dimensional space which is optimal for predicting human similarity judgments.
If the obtained representations have sufficiently low dimensionality, we can interpret individual dimensions to see what they capture and make inferences about similarity judgments in humans. This model and the data used are explained in the following sections. \subsection{Datasets} \citeA{Peterson2017} collected six human similarity datasets for natural images drawn from the following domains: animals, vehicles, vegetables, fruits, furniture and a dataset encompassing a variety of domains (``various''). Each of these sets comprises pairwise similarity ratings from ten people for 120 images, which we employ in all analyses that follow. \subsection{Similarity-driven Dimensionality Reduction} \citeA{Peterson2017} showed that the final, fully-connected representation layer of VGG-19 \cite{Simonyan15} is most predictive of human similarity judgments, hence we use the same 4096-dimensional VGG-19 representations for all our experiments. The task of obtaining low-dimensional representations of images which capture factors underlying human similarity judgments is split by SimDR into two objectives: (a) projecting VGG-19 representations to a low-dimensional space, and (b) predicting human similarity judgments using the low-dimensional representations. These two objectives are jointly optimized leading to low-dimensional representations that are predictive of human similarity judgments. VGG-19 representations of two input images are passed through a single linear layer of small width (\textit{i.e.,} a bottleneck layer) which projects them to a lower-dimensional space. This is followed by an inner product of the outputs of the bottleneck layer to obtain the predicted similarity rating for the input pair (Fig.~\ref{fig::model_diag}). The inner product is our representational similarity measure, which contrasts with \citeA{Rumelhart1993}, and more directly generalizes the method of \citeA{Peterson2017}. For both input images, the weights of the bottleneck layer are shared. 
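The shared-bottleneck forward pass and its gradient update can be sketched as follows. This is a NumPy toy with synthetic similarity ratings standing in for the human data (the actual model was trained on 4096-dimensional VGG-19 features); the hypothetical targets are generated by a hidden rank-2 projection so that a 2-unit bottleneck suffices in principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(W, x1, x2):
    # shared linear bottleneck followed by an inner product of the projections
    return float((W @ x1) @ (W @ x2))

def sgd_step(W, x1, x2, s, lr=1e-2, l2=1e-3):
    # gradient of (prediction - s)^2 + l2 * ||W||^2 w.r.t. the shared weights W
    err = predict(W, x1, x2) - s
    grad = 2.0 * err * (np.outer(W @ x2, x1) + np.outer(W @ x1, x2)) + 2.0 * l2 * W
    return W - lr * grad

# synthetic stand-in for similarity data
d, k, n = 16, 2, 30
P = rng.normal(size=(k, d))                       # hidden "true" projection
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # unit-norm features, as in SimDR
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
S = {p: predict(P, X[p[0]], X[p[1]]) for p in pairs}

def mse(W):
    return np.mean([(predict(W, X[i], X[j]) - S[(i, j)]) ** 2 for i, j in pairs])

W = 0.01 * rng.normal(size=(k, d))                # small random init
loss_before = mse(W)
for _ in range(80):                               # a few dozen passes over all pairs
    for i, j in pairs:
        W = sgd_step(W, X[i], X[j], S[(i, j)])
loss_after = mse(W)                               # should drop well below loss_before
```

The inner-product similarity makes the problem a low-rank factorization: the model effectively learns $W^\top W$ to match the quadratic form generating the ratings.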
The weights are learned by back-propagating the loss incurred during the prediction of human similarity judgments, hence optimizing the projected representations to predict human similarity judgments. This contrasts with the method of \citeA{Peterson2017}, which learns weights for each of the 4096 input dimensions, or principal component analysis (PCA), which preserves as much information as possible as opposed to just that which is relevant to human similarity judgments (and thus may inflate the estimated intrinsic dimensionality). We first trained a separate model for each dataset. CNN feature vectors were first normalized such that their norms were one. We used mean squared error loss with L2 regularization to train each model. The L2 coefficient was selected between $10^{-3}$ and $10^3$ by $6$-fold cross-validation over the $120$ images. Further, for every dataset, the number of nodes in the bottleneck layer is varied in the range of $1-64$. We also compare the above with a simple unsupervised baseline that alternatively obtains low-dimensional representations by running PCA over the input VGG-19 representations. These low-dimensional representations are then transformed by ridge regression using the method of \citeA{Peterson2017} to predict similarity ratings. As above, we vary the number of principal components in the range of $1-64$. \begin{figure}[!t] \begin{center} \includegraphics[width=0.9\linewidth, trim=3mm 3mm 3mm 4mm, clip=true]{figs/model} \end{center} \caption{Overview of SimDR. CNN representations for an image pair are down-projected using a shared low-dimensional bottleneck layer. 
An inner product of the outputs gives predicted similarity rating for the input pair.} \label{fig::model_diag} \end{figure} \begin{table}[!b] \begin{center} \setlength{\tabcolsep}{3.5pt} \begin{tabular}{lcccc} \hline Dataset & Raw & \citeA{Peterson2017} & SimDR & PCA \\ \hline Animals & 0.58 & 0.74 & 0.64 & 0.47\\ Vehicles & 0.51 & 0.58 & 0.57 &0.51 \\ Fruits & 0.27 & 0.36 & 0.30 & 0.27\\ Furniture & 0.19 & 0.35 & 0.33 & 0.28\\ Various & 0.37 & 0.54 & 0.50 & 0.31\\ Vegetables & 0.27 & 0.34 & 0.30 & 0.32\\ \hline \end{tabular} \caption{$R^2$ scores for all datasets (SimDR values are for bottleneck layer of size 64).} \label{table::r2score} \end{center} \end{table} \begin{figure*}[!t] \begin{center} \includegraphics[width=1.0\linewidth, trim=2mm 2mm 3mm 3mm, clip=true]{figs/dimvsr2} \end{center} \caption{Explained variance ($R^2$) of our models in predicting human similarity judgments on each dataset. The dashed lines correspond to the prediction performance in \citeA{Peterson2017} when all input dimensions are used.} \label{fig::dimvsr2} \end{figure*} \section{Few dimensions predict similarity judgments} We observe for all datasets that the SimDR $R^2$ score at 64 dimensions is higher than that of the raw (untransformed) CNN representations (Table~\ref{table::r2score}). The PCA-based model performed worse than SimDR for all datasets (except for the \textit{vegetables} dataset), suggesting that supervision is much more selective of the human-relevant dimensions. We also observe that the prediction performance of SimDR quickly saturates as the number of dimensions increases beyond $10-20$, approaching the prediction performance obtained using all VGG-19 features (Fig.~\ref{fig::dimvsr2}; dashed lines). Notably, the \textit{animals} dataset requires only 6 nodes to achieve an $R^2$ score of 0.6 while the \textit{various} dataset achieves an $R^2$ of 0.49 at 6 nodes. 
These results strongly suggest that human similarity judgments can be captured by considerably fewer dimensions (by at least two orders of magnitude) than those comprising VGG-19 representations, and more generally that psychological representations as measured by similarity experiments are much lower-dimensional than CNN representations. Additional evidence for this can be seen in the intrinsic dimensionality of the CNN representations themselves without respect to human judgements. Fig.~\ref{fig::pca_data} illustrates this using PCA: cumulative variance explained is shown as a function of the number of components, for each dataset. Notably, the dimensionality elbow is both longer and later than those in Fig.~\ref{fig::dimvsr2}. Interestingly, CNNs also appear to assign equal dimensionality to all datasets (except \textit{various}), apparently much unlike humans (Fig.~\ref{fig::dimvsr2}). \section{Interpretation of low-dimensional features} Now that we have demonstrated the sufficiency of low-dimensional representations to predict similarity judgments, we can attempt to interpret the reduced dimensions. For this experiment, we focus on the top 3 datasets based on $R^2$ score---\textit{animals}, \textit{vehicles}, \textit{various}. As mentioned above, SimDR achieves an $R^2$ score of 0.6 on the \textit{animals} dataset using only 6 dimensions. On the \textit{various} dataset, it achieves an $R^2$ score of 0.49 using 6 dimensions, and an $R^2$ score of 0.45 on \textit{vehicles} dataset using 16 dimensions. We fix these as the bottleneck layer sizes for each of these datasets. The aforementioned dimensions for each of the three datasets are chosen by visually identifying an elbow in performance (Fig.~\ref{fig::dimvsr2}) such that the rate of increase in $R^2$ score is small beyond this point. We want to understand these individual dimensions; however, they may not be orthogonal. 
To address this, we further orthogonalize our low-dimensional representations using PCA to ensure that each dimension encodes unique information. We then take the top few dimensions which explain most of the variance for each dataset. This contrasts with the use of PCA above to produce a baseline reduced feature set in that it is performed after supervised dimensionality reduction. \begin{figure}[!b] \begin{center} \includegraphics[width=1.0\linewidth, trim=7mm 2mm 13mm 10mm, clip=true]{pca_data} \end{center} \caption{Cumulative variance explained in the full VGG-19 representations as a function of principal component.} \label{fig::pca_data} \end{figure} \begin{figure*}[!t] \begin{center} \includegraphics[width=1.0\linewidth, trim=6mm 8mm 6mm 6mm, clip=true]{pca_1d} \end{center} \caption{Image embeddings along the top four principal components of low-dimensional SimDR representations.} \label{fig::pca_1d} \end{figure*} \begin{figure*}[!t] \begin{center} \includegraphics[width=1.0\linewidth, trim=7mm 7mm 14mm 6mm, clip=true]{pca_2d} \end{center} \caption{Examples of image embeddings for three datasets using the top principal components of the SimDR representations.} \label{fig::pca_2d} \end{figure*} \begin{figure*}[!t] \begin{center} \includegraphics[width=1.0\linewidth, trim=3mm 2mm 3mm 2mm, clip=true]{figs/dendogram} \end{center} \caption{Dendrograms of hierarchical clustering for 2-dimensional representations and 6-dimensional representations on \textit{animals} dataset. \textit{H: Herps, B: Birds, P: Primates, R: Rodents, WC: Wild cats, G: Grazers, E: Dogs, Bears and Large animals}.} \label{fig::dendrograms} \end{figure*} \subsection{Visualizing individual dimensions} The ability of the low-dimensional representations to predict similarity indicates that they are efficiently encoding information essential for making similarity judgments. Hence, they can further be leveraged to understand what factors allow them to predict similarity judgments. 
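The post-hoc orthogonalization step described above (PCA applied after the supervised reduction) might be sketched as:

```python
import numpy as np

def orthogonalize(Z):
    # Z: (n_images, k) bottleneck outputs from the trained SimDR model.
    # Rotate to orthogonal principal axes, ordered by explained variance,
    # so each retained dimension encodes unique information.
    Zc = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    components = U * s                     # image scores along principal axes
    explained = s ** 2 / np.sum(s ** 2)    # variance ratio per axis
    return components, explained
```

The returned scores are what one would plot along "the top four principal components" of the learned representation.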
To this end, for each of the three datasets, we visualize image embeddings along the top four principal dimensions of the low-dimensional features learned via SimDR. We visualize validation images for a single fold (out of the 6 cross validation folds), though we observe that the dimensions were consistent across all folds in terms of capturing the same concepts (Fig.~\ref{fig::pca_1d}). We observe that the first dimension for each dataset appears to be largely continuous, and captures broad categories. In the animals dataset, this dimension goes from non-mammals to mammals. The first dimension of the \textit{various} dataset goes from inanimate objects to dogs and humans. The first dimension of the \textit{vehicles} dataset shows a gradation from vehicles with two or no wheels (\textit{e.g.,} sled, wheelchair) to those with four wheels (\textit{e.g.,} trucks, buses), though the interpretation in this case is not as evident, which may stem from the low variance (12\%) captured by the top component. Some of the other principal components are also apparently interpretable and interesting. For example, the second principal component of the \textit{vehicles} dataset distinguishes water transport from land transport, the third principal component of the \textit{various} dataset distinguishes natural things from artificial ones, while the fourth dimension in the \textit{animals} dataset distinguishes birds from non-birds. Each of these individual dimensions captures a different taxonomic relationship, suggesting that such relationships are important factors in determining similarity judgments of natural images. \subsection{Clusters formed by pairs of dimensions} \label{subsec::clusters} As an alternative visualization strategy, we explore 2D projections of the image representations along two of the top four principal components in Fig.~\ref{fig::pca_2d}. 
These plots are useful in observing clusters of images formed by a combination of principal components, where each cluster tells us what kind of images are considered similar by the model. Echoing \citeA{Peterson2017}, we observe clusters for herptiles, primates, birds, wild cats, rodents, and grazers in the \textit{animals} dataset. We see clusters for human faces and body parts, animals, vegetables, houses, and natural things in the \textit{various} dataset. The \textit{vehicles} dataset shows distinct clusters for trains, bikes, horses, airplanes, and tanks. \subsection{Hierarchical similarity and bottleneck effects} Next, we analyze the effect of changing the width of the bottleneck layer. We know that increasing the width improves prediction performance. Here, we are interested in interpreting the information captured by different bottleneck sizes. To visualize this, we explore dendrograms \cite{Shepard390} for the \textit{animals} dataset. Fig.~\ref{fig::dendrograms} shows that when the size of the bottleneck layer in SimDR is 2, two clusters---primates and non-primates---are formed. This suggests that belonging to the primate group is the most important trait influencing similarity judgments in the \textit{animals} dataset, which is encoded in as little as two dimensions. At a bottleneck size of 6, however, further hierarchical structure can be seen where many more categories are present. At intermediate sizes between 2 and 6, additional clusters continue to emerge (not shown). The hierarchical structure formed by the 6-dimensional representations is closely related to that formed using human similarity data in \citeA{Peterson2017}. We observe that increasing the bottleneck width introduces further categorical distinctions in other datasets too. For the \textit{various} dataset, at a bottleneck width of 4, we observe distinct clusters for animals and humans (and their body parts). 
In the case of the \textit{vehicles} dataset, 4-dimensional bottleneck layer representations preserve distinctions based on wheels. Hence, these are primary traits influencing similarity judgments which are captured at small bottleneck widths. These results motivate a hierarchical organization of factors underlying human similarity judgments in our model, providing empirical results consistent with mathematical theories of hierarchical semantic knowledge organisation in neural networks \cite{Saxe2019}. \section{Shared features across domains} We have seen that each of the six individual SimDR models can discover low-dimensional representations which are predictive of similarity judgments separately for each domain. A natural question that follows from this is whether the dimensions learned by these models trained on specific domains are also shared across domains. Translating this into the framework of human judgments, the question we pose is the following: do different domains share factors underlying human similarity judgments? \subsection{Canonical Correlation Analysis} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth, trim=0mm 5mm 22mm 15mm, clip=true]{figs/cca_heatmap} \caption{Inter-domain relatedness ($R^2$) as measured by regularized CCA between all domain pairs.} \label{fig:cca} \end{figure} We use L2-regularized canonical correlation analysis (CCA; \citeNP{Bilenko2016}) to evaluate the degree of shared information or factors between low-dimensional representations belonging to any two domains. From each of the six models trained on individual domains, we obtain 64-dimensional representations for all pairs of images (from all 6 datasets). We then perform regularized CCA on 64-dimensional representations from every pair of domains. We observe in Fig.~\ref{fig:cca} that the $R^2$ score is highest for \textit{fruits} and \textit{vegetables}, followed by \textit{animals} and \textit{vehicles}. 
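A whitening-plus-SVD sketch of L2-regularised CCA (illustrative only; the analysis itself used the method of \citeNP{Bilenko2016}):

```python
import numpy as np

def reg_cca(X, Y, reg=1e-2):
    # X, Y: (n_samples, dims) representation matrices from two domain models.
    # Ridge-regularise each within-set covariance, whiten both blocks, then
    # take singular values of the cross-covariance as canonical correlations.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)  # sorted descending
```

High leading canonical correlations between two domains' 64-dimensional representations indicate shared latent factors, which is how the inter-domain relatedness in Fig.~\ref{fig:cca} is quantified.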
This implies that the model trained on fruits and the model trained on vegetables have overlapping latent factors and hence, their similarity predictions are also based on some common factors. The same is true for \textit{animals} and \textit{vehicles} datasets. While it seems reasonable for \textit{fruits} and \textit{vegetables} to share common factors for similarity, the relationship between \textit{animals} and \textit{vehicles} is less clear, although we suspect it may have something to do with common backgrounds (which often contain scene information such as grass, sky, and water, unlike our other categories). \subsection{Domain-agnostic SimDR} To determine whether a more general set of dimensions could be learned that generalizes across domains, we trained a SimDR model on image pairs from all six datasets using 6-fold cross-validation. We compared this to models trained on individual domains and tested on all others to assess how they generalize on their own. The results, shown in Fig.~\ref{fig:pooled}, reveal that the pooled model nears saturation at a few hidden dimensions. Hence, even with a diverse dataset, few dimensions are enough to predict similarity judgments. Next, we see that the domain-specific models do not generalize well when tested on all datasets, lending credibility to our earlier claim that these models learn dimensions which are specific to individual domains. Lastly, Fig.~\ref{fig:pooledtest} shows the performance of the pooled model in predicting individual domains, and reveals that certain domains (\textit{animals}, \textit{vehicles}, \textit{various}) are well-explained by general features learned from the pool of all domains, while others require more domain-specific features (\textit{vegetables}, \textit{fruits}, \textit{furniture}). 
\begin{figure}[!b] \centering \includegraphics[width=1.0\linewidth, trim=6mm 2mm 16mm 14mm, clip=true]{figs/dimvsr2_pooled} \caption{Performance of models tested on all domains (with varying bottleneck layer size). The dashed line shows the performance of the model trained on all domains in \citeNP{Peterson2017}. Solid lines correspond to models trained on different datasets.} \label{fig:pooled} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=1.0\linewidth, trim=6mm 2mm 16mm 14mm, clip=true]{figs/pooled1} \caption{Performance of pooled model tested on individual domains and on all domains (with varying bottleneck layer size). The dashed line shows the performance of the model trained on all domains in \citeNP{Peterson2017}. Solid lines correspond to the pooled model tested on different datasets.} \label{fig:pooledtest} \end{figure} \section{Conclusion} Our work shows that CNN representations can be transformed to lower dimensions---where interpretation is far less cumbersome---while still being predictive of similarity judgments. We also observe that only a few dimensions are required to predict psychological representations as opposed to a considerably larger, full set of CNN features. This finding is interesting because the deep feature sets increasingly being used in both cognitive modeling \cite<for a review, see>{ma2020neural} and neuroscience \cite{Kriegeskorte2015,kietzmann2018deep,Cichy2019} are much higher-dimensional. Indeed, some work may already suggest that our findings could generalize to modeling neural activity as well \cite{mur2013human}, though future work must bear this out. Moreover, in this low-dimensional space, we are able to visualize individual dimensions and show that they code for unique concepts. Hence, they provide insights into potential factors that influence human similarity judgments, and potentially various other visual tasks. 
We observe that increasing the size of the bottleneck layer introduces finer levels of distinction, mirroring hierarchical clustering in human cognition. These results together show the ability of CNN representations to both predict and explain human similarity judgments using a few dimensions. This work takes a step towards showing that psychological representations can be predicted by far fewer dimensions than used in CNNs; and that they are not only quantitatively predictive of human similarity judgments but provide insights about how people make similarity judgments. We think our approach can help bridge the interpretation gap between CNN representations and psychological representations by providing interpretable factors which influence human similarity judgment. \section{Acknowledgements} This work was supported by the National Science Foundation (grant number 1718550), and the School of Engineering and Applied Sciences at Princeton University. \bibliographystyle{apacite} \setlength{\bibleftmargin}{.125in} \setlength{\bibindent}{-\bibleftmargin}
\section{INTRODUCTION} \label{sec:intro} In the standard model, radio pulsars are highly-magnetised rapidly rotating neutron stars that emit a coherent light-house beam of often highly polarised radio emission directed by their magnetospheres \citep{2004hpa..book.....L}. The weak braking torques caused by their rapidly rotating magnetic fields and their high moments of inertia make them extremely stable flywheels, and it is often possible to predict the pulsar spin period and indeed pulse phase years in advance of observation \citep{1992RSPTA.341..117T}. Most radio pulsars emit irregular single pulses that usually sum, within $\sim$1000 rotations, to an average profile that is often remarkably constant \citep{2012MNRAS.420..361L}. Timing of these mean profiles against a template produces an arrival time which can be used to derive a model of the pulsar's spin-down, astrometric and binary parameters and propagation through the ionised interstellar medium. The frequency dependence of the pulse arrival time is well described by the cold plasma dispersion relation, and allows observers to compute the column density of free electrons along the line of sight to the observer. This column density is referred to as the pulsar's dispersion measure (DM). The most accurate pulse arrival times require observers to remove the broadening of the pulse profile across the finite channel bandwidths using a process known as coherent dedispersion \citep{1975MComP..14...55H} and to accurately monitor changes in the DM \citep{2013MNRAS.429.2161K}. According to version 1.62 (Feb 2020) of the Australia Telescope National Facility pulsar catalogue\footnote{https://www.atnf.csiro.au/research/pulsar/psrcat/} \citep{mhth05} there are currently 2800 pulsars known, $\sim 97$\% of which are visible at radio wavelengths. Radio pulsars range in pulse period ($P$) from 1.4\,ms to 23.5\,s, and have inferred dipolar magnetic field strengths from 5$\times10^7$\,G to $\sim$10$^{15}$\,G.
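As an illustrative aside not taken from the paper, the dispersion delay implied by the cold-plasma relation is simple to compute: the delay scales as $\mathrm{DM}/f^2$ with the standard dispersion constant.

```python
K_DM = 4.148808e3  # s MHz^2 cm^3 pc^-1, the usual dispersion constant

def dispersion_delay_s(dm, f_lo_mhz, f_hi_mhz):
    # Extra arrival-time delay (seconds) of the lower frequency relative to
    # the higher one, for dispersion measure dm in pc cm^-3, from the
    # cold-plasma relation t = K_DM * DM / f^2.
    return K_DM * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)
```

For example, at DM = 100\,pc\,cm$^{-3}$ across an 856--1712\,MHz band the pulse is smeared by roughly 0.42\,s, which is why coherent dedispersion and accurate DM monitoring matter for precision timing.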
Over 10 percent of known pulsars are members of binary systems, and the majority of these are the so-called `recycled pulsars', that have had their magnetic fields weakened and spin periods shortened by mass accretion from a donor \citep{1991PhR...203....1B}. The fastest pulsars ($P<20$\,ms) are usually referred to as `millisecond pulsars' (MSPs). These pulsars are often in very clean systems well approximated by point masses and are ideal for tests of gravitational and stellar evolution theories \citep{2016ApJ...829...55W}. State of the art pulsar timing allows us to measure pulse arrival times to better than one part in 10$^4$ of the pulse period \citep{2001Natur.412..158V}, leading to sub-microsecond arrival times for the MSPs. In their most recent data release (DR2) the International Pulsar Timing Array\footnote{www.ipta4gw.org} lists an rms timing residual for the bright 5.7\,ms MSP PSR~J0437$-$4715 of just 110\,ns and 14 others with residuals below 1\,$\mu$s \citep{2019MNRAS.490.4666P}. Modern radio telescopes can detect radio pulsars with a mean flux density (i.e. averaged over the pulse period) down to just a few $\mu$Jy in very deep pointings, and the large-scale surveys of much of the galactic plane are complete to $\sim$0.1\,mJy \citep{2015MNRAS.450.2922N}. The population exhibits a standard log-$N$/log-$S$ distribution consistent with a largely planar population with a slope of $\sim-$1. The most compelling pulsar science is usually derived from accurate pulse timing, which for most pulsars is signal-to-noise limited as the vast majority of known pulsars have flux densities less than 1\,mJy at 1400\,MHz. For this reason the field has been dominated by the world's largest radio telescopes that possess low-temperature receivers and digital backends capable of coherently dedispersing the voltages induced in the receiver by the radio pulsars.
These telescopes can produce the high signal-to-noise profiles required to test theories of relativistic gravity, determine neutron star masses, clarify the poorly-understood radio emission mechanism, and relate the latter to the magnetic field topology. The galactic centre is at declination $\delta=-29^\circ$ and this makes the Southern hemisphere a particularly inviting location for pulsar studies. For many years the Parkes 64\,m telescope has had almost exclusive access to radio pulsars south of declination $\delta=-$35$^\circ$, and consequently discovered the bulk of the pulsar population. When choosing a site and host country for the forthcoming Square Kilometre Array \citep{2009IEEEP..97.1482D} SKA1-mid telescope, the strong pulsar science case made Southern hemisphere locations particularly desirable. MeerKAT \citep{jonas2009} is the South African SKA precursor telescope located at the future site of SKA1-mid and the full array has four times the gain (i.e.~2.8\,K/Jy) of the Parkes telescope (0.7\,K/Jy). The first receivers (L-band) to come online possess an excellent system temperature ($\sim$18\,K) along with 856\,MHz of recorded bandwidth. The pulsar processor often just records the inner 776\,MHz of this for science purposes. The telescope is located at latitude $-30^{\circ}43'$ and is ideal for studies of the large population of southern pulsars and those in the Large and Small Magellanic Clouds. Much of pulsar science is signal-to-noise limited until a pulsar hits its `jitter limit' (the lowest timing residual obtainable due to pulse-to-pulse variability in the individual pulses; see \citealt{2014MNRAS.443.1463S}). When far from the jitter limit, the timing error is inversely proportional to the signal-to-noise ratio. For most pulsars in the Parkes Pulsar Timing Array \citep{2013PASA...30...17M}, the limit is rarely reached when observed with the Parkes 64\,m telescope unless the pulsar is experiencing a scintillation maximum \citep{sod+14}.
A notable exception is the bright MSP PSR~J0437$-$4715, that is always jitter-limited when observed at the Parkes telescope \citep{2011MNRAS.418.1258O} due to its large 1400\,MHz mean flux density of 150\,mJy \citep{2015MNRAS.449.3223D}. As telescopes become more sensitive, the number of pulsars that are jitter-limited in the same integration time increases. The South African Radio Astronomy Observatory (SARAO) owns and operates MeerKAT and, before it was commissioned, called for Large Survey Projects (LSPs) that could exploit the telescope's scientific potential. The MeerTime\footnote{http://www.meertime.org} \citep{2018arXiv180307424B} collaboration was successful at obtaining LSP status and commenced its first survey observations in February 2019. This paper reports on MeerTime's validation of MeerKAT as a pulsar telescope and presents some early science results from its four major themes: Relativistic and Binary Pulsars, the Thousand Pulsar Array \citep{johnstonetal2020}, Globular Clusters and the MeerKAT Pulsar Timing Array. A glimpse of MeerKAT's potential as a pulsar telescope was presented by \citet{2018ApJ...856..180C} when it was part of a campaign that observed the revival of the magnetar PSR~J1622$-$4950. Since then there have been a number of developments of the system that enable a wider range of pulsar observing modes that will be discussed forthwith. The structure of this paper is as follows: In section \ref{sec:MKP}, we provide an overview of MeerKAT as a pulsar telescope including examples of the UHF and L-band radio bands, the Precise Time Manager (PTM), choice of polyphase filterbanks, and the SKA1 prototype pulsar processor PTUSE developed by Swinburne University of Technology. In section \ref{sec:SV}, we describe our validation of the system and pulsar hardware before presenting new science from observations of selected pulsars and globular clusters in section \ref{sec:RESULTS}.
Finally, we briefly discuss some of the prospects for the future of this facility including new modes, receivers and extensions, and the ultimate expansion to become SKA1-mid in section \ref{sec:discussion}. \section{The MeerKAT telescope as a Pulsar Facility} \label{sec:MKP} A high-level block diagram of the system is provided in Figure \ref{fig:block_diagram}, which describes the system all the way from the antennas to the final data product archive. The backend system design largely followed the CASPER philosophy of offloading as much as possible of the transport between the digital subsystems to commodity-off-the-shelf (COTS) components and industry-standard protocols (e.g. Ethernet) carried over commercial switches; it interfaces to the pulsar processor, which is itself a modern server comprised entirely of COTS components. \begin{figure*} \centering \includegraphics[scale=0.7]{Meertime_BD_V2.pdf} \caption{ A block diagram of the signal chain for MeerTime observations. Signals from all antennas are digitised in the field and sent to the correlator-beamformer (CBF) engine in the building via the CBF switch. For pulsar observations, the CBF performs channelisation (1K or 4K mode) and beamforms at the requested sky position. The beamformed voltages are sent to the PTUSE machines via the CBF switch at a data rate of up to 24.7 Gb/s. Each PTUSE node can process one beam/sub-array. In each PTUSE node, the incoming voltages are temporarily stored in a ring buffer from which they are sent to the two GPUs, each processing one half of the band. The GPUs perform coherent dedispersion and square law detection to obtain 16-bit, full Stokes data, which, depending on the observation, is either folded into pulsar archives or scrunched into 8-bit, total intensity filterbank data. The GPUs write the end products to the NVME disks via the CPUs, from which the two bands are stitched together and transferred to the local disks.
The baseband recorder, when triggered, can dump baseband data to the NVME disks for a period of about 40 minutes. The data from the local disks are eventually transferred to the MeerKAT archive in Cape Town, from which they are transferred to international mirror sites. } \label{fig:block_diagram} \end{figure*} \subsection{The MeerKAT Radio Frequency Spectrum} \label{RFS} MeerKAT is located in the Karoo, some 450 km north-east of Cape Town in the Northern Cape Province. Its low population density makes it an attractive site to pursue radio astronomy. The low-frequency HERA experiment \citep{2017PASP..129d5001D} and the future 197-dish SKA1-mid telescope \citep{2009IEEEP..97.1482D} -- of which MeerKAT will be a part -- will be located at the site, which is protected by legislation against radio transmissions in many bands of relevance to radio astronomers. The technology behind low-noise amplifiers and radio receivers has greatly improved since the dawn of radio astronomy in the 1950s. Whilst even just a couple of decades ago it was necessary to sacrifice fractional bandwidth to minimise system temperature, new engineering practices and technologies now permit the development of low-noise ($\sim$20\,K) receivers over a full octave or more of bandwidth \citep[e.g.][]{2019arXiv191100656H}. The original MeerKAT radio telescope specification had a target effective collecting area per unit receiver temperature of $A_{\rm eff}/T$ = 220 m$^2$/K but remarkably achieved 350--450~m$^2$/K (depending upon radio frequency), well over a factor of 2 increase in observing efficiency over the design specification. These figures equate to a system equivalent flux density of $\textrm{SEFD}=T/G\sim7$\,Jy, where $T$ is the system temperature in K and $G$ is the total antenna gain $G= A{\eta}/(2k)$, where $A$ is the collecting area, $\eta$ is the aperture efficiency and $k$ is Boltzmann's constant.
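These figures can be cross-checked with a one-line calculation; a minimal sketch, assuming only the equivalent relation $\textrm{SEFD} = 2k/(A_{\rm eff}/T)$ and the conversion 1\,Jy = $10^{-26}$\,W\,m$^{-2}$\,Hz$^{-1}$:

```python
# Sketch: convert the quoted A_eff/T figures into an SEFD using
# SEFD = T/G = 2k / (A_eff/T), with 1 Jy = 1e-26 W m^-2 Hz^-1.
K_BOLTZMANN = 1.380649e-23  # J/K

def sefd_jy(aeff_over_t_m2_per_k):
    return 2.0 * K_BOLTZMANN / aeff_over_t_m2_per_k / 1e-26

# The achieved 350--450 m^2/K corresponds to roughly 6--8 Jy,
# consistent with the ~7 Jy quoted above:
sefd_mid_band = sefd_jy(400.0)
```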
The total MeerKAT antenna gain is 2.8\,K/Jy and the system temperature about 18\,K in the optimal location of the 1400\,MHz band. The receivers have two orthogonal linear polarisations (H and V for horizontal and vertical respectively). \begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \includegraphics[angle=0,width=\columnwidth]{J0737-3039A_UHF_Bandpass.png} \caption{UHF receiver bandpass for Stokes I (544--1088\,MHz).} \label{fig:UHF_bandpass} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \includegraphics[angle=0,width=\columnwidth]{J0737-3039A_L_1024ch_Bandpass.png} \caption{L-Band receiver bandpass for Stokes I (856--1712\,MHz)} \label{fig:Lband_bandpass} \end{subfigure} \caption{The post-calibration bandpasses of a tied-array beam, for MeerKAT's UHF and L-Band receivers. The flux density scale is arbitrary. It is often difficult to completely flatten the band in regions of persistent interference such as that near 1530--1600\,MHz.} \label{fig:bandpasses} \end{figure*} In Figure \ref{fig:bandpasses} we present the radio spectrum as observed from the MeerKAT site for the UHF (544--1088\,MHz) and L-band (856--1712\,MHz) receivers taken from observations of the double pulsar PSR~J0737$-$3039A. Pulsar observations are usually made with 1024 or 4096 frequency channels. The UHF band is remarkably clean, with just some small (strong but narrow-bandwidth) residual mobile phone-related transmissions visible around 940\,MHz. In most countries that house large-diameter (64\,m class and above) telescopes, the UHF band is so badly polluted by digital television and mobile phone transmissions that it is often unusable except in very narrow frequency windows some tens of MHz wide. Both of the SKA sites appear to have been chosen well and offer a renewed opportunity to explore the Universe at these frequencies.
For pulsars, this is especially relevant, as most possess steep spectra \citep{1998ApJ...506..863T, 2000A&AS..147..195M, 2018MNRAS.473.4436J}, with spectral indices between $-1$ and $-3$ above 1 GHz. The 1400-MHz (L-band) receiver band is not as pristine as the UHF band, but still has much of the spectrum available for science (see a quantified analysis below), depending upon the flux density of the target pulsar. The tied-array beam helps dilute interfering signals by dephasing them, but the large number of bits in the digitisers and beamformer that deliver accurate channelisation of the data has one drawback: the interference-to-noise ratio can be extremely high. This makes deletion of at least some frequency channels essential before integration across frequency channels. Like all modern observatories, the 1400\,MHz band suffers from transmissions from Global Navigation Satellite Systems (GNSS) and other satellites that are extremely strong and impossible to avoid. The small apertures (and hence large side-lobes) that the 14-m dishes of MeerKAT provide make satellite transmissions in the band almost omnipresent. The L-band spectrum is shown in Figure \ref{fig:Lband_bandpass}. The first of the S-band ($1.75$--$3.5$\,GHz) receivers of the Max-Planck-Institut f\"ur Radioastronomie \citep{2016mks..confE...3K} is currently being installed and tested. When fully installed these will provide the possibility of performing high precision timing at even higher frequencies. \subsection{Precise Time Systems in MeerKAT} \subsubsection{Background on requirements} \label{PTM} The SKA phase I (comprising SKA1-low and SKA1-mid) has been strongly motivated by two key science projects, the Epoch of Reionisation and strong-field tests of gravity respectively \citep[e.g.,][]{2016mks..confE...3K}.
Despite the advent of direct gravitational wave detection \citep{Abbott+16} and black hole imaging \citep{EHT+19}, a number of precision strong-field tests can only be achieved at radio wavelengths using pulsars. This includes the detection of a gravitational wave background from supermassive black hole binaries using pulsar timing arrays \citep[e.g. ][]{2015Sci...349.1522S, Lentati+15, Nanograv11yr}, which will require timing an ensemble of MSPs to precisions well below 100\,ns and possibly down to 10\,ns. In order to achieve such precisions, two of the SKA1 specifications are especially relevant: one relates to the calibration of the polarimetry, errors in which otherwise lead to systematic timing errors \citep[see, e.g.,][]{fkp+15}, and the other to the knowledge of absolute time with respect to Coordinated Universal Time (UTC) over a full decade. The SKA1-mid specification on calibratable polarisation purity is $-$40\,dB, and on time it is 5\,ns over 10 years. The precise time systems in MeerKAT are specified to provide time products accurate to better than 5\,ns, relative to UTC. The telescope was designed for absolute timing, not just the relative timing which is the norm. This is achieved via a first-principles approach, by managing the time delays associated with every element of the geometric and signal paths. \subsubsection{Realisation of system timing for the MeerKAT telescope} MeerKAT has defined a reference point on the Earth to which all time is referred. It is a location a few hundred metres approximately north of the centre of the array with the ITRF coordinates X=5109360.0, Y=2006852.5, Z=$-$3238948.0\,m, each $\pm$0.5\,m. The position of the reference point was chosen as the circumcentre of the array, roughly a metre above the ground. There is no antenna at this point; all pulsar timing is ultimately referred to the moment when an incident radio wave would have struck this point.
This location for MeerKAT is installed in the observatory coordinate file for the pulsar timing software \textsc{tempo2} \citep{Hobbs+06}, and used for all work in this paper. The coordinates of the source, antenna and UT1 define the geometric delay, and every attempt is made to derive the further delays incurred by the signal as it reflects off the telescope surfaces, passes through the feed/receiver/cable/filter and is ultimately sampled by the analogue-to-digital converters (ADCs). Round-trip measurements account for cable and fibre delays and are accurate to typically 1\,ns. Estimates of the error in each stage of the path are also recorded for later dissemination to the pulsar processor to be recorded with the data. At each antenna, a digitiser is mounted close to the focus (see the lower panel in Figure \ref{fig:time_transfer}). The digitiser ADC is driven by the digitiser sample clock, derived from the maser frequency standards and disseminated directly to the digitisers by optical fibre. The input voltages are sampled at the Nyquist rate after passing through an analogue filter for the selected band. The L-band receiver digitises the data at exactly 1712\,MHz (real samples) and passes the second Nyquist zone (i.e., the top half of the band, 856--1712\,MHz) to the correlator-beamformer via optical fibre Ethernet. The digitiser maintains a 48-bit ADC sample counter, used to tag every packet. The counter is reset every day or two by telescope operators, before it overflows. The 10-bit digitiser offers excellent resistance to radio frequency interference and makes it possible to confine RFI to only the relevant frequency channels unless it causes saturation of the ADC. \begin{figure} \centering \includegraphics[scale=0.36]{time_transfer.pdf} \caption{MeerKAT time systems showing time transfer from GNSS, the local clock ensemble and transfer of time to the digitisers.
Here Rb stands for the element Rubidium, WR is the White Rabbit sub-ns accuracy Ethernet-based time transfer system, PPS is the 1 pulse-per-second signal used in precision timing experiments, and FPGAs are Field Programmable Gate Arrays. See text for further details.} \label{fig:time_transfer} \end{figure} Optical timing pulses are generated by the Time and Frequency Reference (TFR) system in the processor building and disseminated to all antennas via dedicated optical fibres. The digitiser records the time of arrival of the optical pulse from the masers; this pulse is also used to reset the digitiser sample clock when necessary. The timing fibres are buried 1\,m deep to minimise diurnal temperature variations, but change length over the year as the site temperature varies. A round-trip measurement system continuously measures their length by timing the round trip of the timing pulse as reflected back from the digitiser \citep{spie2017, ifcs2018nr2}. This falls within a general class of time-of-flight measurement which is traceable to the SI unit of time \citep{terra}. In the correlator, prior to channelisation, the signals from all antennas are buffered and aligned using the ADC sample counter. Integer samples of delay are applied to compensate for physical delays. A multi-tap Hann-window polyphase filterbank is used to filter the data, along with a phase gradient to eliminate the remaining (sub-sample) geometrical delay. Pulsar timing observations at L-band usually use the 1024 channel mode, giving a time resolution of $1/B\sim$1.196\,$\mu$s, where $B$ is the channel bandwidth. The narrowest mean MSP profile features are tens of $\mu$s in width, although `giant' pulses from the Crab pulsar have been observed with structure on timescales down to 1\,ns \citep{2003Natur.422..141H}.
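The quoted time resolution follows from the critically sampled channelisation: each channel has width $BW/N_{\rm chan}$ and its complex voltages are sampled at that rate. A small sketch of this bookkeeping (illustrative only):

```python
# Sampling interval of critically sampled channelised voltages:
# each channel is BW / N_chan wide, so tsamp = N_chan / BW.
def tsamp_us(n_chan, bw_mhz):
    return n_chan / bw_mhz  # result in microseconds when BW is in MHz

tsamp_l_1k = tsamp_us(1024, 856.0)    # L band, 1024 channels: ~1.196 us
tsamp_uhf_1k = tsamp_us(1024, 544.0)  # UHF, 1024 channels: ~1.882 us
```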
The Precise Time Manager (PTM) is a program that collects and aggregates all known delays in the system: geometric, physical delays in the antenna, analogue and digital delays in the receiver and digitiser, the correlator delay and the digitiser clock offset. PTM computes the \textit{time that the wavefront corresponding to a certain ADC sample crossed the array phase centre}, and the uncertainty in this time. This time is passed to PTUSE for recording in the header of the pulsar observation. In September 2019, the uncertainty in this value was 3 to 4\,ns. \subsubsection{Tracking of telescope time: Karoo telescope time} The station clock is referred to as KTT (Karoo Telescope Time). This timescale is generated by the TFR subsystem using an ensemble of two active hydrogen masers, two Rubidium clocks and a quartz crystal. It is a physical timescale, defined at a connector in the system. The masers drift by typically just a few ns per day; KTT is kept within $\sim$1\,$\mu$s of UTC by adjusting their synthesized frequency every few months to keep them in defined offset bands. The TFR system provides a 10\,MHz frequency reference from the maser currently in use. The fractional frequency offset is kept smaller than $2\times10^{-13}$ with respect to UTC; no significant drift in time occurs during any observing campaign. Frequency synthesizers in the building, one per band, are locked to the reference frequency and generate the 1712\,MHz and 1088\,MHz sample clocks for distribution to the digitisers. The TFR also provides accurate time to other components of the telescope via the Precision Time Protocol (PTP): this is used, amongst other things, in the pointing of the telescope and control of the precision timing systems \citep{ifcs2018nr2}. Pulsar timing requires the difference between UTC and KTT to be measured.
At MeerKAT, this is done with a set of calculations that derive an ensemble time using interclock differences between five clocks, and measurements of four clocks with respect to GPS via two dual-band GNSS receivers and two single-band GPS-only receivers \citep{spieKTT}. The multiple measurements then lead to clock solutions for each of the clocks in the ensemble via linear combinations of the different measurements. The usage of multiple clocks enables the detection of errors or instability in any one of the clocks and yields a lower-variance estimate for each, as with the standard ensembling used in timescale generation \citep{levine}. Reference is made to UTC via common-view comparison with the National Metrology Institute of South Africa (NMISA) in Pretoria, and by direct comparison with UTC(USNO) via GPS time dissemination. The uncertainty of the absolute time difference between KTT and UTC is specified to be less than 5\,ns. At present, the systematic (non-varying) offset is only \emph{stated} to 50\,ns because verification of the absolute offset calibration is still being undertaken. The offset calibration was performed using absolutely calibrated GPS receivers before the main observations were undertaken, and will in future be done using an EMC-quiet calibrator \citep{ifcs2018nr1} in order not to disrupt the observations. The repeatability between observations is thought to be about 5\,ns, implying that the systematics and the stability will finally converge to the latter number; final absolute calibration via a GPS simulator traceability chain will lower the absolute offset to $<$1\,ns. As we shall see later on, there is evidence from our pulsar timing results that we are approaching these levels of clock correction/stability.
Furthermore, the clock tracker is currently being improved from a semi-real time predictive method to a fully post-facto, non-causal filtering type, using Savitzky-Golay filtering \citep{sg64}, for which provisional internal self-consistency checks suggest a numerical error of $<$1\,ns. Post-facto, non-causal calculation \citep{levine} is always better in improving timing compared to real-time `UTC-like' timescale estimation, due to an increased data set, administrative oversight, the ability to correct for non-idealities and the inherent outperformance of smoothers as compared to causal filters \citep{Einecke, jensen2012}. \subsection{The Polyphase Filterbanks} \label{PFB} The MeerKAT channeliser (the F-engine) uses a polyphase filterbank (PFB) to channelise the digitised bandwidth into 1024 or 4096 critically sampled frequency channels with configurations described in Table \ref{tb:fengine-configuration}. A PFB filter design with a 16-tap Hann window was initially deployed, aiming to achieve high sensitivity for continuum mapping with minimal bandwidth losses. This design uses only 6\,dB of attenuation at the channel edges, which however gives rise to significant aliasing from adjacent channels in pulsar observations. To address this, an alternative 16-tap Hann window design was implemented that provides superior spectral purity at a modest price in sensitivity (due to reduced effective bandwidth). This was achieved by reducing the 6\,dB cut-off frequency to 0.91 times the channel width, sacrificing $\sim$5\% of the sensitivity to reduce the leakage by 10\,dB. Delay compensation is done in the F-engine. The requested delay polynomial is re-computed at every FFT. Coarse delay is done with a whole-sample delay buffer before the PFB. Fine delay is applied by phase rotation after the PFB.
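The coarse/fine delay split can be sketched as follows; this is an illustration of the scheme, not the FPGA implementation, and the sample rate and delay values here are arbitrary:

```python
import numpy as np

# Split a requested delay into a whole-sample shift (applied as a buffer
# offset before the PFB) and a sub-sample residual (applied after the PFB
# as a phase gradient across the channels).
def split_delay(delay_s, f_samp_hz):
    delay_samples = delay_s * f_samp_hz
    coarse = int(round(delay_samples))             # whole-sample buffer shift
    fine_s = (delay_samples - coarse) / f_samp_hz  # residual, under half a sample
    return coarse, fine_s

def fine_delay_phasors(fine_s, chan_freqs_hz):
    # Multiplying each channel by exp(-2*pi*i*f*tau) realises the residual delay.
    return np.exp(-2j * np.pi * chan_freqs_hz * fine_s)

coarse, fine = split_delay(3.7e-9, 1.712e9)  # e.g. a 3.7 ns delay at the L-band rate
```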
The channelised voltages in each packet from the F-engine are time-stamped with the same 48-bit counter as the original digitiser voltages, delayed by the delay tracking system and the impulse response of the channeliser. The F-engine operates internally with 22-bit complex numbers; these are requantised to 8 bits real + 8 bits imaginary for transmission over the network. The requantisation gain is chosen to provide adequate resolution on the quietest channels, which results in some of the strongest RFI channels being clipped. The requantisation gain register is also used to flatten the bandpass, equalise the gain of the H and V polarisations, and correct for per-channel phase variations found during phase-up. \begin{figure} \centering \includegraphics[angle=0,width=\columnwidth]{hann_fir_response.pdf} \caption{Magnitude response of the two evaluated channeliser filter designs. The original filter design (dashed) led to significant artifacts in pulsars with high ratios of dispersion measure to period -- see text.} \label{fig:hann_fir_response} \end{figure} A comparison of the shape of the transfer function for the two modes is shown in Figure \ref{fig:hann_fir_response}. The magnitude response at the channel boundaries ($-$0.5 and $+$0.5) determines the level of spectral leakage artifacts in pulsar timing observations. This effect is discussed in section \ref{leakage}, which shows that the 0.91 filter design greatly reduces the artifacts and hence why this design has been deployed for most MeerTime observations (exceptions are those made with the 4096 channel mode). \begin{table*} \caption{MeerKAT F-Engine Configurations, with each producing dual polarisations quantised to 8 bits per sample. Frequencies and bandwidths are quoted in MHz and the sampling interval in microseconds.} \label{tb:fengine-configuration} \begin{threeparttable} \begin{tabular}{llllll} \hline Band & Approx.
Centre Frequency & Bandwidth & Channels & Sampling Interval & Data Rate \\ & (MHz) $\dagger$ & (MHz) & ($N_{\rm chan}$) & ($\mu$s) & (Gbits/s) \\ \hline L & 1284 & 856 & 1024 & 1024/856 & 27.392 \\ L & 1284 & 856 & 4096 & 4096/856 & 27.392 \\ UHF & 816 & 544 & 1024 & 1024/544 & 17.408 \\ UHF & 816 & 544 & 4096 & 4096/544 & 17.408 \\ \hline \\ \end{tabular} \begin{tablenotes} \item[$\dagger$] The F-engine PFB implementation lowers the precise centre frequency of all channels by half a fine channel width (i.e. by $BW / N_{\rm chan} / 2$), where $BW$\, is the total bandwidth. For example, the precise centre frequency of the first channel for L band with 1024 channels is 856\,MHz. \end{tablenotes} \end{threeparttable} \end{table*} \subsection{The Beamformer} \label{Bengine} The MeerKAT beamformer (the B-engine) creates a dual polarisation tied-array beam by adding together the channelised complex voltages for all antennas, as produced by their individual F-engines. Thus the beam is also Nyquist-sampled like the antenna voltages. The B-engine is distributed among (typically) 64 SKARABs (custom boards developed by SARAO for digital signal processing designed to be used with the CASPER tools \citep{2016JAI.....541001H}), each processing a subset of frequencies for all antennas. Samples are aligned by a time-stamp before addition. A per-antenna real gain is provided for beam shaping; this is generally left at unity, but can be set to zero to eliminate an antenna from the beam, perhaps if it is not working properly. The B-engine output is also an 8-bit complex number and uses a requantisation gain to scale down the sum of the antenna voltages; this is typically scaled by $1/\sqrt{N_{\rm ants}}$. The output data is sent back onto the switch as {\sc spead}\footnote{https://casper.ssl.berkeley.edu/wiki/SPEAD} streams, one for each polarisation, using UDP multicast for consumption by downstream users. 
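The $1/\sqrt{N_{\rm ants}}$ scaling keeps the requantised noise level independent of sub-array size while preserving the coherent gain. A toy numpy sketch (illustrative only, with arbitrary numbers of antennas and samples):

```python
import numpy as np

# Toy tied-array sum: uncorrelated receiver noise from N antennas plus a
# common (sky) signal. Scaling the sum by 1/sqrt(N) leaves the noise power
# at the single-antenna level, while the correlated signal power grows as N.
rng = np.random.default_rng(1)
n_ants, n_samp = 60, 8192
noise = rng.normal(size=(n_ants, n_samp))   # independent per-antenna noise
signal = rng.normal(size=n_samp)            # signal common to all antennas
beam = (noise + signal[None, :]).sum(axis=0) / np.sqrt(n_ants)
noise_only_beam = noise.sum(axis=0) / np.sqrt(n_ants)
# var(noise_only_beam) stays near 1, while var(beam) approaches 1 + n_ants
```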
The packet sizes and rates are listed in Table \ref{tb:bengine-configuration}. The B-engine can produce up to four tied-array beams from up to four simultaneous sub-arrays for downstream processing by the PTUSE pulsar processing servers. The computational burden on each server is identical regardless of the number of antennas in the sub-array. \begin{table} \caption{MeerKAT B-Engine output configurations, based on the F-engine sampling interval and data rates.} \label{tb:bengine-configuration} \begin{tabular}{llll} \hline Band & Channels & Packet Size & Packet Rate \\ & & (B) & (kPackets/s) \\ \hline L & 1024 & 2048 & 1671.8 \\ L & 4096 & 4096 & 835.9 \\ UHF & 1024 & 2048 & 1062.5 \\ UHF & 4096 & 4096 & 531.3 \\ \hline \\ \end{tabular} \end{table} \subsection{PTUSE: An SKA Pulsar Processing Prototype} \label{sec:PTUSE} PTUSE stands for Pulsar Timing User Supplied Equipment in the standard SARAO nomenclature. This sub-system receives channelised voltage timeseries from the B-engines. Each of the tied-array beams is received on a separate high-end server-class machine and processed to produce reduced data products which are then transferred to the MeerKAT data archive for long term storage and subsequent processing. The system design was developed by Swinburne University of Technology as the pulsar timing prototype for Square Kilometre Array (SKA) pre-construction. Two commissioning servers were deployed in 2015 for development and early science activities and were used until December 2019. Four production servers were then deployed to be used for the MeerTime key science program, allowing for increased processing capabilities and simultaneous processing of four tied-array beams. The configuration of servers for each deployment is described in Table \ref{tb:ptuse-hardware-deployments}.
\begin{table*} \caption{PTUSE Hardware Deployments, detailing the hardware configuration of the commissioning and deployment systems used with MeerTIME.} \label{tb:ptuse-hardware-deployments} \begin{tabular}{lll} \hline Hardware & Commissioning System & Production System \\ \hline CPUs & 2 x Intel E5-2623 v3 & 2 x Intel Silver 4110 \\ RAM & 128 GB DDR4-2133 MHz & 192 GB DDR4-2666 MHz \\ GPUs & 2 x NVidia Titan X (Maxwell) & 2 x NVidia 2080Ti \\ NVME Disk System & N/A & 8 TB RAID (4 x 2TB Intel P4510) \\ SATA Disk System & 12 TB RAID (4 x 4TB SATA) & 24 TB RAID (4 x 8TB SATA) \\ 40Gb NIC & Mellanox ConnectX3 & Mellanox ConnectX5 \\ \hline \\ \end{tabular} \end{table*} Each PTUSE server subscribes to two {\sc spead} streams from the B-engines via a tree of Mellanox 40\,Gb switches, one for each polarisation. The {\sc spip}\footnote{https://github.com/ajameson/spip} software library receives the two streams, merging them and writing them to a {\sc psrdada}\footnote{https://psrdada.sourceforge.net} ring buffer in CPU memory. The ring buffer is configured to hold approximately 20 seconds of data, providing buffer space to absorb any downstream processing lag. The ring buffer uses shared memory to facilitate asynchronous I/O between the smaller, faster writes of the UDP receive process and the larger, slower reads of the signal processing software. Monitoring software can also periodically sample the data streams to provide signal displays and diagnostics. Regardless of the number of channels, the data are split into two equal sub-bands which are independently processed in parallel by the pipelines in the {\sc dspsr} \citep{vb11} software library. The pipelines perform the major signal processing functions on the GPUs which provide sufficient performance to process sub-bands in real-time. {\sc dspsr} performs coherent dedispersion and can produce folded pulsar profiles (fold mode) or filterbanks (filterbank mode), significantly reducing the output data rate. 
The fold mode and filterbank mode both support flexible configuration parameters defining the output data resolutions subject to the limits listed in Table \ref{tb:ptuse-processing-capabilities}. \begin{table*} \caption{PTUSE Processing Capabilities} \label{tb:ptuse-processing-capabilities} \begin{threeparttable} \begin{tabular}{llll} \hline Parameter & Fold Mode & Filterbank Mode & Unit \\ \hline pulsar ephemeris & catalogue or custom & N/A & \\ dispersion measure & 0 to 2000 & 0 to 2000 & pc cm$^{-3}$ \\ output phase bins & 64 to 4096 & N/A & \\ output polarisation products $\dagger$ & 1, 2 or 4 & 1, 2 or 4 & \\ output sampling interval & N/A & (8 to 1024) $\times$ $T_{\rm samp}$ & microseconds \\ output sub-integration length & 8 to 60 & N/A & seconds \\ output quantisation & 16 & 1, 2, 4 or 8 & bits per sample \\ \hline \end{tabular} \begin{tablenotes} \item[$\dagger$] With polarisations H and V, 1 denotes Stokes I (HH + VV), 2 denotes the square-law detected power for each polarisation (HH, VV), and 4 denotes the square-law detected power in each polarisation and the real and imaginary components of the covariance between the polarisations (HH, VV, Real(H*V), Imag(H*V)). \end{tablenotes} \end{threeparttable} \end{table*} Both fold-mode and filterbank-mode data write the sub-banded results to disk in {\sc psrfits} \citep{hvm04} format, which are subsequently combined into a single file and transferred to both the MeerKAT Data Archive and the Swinburne OzSTAR supercomputer. The volume of filterbank data can be extreme when observing at the highest filterbank time resolutions, typical of globular cluster observing, with data recorded at over 400\,MB/s (30\,TB/day). These data products may be reduced on machines on-site prior to transfer to the data archive and also copied to future mirror sites planned in Europe (e.g. the Max-Planck-Institut f\"ur Radioastronomie).
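A hedged back-of-envelope for these data volumes (the exact observing configuration behind the 400\,MB/s figure is not specified here, so the rate formula is a generic sketch):

```python
# Back-of-envelope filterbank output rate: n_chan channels * n_pol products
# * n_bit bits per sample, emitted every tsamp microseconds. Note that
# 1 byte per microsecond equals 1 MB/s.
def filterbank_rate_mb_s(n_chan, n_pol, n_bit, tsamp_us):
    return n_chan * n_pol * n_bit / 8.0 / tsamp_us

def daily_volume_tb(rate_mb_s):
    return rate_mb_s * 86400 / 1e6  # MB/s sustained over a day, in TB

# A sustained 400 MB/s corresponds to ~35 TB per day of continuous
# recording, of the order of the 30 TB/day quoted above:
vol_tb = daily_volume_tb(400.0)
```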
\subsection{PTUSE: Challenges and Upgrades} \label{sec:PTUSEsubsec} During commissioning, it was found that capture of small UDP multicast packets at the high rates required is challenging on the Intel E5-2623v3 CPUs in the Commissioning System. The Commissioning System suffered from occasional packet loss, with an average loss rate of 700 bytes per minute (less than about 4 parts per billion) when observing at L-band in 1024-channel mode. Rather than having ``holes'' in the data, the samples from the previous cycle of the ring buffer were used to maintain the system noise level. This can occasionally replace what should be system noise by a pulse and vice versa. The issue was traced to insufficient CPU memory bandwidth; the switch to the Production System has eliminated virtually all packet loss and the potential artifacts it could introduce. PTUSE supports a raw baseband observing mode which records the raw channelised voltage time series produced by the B-engines. The Commissioning Systems could only store $<$30\,s of data in this mode, which would then require many minutes to slowly write out to the SATA disk system. The Production Systems each feature an NVME RAID disk which can record at 6\,GB/s, adding the capability to record up to 40 minutes of raw baseband data to disk on each server. This baseband mode allows recording at the native 1.196\,$\mu$s time resolution of the PFB channel bandwidths, enabling the study of giant pulses, pulse microstructure or the timing of many globular cluster pulsars at Nyquist time resolution upon playback. The Commissioning Systems will be retained for data distribution and monitoring, with the Production Systems used for all future PTUSE observations. The {\sc dspsr} software library supports the creation of multiple frequency channels within each coarse channel (using the -F option on the command line) but to date there has been no call for it from observers and the user interface does not yet support it.
Writing baseband data to disk and running {\sc dspsr} on the command line would achieve this functionality if required. \section{System Verification} \label{sec:SV} \subsection{System Equivalent Flux Density} \label{SEFD} Most pulsars at high dispersion measure (i.e.\ with DM $>$ 200\,pc\,cm$^{-3}$) show only modest ($<$ 1 dB) flux density variation at the MeerKAT observing frequencies and can be used to perform a first-order calibration of the system performance. Three such pulsars are PSRs~J1602$-$5100 (B1558$-$50), J1651$-$4246 (B1648$-$42) and J1809$-$1917 with flux densities of 7.0, 21.4 and 2.8 mJy at 1369~MHz as measured by the Parkes telescope \citep{2018MNRAS.474.4629J}. Each of these pulsars was observed on 4 separate occasions with MeerKAT using the L-band receiver and here we investigate the system performance they imply for the telescope. The data were cleaned of interference and split into two bands each of 194~MHz centred near 1200 and 1400~MHz in order to best compare with the Parkes data. We derived a system equivalent flux density (including the sky contribution) of 8.1, 8.0, and 10.3~Jy in the central part of the band for the three pulsar locations. It is difficult to accurately estimate the sky contribution with the current MeerKAT system, but the Parkes observations would imply values of 4, 4, and 10~K in the direction of the three pulsars. We therefore conclude that the \textrm{SEFD} is consistent with 7~Jy across 400~MHz of bandwidth. \subsection {Bandwidth Utilisation} To quantify the effect of radio frequency interference (RFI) on our pulsar science, we took observations of the narrow duty-cycle MSP PSR~J1909$-$3744 from Feb 2019 until Nov 2019 and eliminated interference by looking for deviations in the pulsar's baseline that exceeded 5\,$\sigma$ after averaging the pulse profile every 8 seconds. In total, we analysed 2825 8-second integrations. These results are shown in Figure~\ref{fig:RFIplot}.
In the central 928 channels of the 1024, we only had to delete 12.8\% of the 1400\,MHz band on average. However, some of these channels suffer very persistent RFI and their deleted fraction is close to 100\%, particularly those associated with Global Positioning System satellites and the mobile phone band near 950\,MHz. Other RFI sources, such as those associated with aircraft, are highly time-dependent. \begin{figure} \centering \includegraphics[width=\columnwidth]{RFIHist1909.PNG} \caption{The fraction of 8-second folded integrations on PSR~J1909$-$3744 where the baseline had an integrated boxcar greater than 5\,$\sigma$ from the mean and were consequently deleted between Feb and Nov 2019 using the L-Band receiver.} \label{fig:RFIplot} \end{figure} On short timescales, the fraction of affected integrations is similar. In an analysis of a 7200-second observation of the giant-pulse emitting pulsar PSR~J0540$-$6919 \citep[B0540$-$69;][]{2003ApJ...590L..95J}, we created single pulse timing archives with an approximate integration time of 50.6\,ms. We followed the same procedure we used for the PSR~J1909$-$3744 observations to delete RFI-affected frequency channels. Single-pulse integrations make weaker RFI easier to detect, as it is not washed out by the process of pulsar folding; conversely, RFI with an on/off timescale greater than (in this case) 50\,ms can leave some integrations with little or no RFI. We found that, in a single two-hour observation, 9.6\% of the band was deleted using the same criteria as for the integrated pulse profile tests. To place these results in context, for much of the last two decades, observations at the Parkes 64\,m telescope have used bandwidths of 256--340\,MHz in the 20-cm band. Our results suggest that, for pulsar timing and single-pulse studies, effectively 87--90\% of the 928 central frequency channels can be used for a total bandwidth of 675--700\,MHz centred at 1400\,MHz.
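The channel-excision procedure described above can be sketched as follows. The boxcar width, data shapes and function name are illustrative assumptions; only the 5\,$\sigma$ threshold is taken from the text.

```python
import numpy as np

def flag_rfi_channels(offpulse, boxcar=8, threshold=5.0):
    """Flag channels whose off-pulse baseline shows an integrated boxcar
    deviating by more than `threshold` sigma from the typical level.
    offpulse: array of shape (nchan, nsamp) for one folded integration."""
    kernel = np.ones(boxcar) / boxcar
    # peak boxcar-integrated baseline power in each channel
    power = np.array([np.convolve(row, kernel, mode="valid").max()
                      for row in offpulse])
    return power - np.median(power) > threshold * power.std()

rng = np.random.default_rng(0)
offpulse = rng.normal(size=(64, 256))   # clean, noise-like baselines
offpulse[13] += 3.0                     # inject persistent RFI in channel 13
mask = flag_rfi_channels(offpulse)
print(mask.nonzero()[0])  # -> [13]
```

In the real pipeline the statistics are computed per 8-second folded integration using frequency-dependent templates to define the off-pulse region.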
This is very competitive with almost all existing large-aperture telescopes, comparable to the usable fraction of the 1400\,MHz band at the NRAO Green Bank Telescope (GBT), and exceeded only by the recently developed Ultra-Wideband receiver \citep{2019arXiv191100656H} at the Parkes telescope that operates from 704\,MHz to 4.032\,GHz. In Figure~\ref{fig:j0737obs} we show examples of broadband observations of the double pulsar PSR~J0737$-$3039A that demonstrate the high fractional bandwidths available for pulsar observations at UHF and L-band with MeerKAT. \begin{figure*} \centering \begin{subfigure}{0.44\textwidth} \includegraphics[angle=0,width=\columnwidth]{J0737-3039A_UHF_Freq.pdf} \caption{Integrated pulse profile for PSR~J0737$-$3039A with the UHF receiver. The flux density scale is arbitrary.} \label{fig:UHF_profile} \end{subfigure} \hfill \begin{subfigure}{0.44\textwidth} \includegraphics[angle=0,width=\columnwidth]{J0737-3039A_L_Freq.pdf} \caption{Integrated pulse profile for PSR~J0737$-$3039A with the L-band receiver. The flux density scale is arbitrary.} \label{fig:Lband_profile} \end{subfigure} \caption{ Observations of PSR~J0737$-$3039A with MeerKAT's UHF and L-Band receivers. } \label{fig:j0737obs} \end{figure*} \subsection{Spectral Leakage} \label{leakage} The effect of spectral leakage, due to the shape of the PFB filter magnitude response in the F-engines, is demonstrated with 700-second observations of PSR~J1939+2134 (B1937+21). These observations were coherently dedispersed, folded and integrated into a single profile. The difference between an average frequency-dependent profile for the two filter designs (described in Section~\ref{PFB}) as a function of frequency and pulse phase is shown in Figure~\ref{fig:pennuccimask}. The original filters gave rise to frequency-dependent pulse profiles that possessed `reflections' of the main and inter-pulses due to spectral leakage from the adjacent channels.
At lower frequencies the reflected pulses are offset further from the true pulse, with the magnitude of the reflection being inversely proportional to the amplitude response at the channel boundary in the filter design. This is detrimental to precision timing experiments because, depending upon the location of scintillation maxima, it can systematically alter the shape of the pulsar's frequency-integrated profile and lead to systematic timing errors. The new filter greatly reduces the amplitude of these artifacts, which are now seemingly negligible. \begin{figure} \centering \includegraphics[width=\columnwidth]{J1939_2134_resids.pdf} \caption{Average pulsar profiles after the bright main pulses and weaker interpulse are subtracted using a frequency-dependent mean analytical profile. This reveals the extent of the artifacts that were present in the original filters (left panel) and the extent to which they have been removed with the 0.91 filter design (right panel).} \label{fig:pennuccimask} \end{figure} \subsection{Artifacts} \label{ART} To explore the level of any potential system artifacts, we compared the pulse profile of the MSP PSR~J1939+2134 observed with MeerKAT/PTUSE with archival observations from the CASPSR (CASPER-Parkes-Swinburne-Recorder) coherent dedisperser on the Parkes 64\,m radio telescope in the same frequency band. CASPSR digitises the entire down-converted 400 MHz band and uses the \textsc{dspsr} library to coherently dedisperse the data using graphics processing units. PTUSE, on the other hand, dedisperses the narrow polyphase filterbank channels produced by the B-engine using the same software library. There is perhaps a danger that each of these methods may create different artifacts in the profile that affect precision timing and interpretation of pulse features.
PSR~J1939+2134 has a steep spectrum and is prone to strong scintillation maxima in narrow (few MHz) frequency bands around 1400\,MHz, hence it is important to focus on a relatively narrow fractional bandwidth to identify potential artifacts. We selected the relatively narrow bandwidth between 1280 and 1420\,MHz at both sites and produced a Stokes I profile for each. As we saw earlier (Section~\ref{PFB}), in some MeerKAT F-engine modes (e.g.\ the early 1024-channel mode) the spectral leakage between neighbouring channels is significant, and the relative heights of the two pulse components in the MeerKAT profile did not agree with the CASPSR data, with the weaker of the two pulses being reduced in amplitude by $\sim$10\%. We then repeated the exercise using the new 1K mode of MeerKAT/PTUSE with the sharper filters and found the MeerKAT and Parkes profiles to be consistent within the noise. We attribute this improvement to the choice of filters now in use in MeerKAT's F-engine that suppress spectral leakage. The Parkes, MeerKAT and difference profiles are shown in Figure \ref{fig:1939_PKS_MK}. This is encouraging for comparisons of pulsar profiles not only between different pulsars at MeerKAT but between observatories that use digital signal processing and common software libraries. \begin{figure} \centering \includegraphics[width=\columnwidth]{J1939_PKS_MK.pdf} \caption{Observations in the same frequency band of PSR J1939+2134 at Parkes (top panel), MeerKAT (middle panel) and their difference (bottom panel) in normalised units to the profile peak. The agreement between the telescopes and backends is excellent. } \label{fig:1939_PKS_MK} \end{figure} \subsection{Polarimetry} \label{POLN} The boresight polarimetric response of the MeerKAT tied-array beam was estimated using the Measurement Equation Modeling (MEM) technique described in \citet{van04}.
Motivated by the results of \citet{lck+16}, the MEM implementation was updated to optionally include observations of an artificial noise source that is coupled after the orthomode transducer (OMT) and to remove the assumption that the system noise has insignificant circular polarisation. The updated model was fit to observations of the closest and brightest MSP, PSR~J0437$-$4715, made over a wide range of parallactic angles, and both on-source and off-source observations of the bright calibrator PKS~J1939$-$6342. The best-fit model parameters include estimated receptor ellipticities that are less than $1^\circ$ across the entire band, indicating that the degree of mixing between linear and circular polarisation is exceptionally low. The non-orthogonality of the receptors is also very low, as characterised by the intrinsic cross-polarisation ratio \cite[IXR;][]{cw11}, which varies between 50 and 80\,dB across the band. Noting that larger values of IXR correspond to greater polarimetric purity, the MeerKAT tied-array beam exceeds both the minimum pre-calibration performance \citep[$\sim 30$\,dB;][]{fkp+15} and the minimum post-calibration performance \citep[$\sim 40$\,dB;][]{ckl+04,van13} recommended for high-precision pulsar timing. The reference signal produced by the incoherent sum of the noise diode signals from each antenna significantly deviates from 100\% linear polarisation; its polarisation state varies approximately linearly from $\sim20$\% circular polarisation at 900~MHz to $\sim60$\% circular polarisation at 1670~MHz. Therefore, if an observation of the reference signal were to be used to calibrate the differential gain and phase of the tied-array response, then the technique described in Section 2.1 of \cite{ovhb04} would be necessary. However, the reference signal also exhibits evidence of a significantly non-linear tied-array response.
This is observed as over-polarisation of the reference signal (e.g., degree of polarisation as high as 105\% -- 110\%) and is also observed in the goodness-of-fit (e.g.\ reduced $\chi^2$ between $\sim$300 and $\sim$800) reported when performing MEM with reference source observations included. The origin of the non-linearity is currently not understood; therefore, given that the best-fit values of differential receptor ellipticity are very small\footnote{Differential receptor ellipticity, which describes the mixing between Stokes~I and Stokes~V, must be constrained by observations of a source of known circular polarisation, as described in Appendix B of \citet{van04} and considered in more detail in \citet{lck+16}.}, all reference source observations (including on-source and off-source observations of PKS~J1939$-$6342) were removed from the MEM input data, yielding good fits to the pulsar signal with reduced $\chi^2$ between $\sim$1.6 and $\sim$1.9. To test the stability of the polarimetric response, observations of PSR~J0437$-$4715 made on 4 September 2019 were modelled and calibrated using MEM and then integrated to form a template with which to model observations made on 3 October 2019 using Measurement Equation Template Matching \cite[METM;][]{van13}. The template, formed from an integrated total of 2 hours of observing time, has a signal-to-noise ratio of $3.8 \times 10^4$. In each frequency channel that was not flagged as corrupted by RFI, the METM model fit the data well, with reduced $\chi^2$ values ranging between $\sim$1.1 and $\sim$1.3. The integrated total of the METM-calibrated data are plotted in Figure~\ref{fig:0437_poln}. \begin{figure*} \centering \includegraphics[angle=-90,width=0.75\textwidth]{J0437-4715_pol.pdf} \caption{Calibrated polarisation of PSR~J0437$-$4715, plotted as a function of pulse phase. In the top panel, the position angle of the linearly polarized flux is plotted with error bars indicating 1 standard deviation.
In the bottom panel, the total intensity, linear polarisation, and circular polarisation are plotted in black, red and blue, respectively.} \label{fig:0437_poln} \end{figure*} As a final consistency check, the calibrated polarisation of PSR~J0437$-$4715 observed at MeerKAT was quantitatively compared with that observed at the Parkes Observatory using CASPSR. After selecting the part of the MeerKAT band that overlaps with the 400~MHz band recorded by CASPSR, the calibrated MeerKAT data were fit to the calibrated Parkes template using Matrix Template Matching \cite[MTM;][]{van06}. The MeerKAT data fit the Parkes data well, with reduced $\chi^2$ values ranging between $\sim$1.2 and $\sim$1.5. The Jones matrices that transform the MeerKAT data to the basis defined by the Parkes template in each frequency channel were parameterised using Equation 19 of \cite{bri00}. All model parameters were close to zero, except for the differential ellipticity, $\delta_\chi$, which varied between +1 and $-$2\,degrees as a function of frequency, and the rotation about the line of sight, $\sigma_\theta$, which varied between $-$5 and $-$8\,degrees. Non-zero values of $\delta_\chi$, which describes the mixing between Stokes~I and Stokes~V, are expected; as described in Appendix B of \citet{van04}, this mixing must be constrained by introducing assumptions that may be correct only to first order. Non-zero values of $\sigma_\theta$ are also expected owing to unmodelled Faraday rotation in Earth's ionosphere. After the array is initialised, a standard operating procedure is to run the so-called delay calibration observation. During this observation, the noise diode as well as bright, well-known sources are used to calculate and apply time-variable solutions for the antenna-based delays.
The delay calibration observation consists of multiple stages: initially predefined F-engine complex gains are applied in the correlator for each antenna; a suitable calibrator is observed and simple antenna-based delays are calculated; next, a noise diode is activated and cross-polarisation delay as well as phase is measured for the entire array. The delays are derived and combined by the real-time calibration pipeline before being applied to the data with the exception of the cross-polarisation phases which are stored in the observation metadata and can be applied at a later stage. At present the parallactic angle is assumed to be the same for every antenna. Pulsars that transit 6 degrees from the zenith have at most about a 0.2 degree error because of this assumption. \subsection{Timing} \label{TIMING} To test the timing stability of the telescope, we routinely observed the bright, narrow MSP PSR~J1909$-$3744 over a period of about 11 months from March 2019. This pulsar has a well-established ephemeris that we took from the Parkes Pulsar Timing Array \citep[PPTA, ][]{kerr2020pptadr2}. We first excised radio frequency interference in the data cube using the {\sc coastguard} package \citep{lkg+16}, which we modified to work with MeerKAT data. Importantly we used frequency-dependent model templates to identify on and off-pulse regions from which to calculate a set of statistics that could be used to identify contaminated profiles that should be excised. We updated the dispersion measure in the data sets to a value near the mean over the observing interval. We then averaged to $32$ channels in frequency and completely in time for each observation. Using a template with $32$ frequency channels to capture profile evolution \citep[derived using the technique of ][]{Pennucci+19}, we derived arrival times using the Fourier-domain phase-gradient algorithm \citep{Taylor92} with Monte Carlo estimates for the arrival time uncertainties. 
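The Fourier-domain phase-gradient method of \citet{Taylor92} estimates the shift between a profile and a template from the slope of the cross-spectrum phase. The following is a minimal, noise-free sketch: the harmonic cut-off and all names are illustrative, and the real algorithm fits shift and scale jointly with proper (e.g.\ Monte Carlo) error estimates.

```python
import numpy as np

def phase_gradient_shift(profile, template, nharm=20):
    """Estimate the shift (in bins) of `profile` relative to `template`
    from a least-squares fit to the cross-spectrum phase gradient.
    Valid for small shifts where the phase does not wrap."""
    P = np.fft.rfft(profile)
    T = np.fft.rfft(template)
    k = np.arange(1, nharm + 1)
    phase = np.angle(P[k] * np.conj(T[k]))      # ~ -2*pi*k*shift/nbin
    slope = np.sum(k * phase) / np.sum(k * k)   # slope through the origin
    return -slope * len(template) / (2 * np.pi)

nbin = 256
template = np.exp(-0.5 * ((np.arange(nbin) - 50.0) / 4.0) ** 2)  # Gaussian pulse
profile = np.roll(template, 3)              # template shifted by 3 bins
shift = phase_gradient_shift(profile, template)
print(round(shift, 3))  # -> 3.0
```

The fractional-bin shift recovered this way, multiplied by the pulse period, gives the topocentric arrival-time offset relative to the template.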
We analysed the arrival times using {\sc temponest} \citep{lah+14}. To first order, MSPs only drift slowly from their timing models, so we started with the PPTA ephemeris from its second data release \citep{kerr2020pptadr2} and modelled only the minimum number of additional parameters: the pulsar spin and spin-down rate; the dispersion measure and its first time derivative; and the {\sc tempo2} FD (``frequency dependent'') parameters to account for pulse profile evolution with frequency. FD parameters model systematic offsets in the average frequency-resolved timing residuals \citep{Arzoumanian2015b}. The model is a simple polynomial as a function of log$_{10}$ of the observing frequency, where each FD parameter is one of the polynomial coefficients. However, it should be noted that FD parameters can absorb other systematic effects besides unmodeled evolution of the profile shape. We searched for three forms of stochastic noise in the data. We searched for red noise and DM variations using the established Bayesian methods now commonly employed. To model the white noise we searched for both EQUAD and EFAC (using the {\sc temponest} definitions), and a new parameter, TECORR, which accounts for correlated white noise, by adding an additional term to the noise covariance matrix \begin{equation} \sigma^2_{ij}=\delta(t_i-t_j) \sigma^2_{\rm TECORR, hr} \sqrt{3600~{\rm s}/T}, \end{equation} where $\delta(t_i-t_j)=1$ if the data are from the same integration and zero otherwise, and $T$ is the observation integration time in seconds. This noise parameter is similar to the ECORR parameter that has notably been employed in NANOGrav data analyses \citep[Alam et al. \textit{in prep.};][]{Arzoumanian18b,Arzoumanian2015b}; however, TECORR accounts for varying observation lengths. Correlated noise introduced by stochasticity in pulse shape variations is predicted to decrease in proportion to the square root of the integration time. We find no evidence for red noise in our data set, which is unsurprising given its short length.
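The TECORR term above enters the analysis as a block of fully correlated variance shared by sub-banded arrival times from the same integration. A minimal sketch (the function and variable names are illustrative, and EFAC/EQUAD terms are omitted):

```python
import numpy as np

def white_noise_covariance(toa_err_s, integration_id, t_int_s, sigma_tecorr_hr_s):
    """White-noise covariance with a TECORR-style term: TOAs from the same
    integration share a fully correlated variance that scales as
    sqrt(3600 s / T), per the equation above. All sigmas in seconds.
    Illustrative sketch only, not the MeerTime analysis code."""
    err = np.asarray(toa_err_s, dtype=float)
    C = np.diag(err ** 2)                                    # radiometer noise
    var_te = sigma_tecorr_hr_s ** 2 * np.sqrt(3600.0 / t_int_s)
    same_int = np.equal.outer(integration_id, integration_id)
    return C + var_te * same_int

# four sub-banded TOAs (100 ns errors) from two 512-s integrations,
# with the ~24 ns TECORR amplitude measured below
C = white_noise_covariance([1e-7] * 4, [0, 0, 1, 1], 512.0, 24e-9)
```

Sub-band TOAs from the same integration pick up the off-diagonal term, while TOAs from different integrations remain uncorrelated.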
We note that the Bayesian methodology employed accounts for covariances between noise and the timing model, so would be able to detect red noise if it were sufficiently strong. We find strong evidence for dispersion measure variations, which are visible by eye in our $512$-s observations. We also find evidence for band-correlated white noise (TECORR), but no evidence for EQUAD and EFAC. We measure $\sigma_{\rm TECORR,hr} \approx 24$\,ns, which is approximately a factor of two larger than the jitter noise expected from our short observations \citep{sod+14}. The excess noise is the subject of current research, but we suspect it includes contributions from unmodelled dispersion measure variations \citep{css16,sc17}. After subtracting the maximum likelihood model for dispersion measure variations and forming the weighted average of the sub-banded residual arrival times, we found evidence for marginal orbital phase dependent variations in the residuals. After accounting for this by fitting for the companion mass, we measure the root-mean-square of the average residual arrival times to be $\approx 66$\,ns as shown in Figure \ref{fig:1909timing}. It is possible that the fitting (in particular of the position and spin down) is absorbing some noise in the data set. If we start from the ephemeris published in PPTA-DR2 \citep{kerr2020pptadr2}, fitting only DM variations and FD parameters, we measure the rms of the averaged residuals to be 76\,ns. We note that the PPTA-DR2 ephemeris was intended to be an initial ephemeris for future studies, and only crude noise modelling was undertaken. Figure \ref{fig:1909timing_orb} shows the residual arrival times plotted versus orbital phase when the Shapiro delay, caused by the radio waves propagating through the gravitational field of the companion, is ignored entirely (panel a) and when it is accounted for (panel b).
\begin{figure} \centering \includegraphics[width=\columnwidth]{avg.pdf} \caption{ Epoch-averaged residual arrival times for PSR~J1909$-$3744. } \label{fig:1909timing} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{1909_orb.pdf} \caption{Epoch averaged residual arrival times for PSR~J1909$-$3744 plotted against orbital phase in cycles. Residual arrival times are plotted using the maximum-likelihood model without (panel a) and with (panel b) accounting for the Shapiro delay induced by the companion. The predicted signal is shown as the solid line in panel a.} \label{fig:1909timing_orb} \end{figure} To test the timing stability of the system on short timescales, we made use of the bright Fermi source PSR~J2241$-$5236 \citep{2011MNRAS.414.1292K}. This 2.18\,ms MSP has a narrow duty-cycle ($\sim$3\%) and is regularly timed as part of the Parkes Pulsar Timing Array. It has a mean 1.4\,GHz flux density of 3--4\,mJy and often experiences bright scintillation maxima. On April 22 2019 UTC, this pulsar produced a 5030\,$\sigma$ profile in just 512\,s in 64 $\times$ 8\,s integrations. After forming an appropriate smoothed template, we produced arrival times every 8\,s after summing over the full bandwidth and, using the existing ephemeris, obtained an rms residual of only 90\,ns for the resultant 8\,s integrations. This is the lowest rms residual in 8\,s ever seen in pulsar timing and implies a very small jitter upper limit of only $90/(3600/8)^{1/2} = 4.2$\,ns in an hour. \begin{figure} \centering \includegraphics[width=\columnwidth]{mb_2241.pdf} \caption{Times of arrival during a 512\,s observation of PSR~J2241$-$5236 with MeerKAT using the L-band receiver. The post-fit rms residual is just 90\,ns using 8-s integrations.
This implies a jitter limit of less than 4.2\,ns in one hour.} \label{fig:2241toas} \end{figure} Both of these results augur well for the future of MSP timing at the MeerKAT telescope and are a testament to the engineering care invested in the TFR, PTM and PTUSE. \subsection{Filterbank Mode} \label{subsec:GC} The globular cluster Ter\,5 was observed in filterbank mode for 9000\,s on May 27 2019 with 9.57\,$\mu$s time resolution and full polarimetry. Profiles for the 34 detected pulsars are shown in Figure~\ref{fig:Ter5montage}, illustrating the high time resolution and signal-to-noise ratios achieved for many of the pulsars. These observations only used the central dishes within a 500\,m radius of the core. A trade-off has to be made between the sensitivity of the tied beam and its width. With this configuration all of the pulsars except Ter\,5\,A, D, X and J were within the half-power point of the beam at the centre frequency of the observation. Ter\,5\,A is so bright that it was easily detected regardless, but the others were heavily attenuated. The detections of Ter\,5\,ah and Ter\,5\,aj are marginal. The pulsar profiles were obtained in a two-stage process. First, the pulsars were folded with the latest ephemerides available from the GBT program (Ransom, private communication). Some required minor refinement of their periods to correct for small drifts in pulse phase. It is not uncommon for black widow pulsar systems to accumulate orbital phase drifts that manifest themselves as changes in the observed period for fractions of their orbit. A comparison of S/N was made for 29 of the pulsars that are routinely detected at the GBT in an equivalent observing time. Of these, 15 had higher S/N with MeerKAT and another 9 were within 25\% of the GBT value. The four poorest (A, D, X and J) were all outside the half-power point of the MeerKAT tied beam.
Ter\,5\,O was observed in parallel using the fold mode of PTUSE and, after calibration, we derived a rotation measure for the cluster of 174.6$\pm$0.8\,rad\,m$^{-2}$, which is consistent with the value of 178.5$\pm$3.5\,rad\,m$^{-2}$ obtained for Ter\,5\,A by \citet{2018ApJ...867...22Y}. The pulse profile and polarimetry are shown in Figure \ref{fig:Ter5O_poln}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Ter5_pulsars_plots.pdf} \caption{Folded pulse profiles of the 34 pulsars in Terzan\,5 from a 9000\,s integration. } \label{fig:Ter5montage} \end{figure} \begin{figure} \centering \includegraphics[angle=0,width=\columnwidth]{J1748-2446O.pdf} \caption{Calibrated polarisation profile of PSR~J1748$-$2446O, plotted as a function of pulse phase. In the top panel, the position angle of the linearly polarized flux is plotted with error bars indicating 1 standard deviation. In the bottom panel, the total intensity, linear polarisation, and circular polarisation are plotted in black, red and blue, respectively.} \label{fig:Ter5O_poln} \end{figure} \section{New Results} \label{sec:RESULTS} \subsection{A Giant Pulse from PSR~J0540--6919} PSR~J0540--6919 is a young pulsar in the Large Magellanic Cloud that has been observed by the Parkes telescope to emit giant pulses \citep{2003ApJ...590L..95J} and possesses a twin-peaked average pulse profile (bottom panel of Figure \ref{fig:0540}). In a test of the PTUSE filterbank mode, we observed this source for two hours on March 26 2019 and detected a large number of giant pulses. The brightest giant pulse has a peak flux density of 5.4\,Jy and a mean flux density of 92\,mJy and is shown in Figure~\ref{fig:0540}. These flux values are estimated using the MeerKAT SEFD values. At its assumed distance \citep{2004MNRAS.355...31J} of 50\,kpc, this giant pulse would have a peak luminosity of 13500\,Jy\,kpc$^2$. If placed in M31, this giant pulse would have a peak flux of $\sim$25\,mJy and would be detectable by the FAST telescope.
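The pseudo-luminosity quoted above follows directly from the inverse-square law. In the sketch below, the 5.4\,Jy peak and the 50\,kpc distance are from the text, while the M31 distance is an assumption (the implied flux scales as $d^{-2}$, so the $\sim$25\,mJy quoted corresponds to an adopted distance of $\sim$735\,kpc):

```python
# Inverse-square-law scaling of the giant pulse; the 5.4 Jy peak and the
# 50 kpc LMC distance are from the text, the M31 distance is assumed.
s_peak_jy = 5.4
d_lmc_kpc = 50.0
d_m31_kpc = 780.0   # assumed M31 distance

luminosity = s_peak_jy * d_lmc_kpc ** 2          # Jy kpc^2
s_m31_mjy = 1e3 * luminosity / d_m31_kpc ** 2    # flux if placed in M31

print(luminosity)  # -> 13500.0
```

With 780\,kpc this gives $\sim$22\,mJy, in line with the quoted value given the range of adopted M31 distances.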
\begin{figure} \centering \includegraphics[width=\columnwidth]{{J0540m6919_MGeyer_Feb2020_op3}.png} \caption{\textit{Top and middle:} Giant pulse from PSR~J0540$-$6919 with a peak flux density of 5.4\,Jy and an estimated mean flux density of $\sim$92\,mJy. Using the SEFD of MeerKAT the off-pulse rms is estimated to be 20 mJy. The shaded region in the top panel shows the selected on-pulse region. The pulsar at these frequencies is subject to scattering that gives rise to the exponential tail. \textit{Bottom:} The averaged 2~hr pulse profile.} \label{fig:0540} \end{figure} \subsection{Nulling in PSR~J0633--2015} The raw sensitivity of MeerKAT makes it ideal for studying single-pulse phenomenology in a large number of pulsars previously too weak for such studies. For example, the population and characteristics of nulling pulsars, and how the pulsar sets the timescale for the on- and off-periods, are key questions for the emission physics. PSR~J0633$-$2015 is a long-period pulsar discovered by \citet{2006MNRAS.368..283B}, who made no mention of its unusual pulse-to-pulse characteristics. Figure~\ref{fig:0633} shows a 5~minute observation made with MeerKAT on 2019 October 27. Each horizontal row shows a colour-coded representation of the flux density of each individual pulse from the pulsar. Short-duration nulling is clearly visible. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{0633.png} \caption{Short-term nulling in PSR~J0633$-$2015 in observations made on 2019 October 27. Each horizontal row shows an individual pulse, with the colour coding denoting the flux density of the pulsar.} \label{fig:0633} \end{figure} \subsection{Double Pulsar Timing} A full orbit of the double pulsar PSR~J0737$-$3039A \citep{2003Natur.426..531B,2004Sci...303.1153L} was observed to assess the precision of its arrival times with MeerKAT using 56 antennas. Using $16\times 53.5$\,MHz sub-bands and 64\,s integrations we obtained a post-fit rms residual of just 9.3\,$\mu$s.
Averaging across the full band yields a post-fit arrival time rms of just 2.3\,$\mu$s in 64\,s. By comparison, a 2013 archival observation of J0737$-$3039A with the Parkes 64\,m telescope, using the multibeam receiver and the 340\,MHz CASPSR coherent dedisperser with 64\,s integrations, yielded 15\,$\mu$s rms residuals. The improvement in timing is therefore a factor of 6.5. We now examine how this compares with the radiometer equation expectations. The \textrm{SEFD} of the Parkes telescope/multibeam receiver is $\sim$ 30\,Jy whereas MeerKAT is $\sim$7\,Jy and the effective (RFI-free) bandwidths used are 340 and $\sim$756\,MHz respectively. Therefore, we would expect the ratio of the MeerKAT residuals to that of the Parkes multibeam CASPSR residuals to be of order $56/64 \times 7/30\times\sqrt{340/756}\sim 1/7$, in good agreement with our measured improvement factor of 6.5. Given the flux density variations exhibited by pulsars at the dispersion measure of this pulsar (48.9\,pc\,cm$^{-3}$) due to interstellar scintillation, this is perfectly consistent with expectations based on the telescope gain and system temperature figures quoted for MeerKAT's L-band receiver. A factor of 6.5 in timing residuals increases observing efficiency by a factor of $6.5^2 \sim 40$. These test observations imply that MeerKAT will be important for studies of the double pulsar, in particular its eclipse, Shapiro delay, and hunts for the (now invisible) ``B'' pulsar that demand high sensitivity \citep[see e.g.][]{2004hpa..book.....L}. In the longer term, extremely high precision will be required to separate the contributions of Lense-Thirring precession to the relativistic advance of periastron and hence determine the moment of inertia of the neutron star in a novel way. \section{Discussion and Outlook} \label{sec:discussion} The MeerKAT telescope was designed both to be a powerful standalone instrument and to be integrated into SKA1-mid.
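The back-of-envelope residual ratio quoted in the previous section is easily reproduced; all numbers below are taken from the text.

```python
import math

# Expected ratio of MeerKAT (56 antennas) to Parkes/CASPSR timing
# residuals, reproducing the estimate in the text.
n_used, n_total = 56, 64
sefd_meerkat, sefd_parkes = 7.0, 30.0   # Jy
bw_meerkat, bw_parkes = 756.0, 340.0    # effective RFI-free bandwidth, MHz

ratio = (n_used / n_total) * (sefd_meerkat / sefd_parkes) \
        * math.sqrt(bw_parkes / bw_meerkat)
print(round(1 / ratio, 1))  # -> 7.3
```

That is, residuals of order $1/7$ of the Parkes values, in line with the measured improvement factor of 6.5.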
The development and commissioning of the PTUSE instrument has demonstrated that it achieves excellent sensitivity, pulsar timing and polarimetric accuracy, and is already making discoveries such as new records on pulse jitter and timing stability. But MeerKAT has the potential to break other new ground. It is an extremely agile mechanical telescope, able to slew at 2\,deg/s in azimuth and 1\,deg/s in elevation. The dead time between pulsar observations is now just $\sim$5\,s, leading to high observing efficiencies. In many regions of the Galaxy, many pulsars will occupy the same primary beam, and in future array releases it will be possible to place a tied-array beam on up to four objects at once to further enhance the timing program efficiency. However, other opportunities exist to further enhance pulsar timing at MeerKAT. The TRAPUM and Breakthrough Listen compute clusters currently being commissioned on site receive the voltages from every individual antenna and can form coherent beams that can be independently steered within the primary beam. TRAPUM will form up to (depending upon bandwidth) 900 coherent beams to search for pulsars and Fast Radio Bursts \citep{sk16}, whilst the Breakthrough Listen cluster intends to search for techno-signatures from advanced civilisations \citep{gajjar2019breakthrough}. These instruments can complement PTUSE by placing a tied beam on all known pulsars within the primary beam to maximise efficiency when timing the dozens of MSPs that inhabit globular clusters like 47 Tucanae and Terzan 5. In February 2020 four completely independent sub-arrays were tested, each of them observing a different pulsar. It will soon be possible to time over 1000 pulsars in just an 8\,hour period using these modes. All of the low dispersion measure MSPs ($DM<40$\,pc\,cm$^{-3}$) exhibit scintillation maxima on timescales of hours that can easily amplify/deamplify their mean flux by factors of several.
If we could observe MSPs during such maxima, the benefits in observing efficiency would be significant. In the absence of limits from pulse jitter, a factor of just two in mean flux density is worth a four-fold improvement in timing efficiency according to the radiometer equation. Hence, in the future we intend to use the bulk of the array to time a timing array MSP whilst a small group of antennas conducts an audit of potential targets that may be in a bright scintillation state. This could lead to dramatic increases in the sensitivity of the MeerKAT pulsar timing array and its contribution to the IPTA's goal of detecting nanoHz gravitational waves. The MeerTime Large Survey Project consists of four major sub-projects or themes: relativistic and binary pulsars, globular clusters, the MeerTime Pulsar Timing Array and the Thousand Pulsar Array. These all aim to create a legacy dataset for current and future generations of astronomers. Data on the first three projects will be made available according to SARAO Large Survey Project data release guidelines and upon publication. The Thousand Pulsar Array \citep{johnstonetal2020} has an ambitious objective to make its data public once it is cleaned, calibrated, and the timing corrections are secure. As of 2020 February 26, MeerTime\footnote{www.meertime.org} had already observed 1005 unique target pulsars in 825\,h of observing, demonstrating that MeerKAT should be an exceptional pulsar facility in the lead-up to its incorporation into the SKA. Many of the pointings were of globular clusters, and hence pulse profiles suitable for timing and polarimetry have already been obtained for well over 1000 individual pulsars. \begin{acknowledgements} Parts of this research were conducted by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004.
FJ acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 694745). AS and JvL acknowledge funding from the Netherlands Organisation for Scientific Research (NWO) under project ``CleanMachine'' (614.001.301). YM, AS and JvL acknowledge funding from the ERC under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617199. A. Ridolfi gratefully acknowledges financial support from the research grant ``iPeska'' (P.I. Andrea Possenti) funded under the INAF national call PRIN-SKA/CTA approved with the Presidential Decree 70/2016. Pulsar research at the Jodrell Bank Centre for Astrophysics is supported by a consolidated grant from the Science and Technology Facilities Council (STFC). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. SMR is a CIFAR Fellow and is supported by the NSF Physics Frontiers Center award 1430284. Pulsar research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. SARAO acknowledges the ongoing advice and calibration of GPS systems by the National Metrology Institute of South Africa (NMISA) and the time space reference systems department of the Paris Observatory. MeerTime data is housed on the OzSTAR supercomputer at Swinburne University of Technology. The MeerTime Pulsar Timing Array acknowledges support of the Gravitational Wave Data Centre funded by the Department of Education via Astronomy Australia Ltd. MK and DCJ acknowledge significant support from the Max-Planck Society (MPG).
PTUSE was developed with support from the Australian SKA Office and Swinburne University of Technology. \end{acknowledgements} \begin{appendix} \end{appendix} \bibliographystyle{pasa-mnras}
\section{Introduction} Apart from the conventional microelectronic application, manganese silicide alloy of nominal composition Mn$_5$Si$_3$ has attracted renewed interest among researchers due to the
recent discovery of different interesting magnetic and magnetofunctional properties, which include the magnetocaloric effect, anomalous Hall effect, inverted hysteresis behavior, thermomagnetic irreversibility, and spin fluctuations~\cite{lander, menshikov, silva, gottschilch, brown, kanani, surgers, biniskos,prb-scd}. At room temperature, this Mn$_5$Si$_3$ alloy is paramagnetic (PM) in nature with a hexagonal $D8_8$ type crystal structure (space group $P6_3/mcm$) having two distinct crystallographic sites for Mn (4(d) and 6(g), popularly termed Mn1 and Mn2)~\cite{lander, menshikov}. A decrease in sample temperature ($T$) results in two magnetic transitions, namely PM to collinear antiferromagnetic (AFM2) state at 100 K and AFM2 to noncollinear antiferromagnetic (AFM1) state at 66 K~\cite{lander, menshikov}. Both of these magnetic transitions are also associated with changes in the magnetic structure. Below the PM to AFM2 transition temperature, the magnetic structure of the system becomes orthorhombic (space group $Ccmm$)~\cite{silva}. On the other hand, a monoclinic structure with the noncentrosymmetric $Cc2m$ space group has been observed below the AFM2 to AFM1 transition point~\cite{silva}. Apart from the AFM1 and AFM2 states, another intermediate noncollinear antiferromagnetic (AFM1$^{\prime}$) state has also been reported for the present alloy by S\"urgers {\it et al.}~\cite{surgers}. Such an intermediate AFM1$^{\prime}$ state appears during the AFM1 to AFM2 field-induced transition. Detailed neutron diffraction studies confirm the presence of multiple Mn sites with different moment values and arrangements below the magnetic transition temperatures~\cite{silva}. This dissimilar nature of the Mn-sites in the magnetically ordered phases plays a pivotal role in the observation of the different magnetic properties. \par Our recent investigation of the Mn$_5$Si$_3$ alloy reveals probably the first-ever observation of an inverted hysteresis loop (IHL) in a bulk antiferromagnetic compound~\cite{prb-scd}.
The isothermal arrest of the intermediate AFM1$^{\prime}$ state is the crucial reason for this novel observation. Besides, isothermal as well as thermomagnetic arrest leads to a significant modification of the existing phase diagram of the alloy~\cite{prb-scd}. Now, to check the robustness of the observed IHL and arrested behavior, it is pertinent to investigate the effect of Mn-site doping in the Mn$_5$Si$_3$ alloy. For the pure Mn$_5$Si$_3$ alloy, the nearest neighbor Mn-Mn distance plays a crucial role in the magnetic character of the alloy, and it lies very near the critical value for magnetic instability~\cite{jap-Kanani,biniskos,silva,gottschilch,brown1,brown,surgers}. Mn-site doping will directly influence the Mn-Mn distance and hence the magnetic interaction in the material. \par Until now, very few doping studies have been performed on the present manganese silicide system, and they only report the effect of doping on the magnetic transition temperatures and fields~\cite{songlin,jap-Kanani}. No detailed magnetic investigations (both macroscopic and microscopic) have yet been performed on such doped alloys. In the present work, we doped both smaller Ni and larger Cr atoms into the Mn-site of the Mn$_5$Si$_3$ alloy with the motivation of investigating the effects of such doping on the recently observed unique properties, such as the IHL and thermomagnetic irreversibility. Experimental outcomes indicate a monotonic decrease of the AFM1$^{\prime}$ phase fraction with increasing doping concentration, and for 4\% doping, the alloys are found to show a field-induced transition directly from the AFM1 to the AFM2 phase (without the intermediate AFM1$^{\prime}$ phase). The gradual decrease of this intermediate AFM1$^{\prime}$ phase with doping suppresses the IHL and thermomagnetic irreversibility present in this system, and both eventually vanish for 4\% Ni and Cr doping.
\begin{figure*}[t] \centering \includegraphics[width = 17 cm]{fig1.eps} \caption{(Color online) Room temperature x-ray powder diffraction patterns, along with the Rietveld fitted curves, difference patterns, and Bragg peak positions, of the Ni and Cr-doped alloys are plotted in (a) and (b) respectively. Partial occupancies between Mn and Ni/Cr at all the parent Mn positions were considered for the best fitting of the diffraction patterns. (c) illustrates the $c$-axis projected structure of the 4\% Ni-doped alloy ($x$ = 0.2). The cyan balls represent Si atoms, whereas mixed colored balls show Mn (violet) and Ni (orange) as a combination.} \label{xrd} \end{figure*} \section{Experimental details} Ni and Cr-doped alloys of nominal compositions Mn$_{5-x}$A$_x$Si$_3$ ($x$ = 0.05, 0.1, 0.2 and A = Ni, Cr) were prepared by arc melting the constituent elements in an inert argon atmosphere using a Centorr-made tri-arc furnace. Elements with purity better than 99.9\% were used for sample preparation. To ensure homogeneity, all the alloys were remelted four times during preparation. Finally, the ingots were sealed in evacuated quartz capsules and annealed at 900$^{\circ}$C for a week, followed by rapid quenching in ice water. The room temperature x-ray powder diffraction (XRD) technique was adopted for the structural characterization of the prepared alloys. XRD patterns for the studied alloys were recorded in a Bruker AXS diffractometer (D8-Advance), where Cu-K$_{\alpha}$ radiation was used as the probe for the x-ray diffraction measurements. A commercial cryogen-free high magnetic field system from Cryogenic Ltd., UK (temperature range 2-300 K and magnetic field range 0-150 kOe), equipped with a vibrating sample magnetometer (VSM), was used for the dc magnetic measurements. For better temperature stability, He-exchange gas was used, and the observed fluctuation in the sample temperature was about 15 mK.
During dc magnetic measurements, true zero-field cooled (ZFC) conditions were obtained by following the methods mentioned in Das {\it et al.}~\cite{prb-scd}. \begin{figure}[t] \centering \includegraphics[width = 8.5 cm]{fig2.eps} \caption{(Color online) Main panels of (a) and (b) depict the temperature ($T$) variation of dc magnetization ($M$) in the presence of 10 kOe of an external magnetic field ($H$) in field cooling (FC) and field cooled heating (FCH) protocols for Ni and Cr-doped alloys respectively. Insets of (a) and (b) show iso-field $M(T)$ in the FCH protocol at different constant $H$ for the 2\% Ni and Cr-doped alloys ($x$ = 0.1) respectively. Isothermal $M(H)$ data recorded at 5 K for all Ni and Cr-doped alloys in ZFC conditions are plotted in (c) and (d), respectively.} \label{mtmh} \end{figure} \begin{figure*}[t] \centering \includegraphics[width = 17 cm]{fig3.eps} \caption{(Color online) Restricted regions of the isothermal $M(H)$ curves recorded in ZFC condition at 5 K are plotted in the main panels for all the doped alloys ((a), (b), and (c) for Ni-doped alloys and (d), (e), and (f) for Cr-doped alloys). Arrows indicate the crossing of the increasing and decreasing field curves. Insets of (a), (b), (d) and (e) depict restricted views of the zero-field cooled (black solid ball symbols) and 10 kOe field cooled (red circle symbols) isothermal $M(H)$ data at 5 K for the Ni ($x$ = 0.05, and 0.1) and Cr-doped ($x$ = 0.05, and 0.1) alloys respectively.} \label{ihl} \end{figure*} \section{Experimental Results} The room temperature powder XRD patterns for the doped alloys indicate that all of them crystallize in the hexagonal structure with space group $P6_3/mcm$. The diffraction patterns were analyzed by the Rietveld refinement method using the FullProf software package. The XRD patterns, along with the Rietveld fitted curves, difference patterns, and Bragg peak positions, are plotted in fig.~\ref{xrd}(a) and (b).
For the best fitting of the diffraction patterns, we have considered partial occupancy between Mn and Ni/Cr at all the Mn positions. No impurity peak has been observed in any of the prepared alloys. Ni-doping results in a decrease in lattice volume, whereas a significant increase in lattice volume has been found for the Cr-doped alloys. Variations of lattice volume with doping concentration are listed in Table~1. Using the Rietveld refinement parameters, we prepared the structure of the 4\% Ni-doped ($x$ = 0.2) alloy, and the $c$-axis projected structure is shown in fig.~\ref{xrd}(c). \begin{table} \centering \begin{tabular}{c|c|c|c} \hline \hline Sample & Lattice & Moment at 5 K & AFM2-AFM1 \\ Name & Volume & in $H$ = 150 kOe & transition \\ & & & temperature ($T_{N1}$) \\ & \AA$^3$ & (emu/g) & (K) \\ \hline Mn$_{4.95}$Ni$_{0.05}$Si$_3$ & 199.09(9) & 48.03 & 56.52 \\ Mn$_{4.9}$Ni$_{0.1}$Si$_3$ & 199.01(3) & 45.04 & 47.96 \\ Mn$_{4.8}$Ni$_{0.2}$Si$_3$ & 198.51(5) & 42.27 & 24.16 \\ Mn$_{4.95}$Cr$_{0.05}$Si$_3$ & 199.33(1) & 49.14 & 57.41 \\ Mn$_{4.9}$Cr$_{0.1}$Si$_3$ & 199.43(2) & 48.61 & 53.11 \\ Mn$_{4.8}$Cr$_{0.2}$Si$_3$ & 199.58(5) & 47.99 & 31.21 \\ \hline \hline \end{tabular} \caption{Variation of lattice volume, moment at 5 K (in the presence of $H$ = 150 kOe), and AFM2 to AFM1 transition temperatures for the Ni and Cr-doped alloys.} \end{table} \par Now, let us concentrate on the temperature ($T$) variation of the dc magnetization ($M$) data. The field cooling (FC) and field-cooled heating (FCH) dc $M$ data, recorded in the presence of 10 kOe of an external magnetic field ($H$), for the Ni and Cr-doped alloys are plotted in fig.~\ref{mtmh} (a) and (b) respectively. These iso-field $M(T)$ data indicate a strong influence of doping on the transition temperatures and moment values. A significant decrease in the AFM2 to AFM1 transition temperature ($T_{N1}$) has been observed in the doped alloys.
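The opposite lattice-volume trends of the two dopants can be read off Table 1 directly; the short script below (central values copied from the table, with the quoted uncertainties dropped) quantifies the fractional change from $x = 0.05$ to $x = 0.2$.

```python
# Lattice volumes in Å^3, central values from Table 1
volumes = {
    ("Ni", 0.05): 199.09, ("Ni", 0.10): 199.01, ("Ni", 0.20): 198.51,
    ("Cr", 0.05): 199.33, ("Cr", 0.10): 199.43, ("Cr", 0.20): 199.58,
}

for dopant in ("Ni", "Cr"):
    xs = sorted(x for d, x in volumes if d == dopant)
    v = [volumes[(dopant, x)] for x in xs]
    trend = "contracts" if v[-1] < v[0] else "expands"
    change = 100.0 * (v[-1] - v[0]) / v[0]
    print(f"{dopant}: lattice {trend} by {abs(change):.2f}% between x=0.05 and x=0.2")
```

The sub-percent changes are consistent with the Ni atom being smaller, and Cr larger, than Mn, as discussed later in the text.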
In the case of Ni-doping, the reduction of this transition $T$ is about 41 K for 4\% doping ($x$ = 0.2). On the other hand, a similar percentage of Cr-doping results in a 35 K decrease in $T_{N1}$. The variation of $T_{N1}$ with doping concentration is shown in Table~1. Interestingly, the PM to AFM2 transition temperature ($T_{N2}$), determined by differentiating the $M(T)$ data, remains almost unchanged at 100 K with doping. The observed effect of doping is consistent with the previously reported Ni and Cr-doped Mn$_5$Si$_3$ alloys~\cite{jap-Kanani}. The presence of thermal hysteresis between the FC and FCH $M(T)$ data in all Ni and Cr-doped alloys around $T_{N1}$ confirms the first-order nature of the transition. Similar thermal hysteresis has also been observed in the pristine Mn$_5$Si$_3$ alloy~\cite{prb-scd}. A significant increase in the dc $M$ value for both Ni and Cr-doped alloys has been noted below $T_{N2}$ in the presence of 10 kOe of external $H$. We have also recorded iso-field $M(T)$ data in the FCH protocol at different applied $H$. The effects of increasing $H$ on the $M(T)$ data of the doped alloys are found to be similar to the results observed for the pure Mn$_5$Si$_3$ alloy. For both pure and doped cases, $T_{N1}$ is shifted towards lower $T$ with increasing $H$, whereas $T_{N2}$ remains unchanged. The observed behavior arises because the external $H$ prefers the AFM2 phase over the AFM1 phase. Some representative $M(T)$ curves for both 2\% Ni and Cr-doped alloys are plotted in the insets of fig.~\ref{mtmh} (a) and (b) respectively.
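Transition temperatures here are extracted by differentiating $M(T)$. A minimal sketch of that procedure on synthetic data (the step-like profile and its parameters are illustrative only, not measured values):

```python
import numpy as np

# Synthetic M(T) with a single step-like transition at T_N = 100 K
T = np.linspace(20.0, 150.0, 600)               # temperature grid in K
T_N_true = 100.0
M = 1.0 / (1.0 + np.exp((T - T_N_true) / 2.0))  # illustrative profile only

# The transition shows up as an extremum of dM/dT
dMdT = np.gradient(M, T)
T_N_est = T[np.argmin(dMdT)]                    # steepest decrease of M with T
print(f"estimated transition temperature: {T_N_est:.1f} K")
```

The same extremum-of-derivative idea, applied to isothermal $M(H)$ curves instead of $M(T)$, yields the critical fields used later to build the $H$-$T$-$x$ phase diagrams.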
\begin{figure*}[t] \centering \includegraphics[width = 17 cm]{fig4.eps} \caption{(Color online) Main panels depict the magnetization ($M$) recorded as a function of temperature ($T$) while heating in 0.1 kOe of external magnetic field ($H$) (a) for the 1\% Ni-doped alloy ($x$ = 0.05) after being cooled in $H_{\rm cool}$ = (i) 10, (ii) 20, (iii) 30 and (iv) 50 kOe; (b) for the 2\% Ni-doped alloy ($x$ = 0.1) after being cooled in $H_{\rm cool}$ = (v) 5, (vi) 10, (vii) 20 and (viii) 30 kOe; (c) for the 4\% Ni-doped alloy ($x$ = 0.2) after being cooled in $H_{\rm cool}$ = (ix) 10, (x) 20, and (xi) 40 kOe; (d) for the 1\% Cr-doped alloy ($x$ = 0.05) after being cooled in $H_{\rm cool}$ = (xii) 10, (xiii) 20, (xiv) 30, (xv) 40 and (xvi) 50 kOe; (e) for the 2\% Cr-doped alloy ($x$ = 0.1) after being cooled in $H_{\rm cool}$ = (xvii) 10, (xviii) 20, (xix) 30, (xx) 40 and (xxi) 50 kOe; (f) for the 4\% Cr-doped alloy ($x$ = 0.2) after being cooled in $H_{\rm cool}$ = (xxii) 10, (xxiii) 20, and (xxiv) 40 kOe. The time ($t$) evolution of the dc magnetization ($M$) recorded at 5 K in the FC protocol is plotted in the insets of (a), (b), (d) and (e) for the 1\% Ni-doped, 2\% Ni-doped, 1\% Cr-doped and 2\% Cr-doped alloys respectively. For the FC protocol, the field was applied during cooling and the measurement was carried out immediately after removing the field at 5 K.} \label{vmt} \end{figure*} \par To shed more light on the effect of Mn-site doping on the field-induced AFM1 to AFM2 transition via the intermediate AFM1$^{\prime}$ phase, one of the key observations of the undoped Mn$_5$Si$_3$ alloy, we recorded isothermal $M(H)$ data between $\pm$150 kOe of applied $H$ at 5 K for all the doped alloys in the true ZFC condition (see fig.~\ref{mtmh} (c) and (d) for the Ni and Cr-doped alloys respectively).
These $M(H)$ data indicate that, unlike for the stoichiometric Mn$_5$Si$_3$ alloy, determining the signature of the AFM1 to intermediate AFM1$^{\prime}$ transition in the form of a slope change becomes harder as the critical field to reach the AFM2 phase decreases with increasing doping concentration. A faster decrease in the AFM2 transition field has been observed for the Ni-doped alloys than for the Cr-doped alloys. As the AFM2 critical transition field is quite different for different doping concentrations, we observed a significant difference in the iso-field $M(T)$ data recorded at 10 kOe of external $H$. On the other hand, in the high-$H$ region (after reaching the AFM2 phase), a small decrease in the dc $M$ value with increasing doping concentration has been noticed. It is also to be noted here that, as for the transition temperatures and critical transition fields, Ni-doping results in a faster decrease of the dc $M$ value in the AFM2 state at 5 K than Cr-doping. Moment values in the presence of 150 kOe of $H$ at 5 K for all doped alloys are listed in Table~1. \begin{figure*}[t] \centering \includegraphics[width = 8 cm]{fig5a.eps} \includegraphics[width = 8 cm]{fig5b.eps} \caption{(Color online) Field-temperature-doping concentration ($H-T-x$) phase diagrams during isothermal field application after zero-field cooling for Ni and Cr-doped alloys are depicted in (a) and (b) respectively. Red and blue colored surfaces indicate the AFM1-AFM1$^{\prime}$ and AFM1$^{\prime}$-AFM2 transition fields for different doping concentrations, respectively.} \label{ph} \end{figure*} \par As IHL behavior, with a positive coercive field and negative remanence, is one of the unique properties of the pure Mn$_5$Si$_3$ alloy, it is pertinent to check the effect of doping on the IHL behavior. Such an investigation will also shed more light on the robustness of the IHL behavior observed in the pristine Mn$_5$Si$_3$ alloy.
Besides, IHL also confirms the presence of the AFM1$^{\prime}$ phase in these alloys, as the isothermally arrested AFM1$^{\prime}$ phase is responsible for the unusual IHL behavior~\cite{prb-scd}. Though IHL behavior is reported to be visible in exchange-coupled multilayers, hard/soft multilayers, single domain particles with competing anisotropies, and in some bulk ferrimagnets, observation of IHL in a bulk antiferromagnetic compound is infrequent (pure Mn$_5$Si$_3$ is probably the only example)~\cite{oshea, aharoni, takanashi, wn, valvidares, demirtas, kim, ohkoshi, santos, nam, geshev, song, banerjee, prb-scd, ziese}. A closer look at the $M(H)$ isotherms of the presently studied doped alloys, recorded at 5 K in the true ZFC condition, reveals that IHL is visible for both Ni and Cr-doped alloys in the low doping concentration region (for $x$ = 0.05 and 0.1; see the restricted parts of the 5 K $M(H)$ isotherms for the doped alloys, plotted in the main panels of fig.~\ref{ihl} (a), (b), (d) and (e); here we have recorded all $M(H)$ isotherms between $\pm$150 kOe). For 4\% doping (both by Ni and Cr), the IHL behavior is found to vanish, and conventional hysteresis loops have been observed (see fig.~\ref{ihl} (c) and (f)). The presence of exchange bias in polycrystalline alloys may sometimes give rise to IHL~\cite{ziese,jap-Zheng1}. To probe the actual reason behind the observation of IHL behavior in the presently studied alloys, we have checked for any exchange bias effect in the doped alloys showing IHL by recording isothermal $M(H)$ data in the 10 kOe FC condition (see insets of fig.~\ref{ihl} (a), (b), (d) and (e)). As for the undoped alloy, we have not observed any shift in the center of the hysteresis loops for any alloy, and the FC $M(H)$ isotherms exactly match the $M(H)$ curves recorded in the ZFC condition. Such behavior confirms the role of the isothermally arrested AFM1$^{\prime}$ state behind the IHL behavior.
\par Apart from the unusual IHL properties, thermomagnetic irreversibility in the $M(T)$ data is another important effect associated with the undoped Mn$_5$Si$_3$ alloy~\cite{prb-scd}. In our previous work, we identified the thermomagnetic arrest of the AFM1$^{\prime}$ phase, which plays the key role in the thermomagnetic irreversibility behavior (the possibility of any arrested AFM2 state was ruled out in the same work)~\cite{prb-scd}. The presently studied doped alloys allow us to further strengthen our claim about the presence of the arrested AFM1$^{\prime}$ phase. To probe the existence of such a thermomagnetically arrested AFM1$^{\prime}$ phase, we recorded the $T$ variation of $M$ for all the doped alloys using one of the protocols that was used for the undoped Mn$_5$Si$_3$ alloy. Some representative curves for the doped alloys are depicted in the main panels of fig.~\ref{vmt} (a)-(f). Here we cooled the doped alloys from 150 K (well above $T_{N2}$) in the presence of different applied cooling fields ($H_{\rm cool}$), and the dc $M$ was recorded as a function of $T$ during heating in the presence of 100 Oe of applied field ($H_{\rm warm}$). Such protocols for $M(T)$ measurements with different cooling and heating fields are commonly used by different groups to address thermomagnetically arrested states in materials having first-order transitions~\cite{chatterjee, pramanick, dutta, chaddah, rawat, tokura}. For the $x$ = 0.05 and 0.1 alloys (for both Ni and Cr-doping cases), the magnitude of dc $M$ below $T_{N1}$ depends strongly on $H_{\rm cool}$ (see fig.~\ref{vmt} (a), (b), (d), and (e)). An increase in the strength of $H_{\rm cool}$ results in a decrease in the moment value, and eventually it becomes negative. The critical values of $H_{\rm cool}$, for which the moments become negative, depend strongly on the doping concentration.
A monotonic decrease in these critical values of $H_{\rm cool}$ has also been observed with increasing doping concentration. The magnitude of this critical field gives an idea of the AFM1-AFM1$^{\prime}$ field-induced transition, as we identified in our previous work that the negative moment appears only if the cooling field exceeds the AFM1-AFM1$^{\prime}$ transition field~\cite{prb-scd}. Observation of such irreversibility in the $M(T)$ behavior and of a negative value of dc $M$ even in the presence of a positive warming field is only possible for the presently studied alloys if there exists some thermomagnetically arrested AFM1$^{\prime}$ phase. On the other hand, a closer look at the $M(T)$ behavior with different cooling and heating fields for the $x$ = 0.2 Ni-doped alloy indicates that all the curves follow an almost identical path even below $T_{N1}$ (see fig.~\ref{vmt} (c)). For the 4\% Cr-doped alloy ($x$ = 0.2), although a very small difference in the dc $M$ value at low temperatures (below $T_{N1}$) has been observed, we failed to observe any negative value of dc $M$ in any protocol (see fig.~\ref{vmt} (f)). Such observations indicate the absence of any AFM1$^{\prime}$ state in the 4\% Ni-doped alloy, whereas a small amount of AFM1$^{\prime}$, and hence arrested AFM1$^{\prime}$ phase, is present in the 4\% Cr-doped alloy; this small amount of arrested AFM1$^{\prime}$ phase is, however, not sufficient to induce IHL and a negative value of $M$ below $T_{N1}$. The thermomagnetically arrested states observed in different materials are, in general, metastable in nature, and thermal energy (for $T\neq 0$) can assist such metastable systems in evolving towards the equilibrium configuration. As a result, such materials show significant relaxation behavior~\cite{chatterjee}. Therefore, it is pertinent to examine the time evolution of the thermomagnetically arrested state of the presently studied alloys.
In the present work, we used the FC protocol for the relaxation measurements. First, the sample was cooled in the presence of an external magnetic field $H_{\rm cool}$ from 150 K to 5 K. After reaching 5 K, the field was removed and the dc $M$ was recorded as a function of time ($t$) (see insets of fig.~\ref{vmt} (a), (b), (d), and (e)). Depending on the magnitude of the negative moment observed during the $M(T)$ measurements, recorded under different cooling and warming field protocols, we selected the value of $H_{\rm cool}$. The relaxation observed for both Ni and Cr-doped alloys (only for the $x$ = 0.05 and 0.1 concentrations) is found to be very stable (only a sluggish change in values towards the negative direction). Such observations confirm that the arrested AFM1$^{\prime}$ states are equally stable in the doped alloys, as was observed for the pristine Mn$_5$Si$_3$ alloy. Relaxation measurements for the 4\% doped alloys (both Ni and Cr doping cases) were not performed because of the absence of significant thermomagnetic irreversibility in the $M(T)$ data. \section{Discussions \& conclusions} The present study of Mn-site doped Mn$_5$Si$_3$ alloys, based on structural and magnetic investigations, unveils two crucial features: (i) a decrease in the AFM1$^{\prime}$ phase fraction with increasing doping concentration; (ii) the gradual disappearance of the IHL and thermomagnetic irreversibility behavior with increasing doping concentration. Neutron diffraction studies on the undoped Mn$_5$Si$_3$ alloy confirmed that, of the two Mn-sites, Mn2 splits into two independent sites (Mn21 and Mn22), along with a change in magnetic structure from hexagonal to orthorhombic below 100 K~\cite{silva}. Interestingly, out of the three Mn-sites present below 100 K, only the Mn22 site shows an ordered AFM moment~\cite{silva,gottschilch,brown1,brown,surgers,biniskos}.
Due to the small separation between the Mn1 sites ($\sim c/2$), AFM2 ordering fails to stabilize on these sites~\cite{silva,gottschilch,brown1,brown,surgers,biniskos}. On the other hand, the triangular arrangement of the Mn21 with the Mn22 sites gives rise to magnetic frustration and leads to the collapse of the Mn21 moment~\cite{silva,gottschilch,brown1,brown,surgers,biniskos}. A further decrease in sample temperature results in a sudden increase in lattice parameter $c$ below the AFM2-AFM1 transition point, and an AFM configuration stabilizes on the Mn1 sites~\cite{silva,gottschilch,brown1,brown,surgers,biniskos}. It is clear that the Mn-Mn distance in this Mn$_5$Si$_3$ alloy is very near its critical value, and a small change in the Mn-Mn separation will significantly affect the magnetic configuration of the system~\cite{silva,gottschilch,brown1,brown,surgers,biniskos}. In the present work, we have altered this Mn-Mn distance by doping Ni and Cr into the Mn sites. As the Ni-atom size $<$ Mn-atom size $<$ Cr-atom size, the unit cell volume, and hence the Mn-Mn distance, decreases with Ni-doping, whereas Cr-doping results in an increase of these parameters. Interestingly, for both cases we observe a lowering of $T_{N1}$. This indicates that a small change in the Mn-Mn distance (both increase and decrease) has a significant impact on the magnetic ordering of the alloy. Notably, the reduction in $T_{N1}$ for the Cr-doped alloys is found to be slower compared to the Ni-doped cases. A shift in $T_{N1}$ towards lower $T$ is also related to the decrease in the lattice parameters of the materials in the presence of external $H$. Neutron diffraction data of the pure Mn$_5$Si$_3$ alloy in the presence of an external $H$ confirm such a decrease in lattice parameters~\cite{gottschilch}.
Besides, the field-induced transition from the AFM1 to the AFM2 phase via the AFM1$^{\prime}$ phase also becomes easier and occurs at much lower fields compared to the pristine Mn$_5$Si$_3$ alloy; the small perturbation of the Mn-Mn distance introduced by Mn-site doping plays the pivotal role here. The AFM1$^{\prime}$ phase has noncollinear arrangements of the Mn22 atoms, but the Mn1 site moment vanishes again due to the reduction of the lattice parameters (and hence the Mn-Mn distance) in the presence of $H$. \par The gradual disappearance of the IHL and thermomagnetic irreversibility, the two key observations of the undoped Mn$_5$Si$_3$ alloy, are the most critical effects of Mn-site doping. In our previous work on the pristine Mn$_5$Si$_3$ alloy, we identified that the isothermally and thermomagnetically arrested AFM1$^{\prime}$ phases are responsible for the unusual IHL and thermomagnetic irreversibility behaviors, respectively~\cite{prb-scd}. The AFM1$^{\prime}$ phase is an intermediate metastable phase that appears only during the field-induced AFM1-AFM2 transition. The appearance of an intermediate metastable state during a first-order field-induced transition is not uncommon. During such a field-induced transition, it is often difficult for the system to overcome the substantial energy barrier and reach a more stable configuration directly. Instead, it proceeds through easily accessible local minima. Such behavior has already been observed for Heusler-based shape memory alloys~\cite{chatterjee2}. In the present case, the doping of foreign elements into the Mn-site reduces the energy barrier between the AFM1 and AFM2 states, along with the reduction of intermediate states (local minima). As a result, with a gradual increase of doping concentration, the system starts to reach a more stable configuration (AFM2 in the present case) in the presence of a magnetic field directly, instead of going through the intermediate metastable state (AFM1$^{\prime}$ in the present case).
The gradual disappearance of the AFM1$^{\prime}$ phase with increasing doping concentration directly affects the unusual IHL and thermomagnetic irreversibility properties, and such properties eventually vanish for 4\% doping. \par Based on the experimental observations, we prepared the $H-T-x$ phase diagram for the presently studied doped alloys (see fig.~\ref{ph} (a) and (b) for the Ni- and Cr-doped alloys, respectively). Here we have only considered the isothermal field application after true ZFC (out of the four different conditions explored in our previous work). All the critical fields were determined by differentiating the isothermal $M(H)$ curves recorded at different constant $T$ (not shown here). On the other hand, the PM-AFM2 transition temperatures were determined from the iso-field $M(T)$ data recorded at different constant $H$. The blue colored surface indicates the AFM2 to AFM1$^{\prime}$ transition field, whereas the AFM1 to AFM1$^{\prime}$ transition fields are indicated by the red surface. A clear shrinkage of the AFM1$^{\prime}$ region is observed with increasing doping concentration. \par In conclusion, the present investigation of Ni- and Cr-doped Mn$_5$Si$_3$ alloys reveals the effects of doping on the unusual inverted hysteresis loop and thermomagnetic irreversibility properties observed for the undoped Mn$_5$Si$_3$ alloy. Further, it reconfirms the role of the AFM1$^{\prime}$ phase behind such unusual features. We have also prepared an $H-T-x$ phase diagram for the doped alloys. Mn-site doped Mn$_5$Si$_3$ alloys (in the low doping region, up to 2\% doping) are an excellent addition to the bulk antiferromagnetic alloys that show IHL behavior. \section{Acknowledgments} SCD (IF160587) and NK would like to thank DST-India and UGC-India, respectively, for their fellowships.
\section{Introduction} In response to climate change concerns, the decarbonization and digitization of the electricity sector have been accelerated in recent years. The path to decarbonization is associated with the high penetration of Distributed Energy Resources (DERs), such as rooftop solar, home batteries and electric vehicles, with the potential to support a reliable, affordable and lower-emissions electricity grid. The increasing deployment of DERs raises several opportunities and challenges in electricity systems \cite{openenergy}. Technical challenges include increased bidirectional power flows, a rise in voltage levels, and a lack of visibility \cite{coster2010integration}. On the other side, efficient DER integration can provide substantial benefits for energy customers \cite{aemo}. DER owners can benefit from Feed-In-Tariff (FiT) programs by selling energy back to the utility grid for a fixed price \cite{ye2017analysis}. Alternatively, they can be coordinated and aggregated for participation in different markets. Virtual Power Plants (VPPs) are an example of DER aggregation to exploit their inherent flexibility \cite{pudjianto2007virtual}. An emerging technique for integrating small-scale producers and consumers into energy markets is Peer-to-Peer (P2P) trading, which allows bilateral energy transactions between users \cite{tushar2020peer}. P2P trading provides significant benefits for both end users and grid operators, such as increasing welfare through preference satisfaction \cite{morstyn2018using}, lowering operational costs, and improving system reliability \cite{mengelkamp2018designing}. P2P trading offers the flexibility required to coordinate agents with diverse preferences. Grid digitization provides a two-way communication network which allows DER owners and energy customers to act as proactive agents in the energy markets and paves the path for P2P market implementation. Through the direct interaction of agents in a decentralized platform, small-scale producers are allowed to compete with large traditional suppliers.
In P2P trading, it is expected that market participants can settle the bilateral transactions with less influence from a central authority \cite{giotitsas2015peer}. Hence, designing a decentralized platform for managing market participants with diverse preferences is a challenging task. Blockchain technology offers new opportunities for decentralized market designs and enables energy producers and consumers to directly negotiate and trade energy. It provides a platform to store and share data in a secure and verifiable manner, even when the identity and trustworthiness of market participants are unknown \cite{van2020integrated}. Given the properties of blockchain, several blockchain-based platforms for P2P energy trading have been developed recently \cite{mengelkamp2018designing,li2017consortium,wang2019energy}. The Brooklyn microgrid is a prototype of a blockchain-enabled P2P market, in which a blockchain framework is employed to implement a decentralized microgrid energy market \cite{mengelkamp2018designing}. A unified blockchain framework for various scenarios of energy trading in an industrial internet of things is proposed in \cite{li2017consortium}, in which a credit-based payment scheme is employed to overcome the transaction limitation caused by transaction confirmation delays. An operation framework for P2P energy trading at the distribution level is presented in \cite{wang2019energy}, where the system operator clears the market using a crowdsourced energy system model. The blockchain framework should incorporate a proper market settlement approach to settle the bilateral transactions among the agents. There are several approaches that can be applied for P2P market clearing and peer matching, such as auction-based mechanisms, game theory, and optimization-based methods.
In the auction-based mechanism, agents express their interest in energy trading by submitting their offers, and the energy allocation and price are determined based on the market clearing rules \cite{chen2019trading}. The game theory approaches aim to provide a stable solution that is beneficial for all parties \cite{tushar2018peer}. In the optimization-based method, the market settlement is formulated as an optimization problem, which can be decomposed into local subproblems solvable by each agent \cite{khorasany2019decentralised}. The optimization-based methods can be implemented in a decentralized manner without any need for third-party intervention and allow agents to optimize their welfare by participating in the P2P market; hence, they are well suited for blockchain-enabled P2P markets. The aim of this paper is to design a blockchain-enabled P2P market that provides a secure and transparent environment for energy trading of producer and consumer agents. In the proposed approach, agents can \textit{advertise} their surplus/deficit energy. In order to reduce the computation and communication overheads, a \textit{prioritization} step is considered in the market settlement process that enables agents to prioritize their trading partners based on their proximity and reputation factor. To incentivize agents to trade energy with their neighboring agents, a grid service charge is considered, which applies a transaction fee to each trade based on the electrical distance between producer and consumer. In order to preserve the privacy of the agents, we propose an Anonymous Proof of Location (A-PoL) algorithm that enables them to prove their location without revealing their real identity. A decentralized optimization algorithm is employed for the \textit{negotiation}, which allows agents to iteratively optimize their welfare in the negotiation process.
The privacy of agents in the negotiation process is also preserved, as they do not need to exchange their private information related to their energy and price preferences. The contributions of this paper are summarized as follows: \begin{itemize} \item [-] Designing a purely decentralized P2P trading framework that does not require access to private information of agents; \item[-] Implementing a \textit{prioritization} step to allow agents to select their trading partners based on their location and reputation; \item[-] Proposing an A-PoL algorithm which uses a Certificate of Location (CoL) issued by a smart meter to approve the location of agents. \end{itemize} The rest of this paper is organized as follows. Section \ref{sec: market structure} outlines the structure of the market, including agent modeling, the market objective, and the decentralized optimization of the market objective. Section \ref{sec:energy trading} explains the market settlement process and its different steps. Case studies and numerical results are reported in Section \ref{sec: case study}. A detailed analysis of the security and privacy of the proposed framework is presented in Section \ref{sec:security}. Finally, concluding remarks are given in Section \ref{sec:conclusion}. \section{Market Structure}\label{sec: market structure} In this section, we outline the details of the market structure. The proposed architecture consists of two layers: i) the physical layer, which is the underlying physical network to transfer electricity from producers to consumers. The minimum requirement for a successful transaction between a producer and a consumer is the existence of a path to transfer power between them; and ii) the overlay layer, where the participating nodes, which include energy producers, consumers, and the grid operator, connect through a public blockchain to share information.
The overlay provides a secure platform for the participating nodes to advertise their energy, negotiate on their offers, and decide on their actions in the market. The market mechanism is implemented in this layer, enabling the agents to trade energy and settle energy transactions.\par In the rest of this section, we discuss agent modeling in Section \ref{sub:sec:agent-modeling}, the market objective in Section \ref{sub:sec:market-settlement}, and the decentralized optimization method for market settlement in Section \ref{sub:sec:coordination}. \par \subsection{Agents modeling}\label{sub:sec:agent-modeling} We consider an energy network with a set of agents consisting of a set of producer agents with index \(i \in \mathcal{P}=\{1, ..., P\}\) and a set of consumer agents with index \( j \in \mathcal{C}=\{1, ..., C\}\) connected to a physical layer managed by a grid operator. Producers have energy producing capability and can use their own generated energy for their demand. If there is more generation than demand, producers can sell the surplus to consumers or to the grid. Consumers, on the other side, can buy energy from producers or from the grid. Producers and consumers can negotiate for energy trading in a forward market for any time interval \(t \in \mathcal{T} =\{1, ..., T\}\) with equal duration (e.g., one hour). Agents are equipped with smart meters, which determine the energy surplus/deficit available for trade in each time slot. After energy management, agents indicate their energy surplus and deficit. Each agent can trade with the grid or with other agents in the network.
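The bookkeeping behind this agent model, i.e., splitting each agent's surplus or deficit between grid trades and peer trades, can be sketched in Python as follows. This is an illustrative sketch only; the class and field names are ours, not part of the proposed framework.

```python
from dataclasses import dataclass, field

@dataclass
class Producer:
    """Producer agent: sells its surplus to consumers or to the grid."""
    surplus: float                                     # e_i (kWh)
    to_grid: float = 0.0                               # e_i^G
    to_consumers: dict = field(default_factory=dict)   # e_ij^P per consumer id

    def balance_ok(self, tol: float = 1e-9) -> bool:
        # e_i = e_i^G + sum_j e_ij^P
        return abs(self.surplus - (self.to_grid + sum(self.to_consumers.values()))) < tol

@dataclass
class Consumer:
    """Consumer agent: covers its deficit from producers or from the grid."""
    deficit: float                                     # e_j (kWh)
    from_grid: float = 0.0                             # e_j^G
    from_producers: dict = field(default_factory=dict) # e_ji^P per producer id

    def balance_ok(self, tol: float = 1e-9) -> bool:
        # e_j = e_j^G + sum_i e_ji^P
        return abs(self.deficit - (self.from_grid + sum(self.from_producers.values()))) < tol

p = Producer(surplus=5.0, to_grid=1.0, to_consumers={"c1": 4.0})
c = Consumer(deficit=4.0, from_grid=0.0, from_producers={"p1": 4.0})
```

The two `balance_ok` checks mirror the energy-balance relations formalized next.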
The total energy surplus and deficit of producers and consumers can be represented as \begin{equation}\label{tot energ pro} e_i=e_i^G +\sum_{j \in \mathcal{C}}{e_{ij}^P} \end{equation} \begin{equation}\label{tot energ cons} e_j= e_j^G+\sum_{i \in \mathcal{P}}{e_{ji}^P} \end{equation} where \(e_i^G\) and \(e_j^G\) are the traded energy between producer $i$/consumer $j$ and the grid, respectively, \(e_{ij}^P\) is the energy sold by producer $i$ to consumer $j$, and \(e_{ji}^P\) is the energy bought by consumer $j$ from producer $i$. Each agent in the P2P market aims to maximize its welfare. The welfare incorporates the utility/cost of energy consumption/generation, the cost/revenue of trade with other agents or the grid, and the cost of using the grid for P2P trading. The welfare functions of producers and consumers can be modeled by (\ref{pro welf}) and (\ref{con welf}), respectively \begin{equation}\label{pro welf} W_i(e_i, \lambda_{ij},\gamma_{ij})= \underline{\lambda}^G e_i^G +\sum_{j \in \mathcal{C}}{e_{ij}^P (\lambda_{ij}-\gamma_{ij})}-C_i(e_i) \end{equation} \begin{equation}\label{con welf} W_j(e_j, \lambda_{ij},\gamma_{ij})= U_j(e_j)- \overline{\lambda}^G e_j^G -\sum_{i \in \mathcal{P}}{e_{ji}^P (\lambda_{ij}+\gamma_{ij})} \end{equation} where \(\underline{\lambda}^G\) denotes the FiT, representing the price for selling energy to the grid; \(\lambda_{ij}\) is the energy price in the transaction between producer $i$ and consumer $j$; \(\gamma_{ij}\) is the grid service charge for using the grid infrastructure for this trade; and \(\overline{\lambda}^G\) denotes the grid's selling price, i.e., the price for buying energy from the grid, which is usually fixed over time (e.g., a time-of-use tariff). The grid selling and buying prices bound the energy price in the P2P market, i.e., for any bilateral trade \begin{equation}\label{price lim} \underline{\lambda}^G \leq \lambda_{ij} \leq \overline{\lambda}^G.
\end{equation} The cost function of the producer represents the cost of generating energy \(e_i\) and can be modeled as \cite{grainger2003power} \begin{equation}\label{cost-func-producer} C_i(e_i)=a_i e_i^2 +b_i e_i +c_i \end{equation} where \(a_i, b_i\) and $c_i$ are positive constants, which can be adjusted by the producer to reflect the generation cost. Since producers usually have renewable generation resources with zero marginal cost, the cost function can represent the cost associated with battery degradation, if the producer needs to discharge its battery to sell the energy. On the other side, the utility function of a consumer represents its satisfaction level from consuming energy $e_j$ and can be modeled as \cite{samadi2010optimal} \begin{equation}\label{cost-function-consumer} U_j(e_j)= \begin{cases} -a_j e_j^2 +b_j e_j \;\;\;\; :0 \leq e_j \leq \frac{b_j}{2 a_j}\\ \frac{b_j^2}{4 a_j} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :e_j \geq \frac{b_j}{2 a_j} \end{cases} \end{equation} where \(a_j\) and \(b_j\) are unique positive constants for each consumer. These parameters reflect the consumer's valuation of the energy and denote the price that the consumer is willing to pay for it. To incentivize agents to trade with their electrically closest partners, a grid service charge is considered for each transaction. This fee represents the price that agents need to pay for using the grid infrastructure for each trade and is expressed by \begin{equation}\label{service charge} \gamma_{ij}=\omega d_{ij} \end{equation} where \(\omega\) is the grid service charge per unit of electrical distance for each unit of energy, and \(d_{ij}\) is the electrical distance between producer $i$ and consumer $j$. This distance can be calculated based on the power transfer distance, which aggregates the absolute values of the Power Transfer Distribution Factors (PTDFs) induced by a trade as in \begin{equation} \label{eq:ptdf} d_{ij}=\sum_{l \in \mathcal{L}}{\phi^l_{ij}}.
\end{equation} For any trade, \(\phi^l_{ij}\) indicates the fraction of transacted power from producer $i$ to consumer $j$ that flows over a line $l$, and can be calculated using the method presented in \cite{wood2013power}. \subsection{Market objective} \label{sub:sec:market-settlement} The market objective for P2P trading is formulated as social welfare maximization, which maximizes the total welfare of players in the market subject to the constraints, and mathematically can be modeled as: \begin{equation}\label{tot objective} \begin{aligned} & \underset{\textbf{\textit{$e_i$,$e_j$}}}{\text{max}} \sum_{i \in \mathcal{P}}{W_i} + \sum_{j \in \mathcal{C}}{W_j}\\ & \text{s.t. constraints} \end{aligned} \end{equation} As stated in (\ref{price lim}), the price in P2P market is always beneficial for both producers and consumers. Hence, it is rational to assume that all agents try to maximize their traded energy in the P2P market and minimize trading with the grid, by setting $e_i^G=e_j^G=0$. Therefore, Eq. 
(\ref{tot objective}) can be rewritten as: \begin{subequations}\label{social welfare} \begin{equation} \begin{aligned} & \underset{\text{\textbf{{$e_{ij}$,$e_{ji}$}}}}{\text{max}} \sum_{j \in \mathcal{C}}{U_j(e_{ji})} - \sum_{i \in \mathcal{P}}{C_i({e_{ij}})} - \sum_{j \in \mathcal{C}}{\sum_{i \in \mathcal{P}}}{(e_{ij}+e_{ji})\gamma_{ij}}\\ \end{aligned} \end{equation} \begin{equation}\label{producer flex} \text{s.t.} \;\;\; \underline{e_i} \leq \sum_{j \in \mathcal{C}}{e_{ij}^P} \leq \overline{e_i} \;\;\;\;\;\;\;\;\;\;\;\;\;\; :\underline{\mu_i}, \overline{\mu_i} \end{equation} \begin{equation}\label{consumer flex} \underline{e_j} \leq \sum_{i \in \mathcal{P}}{e_{ji}^P} \leq \overline{e_j} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :\underline{\mu_j}, \overline{\mu_j} \end{equation} \begin{equation}\label{demand supply} e_{ij}^P=e_{ji}^P \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :\lambda_{ij} \end{equation} \end{subequations} where (\ref{producer flex}) and (\ref{consumer flex}) represent the flexibility constraints of the producer and consumer, respectively. The constraint (\ref{demand supply}) represents the power balance in the transaction between producer $i$ and consumer $j$. \(\underline{\mu_i}, \overline{\mu_i}, \underline{\mu_j}, \overline{\mu_j}, \lambda_{ij}\) are the dual variables associated with these constraints. \subsection{Decentralized optimization} \label{sub:sec:coordination} In this paper, our aim is to solve (\ref{social welfare}) with only P2P communications to ensure the data privacy of the agents. To do so, a decentralized optimization approach is employed to formulate the market settlement in the P2P market \cite{khorasany2019decentralised}. In this approach, dual variables are used to decouple constraints, and then the problem is decomposed into local subproblems solvable by producers and consumers. The local subproblem is solved by deploying the sub-gradient projection method \cite{boyd2011distributed}.
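As a rough numerical illustration of this sub-gradient projection scheme, the following Python sketch iterates a price update and energy updates for a single producer-consumer pair with the quadratic cost and utility models above. The flexibility multipliers are omitted (limits assumed non-binding), and all coefficients and step sizes are illustrative assumptions, not values from the paper.

```python
# Projected-subgradient sketch for one producer-consumer pair.
# Cost C_i(e) = a_i e^2 + b_i e + c_i, utility U_j(e) = -a_j e^2 + b_j e.

def project(x):
    """The [.]^+ operator, i.e., max{., 0}."""
    return max(x, 0.0)

a_i, b_i = 0.1, 0.2      # producer cost coefficients (illustrative)
a_j, b_j = 0.1, 1.0      # consumer utility coefficients (illustrative)
gamma = 0.05             # grid service charge for this pair
rho = 0.05               # price step size

lam, e_ij, e_ji = 0.5, 0.0, 0.0
for _ in range(2000):
    # producer adjusts the price based on the supply-demand mismatch
    lam = project(lam - rho * (e_ij - e_ji))
    # each side computes its optimal set point at the current price
    e_ij_opt = project((lam - gamma - b_i) / (2 * a_i))
    e_ji_opt = project((b_j - lam - gamma) / (2 * a_j))
    # and relaxes its traded energy toward that set point
    e_ij += 0.5 * (e_ij_opt - e_ij)
    e_ji += 0.5 * (e_ji_opt - e_ji)

# at convergence e_ij ~ e_ji, i.e. the bilateral trade is balanced
```

With these coefficients the iteration settles at the price where the producer's marginal cost plus charge meets the consumer's marginal utility minus charge.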
Each agent contributes to solving the global problem by updating its local decision variables. The sets of decision variables for producers and consumers are \(\{\lambda_{ij}, e_{ij}^P, \underline{\mu_i}, \overline{\mu_i}\}\) and \(\{ e_{ji}^P, \underline{\mu_j}, \overline{\mu_j}\}\), respectively. The market settlement approach is an iterative process, in which agents update their decision variables iteratively and exchange information without revealing their private information. The updates of the decision variables of the agents are based on the Karush–Kuhn–Tucker (KKT) optimality conditions of the local problems, and can be derived using the first-order derivative of the relaxed problem as follows:\\ \(\forall i \in \mathcal{P}\) \begin{subequations}\label{sell update dec} \begin{equation}\label{sell price update} \lambda_{ij}^{k+1}=\left[\lambda_{ij}^{k}-\rho_{\lambda}^k(e_{ij}^{P,k}-e_{ji}^{P,k})\right]^{+} \end{equation} \begin{equation}\label{sell mu low update} \underline{\mu_i}^{k+1}=\left[\underline{\mu_i}^{k}+\rho_{\mu}^k(\underline{e_i}-e_i^k)\right]^{+} \end{equation} \begin{equation}\label{sell mu up update} \overline{\mu_i}^{k+1}=\left[\overline{\mu_i}^{k}+\rho_{\mu}^k(e_i^k-\overline{e_i})\right]^{+} \end{equation} \begin{equation}\label{sell energy update} e_{ij}^{{P,k+1}}= \left[e_{ij}^{{P,k}}+\zeta_{ij}^k(\tilde{e}_{ij}^{P,k+1}-e_i^k)\right]^{+}\\ \end{equation} \begin{equation}\label{set point sell update} \tilde{e}_{ij}^{P,k+1}=\frac{\lambda_{ij}^{k+1}-\gamma_{ij}-\overline{\mu_i}^{k+1}+\underline{\mu_i}^{k+1}-b_i}{2 a_i} \end{equation} \end{subequations} \(\forall j \in \mathcal{C}\) \begin{subequations}\label{buyer update dec} \begin{equation}\label{buyer mu low update} \underline{\mu_j}^{k+1}=\left[\underline{\mu_j}^{k}+\rho_{\mu}^k(\underline{e_j}-e_j^k)\right]^{+} \end{equation} \begin{equation}\label{buyer mu up update} \overline{\mu_j}^{k+1}=\left[\overline{\mu_j}^{k}+\rho_{\mu}^k(e_j^k-\overline{e_j})\right]^{+} \end{equation}
\begin{equation}\label{buyer power update} e_{ji}^{P,k+1}= \left[e_{ji}^{P,k}+\zeta_{ji}^k(\tilde{e}_{ji}^{P,k+1}-e_j^k)\right]^{+} \end{equation} \begin{equation}\label{buyer set point update} \tilde{e}_{ji}^{P,k+1}=\frac{b_j-\lambda_{ij}^{k+1}-\gamma_{ij}-\overline{\mu_j}^{k+1}+\underline{\mu_j}^{k+1}}{2 a_j} \end{equation} \end{subequations} where $k$ is the iteration index and \(\tilde{e}_{ij}^{P}, \tilde{e}_{ji}^{P}\) are the optimal power set points of the producer and consumer at the price \(\lambda_{ij}\). \(\zeta_{ij}, \zeta_{ji}\) are asymptotically proportional factors, \(\rho\) is a small tuning parameter, and \([.]^+\) denotes \(\max\{., 0\}\). The information exchange between producers and consumers during the decentralized optimization process is explained in Section \ref{sec:neg}. \section{Market Settlement Process}\label{sec:energy trading} In this section, we outline the details of the market settlement process for P2P trading. The proposed framework consists of four main phases that are summarized in Fig. \ref{fig:full algorithm} and discussed in detail in the rest of this section. \begin{figure} \input{marketsettlement} \caption{Market settlement algorithm}\label{fig:full algorithm} \end{figure} The proposed framework considers the location of the participants, denoted by $\sigma$, in the grid service charge calculation as defined in (\ref{service charge}). Thus, it is critical for the involved parties to prove their location. Revealing the exact location, however, potentially enables malicious nodes to track the activities of a particular user, which in turn compromises the user's privacy. To address this challenge, we propose an A-PoL algorithm that enables the users to prove their location while protecting their real identity, which in turn enhances the level of anonymity offered to the users. The proposed PoL algorithm involves a CoL that is issued by a smart meter in the network, as shown in Fig. \ref{fig:CoL}.
The energy companies maintain a local record of the accurate location of the installed smart meters. During the installation process, the energy company deploys a key pair in each smart meter (step 1 in Fig. \ref{fig:CoL}) and records the $<PK, location>$ tuple in its local storage. The company serves as a certificate authority (CA) for the PKs deployed in smart meters. Although the CA is a trusted authority, relying on the PK provided by the CA may potentially compromise the privacy of the users, as the company can build a virtual profile of the user and their energy trading (by observing the proof of location transactions). To address this challenge, we propose the CoL. \par \begin{figure} \centering \includegraphics[scale=0.5]{COL.jpeg} \caption{An overview of the proposed CoL.} \label{fig:CoL} \end{figure} A CoL is a certificate received from a verifier, which is an independent smart meter in the network. Assume smart meter \textit{A} is going to establish a CoL. Once deployed on site, \textit{A} explores the CA to find potential smart meters that can function as the verifier. Assume smart meter \textit{B} is selected by \textit{A} to function as the verifier. Recall that we assume the smart meters are tamper resistant, and thus \textit{A} can send its request to any meter listed in the CA. \textit{A} may request a group of nodes to function as verifiers. \textit{A} sends a CoL request transaction to \textit{B} that is structured as $<T\_ID, MTR, \sigma, PK, Sign>$, where \textit{T\_ID} is the unique identifier of the transaction, which is the hash of the transaction content. \par \textit{A} populates the root hash of a Merkle tree constructed by recursively hashing a number of PKs in the \textit{MTR} field. The PKs in the leaves of the tree are later employed by \textit{A} to prove ownership of the CoL, which is discussed in greater detail later in this section. The number of PKs may vary based on the application.
$\sigma$ is the noisy location of the smart meter, which can be the location at a lower resolution, e.g., the street in which the meter is installed. This potentially protects the privacy of the smart meter against deanonymization, as studied later in Section \ref{sec:security}. \textit{PK} is the PK of \textit{A} allocated by the CA, and \textit{Sign} is the corresponding signature, which proves that \textit{A} owns the private key corresponding to the PK. \par When the verifier, i.e., \textit{B}, receives the CoL request, it verifies that the transaction is generated by a genuine smart meter, which is done by querying the CA (step 3). To protect the privacy of the user, the CA does not reveal the actual location of the smart meter to \textit{B}; instead, it only confirms that the PK belongs to a registered, genuine smart meter. Once verified, the verifier creates a CoL that is \textit{sign(hash(MTR, $\sigma$))} and replies back to \textit{A} by sending the reply transaction structured as $<CoL, PK, Sign>$, where CoL is as outlined above, \textit{PK} is the PK of the verifier, i.e., \textit{B}, registered by the CA, and \textit{Sign} is the corresponding signature of the verifier. \par \textit{A} employs the received CoL to anonymously prove its location to the nodes in the overlay. To do so, \textit{A} appends $CoL_f\textsuperscript{A} = (CoL_A, PK\textsubscript{ver}, Sig\textsubscript{ver}, MTR_A, \sigma_A, PK_A, MTL_A, \\ Sign_A)$ to its transactions, where the first three fields are extracted from the response of the verifier \textit{B}. $MTR_A$ and $\sigma_A$ are extracted from the CoL request sent to the verifier. $PK_A$ is a PK that was part of the leaves of the Merkle tree. $MTL_A$ is the set of Merkle tree nodes necessary to prove the existence of $PK_A$ in the MTR, and $Sign_A$ is the signature corresponding to $PK_A$. The last three fields ensure that only the owner of the certificate, who knows the PKs in the Merkle tree and the corresponding private key, can use the certificate.
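The Merkle-tree commitment used in the CoL (the MTR field and the MTL inclusion proof) can be sketched as follows, assuming SHA-256 and a power-of-two number of leaf PKs. The function names and byte encoding are ours, not part of the proposed protocol.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 is an assumption; the paper does not fix a hash function
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash (the MTR) over the hashed leaf PKs."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (the MTL) proving leaves[index] is under the root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        sibling = index ^ 1                      # sibling position at this level
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(leaf, proof, root):
    """Check that `leaf` is one of the PKs committed to by the MTR."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

pks = [b"PK0", b"PK1", b"PK2", b"PK3"]  # PKs chosen by smart meter A
mtr = merkle_root(pks)                  # goes into the CoL request's MTR field
mtl = merkle_proof(pks, 2)              # MTL revealing only that PK_A = pks[2] is committed
```

A verifying node recomputes the path from the revealed PK to the root, so only the holder of the committed PKs (and their private keys) can use the certificate.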
\par To verify the location of \textit{A}, the participating nodes in the blockchain, say \textit{C}, must first verify that $PK_A$ belongs to $MTR_A$ using $MTL_A$. Next, \textit{C} verifies that $Sign_A$ matches $PK_A$. The third step is to check that $CoL_A$ is a valid signature of the verifier over $hash(MTR_A, \sigma_A)$. The final step is for \textit{C} to verify PK\textsubscript{ver} using the CA. This ensures that a genuine smart meter has signed the CoL. If all the above steps pass successfully, the location of \textit{A} is verified. \par Having discussed the details of the proposed PoL algorithm, we study the process of P2P energy trading in the rest of this section. The algorithms implemented by the producer and consumer agents are presented in Algorithms \ref{producer lag} and \ref{consumer alg}, respectively, and are discussed in greater detail in the rest of this section. \subsection{Advertisement}\label{sub:sec:advertisement} The main aim of this phase is for the agents to advertise their energy to the potential consumers/producers. In each market interval, agents participate in the forward market by submitting their offers and asks in the form of an \textit{advertisement transaction (AT)} that is structured as follows: \begin{equation} AT = (T\_ID, price/amount, \eta_i, CoL_f\textsuperscript{i}) \end{equation} where \textit{price/amount} can be either the price of the energy, i.e., $\lambda_i, \forall i\in \mathcal{P}$, if the AT is generated by a producer, or the amount of requested energy, i.e., $e_j, \forall j\in \mathcal{C}$, if the AT is generated by a consumer, and \(\eta\) is the reputation factor of the agent.\par In conventional blockchain-based frameworks, the negotiation transactions are stored in the blockchain, which provides high auditability. However, this potentially increases the computational and storage resource consumption of the blockchain and limits throughput, i.e., the total number of transactions that can be stored per second.
These potentially reduce the blockchain scalability, while the smart grid comprises a broad range of devices that may produce transactions frequently. To address this challenge, in our framework the negotiation transactions are stored in a public database managed by the grid operator that is referred to as the \textit{Advertisement Database} (AD). Write access to the AD is restricted to the grid operator, while other nodes in the overlay have read-only permission. The parties involved in an AT transaction, i.e., the energy producer and consumer, may store the transactions locally and employ them in case of a dispute between the parties. The final price and amount of energy agreed by the participants are defined later during negotiation; thus, references to AT transactions are limited. Consequently, we skip storing such transactions in the blockchain. \par \subsection{Prioritization} After the \textit{advertisement} step, agents explore the available ATs in the AD for the negotiation. One-on-one negotiation may potentially increase the delay associated with finalizing the trade, as either side of the trade may obtain higher welfare by trading with another party in the network. Also, negotiation with every agent in the market increases the computation and communication overheads, which potentially leads to low scalability \cite{khorasany2019enhancing}. Thus, agents need to prioritize their trading partners based on their preferences and only negotiate with a given number of agents. As stated in Section \ref{sub:sec:agent-modeling}, agents have to pay a grid service charge for each transaction, as defined in (\ref{service charge}). This charge is directly associated with the distance between the producer \textit{i} and the consumer \textit{j}, and impacts the welfare of the agents. Accordingly, agents are incentivized to trade with trading partners located in close vicinity. This in turn reduces the load on the transmission lines and thus reduces the cost of managing the physical layer.
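The distance-based service charge of (\ref{service charge}) and (\ref{eq:ptdf}) can be illustrated with a short sketch; the PTDF values and the tariff $\omega$ below are placeholder numbers, not computed from a real network model.

```python
# Grid service charge gamma_ij = omega * d_ij, where the electrical
# distance d_ij aggregates |PTDF| over all lines used by the trade (i, j).
# All numeric values below are illustrative placeholders.

def electrical_distance(ptdf_ij):
    """d_ij = sum over lines l of |phi^l_ij|."""
    return sum(abs(phi) for phi in ptdf_ij)

def service_charge(omega, ptdf_ij):
    """gamma_ij per unit of traded energy."""
    return omega * electrical_distance(ptdf_ij)

ptdf = {
    ("p1", "c1"): [0.6, 0.3, 0.1],        # nearby pair: short electrical path
    ("p1", "c2"): [0.6, 0.5, 0.4, 0.3],   # distant pair: longer path
}
omega = 0.02  # tariff per unit distance per kWh (assumed)

charges = {pair: service_charge(omega, phis) for pair, phis in ptdf.items()}
```

Because the charge grows with the aggregated |PTDF|, the electrically closer consumer is always the cheaper trading partner, which is exactly the incentive the charge is designed to create.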
On the other hand, agents may prefer to negotiate with the trading partners with higher reputation factors, indicating their past performance in fulfilling their commitments. Thus, agents need to prioritize their trading partners based on their preferences, which include the reputation factor and distance, and only negotiate with a given number of them. Agents may have varied preferences over the reputation factor and distance of their trading partners. Hence, we define a priority index for each possible transaction between producers and consumers. This index for the offer from consumer $j$ received by producer $i$ and for the offer from producer $i$ received by consumer $j$ is defined using (\ref{sell pri index}) and (\ref{buy pri index}), respectively. \begin{equation}\label{sell pri index} \Upsilon_{ij}=\alpha_i \eta_j +\beta_i (1-\cfrac{|\sigma_i -\sigma_j|}{D_{ij}})\\ \end{equation} \begin{equation}\label{buy pri index} \Upsilon_{ji}=\alpha_j \eta_i +\beta_j (1-\cfrac{|\sigma_i -\sigma_j|}{D_{ji}})\\ \end{equation} where \(\eta_j\) and \(\eta_i\) are the reputation factors of consumer $j$ and producer $i$, \(\alpha\) and \(\beta\) are the weights that an agent places on the reputation factor and the proximity of other agents, respectively, such that \(\alpha+\beta=1\), \(D_{ij}=\underset{\sigma_j}{\max} |\sigma_i-\sigma_j|\), and \(D_{ji}=\underset{\sigma_i}{\max} |\sigma_i-\sigma_j|\). After calculating the priority indices, each agent divides its trading partners into a set of $\mathcal{N}$ groups and then starts to negotiate with the agents in the group with the highest priority, proceeding towards the group with the lowest priority.
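The priority index just defined can be computed as in the following sketch, where a producer scores two hypothetical consumers; the weights, reputation factors, and locations are illustrative values of our own.

```python
# Priority index: Upsilon = alpha * reputation + beta * (1 - |sigma_i - sigma_j| / D),
# with alpha + beta = 1. All numbers below are illustrative.

def priority_index(alpha, reputation, sigma_self, sigma_other, d_max):
    """Higher index = more attractive trading partner."""
    beta = 1.0 - alpha
    return alpha * reputation + beta * (1.0 - abs(sigma_self - sigma_other) / d_max)

# producer i at location 0.0 scoring two hypothetical consumers
consumers = {
    "c1": {"reputation": 0.9, "sigma": 8.0},   # reputable but electrically far
    "c2": {"reputation": 0.4, "sigma": 1.0},   # less reputable but close
}
d_max = max(abs(0.0 - c["sigma"]) for c in consumers.values())  # D_ij

scores = {cid: priority_index(0.5, c["reputation"], 0.0, c["sigma"], d_max)
          for cid, c in consumers.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

With equal weights ($\alpha=\beta=0.5$), the nearby low-reputation consumer outranks the distant high-reputation one, showing how the weights trade the two criteria off against each other.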
Each group of potential consumers/producers for producer $i$/consumer $j$ can be formed as \begin{equation}\label{prioritization sell} \Omega_i^n=\{j \in \mathcal{C}| (N-n)/N \leq \Upsilon_{ij} \leq (N-n+1)/N \} , \forall n \in \mathcal{N} \end{equation} \begin{equation}\label{prioritization buy} \Omega_j^n=\{i \in \mathcal{P}| (N-n)/N \leq \Upsilon_{ji} \leq (N-n+1)/N \} , \forall n \in \mathcal{N} \end{equation} in which, for consumer $j$, producers in group $\Omega_j^n$ have higher priority than producers in $\Omega_j^{n+1}$. Similarly, for producer $i$, consumers in group $\Omega_i^n$ have higher priority than consumers in $\Omega_i^{n+1}$. \subsection{Negotiation} After \textit{prioritization}, each consumer starts negotiating with the producer agents in $\Omega_j^1$. The first step is for consumer \textit{A} to request the grid service charge from the grid operator. \textit{A} sends the list of agents in $\Omega_j^1$ to the grid operator, who calculates the grid service charges using (\ref{service charge}) and sends the response back to \textit{A}. Once \textit{A} has received the grid service charges, it starts the negotiation with the agents in $\Omega_j^1$ by sending an \textit{Energy Negotiation} (EN) transaction that is structured as $ <T\_ID, Amount, Price, PK\textsuperscript{D}, Sign\textsuperscript{D}, PK, Sign,\\ Agreement_P, Agreement_C>$, where \textit{Amount} identifies the total amount of energy demanded by \textit{A} and \textit{Price} is the price that the consumer is willing to pay. \textit{PK\textsuperscript{D}} and \textit{Sign\textsuperscript{D}} represent the PK of the destination agent and its corresponding signature. The destination can be either the producer, when \textit{A} generates the transaction, or \textit{A}, when a producer is the generator of the transaction. This potentially enables routing the transaction to the destination using the routing algorithm proposed in \cite{dorri2019spb}.
\textit{Sign} and \textit{PK} represent the signature and PK of the transaction generator. Finally, \textit{$Agreement_P$} and \textit{$Agreement_C$} are flags that indicate whether the energy producer or consumer, respectively, agrees with all conditions of the trade. The generator of the EN signs the hash of the transaction content, which ensures the integrity of the transaction content. After populating the EN, \textit{A} sends the transaction to the energy producers in $\Omega_j^1$. \par The energy producer may receive several EN transactions from consumers. The producer only responds to those in the set $\Omega_i^1$. For each EN received from consumers in this set, the producer updates the price using (\ref{sell price update}). When the response from the producer is received by the consumers, they update their energy using (\ref{buyer power update}) and respond to the producer again. This process continues until both sides of the trade agree on the trade conditions and thus set $Agreement_P$ and $Agreement_C$ to '1'. An EN transaction is considered a valid transaction to be stored in the blockchain only when both the energy producer and consumer sign the transaction and $Agreement_P$ and $Agreement_C$ are set to '1'. This ensures that only the last transaction, which contains the final trade conditions, is stored in the blockchain, which in turn increases the blockchain throughput and reduces the negotiation delay.
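The EN validity rule described above can be sketched as follows; the field names and the string placeholders standing in for cryptographic signatures are illustrative only (a real node would verify actual signatures against the PKs).

```python
from dataclasses import dataclass

@dataclass
class EN:
    """Simplified Energy Negotiation transaction (illustrative fields only)."""
    t_id: str
    amount: float        # kWh demanded by the consumer
    price: float         # price the consumer is willing to pay
    sign_producer: str   # producer's signature over the transaction hash ("" if absent)
    sign_consumer: str   # consumer's signature over the transaction hash ("" if absent)
    agreement_p: bool    # producer agrees with all trade conditions
    agreement_c: bool    # consumer agrees with all trade conditions

def is_valid_en(tx: EN) -> bool:
    """Only the final EN, signed by both sides with both flags set, is stored."""
    signed_by_both = bool(tx.sign_producer) and bool(tx.sign_consumer)
    return signed_by_both and tx.agreement_p and tx.agreement_c
```

Intermediate negotiation rounds fail this check (a missing signature or an unset flag), so only the concluding EN reaches the blockchain.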
\begin{algorithm}[t] \caption{Producer's Algorithm}\label{producer lag} \footnotesize{ \begin{algorithmic}[1] \State Submit offer AT \Comment{\textit{Advertisement}} \State Explore AD to find potential trading partners \State Divide consumers in groups $\Omega_i^1, ..., \Omega_i^N$ using (\ref{prioritization sell}) \Comment{\textit{Prioritization}} \State Set $n\gets 1$ \State Receive EN transactions from consumers \Comment{\textit{Negotiation}} \While{$|\lambda_{ij}^{P,k+1}-\lambda_{ij}^{P,k}| \geq \epsilon$} \For {$j \in \Omega_i^n$} \State Receive $\gamma_{ij}$ from grid operator \State Receive $e_{ji}^{k}$ from consumer \State Calculate $\lambda_{ij}^{k+1}$ using (\ref{sell price update}) \State Update $\overline{\mu}_i^{k+1}$ and $\underline{\mu}_i^{k+1}$ using (\ref{sell mu low update}) and (\ref{sell mu up update}) \State Calculate $e_{ij}^{P,k+1}$ using (\ref{sell energy update}) \State Broadcast $\lambda_{ij}^{P,k+1}$ to consumers \EndFor \EndWhile \State Check if more energy is available \State Set $n\gets n+1$ \State Repeat Negotiation with new consumers \State Receive LP from consumers \Comment{\textit{Energy trading}} \State Inject agreed energy \State Sign EI \end{algorithmic} } \end{algorithm} \begin{algorithm}[t] \caption{Consumer's Algorithm}\label{consumer alg} \footnotesize{ \begin{algorithmic}[1] \State Submit ask AT \Comment{\textit{Advertisement}} \State Explore AD to find potential trading partners \State Divide producers in groups $\Omega_j^1, ..., \Omega_j^N$ using (\ref{prioritization buy}) \Comment{\textit{Prioritization}} \State Set $n\gets 1$ \State Send EN transactions to producers \Comment{\textit{Negotiation}} \While{$|e_j^{P,k+1}-e_j^{P,k}| \geq \epsilon$} \For {$i \in \Omega_j^n$} \State Receive $\gamma_{ij}$ from grid operator \State Receive $\lambda_{ij}^{k}$ from producer \State Update $\overline{\mu}_j^{k+1}$ and $\underline{\mu}_j^{k+1}$ using (\ref{buyer mu low update}) and (\ref{buyer mu up update}) \State Calculate 
$e_{ji}^{P,k+1}$ using (\ref{buyer power update}) \State Broadcast $e_{ji}^{P,k+1}$ to producers \EndFor \EndWhile \State Check if more energy is needed \State Set $n\gets n+1$ \State Repeat Negotiation with new producers \State Send LP to producers \Comment{\textit{Energy trading}} \State Sign EI \end{algorithmic} } \end{algorithm} \subsection{Energy trading} In this section, we discuss the energy trading process. Once agents agree on the trade conditions during the negotiation step, the consumer generates a \textit{Late Payment (LP)} transaction that is structured as $<T\_ID, Price, Input, Output, EN\_Ref, Expiry\_Time, \\ Sign>$, where \textit{Price} is the price to be paid to the energy producer. \textit{Input} is the address of an unspent transaction that has enough balance to pay the transaction price, and \textit{Output} is the address of the energy producer as in the last EN transaction. \textit{EN\_Ref} is the address of the EN transaction that is stored in the public blockchain. Conventional energy trading frameworks rely on a TTP to oversee the trade and ensure that both sides of the trade commit to their obligations, which in turn reduces the privacy of the users. To address this challenge, our framework relies on atomic transactions, where two transactions are considered valid if and only if both are generated within a specified time frame. Any incompatible transaction is considered invalid and thus is not stored in the blockchain \cite{dorri2019spb}. \textit{Expiry\_Time} represents the time period in which the second transaction corresponding to the current LP must be generated; otherwise, the LP is discarded. \textit{Sign} is the signature of the transaction generator, which must correspond to the PK used in the EN transaction. The consumer then broadcasts the LP transaction.\par The energy producer starts injecting energy into the grid when it receives the LP transaction.
Once the total amount of agreed energy is injected into the grid, the smart meter of the producer generates an \textit{Energy Injection} (EI) transaction, which is a multisig transaction structured as $ <Amount, LP\_ID, PK\_P, Sign\_P, PK\_C, Sign\_C>$, where \textit{Amount} is the amount of energy injected into the grid by the producer. \textit{LP\_ID} is the \textit{T\_ID} of the corresponding LP, which is used for verification of the trade as outlined in the next paragraph. EI requires two-of-two signatures to be considered a valid transaction: the energy producer's signature, populated in \textit{PK\_P} and \textit{Sign\_P}, and the energy consumer's signature, populated in \textit{PK\_C} and \textit{Sign\_C}.\par Once the EI is broadcast to the network, the participating nodes in the blockchain start verifying the energy trade. First, the participants must locate the associated LP and EN. Recall that the EI contains the \textit{LP\_ID} and the LP contains \textit{EN\_Ref}, which is the identifier of the EN transaction. The verifiers first match the signatures and PKs in the transactions. The next step is for the verifiers to validate that the amounts and prices agreed in the transactions match; if they differ, the verifiers discard the transactions. Once all the above steps are successfully validated, the EI and LP transactions are stored in the blockchain, which triggers the payment of the energy price to the energy producer. \par In case of detecting an inconsistency in the amount of injected energy in the EI, the verifiers call a \textit{Dispute Resolution} (DR) smart contract. The DR smart contract contains rules to manage the situation where the energy producer fails to transfer the promised amount of energy to the consumer; for example, the energy produced by the solar panel of an energy producer may be less than the estimated production, which potentially impacts the traded energy.
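The verification chain described above (EI referencing the LP via \textit{LP\_ID}, the LP referencing the EN via \textit{EN\_Ref}, and the matching of amounts and prices within the expiry window) can be sketched as follows; the dictionary-based ledger and field names are hypothetical simplifications, and signature checks are elided.

```python
def verify_trade(ei: dict, ledger: dict) -> bool:
    """Follow the EI -> LP -> EN references and check trade consistency."""
    lp = ledger.get(ei["lp_id"])              # locate the LP referenced by the EI
    if lp is None or ei["time"] > lp["expiry_time"]:
        return False                           # atomic pairing window expired
    en = ledger.get(lp["en_ref"])             # locate the final EN referenced by the LP
    if en is None:
        return False
    # the injected amount and the agreed price must match across the linked transactions
    return ei["amount"] == en["amount"] and lp["price"] == en["price"]
```

Any mismatch in the chain (a missing reference, a late EI, or inconsistent amount/price) causes the verifiers to reject the trade, which is what invokes the DR smart contract in the partial-injection case.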
Based on the amount of transferred energy identified in the EI, the DR calculates a new energy price and generates a \textit{Price Update (PU)} transaction requesting the energy consumer to generate a new LP with exactly the same conditions as the previous one while updating the price. PU is structured as $<LP\_ID, Old\_Price, New\_Price>$. The new LP is broadcast to the network and is stored in the blockchain with the corresponding EI. \par Recall that in our framework, we defined a reputation factor that impacts the decision making of the nodes. The reputation is assigned by the DR smart contract based on the commitments of an energy producer. In the case of the above example, the DR will reduce the reputation of the node and inform all participants. In this study, we consider only negative reputation, i.e., the reputation reduction applied when a node misbehaves in the network. \section{Case Studies}\label{sec: case study} \begin{figure} \input{casestudy} \caption{33-Bus test system}\label{fig:test system} \end{figure} \begin{table}[] \centering \caption{Parameter Setup}\label{tab:parameters} \begin{tabular}{cccc} \hline \multicolumn{4}{c}{\textbf{Key parameters}} \\ \hline Parameter & \multicolumn{1}{c|}{Value} & Parameter & Value \\ \hline $P$ & \multicolumn{1}{c|}{14} & $C$ & 18 \\ $\rho_\lambda$ & \multicolumn{1}{c|}{0.01} & $\rho_\mu$ & 0.001 \\ $\overline{\lambda}^G$& \multicolumn{1}{c|}{25 \cent/kWh} &$\underline{\lambda}^G$ & 5 \cent/kWh \\ $N$& \multicolumn{1}{c|}{2} & $\omega$ & 2 \cent/kWh/km \\ \hline \multicolumn{2}{c|}{\textbf{Producers' parameters}} & \multicolumn{2}{c}{\textbf{Consumers' parameters}} \\ \hline $a_i$ & \multicolumn{1}{c|}{(0.5-1] \cent/kWh$^2$} & $a_j$ & (0.5-10] \cent/kWh$^2$ \\ $b_i$& \multicolumn{1}{c|}{[5-10] \cent/kWh} & $b_j$ & [10-20] \cent/kWh \\ $\underline{e}_i$& \multicolumn{1}{c|}{[0-5] kWh} & $\underline{e}_j$ & [1-4] kWh\\ $\overline{e}_i$& \multicolumn{1}{c|}{[5-10] kWh} & $\overline{e}_j$ & [6-10] kWh \\ $\eta_i,\alpha_i,\beta_i$& \multicolumn{1}{c|}{[0-1]} & 
$\eta_j,\alpha_j,\beta_j$ & [0-1] \\ \hline \end{tabular} \end{table} In this section, simulation case studies are provided to verify the operation of the proposed framework. As shown in Fig. \ref{fig:test system}, the considered test system is the IEEE 33-bus distribution system with 16 producers and 16 consumers. Table \ref{tab:parameters} summarizes the key parameters and the ranges of values for producers' and consumers' parameters. Fig. \ref{fig: power and price res} illustrates the results of P2P trading in the test system. The traded energy and price in different transactions have various values based on the agents' preferences. Agents tend to trade energy with their closest neighboring agents to pay a lower grid service charge. For example, the consumer at bus 1 buys energy from the producer at bus 18. However, if the offer/ask from the agent at the nearest node is not available, or does not satisfy the requirements of an agent, it has to trade with other neighboring agents. For instance, while the nearest agents to agent 5 are the agents at buses 4 and 6, this agent trades energy with the producers at buses 26 and 27. Since agents 4 and 6 have preferred to trade with other neighboring nodes (the agents at buses 3 and 7, respectively), their offers are not available to agent 5. We can see that agents 16 and 17 do not buy any energy in the market. These agents have lower utility function parameters compared to their neighboring agents, which means that their willingness to pay for energy is less than that of agent 15, and hence, the producer at bus 14 prefers to trade with the agent at bus 15. \begin{figure} \centering \includegraphics[scale=0.51]{figures/powerandprice.pdf} \caption{Transactions between producers and consumers; a) traded energy, b) energy price.}\label{fig: power and price res} \end{figure} To investigate the impact of considering the grid service charge on the number of transactions ($n_T$), we implemented the energy trading algorithm for different values of $\omega$. The results are reported in Fig. 
\ref{fig:line flow}, where each marker indicates a transaction between a producer and a consumer, and the total number of transactions in each case is given in parentheses. The case with \(\omega=0\) means that there is no limit on the distance between agents, and they can trade with any number of agents. Therefore, the number of transactions in this case is significantly higher. Increasing the value of $\omega$ reduces the number of transactions and correspondingly reduces the welfare of agents, as they have to trade less energy and pay more for utilizing the grid. \begin{figure} \centering \includegraphics[scale=0.9]{figures/lineflow.pdf} \caption{Impact of considering grid service charge on the number of transactions.}\label{fig:line flow} \end{figure} The negotiation among agents is an iterative process, and the time required for agents to reach an agreement depends on several factors, including the number of agents, the number of iterations required for convergence in Algorithms \ref{producer lag} and \ref{consumer alg}, the computation time required to solve (\ref{sell update dec}) and (\ref{buyer update dec}) by agents in each iteration, and the communication time. The results of implementing the market settlement algorithm with and without the \textit{prioritization} step are compared in Table \ref{tab:prioritization}. The \textit{prioritization} step reduces the number of negotiating agents, and hence, reduces the number of communications in each iteration. On the other hand, agents need less time to solve (\ref{sell update dec}) and (\ref{buyer update dec}), as they have fewer decision variables after \textit{prioritization}. Therefore, applying \textit{prioritization} reduces the negotiation time among agents. \begin{table}[tb!] \centering \caption{Impact of Prioritization}\label{tab:prioritization} \setlength{\tabcolsep}{5pt} \begin{tabular}{lcc} \cline{2-3} \\[-0.7em] & \textbf{w prioritization} & \textbf{w/o prioritization} \\ \\[-0.7em] \hline No. 
of decision variables & \multirow{2}{*}{20, 6} & \multirow{2}{*}{38, 16} \\ \\[-1em] (producer, consumer) & & \\ \\[-1em] No. of communications & \multirow{2}{*}{63} & \multirow{2}{*}{252} \\ \\[-1em] (in each iteration) & & \\ \\[-1em] No. of iterations for convergence & 163 & 206 \\ \\[-1em] \hline Negotiation time (s) & 1.04 & 1.88 \\ \\[-1em] \hline \end{tabular} \end{table} \begin{table}[tb!] \centering \caption{Comparative results of P2P market} \label{tab:p2p results} \setlength{\tabcolsep}{5pt} \begin{tabular}{lcc} \cline{2-3} \\[-0.7em] & \textbf{P2P} & \textbf{No P2P} \\ \\[-0.7em] \hline Total imported energy from grid (kWh) [$\sum_j{e_j^G}$] & 22.31 & 119 \\ \\[-1em] Total exported energy to grid (kWh) [$\sum_i{e_i^G}$] & 8.46 & 105 \\ \\[-1em] Total welfare of consumers ($\cent$) [$\sum_j{W_j}$] & 62.73 & -4143.04 \\ \\[-1em] Total welfare of producers ($\cent$) [$\sum_i{W_i}$] & 242.64 & -302.03 \\ \\[-1em] Total paid grid service charge ($\cent$) [$\sum_j \sum_i {e_{ij}^p \gamma_{ij}}$] & 50.44 & 0 \\ \hline \end{tabular} \end{table} In order to demonstrate the efficacy of the P2P market, its results are compared with the case where producers and consumers only trade with the grid. Comparative results are reported in Table \ref{tab:p2p results}. As can be inferred from the results, the P2P market reduces the energy imported from and exported to the grid by agents. Also, since the P2P market price is more beneficial for agents (see (\ref{price lim})), they can reach a higher welfare in the P2P market, though they have to pay a grid service charge to the grid operator. As stated in Section \ref{sub:sec:advertisement}, in the proposed framework the ATs are stored off the chain in the AD. Here, we study the impact of using the AD by evaluating the blockchain size and the number of consensus rounds. 
The blockchain size shows the amount of storage space saved by employing the AD, while the number of consensus rounds evaluates the amount of computational overhead saved by running fewer consensus rounds. We employed the structure and configuration of the IEEE 33-bus distribution system to implement a distributed network using the Java programming language. We assumed that the size of each block is 10 transactions, and we disregarded the consensus algorithm as it does not impact the functionality of the proposed method. We ran the network for 10 epochs. During each epoch, each energy producer generates an AT. To benchmark the results, we assume a baseline method where all ATs are stored in the blockchain. To focus on the impact of the AD, we disregard the rest of the functions and assume ATs are the only transactions generated during each epoch. Based on the implementation results, the size of each advertisement transaction is 1776 B. After 10 epochs, the baseline blockchain includes 16 blocks with a cumulative size of 314 KB. Thus, each node must allocate 314 KB of storage space to store the blockchain. Our solution off-loads this overhead to a central trusted node that manages the AD; thus, there is no memory overhead on the participating nodes in the blockchain. Assume $\nu$ represents the mining overhead, which can include computational, packet, and memory overhead. Then, the proposed framework incurs no overhead on the miners during the advertisement process, while conventional methods incur 16$\nu$. We next evaluate the processing overhead associated with CoL. We proposed a CoL that enables the users to anonymously verify the location of the parties involved in energy trading. CoL enhances the anonymity level of users and thus protects user privacy. On the flip side, it increases the processing overhead on the users to generate and verify the CoL. To evaluate the incurred overhead, we implemented our framework using the Java programming language on a Raspberry Pi 2, which represents low-resource devices. 
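For intuition, the Merkle-tree commitment underlying CoL formation (committing a set of PKs in a Merkle Tree Root) and verification (proving membership of one PK) can be sketched as follows; the SHA-256 hashing and the duplicate-last-node padding are assumptions for illustration, not the exact construction used in the framework.

```python
import hashlib

def h(x: bytes) -> bytes:
    """SHA-256 digest (assumed hash; the framework's choice may differ)."""
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Hash the leaves pairwise level by level up to a single root (MTR)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from the leaf at `index` up to the root."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                          # sibling within the current pair
        proof.append((level[sib], sib < index))  # (hash, sibling-is-on-the-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_membership(leaf, proof, root):
    """Recompute the root from a leaf and its proof; membership iff it matches."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root
```

A smart meter can thus publish only the root, and later reveal one PK plus a logarithmic-size proof, which is the mechanism that keeps the remaining PKs hidden from the verifier.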
We measured the processing time for generating the CoL request, which involves generating a set of keys and forming the Merkle tree, and for verifying the CoL, which involves verifying the existence of the PK in the Merkle tree and validating the corresponding signatures. The implementation results are shown in Table \ref{tab:COL-performance}. The verification of the CoL involves verifying two signatures, which takes longer than generating the CoL. In addition to the processing overhead, the CoL increases the size of the transactions. Table \ref{tab:COL-packet} compares the size of transactions with and without the CoL. \par \begin{table}[tb!] \centering \setlength{\tabcolsep}{5pt} \caption{Analysis of CoL performance.}\label{tab:COL-performance} \begin{tabular}{ccc} \hline & CoL formation & CoL verification \\\hline Processing time (ms) & 663.2 & 1795 \\\hline \end{tabular} \end{table} \begin{table}[tb!] \centering \setlength{\tabcolsep}{5pt} \caption{Impact of CoL on transaction size.}\label{tab:COL-packet} \begin{tabular}{ccccc} \hline & AT & EN & LP & EI \\\hline Including CoL (B) & 2193 & 1928 & 1056 & 1912 \\\hline Excluding CoL (B) & 1041 & 1928 & 1056 & 1912 \\\hline \end{tabular} \end{table} \section{Security and Privacy Analysis}\label{sec:security} In this section, we analyze the security and privacy of the proposed framework. We first outline the threat model and then discuss possible attacks and how to protect against them.\par \textbf{\textit{Threat Model:}} We assume that the adversary (or cooperative adversaries) can sniff the communications, discard transactions, generate fake transactions, and pretend to be another node in the network. The adversary may attempt to deanonymize a user by classifying blockchain transactions and monitoring real-time communications in the blockchain. We assume standard secure encryption algorithms are in place, which cannot be compromised by the adversary. 
We assume smart meters are tamper resistant and thus the end users cannot modify the transactions generated by the meters. \par \subsection{Security} In the following, we discuss possible attacks and how the proposed framework protects against them. \par \textit{CoL Replay Attack:} In this attack, a malicious node attempts to employ the CoL of another node to generate transactions. The node that employs a CoL is required to sign the corresponding transaction with the private key corresponding to a PK that exists in the MTR, which is only known to the CoL generator. Thus, it is impossible for a malicious node to utilize the CoL of another node. \par \textit{Fake CoL:} In this attack, a malicious node pretends to be a genuine smart meter and generates a fake CoL that can later be used in its energy trades. The CoL must be signed by a genuine smart meter, and the CA validates the PK of the verifier. In case of this attack, the CA will not validate the PK and thus the attack can be detected. \par \textit{Double spending:} In this attack, a malicious energy producer attempts to sell the same amount of energy to different consumers. Recall from Section \ref{sec:energy trading} that an energy trade involves three main transactions: EN, LP, and EI. Once the agreed energy is injected into the grid, the smart meter of the energy producer generates an EI transaction that triggers the payment of the energy price to the producer. The smart meter generates only one EI, which includes a reference to the corresponding LP, and the LP includes a reference to the corresponding EN. Thus, it is impossible for the energy producer to sell the same energy to multiple nodes. \par An energy producer may attempt to inject less energy than the agreed amount and claim the full price. The smart meter of the producer will only generate the EI if the full amount of agreed energy is injected into the grid. 
If the energy producer injects only part of the energy and the expiry time approaches, the smart meter will generate an EI reflecting the amount that is injected into the grid. In this case, the DR smart contract is called, which adjusts the price of the energy and ensures the producer is only paid for the amount of energy injected into the grid. \par \textit{Reputation Modification:} In this attack, a malicious node attempts to improve its own reputation or reduce the reputation of another node in the network. Recall that the blockchain is an immutable ledger, which makes it impossible to modify or remove previously stored transactions, and thus impossible for the attacker to modify its reputation. To reduce the reputation of another node, the malicious node would have to modify the code of the smart contract, which is impossible due to the immutability of the blockchain. The DR smart contract is the only entity that can reduce the reputation of a node. All participants know the address of the valid DR contract. When participating nodes receive a reputation reduction from a contract, they first verify whether the contract address matches the genuine DR smart contract. If so, they accept the new reputation. Otherwise, they discard the transaction. \subsection{Privacy} In the following, we analyze the proposed framework from a privacy perspective. Recall from Section \ref{sec:energy trading} that the grid operator charges a grid service charge per transaction that depends on the distance between the energy consumer and producer. Thus, the consumer and producer need to prove their location; however, this may compromise their privacy, as malicious nodes can classify the blockchain transactions to deanonymize the user. To address this challenge, we proposed A-PoL, which enables the participants in the blockchain to verify the location of an anonymous smart meter using a CoL. Assume node \textit{A} is using A-PoL. 
The privacy of \textit{A} can be studied from the perspective of the following entities: i) CA: \textit{A} uses the PK populated by the CA only to prove its identity to the verifier. The CoL employed by \textit{A} includes the PK of the verifier, not that of \textit{A}. Given that the verifier is selected randomly by \textit{A} and there is no link between \textit{A} and the verifier, the CA is unable to identify the transactions generated by \textit{A}; ii) verifier: \textit{A} only sends the MTR to the verifier, which hides the actual PKs of \textit{A} from the verifier. \textit{A} reveals the PKs in the Merkle tree to prove ownership of the CoL. A group of smart meters may choose to create a single MTR, which further protects their privacy; and iii) network participants: the network participants only receive the CoL, which contains the PK of the verifier and the MTR. As outlined earlier, there is no link between the verifier and \textit{A}; thus, knowledge of the identity of the verifier does not impact the privacy of \textit{A}. The Merkle tree includes a number of PKs that are employed by \textit{A} (or other involved smart meters) to generate transactions; thus, \textit{A} may generate multiple transactions with the same PK. This potentially reduces the anonymity level of the user, as malicious nodes may attempt to deanonymize a user by classifying their transactions. The anonymity level of \textit{A} largely depends on the number of PKs employed in the Merkle tree. A large number of PKs incurs key-management overhead on \textit{A}. Thus, there is a trade-off between the number of keys in the Merkle tree and the user anonymity. \par Recall from Section \ref{sub:sec:market-settlement} that the energy producer and consumer employ a cost/utility function, as shown in (\ref{cost-func-producer}) and (\ref{cost-function-consumer}), which represents their willingness to pay or accept an energy price based on their preferences and concerns. 
These functions depend on $a_i$, $b_i$ and $c_i$, and thus it is critical for the producers and consumers to keep these values private. In the proposed framework, the market settlement does not require the nodes to reveal $a_i$, $b_i$ and $c_i$, which in turn enhances the privacy of the users. \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, a blockchain-enabled framework for P2P energy trading is proposed. As future work, the assumption that the smart meters are secure can be relaxed. Moreover, we assumed that the smart meter location is fixed and known by the CA; studying how the mobility of the meter impacts this assumption is another direction for future work. \bibliographystyle{IEEEtran} \section{Introduction} In response to climate change concerns, the decarbonization and digitization of the electricity sector have been accelerated in recent years. The path toward decarbonization is associated with the high penetration of Distributed Energy Resources (DERs), such as rooftop solar, home batteries, and Electric Vehicles (EVs), with the potential to support a reliable, affordable and lower-emissions electricity grid. Progressive deployment of DERs raises several opportunities and challenges in electricity systems \citep{openenergy}. Technical challenges include the increase in bidirectional power flows, the rise in voltage levels, and a lack of visibility \citep{coster2010integration}. On the other hand, efficient DER integration can provide substantial benefits for energy customers \citep{aemo}. DER owners can benefit from Feed-In-Tariff (FiT) programs by selling energy back to the utility grid for a fixed price \citep{ye2017analysis}. Alternatively, they can be coordinated and orchestrated for participation in different markets. Given this context, Transactive Energy (TE) is emerging as a new approach for orchestrating and coordinating the operation of DERs. 
TE is a market-based approach for energy management, which uses price signals to coordinate demand and supply across the network and among all users and entities \citep{council2015gridwise}. The TE approach facilitates the integration of DERs in the grid while maintaining the system reliability \citep{liu2017transactive}. It provides a transformative solution to the technological and socioeconomic challenges of power grid modernization \citep{li2019blockchain}. Built upon the context of TE, a novel technique for the integration of small-scale producers and consumers into energy markets is Peer-to-Peer (P2P) trading, which allows bilateral energy transactions between users \citep{tushar2020peer}. P2P trading provides significant benefits for both end users and grid operators, such as increasing welfare by preference satisfaction \citep{morstyn2018using}, lowering operational costs, and improving system reliability \citep{mengelkamp2018designing}. P2P trading offers the flexibility required to coordinate agents with diverse preferences. Recent advances in the Internet of Things (IoT) and digital technologies have paved the path toward grid digitization. Grid digitization provides a two-way communication network that allows DER owners and energy consumers to act as proactive agents in the energy markets and facilitates P2P market implementation. Through direct interaction of agents in a decentralized platform, small-scale producers are allowed to compete with large traditional suppliers, and consumers have the freedom to select their energy suppliers based on their preferences. In P2P trading, it is expected that market participants can settle bilateral transactions with the least influence from a central authority \citep{giotitsas2015peer}. Hence, designing a decentralized platform for managing market participants with diverse preferences is a challenging task. 
Blockchain technology offers new opportunities for decentralized market designs due to its salient features, which include decentralization, trust, anonymity, and auditability. Blockchain enables energy producers and consumers to directly negotiate and trade energy without reliance on a Trusted Third Party (TTP). It provides a platform to store and share data in a secure and verifiable manner, even when the identity and trustworthiness of market participants are unknown \citep{van2020integrated}. The participating nodes in the blockchain, which include energy consumers, producers, prosumers, and grid operators, jointly form an overlay network where they can exchange information \citep{wu2019comprehensive}. Given the properties of blockchain, and the need for a truly decentralized platform for P2P trading, designing blockchain-enabled frameworks for P2P trading has gained momentum in recent years. The blockchain framework for P2P trading should incorporate an appropriate market settlement approach to match trading peers and to settle the bilateral transactions among them. Compared to traditional markets, the P2P market offers more flexibility and trading options to the agents, and hence, it needs a pricing mechanism that incentivizes both producers and consumers to participate in the market. There are several approaches that can be applied to P2P market clearing and peer matching, such as auction-based mechanisms, game theory, and optimization-based methods \citep{li2018location, paudel2018peer, khorasany2019decentralised}. In auction-based mechanisms, agents express their interest in energy trading by submitting their offers, and the energy allocation and price are determined based on the market clearing rules \citep{li2018location}. Game theory approaches aim to provide a stable solution that is beneficial for all parties \citep{paudel2018peer}. 
In the optimization-based methods, the market settlement is formulated as an optimization problem, which can be decomposed into local subproblems solvable by each agent \citep{khorasany2019decentralised}. The optimization-based methods can be implemented in a decentralized manner without any need for third-party intervention, which allows agents to optimize their welfare by participating in the P2P market. Hence, these methods are well-suited for blockchain-enabled P2P markets. However, the computation and communication overheads of these methods are of concern, as they require agents to iteratively negotiate to reach agreements on their actions. Therefore, reducing these overheads is a key requirement. This paper designs a blockchain-enabled P2P market that provides a secure and transparent environment for the energy trading of producer and consumer agents. In the proposed approach, agents \textit{advertise} their surplus/deficit energy during each market interval. We use an Advertisement Database (AD) that is centrally managed by the grid operator to avoid storing advertisements in the public blockchain, which, in turn, reduces the blockchain memory footprint. A decentralized optimization algorithm is employed for the \textit{negotiation}, which allows agents to iteratively optimize their welfare in the negotiation process. In order to reduce the computation and communication overheads, a \textit{prioritization} step is considered in the market settlement process that enables agents to prioritize their trading partners based on their proximity and reputation factor. Network constraints should be incorporated in P2P trading to ensure that energy transactions respect electric grid constraints. Instead of enforcing network constraints directly in the proposed framework, we define a grid service charge for each transaction. 
To incentivize agents to trade energy with their neighboring agents and to reduce network loading, this charge is calculated based on the electrical distance between producer and consumer. To this end, we propose an Anonymous Proof of Location (A-PoL) algorithm that enables agents to prove their location without revealing their real identity. Once the energy consumer and producer agree on the conditions of the trade, they start trading the energy. To reduce the reliance on a TTP yet ensure the security of the trade, we adopt the concept of atomic meta-transactions~\citep{dorri2019spb}, where two transactions are considered valid only if they are generated within a particular period of time. The contributions of this paper are summarized as follows: \begin{itemize} \item [-] a decentralized P2P trading framework that does not require access to private information of agents in any stage of the market settlement; \item[-] a novel \textit{prioritization} step that allows agents to select their trading partners based on their location and reputation in order to reduce the system overheads; \item[-] a new A-PoL algorithm, which uses a Certificate of Location (CoL) issued by smart meters to certify the location of agents without revealing their real identity. \end{itemize} The rest of this paper is organized as follows. Section \ref{sec:related} reviews related work. Section \ref{sec: market structure} outlines the structure of the market, including agent modeling, the market objective, and the decentralized optimization of the market objective. Section \ref{sec:energy trading} explains the market settlement process and its different steps. Case studies and numerical results are reported in Section \ref{sec: case study}. A detailed analysis of the security and privacy of the proposed framework is presented in Section \ref{sec:security}. Finally, concluding remarks are given in Section \ref{sec:conclusion}.
\section{Prior Art}\label{sec:related} In recent years, blockchain applications in energy systems, such as demand response, microgrid energy management, EV integration, and energy trading, have attracted tremendous attention due to blockchain's salient features, including decentralization, transparency, trust, and immutability \citep{andoni2019blockchain,musleh2019blockchain}. In \citep{van2020cooperative}, a decentralized framework for cooperative demand response is presented, in which smart contracts are employed to allow participants to collaboratively decide on their planning profiles. Su \textit{et al.} \citep{su2018secure} employed blockchain to implement secure charging services for EVs through the execution of smart contracts. In \citep{noor2018energy}, blockchain technology is utilized to implement a demand-side energy management method for the efficient operation of microgrids. Hassan \textit{et al.} \citep{hassan2019deal} developed a blockchain-based approach for microgrid energy auctions, in which, to reduce computational complexity, a consortium blockchain is used that authorizes only selected nodes to add new blocks to the blockchain. The application of blockchain for P2P energy trading has been investigated in several studies. The Brooklyn microgrid is a prototype of a blockchain-enabled P2P market, in which a blockchain framework is employed to implement a decentralized microgrid energy market \citep{mengelkamp2018designing}. A unified blockchain framework for various scenarios of energy trading in an industrial IoT is proposed in \citep{li2017consortium}, in which a credit-based payment scheme is employed to overcome the transaction limitation caused by transaction confirmation delays. An operation framework for P2P energy trading at the distribution level is presented in \citep{wang2019energy}, where the system operator clears the market using a crowdsourced energy system model. Dang \textit{et
al.} \citep{dang2019demand} proposed a blockchain-based P2P market for optimal load management of big industrial users, in which users can organize their own P2P market to save on their electricity costs. In \citep{luo2018distributed}, a two-layer system for distributed electricity trading among prosumers is proposed, in which, in the first layer, prosumers form coalitions and negotiate energy trading, and in the second layer, blockchain is employed for the settlement of the transactions formed in the first layer. A methodology for the co-simulation of power distribution networks and local P2P energy trading platforms is presented in \citep{hayes2020co}, in which a blockchain-based distributed double-auction trade mechanism is employed to model the trading platform. The existing blockchain-based frameworks for energy trading suffer from: \textit{(i) negotiation overheads:} the decentralized optimization methods rely on iterative negotiation between the involved agents. In a market with a large number of participants, this iterative process increases the communication and computation overheads, and consequently the negotiation time. \textit{(ii) Reliance on a TTP:} most of the existing studies rely on a TTP to oversee the trade and ensure that both sides of the trade commit to their obligations. However, this potentially leads to centralization and privacy concerns, as the TTP can build virtual profiles of the users. \textit{(iii) Blockchain overheads:} in conventional blockchain-based methods in smart grids, all communications are stored in the blockchain, which in turn increases the blockchain memory footprint and thus reduces scalability. In this regard, this paper proposes a blockchain-enabled P2P market that alleviates the above-mentioned limitations. \section{The Market Structure}\label{sec: market structure} \begin{figure}[tb!]
\centering \includegraphics[scale=0.8]{figures/marketlayer.pdf} \caption{An overview of the proposed blockchain-enabled P2P energy trading.} \label{fig:layers} \end{figure} In this section, we outline the details of the market structure. As shown in Fig. \ref{fig:layers}, the proposed architecture consists of two layers: i) the physical layer, which is the underlying physical network to transfer electricity from producers to consumers. The minimum requirement for a successful transaction between a producer and a consumer is the existence of a path to transfer power between them; and ii) the information layer, where the participating nodes, which include energy producers, consumers, and the grid operator, connect through a public blockchain to share information. The information layer provides a secure platform for the participating nodes to advertise their energy, negotiate on their offers, and decide on their actions in the market. The market mechanism implemented in this layer enables the agents to trade energy and settle energy transactions. \par \subsection{Agents modeling}\label{sub:sec:agent-modeling} We consider an energy network with a set of agents consisting of a set of producer agents with index \(i \in \mathcal{P}=\{1, ..., P\}\) and a set of consumer agents with index \( j \in \mathcal{C}=\{1, ..., C\}\) connected to a physical layer managed by a grid operator. Producers have energy-producing capability and can use their own generated energy for their demand. In case there is more generation than demand, producers can sell surplus energy to consumers or to the grid. Consumers, on the other side, can buy energy from producers or the grid. Producers and consumers can negotiate for energy trading in a forward market for any time interval \(t \in \mathcal{T} =\{1, ..., T\}\) with equal duration (e.g., one hour). Agents are equipped with smart meters, which determine the energy surplus/deficit for trade in each time slot.
The smart meters are connected to the information layer and thus can exchange information on the blockchain. It is assumed that the smart meters are tamper-resistant. We assume that each agent is equipped with an energy management system that can quantitatively estimate the energy surplus or deficit that needs to be traded in each market interval. Each agent can trade with the grid or with other agents in the network. The total energy surplus and deficit of producers and consumers are represented as \begin{equation}\label{tot energ pro} e_i=e_i^G +\sum_{j \in \mathcal{C}}{e_{ij}^P} \end{equation} \begin{equation}\label{tot energ cons} e_j= e_j^G+\sum_{i \in \mathcal{P}}{e_{ji}^P} \end{equation} where \(e_i^G\) and \(e_j^G\) are the traded energy between producer $i$/consumer $j$ and the grid, respectively, \(e_{ij}^P\) is the energy sold by producer $i$ to consumer $j$, and \(e_{ji}^P\) is the energy purchased by consumer $j$ from producer $i$. Each agent in the P2P market aims to maximize its welfare. The welfare incorporates the utility/cost of energy consumption/generation, the cost/revenue of trade with other agents or the grid, and the cost of using the grid for P2P trading.
The welfare functions of producers and consumers can be modeled by (\ref{pro welf}) and (\ref{con welf}), respectively \begin{equation}\label{pro welf} W_i(e_i, \lambda_{ij},\gamma_{ij})= \underline{\lambda}^G e_i^G +\sum_{j \in \mathcal{C}}{e_{ij}^P (\lambda_{ij}-\gamma_{ij})}-C_i(e_i) \end{equation} \begin{equation}\label{con welf} W_j(e_j, \lambda_{ij},\gamma_{ij})= U_j(e_j)- \overline{\lambda}^G e_j^G -\sum_{i \in \mathcal{P}}{e_{ji}^P (\lambda_{ij}+\gamma_{ij})} \end{equation} where \(\underline{\lambda}^G\) denotes the Feed-in-Tariff (FiT), i.e., the price for selling energy to the grid; \(\lambda_{ij}\) is the energy price in the transaction between producer $i$ and consumer $j$; \(\gamma_{ij}\) is the grid service charge for using the grid infrastructure for this trade; and \(\overline{\lambda}^G\) denotes the price for buying energy from the grid, which is usually fixed over time (e.g., a time-of-use tariff). The grid selling and buying prices bound the energy price in the P2P market, i.e., for any bilateral trade \begin{equation}\label{price lim} \underline{\lambda}^G \leq \lambda_{ij} \leq \overline{\lambda}^G. \end{equation} The cost function of the producer represents the cost of generating energy \(e_i\) and can be modeled as \citep{grainger2003power} \begin{equation}\label{cost-func-producer} C_i(e_i)=a_i e_i^2 +b_i e_i +c_i \end{equation} where \(a_i, b_i\) and $c_i$ are positive constants, which can be adjusted by the producer to reflect the generation cost. Since producers usually have renewable generation resources with zero marginal cost, the cost function can represent the cost associated with battery degradation, if the producer needs to discharge its battery to sell the energy.
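As a concrete illustration, the producer-side quantities in (\ref{tot energ pro}), (\ref{pro welf}), and (\ref{cost-func-producer}) can be evaluated as in the following sketch; all coefficients and prices (and the function names) are illustrative assumptions, not values used in the paper.

```python
# Sketch of the producer-side quantities defined above: the total surplus
# e_i, the quadratic generation cost C_i, and the producer welfare W_i.
# All coefficients and prices below are illustrative assumptions.

def producer_cost(e_i, a_i, b_i, c_i):
    """C_i(e_i) = a_i*e_i^2 + b_i*e_i + c_i."""
    return a_i * e_i**2 + b_i * e_i + c_i

def producer_welfare(e_grid, p2p_sales, fit, a_i, b_i, c_i):
    """W_i = FiT revenue + P2P revenues net of grid service charges - C_i(e_i).

    p2p_sales: list of (e_ij, lambda_ij, gamma_ij) tuples, one per consumer j.
    """
    e_i = e_grid + sum(e for e, _, _ in p2p_sales)  # total surplus, as above
    revenue = fit * e_grid + sum(e * (lam - gam) for e, lam, gam in p2p_sales)
    return revenue - producer_cost(e_i, a_i, b_i, c_i)

# Example: 1 kWh sold to the grid at the FiT and 2 kWh sold P2P at 0.20,
# with a 0.02 grid service charge per unit of energy.
w = producer_welfare(e_grid=1.0, p2p_sales=[(2.0, 0.20, 0.02)],
                     fit=0.08, a_i=0.01, b_i=0.05, c_i=0.0)
```

The consumer-side welfare is analogous, with the utility function below in place of the cost function.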
On the other side, the utility function of a consumer represents its satisfaction level from consuming energy $e_j$ and can be modeled as \citep{samadi2010optimal} \begin{equation}\label{cost-function-consumer} U_j(e_j)= \begin{cases} -a_j e_j^2 +b_j e_j \;\;\;\; :0 \leq e_j \leq \frac{b_j}{2 a_j}\\ \frac{b_j^2}{4 a_j} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\ :e_j \geq \frac{b_j}{2 a_j} \end{cases} \end{equation} where \(a_j\) and \(b_j\) are unique positive constants for each consumer. These parameters reflect the consumer's valuation of energy and determine the price that the consumer is willing to pay for it. A fundamental challenge in implementing P2P markets is how to deal with network constraints without having a central dispatch mechanism. In this work, instead of enforcing network constraints directly, we use a grid service charge for each trade \citep{baroche2019exogenous}. This fee represents the price that agents need to pay for using the grid infrastructure for each trade. To incentivize agents to trade with their closest electrical partners and to reduce network loading, this charge is calculated based on the electrical distance between producer and consumer as in \begin{equation}\label{service charge} \gamma_{ij}=\omega d_{ij} \end{equation} where \(\omega\) is the grid service charge per unit of electrical distance for each unit of energy, and \(d_{ij}\) is the electrical distance between producer $i$ and consumer $j$. This distance can be calculated based on the power transfer distance, which aggregates the absolute values of the Power Transfer Distribution Factors (PTDFs) induced by a trade as in \begin{equation} \label{eq:ptdf} d_{ij}=\sum_{l \in \mathcal{L}}{|\phi^l_{ij}|}. \end{equation} For any trade, \(\phi^l_{ij}\) indicates the fraction of the transacted power from producer $i$ to consumer $j$ that flows over line $l$, and can be calculated using the method presented in \citep{wood2013power}.
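The grid service charge computation in (\ref{service charge}) and (\ref{eq:ptdf}) can be sketched as follows; the PTDF fractions and the value of $\omega$ are illustrative placeholders rather than quantities derived from a real network model.

```python
# Sketch of the grid service charge: gamma_ij = omega * d_ij, where d_ij
# aggregates the absolute PTDF fractions |phi^l_ij| over the lines carrying
# the trade. The PTDF values and omega below are illustrative placeholders,
# not derived from a real network model.

def electrical_distance(ptdf_fractions):
    """Power transfer distance d_ij = sum over lines of |phi^l_ij|."""
    return sum(abs(phi) for phi in ptdf_fractions)

def grid_service_charge(omega, ptdf_fractions):
    """gamma_ij, the charge per unit of traded energy."""
    return omega * electrical_distance(ptdf_fractions)

# Example: a trade whose power flows over three lines.
gamma = grid_service_charge(omega=0.05, ptdf_fractions=[0.6, -0.3, 0.1])
```

Because the distance grows with the number and loading of lines a trade traverses, nearby producer-consumer pairs pay a smaller charge, which is exactly the locational incentive described above.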
\subsection{Market objective} \label{sub:sec:market-settlement} The market objective for P2P trading is formulated as social welfare maximization, which maximizes the total welfare of the players in the market subject to the constraints, and can be mathematically modeled as: \begin{equation}\label{tot objective} \begin{aligned} & \underset{\textbf{\textit{$e_i$,$e_j$}}}{\text{max}} \sum_{i \in \mathcal{P}}{W_i} + \sum_{j \in \mathcal{C}}{W_j}\\ & \text{s.t. constraints.} \end{aligned} \end{equation} As stated in (\ref{price lim}), the prices in the P2P market should always be beneficial for both producers and consumers. Hence, it is reasonable to assume that all agents try to maximize their traded energy in the P2P market and to minimize trading with the grid by setting $e_i^G=e_j^G=0$. Therefore, (\ref{tot objective}) can be rewritten as: \begin{subequations}\label{social welfare} \begin{equation} \begin{aligned} & \underset{\text{\textbf{{$e_{ij}^P$,$e_{ji}^P$}}}}{\text{max}} \sum_{j \in \mathcal{C}}{U_j\Big(\sum_{i \in \mathcal{P}}{e_{ji}^P}\Big)} - \sum_{i \in \mathcal{P}}{C_i\Big(\sum_{j \in \mathcal{C}}{e_{ij}^P}\Big)} - \sum_{j \in \mathcal{C}}{\sum_{i \in \mathcal{P}}}{(e_{ij}^P+e_{ji}^P)\gamma_{ij}}\\ \end{aligned} \end{equation} \begin{equation}\label{producer flex} \text{s.t.} \;\;\; \underline{e_i} \leq \sum_{j \in \mathcal{C}}{e_{ij}^P} \leq \overline{e_i} \;\;\;\;\;\;\;\;\;\; :\underline{\mu_i}, \overline{\mu_i} \end{equation} \begin{equation}\label{consumer flex} \underline{e_j} \leq \sum_{i \in \mathcal{P}}{e_{ji}^P} \leq \overline{e_j} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :\underline{\mu_j}, \overline{\mu_j} \end{equation} \begin{equation}\label{demand supply} e_{ij}^P=e_{ji}^P \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :\lambda_{ij} \end{equation} \end{subequations} where (\ref{producer flex}) and (\ref{consumer flex}) represent the flexibility constraints of producers and consumers, respectively. The constraint (\ref{demand supply}) represents the power balance in the transaction between producer $i$ and consumer $j$.
\(\underline{\mu_i}, \overline{\mu_i}, \underline{\mu_j}, \overline{\mu_j}, \lambda_{ij}\) are the dual variables associated with these constraints. \subsection{Decentralized optimization} \label{sub:sec:coordination} In this paper, our aim is to solve (\ref{social welfare}) with only P2P communications, so as to ensure the data privacy of the agents. To do so, a decentralized optimization approach is employed to formulate the market settlement in the P2P market \citep{khorasany2019decentralised}. In this approach, dual variables are used to decouple the constraints, and the problem is then decomposed into local subproblems solvable by producers and consumers. Each local subproblem is solved by deploying the sub-gradient projection method \citep{boyd2011distributed}. Each agent contributes to solving the global problem by updating its local decision variables. The sets of decision variables for producers and consumers are \{\(\lambda_{ij}, e_{ij}^P, \underline{\mu_i}, \overline{\mu_i}\}\), and \{\( e_{ji}^P, \underline{\mu_j}, \overline{\mu_j}\)\}, respectively. The market settlement approach is an iterative process, in which agents update their decision variables iteratively and exchange information without revealing their private information.
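To make the iterative exchange concrete, the following is a schematic sketch of the sub-gradient projection loop for a single producer-consumer pair, with the flexibility constraints (and hence the multipliers $\mu$) omitted for brevity; the function names, coefficients, step sizes, and stopping tolerance are all illustrative assumptions rather than the paper's exact algorithm.

```python
# Schematic sub-gradient projection loop for one producer-consumer pair.
# Flexibility constraints and their multipliers are omitted for brevity;
# coefficients, step sizes, and tolerance are illustrative assumptions.

def project(x):
    """[.]^+ : projection onto the non-negative orthant."""
    return max(x, 0.0)

def settle(a_i, b_i, a_j, b_j, gamma,
           rho=0.002, zeta=0.5, tol=1e-6, max_iter=10000):
    lam, e_ij, e_ji = 0.1, 0.0, 0.0        # initial price and energy guesses
    for _ in range(max_iter):
        # Optimal set points from the first-order conditions of the
        # quadratic cost and utility functions at the current price.
        e_ij_star = project((lam - gamma - b_i) / (2 * a_i))   # producer
        e_ji_star = project((b_j - lam - gamma) / (2 * a_j))   # consumer
        # Damped moves toward the set points.
        e_ij = project(e_ij + zeta * (e_ij_star - e_ij))
        e_ji = project(e_ji + zeta * (e_ji_star - e_ji))
        # Producer moves the price against the supply-demand mismatch.
        lam_new = project(lam - rho * (e_ij - e_ji))
        if abs(lam_new - lam) < tol:       # price has converged
            return lam_new, e_ij, e_ji
        lam = lam_new
    return lam, e_ij, e_ji

# Example: the pair converges to the price at which supply meets demand.
lam, e_ij, e_ji = settle(a_i=0.01, b_i=0.05, a_j=0.01, b_j=0.30, gamma=0.02)
```

With the illustrative coefficients above, the loop settles at the price where the producer's and consumer's set points coincide, mirroring how the price signal alone coordinates the two agents without either revealing its cost or utility parameters.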
The updates of the decision variables of the agents are based on the Karush–Kuhn–Tucker (KKT) optimality conditions of the local problems, and can be derived using the first-order derivative of the relaxed problem as follows:\\ \(\forall i \in \mathcal{P}\) \begin{subequations}\label{sell update dec} \begin{equation}\label{sell price update} \lambda_{ij}^{k+1}=\left[\lambda_{ij}^{k}-\rho_{\lambda}^k(e_{ij}^{P,k}-e_{ji}^{P,k})\right]^{+} \end{equation} \begin{equation}\label{sell mu low update} \underline{\mu_i}^{k+1}=\left[\underline{\mu_i}^{k}+\rho_{\mu}^k(\underline{e_i}-e_i^k)\right]^{+} \end{equation} \begin{equation}\label{sell mu up update} \overline{\mu_i}^{k+1}=\left[\overline{\mu_i}^{k}+\rho_{\mu}^k(e_i^k-\overline{e_i})\right]^{+} \end{equation} \begin{equation}\label{sell energy update} e_{ij}^{{P,k+1}}= \left[e_{ij}^{{P,k}}+\zeta_{ij}^k(\tilde{e}_{ij}^{P,k+1}-e_i^k)\right]^{+}\\ \end{equation} \begin{equation}\label{set point sell update} \tilde{e}_{ij}^{P,k+1}=\frac{\lambda_{ij}^{k+1}-\gamma_{ij}-\overline{\mu_i}^{k+1}+\underline{\mu_i}^{k+1}-b_i}{2 a_i} \end{equation} \end{subequations} \(\forall j \in \mathcal{C}\) \begin{subequations}\label{buyer update dec} \begin{equation}\label{buyer mu low update} \underline{\mu_j}^{k+1}=\left[\underline{\mu_j}^{k}+\rho_{\mu}^k(\underline{e_j}-e_j^k)\right]^{+} \end{equation} \begin{equation}\label{buyer mu up update} \overline{\mu_j}^{k+1}=\left[\overline{\mu_j}^{k}+\rho_{\mu}^k(e_j^k-\overline{e_j})\right]^{+} \end{equation} \begin{equation}\label{buyer power update} e_{ji}^{P,k+1}= \left[e_{ji}^{P,k}+\zeta_{ji}^k(\tilde{e}_{ji}^{P,k+1}-e_j^k)\right]^{+} \end{equation} \begin{equation}\label{buyer set point update} \tilde{e}_{ji}^{P,k+1}=\frac{b_j-\lambda_{ij}^{k+1}-\gamma_{ij}-\overline{\mu_j}^{k+1}+\underline{\mu_j}^{k+1}}{2 a_j} \end{equation} \end{subequations} where $k$ is the iteration index, and \(\tilde{e}_{ij}^{P}, \tilde{e}_{ji}^{P}\) are the optimal power set points of the producer and consumer at the price
\(\lambda_{ij}\). \(\zeta_{ij}, \zeta_{ji}\) are asymptotically proportional factors, \(\rho\) is a small tuning parameter, and \([.]^+\) denotes $\max\{.,0\}$. The information exchange between producers and consumers during the decentralized optimization process is explained in Section \ref{sec:neg}. \begin{figure}[tb!] \centering \input{marketsettlement} \caption{Market settlement algorithm.}\label{fig:full algorithm} \end{figure} \section{Market Settlement Process}\label{sec:energy trading} In this section, we outline the details of the market settlement process for P2P trading. The proposed framework consists of four main phases, namely: \textit{(i) Advertisement}, which enables agents to advertise the energy that they want to trade; \textit{(ii) Prioritization}, which allows agents to prioritize their trading partners based on their preferences; \textit{(iii) Negotiation}, in which agents negotiate on the energy quantity and price of each transaction; and \textit{(iv) Energy trading}, in which the energy transfer and payment take place. These steps are summarized in Fig. \ref{fig:full algorithm} and discussed in detail in the rest of this section. \subsection{Anonymous proof of location} As shown in (\ref{service charge}), in the proposed framework the grid service charge is calculated based on the locations of the participants, denoted by $\sigma$, which requires the involved parties to reveal their location. However, this potentially enables malicious nodes to track the activities of a particular user, which in turn compromises user privacy. Additionally, the distributed and anonymous nature of blockchain makes it challenging for users to verify the location claimed by another node. To address these challenges, we propose an A-PoL algorithm that enables users to prove their location while protecting their real identity, which in turn enhances the level of anonymity offered to the users.
The proposed A-PoL algorithm involves a CoL that is issued by a smart meter in the network, as shown in Fig. \ref{fig:CoL}. The energy companies maintain a local record of the accurate locations of the installed smart meters. During the installation process, the energy company deploys a key pair in each smart meter (step 1 in Fig. \ref{fig:CoL}) and records the $<PK,location>$ tuple in its local storage, where PK denotes the meter's public key. The company serves as a certificate authority (CA) for the PKs deployed in smart meters. Although the CA is a trusted authority, relying on the PK provided by the CA may compromise the privacy of the users, as the company can build a virtual profile of the user and their energy trading (by observing the proof-of-location transactions). To address this challenge, we propose the CoL. \par A CoL is a certificate received from a verifier, which is an independent smart meter in the network. Assume smart meter \textit{A} is going to establish a CoL. Once deployed on site, \textit{A} explores the CA to find potential smart meters that can function as the verifier and selects one randomly. Assume smart meter \textit{B} is selected by \textit{A} to function as the verifier. Recall that we assume the smart meters are tamper-resistant, and thus \textit{A} can send its request to any meter listed in the CA. \textit{A} sends a CoL request transaction to \textit{B} that is structured as $<T\_ID, MTR, \sigma, PK, Sign>$, where \textit{T\_ID} is the unique identifier of the transaction, which essentially is the hash of the transaction content. \par \textit{A} populates the root hash of a Merkle tree, constructed by recursively hashing a number of PKs, in the \textit{MTR} field. The PKs in the leaves of the tree are later employed by \textit{A} to prove ownership of the CoL, which is discussed in greater detail later in this section. The number of PKs may vary based on the application.
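Before describing the remaining fields, the following is a minimal sketch of how the MTR could be built and how an inclusion proof for one PK could later be checked; SHA-256 and a binary tree are illustrative assumptions, as the paper fixes neither the hash function nor the tree shape.

```python
# Minimal sketch of the MTR machinery: a Merkle root over a set of PKs and
# an inclusion proof for one leaf. SHA-256 and a binary tree are
# illustrative assumptions.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Recursively hash pairs of nodes up to a single root, duplicating the
    last node whenever a level has odd length."""
    level = [h(pk) for pk in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level, proof = [h(pk) for pk in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

pks = [b"PK0", b"PK1", b"PK2", b"PK3"]   # illustrative PKs in the leaves
root = merkle_root(pks)                  # plays the role of the MTR field
proof = inclusion_proof(pks, 2)          # sibling hashes revealed for PK2
```

Revealing one leaf PK together with its sibling hashes proves membership in the committed root without disclosing the other PKs, which is what lets the certificate owner prove ownership without linking all of its keys.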
$\sigma$ is the noisy location of the smart meter, i.e., the location at a lower resolution, e.g., the street in which the meter is installed. This protects the privacy of the smart meter against deanonymization, as studied later in Section \ref{sec:security}. \textit{PK} is the PK of \textit{A} allocated by the CA, and \textit{Sign} is the corresponding signature, which proves that \textit{A} owns the private key corresponding to the PK. \par \begin{figure}[tb!] \centering \input{col} \caption{An overview of the proposed CoL.} \label{fig:CoL} \end{figure} When the verifier, i.e., \textit{B}, receives the CoL request, it verifies that the transaction was generated by a genuine smart meter, which is done by querying the CA (step 3). To protect the privacy of the user, the CA does not reveal the actual location of the smart meter to \textit{B}; instead, it only confirms whether the PK belongs to a registered and genuine smart meter. Once verified, the verifier creates a CoL, that is, \textit{sign(hash(MTR, $\sigma$))}, and replies to \textit{A} by sending a reply transaction structured as $<CoL, PK, Sign>$, where CoL is as outlined above, \textit{PK} is the PK of the verifier, i.e., \textit{B}, registered by the CA, and \textit{Sign} is the corresponding signature of the verifier. \par \textit{A} employs the received CoL to anonymously prove its location to the nodes in the overlay. To do so, \textit{A} attaches $CoL_f^{A} = (CoL_A, PK_{ver}, Sig_{ver}, MTR_A, \sigma_A, PK_A, MTL_A, Sign_A)$ to its transactions, where the first three fields are extracted from the response of the verifier \textit{B}. $MTR_A$ and $\sigma_A$ are extracted from the CoL request sent to the verifier. \textit{$PK_A$} is a PK that is part of the leaves of the Merkle tree. $MTL_A$ contains the leaves of the Merkle tree that are necessary to prove the existence of $PK_A$ in the MTR, and $Sign_A$ is the signature corresponding to $PK_A$.
The last three fields ensure that only the owner of the certificate, who knows the PKs in the Merkle tree and the corresponding private key, can use the certificate. \par To verify the location of \textit{A}, a participating node in the blockchain, say \textit{C}, must first verify that $PK_A$ belongs to $MTR_A$ using $MTL_A$. Next, \textit{C} verifies that $Sign_A$ matches $PK_A$. The third step is to verify that $hash(MTR_A, \sigma_A) = CoL_A$. The final step is for \textit{C} to verify $PK_{ver}$ using the CA. This ensures that a genuine smart meter has signed the CoL. If all the above steps pass successfully, the location of \textit{A} is verified. \par Having discussed the details of the proposed A-PoL algorithm, we study the process of P2P energy trading in the rest of this section. The algorithms implemented by producer and consumer agents are presented in Algorithms \ref{producer lag} and \ref{consumer alg}, respectively, and are discussed in greater detail in the rest of this section. \subsection{Advertisement}\label{sub:sec:advertisement} The main aim of this phase is for the agents to advertise their energy to potential consumers/producers. In each market interval, agents participate in the forward market by submitting their offers and asks in the form of an \textit{advertisement transaction (AT)} that is structured as follows: $ AT = (T\_ID, price/amount, \eta_i, CoL_f^{i})$, where \textit{price/amount} is either the price of the energy, i.e., $\lambda_i, \forall i\in \mathcal{P}$, if the AT is generated by a producer, or the amount of requested energy, i.e., $e_j, \forall j\in \mathcal{C}$, if the AT is generated by a consumer, and \(\eta\) is the reputation factor of the agent.\par In conventional blockchain-based frameworks, the negotiation transactions are stored in the blockchain, which provides high auditability.
However, this increases the computational and storage resource consumption of the blockchain and limits the throughput, i.e., the total number of transactions that can be stored per second \citep{zhang2020privacy}. This can limit blockchain scalability, while the smart grid comprises a broad range of devices that may produce transactions frequently. To address this challenge, in the proposed framework, the negotiation transactions are stored in a public database managed by the grid operator, referred to as the \textit{Advertisement Database} (AD). The write access of the AD is restricted to the grid operator, and other nodes in the overlay have read-only permission. The parties involved in an AT, i.e., energy producers and consumers, may store the transactions locally to be employed as a reference in case of a dispute between the parties. The final price and the amount of energy agreed by the participants are later defined during the negotiation, which limits future references to the AT. Thus, we store ATs in the AD, which potentially reduces the packet overhead and blockchain memory footprint. \begin{algorithm}[tb!]
\caption{Producer's Algorithm}\label{producer lag} \footnotesize{ \begin{algorithmic}[1] \State Submit offer AT \Comment{\textit{Advertisement}} \State Explore AD to find potential trading partners \State Divide consumers in groups $\Omega_i^1, ..., \Omega_i^N$ using (\ref{prioritization sell}) \Comment{\textit{Prioritization}} \State Set $n\gets 1$ \State Receive EN transactions from consumers \Comment{\textit{Negotiation}} \While{$|\lambda_{ij}^{k+1}-\lambda_{ij}^{k}| \geq \epsilon$} \For {$j \in \Omega_i^n$} \State Receive $\gamma_{ij}$ from grid operator \State Receive $e_{ji}^{P,k}$ from consumer \State Calculate $\lambda_{ij}^{k+1}$ using (\ref{sell price update}) \State Update $\overline{\mu}_i^{k+1}$ and $\underline{\mu}_i^{k+1}$ using (\ref{sell mu low update}) and (\ref{sell mu up update}) \State Calculate $e_{ij}^{P,k+1}$ using (\ref{sell energy update}) \State Broadcast $\lambda_{ij}^{k+1}$ to consumers \EndFor \EndWhile \State Check if more energy is available \State Set $n\gets n+1$ \State Repeat Negotiation with new consumers \State Receive LP from consumers \Comment{\textit{Energy trading}} \State Inject agreed energy \State Sign EI \end{algorithmic} } \end{algorithm} \subsection{Prioritization} After the \textit{advertisement} step, agents explore the available ATs in the AD for the negotiation. One-on-one negotiation can increase the delay associated with finalizing the trade, as either side of the trade may have higher welfare in trading with another party in the network. Also, negotiating with every agent in the market increases the computation and communication overheads, which potentially leads to low scalability \citep{khorasany2019enhancing}. Thus, agents need to prioritize their trading partners based on their preferences and only negotiate with a given number of agents. As stated in Section \ref{sub:sec:agent-modeling}, agents have to pay a grid service charge for each transaction, as defined in (\ref{service charge}).
This charge is directly associated with the distance between producer \textit{i} and consumer \textit{j}, and impacts the welfare of the agents. Accordingly, agents are incentivized to trade with trading partners located in their close vicinity. This reduces the load on the transmission lines and thus reduces the cost of managing the physical layer. On the other hand, agents may prefer to negotiate with trading partners with higher reputation factors, indicating their past performance in fulfilling their commitments. Thus, agents need to prioritize their trading partners based on their preferences to limit the partner search space. These preferences include the reputation factor and the distance. Agents may have varied preferences over the reputation factor and distance of their trading partners. Hence, we define a priority index for each possible transaction between producers and consumers. This index for an offer from consumer $j$ received by producer $i$, and for an offer from producer $i$ received by consumer $j$, is defined using (\ref{sell pri index}) and (\ref{buy pri index}), respectively. \begin{equation}\label{sell pri index} \Upsilon_{ij}=\alpha_i \eta_j +\beta_i (1-\cfrac{|\sigma_i -\sigma_j|}{D_{ij}})\\ \end{equation} \begin{equation}\label{buy pri index} \Upsilon_{ji}=\alpha_j \eta_i +\beta_j (1-\cfrac{|\sigma_i -\sigma_j|}{D_{ji}})\\ \end{equation} where \(\alpha\) and \(\beta\) are the weights that an agent places on the reputation factor and proximity of other agents, respectively, such that \(\alpha+\beta=1\), \(D_{ij}=\underset{\sigma_j}{\max} |\sigma_i-\sigma_j|\), and \(D_{ji}=\underset{\sigma_i}{\max} |\sigma_i-\sigma_j|\). The second term in (\ref{sell pri index}) and (\ref{buy pri index}) denotes the proximity of each trading partner to the agent, such that for the furthest trading partner of each agent this value is zero.
After calculating the priority indices, each agent divides its trading partners into $N$ groups and then starts to negotiate with the agents from the group with the highest priority, proceeding toward the group with the lowest priority. Each group of potential consumers/producers for producer $i$/consumer $j$ can be formed as \begin{equation}\label{prioritization sell} \Omega_i^n=\{j \in \mathcal{C}| (N-n)/N \leq \Upsilon_{ij} \leq (N-n+1)/N \} , \forall n \in \mathcal{N} \end{equation} \begin{equation}\label{prioritization buy} \Omega_j^n=\{i \in \mathcal{P}| (N-n)/N \leq \Upsilon_{ji} \leq (N-n+1)/N \} , \forall n \in \mathcal{N} \end{equation} in which, for consumer $j$, producers in group $\Omega_j^n$ have higher priority than producers in $\Omega_j^{n+1}$. Similarly, for producer $i$, consumers in group $\Omega_i^n$ have higher priority than consumers in $\Omega_i^{n+1}$. \begin{algorithm}[tb!] \caption{Consumer's Algorithm}\label{consumer alg} \footnotesize{ \begin{algorithmic}[1] \State Submit ask AT \Comment{\textit{Advertisement}} \State Explore AD to find potential trading partners \State Divide producers in groups $\Omega_j^1, ..., \Omega_j^N$ using (\ref{prioritization buy}) \Comment{\textit{Prioritization}} \State Set $n\gets 1$ \State Send EN transactions to producers \Comment{\textit{Negotiation}} \While{$|e_j^{P,k+1}-e_j^{P,k}| \geq \epsilon$} \For {$i \in \Omega_j^n$} \State Receive $\gamma_{ij}$ from grid operator \State Receive $\lambda_{ij}^{k}$ from producer \State Update $\overline{\mu}_j^{k+1}$ and $\underline{\mu}_j^{k+1}$ using (\ref{buyer mu low update}) and (\ref{buyer mu up update}) \State Calculate $e_{ji}^{P,k+1}$ using (\ref{buyer power update}) \State Broadcast $e_{ji}^{P,k+1}$ to producers \EndFor \EndWhile \State Check if more energy is needed \State Set $n\gets n+1$ \State Repeat Negotiation with new producers \State Send LP to producers \Comment{\textit{Energy trading}} \State Sign EI \end{algorithmic} } \end{algorithm}
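The prioritization step above, scoring partners by reputation and proximity and then bucketing them into $N$ groups, can be sketched as follows from a producer's perspective; the agent data, the weights, and the clipping of border cases are illustrative assumptions.

```python
# Sketch of the prioritization step: a producer scores each consumer by a
# weighted sum of reputation (eta) and proximity (from noisy locations
# sigma), then buckets consumers into N priority groups. Agent data and
# weights are illustrative.

def priority_index(alpha, beta, reputation, sigma_i, sigma_j, d_max):
    """Upsilon = alpha*reputation + beta*(1 - |sigma_i - sigma_j| / D)."""
    return alpha * reputation + beta * (1 - abs(sigma_i - sigma_j) / d_max)

def group_consumers(consumers, alpha, beta, sigma_i, n_groups):
    """Assign each consumer to a priority group; group 1 is negotiated first."""
    d_max = max(abs(sigma_i - c["sigma"]) for c in consumers.values())
    groups = {n: [] for n in range(1, n_groups + 1)}
    for j, c in consumers.items():
        u = priority_index(alpha, beta, c["eta"], sigma_i, c["sigma"], d_max)
        # (N-n)/N <= Upsilon <= (N-n+1)/N  =>  n = N - floor(Upsilon*N), clipped.
        n = min(max(n_groups - int(u * n_groups), 1), n_groups)
        groups[n].append(j)
    return groups

consumers = {"c1": {"eta": 0.9, "sigma": 2.0},   # high reputation, nearby
             "c2": {"eta": 0.4, "sigma": 10.0},  # low reputation, far away
             "c3": {"eta": 0.7, "sigma": 5.0}}
g = group_consumers(consumers, alpha=0.5, beta=0.5, sigma_i=1.0, n_groups=3)
```

With equal weights, the nearby high-reputation consumer lands in the first group, so the producer negotiates with it before the more distant, lower-reputation peers.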
\subsection{Negotiation}\label{sec:neg} After \textit{prioritization}, each consumer starts negotiating with the producer agents in $\Omega_j^1$. The first step is for consumer \textit{A} to request the grid service charge from the grid operator. \textit{A} sends the list of agents in $\Omega_j^1$ to the grid operator, who calculates the grid service charges using (\ref{service charge}) and sends the response back to \textit{A}. Once \textit{A} receives the grid service charges, it starts the negotiation with the agents in $\Omega_j^1$ by sending an \textit{Energy Negotiation} (EN) transaction that is structured as $<T\_ID, Amount, Price, PK^{D}, Sign^{D}, PK, Sign, Agreement_P, Agreement_C>$, where \textit{Amount} identifies the total amount of energy demanded by \textit{A} and \textit{Price} is the price that the consumer is willing to pay. \textit{PK\textsuperscript{D}} and \textit{Sign\textsuperscript{D}} represent the PK of the destination agent and its corresponding signature. The destination can be either the producer, when \textit{A} generates the transaction, or \textit{A}, when a producer is the generator of the transaction. This potentially enables routing the transaction to the destination using the routing algorithm proposed in \citep{dorri2019spb}. \textit{PK} and \textit{Sign} represent the PK and signature of the transaction generator. Finally, \textit{$Agreement_P$} and \textit{$Agreement_C$} are flags that indicate whether the energy producer or consumer, respectively, agrees with all conditions of the trade. The generator of EN signs the hash of the transaction content, which ensures the integrity of the transaction content. After populating EN, \textit{A} sends the transaction to the energy producers in $\Omega_j^1$. \par The energy producer may receive several EN transactions from consumers. The producer only responds to those who are in the set $\Omega_i^1$.
For each EN received from consumers in this set, the producer updates the price using (\ref{sell price update}). When the response from the producer is received by the consumers, they update their energy using (\ref{buyer power update}) and again respond to the producer. This process continues until both sides of the trade agree on the trade conditions and thus set $Agreement_P$ and $Agreement_C$ to '1'. An EN transaction is considered valid, and can be stored in the blockchain, only when both the energy producer and consumer have signed the transaction and $Agreement_P$ and $Agreement_C$ are set to '1'. This ensures that only the last transaction, which contains the final trade conditions, is stored in the blockchain, which in turn increases the blockchain throughput and reduces the delay of the negotiation. \subsection{Energy trading} In this section, we discuss the energy trading process. Once agents agree on the trade conditions during the negotiation step, the consumer generates a \textit{Late Payment} (LP) transaction that pays the energy price to the producer. Conventional energy trading frameworks rely on a TTP to oversee the trade and ensure that both sides commit to their obligations, which in turn reduces the privacy of the users. To address this challenge, our framework relies on atomic meta-transactions, in which two transactions are considered valid if and only if both are generated within a specified time frame. Any incompatible transaction is considered invalid and thus is not stored in the blockchain \citep{dorri2019spb}. LP is an atomic meta-transaction; thus, the energy price is not transferred to the producer's account unless LP is coupled with another transaction, discussed in the next paragraph. LP is structured as $<T\_ID, Price, Input, Output, EN\_Ref, Expiry\_Time, Sign>$, where \textit{Price} is the price to be paid to the energy producer.
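The EN validity rule above can be sketched as a predicate; field names and the signature-verification callable are illustrative simplifications, not the paper's exact encoding.

```python
def en_is_final(en, verify_sig):
    """An EN transaction is blockchain-valid only when both trading parties
    agree (both flags set to '1') and both have signed the content.

    en         : dict holding the EN fields used here (illustrative names)
    verify_sig : callable(pk, signature, content) -> bool, abstracted away
    """
    if en["agreement_p"] != "1" or en["agreement_c"] != "1":
        return False
    content = (en["t_id"], en["amount"], en["price"])
    return (verify_sig(en["pk_producer"], en["sign_producer"], content)
            and verify_sig(en["pk_consumer"], en["sign_consumer"], content))
```

Intermediate ENs in the iterative negotiation fail this check (flags not yet '1') and therefore never reach the chain.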
\textit{Input} is the address of an unspent transaction that has enough balance to pay the transaction price, and \textit{Output} is the address of the energy producer as in the last EN transaction. \textit{EN\_Ref} is the address of the EN transaction that is stored in the public blockchain. \textit{Expiry\_Time} represents the time period within which the second transaction corresponding to the current LP must be generated; otherwise, LP is discarded. \textit{Sign} is the signature of the transaction generator, which must correspond to the PK used in the EN transaction. The consumer then broadcasts the LP transaction.\par The energy producer starts injecting energy into the grid when it receives the LP transaction. Once the total amount of agreed energy is injected into the grid, the smart meter of the producer generates an \textit{Energy Injection} (EI) transaction, which is a multisig transaction structured as $<Amount, LP\_ID, PK\_P, Sign\_P, PK\_C, Sign\_C>$, where \textit{Amount} is the amount of energy injected into the grid by the producer. \textit{LP\_ID} is the \textit{T\_ID} of the corresponding LP, used for verification of the trade as outlined in the next paragraph. EI requires two-out-of-two signatures to be considered valid: the energy producer's signature, populated in \textit{PK\_P} and \textit{Sign\_P}, and the energy consumer's signature, populated in \textit{PK\_C} and \textit{Sign\_C}.\par Once EI is broadcast to the network, the participating nodes in the blockchain start verifying the energy trade. First, the participants must locate the associated LP and EN. Recall that EI contains the \textit{LP\_ID} and LP contains \textit{EN\_Ref}, the identifier of the EN transaction. The verifiers first match the signatures and PKs in the transactions. The next step is for the verifiers to validate that the amount and price agreed in the transactions match.
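The cross-checks the verifiers perform on the EI--LP--EN chain can be sketched as follows; field names are illustrative, and signature matching (already covered above) is omitted for brevity.

```python
def verify_trade(ei, lp, en, now):
    """Cross-check the three linked transactions before storing EI and LP:
    references must chain (EI -> LP -> EN), LP must not be expired, and the
    agreed price and amount must match."""
    if ei["lp_id"] != lp["t_id"] or lp["en_ref"] != en["t_id"]:
        return False                      # broken reference chain
    if now > lp["expiry_time"]:
        return False                      # LP expired: atomic pairing failed
    if lp["price"] != en["price"]:
        return False                      # price disagreement
    return ei["amount"] == en["amount"]   # full agreed energy injected
```

A mismatched amount would not simply invalidate the trade but instead trigger the Dispute Resolution contract, as discussed next.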
If the price in these transactions differs, the verifiers discard them. Once all the above steps are successfully validated, the EI and LP transactions are stored in the blockchain, which triggers the payment of the energy price to the energy producer. \par In case of detecting an inconsistency in the amount of injected energy in EI, the verifiers call a \textit{Dispute Resolution} (DR) smart contract. The DR smart contract contains rules to manage the situation where the energy producer fails to transfer the promised amount of energy to the consumer; for example, the energy produced by the solar panel of an energy producer may be less than the estimated production, which impacts the traded energy. Based on the amount of transferred energy identified in EI, DR calculates a new energy price and generates a \textit{Price Update} (PU) transaction requesting the energy consumer to generate a new LP with exactly the same conditions as the previous one but with the updated price. PU is structured as $<LP\_ID, Old\_Price, New\_Price>$. The new LP is broadcast to the network and is stored in the blockchain with the corresponding EI. \par Recall that in the proposed framework, we defined a reputation factor that impacts the decision making of the nodes. The reputation is assigned by the DR smart contract based on the commitments of an energy producer. In the case of the above example, DR will reduce the reputation of the node and inform all participants. In this study, we consider negative reputation only, i.e., reputation changes only when a node misbehaves in the network.
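The text does not specify the exact pricing rule inside DR; a pro-rata recalculation is one plausible choice and is sketched below as an assumption, together with a PU-style record.

```python
def dispute_price(agreed_price, agreed_amount, injected_amount):
    """Assumed pro-rata rule for the DR contract: scale the agreed price by
    the fraction of energy actually injected (the paper only states that DR
    computes a new price from the amount identified in EI)."""
    new_price = agreed_price * injected_amount / agreed_amount
    pu = {"old_price": agreed_price, "new_price": new_price}
    return new_price, pu
```

For example, if only 7.5 kWh of an agreed 10 kWh is injected, the consumer is asked to regenerate LP with three quarters of the original price.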
\section{Case Studies}\label{sec: case study} \begin{figure} \centering \input{casestudy} \caption{33-Bus test system.}\label{fig:test system} \end{figure} \begin{table}[tb!] \centering \footnotesize{ \caption{Parameter Setup.}\label{tab:parameters} \begin{tabular}{cccc} \hline \multicolumn{4}{c}{\textbf{Key parameters}} \\ \hline Parameter & \multicolumn{1}{c|}{Value} & Parameter & Value \\ \hline $P$ & \multicolumn{1}{c|}{14} & $C$ & 18 \\ $\rho_\lambda$ & \multicolumn{1}{c|}{0.01} & $\rho_\mu$ & 0.001 \\ $\underline{\lambda}^G$ & \multicolumn{1}{c|}{5~\cent/kWh} & $\overline{\lambda}^G$ & 25~\cent/kWh \\ $N$ & \multicolumn{1}{c|}{2} & $\omega$ & 2~\cent/kWh/km \\ \hline \multicolumn{2}{c|}{\textbf{Producers' parameters}} & \multicolumn{2}{c}{\textbf{Consumers' parameters}} \\ \hline $a_i$ & \multicolumn{1}{c|}{(0.5-1]~\cent/kWh$^2$} & $a_j$ & (0.5-10]~\cent/kWh$^2$ \\ $b_i$ & \multicolumn{1}{c|}{[5-10]~\cent/kWh} & $b_j$ & [10-20]~\cent/kWh \\ $\underline{e}_i$ & \multicolumn{1}{c|}{[0-5] kWh} & $\underline{e}_j$ & [1-4] kWh \\ $\overline{e}_i$ & \multicolumn{1}{c|}{[5-10] kWh} & $\overline{e}_j$ & [6-10] kWh \\ $\eta_i,\alpha_i,\beta_i$ & \multicolumn{1}{c|}{[0-1]} & $\eta_j,\alpha_j,\beta_j$ & [0-1] \\ \hline \end{tabular} } \end{table} In this section, simulation case studies are provided to verify the operation of the proposed framework. As shown in Fig. \ref{fig:test system}, the considered test system is the IEEE 33-bus distribution system with 16 producers and 16 consumers. Table \ref{tab:parameters} summarizes the key parameters and the ranges of values for the producers' and consumers' parameters. Fig. \ref{fig: power and price res} illustrates the results of P2P trading in the test system. The traded energy and price in different transactions take various values based on the agents' preferences. Agents tend to trade energy with their closest neighboring agents to pay a lower grid service charge. For example, the consumer at bus 1 buys energy from the producer at bus 18.
However, if the offer/ask from the agent at the nearest node is not available, or does not satisfy the requirements of the agent, they have to trade with other neighboring agents. For instance, while the nearest agents to agent 5 are the agents at buses 4 and 6, this agent trades energy with the producers at buses 26 and 27. Since agents 4 and 6 prefer to trade with other neighboring nodes (the agents at buses 3 and 7, respectively), their offers are not available to agent 5. It can be seen that agents 16 and 17 do not buy any energy in the market. These agents have lower utility function parameters than their neighboring agents, which means that their willingness to pay for energy is less than that of agent 15, and hence, the producer at bus 14 prefers to trade with the agent at bus 15. \begin{figure}[tb!] \centering \includegraphics[scale=0.75]{figures/tradedpowernew.pdf} \caption{Transactions between producers and consumers; a) traded energy, b) energy price.}\label{fig: power and price res} \end{figure} To investigate the impact of considering the grid service charge on the number of transactions ($n_T$), we implemented the energy trading algorithm for different values of $\omega$. The results are reported in Fig. \ref{fig:line flow}, where each marker indicates a transaction between a producer and a consumer, and the total number of transactions in each case is given in parentheses. The case with \(\omega=0\) means that there is no limit on the distance between agents, and they can trade with any number of agents. Therefore, the number of transactions in this case is significantly higher. Increasing the value of $\omega$ reduces the number of transactions. The welfare of agents depends on the grid service charge that they pay (see (\ref{pro welf}) and (\ref{con welf})), and hence, an increase in $\omega$ reduces their welfare, as they trade less energy and pay more for utilizing the grid.
The negotiation among agents is an iterative process, and the time required for agents to reach agreement depends on several factors, including the number of agents, the number of iterations required for convergence in Algorithms \ref{producer lag} and \ref{consumer alg}, the computation time required by agents to solve (\ref{sell update dec}) and (\ref{buyer update dec}) in each iteration, and the communication time. The results of implementing the market settlement algorithm with and without the \textit{prioritization} step are compared in Table \ref{tab:prioritization}. The \textit{prioritization} step reduces the number of negotiating agents, and hence, reduces the number of communications in each iteration. Moreover, agents need less time to solve (\ref{sell update dec}) and (\ref{buyer update dec}), as they have fewer decision variables after \textit{prioritization}. Therefore, applying \textit{prioritization} reduces the negotiation time among agents by nearly 45\%. \begin{figure} \centering \includegraphics[scale=0.85]{figures/lineflow.pdf} \caption{Impact of considering grid service charge on the number of transactions.}\label{fig:line flow} \end{figure} \begin{table}[tb!] \centering \footnotesize{ \caption{Impact of Prioritization.}\label{tab:prioritization} \setlength{\tabcolsep}{5pt} \begin{tabular}{lcc} \cline{2-3} \\[-0.7em] & \textbf{w prioritization} & \textbf{w/o prioritization} \\ \\[-0.7em] \hline No. of decision variables & \multirow{2}{*}{20, 6} & \multirow{2}{*}{38, 16} \\ \\[-1em] (producer, consumer) & & \\ \\[-1em] No. of communications & \multirow{2}{*}{63} & \multirow{2}{*}{252} \\ \\[-1em] (in each iteration) & & \\ \\[-1em] No. of iterations for convergence & 163 & 206 \\ \\[-1em] \hline Negotiation time (s) & 1.04 & 1.88 \\ \\[-1em] \hline \end{tabular} } \end{table} \begin{table}[tb!]
\centering \footnotesize{ \caption{Comparative results of P2P market.} \label{tab:p2p results} \setlength{\tabcolsep}{5pt} \begin{tabular}{lcc} \cline{2-3} \\[-0.7em] & \textbf{P2P} & \textbf{No P2P} \\ \\[-0.7em] \hline Total imported energy from grid (kWh) [$\sum_j{e_j^G}$] & 22.31 & 119 \\ \\[-1em] Total exported energy to grid (kWh) [$\sum_i{e_i^G}$] & 8.46 & 105 \\ \\[-1em] Total welfare of consumers ($\cent$) [$\sum_j{W_j}$] & 62.73 & -4143.04 \\ \\[-1em] Total welfare of producers ($\cent$) [$\sum_i{W_i}$] & 242.64 & -302.03 \\ \\[-1em] Total paid grid service charge ($\cent$) [$\sum_j \sum_i {e_{ij}^p \gamma_{ij}}$] & 50.44 & 0 \\ \hline \end{tabular} } \end{table} To demonstrate the efficacy of the P2P market, its results are compared with the case where producers and consumers trade only with the grid. Comparative results are reported in Table \ref{tab:p2p results}. As can be inferred from the results, the P2P market reduces the energy imported from and exported to the grid by agents, meaning that they trade more with other P2P agents. Also, since the P2P market price is more beneficial for agents (see (\ref{price lim})), they reach a higher welfare in the P2P market, even though they have to pay a grid service charge to the grid operator. As stated in Section \ref{sub:sec:advertisement}, in the proposed framework the ATs are stored off-chain in AD. Here, we study the impact of using AD by evaluating the blockchain size and the number of consensus rounds, i.e., how many times the validators must run the consensus algorithm. The blockchain size shows the amount of storage space saved by employing AD, while the number of consensus rounds evaluates the computational overhead saved by running fewer consensus rounds. We employed the structure and configuration of the IEEE 33-bus distribution system to implement a distributed network using the Java programming language on a Raspberry Pi 2.
It is assumed that each block contains 10 transactions. The process involved in the consensus algorithm is abstracted out, as it does not impact the functionality of the proposed method. Ten market intervals are implemented, during each of which every energy producer generates an AT. To benchmark the results, a baseline method is considered where all ATs are stored in the blockchain. To focus on the impact of AD, we disregard the rest of the functions and assume ATs are the only transactions generated during each market interval. Based on the implementation results, the size of each AT is 1776 B. After 10 epochs, the baseline blockchain includes 16 blocks with a cumulative size of 314 KB. Thus, each node must allocate 314 KB of storage space to store the blockchain. Our solution off-loads this overhead to a central trusted node that manages the AD; thus, there is no memory overhead on the participating nodes in the blockchain. Let $\nu$ represent the overhead associated with appending a new block, which includes computational, packet, and memory overhead. The proposed framework incurs no overhead on the validators during the \textit{advertisement} process, while the overhead is 16$\nu$ in conventional methods. We next evaluate the processing overhead associated with CoL. We proposed a CoL that enables users to anonymously verify the location of the parties involved in energy trading. CoL enhances the anonymity level of users and thus protects user privacy. On the flip side, it increases the processing overhead on the users, who must generate and verify CoL. To evaluate the incurred overhead, we implemented our framework using the Java programming language on a Raspberry Pi 2, which represents low-resource devices. We measured the processing time for generating the CoL request, which involves generating a set of keys and forming the Merkle tree, and for verifying the CoL, which involves verifying the existence of the PK in the Merkle tree and validating the corresponding signatures.
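The Merkle-tree mechanics behind CoL (committing to a set of PKs via an MTR and later proving membership of one PK) can be sketched as follows; this is a minimal SHA-256 sketch assuming a power-of-two number of keys, not the paper's Java implementation.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """MTR over hashed public keys (assumes a power-of-two leaf count)."""
    level = [_h(pk) for pk in leaves]
    while len(level) > 1:
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes proving that leaves[index] is committed in the MTR."""
    level, proof = [_h(pk) for pk in leaves], []
    while len(level) > 1:
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left?)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def merkle_verify(pk, proof, root):
    """Check a revealed PK against the MTR committed in the CoL request."""
    node = _h(pk)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root
```

Only the root is shared with the verifier, so the actual PKs stay hidden until the owner reveals one together with its proof.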
The implementation results are shown in Table \ref{tab:COL-performance}. The verification of the CoL involves verifying two signatures, which takes longer than generating the CoL. In addition to the processing overhead, CoL increases the size of the transactions. Table \ref{tab:COL-packet} compares the sizes of the transactions and shows that CoL only affects the AT. It nearly doubles the size of the AT, but this does not affect the size of the blockchain, as ATs are stored off-chain only. All other transactions are unaffected by CoL. \par \section{Security and Privacy Analysis}\label{sec:security} In this section, we analyze the security and privacy of the proposed framework. We first outline the threat model and then discuss possible attacks and how to protect against them.\par \textbf{\textit{Threat Model:}} We assume that the adversary (or cooperating adversaries) can sniff the communications, discard transactions, generate fake transactions, and pretend to be another node in the network. The adversary may attempt to deanonymize a user by classifying blockchain transactions and monitoring real-time communications in the blockchain. We assume standard secure encryption algorithms are in place, which cannot be compromised by the adversary. We assume smart meters are tamper resistant, and thus the end users cannot modify the transactions generated by the meters. \par \begin{table}[tb!] \centering \footnotesize{ \setlength{\tabcolsep}{5pt} \caption{CoL processing time.}\label{tab:COL-performance} \begin{tabular}{ccc} \hline & CoL formation & CoL verification \\\hline Processing time (ms) & 663.2 & 1795 \\\hline \end{tabular} } \end{table} \begin{table}[tb!]
\centering \footnotesize{ \setlength{\tabcolsep}{5pt} \caption{Comparison of transaction sizes.}\label{tab:COL-packet} \begin{tabular}{ccccc} \hline & AT & EN & LP & EI \\\hline Including CoL (Bytes) & 2193 & 1928 & 1056 & 1912 \\\hline Excluding CoL (Bytes) & 1041 & 1928 & 1056 & 1912 \\\hline \end{tabular} } \end{table} \subsection{Security} In the following, we discuss possible attacks and how the proposed framework protects against them. \par \textit{CoL Replay Attack:} In this attack, a malicious node attempts to employ the CoL of another node to generate transactions. The node that employs a CoL is required to sign the corresponding transaction with the private key corresponding to a PK that exists in the MTR, which is known only to the CoL generator. Thus, it is impossible for a malicious node to utilize the CoL of another node. \par \textit{Fake CoL:} In this attack, a malicious node pretends to be a genuine smart meter and generates a fake CoL that can later be used in its energy trades. The CoL must be signed by a genuine smart meter, and the CA validates the PK of the verifier. In the case of this attack, the CA will not validate the PK, and thus the attack is detected. \par \textit{Double selling:} In this attack, a malicious energy producer attempts to sell the same amount of energy to different consumers. Recall from Section \ref{sec:energy trading} that an energy trade involves three main transactions: EN, LP, and EI. Once the agreed energy is injected into the grid, the smart meter of the energy producer generates an EI transaction that triggers the payment of the energy price to the producer. The smart meter generates only one EI, which includes a reference to the corresponding LP, and LP includes a reference to the corresponding EN. Thus, it is impossible for the energy producer to sell the same energy to multiple nodes. \par An energy producer may attempt to inject less energy than the agreed amount and claim the full price.
The smart meter of the producer will only generate the EI if the full amount of agreed energy is injected into the grid. If the energy producer injects only part of the energy and the expiry time approaches, the smart meter will generate an EI reflecting the amount that was injected into the grid. In this case, the DR smart contract is called, which adjusts the price of the energy and ensures the producer is only paid for the amount of energy injected into the grid. \par \textit{Reputation Modification:} In this attack, a malicious node attempts to improve its own reputation or reduce the reputation of another node in the network. Recall that blockchain is an immutable ledger, which makes it impossible to modify or remove previously stored transactions, and thus impossible for the attacker to modify its reputation. To reduce the reputation of another node, the malicious node would have to modify the code of the smart contract, which is impossible due to the immutability of the blockchain. The DR smart contract is the only entity that can reduce the reputation of a node. All participants know the address of the valid DR contract. When participating nodes receive a reputation reduction from a contract, they first verify whether the contract address matches the genuine DR smart contract. If so, they accept the new reputation. Otherwise, they discard the transaction. \subsection{Privacy} In the following, we analyze the proposed framework from the privacy perspective. Recall from Section \ref{sec:energy trading} that the grid operator charges a grid service charge for each transaction, which depends on the distance between the energy consumer and producer. Thus, the consumer and producer must prove their location; however, this may compromise their privacy, as malicious nodes can classify the blockchain transactions to deanonymize the user. To address this challenge, we proposed A-PoL, which enables the participants in the blockchain to verify the location of an anonymous smart meter using a CoL.
Assume node \textit{A} is using A-PoL. The privacy of \textit{A} can be studied from the perspective of the following entities: i) CA: \textit{A} uses the PK populated by the CA only to prove its identity to the verifier. The CoL employed by \textit{A} includes the PK of the verifier and not of \textit{A}. Given that the verifier is selected randomly by \textit{A} and there is no link between \textit{A} and the verifier, the CA is unable to identify the transactions generated by \textit{A}; ii) verifier: \textit{A} only sends the MTR to the verifier, which hides the actual PKs of \textit{A} from the verifier. \textit{A} reveals the PKs in the Merkle tree to prove ownership of the CoL. A group of smart meters may choose to create a single MTR, which further protects their privacy; and iii) network participants: the network participants only receive the CoL, which contains the PK of the verifier and the MTR. As outlined earlier, there is no link between the verifier and \textit{A}; thus, knowledge of the identity of the verifier does not impact the privacy of \textit{A}. The Merkle tree includes a number of PKs that are employed by \textit{A} (or other involved smart meters) to generate transactions; thus, \textit{A} may generate multiple transactions with the same PK. This potentially reduces the anonymity level of the user, as malicious nodes may attempt to deanonymize a user by classifying their transactions. The anonymity level of \textit{A} largely depends on the number of PKs employed in the Merkle tree. A large number of PKs, however, incurs key-management overhead on \textit{A}. Thus, there is a trade-off between the number of keys in the Merkle tree and user anonymity. \par Recall from Section \ref{sub:sec:market-settlement} that the energy producer and consumer employ a cost/utility function, as shown in (\ref{cost-func-producer}) and (\ref{cost-function-consumer}), which represents their willingness to pay or accept an energy price based on their preferences and concerns.
These functions depend on $a_i$, $b_i$, and $c_i$, and thus it is critical for the producers and consumers to keep these values private. In the proposed framework, the market settlement does not require the nodes to reveal $a_i$, $b_i$, and $c_i$, which in turn enhances the privacy of the users. \section{Conclusion and Future Works}\label{sec:conclusion} In this paper, we propose a blockchain-enabled P2P energy market, which provides a secure and privacy-preserving environment for energy trading between producers and consumers. A decentralized market settlement process is designed, which allows agents to trade energy without revealing their private information. The grid service charge, calculated based on the distance between producer and consumer, is used to incentivize agents to trade energy locally and to reduce the possibility of overloading electricity grid lines.\par To reduce the blockchain memory footprint, we propose AD, which stores the energy advertisements and is maintained by the grid operator. A \textit{prioritization} step is implemented to enable agents to select their trading partners based on their location and reputation factors. To allow agents to prove their location without revealing their real identity, an A-PoL algorithm is proposed using CoL issued by smart meters. Simulation results on the IEEE 33-bus test system confirm that the proposed framework improves the welfare of agents through P2P trading, while their privacy is protected. Furthermore, employing AD to store ATs and limiting the number of trading partners through \textit{prioritization} decrease the system overheads. Future work is needed to relax the tamper-resistance assumption made for smart meters. Relaxing this assumption complicates the trust issue, as the smart meters may generate fake transactions. As another research direction, the impact of the mobility of smart meters on A-PoL can be studied.
In such cases, the CA must ensure that the location of a meter is as claimed before granting a certificate. It is also critical for the CA to be able to revoke granted certificates, as smart meters may change their location. Another challenge for future work is to explore ways of decentralizing the AD without increasing the blockchain memory footprint, to achieve an even more decentralized energy marketplace. \bibliographystyle{IEEEtran} \section{Introduction} \IEEEPARstart{I}{n response} to climate change concerns, the decarbonization and digitization of the electricity sector have accelerated in recent years. The path toward decarbonization is associated with the high penetration of Distributed Energy Resources (DERs), such as rooftop solar, home batteries, and Electric Vehicles (EVs), with the potential to support a reliable, affordable, and lower-emissions electricity grid. Progressive deployment of DERs raises several opportunities and challenges in electricity systems \cite{openenergy}. Technical challenges include the increase in bidirectional power flows, the rise in voltage levels, and the lack of visibility \cite{coster2010integration}. On the other hand, efficient DER integration can provide substantial benefits for energy customers \cite{aemo}. DER owners can benefit from Feed-In-Tariff (FiT) programs by selling energy back to the utility grid for a fixed price \cite{ye2017analysis}. Alternatively, DERs can be coordinated and aggregated for participation in different markets. Virtual Power Plants (VPPs) are an example of DER aggregation that exploits their inherent flexibility \cite{pudjianto2007virtual}. An emerging technique for integrating small-scale producers and consumers into energy markets is Peer-to-Peer (P2P) trading, which allows bilateral energy transactions between users \cite{tushar2020peer}.
P2P trading provides significant benefits for both end users and grid operators, such as increasing welfare through preference satisfaction \cite{morstyn2018using}, lowering operational costs, and improving system reliability \cite{mengelkamp2018designing}. P2P trading offers the flexibility required to coordinate agents with diverse preferences. Recent advances in the Internet of Things (IoT) and digital technologies have paved the path toward grid digitization. Grid digitization provides a two-way communication network that allows DER owners and energy consumers to act as proactive agents in the energy markets and facilitates P2P market implementation. Through the direct interaction of agents in a decentralized platform, small-scale producers are allowed to compete with large traditional suppliers, and consumers have the freedom to select their energy suppliers based on their preferences. In P2P trading, it is expected that market participants can settle bilateral transactions with the least influence from a central authority \cite{giotitsas2015peer}. Hence, designing a decentralized platform for managing market participants with diverse preferences is a challenging task. Blockchain technology offers new opportunities for decentralized market designs due to its salient features, which include decentralization, trust, anonymity, and auditability. Blockchain enables energy producers and consumers to directly negotiate and trade energy without reliance on a Trusted Third Party (TTP). It provides a platform to store and share data in a secure and verifiable manner, even when the identity and trustworthiness of market participants are unknown \cite{van2020integrated}. The participating nodes in the blockchain, which include energy consumers, producers, prosumers, and grid operators, jointly form an overlay network where they can exchange information \cite{wu2019comprehensive}.
Given the properties of blockchain and the need for a truly decentralized platform for P2P trading, designing blockchain-enabled frameworks for P2P trading has gained momentum in recent years. A blockchain framework for P2P trading should incorporate an appropriate market settlement approach to match trading peers and to settle the bilateral transactions among them. Compared to traditional markets, the P2P market offers more flexibility and trading options to the agents, and hence, it needs a pricing mechanism that incentivizes both producers and consumers to participate in the market. Several approaches can be applied to P2P market clearing and peer matching, such as auction-based mechanisms, game theory, and optimization-based methods \cite{li2018location, paudel2018peer, khorasany2019decentralised}. In auction-based mechanisms, agents express their interest in energy trading by submitting their offers, and the energy allocation and price are determined based on the market clearing rules \cite{li2018location}. Game-theoretic approaches aim to provide a stable solution that is beneficial for all parties \cite{paudel2018peer}. In optimization-based methods, the market settlement is formulated as an optimization problem, which can be decomposed into local subproblems solvable by each agent \cite{khorasany2019decentralised}. The optimization-based methods can be implemented in a decentralized manner without any need for third-party intervention, which allows agents to optimize their welfare by participating in the P2P market. Hence, these methods are well suited for blockchain-enabled P2P markets. However, the computation and communication overheads of these methods are of concern, as they require agents to negotiate iteratively to reach agreements on their actions.
Therefore, reducing these overheads is a key requirement. This paper designs a blockchain-enabled P2P market that provides a secure and transparent environment for the energy trading of producer and consumer agents. In the proposed approach, agents \textit{Advertise} their surplus/deficit energy during each market interval. We use an Advertisement Database (AD) that is centrally managed by the grid operator to avoid storing advertisements in the public blockchain, which in turn reduces the blockchain memory footprint. A decentralized optimization algorithm is employed for the \textit{Negotiation}, which allows agents to iteratively optimize their welfare in the negotiation process. In order to reduce the computation and communication overheads, a \textit{Prioritization} step is considered in the market settlement process that enables agents to prioritize their trading partners based on their proximity and reputation factor. Network constraints should be incorporated in the P2P trading to ensure that energy transactions respect electric grid constraints. Instead of enforcing network constraints directly in the proposed framework, we define a grid service charge for each transaction. To incentivize agents to trade energy with their neighboring agents and reduce network loading, this charge is calculated based on the electrical distance between producer and consumer. To this end, we propose an Anonymous Proof of Location (A-PoL) algorithm that enables agents to prove their location without revealing their real identity. Once the energy consumer and producer agree on the conditions of the trade, they start trading the energy. To reduce the reliance on a TTP yet ensure the security of the trade, we adopt the concept of atomic meta-transactions~\cite{dorri2019spb}, in which two transactions are considered valid only if they are generated within a particular period of time. 
The contributions of this paper are summarized as follows: \begin{itemize} \item [-] a decentralized P2P trading framework that does not require access to private information of agents at any stage of the market settlement; \item[-] a novel \textit{prioritization} step that allows agents to select their trading partners based on their location and reputation in order to reduce the system overheads; \item[-] a new A-PoL algorithm, which uses a Certificate of Location (CoL) issued by smart meters to approve the location of agents without revealing their real identity. \end{itemize} The rest of this paper is organized as follows. Section \ref{sec:related} reviews related work. Section \ref{sec: market structure} outlines the structure of the market, including agent modeling, the market objective, and decentralized optimization of the market objective. Section \ref{sec:energy trading} explains the market settlement process and its different steps. Case studies and numerical results are reported in Section \ref{sec: case study}. A detailed analysis of the security and privacy of the proposed framework is presented in Section \ref{sec:security}. Finally, concluding remarks are given in Section \ref{sec:conclusion}. \section{Prior Art}\label{sec:related} In recent years, blockchain applications in energy systems, such as microgrid energy management, EV integration, and energy trading, have attracted tremendous attention due to salient blockchain features including decentralization, transparency, trust, and immutability \cite{andoni2019blockchain,musleh2019blockchain}. Su \textit{et al.} \cite{su2018secure} employed blockchain to implement secure charging services for EVs through the execution of smart contracts. In \cite{noor2018energy}, blockchain technology is utilized to implement a demand-side energy management method for the efficient operation of microgrids. Hassan \textit{et al.} \cite{hassan2019deal} developed a blockchain-based approach for microgrid energy auctions, in which a consortium blockchain is used to reduce computational complexity by authorizing only selected nodes to add new blocks to the blockchain. The application of blockchain to P2P energy trading has been investigated in several studies. The Brooklyn microgrid is a prototype of a blockchain-enabled P2P market, in which a blockchain framework is employed to implement a decentralized microgrid energy market \cite{mengelkamp2018designing}. A unified blockchain framework for various energy trading scenarios in the industrial IoT is proposed in \cite{li2017consortium}, in which a credit-based payment scheme is employed to overcome the transaction limitations caused by transaction confirmation delays. An operation framework for P2P energy trading at the distribution level is presented in \cite{wang2019energy}, where the system operator clears the market using a crowdsourced energy system model. Dang \textit{et al.} \cite{dang2019demand} proposed a blockchain-based P2P market for optimal load management of big industrial users, in which users can organize their own P2P market to save on electricity costs. In \cite{luo2018distributed}, a two-layer system for distributed electricity trading among prosumers is proposed: in the first layer, prosumers form coalitions and negotiate energy trading, while in the second layer, blockchain is employed for the settlement of the transactions formed in the first layer. The existing blockchain-based frameworks for energy trading suffer from: \textit{(i) negotiation overheads:} the decentralized optimization methods rely on iterative negotiation between the involved agents. In a market with a large number of participants, this iterative process increases the communication and computation overheads, and consequently the negotiation time. 
\textit{(ii) reliance on a TTP:} most of the existing studies rely on a TTP to oversee the trade and ensure that both sides of the trade commit to their obligations. However, this may lead to centralization and privacy concerns, as the TTP can build virtual profiles of the users. \textit{(iii) blockchain overheads:} in conventional blockchain-based methods in smart grids, all communications are stored in the blockchain, which in turn increases the blockchain memory footprint and thus reduces scalability. In this regard, this paper proposes a blockchain-enabled P2P market that alleviates the above-mentioned limitations. \section{The Market Structure}\label{sec: market structure} \begin{figure}[tb!] \centering \includegraphics[scale=0.8]{figures/market layer.pdf} \caption{An overview of the proposed blockchain-enabled P2P energy trading.} \label{fig:layers} \end{figure} In this section, we outline the details of the market structure. As shown in Fig. \ref{fig:layers}, the proposed architecture consists of two layers: i) the physical layer, which is the underlying physical network to transfer electricity from producers to consumers. The minimum requirement for a successful transaction between a producer and a consumer is the existence of a path to transfer power between them; and ii) the information layer, where the participating nodes, which include energy producers, consumers, and the grid operator, connect through a public blockchain to share information. The information layer provides a secure platform for the participating nodes to advertise their energy, negotiate on their offers, and decide on their actions in the market. The market mechanism implemented in this layer enables the agents to trade energy and settle energy transactions. 
\par \subsection{Agents modeling}\label{sub:sec:agent-modeling} We consider an energy network with a set of agents consisting of a set of producer agents with index \(i \in \mathcal{P}=\{1, ..., P\}\) and a set of consumer agents with index \( j \in \mathcal{C}=\{1, ..., C\}\) connected to a physical layer managed by a grid operator. Producers have energy producing capability and can use their own generated energy for their demand. In case there is more generation than demand, producers can sell surplus energy to consumers or to the grid. Consumers, on the other side, can buy energy from producers or the grid. Producers and consumers can negotiate for energy trading in a forward market for any time interval \(t \in \mathcal{T} =\{1, ..., T\}\) with equal duration (e.g., one hour). Agents are equipped with smart meters, which determine the energy surplus/deficit for trade in each time slot. The smart meters are connected to the information layer and thus can exchange information in the blockchain. It is assumed that the smart meters are tamper-resistant. We assume that each agent is equipped with an energy management system that can quantitatively estimate the energy surplus or deficit that needs to be traded in each market interval. Each agent can trade with the grid or with other agents in the network. The total energy surplus and deficit of producers and consumers are represented as \begin{equation}\label{tot energ pro} e_i=e_i^G +\sum_{j \in \mathcal{C}}{e_{ij}^P} \end{equation} \begin{equation}\label{tot energ cons} e_j= e_j^G+\sum_{i \in \mathcal{P}}{e_{ji}^P} \end{equation} where \(e_i^G\) and \(e_j^G\) are the traded energy between producer $i$/consumer $j$ and the grid, respectively, \(e_{ij}^P\) is the energy sold by producer $i$ to consumer $j$, and \(e_{ji}^P\) is the energy purchased by consumer $j$ from producer $i$. Each agent in the P2P market aims to maximize its welfare. 
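As a minimal numerical illustration of the balances in (\ref{tot energ pro}) and (\ref{tot energ cons}), the traded energy of each agent decomposes into a grid component and bilateral P2P components; all quantities and agent names below are hypothetical.

```python
# Minimal sketch of the energy balances in the agent model; all kWh
# values and agent names are illustrative assumptions, not from the paper.

def producer_total(e_grid, p2p_sales):
    """e_i = e_i^G + sum_j e_ij^P: total surplus sold by producer i."""
    return e_grid + sum(p2p_sales.values())

def consumer_total(e_grid, p2p_purchases):
    """e_j = e_j^G + sum_i e_ji^P: total deficit bought by consumer j."""
    return e_grid + sum(p2p_purchases.values())

# Producer 1 sells 0.5 kWh to the grid and 1 kWh to each of consumers A and B.
e_1 = producer_total(0.5, {"A": 1.0, "B": 1.0})   # -> 2.5
# Consumer A covers its whole 1 kWh deficit from producer 1.
e_A = consumer_total(0.0, {"1": 1.0})             # -> 1.0
```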
The welfare incorporates the utility/cost of energy consumption/generation, the cost/revenue of trade with other agents or the grid, and the cost of using the grid for the P2P trading. The welfare functions of producers and consumers can be modeled by (\ref{pro welf}) and (\ref{con welf}), respectively \begin{equation}\label{pro welf} W_i(e_i, \lambda_{ij},\gamma_{ij})= \underline{\lambda}^G e_i^G +\sum_{j \in \mathcal{C}}{e_{ij}^P (\lambda_{ij}-\gamma_{ij})}-C_i(e_i) \end{equation} \begin{equation}\label{con welf} W_j(e_j, \lambda_{ij},\gamma_{ij})= U_j(e_j)- \overline{\lambda}^G e_j^G -\sum_{i \in \mathcal{P}}{e_{ji}^P (\lambda_{ij}+\gamma_{ij})} \end{equation} where \(\underline{\lambda}^G\) denotes the feed-in tariff (FiT), i.e., the price for selling energy to the grid; \(\lambda_{ij}\) is the energy price in the transaction between producer $i$ and consumer $j$; \(\gamma_{ij}\) is the grid service charge for using the grid infrastructure for this trade; and \(\overline{\lambda}^G\) denotes the price for buying energy from the grid, which is usually fixed over time (e.g., a time-of-use tariff). The grid selling and buying prices limit the energy price in the P2P market, i.e., for any bilateral trade \begin{equation}\label{price lim} \underline{\lambda}^G \leq \lambda_{ij} \leq \overline{\lambda}^G. \end{equation} The cost function of the producer represents the cost of generating energy \(e_i\) and can be modeled as \cite{grainger2003power} \begin{equation}\label{cost-func-producer} C_i(e_i)=a_i e_i^2 +b_i e_i +c_i \end{equation} where \(a_i, b_i\) and $c_i$ are positive constants, which can be adjusted by the producer to reflect the generation cost. Since producers usually have renewable generation resources with zero marginal cost, the cost function can instead represent the cost associated with battery degradation, if the producer needs to discharge its battery to sell the energy. 
On the other side, the utility function of a consumer represents its satisfaction level from consuming energy $e_j$ and can be modeled as \cite{samadi2010optimal} \begin{equation}\label{cost-function-consumer} U_j(e_j)= \begin{cases} -a_j e_j^2 +b_j e_j \;\;\;\; :0 \leq e_j \leq \frac{b_j}{2 a_j}\\ \frac{b_j^2}{4 a_j} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\ :e_j \geq \frac{b_j}{2 a_j} \end{cases} \end{equation} where \(a_j\) and \(b_j\) are unique positive constants for each consumer. These parameters reflect the consumer's valuation of the energy and denote the price that the consumer is willing to pay for it. A fundamental challenge in implementing P2P markets is how to deal with network constraints without having a central dispatch mechanism. In this work, instead of enforcing network constraints directly, we use a grid service charge for each trade \cite{baroche2019exogenous}. This fee represents the price that agents need to pay for using the grid infrastructure for each trade. To incentivize agents to trade with their closest electrical partners and to reduce network loading, this charge is calculated based on the electrical distance between producer and consumer as in \begin{equation}\label{service charge} \gamma_{ij}=\omega d_{ij} \end{equation} where \(\omega\) is the grid service charge per unit of electrical distance for each unit of energy, and \(d_{ij}\) is the electrical distance between producer $i$ and consumer $j$. This distance can be calculated based on the power transfer distance, which aggregates the absolute values of the Power Transfer Distribution Factors (PTDFs) induced by a trade as in \begin{equation} \label{eq:ptdf} d_{ij}=\sum_{l \in \mathcal{L}}{|\phi^l_{ij}|}. \end{equation} For any trade, \(\phi^l_{ij}\) indicates the fraction of transacted power from producer $i$ to consumer $j$ that flows over line $l$, and can be calculated using the method presented in \cite{wood2013power}. 
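The agent models in (\ref{cost-func-producer}), (\ref{cost-function-consumer}), and (\ref{service charge})--(\ref{eq:ptdf}) can be sketched compactly as follows; all coefficient values are illustrative assumptions, not calibrated parameters.

```python
# Sketch of the agent models: quadratic producer cost C_i, saturating
# consumer utility U_j, and the distance-based grid service charge
# gamma_ij.  All coefficient values used below are illustrative.

def producer_cost(e, a, b, c):
    """C_i(e_i) = a_i e_i^2 + b_i e_i + c_i."""
    return a * e**2 + b * e + c

def consumer_utility(e, a, b):
    """U_j(e_j): quadratic up to the satiation point b/(2a), flat afterwards."""
    sat = b / (2 * a)
    if e <= sat:
        return -a * e**2 + b * e
    return b**2 / (4 * a)

def grid_service_charge(omega, ptdf_fractions):
    """gamma_ij = omega * d_ij, with d_ij the summed |PTDF| over all lines."""
    return omega * sum(abs(phi) for phi in ptdf_fractions)

# The utility is flat beyond the satiation point b/(2a) = 4.0 here:
assert consumer_utility(9.0, a=0.5, b=4.0) == consumer_utility(4.0, a=0.5, b=4.0)
```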
\subsection{Market objective} \label{sub:sec:market-settlement} The market objective for P2P trading is formulated as social welfare maximization, which maximizes the total welfare of the players in the market subject to the constraints, and mathematically can be modeled as: \begin{equation}\label{tot objective} \begin{aligned} & \underset{\textbf{\textit{$e_i$,$e_j$}}}{\text{max}} \sum_{i \in \mathcal{P}}{W_i} + \sum_{j \in \mathcal{C}}{W_j}\\ & \text{s.t. constraints.} \end{aligned} \end{equation} As stated in (\ref{price lim}), the prices in the P2P market should always be beneficial for both producers and consumers. Hence, it is reasonable to assume that all agents try to maximize their traded energy in the P2P market and to minimize trading with the grid by setting $e_i^G=e_j^G=0$. Therefore, (\ref{tot objective}) can be rewritten as: \begin{subequations}\label{social welfare} \begin{equation} \begin{aligned} & \underset{\text{\textbf{{$e_{ij}$,$e_{ji}$}}}}{\text{max}} \sum_{j \in \mathcal{C}}{U_j(e_{ji})} - \sum_{i \in \mathcal{P}}{C_i({e_{ij}})} - \sum_{j \in \mathcal{C}}{\sum_{i \in \mathcal{P}}}{(e_{ij}+e_{ji})\gamma_{ij}}\\ \end{aligned} \end{equation} \begin{equation}\label{producer flex} \text{s.t.} \;\;\; \underline{e_i} \leq \sum_{j \in \mathcal{C}}{e_{ij}^P} \leq \overline{e_i} \;\;\;\;\;\;\;\;\;\;\;\;\;\; :\underline{\mu_i}, \overline{\mu_i} \end{equation} \begin{equation}\label{consumer flex} \underline{e_j} \leq \sum_{i \in \mathcal{P}}{e_{ji}^P} \leq \overline{e_j} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :\underline{\mu_j}, \overline{\mu_j} \end{equation} \begin{equation}\label{demand supply} e_{ij}^P=e_{ji}^P \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; :\lambda_{ij} \end{equation} \end{subequations} where (\ref{producer flex}) and (\ref{consumer flex}) represent the flexibility constraints of producers and consumers, respectively. 
The constraint (\ref{demand supply}) represents the power balance in the transaction between producer $i$ and consumer $j$. \(\{\underline{\mu_i}, \overline{\mu_i}, \underline{\mu_j}, \overline{\mu_j}, \lambda_{ij}\}\) are the dual variables associated with these constraints. \begin{figure}[tb!] \input{marketsettlement} \caption{Market settlement algorithm.}\label{fig:full algorithm} \end{figure} \subsection{Decentralized optimization} \label{sub:sec:coordination} In this paper, our aim is to solve (\ref{social welfare}) with only P2P communications to ensure the data privacy of the agents. To do so, a decentralized optimization approach is employed to formulate the market settlement in the P2P market \cite{khorasany2019decentralised}. In this approach, dual variables are used to decouple the constraints, and then the problem is decomposed into local subproblems solvable by producers and consumers. Each local subproblem is solved by deploying the sub-gradient projection method \cite{boyd2011distributed}. Each agent contributes to solving the global problem by updating its local decision variables. The sets of decision variables for producers and consumers are \(\{\lambda_{ij}, e_{ij}^P, \underline{\mu_i}, \overline{\mu_i}\}\) and \(\{ e_{ji}^P, \underline{\mu_j}, \overline{\mu_j}\}\), respectively. The market settlement approach is an iterative process, in which agents update their decision variables iteratively and exchange information without revealing their private information. 
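To make this iterative process concrete, the following sketch runs the price and energy updates of (\ref{sell update dec}) and (\ref{buyer update dec}) for a single producer-consumer pair; the flexibility multipliers are assumed non-binding (and dropped), and all coefficients and step sizes are illustrative assumptions.

```python
# Hedged sketch of the sub-gradient negotiation loop for one
# producer-consumer pair.  The mu terms are assumed non-binding and
# omitted; all coefficients and step sizes are illustrative.

def clip_pos(x):
    """The [.]^+ projection onto the non-negative orthant."""
    return max(x, 0.0)

def negotiate(a_i=0.1, b_i=1.0,      # producer cost coefficients
              a_j=0.1, b_j=5.0,      # consumer utility coefficients
              gamma=0.2,             # grid service charge of this trade
              rho=0.05, zeta=0.5,    # step sizes
              tol=1e-6, max_iter=10_000):
    lam, e_ij, e_ji = 2.0, 0.0, 0.0  # initial price and energies
    for _ in range(max_iter):
        # price update driven by the supply-demand mismatch
        lam_new = clip_pos(lam - rho * (e_ij - e_ji))
        # optimal set points of producer and consumer at the new price
        e_ij_opt = (lam_new - gamma - b_i) / (2 * a_i)
        e_ji_opt = (b_j - lam_new - gamma) / (2 * a_j)
        # damped moves toward the set points
        e_ij = clip_pos(e_ij + zeta * (e_ij_opt - e_ij))
        e_ji = clip_pos(e_ji + zeta * (e_ji_opt - e_ji))
        if abs(lam_new - lam) < tol and abs(e_ij - e_ji) < tol:
            return lam_new, e_ij
        lam = lam_new
    return lam, e_ij

price, energy = negotiate()
# With these coefficients the loop settles near lambda = 3.0, e = 9.0,
# where marginal cost plus the charge equals marginal utility minus it.
```

Note that only prices and energy quantities cross between the two agents in each iteration; the private cost and utility coefficients stay local, matching the privacy goal stated above.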
The updates of the decision variables of agents are based on the Karush–Kuhn–Tucker (KKT) optimality conditions of the local problems, and can be derived from the first-order derivative of the relaxed problem as follows:\\ \(\forall i \in \mathcal{P}\) \begin{subequations}\label{sell update dec} \begin{equation}\label{sell price update} \lambda_{ij}^{k+1}=\left[\lambda_{ij}^{k}-\rho_{\lambda}^k(e_{ij}^{P,k}-e_{ji}^{P,k})\right]^{+} \end{equation} \begin{equation}\label{sell mu low update} \underline{\mu_i}^{k+1}=\left[\underline{\mu_i}^{k}+\rho_{\mu}^k(\underline{e_i}-e_i^k)\right]^{+} \end{equation} \begin{equation}\label{sell mu up update} \overline{\mu_i}^{k+1}=\left[\overline{\mu_i}^{k}+\rho_{\mu}^k(e_i^k-\overline{e_i})\right]^{+} \end{equation} \begin{equation}\label{sell energy update} e_{ij}^{P,k+1}= \left[e_{ij}^{P,k}+\zeta_{ij}^k(\tilde{e}_{ij}^{P,k+1}-e_i^k)\right]^{+} \end{equation} \begin{equation}\label{set point sell update} \tilde{e}_{ij}^{P,k+1}=\frac{\lambda_{ij}^{k+1}-\gamma_{ij}-\overline{\mu_i}^{k+1}+\underline{\mu_i}^{k+1}-b_i}{2 a_i} \end{equation} \end{subequations} \(\forall j \in \mathcal{C}\) \begin{subequations}\label{buyer update dec} \begin{equation}\label{buyer mu low update} \underline{\mu_j}^{k+1}=\left[\underline{\mu_j}^{k}+\rho_{\mu}^k(\underline{e_j}-e_j^k)\right]^{+} \end{equation} \begin{equation}\label{buyer mu up update} \overline{\mu_j}^{k+1}=\left[\overline{\mu_j}^{k}+\rho_{\mu}^k(e_j^k-\overline{e_j})\right]^{+} \end{equation} \begin{equation}\label{buyer power update} e_{ji}^{P,k+1}= \left[e_{ji}^{P,k}+\zeta_{ji}^k(\tilde{e}_{ji}^{P,k+1}-e_j^k)\right]^{+} \end{equation} \begin{equation}\label{buyer set point update} \tilde{e}_{ji}^{P,k+1}=\frac{b_j-\lambda_{ij}^{k+1}-\gamma_{ij}-\overline{\mu_j}^{k+1}+\underline{\mu_j}^{k+1}}{2 a_j} \end{equation} \end{subequations} where $k$ is the iteration index, and \(\tilde{e}_{ij}^{P}, \tilde{e}_{ji}^{P}\) are the optimal power set points of producer and consumer at the price 
\(\lambda_{ij}\). \(\zeta_{ij}, \zeta_{ji}\) are asymptotically proportional factors, \(\rho\) is a small tuning parameter, and \([\cdot]^{+}\) denotes \(\max\{\cdot,0\}\). The information exchange between producers and consumers during the decentralized optimization process is explained in Section \ref{sec:neg}. \section{Market Settlement Process}\label{sec:energy trading} In this section, we outline the details of the market settlement process for P2P trading. The proposed framework consists of four main phases, namely: \textit{(i) Advertisement:} to enable agents to advertise the energy that they want to trade, \textit{(ii) Prioritization:} to allow agents to prioritize their trading partners based on their preferences, \textit{(iii) Negotiation:} in which agents negotiate on the energy quantity and price of each transaction, and \textit{(iv) Energy trading:} in which the energy transfer and payment take place. These steps are summarized in Fig. \ref{fig:full algorithm} and discussed in detail in the rest of this section. \subsection{Anonymous proof of location} As shown in (\ref{service charge}), in the proposed framework the grid service charge is calculated based on the location of the participants, denoted by $\sigma$, which requires the involved parties to reveal their location. However, this potentially enables malicious nodes to track the activities of a particular user, which in turn compromises user privacy. Additionally, the distributed and anonymous nature of blockchain makes it challenging for the users to verify the location claimed by another node. To address these challenges, we propose an A-PoL algorithm that enables the users to prove their location while protecting their real identity, which in turn enhances the level of anonymity offered to the users. The proposed A-PoL algorithm involves a CoL that is issued by a smart meter in the network, as shown in Fig. \ref{fig:CoL}. 
The energy companies maintain a local record of the accurate location of the installed smart meters. During the installation process, the energy company deploys a key pair in each smart meter (step 1 in Fig. \ref{fig:CoL}) and records the $<PK,location>$ tuple in its local storage. The company serves as a certificate authority (CA) for the PKs deployed in smart meters. Although the CA is a trusted authority, relying on the PK provided by the CA may compromise the privacy of the users, as the company can build a virtual profile of each user and their energy trading (by observing the proof-of-location transactions). To address this challenge, we propose CoL. \par \begin{figure}[tb!] \centering \input{col} \caption{An overview of the proposed CoL.} \label{fig:CoL} \end{figure} CoL is a certificate received from a verifier that is an independent smart meter in the network. Assume smart meter \textit{A} is going to establish a CoL. Once deployed on site, \textit{A} explores the CA to find potential smart meters that can function as the verifier and selects one randomly. Assume smart meter \textit{B} is selected by \textit{A} to function as the verifier. Recall that we assume the smart meters are tamper-resistant, and thus \textit{A} can send its request to any meter listed in the CA. \textit{A} sends a CoL request transaction to \textit{B} that is structured as $<T\_ID, MTR, \sigma, PK, Sign>$, where \textit{T\_ID} is the unique identifier of the transaction, which essentially is the hash of the transaction content. \par \textit{A} populates the root hash of a Merkle tree, constructed by recursively hashing a number of PKs, in the \textit{MTR} field. The PKs in the leaves of the tree are later employed by \textit{A} to prove ownership of the CoL, which is discussed in greater detail later in this section. The number of PKs may vary based on the application. 
$\sigma$ is the noisy location of the smart meter, i.e., the location at a lower resolution, e.g., the street in which the meter is installed. This protects the privacy of the smart meter against deanonymization, as studied later in Section \ref{sec:security}. \textit{PK} is the PK of \textit{A} allocated by the CA, and \textit{Sign} is the corresponding signature, which proves that \textit{A} owns the private key corresponding to the PK. \par When the verifier, i.e., \textit{B}, receives the CoL request, it verifies that the transaction is generated by a genuine smart meter, which is done by querying the CA (step 3). To protect the privacy of the user, the CA does not reveal the actual location of the smart meter to \textit{B}; instead, it only confirms whether the PK is registered to a genuine smart meter. Once verified, the verifier creates a CoL that is \textit{sign(hash(MTR, $\sigma$))} and replies back to \textit{A} by sending a reply transaction structured as $<CoL, PK, Sign>$, where CoL is as outlined above, \textit{PK} is the PK of the verifier, i.e., \textit{B}, registered by the CA, and \textit{Sign} is the corresponding signature of the verifier. \par \textit{A} employs the received CoL to anonymously prove its location to the nodes in the overlay. To do so, \textit{A} appends $CoL_f^{A} = (CoL_A, PK_{ver}, Sign_{ver}, MTR_A, \sigma_A, PK_A, MTL_A, Sign_A)$, where the first three fields are extracted from the response of the verifier \textit{B}. $MTR_A$ and $\sigma_A$ are extracted from the CoL request sent to the verifier. \textit{$PK_A$} is a PK that was part of the leaves of the Merkle tree. $MTL_A$ is the set of Merkle tree leaves necessary to prove the existence of $PK_A$ in the MTR, and $Sign_A$ is the signature corresponding to $PK_A$. The last three fields ensure that only the owner of the certificate, who knows the PKs in the Merkle tree and the corresponding private key, can use the certificate. 
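The Merkle-tree mechanics behind the $MTR$ and $MTL$ fields can be sketched as follows; SHA-256 and the placeholder byte-string keys are our own assumptions, as the paper does not fix a hash function or key format.

```python
import hashlib

# Hedged sketch of the MTR (Merkle root) and MTL (membership proof)
# machinery.  SHA-256 and the placeholder "PK_*" byte strings are
# illustrative assumptions.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root (the MTR field) of a binary tree over hashed PKs."""
    level = [h(pk) for pk in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (the MTL field) proving leaves[index] is in the tree."""
    level, proof = [h(pk) for pk in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sib], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_membership(pk, proof, root):
    """First CoL verification step: does PK_A belong to MTR_A given MTL_A?"""
    node = h(pk)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

pks = [b"PK_0", b"PK_1", b"PK_2", b"PK_3"]   # hypothetical key material
root = merkle_root(pks)
assert verify_membership(b"PK_2", merkle_proof(pks, 2), root)
```

Revealing one leaf PK and its sibling path discloses nothing about the other PKs in the tree, which is what lets the certificate owner prove possession without linking its remaining keys.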
\par To verify the location of \textit{A}, a participating node in the blockchain, say \textit{C}, must first verify that $PK_A$ belongs to $MTR_A$ using $MTL_A$. Next, \textit{C} verifies that $Sign_A$ matches $PK_A$. The third step is to verify that $hash(MTR_A, \sigma_A) = CoL_A$. The final step is for \textit{C} to verify $PK_{ver}$ using the CA. This ensures that a genuine smart meter has signed the CoL. If all the above steps pass successfully, the location of \textit{A} is verified. \par Having discussed the details of the proposed A-PoL algorithm, we study the process of P2P energy trading in the rest of this section. The algorithms implemented by producer and consumer agents are represented in Algorithms \ref{producer lag} and \ref{consumer alg}, respectively, and are discussed in greater detail in the rest of this section. \subsection{Advertisement}\label{sub:sec:advertisement} The main aim of this phase is for the agents to advertise their energy to potential consumers/producers. In each market interval, agents participate in the forward market by submitting their offers and asks in the form of an \textit{advertisement transaction (AT)} that is structured as follows: \begin{equation} AT = (T\_ID, price/amount, \eta_i, CoL_f^{i}) \end{equation} where \textit{price/amount} can be either the price of the energy, i.e., $\lambda_i, \forall i\in \mathcal{P}$, if the AT is generated by a producer, or the amount of requested energy, i.e., $e_j, \forall j\in \mathcal{C}$, if the AT is generated by a consumer, and \(\eta\) is the reputation factor of the agent.\par In conventional blockchain-based frameworks, the negotiation transactions are stored in the blockchain, which provides high auditability. However, this increases the computational and storage resource consumption of the blockchain and limits the throughput, i.e., the total number of transactions that can be stored per second. 
These factors can limit blockchain scalability, while the smart grid comprises a broad range of devices that may produce transactions frequently. To address this challenge, in the proposed framework, the negotiation transactions are stored in a public database managed by the grid operator that is referred to as the \textit{Advertisement Database} (AD). The write access of the AD is restricted to the grid operator, and the other nodes in the overlay have read-only permission. The parties involved in an AT, i.e., energy producers and consumers, may store the transactions locally to be employed as a reference in case of a dispute between the parties. The final price and the amount of energy agreed by the participants are later defined during the negotiation, which limits future references to the AT. Thus, we store ATs in the AD, which potentially reduces the packet overhead and blockchain memory footprint. \par \subsection{Prioritization} After the \textit{advertisement} step, agents explore the available ATs in the AD for the negotiation. One-on-one negotiation can increase the delay associated with finalizing the trade, as either side of the trade may have higher welfare in trading with another party in the network. Also, negotiation with every agent in the market increases the computation and communication overheads, which potentially leads to low scalability \cite{khorasany2019enhancing}. Thus, agents need to prioritize their trading partners based on their preferences and only negotiate with a given number of agents. As stated in Section \ref{sub:sec:agent-modeling}, agents have to pay a grid service charge for each transaction, as defined in (\ref{service charge}). This charge is directly associated with the distance between producer \textit{i} and consumer \textit{j}, and impacts the welfare of the agents. Accordingly, agents are incentivized to trade with trading partners located in close vicinity. 
This reduces the load on the transmission lines and thus reduces the cost of managing the physical layer. On the other hand, agents may prefer to negotiate with the trading partners with higher reputation factors, indicating their past performance in fulfilling their commitments. Thus, agents need to prioritize their trading partners based on their preferences to limit the partner search space. These preferences include the reputation factor and distance, and agents may weight them differently. Hence, we define a priority index for each possible transaction between producers and consumers. This index for the offer from consumer $j$ received by producer $i$, and the offer from producer $i$ received by consumer $j$, is defined using (\ref{sell pri index}) and (\ref{buy pri index}), respectively. \begin{equation}\label{sell pri index} \Upsilon_{ij}=\alpha_i \eta_j +\beta_i (1-\cfrac{|\sigma_i -\sigma_j|}{D_{ij}})\\ \end{equation} \begin{equation}\label{buy pri index} \Upsilon_{ji}=\alpha_j \eta_i +\beta_j (1-\cfrac{|\sigma_i -\sigma_j|}{D_{ji}})\\ \end{equation} where \(\alpha\) and \(\beta\) are the weights that an agent places on the reputation factor and proximity of other agents, respectively, such that \(\alpha+\beta=1\), \(D_{ij}=\underset{\sigma_j}{\max} |\sigma_i-\sigma_j|\), \(D_{ji}=\underset{\sigma_i}{\max} |\sigma_i-\sigma_j|\). The second term in (\ref{sell pri index}) and (\ref{buy pri index}) denotes the proximity of each trading partner to the agent, such that this value is zero for the furthest trading partner of each agent. After calculating the priority indices, each agent divides its trading partners into a set of $N$ groups and then starts to negotiate with the agents from the group with the highest priority, proceeding toward the group with the lowest priority. 
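The priority indices in (\ref{sell pri index})--(\ref{buy pri index}), together with the grouping defined next, can be sketched as follows; the weights, reputation values, and distances are illustrative assumptions.

```python
import math

# Sketch of the prioritization step: a producer scores each consumer by
# reputation and proximity, then buckets partners into N priority groups.
# Reputation values, distances, and weights below are illustrative.

def priority_index(eta, dist, d_max, alpha, beta):
    """Upsilon = alpha * eta + beta * (1 - |sigma_i - sigma_j| / D)."""
    return alpha * eta + beta * (1 - dist / d_max)

def group_of(upsilon, n_groups):
    """Group n holds Upsilon in [(N-n)/N, (N-n+1)/N]; n = 1 is top priority."""
    return max(1, n_groups - math.floor(upsilon * n_groups))

alpha, beta, N = 0.4, 0.6, 4
# (reputation eta, electrical distance to the producer) per consumer:
consumers = {"c1": (0.9, 2.0), "c2": (0.5, 10.0), "c3": (0.2, 5.0)}
d_max = max(d for _, d in consumers.values())
scores = {c: priority_index(eta, d, d_max, alpha, beta)
          for c, (eta, d) in consumers.items()}
groups = {c: group_of(u, N) for c, u in scores.items()}
# c1 (reputable and nearby) lands in group 1; c2 (far away) in group 4.
```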
Each group of potential consumers/producers for producer $i$/consumer $j$ can be formed as \begin{equation}\label{prioritization sell} \Omega_i^n=\{j \in \mathcal{C}| (N-n)/N \leq \Upsilon_{ij} \leq (N-n+1)/N \} , \forall n \in \mathcal{N} \end{equation} \begin{equation}\label{prioritization buy} \Omega_j^n=\{i \in \mathcal{P}| (N-n)/N \leq \Upsilon_{ji} \leq (N-n+1)/N \} , \forall n \in \mathcal{N} \end{equation} in which, for consumer $j$, producers in group $\Omega_j^n$ have higher priority than producers in $\Omega_j^{n+1}$. Similarly, for producer $i$, consumers in group $\Omega_i^n$ have higher priority than consumers in $\Omega_i^{n+1}$. \subsection{Negotiation}\label{sec:neg} After \textit{prioritization}, each consumer starts negotiating with the producer agents in $\Omega_j^1$. The first step is for consumer \textit{A} to request the grid service charges from the grid operator. \textit{A} sends the list of agents in $\Omega_j^1$ to the grid operator, who calculates the grid service charges using (\ref{service charge}) and sends the response back to \textit{A}. Once \textit{A} receives the grid service charges, it starts the negotiation with the agents in $\Omega_j^1$ by sending an \textit{Energy Negotiation} (EN) transaction that is structured as $<T\_ID, Amount, Price, PK\textsuperscript{D}, Sign\textsuperscript{D}, PK, Sign, Agreement_P, Agreement_C>$, where \textit{Amount} identifies the total amount of energy demanded by \textit{A} and \textit{Price} is the price that the consumer is willing to pay. \textit{PK\textsuperscript{D}} and \textit{Sign\textsuperscript{D}} represent the PK of the destination agent and its corresponding signature. The destination can be either the producer, when \textit{A} generates the transaction, or \textit{A}, when a producer is the generator of the transaction. This potentially enables routing the transaction to the destination using the routing algorithm proposed in \cite{dorri2019spb}. 
\textit{PK} and \textit{Sign} represent the PK and signature of the transaction generator. Finally, \textit{$Agreement_P$} and \textit{$Agreement_C$} are flags that indicate whether the energy producer and consumer, respectively, agree with all conditions of the trade. The generator of EN signs the hash of the transaction content, which ensures the integrity of the transaction content. After populating EN, \textit{A} sends the transaction to the energy producers in $\Omega_j^1$. \par The energy producer may receive several EN transactions from consumers. The producer only responds to those that are in the set $\Omega_i^1$. For each EN received from consumers in this set, the producer updates the price using (\ref{sell price update}). When the response from the producer is received by the consumers, they update their energy using (\ref{buyer power update}) and respond to the producer again. This process continues until both sides of the trade agree on the trade conditions and thus set $Agreement_P$ and $Agreement_C$ to '1'. An EN transaction is considered valid and stored in the blockchain only when both the energy producer and the consumer sign the transaction and $Agreement_P$ and $Agreement_C$ are set to '1'. This ensures that only the last transaction, which contains the final trade conditions, is stored in the blockchain, which in turn increases the blockchain throughput and reduces the delay of the negotiation. \begin{algorithm}[tb!]
\caption{Producer's Algorithm}\label{producer lag} \footnotesize{ \begin{algorithmic}[1] \State Submit offer AT \Comment{\textit{Advertisement}} \State Explore AD to find potential trading partners \State Divide consumers into groups $\Omega_i^1, ..., \Omega_i^N$ using (\ref{prioritization sell}) \Comment{\textit{Prioritization}} \State Set $n\gets 1$ \State Receive EN transactions from consumers \Comment{\textit{Negotiation}} \While{$|\lambda_{ij}^{P,k+1}-\lambda_{ij}^{P,k}| \geq \epsilon$} \For {$j \in \Omega_i^n$} \State Receive $\gamma_{ij}$ from grid operator \State Receive $e_{ji}^{k}$ from consumer \State Calculate $\lambda_{ij}^{P,k+1}$ using (\ref{sell price update}) \State Update $\overline{\mu}_i^{k+1}$ and $\underline{\mu}_i^{k+1}$ using (\ref{sell mu low update}) and (\ref{sell mu up update}) \State Calculate $e_{ij}^{P,k+1}$ using (\ref{sell energy update}) \State Broadcast $\lambda_{ij}^{P,k+1}$ to consumers \EndFor \EndWhile \State Check if more energy is available \State Set $n\gets n+1$ \State Repeat Negotiation with new consumers \State Receive LP from consumers \Comment{\textit{Energy trading}} \State Inject agreed energy \State Sign EI \end{algorithmic} } \end{algorithm} \begin{algorithm}[tb!]
\caption{Consumer's Algorithm}\label{consumer alg} \footnotesize{ \begin{algorithmic}[1] \State Submit ask AT \Comment{\textit{Advertisement}} \State Explore AD to find potential trading partners \State Divide producers into groups $\Omega_j^1, ..., \Omega_j^N$ using (\ref{prioritization buy}) \Comment{\textit{Prioritization}} \State Set $n\gets 1$ \State Send EN transactions to producers \Comment{\textit{Negotiation}} \While{$|e_j^{P,k+1}-e_j^{P,k}| \geq \epsilon$} \For {$i \in \Omega_j^n$} \State Receive $\gamma_{ij}$ from grid operator \State Receive $\lambda_{ij}^{P,k}$ from producer \State Update $\overline{\mu}_j^{k+1}$ and $\underline{\mu}_j^{k+1}$ using (\ref{buyer mu low update}) and (\ref{buyer mu up update}) \State Calculate $e_{ji}^{P,k+1}$ using (\ref{buyer power update}) \State Broadcast $e_{ji}^{P,k+1}$ to producers \EndFor \EndWhile \State Check if more energy is needed \State Set $n\gets n+1$ \State Repeat Negotiation with new producers \State Send LP to producers \Comment{\textit{Energy trading}} \State Sign EI \end{algorithmic} } \end{algorithm} \subsection{Energy trading} In this section, we discuss the energy trading process. Once agents agree on the trade conditions during the negotiation step, the consumer generates a \textit{Late Payment} (LP) transaction that pays the energy price to the producer. Conventional energy trading frameworks rely on a TTP to oversee the trade and ensure that both sides of the trade commit to their obligations, which in turn reduces the privacy of the users. To address this challenge, our framework relies on atomic meta-transactions. In the latter, two transactions are considered valid if and only if both are generated within a specified time frame. Any incompatible transaction is considered invalid and thus is not stored in the blockchain \cite{dorri2019spb}.
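The validity rule for EN transactions, namely that both agreement flags are set and both parties have signed, can be sketched as below. The field names and the toy signature scheme are illustrative assumptions, not the framework's actual implementation.

```python
def en_is_valid(en, verify_sig):
    """An EN is stored on-chain only if both parties set their agreement
    flag to 1 and both signatures verify against the trade content."""
    content = (en["T_ID"], en["Amount"], en["Price"])
    return (en["Agreement_P"] == 1 and en["Agreement_C"] == 1
            and verify_sig(en["PK_P"], en["Sign_P"], content)
            and verify_sig(en["PK_C"], en["Sign_C"], content))

# Toy signature scheme for illustration only: sign = hash of (key, content).
toy_sign = lambda pk, content: hash((pk, content))
toy_verify = lambda pk, sig, content: sig == hash((pk, content))

content = ("tx1", 5, 12)
en = {"T_ID": "tx1", "Amount": 5, "Price": 12,
      "PK_P": "pkP", "Sign_P": toy_sign("pkP", content),
      "PK_C": "pkC", "Sign_C": toy_sign("pkC", content),
      "Agreement_P": 1, "Agreement_C": 1}
print(en_is_valid(en, toy_verify))  # True
```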
LP is an atomic meta-transaction; thus, the energy price is not transferred to the producer's account unless LP is coupled with another transaction, discussed in the next paragraph. LP is structured as $<T\_ID, Price, Input, Output, EN\_Ref, Expiry\_Time, \\ Sign>$, where \textit{Price} is the price to be paid to the energy producer. \textit{Input} is the address of an unspent transaction that has enough balance to pay the transaction price, and \textit{Output} is the address of the energy producer as in the last EN transaction. \textit{EN\_Ref} is the address of the EN transaction stored in the public blockchain. \textit{Expiry\_Time} represents the time period within which the second transaction corresponding to the current LP must be generated; otherwise, LP is discarded. \textit{Sign} is the signature of the transaction generator, which must correspond to the PK used in the EN transaction. The consumer then broadcasts the LP transaction.\par The energy producer starts injecting energy into the grid when it receives the LP transaction. Once the total amount of agreed energy is injected into the grid, the smart meter of the producer generates an \textit{Energy Injection} (EI) transaction, which is a multisig transaction structured as $<Amount, LP\_ID, PK\_P, Sign\_P, PK\_C, Sign\_C>$, where \textit{Amount} is the amount of energy injected into the grid by the producer. \textit{LP\_ID} is the \textit{T\_ID} of the corresponding LP, which is used for verification of the trade as outlined in the next paragraph. EI requires two-out-of-two signatures to be considered a valid transaction: the energy producer signature, populated in \textit{PK\_P} and \textit{Sign\_P}, and the energy consumer signature, populated in \textit{PK\_C} and \textit{Sign\_C}.\par Once EI is broadcast to the network, the participating nodes in the blockchain start verifying the energy trade. First, the participants must locate the associated LP and EN.
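The atomic coupling of LP with its follow-up EI can be sketched as a simple predicate; a minimal illustration with assumed field names:

```python
def lp_settles(lp, ei, now):
    """An LP pays out only when a matching EI referencing it exists and
    arrives before the LP's expiry time (atomic meta-transaction coupling).
    Field names follow the transaction structures described above."""
    return (ei is not None
            and ei["LP_ID"] == lp["T_ID"]
            and now <= lp["Expiry_Time"])

lp = {"T_ID": "lp7", "Price": 12, "Expiry_Time": 100}
ei = {"LP_ID": "lp7", "Amount": 5}
print(lp_settles(lp, ei, now=90), lp_settles(lp, ei, now=120))  # True False
```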
Recall that EI contains the \textit{LP\_ID} and LP contains \textit{EN\_Ref}, which is the identifier of the EN transaction. The verifiers first match the signatures and PKs in the transactions. The next step is for the verifiers to validate that the amount and price agreed in the transactions match. Once all the above steps are successfully validated, the EI and LP transactions are stored in the blockchain, which triggers the payment of the energy price to the energy producer. If the prices in these transactions differ, the verifiers discard the transactions. \par If an inconsistency is detected in the amount of injected energy in EI, the verifiers call a \textit{Dispute Resolution} (DR) smart contract. The DR smart contract contains rules to manage the situation where the energy producer has failed to transfer the promised amount of energy to the consumer; for example, the energy produced by the solar panel of an energy producer may be less than the estimated production, which potentially impacts the traded energy. Based on the amount of transferred energy identified in EI, DR calculates a new energy price and generates a \textit{Price Update} (PU) transaction requesting the energy consumer to generate a new LP with exactly the same conditions as the previous one while updating the price. PU is structured as $<LP\_ID, Old\_Price, New\_Price>$. The new LP is broadcast to the network and is stored in the blockchain with the corresponding EI. \par Recall that in the proposed framework, we defined the reputation factor that impacts the decision making of the nodes. The reputation is assigned by the DR smart contract based on the commitments of an energy producer. In the case of the above example, the DR will reduce the reputation of the node and inform all participants. In this study, we consider negative reputation only, i.e., the case in which a node has misbehaved in the network.
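The verification chain and the DR price adjustment can be sketched as follows. The pro-rata pricing rule in `dr_new_price` is our own assumption, since the exact DR formula is not given here.

```python
def verify_trade(ei, lp, en):
    """Validator sketch: follow the EI -> LP -> EN references, check that
    the agreed price matches, and flag a dispute when the injected amount
    differs from the agreed amount. Return values are illustrative."""
    if ei["LP_ID"] != lp["T_ID"] or lp["EN_Ref"] != en["T_ID"]:
        return "discard"
    if lp["Price"] != en["Price"]:
        return "discard"
    if ei["Amount"] != en["Amount"]:
        return "dispute"   # triggers the DR smart contract
    return "store"

def dr_new_price(old_price, agreed_amount, injected_amount):
    """Assumed DR rule: pay the producer pro rata to the energy
    actually injected into the grid."""
    return old_price * injected_amount / agreed_amount

en = {"T_ID": "en1", "Amount": 5, "Price": 60}
lp = {"T_ID": "lp1", "EN_Ref": "en1", "Price": 60}
print(verify_trade({"LP_ID": "lp1", "Amount": 5}, lp, en))  # store
print(verify_trade({"LP_ID": "lp1", "Amount": 4}, lp, en))  # dispute
print(dr_new_price(60, 5, 4))  # 48.0
```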
\section{Case Studies}\label{sec: case study} \begin{figure} \input{casestudy} \caption{33-Bus test system.}\label{fig:test system} \end{figure} \begin{table}[tb!] \centering \caption{Parameter Setup.}\label{tab:parameters} \begin{tabular}{cccc} \hline \multicolumn{4}{c}{\textbf{Key parameters}} \\ \hline Parameter & \multicolumn{1}{c|}{Value} & Parameter & Value \\ \hline $P$ & \multicolumn{1}{c|}{14} & $C$ & 18 \\ $\rho_\lambda$ & \multicolumn{1}{c|}{0.01} & $\rho_\mu$ & 0.001 \\ $\underline{\lambda}^G$& \multicolumn{1}{c|}{5 \cent/kWh} &$\overline{\lambda}^G$ & 25 \cent/kWh \\ $N$& \multicolumn{1}{c|}{2} & $\omega$ & 2 \cent/kWh/km \\ \hline \multicolumn{2}{c|}{\textbf{Producers' parameters}} & \multicolumn{2}{c}{\textbf{Consumers' parameters}} \\ \hline $a_i$ & \multicolumn{1}{c|}{(0.5-1] \cent/kWh$^2$} & $a_j$ & (0.5-10] \cent/kWh$^2$ \\ $b_i$& \multicolumn{1}{c|}{[5-10] \cent/kWh} & $b_j$ & [10-20] \cent/kWh \\ $\underline{e}_i$& \multicolumn{1}{c|}{[0-5] kWh} & $\underline{e}_j$ & [1-4] kWh\\ $\overline{e}_i$& \multicolumn{1}{c|}{[5-10] kWh} & $\overline{e}_j$ & [6-10] kWh \\ $\eta_i,\alpha_i,\beta_i$& \multicolumn{1}{c|}{[0-1]} & $\eta_j,\alpha_j,\beta_j$ & [0-1] \\ \hline \end{tabular} \end{table} In this section, simulation case studies are provided to verify the operation of the proposed framework. As shown in Fig. \ref{fig:test system}, the considered test system is the IEEE 33-bus distribution system with 16 producers and 16 consumers. Table \ref{tab:parameters} summarizes the key parameters and the ranges of values of the producers' and consumers' parameters. Fig. \ref{fig: power and price res} illustrates the results of P2P trading in the test system. The traded energy and price in different transactions take various values based on the agents' preferences. Agents tend to trade energy with their closest neighboring agents to pay a lower grid service charge. For example, the consumer at bus 1 buys energy from the producer at bus 18.
However, if the offer/ask from the agent at the nearest node is not available, or does not satisfy the requirements of the agents, they have to trade with other neighboring agents. For instance, while the nearest agents to the agent at bus 5 are those at buses 4 and 6, this agent trades energy with the producers at buses 26 and 27. Since the agents at buses 4 and 6 prefer to trade with other neighboring nodes (the agents at buses 3 and 7, respectively), their offers are not available to the agent at bus 5. It can be seen that the agents at buses 16 and 17 do not buy any energy in the market. These agents have lower utility function parameters than their neighboring agents, which means that their willingness to pay for energy is lower than that of the agent at bus 15, and hence, the producer at bus 14 prefers to trade with the agent at bus 15. \begin{figure}[tb!] \centering \includegraphics[scale=0.75]{figures/tradedpowernew.pdf} \caption{Transactions between producers and consumers; a) traded energy, b) energy price.}\label{fig: power and price res} \end{figure} To investigate the impact of considering the grid service charge on the number of transactions ($n_T$), we implemented the energy trading algorithm for different values of $\omega$. The results are reported in Fig. \ref{fig:line flow}, where each marker indicates a transaction between a producer and a consumer, and the total number of transactions in each case is given in parentheses. The case with \(\omega=0\) means that there is no limit on the distance between agents, and they can trade with any number of agents. Therefore, the number of transactions in this case is significantly higher. Increasing the value of $\omega$ reduces the number of transactions. The welfare of agents depends on the grid service charge that they pay (see (\ref{pro welf}) and (\ref{con welf})), and hence, an increase in $\omega$ reduces their welfare as they have to trade less energy and pay more for utilizing the grid.
The negotiation among agents is an iterative process, and the time required for agents to reach agreement depends on several factors, including the number of agents, the number of iterations required for convergence in Algorithms \ref{producer lag} and \ref{consumer alg}, the computation time required for agents to solve (\ref{sell update dec}) and (\ref{buyer update dec}) in each iteration, and the communication time. The results of implementing the market settlement algorithm with and without the \textit{prioritization} step are compared in Table \ref{tab:prioritization}. The \textit{prioritization} step reduces the number of negotiating agents, and hence, reduces the number of communications in each iteration. On the other hand, agents need less time to solve (\ref{sell update dec}) and (\ref{buyer update dec}), as they have fewer decision variables after \textit{prioritization}. Therefore, applying \textit{prioritization} reduces the negotiation time among agents by nearly 45\%. \begin{figure} \centering \includegraphics[scale=0.85]{figures/lineflow.pdf} \caption{Impact of considering grid service charge on the number of transactions.}\label{fig:line flow} \end{figure} \begin{table}[tb!] \centering \caption{Impact of Prioritization.}\label{tab:prioritization} \setlength{\tabcolsep}{5pt} \begin{tabular}{lcc} \cline{2-3} \\[-0.7em] & \textbf{w prioritization} & \textbf{w/o prioritization} \\ \\[-0.7em] \hline No. of decision variables & \multirow{2}{*}{20, 6} & \multirow{2}{*}{38, 16} \\ \\[-1em] (producer, consumer) & & \\ \\[-1em] No. of communications & \multirow{2}{*}{63} & \multirow{2}{*}{252} \\ \\[-1em] (in each iteration) & & \\ \\[-1em] No. of iterations for convergence & 163 & 206 \\ \\[-1em] \hline Negotiation time (s) & 1.04 & 1.88 \\ \\[-1em] \hline \end{tabular} \end{table} \begin{table}[tb!]
\centering \caption{Comparative results of P2P market.} \label{tab:p2p results} \setlength{\tabcolsep}{5pt} \begin{tabular}{lcc} \cline{2-3} \\[-0.7em] & \textbf{P2P} & \textbf{No P2P} \\ \\[-0.7em] \hline Total imported energy from grid (kWh) [$\sum_j{e_j^G}$] & 22.31 & 119 \\ \\[-1em] Total exported energy to grid (kWh) [$\sum_i{e_i^G}$] & 8.46 & 105 \\ \\[-1em] Total welfare of consumers ($\cent$) [$\sum_j{W_j}$] & 62.73 & -4143.04 \\ \\[-1em] Total welfare of producers ($\cent$) [$\sum_i{W_i}$] & 242.64 & -302.03 \\ \\[-1em] Total paid grid service charge ($\cent$) [$\sum_j \sum_i {e_{ij}^p \gamma_{ij}}$] & 50.44 & 0 \\ \hline \end{tabular} \end{table} In order to demonstrate the efficacy of the P2P market, the results of this market are compared with the case in which producers and consumers only trade with the grid. Comparative results are reported in Table \ref{tab:p2p results}. As can be inferred from the results, the P2P market reduces the energy imported from and exported to the grid by agents, meaning that they trade more with other P2P agents. Also, since the P2P market price is more beneficial for agents (see (\ref{price lim})), they can reach a higher value of welfare in the P2P market, even though they have to pay a grid service charge to the grid operator. As stated in Section \ref{sub:sec:advertisement}, in the proposed framework, the ATs are stored off-chain in AD. Here, we study the impact of using AD by evaluating the blockchain size and the number of consensus rounds, i.e., how many times the validators must run the consensus algorithm. The blockchain size shows the amount of storage space saved by employing AD, while the number of consensus rounds evaluates the amount of computational overhead saved by running fewer consensus rounds. We employed the structure and configuration of the IEEE 33-bus distribution system to implement a distributed network using the Java programming language on a Raspberry Pi 2. It is assumed that each block holds 10 transactions.
The process involved in the consensus algorithm is abstracted out as it does not impact the functionality of the proposed method. Ten market intervals are implemented, during which each energy producer generates an AT. To benchmark the results, a baseline method is considered where all ATs are stored in the blockchain. To focus on the impact of AD, we disregard the rest of the functions and assume ATs are the only transactions generated during each market interval. Based on the implementation results, the size of each AT is 1776 B. After 10 epochs, the baseline blockchain includes 16 blocks with a cumulative size of 314 KB. Thus, each node must allocate 314 KB of storage space to store the blockchain. Our solution off-loads this overhead to a central trusted node that manages the AD; thus, there is no memory overhead on the participating nodes in the blockchain. Assume $\nu$ represents the overhead associated with appending a new block, which includes computational, packet and memory overhead. The proposed framework incurs no overhead on the validators during the \textit{advertisement} process, while the overhead is 16$\nu$ in the conventional methods. We next evaluate the processing overhead associated with CoL. We proposed a CoL that enables the users to anonymously verify the location of the parties involved in energy trading. CoL enhances the anonymity level of users and thus protects user privacy. On the flip side, it increases the processing overhead on the users to generate and verify CoL. To evaluate the incurred overhead, we implemented our framework using the Java programming language on a Raspberry Pi 2, which represents low-resource devices. We measured the processing time for generating the CoL request, which involves generating a set of keys and forming the Merkle tree, and for verifying the CoL, which involves verifying the existence of the PK in the Merkle tree and validating the corresponding signatures.
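A minimal sketch of the Merkle-tree operations behind CoL formation and verification follows; the choice of SHA-256 and the duplicate-last-node padding for odd levels are our own assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root (MTR) of a Merkle tree over the meter's public keys."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels (assumed convention)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path letting the prover show one PK is under the MTR."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_membership(leaf, proof, root):
    """Recompute the root from a revealed PK and its sibling path."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

pks = [b"pk0", b"pk1", b"pk2", b"pk3"]
mtr = merkle_root(pks)
print(verify_membership(b"pk2", merkle_proof(pks, 2), mtr))  # True
```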
The implementation results are shown in Table \ref{tab:COL-performance}. The verification of the CoL involves verifying two signatures, which takes longer than generating the CoL. In addition to the processing overhead, CoL increases the size of the transactions. Table \ref{tab:COL-packet} compares the size of the transactions and shows that CoL only affects the AT. It nearly doubles the size of AT, but this does not affect the size of the blockchain as ATs are stored off-chain only. All other transactions are unaffected by CoL. \par \section{Security and Privacy Analysis}\label{sec:security} In this section, we analyze the security and privacy of the proposed framework. We first outline the threat model and then discuss possible attacks and how to protect against them.\par \textbf{\textit{Threat Model:}} We assume that the adversary (or cooperative adversaries) can sniff the communications, discard transactions, generate fake transactions, and pretend to be another node in the network. The adversary may attempt to deanonymize a user by classifying blockchain transactions and monitoring real-time communications in the blockchain. We assume standard secure encryption algorithms are in place, which cannot be compromised by the adversary. We assume smart meters are tamper resistant, and thus the end users cannot modify the transactions generated by the meters. \par \begin{table}[tb!] \centering \setlength{\tabcolsep}{5pt} \caption{CoL processing time.}\label{tab:COL-performance} \begin{tabular}{ccc} \hline & CoL formation & CoL verification \\\hline Processing time (ms) & 663.2 & 1795 \\\hline \end{tabular} \end{table} \begin{table}[tb!]
\centering \setlength{\tabcolsep}{5pt} \caption{Comparison of transaction sizes.}\label{tab:COL-packet} \begin{tabular}{ccccc} \hline & AT & EN & LP & EI \\\hline Including CoL (Bytes) & 2193 & 1928 & 1056 & 1912 \\\hline Excluding CoL (Bytes) & 1041 & 1928 & 1056 & 1912 \\\hline \end{tabular} \end{table} \subsection{Security} In the following, we discuss possible attacks and how the proposed framework protects against them. \par \textit{CoL Replay Attack:} In this attack, a malicious node attempts to employ the CoL of another node to generate transactions. The node that employs a CoL is required to sign the corresponding transaction with the private key corresponding to a PK that exists in the MTR, which is only known to the CoL generator. Thus, it is impossible for a malicious node to utilize the CoL of another node. \par \textit{Fake CoL:} In this attack, a malicious node pretends to be a genuine smart meter and generates a fake CoL that can later be used in its energy trades. The CoL must be signed only by a genuine smart meter, and the CA validates the PK of the verifier. In the case of this attack, the CA will not validate the PK, and thus the attack can be detected. \par \textit{Double selling:} In this attack, a malicious energy producer attempts to sell the same amount of energy to different consumers. Recall from Section \ref{sec:energy trading} that an energy trade involves three main transactions, which are EN, LP, and EI. Once the agreed energy is injected into the grid, the smart meter of the energy producer generates an EI transaction that triggers the payment of the energy price to the producer. The smart meter generates only one EI that includes a reference to the corresponding LP, and LP includes a reference to the corresponding EN. Thus, it is impossible for the energy producer to sell the same energy to multiple nodes. \par An energy producer may attempt to inject less energy than the agreed amount and claim the full price.
The smart meter of the producer will only generate the EI if the full amount of agreed energy is injected into the grid. If the energy producer injects only part of the energy and the expiry time approaches, the smart meter will generate an EI reflecting the amount that has been injected into the grid. In this case, the DR smart contract is called, which adjusts the price of the energy and ensures the producer is only paid for the amount of energy injected into the grid. \par \textit{Reputation Modification:} In this attack, a malicious node attempts to improve its own reputation or reduce the reputation of another node in the network. Recall that blockchain is an immutable ledger, which makes it impossible to modify or remove previously stored transactions, and thus impossible for the attacker to modify its reputation. To reduce the reputation of another node, the malicious node would have to modify the code of the smart contract, which is impossible due to the immutability of the blockchain. The DR smart contract is the only entity that can reduce the reputation of a node. All participants know the address of the valid DR contract. When participating nodes receive a reputation reduction from a contract, they first verify whether the contract address matches the genuine DR smart contract. If so, they accept the new reputation. Otherwise, they discard the transaction. \subsection{Privacy} In the following, we analyze the proposed framework from the privacy perspective. Recall from Section \ref{sec:energy trading} that the grid operator charges a grid service charge per transaction that depends on the distance between the energy consumer and producer. Thus, the consumer and producer must prove their location; however, this may compromise their privacy as malicious nodes can classify the blockchain transactions to deanonymize the user. To address this challenge, we proposed A-PoL, which enables the participants in the blockchain to verify the location of an anonymous smart meter using a CoL.
Assume node \textit{A} is using A-PoL. The privacy of \textit{A} can be studied from the perspective of the following entities: i) CA: \textit{A} uses the PK populated by the CA only to prove its identity to the verifier. The CoL employed by \textit{A} includes the PK of the verifier and not that of \textit{A}. Given that the verifier is selected randomly by \textit{A} and there is no link between \textit{A} and the verifier, the CA is unable to identify the transactions generated by \textit{A}; ii) verifier: \textit{A} only sends the MTR to the verifier, which hides the actual PKs of \textit{A} from the verifier. \textit{A} reveals the PKs in the Merkle tree to prove ownership of the CoL. A group of smart meters may choose to create a single MTR, which further protects their privacy; and iii) network participants: the network participants only receive the CoL, which contains the PK of the verifier and the MTR. As outlined earlier, there is no link between the verifier and \textit{A}; thus, knowledge of the identity of the verifier does not impact the privacy of \textit{A}. The Merkle tree includes a number of PKs that are employed by \textit{A} (or other involved smart meters) to generate transactions; thus, \textit{A} may generate multiple transactions with the same PK. This potentially reduces the anonymity level of the user, as malicious nodes may attempt to deanonymize a user by classifying their transactions. The anonymity level of \textit{A} largely depends on the number of PKs employed in the Merkle tree. A large number of PKs incurs overhead on \textit{A} to manage the keys. Thus, there is a trade-off between the number of keys in the Merkle tree and user anonymity. \par Recall from Section \ref{sub:sec:market-settlement} that the energy producer and consumer employ a cost/utility function, as shown in (\ref{cost-func-producer}) and (\ref{cost-function-consumer}), which represents their willingness to pay or accept an energy price based on their preferences and concerns.
These functions depend on $a_i$, $b_i$, and $c_i$, and thus it is critical for the producers and consumers to keep these values private. In the proposed framework, the market settlement does not require the nodes to reveal $a_i$, $b_i$ and $c_i$, which in turn enhances the privacy of the users. \section{Conclusion and Future Works}\label{sec:conclusion} In this paper, we propose a blockchain-enabled P2P energy market, which provides a secure and privacy-preserving environment for energy trading between producers and consumers. A decentralized market settlement process is designed, which allows agents to trade energy without revealing their private information. The grid service charge, calculated based on the distance between producer and consumer, is used to incentivize agents to trade energy locally and to reduce the possibility of overloading electricity grid lines.\par To reduce the blockchain memory footprint, we propose AD, which stores the energy advertisements and is maintained by the grid operator. A \textit{prioritization} step is implemented to enable agents to select their trading partners based on their location and reputation factors. In order to allow agents to prove their location without revealing their real identity, an A-PoL algorithm is proposed using CoL issued by smart meters. Simulation results on the IEEE 33-bus test system confirm that the proposed framework improves the welfare of agents through P2P trading, while their privacy is protected. Furthermore, employing AD to store ATs and limiting the number of trading partners through \textit{prioritization} decrease the system overheads. Future work is needed to relax the tamper-resistance assumption considered for smart meters. Relaxing this assumption complicates the trust issue, as the smart meters may generate fake transactions. As another research direction, the impact of the mobility of smart meters on A-PoL can be studied.
In such cases, the CA must ensure that the location of a meter is as claimed before granting a certificate. It is also critical for the CA to be able to revoke granted certificates, as smart meters may change their location. Another challenge for future work is to explore ways of decentralizing the AD without increasing the blockchain memory footprint, to achieve an even more decentralized energy marketplace. \bibliographystyle{IEEEtran}
\newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{F}_p}{\mathbb{F}_p} \newcommand{\mathbb{Z}_p}{\mathbb{Z}_
p} \newcommand{\mathfrak{p}}{\mathfrak{p}} \DeclareMathOperator{\rk}{rk} \newcommand{\rk_p}{\rk_p} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\operatorname{H}^{*}}{\operatorname{H}^{*}} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Image}{Im} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\res}{res} \DeclareMathOperator{\Ass}{Ass} \DeclareMathOperator{\Ext}{Ext} \DeclareMathOperator{\YExt}{YExt} \DeclareMathOperator{\XExt}{XExt} \DeclareMathOperator{\depth}{depth} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \DeclarePairedDelimiter{\gen}{\langle}{\rangle} \newcommand{\cong}{\cong} \newcommand{\unlhd}{\unlhd} \newcommand{\overbar}[1]{\mkern 1.5mu\overline{\mkern-1.5mu#1\mkern-1.5mu}\mkern 1.5mu} \theoremstyle{plain} \newtheorem{MainTheorem}{Theorem} \renewcommand\theMainTheorem{\Alph{MainTheorem}} \newtheorem*{Notation*}{Notation} \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}[lemma]{Proposition} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{conjecture}[lemma]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[lemma]{Definition} \newtheorem{remark}[lemma]{Remark} \newtheorem{remarks}[lemma]{Remarks} \newtheorem{example}[lemma]{Example} \newtheorem{examples}[lemma]{Examples} \title{A family of finite $p$-groups satisfying Carlson's depth conjecture} \author[O.\ Garaialde Oca\~na]{Oihana Garaialde Oca\~na} \address{Matematika Saila, Euskal Herriko Unibertsitatearen Zientzia eta Teknologia Fakultatea, posta-kutxa 644, 48080 Bilbo, Spain} \email{oihana.garayalde@ehu.eus} \author[L.\ Guerrero Sanchez]{Lander Guerrero Sanchez} \address{Matematika Saila, Euskal Herriko Unibertsitatearen Zientzia eta Teknologia Fakultatea, posta-kutxa 644, 48080 Bilbo, Spain} \email{lander.guerrero@ehu.eus} \author[J.\ Gonz\'alez-S\'anchez]{Jon Gonz\'alez-S\'anchez} \address{Matematika Saila, Euskal Herriko Unibertsitatearen Zientzia eta Teknologia Fakultatea, posta-kutxa 
644, 48080 Bilbo, Spain} \email{jon.gonzalez@ehu.eus} \thanks{The second author was supported by the University of the Basque Country predoctoral fellowship PIF19/44. The three authors were partially supported by the Spanish Government project MTM2017-86802-P and by the Basque Government project IT974-16} \begin{document} \subjclass[2010]{20J06, 13C15, 20D15} \keywords{mod-$p$ cohomology ring, depth, finite $p$-groups} \maketitle \begin{abstract} Let $p>3$ be a prime number and let $r$ be an integer with $1<r<p-1$. For each such $r$, let $G_r$ denote the unique quotient of size $p^{r+1}$ of the maximal class pro-$p$ group. We show that the mod-$p$ cohomology ring of $G_r$ has depth one and that, in turn, it satisfies the equalities in Carlson's depth conjecture \cite{Carlson95}. \end{abstract} \section{Introduction} \label{sec:introduction} Let $p$ be a prime number, let $G$ be a finite $p$-group and let $\mathbb{F}_p$ denote the finite field of $p$ elements with trivial $G$-action. Then, the mod-$p$ cohomology ring $\operatorname{H}^{*}(G;\mathbb{F}_p)$ is a finitely generated, graded commutative $\mathbb{F}_p$-algebra (see \cite[Corollary 7.4.6]{Evens91}), and so many ring-theoretic notions can be defined: Krull dimension, associated primes and depth, among others. Some of the aforementioned concepts have a group-theoretic interpretation; for instance, the Krull dimension $\dim \operatorname{H}^{*}(G;\mathbb{F}_p)$ of $\operatorname{H}^{*}(G;\mathbb{F}_p)$ equals the $p$-rank $\rk_p G$ of $G$, i.e., the largest integer $s\geq 1$ such that $G$ contains an elementary abelian subgroup of rank $s$. However, the depth of $\operatorname{H}^{*}(G;\mathbb{F}_p)$, written $\depth \operatorname{H}^{*}(G;\mathbb{F}_p)$, which is the length of the longest regular sequence in $\operatorname{H}^{*}(G;\mathbb{F}_p)$, seems to be far more difficult to compute. There are, however, lower and upper bounds for this number.
For instance, in \cite{Duflot81}, Duflot proved that the depth of $\operatorname{H}^{*}(G;\mathbb{F}_p)$ is at least as large as the $p$-rank of the centre $Z(G)$ of $G$, i.e., $\depth \operatorname{H}^{*}(G;\mathbb{F}_p)\geq \rk_p Z(G)$, and, in \cite{Notbohm09}, Notbohm proved that for every elementary abelian subgroup $E$ of $G$ with centralizer $C_G(E)$ in $G$, the inequality $\depth \operatorname{H}^{*}(G;\mathbb{F}_p)\leq \depth \operatorname{H}^{*}\big(C_G(E);\mathbb{F}_p\big)$ holds. In \cite{Carlson95}, J. F. Carlson provided further upper bounds for the depth (see Theorem \ref{thm:DepthCarlson}) and stated a conjecture that still remains open (see Conjecture \ref{conj:Carlson}). The aim of the present work is to compute the depth of the mod-$p$ cohomology rings of certain quotients of the maximal class pro-$p$ group that, moreover, satisfy the equalities in the aforementioned conjecture. Let $p$ be an odd prime number, let $\mathbb{Z}_p$ denote the ring of $p$-adic integers and let $\zeta$ be a primitive $p$-th root of unity. Consider the cyclotomic extension $\mathbb{Z}_p[\zeta]$ of degree $p-1$ and note that its additive group is isomorphic to $\mathbb{Z}_p^{p-1}$. The cyclic group $C_p=\gen{\sigma}$ acts on $\mathbb{Z}_p[\zeta]$ via multiplication by $\zeta$, i.e., for any $x\in\mathbb{Z}_p[\zeta]$, the action is given as $x^{\sigma}=\zeta x$. Using the ordered basis $1, \zeta, \dots, \zeta^{p-2}$ of $\mathbb{Z}_p[\zeta]\cong \mathbb{Z}_p^{p-1}$, this action is given by the matrix \begin{displaymath} \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \\ -1 & -1 & -1 & \dots & -1 \end{pmatrix}. \end{displaymath} We form the semidirect product $S=C_p\ltimes \mathbb{Z}_p^{p-1}$, which is the unique pro-$p$ group of maximal nilpotency class. Note that this is the analogue, for odd $p$, of the infinite dihedral pro-$2$ group.
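Since the later constructions build on this matrix, a quick machine check may be helpful. The following minimal Python sketch (for the illustrative value $p=5$, chosen by us; it is not part of the argument) verifies that the displayed companion-type matrix has order $p$, that $1+M+\dots+M^{p-1}=0$ (so the eigenvalues are the primitive $p$-th roots of unity), and that $N=M-1$, which represents multiplication by $\zeta-1$, is nilpotent modulo $p$ of class exactly $p-1$, reflecting that $(\zeta-1)^{p-1}$ is $p$ times a unit in $\mathbb{Z}_p[\zeta]$.

```python
p = 5
n = p - 1  # rank of the additive group of Z_p[zeta]

# M: i-th row holds the coordinates of zeta * zeta^i in the basis
# 1, zeta, ..., zeta^(p-2), matching the displayed matrix.
M = [[0] * n for _ in range(n)]
for i in range(n - 1):
    M[i][i + 1] = 1
M[n - 1] = [-1] * n

I = [[int(i == j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, e):
    R = I
    for _ in range(e):
        R = matmul(R, A)
    return R

# sigma has order p: M^p = 1 already over the integers.
assert matpow(M, p) == I

# 1 + M + ... + M^(p-1) = 0: the minimal polynomial of a primitive p-th root.
S = matpow(M, 0)
for k in range(1, p):
    P = matpow(M, k)
    S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
assert all(S[i][j] == 0 for i in range(n) for j in range(n))

# N = M - 1 represents multiplication by zeta - 1: modulo p it is nilpotent
# of class exactly p - 1, since (zeta - 1)^(p-1) = p * (unit).
N = [[M[i][j] - I[i][j] for j in range(n)] for i in range(n)]
assert all(x % p == 0 for row in matpow(N, p - 1) for x in row)
assert any(x % p != 0 for row in matpow(N, p - 2) for x in row)
```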
Moreover, $S$ is a uniserial $p$-adic space group with cyclic point group $C_p$ (compare \cite[Section 7.4]{LeedGreenBook02}). We write $[x, _{k}\sigma]=[x, \sigma, \overset{k}{\vphantom{,}\smash{\dotsc}}\,, \sigma]$ for the iterated group commutator. Set $T_0=\mathbb{Z}_p[\zeta]$ and define, for each integer $i\geq 1$, \[ T_i=(\zeta-1)^i\mathbb{Z}_p[\zeta]=[T_0,_i \sigma]=\gamma_{i+1}(S). \] These subgroups are all the $C_p$-invariant subgroups of $T_0$, and the successive quotients satisfy \begin{displaymath} T_i/T_{i+1}\cong \mathbb{Z}_p[\zeta]/(\zeta-1)\mathbb{Z}_p[\zeta]\cong C_p. \end{displaymath} Hence, $\abs{T_0:T_i}=p^i$ for every $i\geq 0$. For each integer $r$ with $1<r<p-1$, consider the finite quotient $T_0/T_r=\mathbb{Z}_p[\zeta]/(\zeta-1)^r\mathbb{Z}_p[\zeta]$ and choose a generating set for $T_0/T_r$ as follows, \begin{displaymath} a_1=1+T_r,\quad a_2=(\zeta-1)+T_r,\quad\dots ,\quad a_r=(\zeta-1)^{r-1}+T_r. \end{displaymath} Using the multiplicative notation, we obtain that $$ T_0/T_r=\gen{a_1,\dotsc,a_r}\cong C_p\times \overset{r}{\vphantom{=}\smash{\dotsb}} \times C_p, $$ and since for all $i\geq 0$, the subgroups $T_i$ are $C_p$-invariant, we can form the semidirect product \begin{equation}\label{eq:Grassemidirectproduct} G_r=C_p\ltimes T_0/T_r\cong C_p\ltimes(C_p\times \overset{r}{\vphantom{=}\smash{\dotsb}} \times C_p). \end{equation} The finite $p$-groups $G_r$ have size $p^{r+1}$ and exponent $p$. Note that in particular, $G_2$ is the extraspecial group of size $p^3$ and exponent $p$. We state the main result. \begin{MainTheorem}\label{thm:mainresultintro} Let $p>3$ be a prime number, let $r$ be an integer with $1<r<p-1$ and let $G_r$ be given as in \eqref{eq:Grassemidirectproduct}. Then, $\operatorname{H}^{*}(G_r;\mathbb{F}_p)$ has depth one. \end{MainTheorem} \subsubsection*{Notation}\label{notation} Throughout let $p$ be an odd prime number and let $G$ denote a finite group. Let $R$ be a commutative ring with unity. 
A $G$-module $A$ will be a right $RG$-module. For such $G$-modules, we shall use additive notation in Sections \ref{sec:preliminaries} and \ref{sec:productsextensions}, and multiplicative notation in Section \ref{sec:depthonepgroups}, for our convenience. Moreover, if $a\in A$ and $g\in G$, we write $a^g$ to denote the action of $g$ on $a$. Let $A$ be a $G$-module and let $P_{*}\longrightarrow R$ be a projective resolution of the trivial $G$-module $R$. Then, for every $n\geq 0$, the $n$-th cohomology group $\operatorname{H}^n(G;A)$ is defined as $\Ext^n(R,A)=\operatorname{H}^n(\Hom_G(P_{*},A))$. Let $K\leq G$ be a subgroup of $G$ and let $\iota\colon K\lhook\joinrel\longrightarrow G$ denote the inclusion map. This map induces the restriction map in cohomology, which will be denoted by $\res^G_K\colon \operatorname{H}^{*}(G;A)\longrightarrow \operatorname{H}^{*}(K;A)$. Group commutators are given as $[g,h]=g^{-1}h^{-1}gh=g^{-1}g^h$ and for every $k\geq 1$, iterated commutators are written as $[x, y, \overset{k}{\vphantom{,}\smash{\dotsc}}\,, y]=[x, _{k}y]$, where we use left normed group commutators, i.e., $[x,y,z]=[[x,y],z]$. Also, the $k$-th term of the lower central series of $G$ is denoted by $\gamma_k(G)=[G, \overset{k}{\vphantom{,}\smash{\dotsc}}\,, G]$. \section{Preliminaries}\label{sec:preliminaries} \subsection{Depth} In this section we give background on the depth of mod-$p$ cohomology rings of finite $p$-groups and we also state one of the key results for the proof of Theorem \ref{thm:mainresultintro}. Let $n\geq 1$ be an integer. We say that a sequence of elements $x_1,\dotsc,x_n\in \operatorname{H}^{*}(G;\mathbb{F}_p)$ is regular if, for every $i=1, \dots, n$, the element $x_i$ is not a zero divisor in the quotient $\operatorname{H}^{*}(G;\mathbb{F}_p)/(x_1,\dotsc,x_{i-1})$, where $(x_1,\dotsc,x_{i-1})$ denotes the ideal generated by the elements $x_1, \dots, x_{i-1}$ in $\operatorname{H}^{*}(G;\mathbb{F}_p)$.
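As a toy illustration of the zero-divisor condition in this definition, one can test membership in an ideal by reduction modulo a Gröbner basis. The sketch below works in the ordinary commutative ring $k[x,y]/(xy)$ (our own hypothetical example, not the graded-commutative cohomology ring itself) using sympy.

```python
# Illustrative sketch: regular elements versus zero divisors in k[x,y]/(x*y),
# checked with sympy's reduction modulo the ideal (x*y).
from sympy import symbols, reduced

x, y = symbols("x y")
ideal = [x * y]

def is_zero_in_quotient(f):
    """True iff f lies in the ideal (x*y), i.e. f = 0 in k[x,y]/(x*y)."""
    _, remainder = reduced(f, ideal, x, y)
    return remainder == 0

# y is nonzero in the quotient, yet x * y = 0 there: x is a zero divisor,
# so no regular sequence in k[x,y]/(x*y) can start with x.
assert not is_zero_in_quotient(y)
assert is_zero_in_quotient(x * y)

# By contrast, in k[x,y] itself the sequence x, y is regular: for instance
# y * (y + 1) does not lie in (x), so y stays a non-zero-divisor modulo (x).
_, r = reduced(y * (y + 1), [x], x, y)
assert r != 0
```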
\begin{definition} The \emph{depth} of $\operatorname{H}^{*}(G;\mathbb{F}_p)$, denoted by $\depth \operatorname{H}^{*}(G;\mathbb{F}_p)$, is the maximal length of a regular sequence in $\operatorname{H}^{*}(G;\mathbb{F}_p)$. \end{definition} Recall that a prime ideal $\mathfrak{p}\subseteq \operatorname{H}^{*}(G;\mathbb{F}_p)$ is an \emph{associated prime} of $\operatorname{H}^{*}(G;\mathbb{F}_p)$ if, for some $\varphi\in \operatorname{H}^{*}(G;\mathbb{F}_p)$, it is of the form \begin{displaymath} \mathfrak{p}=\{\psi\in \operatorname{H}^{*}(G;\mathbb{F}_p)\mid \varphi\cup\psi=0\}. \end{displaymath} The set of all associated primes of $\operatorname{H}^{*}(G;\mathbb{F}_p)$ is denoted by $\operatorname{Ass} \operatorname{H}^{*}(G;\mathbb{F}_p)$. It is known that for every $\mathfrak{p}\in \operatorname{Ass}\operatorname{H}^{*}(G;\mathbb{F}_p)$, the following inequality holds \begin{displaymath} \depth \operatorname{H}^{*}(G;\mathbb{F}_p)\leq \dim \operatorname{H}^{*}(G;\mathbb{F}_p)/\mathfrak{p}. \end{displaymath} In particular, $\depth \operatorname{H}^{*}(G;\mathbb{F}_p)\leq \dim \operatorname{H}^{*}(G;\mathbb{F}_p)$ (\cite[Proposition 12.2.5]{CarlsonBook03}) and, when the two values coincide, the mod-$p$ cohomology ring is said to be \emph{Cohen-Macaulay}. In the following proposition, we recall the lower and upper bounds for the depth of $\operatorname{H}^{*}(G;\mathbb{F}_p)$ by Duflot \cite{Duflot81} and Notbohm \cite{Notbohm09}, respectively. \begin{proposition}\label{prop:DuflotNotbohm} Let $G$ be a finite $p$-group and let $E$ be an elementary abelian subgroup of $G$. Then, the following inequalities hold $$ 1\leq \rk_p Z(G)\leq \depth \operatorname{H}^{*}(G;\mathbb{F}_p)\leq \depth \operatorname{H}^{*}\big(C_G(E);\mathbb{F}_p\big). $$ \end{proposition} Before stating the crucial result for our construction, we introduce the concept of detection in cohomology. \begin{definition}\label{def:detection} Let $G$ be a finite $p$-group and let $\mathcal{H}$ be a collection of subgroups of $G$.
We say that $\operatorname{H}^{*}(G;\mathbb{F}_p)$ is \emph{detected} by $\mathcal{H}$ if \begin{displaymath} \bigcap_{H\in \mathcal{H}}\Ker \res^G_H=0. \end{displaymath} \end{definition} Given a finite $p$-group $G$ and a subgroup $E\leq G$, let $C_G(E)$ denote the centralizer of $E$ in $G$. For $s\geq 1$, define \begin{align*} &\mathcal{H}_s(G)=\big\{C_G(E) \mid E \text{ is an elementary abelian subgroup of } G,\; \rk_p E=s\big\},\\ &\omega_a(G)=\min \big\{\dim \operatorname{H}^{*}(G;\mathbb{F}_p)/\mathfrak{p}\mid \mathfrak{p}\in\operatorname{Ass}\operatorname{H}^{*}(G;\mathbb{F}_p)\big\},\\ &\omega_d(G)=\max\big\{s\geq 1\mid \operatorname{H}^{*}(G;\mathbb{F}_p) \text{ is detected by } \mathcal{H}_s(G)\big\}. \end{align*} \begin{theorem}[\cite{Carlson95}]\label{thm:DepthCarlson} Let $G$ be a finite $p$-group. Then, the following inequalities hold \begin{displaymath} \depth\operatorname{H}^{*}(G;\mathbb{F}_p)\leq\omega_a(G)\leq \omega_d(G). \end{displaymath} \end{theorem} In fact, in the same article, J. F. Carlson conjectured that the previous inequalities are actual equalities. \begin{conjecture}[Carlson]\label{conj:Carlson} Let $G$ be a finite $p$-group. Then, \begin{displaymath} \depth\operatorname{H}^{*}(G;\mathbb{F}_p)=\omega_a(G)= \omega_d(G). \end{displaymath} \end{conjecture} A particular case of the above conjecture was proven by D. Green in \cite{Green03}. \subsection{Yoneda extensions} Let $G$ be a finite group and let $R$ be a commutative ring with unity. We describe the mod-$p$ cohomology ring $\operatorname{H}^{*}(G;R)$ in terms of Yoneda extensions and the Yoneda product. For a more detailed account on this topic, we refer to \cite[Chapter IV]{Maclane95} and \cite{Niwasaki92}. \begin{definition} Let $A$ and $B$ be $G$-modules. 
For every integer $n\geq 1$, a \emph{Yoneda $n$-fold extension} $\varphi$ \emph{of $B$ by $A$} is an exact sequence of $G$-modules of the form \[\varphi:\begin{tikzcd} 0 \rar & A \rar & M_n \rar & \cdots \rar & M_1 \rar & B \rar & 0. \end{tikzcd}\] Given two Yoneda $n$-fold extensions \[\varphi:\begin{tikzcd} 0 \rar & A \rar & M_n \rar & \cdots \rar & M_1 \rar & B \rar & 0 \end{tikzcd}\] and \[\varphi':\begin{tikzcd} 0 \rar & A' \rar & M_n' \rar & \cdots \rar & M_1' \rar & B' \rar & 0, \end{tikzcd}\] we say that there is a \emph{morphism between the Yoneda $n$-fold extensions $\varphi$ and $\varphi'$}, if there exist $G$-module homomorphisms $f_0\colon B\longrightarrow B'$, $f_{n+1}\colon A\longrightarrow A'$ and, for every $i=1,\dots,n$, $f_i\colon M_i\longrightarrow M'_i$, making the following diagram commute \[ \begin{tikzcd} \varphi: 0 \rar & A \dar{f_{n+1}} \rar & M_n \dar{f_n} \rar & \cdots \rar & M_1 \dar{f_1} \rar & B \dar{f_0} \rar & 0\\ \varphi': 0 \rar & A' \rar & M_n' \rar & \cdots \rar & M_1' \rar & B' \rar & 0. \end{tikzcd}\] In particular, if $\varphi, \varphi'$ are both Yoneda $n$-fold extensions of $B$ by $A$ and there is a morphism from $\varphi$ to $\varphi'$ with identity maps $f_0=\id_B$ and $f_{n+1}=\id_A$, we write $\varphi\Rightarrow \varphi'$. \end{definition} \begin{definition}\label{def:Yequivalent} Let $n\geq 1$ be an integer and let $\varphi$ and $\varphi'$ be as above. We say that $\varphi$ is \emph{equivalent} to $\varphi'$, denoted by $\varphi\equiv \varphi'$, if there are Yoneda $n$-fold extensions $\varphi_1,\dotsc,\varphi_r$ of $B$ by $A$ such that \begin{displaymath} \varphi \Rightarrow \varphi_1 \Leftarrow \varphi_2 \Rightarrow \cdots \Leftarrow \varphi_{r-1} \Rightarrow \varphi_r \Leftarrow \varphi'. \end{displaymath} Moreover, we denote by $\YExt^n(B,A)$ the set of all Yoneda $n$-fold extensions of $B$ by $A$ up to equivalence.
\end{definition} We recall the uniqueness of pushouts and pullbacks of Yoneda extensions \cite[Section II.6]{Hilton97} and endow the set $\YExt^n(B,A)$ with the Baer sum so that $\YExt^n(B,A)$ becomes an abelian group. The proof of the following result can be found in \cite[Section IV.9]{Hilton97} \begin{proposition} Let $\varphi\in \YExt^n(B,A)$ be represented by a Yoneda extension \[\begin{tikzcd} 0 \rar & A \rar & M_n \rar & \cdots \rar & M_1 \rar & B \rar & 0. \end{tikzcd}\] \begin{enumerate} \item [(a)]Given a $G$-module homomorphism $\alpha\colon A\longrightarrow A'$, there is a unique equivalence class $\alpha_*\varphi\in\YExt^n(B,A')$ represented by a Yoneda extension \[\begin{tikzcd} 0 \rar & A' \rar & M_n' \rar & \cdots \rar & M_1' \rar & B \rar & 0, \end{tikzcd}\] admitting a morphism of Yoneda extensions of the following form: \[\begin{tikzcd} 0 \rar & A \rar \dar{\alpha} & M_n \dar \rar & \dotsb \rar & M_1 \rar \dar & B \rar \dar[Equal] & 0 \\ 0 \rar & A' \rar & M_n' \rar & \dotsb \rar & M_1' \rar & B \rar & 0. \end{tikzcd}\] We say that the Yoneda extension $\alpha_*\varphi$ is the \emph{pushout of $\varphi$ via $\alpha$}. \item [(b)]Given a $G$-module homomorphism $\beta\colon B'\longrightarrow B$ there is a unique equivalence class $\beta^*\varphi\in\YExt^n(B',A)$ represented by a Yoneda extension \[\begin{tikzcd} 0 \rar & A \rar & M_n'' \rar & \cdots \rar & M_1'' \rar & B' \rar & 0, \end{tikzcd}\] admitting a morphism of Yoneda extensions of the following form: \[\begin{tikzcd} 0 \rar & A \rar \dar[Equal] & M_n'' \dar \rar & \dotsb \rar & M_1'' \rar \dar & B' \rar \dar{\beta} & 0 \\ 0 \rar & A \rar & M_n \rar & \dotsb \rar & M_1 \rar & B \rar & 0. \end{tikzcd}\] We say that the Yoneda extension $\beta^*\varphi$ is the \emph{pullback} of $\varphi$ via $\beta$. 
\end{enumerate} \end{proposition} Let now $A$ and $B$ be $G$-modules and let \[ \nabla_A\colon A\times A\longrightarrow A\;\; \text{and}\;\; \Delta_B\colon B\longrightarrow B\times B \] denote the codiagonal and the diagonal homomorphism, respectively. \begin{definition} Let $n\geq 1$ be an integer and let $\varphi,\varphi'\in\YExt^n(B,A)$ be two Yoneda extension classes. We define the \emph{Baer sum} of $\varphi$ and $\varphi'$ as \begin{displaymath} \varphi +\varphi'=(\nabla_A)_*(\Delta_B)^*(\varphi\times\varphi')\in \YExt^n(B,A). \end{displaymath} \end{definition} Then, for every integer $n\geq1$, the set $\YExt^n(B,A)$ endowed with the Baer sum is an abelian group. Indeed, the zero element of $\YExt^1(B,A)$ is the split extension \[\begin{tikzcd} 0 \rar & A \rar & A\times B \rar & B \rar & 0, \end{tikzcd}\] and for $n>1$, the zero element of $\YExt^n(B,A)$ is the Yoneda extension \[\begin{tikzcd}[column sep=2em] 0 \rar & A \rar & A \rar & 0 \rar & \overset{n-2}{\vphantom{=}\smash{\cdots}} \rar & 0\rar & B \rar & B \rar & 0. \end{tikzcd}\] \begin{theorem}[{\cite[Theorem 6.4]{Maclane95}}] For every $G$-module $A$ and integer $n\geq 1$, there is a group isomorphism $\operatorname{H}^n(G;A)\cong \YExt^{n}(R,A)$ that is natural in $A$. \end{theorem} Let $A$, $B$ and $C$ be $G$-modules and let $n,m\geq 1$ be integers. Given $\varphi\in\YExt^n(B,A)$ represented by the Yoneda extension \[\begin{tikzcd} 0 \rar & A \rar & N_n \rar & \cdots \rar & N_1 \rar & B \rar & 0, \end{tikzcd}\] and $\varphi'\in\YExt^m(C,B)$ represented by the Yoneda extension \[\begin{tikzcd} 0 \rar & B \rar & M_m \rar & \cdots \rar & M_1 \rar & C \rar & 0, \end{tikzcd}\] we define their \emph{Yoneda product} $\varphi\cup\varphi'\in \YExt^{n+m}(C,A)$ as the Yoneda extension \[\begin{tikzcd}[column sep=1.5em] 0 \rar & A \rar & N_n \rar & \cdots \rar & N_1 \rar & M_m \rar & \cdots \rar & M_1 \rar & C \rar & 0. \end{tikzcd}\] This product defines a bilinear pairing.
In particular, the Yoneda product in $\operatorname{H}^{*}(G;R)\cong\YExt^{*}(R,R)$ coincides with the usual cup product (see \cite[Proposition 3.2.1]{Benson91}). \subsection{Crossed extensions} Let $G$ be a finite group. In this section we describe, for every integer $n\geq 2$, the cohomology group $\operatorname{H}^n(G; R)$ using crossed extensions. For more information about this subject, see \cite{Holt79}, \cite{Huebschmann80} and \cite{Niwasaki92}. \begin{definition} Let $M_1$ and $M_2$ be groups with $M_1$ acting on $M_2$. A \emph{crossed module} is a group homomorphism $\rho\colon M_2\longrightarrow M_1$ satisfying the following properties: \begin{enumerate} \item[(i)] $y_2^{\rho(y'_2)}=y_2^{y'_2}$ for all $y_2,y'_2\in M_2$, and \item[(ii)] $\rho(y_2^{y_1})=\rho(y_2)^{y_1}$ for all $y_1\in M_1$ and $y_2\in M_2$. \end{enumerate} \end{definition} \begin{definition} Let $n\geq 1$ be an integer and let $A$ be a $G$-module. A \emph{crossed $n$-fold extension $\psi$ of $G$ by $A$} is an exact sequence of groups of the form \[\begin{tikzcd} \psi: 0 \rar & A \rar{\rho_{n}} & M_n \rar & \cdots \rar & M_2 \rar{\rho_1} & M_1 \rar & G \rar & 1, \end{tikzcd}\] satisfying the following conditions: \begin{enumerate} \item[(i)] $\rho_1\colon M_2\longrightarrow M_1$ is a crossed module, \item[(ii)] $M_i$ is a $G$-module for every $i=3,\dotsc,n$, and \item[(iii)] $\rho_i$ is a $G$-module homomorphism for every $i=2,\dotsc,n$. 
\end{enumerate} \end{definition} \begin{definition} A \emph{morphism of crossed $n$-fold extensions} $\psi$ and $\psi'$ is a morphism of exact sequences of groups \[\begin{tikzcd}[column sep=2em] \psi: 0 \rar & A \dar{f_{n+1}} \rar & M_n \dar{f_n} \rar & \cdots \rar & M_2 \dar{f_2} \rar & M_1 \dar{f_1} \rar & G \dar{f_0} \rar & 1 \\ \psi': 0 \rar & A' \rar & M_n' \rar & \cdots \rar & M_2' \rar & M_1' \rar & G' \rar & 1, \end{tikzcd}\] where for each $i=3,\dotsc,n+1$, the morphism $f_i$ is a $G$-module homomorphism, and $f_1$ and $f_2$ are compatible with the actions of $M_1$ on $M_2$ and of $M_1'$ on $M_2'$, respectively. In particular, if $\psi$ and $\psi'$ are both crossed $n$-fold extensions of $G$ by $A$ and there is a morphism from $\psi$ to $\psi'$ with identity maps $f_0=\id_G$ and $f_{n+1}=\id_A$, we write $\psi \Rightarrow \psi'$. \end{definition} Moreover, we can define an equivalence relation on crossed $n$-fold extensions of $G$ by $A$ as for Yoneda extensions in Definition \ref{def:Yequivalent}, denoted by $\psi\equiv\psi'$. We will also denote by $\XExt^n(G,A)$ the set of all crossed $n$-fold extensions of $G$ by $A$ up to equivalence. For the $n=2$ case, we can use the following characterization of equivalent crossed extensions. \begin{proposition}[\cite{Holt79}]\label{prop: Equivalent Crossed X Diagram} Let $G$ be a finite group and let $A$ be a $G$-module.
Then, two crossed $2$-fold extensions of $G$ by $A$ \begin{footnotesize} \begin{equation*} \psi: 0 \longrightarrow A \overset{\rho_2}\longrightarrow M_2 \overset{\rho_1}\longrightarrow M_1 \overset{\rho_0}\longrightarrow G \longrightarrow 1\; \text{ and } \; \psi': 0 \longrightarrow A \overset{\tau_2}\longrightarrow N_2 \overset{\tau_1}\longrightarrow N_1 \overset{\tau_0}\longrightarrow G \longrightarrow 1, \end{equation*} \end{footnotesize}are equivalent if and only if there exist a group $X$ and a commutative diagram \begin{equation}\label{diag: Crossed diagram} \begin{tikzcd}[column sep={4.em,between origins},row sep=2em] & 1 \drar & & & & 1 & \\ & & M_2 \drar[swap]{\mu_1} \arrow[rr,"\rho_1"] & & M_1 \drar{\rho_0} \urar & & \\ 0 \rar & A \urar{\rho_2} \drar[swap]{-\tau_2} & & X \urar[swap]{\nu_1} \drar{\nu_2} & & G \rar & 1 \\ & & N_2 \urar{\mu_2} \arrow[rr,swap,"\tau_1"] & & N_1 \arrow[ur,swap,"\tau_0"] \drar & & \\ & 1 \urar & & & & 1 & \end{tikzcd} \end{equation} satisfying the following properties: \begin{itemize} \item[(a)] $-\tau_2\colon A\longrightarrow N_2$ is given by $(-\tau_2)(a)=\tau_2(-a)$ for $a\in A$, \item[(b)] the diagonals are short exact sequences, \item[(c)] $\mu_1\circ \rho_2(A)=\mu_1(M_2)\cap\mu_2(N_2)$, and \item[(d)] conjugation in $X$ coincides with the actions of both $M_1$ on $M_2$ and $N_1$ on $N_2$. \end{itemize} \end{proposition} Analogous to Yoneda extensions, for an integer $n\geq 1$, given an $n$-crossed extension $\varphi\in \XExt^n(G,A)$ and a $G$-module homomorphism $\alpha\colon A\longrightarrow A'$, we can find a unique pushout $\alpha_*\varphi\in \XExt^n(G,A')$ of $\varphi$ via $\alpha$, and given a group homomorphism $\beta\colon G'\longrightarrow G$ we can find a unique pullback $\beta^*\varphi\in\XExt^n(G',A)$ of $\varphi$ via $\beta$ (see \cite[Proposition 4.1]{Holt79}). We can also endow $\XExt^n(G,A)$ with an abelian group structure. 
Given two crossed $n$-fold extension classes $\varphi,\varphi'\in\XExt^n(G,A)$, we define their \emph{Baer sum} as \begin{displaymath} \varphi +\varphi'=(\nabla_A)_*(\Delta_G)^*(\varphi\times\varphi'). \end{displaymath} The zero element of $\XExt^1(G,A)$ is represented by the split extension \[\begin{tikzcd} 0 \rar & A \rar & G\ltimes A \rar & G \rar & 1, \end{tikzcd}\] and for $n>1$, the zero element of $\XExt^n(G,A)$ is represented by the crossed $n$-fold extension \[\begin{tikzcd}[column sep=2em] 0 \rar & A \rar & A \rar & 0 \rar & \overset{n-2}{\vphantom{=}\smash{\cdots}} \rar & 0\rar & G \rar & G \rar & 1. \end{tikzcd}\] \begin{theorem}[{\cite[Theorem 4.5]{Holt79}}] Let $G$ be a finite group. For every $G$-module $A$ and every integer $n\geq 1$, there is a group isomorphism $\operatorname{H}^{n+1}(G;A)\cong \XExt^{n}(G,A)$ that is natural in both $G$ and $A$. \end{theorem} \section{Product between extensions}\label{sec:productsextensions} \subsection{Product of Yoneda extensions and crossed extensions} We now describe the Yoneda product between two cohomology classes, one of them represented by a Yoneda extension and the other one by a crossed extension. \begin{definition}\label{def:mixedyonedaproduct} Let $G$ be a finite group, let $A$ and $B$ be $G$-modules and let $n,m\geq 1$ be integers. Given a Yoneda $n$-fold extension class $\varphi\in \YExt^n(A,B)$ represented by \[\begin{tikzcd} 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar & A \rar & 0, \end{tikzcd}\] and a crossed $m$-fold extension class $\psi\in\XExt^m(G,A)$ represented by \[\begin{tikzcd} 0 \rar & A \rar & M_m \rar & \cdots \rar & M_1 \rar & G \rar & 1, \end{tikzcd}\] we define their \emph{Yoneda product} $\varphi\cup\psi$ as the extension \[\begin{tikzcd}[column sep=1.5em] 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar & M_m \rar & \cdots \rar & M_1 \rar & G \rar & 1.
\end{tikzcd}\] \end{definition} \begin{remark} It can be readily checked that \[ \YExt^n(A,B)\times \XExt^m(G,A)\longrightarrow \XExt^{n+m}(G,B) \] given by $(\varphi, \psi)\mapsto \varphi \cup \psi$ is well defined. \end{remark} The following result shows that this product respects the pushouts and pullbacks. \begin{lemma} Let $G$ and $G'$ be finite groups, let $A$, $A'$, $B$ and $B'$ be $G$-modules and let $n,m\geq 1$ be integer numbers. Let, moreover, $\varphi\in\YExt^n(A,B)$, $\varphi'\in\YExt^n(A',B)$ and $\psi\in\XExt^m(G,A)$. Then, the following relations are satisfied. \begin{enumerate} \item Given a $G$-module homomorphism $\alpha\colon A\longrightarrow A'$, we have that \begin{displaymath} (\alpha^*\varphi')\cup\psi \equiv \varphi'\cup (\alpha_*\psi)\in \XExt^{n+m}(G,B). \end{displaymath} \item Given a $G$-module homomorphism $\beta\colon B\longrightarrow B'$, we have that \begin{displaymath} (\beta_*\varphi)\cup\psi\equiv\beta_*(\varphi\cup\psi)\in \XExt^{n+m}(G,B'). \end{displaymath} \item Given a group homomorphism $\tau\colon G'\longrightarrow G$, we have that \begin{displaymath} \varphi\cup(\tau^*\psi)\equiv\tau^*(\varphi\cup\psi)\in \XExt^{n+m}(G',B). \end{displaymath} \end{enumerate} \end{lemma} \begin{proof} The proofs of 2 and 3 are straightforward. For 1, follow the proof of the analogous result for Yoneda extensions mutatis mutandis (compare \cite[Proposition III.5.2]{Maclane95}). \end{proof} \begin{proposition} Let $G$ be a finite group, let $A$ and $B$ be $G$-modules and let $n,m\geq 1$ be integers. Then, the Yoneda product induces a well-defined bilinear pairing \[\begin{tikzcd} \YExt^n(A,B) \otimes \XExt^m(G,A) \rar & \XExt^{n+m}(G,B). \end{tikzcd}\] \end{proposition} \begin{proof} Let $\varphi,\varphi'\in \YExt^n(A,B)$ and $\psi,\psi'\in\XExt^m(G,A)$. 
On the one hand, using that $(\Delta_A)_*\psi\equiv(\Delta_G)^*(\psi\times\psi)$, we have that \begin{align*} (\varphi + \varphi')\cup \psi & \equiv \big[(\nabla_B)_*(\Delta_A)^*(\varphi\times\varphi')\big]\cup \psi \\ & \equiv (\nabla_B)_*(\Delta_G)^* \big[(\varphi\times\varphi')\cup(\psi\times\psi)\big] \\ & \equiv (\nabla_B)_*(\Delta_G)^* \big[(\varphi\cup\psi)\times(\varphi'\cup\psi)\big] \\ & \equiv \varphi\cup \psi + \varphi'\cup \psi. \end{align*} On the other hand, we have that \begin{align*} \varphi\cup (\psi+\psi') & \equiv \varphi \cup \big[(\nabla_A)_*(\Delta_G)^*(\psi\times\psi')\big] \\ & \equiv (\nabla_B)_*(\Delta_G)^* \big[(\varphi\times\varphi)\cup(\psi\times\psi')\big] \\ & \equiv (\nabla_B)_*(\Delta_G)^* \big[(\varphi\cup\psi)\times(\varphi\cup\psi')\big] \\ & \equiv \varphi\cup \psi + \varphi\cup \psi'. \end{align*} \end{proof} \subsection{Yoneda and cup products coincide} In order to show that the Yoneda product of Yoneda extensions with crossed extensions coincides with the usual cup product, we will follow a construction by B. Conrad \cite{conrad}, giving an explicit correspondence between crossed extensions and Yoneda extensions. Let $G$ be a finite group and let $A$ be a $G$-module. Let $\psi\in\XExt^n(G,A)$ be a class represented by a crossed $n$-fold extension \[\begin{tikzcd} 0 \rar & A \rar{\rho_n} & M_n \rar & \cdots \rar & M_2 \rar{\rho_1} & M_1 \rar{\rho_0} & G \rar & 1, \end{tikzcd}\] with $M_2$ abelian (such a representative always exists, see \cite[Proposition 2.7]{Holt79}). Consider the $G$-module $\Image\rho_1\leq M_1$. Then, we have an extension $\psi_0\in\XExt^1(G,\Image\rho_1)$ of the form \[\begin{tikzcd} \psi_0: \;\; 0 \rar & \Image \rho_1 \rar & M_1 \rar & G \rar & 1. \end{tikzcd} \label{ext: conrad}\] Now, we can embed $\Image \rho_1$ into an injective $G$-module $I$. 
As $I$ is injective, we have that $\XExt^1(G,I)\cong \operatorname{H}^2(G,I)=0$, and so the pushout of $\psi_0$ via the embedding of $\Image\rho_1$ into $I$ splits, i.e., there is a group homomorphism $\Phi\colon M_1\longrightarrow G\ltimes I$ such that the following diagram commutes: \[\begin{tikzcd} 0 \rar & \Image \rho_1 \rar \dar[hook] & M_1 \rar{\rho_0} \dar{\Phi} & G \rar \dar[Equal] & 1 \\ 0 \rar & I \rar & G\ltimes I \rar & G \rar & 1. \end{tikzcd}\] We can find a group homomorphism $\nu\colon M_1\longrightarrow G$ and a map $\chi\colon M_1\longrightarrow I$ that for every $x,y\in M_1$ satisfies \begin{align} \label{eqn: cocycle cond semidir} \chi(xy)=\chi(x)^{\nu(y)}+\chi(y), \end{align} such that for every $x\in M_1$ we can write \begin{displaymath} \Phi(x)=\big(\nu(x),\chi(x)\big). \end{displaymath} Moreover, if we denote by $\pi\colon I\longrightarrow I/\Image\rho_1$ the canonical projection, there is a unique map $\tau\colon G\longrightarrow I/\Image\rho_1$ such that $\tau\circ \nu=\pi\circ \chi$. Furthermore, because $\chi$ satisfies (\ref{eqn: cocycle cond semidir}) and $\nu=\rho_0$ is surjective, we have that for every $g,h\in G$, \begin{displaymath} \tau(gh)=\tau(g)^h+\tau(h), \end{displaymath} and so $\tau$ is a $1$-cocycle. Hence, $\tau$ can be represented as a cohomology class in $\operatorname{H}^1(G,I/\Image\rho_1)\cong \YExt^1(R,I/\Image\rho_1)$ by a Yoneda extension of the form \[\begin{tikzcd} 0 \rar & I/\Image\rho_1 \rar & E_{\tau} \rar & R \rar & 0. \end{tikzcd}\] \begin{remark}\label{rmk: conrad choice r1} The choices of the $G$-module $I$ and the cocycle $\tau$, and consequently $E_{\tau}$, only depend on $\Image\rho_1\leq M_1$.
\end{remark} Finally, we can construct the element $\Upsilon(\psi)\in\YExt^{n+1}(R,A)$ given by the Yoneda extension \begin{equation}\label{eq:Upsilon} 0 \longrightarrow A \longrightarrow M_n \longrightarrow \cdots \longrightarrow M_2 \longrightarrow I \longrightarrow E_{\tau} \longrightarrow R \longrightarrow 0. \end{equation} This construction gives rise to a group isomorphism $$\Upsilon\colon \XExt^n(G,A)\longrightarrow \YExt^{n+1}(R,A).$$ \begin{proposition} Let $G$ be a finite group, let $A$ and $B$ be $G$-modules and let $n,m\geq 1$ be integers. Then, under the isomorphism $\Upsilon$, the Yoneda product \[\begin{tikzcd} \YExt^n(A,B) \otimes \XExt^m(G,A) \rar & \XExt^{n+m}(G,B) \end{tikzcd}\] coincides with the Yoneda product \[\begin{tikzcd} \YExt^n(A,B) \otimes \YExt^{m+1}(R,A) \rar & \YExt^{n+m+1}(R,B). \end{tikzcd}\] In particular, if $A=B=R$, the above product coincides with the cup product \[\begin{tikzcd} \cup: \;\operatorname{H}^n(G;R) \otimes \operatorname{H}^{m+1}(G;R) \rar & \operatorname{H}^{n+m+1}(G;R). \end{tikzcd}\] \end{proposition} \begin{proof} Let $\varphi\in \YExt^n(A,B)$ be a class represented by an extension \[\begin{tikzcd} 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar{\mu_0} & A \rar & 0, \end{tikzcd}\] and let $\psi\in\XExt^m(G,A)$ be a class represented by an extension \[\begin{tikzcd} 0 \rar & A \rar{\rho_m} & M_m \rar & \cdots \rar & M_2 \rar{\rho_1} & M_1 \rar & G \rar & 1, \end{tikzcd}\] with $M_2$ abelian. We need to prove that $\Upsilon(\varphi\cup\psi)=\varphi\cup\Upsilon(\psi)$. By \eqref{eq:Upsilon}, for $m>1$, the extension $\Upsilon(\psi)\in\YExt^{m+1}(R,A)$ is of the form \[\begin{tikzcd}[column sep=2em] 0 \rar & A \rar & M_m \rar & \cdots \rar & M_2 \rar & I \rar & E_{\tau} \rar & R \rar & 0, \end{tikzcd}\] and $\varphi\cup\psi\in\XExt^{n+m}(G,B)$ is represented by the crossed $(n+m)$-fold extension \[\begin{tikzcd}[column sep=1.5em] 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar & M_m \rar & \cdots \rar & M_1 \rar & G \rar & 1.
\end{tikzcd}\] By Remark~\ref{rmk: conrad choice r1}, we can use the same $I$ and $\tau$ in the construction of $\Upsilon(\psi)$. Therefore, $\Upsilon(\varphi\cup\psi)\in\YExt^{n+m+1}(R,B)$ is represented by \[\begin{tikzcd}[column sep=0.95em] 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar & M_m \rar & \cdots \rar & M_2 \rar & I \rar & E_{\tau} \rar & R\rar & 0, \end{tikzcd}\] which coincides with $\varphi\cup\Upsilon(\psi)$. For $m=1$, we have that $\psi\in\XExt^1(G,A)$ is represented by a crossed 1-fold extension of the form \[\begin{tikzcd} 0 \rar & A \rar{\rho_1} & M_1 \rar & G \rar & 1. \end{tikzcd}\] Then, $\varphi\cup\psi$ is given by the crossed $(n+1)$-fold extension \[\begin{tikzcd} 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar{\gamma_1} & M_1 \rar & G\rar & 1, \end{tikzcd}\] where $\gamma_1=\rho_1\circ \mu_0$. Now, we have that $\Image \gamma_1= \Image \rho_1$, and so we can once again use the same $I$ and $\tau$ in the construction of both $\Upsilon(\psi)$ and $\Upsilon(\varphi\cup\psi)$. Therefore, both $\varphi\cup\Upsilon(\psi)$ and $\Upsilon(\varphi\cup\psi)$ are given by the same extension \[\begin{tikzcd}[column sep=2em] 0 \rar & B \rar & N_n \rar & \cdots \rar & N_1 \rar & I \rar & E_{\tau} \rar & R\rar & 0. \end{tikzcd}\] Finally, if $A=B=R$ then the Yoneda product of Yoneda extensions coincides with the cup product of cohomology classes. \end{proof} \section{Finite $p$-groups of depth one mod-$p$ cohomology}\label{sec:depthonepgroups} Let $p$ be an odd prime. For each integer $r$ with $1<r<p-1$, the finite $p$-group $G_r$, described in \eqref{eq:Grassemidirectproduct}, is generated by the elements $\sigma,a_1,\dotsc,a_r$ satisfying the following relations: \begin{itemize} \item $\sigma^p=a_i^p=[a_i,a_j]=[a_r,\sigma]=1$, for $i=1,\dotsc,r$ and $j=1,\dotsc,r-1$, \item $[a_j,\sigma]=a_{j+1}$ for $j=1,\dotsc,r-1$. \end{itemize} The aim of this section is to prove Theorem \ref{thm:mainresultintro}.
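The presentation above lends itself to a direct machine check. The following Python sketch (for the sample values $p=5$, $r=3$, chosen by us for illustration only) realizes $G_r$ as the semidirect product \eqref{eq:Grassemidirectproduct} with elements $(k,v)=\sigma^k v$, and verifies the listed relations, the order $p^{r+1}$, the exponent $p$, and the equalities $Z(G_r)=\gen{a_r}$ and $C_{G_r}(E)=E$ for $E=\gen{\sigma,a_r}$.

```python
from itertools import product

p, r = 5, 3  # p > 3 prime, 1 < r < p - 1

def act(v, k):
    # sigma^k acts on v by a_j -> a_j * a_{j+1} (j < r) and a_r -> a_r.
    v = list(v)
    for _ in range(k % p):
        v = [(v[j] + (v[j - 1] if j > 0 else 0)) % p for j in range(r)]
    return tuple(v)

def mul(g, h):
    # (sigma^k1 v1)(sigma^k2 v2) = sigma^(k1+k2) (v1^(sigma^k2)) v2
    (k1, v1), (k2, v2) = g, h
    return ((k1 + k2) % p, tuple((a + b) % p for a, b in zip(act(v1, k2), v2)))

def inv(g):
    k, v = g
    return ((-k) % p, tuple((-a) % p for a in act(v, -k)))

def comm(g, h):
    return mul(mul(inv(g), inv(h)), mul(g, h))

e = (0, (0,) * r)
sigma = (1, (0,) * r)
a = [(0, tuple(int(i == j) for i in range(r))) for j in range(r)]  # a[0] = a_1

def power(g, m):
    h = e
    for _ in range(m):
        h = mul(h, g)
    return h

# Defining relations of G_r.
assert all(comm(a[j], sigma) == a[j + 1] for j in range(r - 1))
assert comm(a[r - 1], sigma) == e
assert all(comm(a[i], a[j]) == e for i in range(r) for j in range(r))

G = [(k, v) for k in range(p) for v in product(range(p), repeat=r)]
assert len(G) == p ** (r + 1)            # |G_r| = p^(r+1)
assert all(power(g, p) == e for g in G)  # exponent p

# Z(G_r) = <a_r>: it suffices to test commutation with the generators sigma, a_1.
center = [g for g in G
          if mul(g, sigma) == mul(sigma, g) and mul(g, a[0]) == mul(a[0], g)]
assert sorted(center) == sorted(power(a[r - 1], m) for m in range(p))

# C_{G_r}(E) = E for E = <sigma, a_r> (a_r is central, so only sigma matters).
E = sorted(mul(power(sigma, i), power(a[r - 1], j))
           for i in range(p) for j in range(p))
cent_E = [g for g in G
          if mul(g, sigma) == mul(sigma, g) and mul(g, a[r - 1]) == mul(a[r - 1], g)]
assert sorted(cent_E) == E
```

The last two assertions are exactly the group-theoretic inputs to the depth bounds applied to $G_r$ below.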
Consider the elementary abelian $p$-group $E=\gen{\sigma,a_r}$ with centralizer $C_{G_r}(E)$ in $G_r$ equal to $E$. By Proposition \ref{prop:DuflotNotbohm}, we have that \begin{equation}\label{eq:lowerupperbound} 1=\rk_p(Z(G_r))\leq \depth \operatorname{H}^{*}(G_r;\mathbb{F}_p)\leq \depth \operatorname{H}^{*}(C_{G_r}(E);\mathbb{F}_p)=2. \end{equation} To show the result, we construct a non-trivial mod-$p$ cohomology class of $G_r$ that restricts trivially to the mod-$p$ cohomology of the centralizers of all rank 2 elementary abelian subgroups of $G_r$. Then, $\omega_d(G_r)=1$ and, combining Theorem \ref{thm:DepthCarlson} with the lower bound in \eqref{eq:lowerupperbound}, we obtain $\depth \operatorname{H}^{*}(G_r;\mathbb{F}_p)=\omega_a(G_r)=\omega_d(G_r)=1$; in particular, $G_r$ satisfies the equalities in Conjecture \ref{conj:Carlson}. \subsection{Construction}\label{sec:constructionthetar} We follow the assumptions in \nameref{notation} and, additionally, suppose that $p>3$. In this section, we construct, for each integer $r$ with $1<r<p-1$, a cohomology class $\theta_r \in \operatorname{H}^3(G_r;\mathbb{F}_p)$ that is a cup product of a Yoneda 1-fold extension and a crossed $2$-fold extension. We start by defining a cohomology class $\sigma^*\in \operatorname{H}^1(G_r;\mathbb{F}_p)=\Hom(G_r,\mathbb{F}_p)$. To that aim, for each $r$, consider the homomorphism $\sigma^*\colon G_r\longrightarrow \mathbb{F}_p$ satisfying \begin{align*} \sigma^*(\sigma)=1, \quad \sigma^*(a_1)=\dotsb =\sigma^*(a_r)=0. \end{align*} The class $\sigma^*$ can be represented by the Yoneda extension \[\begin{tikzcd} 1 \rar & C_p=\gen{a_{r+2}} \rar & C_p\times C_p \rar & C_p=\gen{a_{r+1}} \rar & 1, \end{tikzcd}\] where the action of $G_r$ on $C_p\times C_p=\gen{a_{r+1},a_{r+2}}$ is described by \begin{displaymath} \text{for}\; \; g\in G_r, \;\; \text{set}\;\; a_{r+1}^{g}= a_{r+1}a_{r+2}^{\sigma^*(g)}, \quad a_{r+2}^{g}=a_{r+2}. \end{displaymath} We continue by defining a crossed 2-fold extension $\eta_r\in \operatorname{H}^2(G_r;\mathbb{F}_p)$ as follows.
For $r>1$, let \[ \lambda_r\colon T_0/T_{r+1}\times T_0/T_{r+1}\longrightarrow T_0/T_{r+1} \] be the alternating bilinear map satisfying $\lambda_r(a_{r-1},a_{r})=a_{r+1}$. Now, define $(T_0/T_{r+1})_{\lambda_r}$ to be the group with underlying set $T_0/T_{r+1}$ and with group operation given by \begin{displaymath} \text{for} \;\; x,y \in T_0/T_{r+1}\;\; \text{we define,}\;\; x\cdot_{\lambda_r}y= xy\lambda_r(x,y)^{1/2}. \end{displaymath} Finally, define the $p$-group $\widehat G_r=C_p\ltimes (T_0/T_{r+1})_{\lambda_r}$ of size $\abs{\widehat G_r}=p^{r+2}$ and exponent $p$. Let $\eta_r\in \operatorname{H}^2(G_r,\mathbb{F}_p)$ be the cohomology class represented by the crossed 2-fold extension \begin{equation}\label{eq:etar} 1 \longrightarrow C_p=\gen{a_{r+1}} \longrightarrow \widehat G_r \longrightarrow G_r \longrightarrow 1. \end{equation} Then, we define the cohomology class $\theta_r=\sigma^*\cup\eta_r\in \operatorname{H}^3(G_r;\mathbb{F}_p)$, which is represented by the crossed 3-fold extension \begin{equation}\label{eq:thetar} 1 \longrightarrow C_p \longrightarrow C_p\times C_p \longrightarrow \widehat G_r \longrightarrow G_r \longrightarrow 1. \end{equation} \subsection{Non-triviality} In the present section we will show that the cohomology class $\theta_r$ described in \eqref{eq:thetar} is non-trivial. \begin{proposition}\label{prop: theta =/= 0} Let $p>3$ be a prime number. For each integer $r$ with $1<r<p-1$, let $\theta_r\in \operatorname{H}^3(G_r;\mathbb{F}_p)$ be the cohomology class constructed in \eqref{eq:thetar}. Then, $\theta_r\not=0$. \end{proposition} \begin{proof} Assume by contradiction that $\theta_r =0$. 
Then, by Proposition~\ref{prop: Equivalent Crossed X Diagram} there exists a group $X$ such that the following diagram commutes: \[\begin{tikzcd}[column sep={4.em,between origins},row sep=2em] & 1 \drar & & & & 1 & \\ & & C_p\times C_p \drar[swap]{\mu} \arrow[rr] & & \widehat G_r \drar \urar & & \\ 1 \rar & C_p \urar \arrow[dr,Equal] & & X \urar[swap]{\nu} \drar & & G_r \rar & 1 \\ & & C_p \urar \arrow[rr] & & G_r \arrow[ur,Equal] \drar & & \\ & 1 \urar & & & & 1. & \end{tikzcd}\] We have that $X=\gen{\bar{\sigma},\bar a_1,\dotsc,\bar a_{r+2}}$ with elements $\bar \sigma,\bar a_1,\dotsc,\bar a_{r+1}, \bar a_{r+2}\in X $ that satisfy \[ \bar a_{r+2}=\mu(a_{r+2}), \;\; \nu(\bar \sigma)=\sigma \;\; \text{ and }\; \nu(\bar a_i)=a_i \;\text{for all}\; i=1,\dotsc,r+1, \] and we have that $Z(X)=\gen{\bar a_{r+2}}$ and $\gamma_r(X)=\gen{\bar a_r,\bar a_{r+1}, \bar a_{r+2}}$. Consider the normal subgroup \begin{displaymath} Y=\gen{\bar a_{r-1},\bar a_r,\bar a_{r+1}, \bar a_{r+2}}\unlhd X, \end{displaymath} which fits into the following commutative diagram: \[\begin{tikzcd}[column sep={4.em,between origins},row sep=2em] & 1 \drar & & & & 1 & \\ & & \gen{a_{r+1},a_{r+2}} \drar \arrow[rr] & & \gen{a_{r-1},a_r,a_{r+1}} \drar \urar & & \\ 1 \rar & \gen{a_{r+2}} \urar \arrow[dr,Equal] & & Y \urar[swap] \drar & & \gen{a_{r-1},a_r} \rar & 1 \\ & & \gen{a_{r+2}} \urar \arrow[rr] & & \gen{a_{r-1},a_r} \arrow[ur,Equal] \drar & & \\ & 1 \urar & & & & 1. & \end{tikzcd}\] Then, we have that $Z(Y)=\gen{\bar a_{r+1},\bar a_{r+2}}$, and moreover, \[ \; \big[\bar \sigma,Y,\gamma_r(X)\big]=\big[\gamma_r(X),\gamma_r(X)\big]=1 \;\text{ and } \; \big[\gamma_r(X),\bar \sigma,Y\big]=\big[Z(Y),Y\big]=1. \] Therefore, the three subgroup lemma (see \cite[5.1.10]{Robinson96}) leads us to the conclusion that $\big[Y,\gamma_r(X),\bar \sigma\big]=1$. 
Nevertheless, a direct computation shows that \begin{displaymath} \big[Y,\gamma_r(X),\bar \sigma\big]=\big[Z(Y),\bar\sigma\big]=Z(X)\not=1, \end{displaymath} which gives a contradiction. Hence, $\theta_r\neq 0$. \end{proof} \subsection{Trivial restriction} In this section we show that for every elementary abelian subgroup $E$ of $G_r$ of $p$-rank $\rk_p E=2$, the image of $\theta_r$ via the restriction map, $$ \res_{C_{G_r}(E)}^{G_r}\colon \operatorname{H}^3(G_r;\mathbb{F}_p) \longrightarrow \operatorname{H}^3(C_{G_r}(E);\mathbb{F}_p), $$ is trivial, i.e., $\res^{G_r}_{C_{G_r}(E)}\theta_r=0$. This will imply that the cohomology class $\theta_r$ is not detected by $\mathcal{H}_2(G_r)$, and so $\omega_d(G_r)=1$. \begin{proposition}\label{prop: res C(E) theta = 0} Let $p>3$ be a prime number and let $r$ be an integer such that $1<r<p-1$. Let $E\leq G_r$ be an elementary abelian subgroup with $\rk_p E=2$. Then, $\res^{G_r}_{C_{G_r}(E)}\theta_r=0$. \end{proposition} \begin{proof} There are two types of elementary abelian subgroups $E\leq G_r$: either $E\leq \gen{a_1,\dotsc,a_r}$ or $E\not\leq \gen{a_1,\dotsc,a_r}$. Assume first that $E\leq \gen{a_1,\dotsc,a_r}$. Then, $C_{G_r}(E)=\gen{a_1,\dotsc,a_r}$ and we have that $\res^{G_r}_{C_{G_r}(E)}\sigma^*=0$. Therefore, \[ \res^{G_r}_{C_{G_r}(E)} \theta_r =(\res^{G_r}_{C_{G_r}(E)} \sigma^*)\cup(\res^{G_r}_{C_{G_r}(E)}\eta_r)=0. \] Assume now that $E\not\leq \gen{a_1,\dotsc,a_r}$. Then, $E=\gen{b, a_r}$ with $b=\sigma x$ for some $x\in\gen{a_1,\dotsc,a_{r-1}}$, and $C_{G_r}(E)=E$. Moreover, $\res^{G_r}_{C_{G_r}(E)}\eta_r$ is represented by the extension that is obtained by taking the pullback of $\eta_r$ via the inclusion $E\lhook\joinrel\longrightarrow G_r$, as illustrated in the following diagram \[\begin{tikzcd} 1 \rar & \gen{a_{r+1}} \rar \dar[Equal] & \widehat E =\gen{b, a_r,a_{r+1}} \rar \dar & E \rar \dar & 1 \\ 1 \rar & \gen{a_{r+1}} \rar & \widehat G_r \rar & G_r \rar & 1.
\end{tikzcd}\] Observe that $\widehat E\cong C_p\ltimes(C_p\times C_p)$ is the extraspecial group of order $p^3$ and exponent $p$. Hence, $\res^{G_r}_{C_{G_r}(E)}\eta_r$ is represented by the extension \begin{equation}\label{eq:extraspecialextensionclass} 1 \longrightarrow C_p=\gen{a_{r+1}} \rightarrow \widehat E=C_p\ltimes(C_p\times C_p) \rightarrow C_p\times C_p=\gen{b,a_r} \rightarrow 1. \end{equation} It can be readily checked (following the construction in \cite[Section IV.3]{Brown82}) that the extension class of \eqref{eq:extraspecialextensionclass} coincides with the cup-product $b^*\cup a_r^*$, and so $\res^{G_r}_{C_{G_r}(E)}\eta_r= b^*\cup a_r^*$. Consequently, \begin{displaymath} \res^{G_r}_{C_{G_r}(E)}\theta_r = (\res^{G_r}_{C_{G_r}(E)}\sigma^*) \cup b^*\cup a_r^* = 0, \end{displaymath} as the product of any three elements of degree one is trivial in $\operatorname{H}^3(E;\mathbb{F}_p)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:mainresultintro}] In \eqref{eq:lowerupperbound}, we obtained that $1\leq \depth\operatorname{H}^{*}(G_r;\mathbb{F}_p)\leq 2$. In Proposition \ref{prop: theta =/= 0}, we constructed a cohomology class $\theta_r\in \operatorname{H}^{3}(G_r;\mathbb{F}_p)$ that is non-trivial and that, for every elementary abelian subgroup $E\leq G_r$ of rank 2, satisfies $\res^{G_r}_{C_{G_r}(E)}\theta_r=0$ (see Proposition \ref{prop: res C(E) theta = 0}). This implies that $ \omega_d(G_r)=\rk_p Z(G_r) = 1$. Then, by Theorem \ref{thm:DepthCarlson}, we conclude that $\depth\operatorname{H}^{*}(G_r;\mathbb{F}_p)=\rk_p Z(G_r)=1$. \end{proof} \section{Remarks and further work} Let $p$ be an odd prime number and let $r$ be an integer with $r\geq p-1$. Consider the finite $p$-groups $G_r=C_p\ltimes T_0/T_r$ defined in \eqref{eq:Grassemidirectproduct}. For each prime $p$, if $r=p-1$, then $G_r$ has size $p^{r+1}$, has exponent $p$ and is of maximal nilpotency class; while if $r>p-1$, then $G_r$ has size $p^{r+1}$ and exponent bigger than $p$.
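Writing, for illustration, $G_r$ in the semidirect-product form $C_p\ltimes \mathbb{F}_p^{\,r}$ with $\sigma$ acting by the unipotent Jordan block $M=I+N$ (an identification we use only as a sketch; it matches the defining relations when the normal subgroup is elementary abelian), the exponent claim can be checked directly: for $g=\sigma\, a(v)$ one computes
\[
  g^{p}=a\Bigl(\sum_{j=0}^{p-1}M^{j}v\Bigr),
  \qquad
  \sum_{j=0}^{p-1}M^{j}=\sum_{i\geq 0}\binom{p}{i+1}N^{i}\equiv N^{p-1} \pmod{p},
\]
since $\binom{p}{i+1}\equiv 0 \pmod p$ for $0\leq i\leq p-2$. As the $r\times r$ nilpotent block satisfies $N^{p-1}=0$ exactly when $r\leq p-1$, in this model every element has order dividing $p$ for $r\leq p-1$, while for $r>p-1$ any element $\sigma\, a(v)$ with $N^{p-1}v\neq 0$ has order bigger than $p$, consistently with the remark above.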
In particular, for the $p=3$ and $r=2$ case, $G_2$ is the extraspecial $3$-group of order $27$ and exponent $3$, and it is known that the depth of its mod-$3$ cohomology ring is $2$ (compare \cite{Leary92} and \cite{Minh01}). We believe that this phenomenon occurs in greater generality; namely, for every prime number $p\geq 3$ and every integer $r\geq p-1$, the equality $\depth \operatorname{H}^{*}(G_r;\mathbb{F}_p)=2$ should hold. For these groups, if we mimic the construction of the mod-$p$ cohomology class $\theta_r$ in Section \ref{sec:constructionthetar}, it is no longer true that its restriction to the mod-$p$ cohomology of the centralizers of all rank 2 elementary abelian subgroups of $G_r$ vanishes. We propose the following conjecture. \begin{conjecture} Let $p$ be an odd prime, let $r\geq p-1$ be an integer, and let $$G_r=C_p\ltimes T_0/T_r$$ be as in \eqref{eq:Grassemidirectproduct}. Then $\operatorname{H}^{*}(G_r;\mathbb{F}_p)$ has depth $2$. \end{conjecture} The above conjecture is known to be true in the particular cases where $p=3$ and $r=2$ or $r=3$. In these two cases the mod-$p$ cohomology rings have been calculated computationally (see \cite{SKing}). Another argument supporting the conjecture is that, for a fixed prime $p$ and $r\geq p-1$, the groups $G_r$ have isomorphic mod-$p$ cohomology groups; not as rings, but as $\mathbb{F}_p$-modules (see \cite{Garaialde18}). This last isomorphism comes from a universal object described in the category of cochain complexes together with a quasi-isomorphism that induces an isomorphism at the level of spectral sequences.
\section{Introduction} \label{sec:intro} Traditional multispectral images (MSIs, \textit{e}.\textit{g}. RGB images) usually contain a reduced number of spectral bands providing limited spectral information. It is well-known that the more spectral bands we have, the better we can understand the latent spectral structure. Since hyperspectral imaging can acquire many more spectral bands, it has become an essential technology for capturing the intrinsic properties of different materials. However, due to the physical limitation of imaging sensors, there is a trade-off between the spatial resolution and the spectral resolution in a hyperspectral image (HSI), which makes it difficult to obtain an HSI with a high spatial resolution. In this context, hyperspectral image super-resolution by fusing a low-resolution hyperspectral image (LR-HSI) with a high-resolution multispectral image (HR-MSI) is a promising way to address the problem. \begin{figure}[!t] \begin{center} \begin{minipage}{ 0.98\linewidth} {\includegraphics[width=1\linewidth]{intro1.png}} \vspace{0.1pt} \end{minipage} \begin{minipage}{ 0.98\linewidth} \begin{minipage}{ 0.32\linewidth} {\includegraphics[width=1\linewidth]{intro-lttr.png}} \centering {(a) LTTR \cite{LTTR}} \end{minipage} \begin{minipage}{ 0.32\linewidth} {\includegraphics[width=1\linewidth]{intro-mhf.png}} \centering {(b) MHFnet \cite{xie2019multispectral}} \end{minipage} \begin{minipage}{ 0.32\linewidth} {\includegraphics[width=1\linewidth]{intro-proposed.png}} \centering {(c) HSRnet} \end{minipage} \centering \end{minipage} \end{center} \caption{First row: the schematic diagram of hyperspectral image super-resolution on a test image from the Harvard dataset ($ h $ and $ w $ represent the height and width of LR-HSI, $ H $ and $ W $ denote the height and width of HR-MSI, $ s$ and $ S $ denote the spectral band number of HR-MSI and LR-HSI, respectively). The right image is the ground-truth HR-HSI, $ \mathcal{X}$. Second row: the results obtained by (a) LTTR (PSNR = 41.20dB), (b) MHFnet (PSNR = 38.70dB), and (c) the proposed HSRnet (PSNR = 43.93dB), where PSNR stands for the peak signal-to-noise ratio.
Note that all the images are displayed with pseudo-color red, green, and blue (RGB) format using R = 28-th band, G = 12-th band, and B = 1-st band. Besides, MHFnet and HSRnet are both trained on the same CAVE dataset.}\label{fig:topimg} \end{figure} Many researchers have focused on hyperspectral image super-resolution to increase the spatial resolution of the LR-HSI, proposing several algorithms. The latter are mainly based on the following model: \begin{equation}\label{eq:re1} \begin{aligned} \mathbf{Y} = \mathbf{XBS}, ~~ \mathbf{Z} = \mathbf{RX}, \end{aligned} \end{equation} where $\mathbf{Y} \in \mathbb{R}^{S \times hw}$, $ \mathbf{Z} \in \mathbb{R}^{s \times HW}$ and $\mathbf{X} \in \mathbb{R}^{S \times HW}$ represent the mode-3 unfolding matrices of the LR-HSI ($ \mathcal{Y} \in \mathbb{R}^{h \times w \times S}$), the HR-MSI ($ \mathcal{Z} \in \mathbb{R}^{H \times W \times s}$) and the latent HR-HSI ($ \mathcal{X} \in \mathbb{R}^{H \times W \times S}$), respectively, $ h $ and $ w $ represent the height and width of the LR-HSI, $ H $ and $ W $ denote the height and width of the HR-MSI, $ s$ and $ S $ denote the spectral band number of the HR-MSI and the LR-HSI, respectively. Additionally, $ \mathbf{B} \in \mathbb{R}^{HW \times HW} $ is the blur matrix, $ \mathbf{S} \in \mathbb{R}^{HW \times hw} $ denotes the downsampling matrix, and $ \mathbf{R} \in \mathbb{R}^{s \times S} $ represents the spectral response matrix. It is worth remarking that, coherently with the notation adopted above, in this paper we denote scalars, matrices, and tensors with non-bold, bold upper case, and calligraphic upper case letters, respectively. Based on the model in (\ref{eq:re1}), many related approaches have been proposed. Different prior knowledge or regularization terms are integrated into those methods. However, the spectral response matrix $ \mathbf{R} $ is usually unknown, thus the traditional methods need to select or estimate the matrix $ \mathbf{R} $ and other involved parameters.
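As a concrete illustration of the observation model in (\ref{eq:re1}), the following minimal NumPy sketch simulates $\mathbf{Y}=\mathbf{XBS}$ and $\mathbf{Z}=\mathbf{RX}$; all sizes, the box blur standing in for $\mathbf{B}$, the decimation standing in for $\mathbf{S}$, and the random spectral response $\mathbf{R}$ are toy assumptions, with the two spatial operators applied functionally rather than as explicit matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, S, s, ratio = 16, 16, 31, 3, 4      # toy sizes (real HSIs are larger)
h, w = H // ratio, W // ratio

X = rng.random((S, H * W))                # mode-3 unfolding of the latent HR-HSI

def blur(Xmat):
    """Stand-in for B: circular 3x3 box blur acting on the spatial index."""
    cube = Xmat.reshape(S, H, W)
    out = np.zeros_like(cube)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(cube, (di, dj), axis=(1, 2))
    return (out / 9.0).reshape(S, H * W)

def down(Xmat):
    """Stand-in for S: keep every `ratio`-th pixel in both spatial directions."""
    return Xmat.reshape(S, H, W)[:, ::ratio, ::ratio].reshape(S, h * w)

R = rng.random((s, S))
R /= R.sum(axis=1, keepdims=True)         # toy spectral response, rows sum to 1

Y = down(blur(X))                         # LR-HSI unfolding, shape (S, h*w)
Z = R @ X                                 # HR-MSI unfolding, shape (s, H*W)
```

The sketch makes explicit that the LR-HSI degrades the HR-HSI only spatially (blur plus decimation), while the HR-MSI degrades it only spectrally (band mixing by $\mathbf{R}$).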
Additionally, the related regularization parameters used in these kinds of approaches are often image-dependent. Recently, with the tremendous development of neural networks, deep learning has become a promising way to deal with the hyperspectral image super-resolution problem. In \cite{dian2018deep}, Dian \textit{et al}. mainly focus on recovering spatial details by learning image priors via a convolutional neural network (CNN). These learned priors have been included into a traditional regularization model to improve the final outcome, obtaining better image features than traditional regularization model-based methods. In \cite{xie2019multispectral}, Xie \textit{et al}. propose a model-inspired deep learning method for hyperspectral image super-resolution. This method has exhibited an ability to preserve the spectral information and spatial details, thus obtaining state-of-the-art hyperspectral image super-resolution results. However, deep learning-based approaches for hyperspectral image super-resolution also encounter some challenges. First of all, these methods sometimes have \textit{complicated architectures} with millions of parameters to estimate. Second, due to the complicated architecture and large-scale training data, \textit{expensive computation and storage} are usually involved. Third, deep learning-based methods are data-dependent, which usually implies a \textit{weak network generalization}. Thus, a model trained on a specific dataset could perform poorly on a different kind of dataset. Instead, \textit{the proposed network architecture can easily handle the above-mentioned drawbacks}. In this paper, the proposed network architecture (called HSRnet hereafter) can be decomposed into two parts. One part preserves the spectral information of the HR-HSI by upsampling the LR-HSI. The other part mainly recovers the spatial details of the HR-HSI by training a convolutional neural network with the high-frequency information of the HR-MSI and the LR-HSI as inputs.
By imposing the similarity between the network output and the reference (ground-truth) image, we can efficiently estimate the parameters involved in the network. In summary, this paper mainly consists of the following contributions: \begin{enumerate} \item The proposed network architecture is \textit{simple} and \textit{efficient}. As far as we know, it obtains better qualitative and quantitative performance than recent state-of-the-art hyperspectral image super-resolution methods. For example, our method shows significant improvements with respect to two state-of-the-art methods, one deep learning-based \cite{xie2019multispectral} and the other regularization-based \cite{LTTR}; see also Fig. \ref{fig:topimg}. Besides, the proposed architecture involves fewer network parameters than other deep learning-based approaches thanks to our simple network design; more details are presented in Sec. \ref{vs}. \item The network architecture has a \textit{promising generalization} ability, yielding competitive results for different datasets even though the network is trained only on a specific dataset. This is due to the use of high-pass filters to feed the network with high-frequency spatial information. Extensive experiments corroborate this conclusion, see Fig. \ref{F:harvard-1} and Tab. \ref{harvard10-ave}. \item Multi-scale information is integrated into our network architecture, which significantly improves the performance of the proposed method. The effectiveness of a multi-scale module has been proven in many computer vision works \cite{8767931,ms2,ren2016single,zhang2018multi,8931240,8049485} and is further discussed in Sec. \ref{newstruct}. \item The network shows a good \textit{robustness to the number of training samples}, which indicates that our method can achieve very high performance with different amounts of training data. Furthermore, \textit{shorter training and testing times} compared with a state-of-the-art deep learning-based approach (see Tab.
\ref{diffrernt-num-t}) have been observed. \end{enumerate} The rest of the paper is outlined as follows. Section \ref{related} reviews the related works on the hyperspectral super-resolution problem. Section \ref{main} introduces the proposed network architecture. In Section \ref{exp}, extensive experiments are conducted to assess the effectiveness of the proposed architecture. Furthermore, some discussions about the image spectral response, the network generalization, the computational burden, and the use of the multi-scale module are provided. \begin{figure*}[t] \begin{center} \begin{minipage}{ 0.82\linewidth} {\includegraphics[width=1\linewidth]{network_structure.png}} \centering {(a)} \end{minipage} \begin{minipage}{ 0.14\linewidth} {\includegraphics[width=1\linewidth]{resblock.png}} \centering {(b)} \end{minipage} \end{center} \caption{The flowchart of the proposed network architecture (HSRnet). (a) Architecture of our HSRnet. LR-HSI $\mathcal{Y}$ and HR-MSI $\mathcal{Z}$ are the two inputs, and $\mathcal{O}$ is the final output. (b) Illustration of one ResNet block with two layers and 64 kernels (size $3\times 3$) for each layer. }\label{structure} \end{figure*} \section{Related Works}\label{related} Hyperspectral image super-resolution is a popular topic, which is receiving more and more attention. In particular, the combination of hyperspectral data with higher spatial resolution multispectral images represents a fruitful scheme leading to satisfying results. Recent fusion or super-resolution approaches can be roughly categorized into two families: model-based approaches and deep learning-based methods. Model-based approaches are classic solutions.
Indeed, many works have already been published \cite{6502715,8444767,fu2019variational,kanatsoulis2018hyperspectral,zhang2018spatial,8768351,xing2018pansharpening,aiazzi2007improving,han2017hyperspectral,GLP-HS,1518950,CNMF,LTMR,LTTR,pan2019multispectral,zhang2016multispectral,yuan2018multiscale,zhang2019pan,dian2019multispectral,rong2014pansharpening,rong2012low,aly2014regularized,liu2017variational,7312998} for the super-resolution problem. For instance, Dian \textit{et al}. \cite{dian2019multispectral} exploit the spectral correlations and the non-local similarities by clustering the HR-MSI in order to create clusters with similar structures. A low tensor-train rank \cite{oseledets2011tensor} prior is used in the so-called LTTR method \cite{LTTR}. The tensor train (TT) rank consists of ranks of matrices formed by a well-balanced matricization scheme. The effectiveness of the low TT rank (LTTR) prior has been demonstrated in \cite{bengua2017efficient} for image and video reconstruction. Compared to normal matrix ranks, the tensor rank keeps more abundant information about the data cube. Then, the authors regard the super-resolution task as an optimization problem that, with the help of the low tensor-train rank constraint, has a satisfying solution under the well-known alternating direction method of multipliers (ADMM) \cite{bioucasdias2010alternating} framework. Deep learning-based methods have recently shown exceptional performance in the field of image super-resolution, see \textit{e}.\textit{g}. \cite{sr1,sr2,sr3,xie2019multispectral,dian2018deep,Palsson2017Multispectral,inproceedings,yang2018hyperspectral,shao2018remote,li2017hyperspectral,yao2018pixel,vitale2019cnn,palsson2018sentinel,han2019hyperspectral,liu2018deep,huang2015new,rao2017residual,liu2018psgan}. A powerful example is provided by the so-called PanNet developed in \cite{inproceedings}. Here, Yang \textit{et al}.
designed a new architecture, training the deep-learning network with high-pass filtered details rather than original images. This is done in order to simultaneously preserve the spatial and spectral structures. Thanks to the use of high-pass filters, a greater generalization capability is observed. Another instance of deep learning-based methods for solving the hyperspectral image super-resolution issue is provided in \cite{xie2019multispectral}, where a model-based deep learning method is proposed. The method exhibits a great ability to preserve structures and details, and obtains state-of-the-art results. Unlike other deep learning-based methods that mainly regard the image super-resolution issue as a simple regression problem, this approach is based on the generation mechanism of the HSI and the MSI to build a novel fusion model. It exploits the low-rankness of the HR-HSI along its spectral mode. Instead of solving the model by traditional alternating iterative algorithms, the authors design a deep network learning the proximal operators and model parameters by exploiting CNNs. \section{The Proposed HSRnet}\label{main} In this section, we first introduce the regularization-based model for the hyperspectral image super-resolution problem. Motivated by the above-mentioned model, we propose our network architecture that will be detailed in Sec. \ref{N_A}. \subsection{Problem Formulation} Estimating the HR-HSI from the LR-HSI and the HR-MSI is an ill-posed inverse problem. Thus, prior knowledge is introduced by exploiting regularization terms under the maximum a posteriori (MAP) framework.
Those methods can be formulated as: \begin{equation} \min_{\mathbf{X}} \mathcal{L} = \lambda_{1}f_{1}(\mathbf{X}, \mathbf{Y})+ \lambda_{2}f_{2}(\mathbf{X}, \mathbf{Z})+ R(\mathbf{X}), \end{equation} where $\mathbf{X}, \mathbf{Y}, \mathbf{Z}$ are the mode-3 unfolding matrices of the HR-HSI, LR-HSI, and HR-MSI tensors, respectively, which have been introduced in Sec. \ref{sec:intro}. $\lambda_{1}$ and $\lambda_{2}$ represent two regularization parameters, $ f_{1} $ and $ f_{2} $ force the spatial and spectral consistency, respectively, and $ R$ stands for the regularization term depending on the prior knowledge. In general, $ f_{1} $ and $ f_{2} $ are defined based on the relations in (\ref{eq:re1}), \textit{i}.\textit{e}., \begin{equation} \begin{aligned} f_{1} (\mathbf{X},\mathbf{Y}) &= \|\mathbf{Y}-\mathbf{X B S}\|_{F}^{2},\\ f_{2} (\mathbf{X},\mathbf{Z}) &= \|{\mathbf{Z}}-\mathbf{R} \mathbf{X}\|_{F}^{2}, \end{aligned} \end{equation} where $\left\| \mathbf{X} \right\|_F = \sqrt{\sum_{i}\sum_{j} x_{ij}^2}$ is the Frobenius norm. In particular, the regularization term $R$ is crucial for regularization-based methods. Deep learning can be viewed as the problem of estimating a function mapping input data to ground-truth (labeled) data. In our case, starting from the input images (\textit{i.e.}, LR-HSI and HR-MSI), we can estimate the mapping function $\mathit{f}$ by minimizing the following expression: \begin{equation}\label{mapping} \begin{aligned} \min_{\mathbf{\Theta}} ~\mathcal{L}=\left\|f_\mathbf{\Theta}(\mathbf{Y},\mathbf{Z})-\mathbf{X}\right\|_{F}^{2}, \end{aligned} \end{equation} where $\mathbf{Y}$ and $\mathbf{Z}$ are the LR-HSI and the HR-MSI, respectively, and $\mathbf{X}$ is the reference (ground-truth) HR-HSI. The mapping function $\mathit{f}$ can be implemented as a deep convolutional neural network, and thus $\mathbf{\Theta}$ represents the parameters of the network.
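For concreteness, the variational objective above can be evaluated with a few lines of NumPy; the identity blur, the simple decimation matrix, the random spectral response, and the smoothness term standing in for the prior $R(\mathbf{X})$ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
S, s, HW, hw = 31, 3, 64, 16
X = rng.random((S, HW))                   # candidate HR-HSI (mode-3 unfolding)
Y = rng.random((S, hw))                   # observed LR-HSI
Z = rng.random((s, HW))                   # observed HR-MSI

B = np.eye(HW)                            # placeholder blur matrix
Smat = np.zeros((HW, hw))                 # decimation: keep every 4th pixel
Smat[np.arange(hw) * (HW // hw), np.arange(hw)] = 1.0
R = rng.random((s, S)) / S                # placeholder spectral response

f1 = np.linalg.norm(Y - X @ B @ Smat, 'fro') ** 2   # spatial consistency
f2 = np.linalg.norm(Z - R @ X, 'fro') ** 2          # spectral consistency
prior = np.abs(np.diff(X, axis=0)).sum()            # illustrative prior R(X)
lam1, lam2 = 1.0, 1.0
objective = lam1 * f1 + lam2 * f2 + prior
```

A regularization-based solver would minimize this objective over $\mathbf{X}$, whereas the deep learning formulation in (\ref{mapping}) instead optimizes the network parameters $\mathbf{\Theta}$.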
Besides, the prior knowledge can be viewed as being implicitly expressed by the learned parameters. In the next subsection, we will present the network architecture recasting the problem as in (\ref{mapping}), where the function $\mathit{f}$ is estimated thanks to several examples provided to the network during the training phase. \subsection{Network Architecture}\label{N_A} Fig. \ref{structure} shows the proposed HSRnet for the hyperspectral image super-resolution problem. From the figure, it is easy to see that we decompose the network into two parts, each preserving one of the most crucial characteristics of a hyperspectral image, \textit{i.e.,} the spectral information and the spatial details. \subsubsection{Spectral preservation} The LR-HSI $ \mathcal{Y} \in \mathbb{R}^{h \times w \times S}$ \footnote{We use a three-coordinate format to better represent the 3D hyperspectral image, \textit{i.e.}, ${h \times w \times S}$.} has the same spectral band number as the ground-truth HR-HSI $ \mathcal{X} \in \mathbb{R}^{H \times W \times S}$. Indeed, most of the spectral information of the HR-HSI is contained in the LR-HSI (the remaining part is due to the spectral information of the high resolution spatial details). To corroborate this, we plot the sampled spectral signatures obtained by the ground-truth HR-HSI $\mathcal{X}$ and by the corresponding upsampled LR-HSI $\mathcal{Y}^{U} \in \mathbb{R}^{H \times W \times S}$ in Fig. \ref{sp-gt-lms}. It is easy to note that the plots are very close to each other, indicating that $\mathcal{Y}^{U}$ holds most of the spectral content of $\mathcal{X}$. Therefore, in order to guarantee spectral preservation, we simply upsample $ \mathcal{Y}$, getting $\mathcal{Y}^{U}$ (as shown in the top part of Fig. \ref{structure}(a)).
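The spectral-preservation argument can be illustrated numerically; in this toy sketch, bilinear interpolation (`scipy.ndimage.zoom`) stands in for the upsampling step, and the synthetic smooth spectrum is an assumption made only for the check.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
h, w, S, ratio = 8, 8, 31, 4

# toy LR-HSI: the same smooth spectrum at every pixel, up to a spatial gain
spectrum = np.sin(np.linspace(0.0, np.pi, S))
Y = spectrum[None, None, :] * (1.0 + 0.1 * rng.random((h, w, 1)))

# upsample the two spatial axes only (order=1 = bilinear); spectra untouched
Y_up = zoom(Y, (ratio, ratio, 1), order=1)

# the spectral signature at an HR pixel stays proportional to the LR one
corr = np.corrcoef(Y_up[10, 20], spectrum)[0, 1]
assert corr > 0.99
```

Since upsampling acts only on the spatial axes, every high-resolution pixel inherits a convex combination of nearby low-resolution spectra, which is why $\mathcal{Y}^{U}$ retains the spectral content of $\mathcal{Y}$.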
\begin{figure}[t] \begin{center} {\includegraphics[height=0.65\linewidth,width=0.9\linewidth]{sp-up-gt-out-eps-converted-to.pdf}} \caption{Sampled spectral signatures for the object at pixel (175, 400) as obtained by the (ground-truth) HR-HSI, the upsampled LR-HSI $\mathcal{Y}^{U}$, and the estimated version of the high resolution HSI exploiting the proposed HSRnet.}\label{sp-gt-lms} \end{center} \end{figure} Admittedly, $\mathcal{Y}^{U}$ is able to preserve the spectral information, but many spatial details are lost (which can retain part of the spectral information). Instead, the proposed HSRnet can learn the spectral information of the HR-HSI, even preserving the spatial counterpart. As a result, the final outcome of the proposed HSRnet clearly shows an almost perfect spectral preservation, see Fig. \ref{sp-gt-lms}. \subsubsection{Spatial preservation} Since the HR-MSI $\mathcal{Z} \in \mathbb{R}^{H \times W \times s}$ contains high spatial resolution information, we aim to use $\mathcal{Z}$ to extract spatial details and inject them into the final hyperspectral super-resolution image. Moreover, $\mathcal{Y}$ still contains some spatial details, thus we also consider employing $\mathcal{Y}$ to extract them. However, we do not simply concatenate $\mathcal{Z}$ and $\mathcal{Y}$ together and feed them into the network, because that would not lead to a satisfying detail preservation. Indeed, we first calculate the spatial details at the LR-HSI scale, called $\mathcal{Y}_{HP}$ in Fig. \ref{structure}. In particular, we simply obtain them by high-pass filtering the LR-HSI. Moreover, we add other details at the same scale by extracting them from the HR-MSI $\mathcal{Z}$. This is done by filtering and then downsampling the HR-MSI $\mathcal{Z}$, getting $\mathcal{Z}_{HP}^{D}$, see Fig. \ref{structure} again. This information has the advantage of occupying less memory and requiring less computational burden to be processed compared to the original information in $\mathcal{Z}$.
Finally, we concatenate this information, \textit{i.e.} $\mathcal{Y}_{HP}$ and $\mathcal{Z}_{HP}^{D}$, to get $\mathcal{C}_0 \in \mathbb{R}^{h \times w \times (S+s)}$. In order to complete the multiresolution analysis, thus introducing a multi-scale module in our network, the details at the HR-MSI scale are also extracted. This is performed by simply filtering the HR-MSI using a properly designed high-pass filter. These details, denoted as $\mathcal{Z}_{HP}$, can be concatenated with $\mathcal{C}_0$ (\textit{i.e.}, the details at the lower scale) after the latter is properly convolved and upsampled to the HR-MSI scale. Thus, $\mathcal{C}_1 \in \mathbb{R}^{H \times W \times (64+s)}$ indicates the concatenation of the details at two different scales (the LR-HSI one and the HR-MSI one). This represents the input of the ResNet implementing the well-known concept of multi-resolution analysis often considered in previous works (\textit{e}.\textit{g}. \cite{ms2,ren2016single,zhang2018multi,8931240,8049485}) either by designing diverse kernel sizes for convolution \cite{ms2,ren2016single} or by extracting different spatial resolutions by filtering the input data \cite{zhang2018multi,8931240,8049485}. It is worth remarking that the high-pass filtering step is realized by subtracting from the original image its low-pass version, which is obtained by an average filter with a kernel size equal to $6 \times 6$. The upsampling operation is implemented by deconvolving with a kernel of size $6 \times 6$. Moreover, the concatenation operator consists in adding the multispectral bands with high spatial resolution (3 bands, RGB image) to the hyperspectral bands (as shown in Fig. \ref{structure}). In this work, the red, the green, and the blue slices of $\mathcal{Z}_{HP}^{D}$ and $\mathcal{Z}_{HP}$ are inserted as the head, the middle, and the tail frontal slices to complement the spectral information of the hyperspectral image. Fig.
\ref{resnet} shows a comparison between $\mathcal{E}$ and $\mathcal{E}_{gt}$. From the figure, it is clear that $\mathcal{E}$ (\textit{i.e.}, the details extracted by the proposed approach) and $\mathcal{E}_{gt}$ (\textit{i.e.}, the details extracted by using the reference image) are very close to each other validating the effectiveness of the proposed network design. This result is only obtained thanks to the use of a multi-scale module combining details at two different scales guaranteeing a better spatial detail content in input of the ResNet. \begin{figure}[t] \begin{center} \begin{minipage}{ 0.4\linewidth} {\includegraphics[width=1\linewidth]{output-lms-eps-converted-to.pdf}} \centering {(a)} \end{minipage} \begin{minipage}{ 0.4\linewidth} {\includegraphics[width=1\linewidth]{gt-lms-eps-converted-to.pdf}} \centering {(b)} \end{minipage} \begin{minipage}{14pt} \centering {\includegraphics[height=98pt]{colobargray_01-eps-converted-to.pdf}} \vspace{2.5pt} \end{minipage} \caption{The residual maps: (a) $\mathcal{E}= \mathcal{O} - \mathcal{Y}^{U}$ and (b) $\mathcal{E}_{gt}= \mathcal{X} - \mathcal{Y}^{U}$.}\label{resnet} \end{center} \end{figure} \subsubsection{Loss function} After obtaining the spectral preserved $\mathcal{Y}^{U}$ image and the spatial preserved $\mathcal{E}$ image from the ResNet fed by the image cube $\mathcal{C}_1$, we subsequently add the two outputs together to get the outcome. 
Thus, the loss function exploited during the training phase to drive the estimation of the function mapping in (\ref{mapping}) can be defined as \begin{equation}\label{loss} \begin{aligned} \min_{\mathbf{\Theta}}\mathcal{L}=\left\|f_\mathbf{\Theta}(\mathcal{Y}_{HP},\mathcal{Z}_{HP}^{D}, \mathcal{Z}_{HP})+ \mathcal{Y}^{U}-\mathcal{X}\right\|_{F}^{2}, \end{aligned} \end{equation} where $f_\mathbf{\Theta}(\cdot)$ is the mapping function that has as input the details at the two different scales used to estimate the spatial preserved image $\mathcal{E}$ and the upsampled LR-HSI $\mathcal{Y}^{U}$. The loss function imposes the similarity between the network output $f_\mathbf{\Theta}(\mathcal{Y}_{HP},\mathcal{Z}_{HP}^{D},\mathcal{Z}_{HP}) + \mathcal{Y}^{U}$ and the reference (ground-truth) $\mathcal{X}$ image. \subsection{Network Training}\label{Sec-Train} \subsubsection{Training data} In this work, we mainly use the CAVE dataset \cite{yasuma2010generalized} for training the network. It contains 32 hyperspectral images with size $512 \times 512$ and $31$ spectral bands. Additionally, each hyperspectral image also has a corresponding RGB image with size $512 \times 512$ and $3$ spectral bands (\textit{i.e.}, the HR-MSI image). We selected $20$ images\footnote{We selected the same 20 images as for the training of the MHFnet.} for training the network, and the remaining $11$ images for testing\footnote{One image, \textit{i.e.}, ``Watercolors'', is discarded as it is unavailable for use.}, as done for the MHFnet in \cite{xie2019multispectral}. The CAVE test images are shown in Fig. \ref{cave_test}.
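The detail branch described above can be sketched as follows; the $6\times 6$ box-filter high-pass follows the text, while bilinear zooms stand in for the learned deconvolution and the channel-reducing convolution is replaced by the identity, so the cube built here has $S+2s$ channels instead of the paper's $64+s$.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

rng = np.random.default_rng(3)
H, W, S, s, ratio = 32, 32, 31, 3, 4
h, w = H // ratio, W // ratio

Y = rng.random((h, w, S))                 # LR-HSI
Z = rng.random((H, W, s))                 # HR-MSI

def highpass(img, size=6):
    """High-pass component: the image minus its 6x6 box-filtered version."""
    return img - uniform_filter(img, size=(size, size, 1))

Y_hp = highpass(Y)                                       # details, LR scale
Z_hp = highpass(Z)                                       # details, HR scale
Z_hp_d = zoom(Z_hp, (1 / ratio, 1 / ratio, 1), order=1)  # downsampled MSI details

C0 = np.concatenate([Y_hp, Z_hp_d], axis=2)          # (h, w, S + s)
C0_up = zoom(C0, (ratio, ratio, 1), order=1)         # stand-in for deconvolution
C1 = np.concatenate([C0_up, Z_hp], axis=2)           # multi-scale input cube
```

Feeding the network with `C1` rather than the raw images is what gives the multi-scale module its two levels of spatial detail.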
\begin{figure}[t] \begin{center} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{balloons_RGB-eps-converted-to.pdf}} \centering {(a)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{cd_RGB-eps-converted-to.pdf}} \centering {(b)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{chart_and_stuffed_toy_RGB-eps-converted-to.pdf}} \centering {(c)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{clay_RGB-eps-converted-to.pdf}} \centering {(d)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{fake_and_real_beers_RGB-eps-converted-to.pdf}} \centering {(e)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{fake_and_real_lemon_slices_RGB-eps-converted-to.pdf}} \centering {(f)} \end{minipage} \centering \end{center} \begin{center} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{fake_and_real_tomatoes_RGB-eps-converted-to.pdf}} \centering {(g)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{feathers_RGB-eps-converted-to.pdf}} \centering {(h)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{flowers_RGB-eps-converted-to.pdf}} \centering {(i)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{hairs_RGB-eps-converted-to.pdf}} \centering {(j)} \end{minipage} \begin{minipage}{ 0.155\linewidth} {\includegraphics[width=1\linewidth]{jelly_beans_RGB-eps-converted-to.pdf}} \centering {(k)} \end{minipage} \centering \end{center} \caption{The $11$ testing images from the CAVE dataset. 
(a) \textit{balloons}, (b) \textit{cd}, (c) \textit{chart and stuffed toy}, (d) \textit{clay}, (e) \textit{fake and real beers}, (f) \textit{fake and real lemon slices}, (g) \textit{fake and real tomatoes}, (h) \textit{feathers}, (i) \textit{flowers}, (j) \textit{hairs}, (k) \textit{jelly beans}.} \label{cave_test} \end{figure} \subsubsection{Data simulation} We extracted 3920 overlapping patches of size $64 \times 64 \times 31$ from the 20 images of the CAVE dataset used as ground-truth, thus forming the HR-HSI patches. Accordingly, the LR-HSI patches are generated starting from the HR-HSI patches by applying a Gaussian blur with kernel size equal to $3\times 3$ and standard deviation equal to 0.5, and then downsampling the blurred patches to the size of $16 \times 16$, \textit{i.e.,} with a downsampling factor of 4. Moreover, the HR-MSI patches (\textit{i.e.}, the RGB patches) are generated similarly as for the HR-HSI patches, but using the corresponding (already available) RGB data. Thus, another 3920 patches of size $64 \times 64 \times 3$ are available to represent the HR-MSI. Following this procedure, $80\%$ of the patches are used for the training phase and the rest (\textit{i.e.}, the remaining $20\%$) for validation. \subsubsection{Training platform and parameters setting} The proposed network is trained in Python 3.7.4 with TensorFlow 1.14.0 on a Linux operating system with an NVIDIA GeForce GTX 2080Ti GPU. We use the Adam optimizer with a learning rate of 0.0001 to minimize the loss function (\ref{loss}) over 100,000 iterations with a batch size of 32. The ResNet blocks in our network architecture are crucial. Indeed, we use 6 ResNet blocks, each one with two layers and 64 kernels of size $3\times 3$ per layer (see Fig. \ref{structure}). Fig. \ref{error} shows the training and validation errors of the proposed HSRnet, confirming the convergence of the proposed convolutional neural network with the above-mentioned parameter setting.
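The LR-HSI simulation described in the data-simulation step above (a $3\times 3$ Gaussian blur with standard deviation 0.5 followed by a factor-4 decimation) can be sketched in NumPy as follows; function and variable names are hypothetical, and edge padding is one of several reasonable border choices:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=0.5):
    """size x size Gaussian kernel, normalized to sum to one."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def simulate_lr_hsi(hr_hsi, ratio=4, size=3, sigma=0.5):
    """Blur each band with the Gaussian kernel, then decimate by `ratio`."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(hr_hsi, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(hr_hsi, dtype=float)
    h, w = hr_hsi.shape[:2]
    for i in range(size):
        for j in range(size):
            blurred += k[i, j] * padded[i:i + h, j:j + w, :]
    return blurred[::ratio, ::ratio, :]
```

For a $64 \times 64 \times 31$ HR-HSI patch this yields the $16 \times 16 \times 31$ LR-HSI patch used for training.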
\begin{figure}[t] \begin{center} {\includegraphics[width=0.9\linewidth]{train_error-eps-converted-to.pdf}} \caption{Training and validation errors for the proposed HSRnet.}\label{error} \end{center} \end{figure} \section{Experimental Results}\label{exp} In this section, we compare the proposed HSRnet with several state-of-the-art methods for the hyperspectral super-resolution problem. In particular, the benchmark consists of the CNMF method\footnote{http://naotoyokoya.com/Download.html} \cite{CNMF}, the FUSE approach\footnote{http://wei.perso.enseeiht.fr/publications.html} \cite{FUSE}, the GLP-HS method\footnote{http://openremotesensing.net/knowledgebase/hyperspectral-and-multispectral-data-fusion/}\cite{GLP-HS}, the LTTR technique\footnote{https://github.com/renweidian}\cite{LTTR}, the LTMR approach\footnote{https://github.com/renweidian}\cite{LTMR}, the MHFnet\footnote{https://github.com/XieQi2015/MHF-net} \cite{xie2019multispectral}, and the proposed HSRnet approach. For a fair comparison, the MHFnet is trained on the same training data as the proposed approach. Furthermore, the batch size and the training iterations of the MHFnet are set to 32 and 100,000, respectively, as for the proposed approach. Two widely used benchmark datasets, \textit{i.e.,} the CAVE database\footnote{http://www.cs.columbia.edu/CAVE/databases/multispectral/} \cite{yasuma2010generalized} and the Harvard database\footnote{http://vision.seas.harvard.edu/hyperspec/download.html}\cite{chakrabarti2011statistics}, are selected. For quantitative evaluation, we adopt four quality indexes (QIs), \textit{i.e.,} the peak signal-to-noise ratio (PSNR), the spectral angle mapper (SAM) \cite{yuhas1993determination}, the erreur relative globale adimensionnelle de synth\`{e}se (ERGAS) \cite{wald2002data}, and the structural similarity (SSIM) \cite{wang2004image}. The SAM measures the average angle between the spectral vectors of the target image and those of the reference image.
Instead, the ERGAS represents the fidelity of the image based on the weighted sum of mean squared errors. The ideal value in both cases is zero: the lower the index, the better the quality. Finally, PSNR and SSIM are widely used to evaluate the similarity between the target and the reference image. The higher the index, the better the quality. The ideal value for SSIM is one. \subsection{Results on CAVE Dataset}\label{caveexp} \begin{table}[t] \centering\renewcommand\arraystretch{1}\setlength{\tabcolsep}{6pt}\footnotesize \caption{Average QIs and related standard deviations of the results on 100 patches extracted from the testing images on the CAVE dataset. The best values are highlighted in boldface.} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} Method & PSNR & SAM & ERGAS & SSIM \\ \hline CNMF& 31.4$\pm$3.3 & 5.95$\pm$4.0 & 8.19$\pm$19.2 & 0.96$\pm$0.04 \\ FUSE& 28.9$\pm$3.3 & 10.36$\pm$6.4& 7.23$\pm$6.0 & 0.91$\pm$0.07 \\ GLP-HS& 30.8$\pm$4.0 & 6.28$\pm$3.9 & 5.9$\pm$4.8 & 0.94$\pm$0.05 \\ LTTR& 31.1$\pm$3.6 & 7.36$\pm$3.5 & 6.55$\pm$5.3 & 0.94$\pm$0.04 \\ LTMR& 30.6$\pm$3.5 & 7.61$\pm$3.6 & 7.25$\pm$6.8 & 0.93$\pm$0.04 \\ MHFnet& 35.1$\pm$5.9 & 7.29$\pm$7.2 & 30.7$\pm$146.4 & 0.96$\pm$0.03 \\ HSRnet& \textbf{38.2}$\pm$5.3 & \textbf{2.94}$\pm$1.8 & \textbf{2.99}$\pm$3.6 & \textbf{0.99}$\pm$0.01 \\ \hline Best value& +$ \infty $ & 0 & 0 & 1 \\ \Xhline{1.2pt} \end{tabular} \label{cave-ave} \end{table} In order to point out the effectiveness of all the methods on different kinds of scenarios, we first divide the remaining 11 testing images of the CAVE dataset into small patches of size $128 \times 128$. Then, 100 patches are randomly selected. We report the average QIs and corresponding standard deviations of the results for the different methods on these patches in Table \ref{cave-ave}. From Table \ref{cave-ave}, we can see that the proposed HSRnet significantly outperforms the compared methods.
In particular, the SAM value of our method is much lower than that of the compared approaches (about half that of the best compared method). This is in agreement with our previously developed analysis, namely that the proposed HSRnet is able to preserve the spectral features of the acquired scene. \begin{table}[t] \centering\renewcommand\arraystretch{1}\setlength{\tabcolsep}{6pt}\footnotesize \caption{Average QIs and related standard deviations of the results on the 11 testing images of the CAVE dataset. The best values are highlighted in boldface.} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} Method & PSNR & SAM & ERGAS & SSIM \\ \hline CNMF & 32.2$\pm$4.5 & 14.96$\pm$5.2 & 8.79$\pm$4.8 & 0.911$\pm$0.04 \\ FUSE & 31.5$\pm$2.5 & 17.71$\pm$7.8 & 9.07$\pm$6.2 & 0.870$\pm$0.05 \\ GLP-HS & 35.4$\pm$2.7 & 7.91$\pm$3.0 & 5.61$\pm$3.6 & 0.946$\pm$0.02 \\ LTTR & 36.8$\pm$2.8 & 6.65$\pm$2.5 & 5.66$\pm$2.8 & 0.957$\pm$0.03 \\ LTMR & 36.2$\pm$2.7 & 7.66$\pm$2.9 & 5.70$\pm$2.7 & 0.949$\pm$0.03 \\ MHFnet & 43.3$\pm$2.8 & 4.34$\pm$1.5 & 2.33$\pm$1.4 & 0.989$\pm$0.01 \\ HSRnet & \textbf{44.0}$\pm$2.9 & \textbf{3.09}$\pm$1.0 & \textbf{1.93}$\pm$1.0 & \textbf{0.992}$\pm$0.00 \\ \hline Best value& +$ \infty $ & 0 & 0 & 1 \\ \Xhline{1.2pt} \end{tabular} \label{cave11-ave} \end{table} \begin{table}[t] \setlength{\tabcolsep}{2pt} \caption{QIs of the results by the different methods and the running times on (a) \emph{balloons}, (d) \emph{clay}, and (e) \emph{fake and real beers} on the CAVE dataset. G indicates that the method is running on the GPU device, while C denotes the use of the CPU.
The best values are highlighted in boldface.} \centering{ \begin{tabular}{l|ccccccc} \Xhline{1.2pt} \multicolumn{8}{c}{(a) 512 $\times$ 512} \\ \hline Method & CNMF & FUSE & GLPHS & LTTR & LTMR & MHFnet & HSRnet\\ PSNR & 31.26 & 32.02 & 39.73 & 39.13 & 39.21 & 45.24 & \textbf{49.51} \\ SAM & 9.89 & 10.56 & 3.29 & 3.29 & 4.15 & 2.91 & \textbf{1.64} \\ ERGAS & 4.57 & 4.30 & 1.81 & 2.11 & 2.11 & 1.06 & \textbf{0.59} \\ SSIM & 0.926 & 0.928 & 0.975 & 0.980 & 0.980 & 0.992 & \textbf{0.996} \\ \Xhline{1.2pt} \multicolumn{8}{c}{(d) 512 $\times$ 512} \\ \hline Method & CNMF & FUSE & GLPHS & LTTR & LTMR & MHFnet & HSRnet\\ PSNR & 31.35 & 32.18 & 37.59 & 37.09 & 37.06 & 43.09 & \textbf{45.06} \\ SAM & 17.56 & 17.68 & 10.68 & 7.00 & 7.64 & 7.71 & \textbf{4.60} \\ ERGAS & 7.19 & 9.25 & 4.78 & 5.20 & 5.23 & 2.94 & \textbf{2.06} \\ SSIM & 0.926 & 0.900 & 0.963 & 0.976 & 0.973 & 0.986 & \textbf{0.993} \\ \Xhline{1.2pt} \multicolumn{8}{c}{(e) 512 $\times$ 512} \\ \hline Method & CNMF & FUSE & GLPHS & LTTR & LTMR & MHFnet & HSRnet\\ PSNR & 30.41 & 35.98 & 37.57 & 38.99 & 38.66 & 41.97 & \textbf{45.97} \\ SAM & 4.81 & 3.97 & 1.25 & 1.97 & 2.18 & 1.62 & \textbf{0.94} \\ ERGAS & 2.19 & 1.70 & 1.23 & 1.25 & 1.26 & 0.76 & \textbf{0.42} \\ SSIM & 0.965 & 0.962 & 0.969 & 0.975 & 0.972 & 0.986 & \textbf{0.992} \\ \hline \makecell[c]{Average \\ time(s)}&\footnotesize{27.1(C)}&\footnotesize {1.9(C)}&\footnotesize {4.6(C)}&\footnotesize {767.8(C)}&\footnotesize {271.3(C)}&\footnotesize {4.4(G)}&\footnotesize {\textbf{1.7}(G)}\\ \Xhline{1.2pt} \end{tabular}} \label{qresult-4CAVE} \end{table} \begin{figure*}[htb] \centering \begin{minipage}[t]{0.94\linewidth} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_gt-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_lms-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_cnmf-eps-converted-to.pdf}} 
{\includegraphics[width=1\linewidth]{cave11_1_cnmf_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_fuse-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_fuse_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_glphs-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_glphs_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_lttr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_lttr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_ltmr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_ltmr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_cvpr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_cvpr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_1_net-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_1_net_res-eps-converted-to.pdf}} \centering \end{minipage} \vspace{5pt} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_gt-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_lms-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_cnmf-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_cnmf_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_fuse-eps-converted-to.pdf}} 
{\includegraphics[width=1\linewidth]{cave11_4_fuse_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_glphs-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_glphs_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_lttr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_lttr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_ltmr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_ltmr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_cvpr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_cvpr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_4_net-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_4_net_res-eps-converted-to.pdf}} \centering \end{minipage} \vspace{5pt} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_gt-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_lms-eps-converted-to.pdf}} \centering { GT} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_cnmf-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_cnmf_res-eps-converted-to.pdf}} \centering {CNMF\cite{CNMF}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_fuse-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_fuse_res-eps-converted-to.pdf}} \centering { FUSE\cite{FUSE}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_glphs-eps-converted-to.pdf}} 
{\includegraphics[width=1\linewidth]{cave11_5_glphs_res-eps-converted-to.pdf}} \centering {GLP-HS\cite{GLP-HS}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_lttr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_lttr_res-eps-converted-to.pdf}} \centering {LTTR\cite{LTTR}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_ltmr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_ltmr_res-eps-converted-to.pdf}} \centering {LTMR\cite{LTMR}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_cvpr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_cvpr_res-eps-converted-to.pdf}} \centering {MHFnet\cite{xie2019multispectral}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{cave11_5_net-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{cave11_5_net_res-eps-converted-to.pdf}} \centering {HSRnet} \end{minipage} \end{minipage} \begin{minipage}[t]{0.04\linewidth} \vspace{0.7cm} {\includegraphics[height=10\linewidth,width=1\linewidth]{colobargray_01-eps-converted-to.pdf}} \end{minipage} \caption{The first column: the true pseudo-color images from the original CAVE dataset and the corresponding LR-HSI images of \textit{balloons} (R-23, G-18, B-7) (1st-2nd rows), \textit{clay} (R-3, G-16, B-2) (3rd-4th rows), and \textit{fake and real beers} (R-24, G-23, B-18) (5th-6th rows). 
2nd-8th columns: the true pseudo-color fused products and the corresponding residuals for the different methods in the benchmark pointing out some close-ups to facilitate the visual analysis.} \label{F:cave} \end{figure*} \begin{figure}[t] \centering\renewcommand\arraystretch{0.65}\small \begin{tabular}{c} \includegraphics[height= 0.5\linewidth,width=0.85\linewidth]{sp-cave-1.png}\\ (a) \textit{balloons} $(276,277)$\\\\ \includegraphics[height= 0.5\linewidth,width=0.85\linewidth]{sp-cave-5.png}\\ (b) \textit{fake and real beers} $(272,19)$ \end{tabular} \caption{Selected spectral vectors for the outcomes coming from the different fusion methods and the ground-truth (GT). The indications of the specific dataset and the location of the pixel under analysis are also provided. } \label{spetral-rCAVE}\vspace{-4mm} \end{figure} Afterwards, we conduct the experiments on the whole 11 testing images. Table \ref{cave11-ave} presents the average QIs on the 11 testing images. To ease the readers' burden, we only show the visual results on \textit{balloons}, \textit{clay}, and \textit{fake and real beers}. Table \ref{qresult-4CAVE} lists the specific QIs of the results on these three images for the different methods. The proposed method outperforms the compared approaches. Furthermore, the running time of the HSRnet is also the lowest one. In Fig. \ref{F:cave}, we display the pseudo-color images of the fusion results and the corresponding error maps on three images. From the error maps in Fig. \ref{F:cave}, it can be observed that the proposed HSRnet approach provides a better reconstruction of the high-resolution details with respect to the compared methods, thus clearly reducing the errors in the corresponding error maps. The spectral fidelity is of crucial importance when the fusion of hyperspectral images is considered. In order to illustrate the spectral reconstruction provided by the different methods, we plot the spectral vectors for two exemplary cases, see Fig.
\ref{spetral-rCAVE}. It is worth remarking that the spectral vectors estimated by our method and the ground-truth ones are very close to each other. \subsection{Results on Harvard Dataset}\label{exp:general} The Harvard dataset is a public dataset that has 77 HSIs of indoor and outdoor scenes including different kinds of objects and buildings. Every HSI has a spatial size of 1392$\times$1040 with 31 spectral bands, and the spectral bands are acquired at an interval of 10 nm in the range of 420--720 nm. 10 images are randomly selected for testing. The test images are shown in Fig. \ref{harvard_test}. \begin{figure}[t] \begin{center} \begin{minipage}{ 0.98\linewidth} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{tree-eps-converted-to.pdf}} \centering {(a)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{cushion-eps-converted-to.pdf}} \centering {(b)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{bikes-eps-converted-to.pdf}} \centering {(c)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{roof-eps-converted-to.pdf}} \centering {(d)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{door-eps-converted-to.pdf}} \centering {(e)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{cabinet-eps-converted-to.pdf}} \centering {(f)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{window-eps-converted-to.pdf}} \centering {(g)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{wall-eps-converted-to.pdf}} \centering {(h)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{chairs-eps-converted-to.pdf}} \centering {(i)} \end{minipage} \begin{minipage}{ 0.19\linewidth} {\includegraphics[width=1\linewidth]{baskets-eps-converted-to.pdf}} \centering {(j)} \end{minipage}
\centering \end{minipage} \end{center} \caption{The 10 testing images from the Harvard dataset. (a) \textit{tree}, (b) \textit{cushion}, (c) \textit{bikes}, (d) \textit{roof}, (e) \textit{door}, (f) \textit{cabinet}, (g) \textit{window}, (h) \textit{wall}, (i) \textit{chairs}, (j) \textit{baskets}.} \label{harvard_test} \end{figure} As in the previous settings, the original data is regarded as the ground-truth HR-HSI. The LR-HSI data is simulated as in Sec. \ref{Sec-Train}. Instead, the HR-MSI (not already available for this dataset) is obtained by applying the method provided by \cite{cie2006fundamental}, where the spectral response functions are obtained from CIE\footnote{http://www.cvrl.org}. We would like to remark that both our method and the MHFnet are trained on the CAVE dataset, and we directly test them on the Harvard dataset without any retraining or fine-tuning. Thus, the performance of these two methods on the Harvard dataset can reflect their generalization abilities. \begin{table}[t] \centering\renewcommand\arraystretch{1}\setlength{\tabcolsep}{6pt}\footnotesize \caption{Average QIs and related standard deviations of the results on 100 patches extracted from the images on the Harvard dataset.
The best values are highlighted in boldface.} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} Method & PSNR & SAM & ERGAS & SSIM \\ \hline CNMF &27.6$\pm$3.7 & 3.62$\pm$2.2 & 3.86$\pm$4.1 & 0.95$\pm$0.05 \\ FUSE &26.7$\pm$3.7 & 5.40$\pm$4.1 & 4.07$\pm$4.0 & 0.94$\pm$0.06 \\ GLP-HS &26.0$\pm$3.4 & 4.74$\pm$3.3 & 4.26$\pm$3.3 & 0.93$\pm$0.06 \\ LTTR & 27.4$\pm$3.5& 4.65$\pm$2.5 & 4.87$\pm$3.1 & 0.94$\pm$0.06 \\ LTMR &26.9$\pm$3.7 & 6.06$\pm$3.0 & 4.29$\pm$3.3 & 0.92$\pm$0.07 \\ MHFnet &26.6$\pm$5.2 & 8.09$\pm$4.6 & 62.18$\pm$178.2 & 0.88$\pm$0.11 \\ HSRnet &\textbf{29.3}$\pm$4.4 & \textbf{3.44}$\pm$2.0 & \textbf{3.5}$\pm$2.2 & \textbf{0.97}$\pm$0.03 \\ \hline Best value& +$ \infty $ & 0 & 0 & 1 \\ \Xhline{1.2pt} \end{tabular} \label{harvard-ave} \end{table} \begin{table}[t] \centering\renewcommand\arraystretch{1}\setlength{\tabcolsep}{6pt}\footnotesize \caption{Average QIs and related standard deviations of the results for 10 testing images on the Harvard dataset. The best values are highlighted in boldface.} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} Method & PSNR & SAM & ERGAS & SSIM \\ \hline CNMF & 34.3$\pm$3.8 & 4.72$\pm$2.3 & 4.37$\pm$2.4 & 0.94$\pm$0.02 \\ FUSE & 32.9$\pm$3.8 & 7.48$\pm$3.5 & 4.79$\pm$2.0 & 0.93$\pm$0.03 \\ GLP-HS & 35.0$\pm$4.8 & 4.87$\pm$2.2 & 4.26$\pm$1.6 & 0.93$\pm$0.04 \\ LTTR & 36.1$\pm$5.4 & 6.06$\pm$2.3 & 6.19$\pm$2.2 & 0.90$\pm$0.07 \\ LTMR & 37.2$\pm$4.5 & 6.13$\pm$2.3 & 4.82$\pm$3.1 & 0.93$\pm$0.05 \\ MHFnet & 36.4$\pm$5.5 & 7.03$\pm$4.0 & 16.57$\pm$14.6 & 0.91$\pm$0.08 \\ HSRnet & \textbf{39.5}$\pm$4.7 & \textbf{3.38}$\pm$1.1 & \textbf{3.27}$\pm$1.5 & \textbf{0.97}$\pm$0.02 \\ \hline Best value& +$ \infty $ & 0 & 0 & 1 \\ \Xhline{1.2pt} \end{tabular} \label{harvard10-ave} \end{table} \begin{table}[t] \setlength{\tabcolsep}{1.47pt} \caption{QIs of the results for the different methods and the running times on (a) \emph{tree}, (c) \emph{bikes}, and (h) \emph{wall} for the Harvard dataset.
G indicates that the method is running on the GPU device, while C denotes the use of the CPU. The best values are highlighted in boldface.} \centering{ \begin{tabular}{l|ccccccc} \Xhline{1.2pt} \multicolumn{8}{c}{(a) 1000 $\times$ 1000} \\ \hline Method & CNMF & FUSE & GLPHS & LTTR & LTMR & MHFnet & HSRnet\\ PSNR & 32.54 & 31.27 & 34.03 & 31.63 & 32.99 & 35.25 & \textbf{37.54} \\ SAM & 5.21 & 7.95 & 5.41 & 7.73 & 7.30 & 5.00 & \textbf{3.01} \\ ERGAS & 3.91 & 5.39 & 4.10 & 8.68 & 4.79 & 29.05 & \textbf{3.1} \\ SSIM & 0.912 & 0.882 & 0.911 & 0.829 & 0.867 & 0.917 & \textbf{0.961} \\ \Xhline{1.2pt} \multicolumn{8}{c}{(c) 1000 $\times$ 1000} \\ \hline Method & CNMF & FUSE & GLPHS & LTTR & LTMR & MHFnet & HSRnet\\ PSNR & 33.74 & 31.67 & 33.52 & 34.26 & 36.77 & 38.24 & \textbf{39.25} \\ SAM & 4.15 & 7.75 & 5.12 & 6.19 & 6.06 & 5.00 & \textbf{3.56} \\ ERGAS & 3.28 & 3.80 & 3.87 & 4.56 & 2.90 & 8.05 & \textbf{2.38} \\ SSIM & 0.938 & 0.924 & 0.879 & 0.908 & 0.938 & 0.957 & \textbf{0.974} \\ \Xhline{1.2pt} \multicolumn{8}{c}{(h) 1000 $\times$ 1000} \\ \hline Method & CNMF & FUSE & GLPHS & LTTR & LTMR & MHFnet & HSRnet\\ PSNR & 39.69 & 30.07 & 39.33 & 42.55 & 41.90 & 43.97 & \textbf{44.76} \\ SAM & 5.04 & 9.11 & 5.83 & 5.94 & 6.65 & 5.27 & \textbf{3.91} \\ ERGAS & 7.56 & 7.26 & 7.01 & 7.44 & 6.61 & 14.33 & \textbf{3.77} \\ SSIM & 0.921 & 0.942 & 0.959 & 0.974 & 0.972 & 0.977 & \textbf{0.989} \\ \hline \makecell[c]{Average \\ time(s)}&\footnotesize{102.1(C)}&\footnotesize {7.3(C)}&\footnotesize {16.1(C)}&\footnotesize {2049.5(C)}&\footnotesize {864.3(C)}&\footnotesize {6.8(G)}&\footnotesize {\textbf{2.4}(G)}\\ \Xhline{1.2pt} \end{tabular}} \label{qresult-4} \end{table} \begin{figure*}[t] \centering \begin{minipage}[t]{0.94\linewidth} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_gt-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_lms-eps-converted-to.pdf}} \centering \end{minipage}
\begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_cnmf-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_cnmf_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_fuse-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_fuse_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_glphs-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_glphs_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_lttr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_lttr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_ltmr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_ltmr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_cvpr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_cvpr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_5_net-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_5_net_res-eps-converted-to.pdf}} \centering \end{minipage} \vspace{5pt} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_gt-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_lms-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_cnmf-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_cnmf_res-eps-converted-to.pdf}} \centering \end{minipage} 
\begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_fuse-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_fuse_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_glphs-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_glphs_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_lttr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_lttr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_ltmr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_ltmr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_cvpr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_cvpr_res-eps-converted-to.pdf}} \centering \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_1_net-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_1_net_res-eps-converted-to.pdf}} \centering \end{minipage} \vspace{5pt} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_gt-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_lms-eps-converted-to.pdf}} \centering {GT} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_cnmf-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_cnmf_res-eps-converted-to.pdf}} \centering {CNMF \cite{CNMF}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_fuse-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_fuse_res-eps-converted-to.pdf}} \centering 
{FUSE \cite{FUSE}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_glphs-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_glphs_res-eps-converted-to.pdf}} \centering {GLP-HS \cite{GLP-HS}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_lttr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_lttr_res-eps-converted-to.pdf}} \centering {LTTR \cite{LTTR}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_ltmr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_ltmr_res-eps-converted-to.pdf}} \centering {LTMR \cite{LTMR}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_cvpr-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_cvpr_res-eps-converted-to.pdf}} \centering {MHFnet \cite{xie2019multispectral}} \end{minipage} \begin{minipage}[t]{0.12\linewidth} {\includegraphics[width=1\linewidth]{harvard10_4_net-eps-converted-to.pdf}} {\includegraphics[width=1\linewidth]{harvard10_4_net_res-eps-converted-to.pdf}} \centering {HSRnet} \end{minipage} \end{minipage} \begin{minipage}[t]{0.04\linewidth} \vspace{0.7cm} {\includegraphics[height=10\linewidth,width=1\linewidth]{colobargray_02-eps-converted-to.pdf}} \end{minipage} \caption{The first column: the true pseudo-color images from the original Harvard dataset and the corresponding LR-HSI images of \textit{tree} (R-30, G-27, B-7) (1st-2nd rows), \textit{bikes} (R-31, G-18, B-9) (3rd-4th rows), and (h) \textit{window} (R-31, G-28, B-1) (5th-6th rows). 2nd-8th columns: the true pseudo-color fused products and the corresponding residuals for the different methods in the benchmark pointing out some close-ups to facilitate the visual analysis. 
} \label{F:harvard-1} \end{figure*} Moreover, we first divide these 10 testing images into patches of size $128 \times 128$, randomly selecting 100 patches. Table \ref{harvard-ave} shows the QIs of the results for the different methods on these 100 patches. We can observe that our method still ranks first for all the QIs, while the margins between our method and the MHFnet become larger than those in Table \ref{cave-ave}. In particular, the ERGAS value of the MHFnet ranks last. Thus, this test corroborates that the proposed approach has a better generalization ability than the compared deep learning-based method (\textit{i.e.}, the MHFnet). Table \ref{harvard10-ave} records the average QIs and the corresponding standard deviations for the different methods using the 10 testing images. Table \ref{qresult-4} gives the QIs and the running times for three specific images of the Harvard dataset. The proposed method ranks first with the lowest running time. Finally, considering the details in the pseudo-color images in Fig. \ref{F:harvard-1}, we can see that our method achieves the highest visual quality, thus obtaining error maps that are very dark (\textit{i.e.}, with errors that tend to zero everywhere). \subsection{Ablation Study}\label{newstruct} \subsubsection{High-pass filters} In order to investigate the effects of the use of high-pass filters, we compare our HSRnet with a variant that is identical to the original HSRnet but without any high-pass filtering. After removing the high-pass filters, the data cube $\mathcal{C}_{0}$ in Fig. \ref{structure} is obtained by concatenating the LR-HSI $\mathcal{Y}$ and the downsampled version of the HR-MSI, \textit{i.e.}, $\mathcal{Z}^{D}$. The network is trained on the same training data as the HSRnet with the same training settings.
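For intuition, a high-pass detail-extraction step of the kind ablated here can be sketched in NumPy as follows; this is a generic subtract-the-local-mean filter given only as an illustrative stand-in, not the exact filters used in the HSRnet architecture of Fig. \ref{structure}:

```python
import numpy as np

def box_blur(band, size=3):
    """Local mean of a 2-D band with a size x size box filter (edge padding)."""
    pad = size // 2
    padded = np.pad(band, pad, mode="edge")
    h, w = band.shape
    out = np.zeros_like(band, dtype=float)
    for i in range(size):
        for j in range(size):
            out += padded[i:i + h, j:j + w]
    return out / (size * size)

def high_pass(band, size=3):
    """High-frequency residual: the band minus its low-pass (blurred) version."""
    return band - box_blur(band, size)
```

Applied band by band, such a filter removes the smooth low-frequency content and keeps edges and textures, which is precisely the detail information that the variant without high-pass filtering loses.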
Table \ref{nohp} presents the average QIs of these two networks on the 11 testing images for the CAVE dataset and the 10 testing images for the Harvard dataset. As we can see from Table \ref{nohp}, the mean values and standard deviations of the proposed network are much better than those of the variant without the high-pass filters. This demonstrates that the use of high-pass filters leads to better and more stable performance. In particular, the QIs of the Harvard testing images prove that the filters significantly improve the generalization ability. Thus, the high-pass filters are of crucial importance for the competitive performance of the proposed HSRnet. \begin{figure}[t] \begin{center} {\includegraphics[width=0.9\linewidth]{single_scale.png}} \caption{Concatenation strategy with a single scale. If this simple single-scale structure replaces the multi-scale concatenation $\mathcal{C}_{1}$ of our HSRnet in Fig. \ref{structure}, it yields worse outcomes than our HSRnet, which validates the importance of our multi-scale concatenation. }\label{single_scale_structure} \end{center} \end{figure} \begin{table}[t] \centering\small \renewcommand\arraystretch{0.9}\setlength{\tabcolsep}{6pt} \caption{Average QIs and related standard deviations of the results on the CAVE and the Harvard datasets using the proposed method with and without the high-pass (HP) filters.
The best values are highlighted in boldface.}\label{nohp} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} \multicolumn{5}{c}{CAVE} \\ \hline Method & PSNR & SAM & ERGAS & SSIM \\ \hline w/o HP & 39.4$\pm$3.3 & 3.88$\pm$1.3 & 3.60$\pm$2.4 & 0.98$\pm$0.01 \\ with HP & \textbf{44.0$\pm$2.9} & \textbf{3.09$\pm$1.0} & \textbf{1.93$\pm$1.0} & \textbf{0.99$\pm$0.00}\\ \Xhline{1.2pt} \multicolumn{5}{c}{Harvard} \\ \hline Method & PSNR & SAM & ERGAS & SSIM \\ \hline w/o HP & 32.8$\pm$5.6 & 4.96$\pm$2.6 & 8.12$\pm$6.6 & 0.90$\pm$0.07 \\ with HP & \textbf{39.5$\pm$4.7} & \textbf{3.38$\pm$1.3} & \textbf{3.27$\pm$1.5} & \textbf{0.97$\pm$0.02} \\ \Xhline{1.2pt} \end{tabular} \end{table} \subsubsection{Multi-scale module} Concatenating multi-scale images is a key part of our network architecture. It extracts details at two different scales, which represent useful information for the super-resolution processing. To prove the strength of this module, we compare our original HSRnet with the simpler architecture that only uses the main scale, \textit{i.e.}, $\mathcal{C}_{1}$ of the proposed network is replaced by the structure in Fig. \ref{single_scale_structure}. The results of the two compared approaches are reported in Table \ref{compare_single}. The QI values show the necessity of the multi-scale module in our HSRnet: it is a part of the proposed architecture that is less important than the high-pass filtering, but still relevant to improve the performance measured by some QIs, see \textit{e.g.} the SAM and the ERGAS. \begin{table}[t] \centering\small \renewcommand\arraystretch{0.9}\setlength{\tabcolsep}{6pt} \caption{Average QIs and related standard deviations of the results on the CAVE and the Harvard datasets using the proposed method with a different number of scales.
The best values are highlighted in boldface.}\label{compare_single} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} \multicolumn{5}{c}{CAVE} \\ \hline Method & PSNR & SAM & ERGAS & SSIM \\ \hline one scale & 42.9$\pm$3.3 & 3.20$\pm$1.1 & 2.18$\pm$1.2 & \textbf{0.99}$\pm$0.00 \\ HSRnet & \textbf{44.0}$\pm$2.9 & \textbf{3.09}$\pm$1.0 & \textbf{1.93}$\pm$1.0 & \textbf{0.99}$\pm$0.00\\ \Xhline{1.2pt} \multicolumn{5}{c}{Harvard} \\ \hline Method & PSNR & SAM & ERGAS & SSIM \\ \hline one scale & 38.8$\pm$4.4 & 3.66$\pm$1.9 & 3.64$\pm$1.8 & \textbf{0.97}$\pm$0.02 \\ HSRnet & \textbf{39.5}$\pm$4.7 & \textbf{3.38}$\pm$1.3 & \textbf{3.27}$\pm$1.5 & \textbf{0.97}$\pm$0.02 \\ \Xhline{1.2pt} \end{tabular} \end{table} \subsection{Comparison with MHFnet}\label{vs} To our knowledge, the MHFnet developed by Xie \textit{et al}. \cite{xie2019multispectral} outperforms the state-of-the-art model-based and deep learning-based methods, currently representing the best way to address the HSI super-resolution problem. Since the MHFnet and our HSRnet are both deep learning-based methods, in this subsection we continue the discussion of the HSRnet by comparing it with the MHFnet. \subsubsection{Sensitivity to the number of training samples} We train the MHFnet and our HSRnet with different numbers of training samples to illustrate their sensitivity with respect to this parameter. We randomly select 500, 1000, 2000, and 3136 samples from the training data. The testing data consist of 7 testing images from the CAVE dataset and 10 testing images from the Harvard dataset. Table \ref{diffrernt-num-t} reports the average QIs of the results obtained by the MHFnet and by our HSRnet varying the number of the training samples. From the results on the CAVE dataset in Table \ref{diffrernt-num-t}, we can note that the MHFnet performs well when fewer training samples are available. This can be attributed to its elaborately designed network structure.
Our method steadily outperforms the MHFnet in the cases of 2000 and 3136 training samples. From the results on the Harvard dataset, instead, we can remark that the generalization ability of our method is robust with respect to changes in the number of training samples (due to the use of the high-pass filters in the architecture), whereas the MHFnet shows poor performance due to its manually predefined parameters, which are sensitive to scene changes. \begin{table}[t] \centering\renewcommand\arraystretch{1}\setlength{\tabcolsep}{3pt}\footnotesize \caption{Results of the two deep learning-based methods varying the number of the training samples. The best values are highlighted in boldface.}\label{diffrernt-num-t} \begin{tabular}{c|c|l|c|c|c|c} \Xhline{1.2pt} Datasets & \# training data & Methods & PSNR & SAM & ERGAS & SSIM \\ \hline\hline \multirow{8}*{CAVE} & \multirow{2}*{3136} & MHFnet & 43.27 & 4.34 & 2.33 & 0.989\\ &&HSRnet & \textbf{44.00} & \textbf{3.09} & \textbf{1.93} & \textbf{0.992} \\ \cline{2-7} &\multirow{2}*{2000} & MHFnet & 43.37 & 4.50 & 2.39 & 0.988 \\ &&HSRnet &\textbf{43.91} & \textbf{3.03} & \textbf{1.96} &\textbf{ 0.992} \\ \cline{2-7} &\multirow{2}*{1000} & MHFnet & \textbf{43.42} & 4.47 & 2.34 & 0.988 \\ &&HSRnet &43.40 & \textbf{3.16} & \textbf{2.08} & \textbf{0.991} \\ \cline{2-7} &\multirow{2}*{500} & MHFnet & \textbf{42.74} & 4.77 &\textbf{ 2.50} & 0.987 \\ &&HSRnet & 40.99 & \textbf{3.65} & 2.89 & 0.987 \\ \hline\hline \multirow{8}*{Harvard} & \multirow{2}*{3136} & MHFnet & 36.41 & 7.03 & 16.57 & 0.915 \\ && HSRnet & \textbf{39.53} & \textbf{3.38} & \textbf{3.27} & \textbf{0.970} \\ \cline{2-7} &\multirow{2}*{2000} & MHFnet & 36.54 & 6.93 & 18.42 & 0.912 \\ && HSRnet &\textbf{39.87} & \textbf{3.40} & \textbf{3.33} & \textbf{0.970} \\ \cline{2-7} &\multirow{2}*{1000} & MHFnet & 36.16 & 6.99 & 26.49 & 0.916 \\ &&HSRnet & \textbf{39.44} & \textbf{3.47} &\textbf{ 3.54} & \textbf{0.968} \\ \cline{2-7} &\multirow{2}*{500} & MHFnet & 36.18 &
7.41 & 25.95 & 0.903 \\ &&HSRnet & \textbf{38.69} & \textbf{3.55} & \textbf{3.81} &\textbf{ 0.966} \\ \Xhline{1.2pt} \end{tabular} \end{table} \begin{figure}[t] \begin{center} {\includegraphics[width=0.6\linewidth]{train_time-eps-converted-to.pdf}} \caption{The comparison of the training times for the MHFnet and the proposed HSRnet.}\label{timec} \end{center} \end{figure} \subsubsection{Network generalization} So far, the MHFnet and our HSRnet have both been trained on CAVE data. We can observe that our HSRnet outperforms the MHFnet in all the experiments on the testing data provided by the Harvard dataset. This shows the remarkable generalization ability of our network. To further corroborate this, we retrain the two networks on training samples provided by the Harvard dataset. Namely, we extract from the Harvard dataset 3763 training samples, in which the HR-MSI is of size $64 \times 64$ and the LR-HSI is of size $16 \times 16$. As previously done, we select the same 11 images from the CAVE dataset and the same 10 images from the Harvard dataset to build the testing set. We show the QIs of the results for these two networks trained on the Harvard dataset in Table \ref{table-trained-harvard}. It can be seen that the generalization ability of the MHFnet is still limited. Instead, the proposed approach still shows an excellent generalization ability when it is applied to CAVE data but trained on the Harvard samples. \begin{table}[hpt] \centering\small \renewcommand\arraystretch{0.9}\setlength{\tabcolsep}{6pt} \caption{Average QIs and related standard deviations of the results for the networks trained on the Harvard dataset.
The best values are highlighted in boldface.}\label{table-trained-harvard} \begin{tabular}{l|c|c|c|c} \Xhline{1.2pt} \multicolumn{5}{c}{CAVE} \\ \hline Method & PSNR & SAM & ERGAS & SSIM \\ \hline MHFnet & 34.9$\pm$2.5 & 13.15$\pm$4.2 & 5.73$\pm$2.4 & 0.93$\pm$0.02 \\ HSRnet & \textbf{40.5}$\pm$2.8 & \textbf{4.21}$\pm$1.6 & \textbf{3.20}$\pm$1.6 &\textbf{ 0.98}$\pm$0.01 \\ \Xhline{1.2pt} \multicolumn{5}{c}{Harvard} \\ \hline Method & PSNR & SAM & ERGAS & SSIM \\ \hline MHFnet & \textbf{41.0}$\pm$5.3 & 3.36$\pm$1.6 & 3.33$\pm$1.9 & 0.97$\pm$0.02\\ HSRnet & 40.1$\pm$5.5 & \textbf{3.06}$\pm$1.1 & \textbf{2.49}$\pm$1.0 & \textbf{0.98}$\pm$0.02 \\ \Xhline{1.2pt} \end{tabular} \end{table} \subsubsection{Parameters and training time} The MHFnet contains 3.6 million parameters, whereas only 2.1 million parameters have to be learned by our HSRnet. In Fig. \ref{timec}, we plot the training time with respect to the epochs. We can see that our network needs much less training time than the MHFnet. Moreover, from Tables \ref{qresult-4CAVE} and \ref{qresult-4}, the testing time of our HSRnet is also less than that of the MHFnet. Indeed, fewer parameters result in shorter training and testing times, making our method more practical. \section{Conclusions}\label{conclusion} In this paper, a simple and efficient deep network architecture has been proposed for addressing the hyperspectral image super-resolution issue. The network architecture consists of two parts: $i$) a spectral preservation module and $ii$) a spatial preservation module whose goal is to reconstruct the image spatial details starting from multi-resolution versions of the input data. The outputs of these two parts are combined to get the final network output. The latter is compared with the reference (ground-truth) image through a Frobenius norm based loss function, with the aim of estimating the network parameters during the training phase.
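As a small illustration of the training objective just mentioned, the Frobenius-norm loss can be sketched as follows; this is a minimal stand-in, since the actual implementation details are not part of this excerpt.

```python
import numpy as np

# Frobenius-norm loss between the network output and the ground-truth cube:
# ||pred - gt||_F, i.e. the square root of the sum of squared element-wise errors.
def frobenius_loss(pred, gt):
    return float(np.linalg.norm(pred - gt))
```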
Extensive experiments demonstrated the superiority of our HSRnet with respect to recent state-of-the-art hyperspectral image super-resolution approaches. Additionally, advantages of our HSRnet have also been shown from other points of view, such as the network generalization, the limited computational burden, and the robustness with respect to the number of training samples. \bibliographystyle{IEEEtran}
\section{Introduction} The epidemic of SARS-CoV-2 infection triggered an unprecedented public health response. Given the lack of an effective vaccine and treatment in 2020, this response included a variety of travel restrictions and social distancing measures \cite{Tian}. While these measures help to slow down the epidemic, they come at a significant economic and societal cost \cite{ECDCRRA}. As an alternative, an approach focusing on rapid diagnosis is increasingly recommended \cite{WHO.S}, and prior to lifting social distancing measures large-scale community testing should be in place \cite{EC}. Testing efforts are complemented by identifying and quarantining contacts of the diagnosed cases. Of note, by isolating the asymptomatic contacts from their social networks, this strategy takes into account the pre-symptomatic and asymptomatic spread of the infection \cite{Tong, Huang}, believed to be one of the key drivers of the fast spread of COVID-19. As an example, widespread testing in the general population followed by isolation of the infected helped to reduce COVID-19 incidence by 90\% in the Italian village of Vo’Euganeo \cite{Day}. A modelling study in France offers similar conclusions, arguing that relaxing the social lock-down will only be feasible with extensive testing \cite{Di}. While there is already a number of studies estimating the effects of general social distancing measures \cite{Flaxman, Tian, Di, Giordano}, less is known about the impact of quarantine. Hellewell et al. \cite{Hellewell} investigated the potential of rapid isolation of cases and contact tracing to control the epidemic, finding that prohibitively high levels of timely contact tracing are necessary to achieve control. However, new technologies may offer a sufficiently fast alternative to traditional contact tracing, in which case the epidemic could still be controlled by contact tracing \cite{Ferretti}. \smallskip Our aim is to develop a SEIR-type model which incorporates the effects of quarantine and to validate it in a setting in which measures to reduce contacts are in place. We apply it to investigate the role of quarantine in Poland.
The first case of COVID-19 in Poland was diagnosed on March 4th. Social distancing measures were rapidly introduced during the week of 9 - 13th March, including the closure of schools and universities, the cancellation of mass events and the closure of recreation facilities such as bars, restaurants, gyms, etc., as well as shopping malls. Religious gatherings were limited. Finally, borders were closed for non-citizens \cite{Pinkas}. These measures were fully in place on March 16th. Further, beginning on March 25th, restrictions on movement and travel were introduced (lock-down). Wearing face covers became obligatory on April 14th. The restrictions were gradually lifted beginning on April 20th. We focus on modelling the time period when the social distancing measures were in place and then consider different scenarios of relaxation of the restrictions with a possible improvement of testing and contact tracing. We note that the procedures for quarantine were in place even before the social distancing measures. They initially focused on individuals arriving from COVID-19 affected areas in China. When the epidemic started spreading in European countries, people who came back to Poland from these countries were advised to immediately seek medical attention if they experienced any symptoms consistent with COVID-19. However, adherence to these recommendations was not evaluated. As soon as the first case was diagnosed in Poland, quarantine for close contacts was also implemented. \smallskip This paper aims to define a deterministic population model describing the epidemic in the classical terms of susceptible, exposed, infectious and removed. In our model the quarantine becomes a separate state that removes individuals from the susceptible and exposed states. We show that the reproductive number in our model is given by a simple formula referring to the parameters of transmission and transition, but also to the parameters describing the quarantine.
We demonstrate that in a real-life scenario (a case study of Poland) the quarantine effectively reduces the growth of the infectious compartment. Increasing the efficiency of contact tracing and testing may to some extent compensate for lifting the social distancing restrictions. \section{Methods} \subsection{The model} We introduce a modification of the classical SEIR model including the effects of quarantine. To underline the importance of this extension we call it SEIRQ. Formally the model is described by a system of ordinary differential equations with a delay dedicated to the quarantine. \begin{figure}[h!] \centering \includegraphics[width=0.75\textwidth]{obrazek_z_kwarantanna.pdf} \caption{{\bf Schematic representation of the states included in the model}. The solid lines represent the transition parameters and the dashed line indicates that the specific quantity is added.} \label{model} \end{figure} \noindent The following states are included in the model:

$S(t)$ -- susceptible

$E(t)$ -- exposed (infected, not infectious)

$I_d(t)$ -- infectious who will be diagnosed

$I_u(t)$ -- infectious who will not be diagnosed

$R_d(t)$ -- diagnosed and isolated

$R_u(t)$ -- spontaneously recovered without being diagnosed

$Q(t)$ -- quarantined

\bigskip Figure \ref{model} presents the schematic representation of the model. A susceptible individual (state $S$), when becoming infected, first moves to the state $E$, to model the initial period when the infected individual is not yet infectious. Next the cases progress to one of the infectious states $I_d, I_u$ at the rates $\kappa\sigma$ and $(1-\kappa)\sigma$, respectively. In general, moving through the $I_d$ pathway concerns those individuals who (independently of quarantine) would meet the testing criteria, as relevant to the local testing policy, e.g. testing of people with noticeable symptoms.
We emphasize that, from the point of view of the analysis of the spread of infection, the quantity $I_u$ should rather be regarded as unrecognized infections, not necessarily asymptomatic or mild. With this interpretation the value of $\kappa$ can be influenced by the intensity of testing. The state $E$ is created via $I_d$ and $I_u$ with transmission rates $\beta_d$ and $\beta_u$, respectively, normalized to the total population size $N=S+E+I_d+ I_u+R_d+R_u+Q$, which is assumed to be constant in time; births and deaths are neglected. The transition parameter $\sigma$ is assumed identical for both groups, relating to the time between infection and becoming infectious. The infectious individuals in $I_d$ then move to the state $R_d$, which is the state of being diagnosed and isolated (and later recovered or deceased), with the rate $\gamma_d$ corresponding to the observed time between onset and diagnosis. On the other hand, $R_u$ contains people who spontaneously recovered with rate $\gamma_u$. Our model includes an additional state of being quarantined ($Q$). To mimic the situation of contact tracing, individuals can be put in quarantine from the state $S$ (uninfected contacts) or the state $E$ (infected contacts). These individuals stay in the quarantine for a predefined time period $T$. We assume that the number of people who will be quarantined depends on the number of individuals who are diagnosed. The average number of individuals quarantined per diagnosed person is denoted by $\alpha$. However, as the epidemic progresses, some of the contacts could be identified among people who were already infected but were not previously diagnosed, i.e. the state $R_u$. We note that moving individuals between the states $Q$ and $R_u$ has no effect on the epidemic dynamics; therefore we assume that only individuals from $S$ and $E$ are quarantined and we reduce the average number of people put on quarantine by the factor $\frac{S(t)}{S(t)+R_u(t)}$.
Further, to acknowledge the capacity limits of the public health system to perform the contact tracing, we introduce a quantity $K_{max}$, describing the maximum number of people who can be put in quarantine during one time step. We also assume that among the quarantined a proportion $\theta$ is infected. After the quarantine, the infected part $\theta K(t-T)$ goes to $R_d$ and the rest $(1-\theta)K(t-T)$ returns to $S$. Taking all of the above into account, the model is described by the following SEIRQ system: \begin{equation}\label{seir} \begin{array}{l} \dot S(t)=- S(t) (\beta_d I_d(t)+\beta_uI_u(t)) - (1-\theta)K(t) + (1-\theta)K(t-T), \\[5pt] \dot E(t)= S(t) (\beta_d I_d(t) + \beta_u I_u(t)) - \sigma E(t) - \theta K(t),\\[5pt] \dot I_d(t)=\kappa \sigma E(t) - \gamma_d I_d(t), \\[5pt] \dot I_u(t)=(1-\kappa) \sigma E(t) -\gamma_u I_u(t),\\[5pt] \dot R_d(t)= \gamma_d I_d(t) + \theta K(t-T),\\[5pt] \dot R_u(t)=\gamma_u I_u(t),\\[5pt] \dot Q(t)= K(t)-K(t-T),\\[5pt] \mbox{where \ \ } K(t)={\rm min}\{\frac{S(t)}{S(t)+R_u(t)} \alpha \gamma_d I_d(t), K_{max} \}, \mbox{\ \ and \ \ } \alpha,\beta_d,\beta_u,\gamma_d,\gamma_u, \theta, T \geq 0. \end{array} \end{equation} We assume that the parameters $\alpha, \beta_d, \beta_u, \theta$, $\gamma_u$ and $\gamma_d$ depend on the country and time-specific public health interventions and may therefore change between time periods. For a proper interpretation of the equation for $E$ we require that $\beta_d \geq \theta \alpha \gamma_d$, which ensures the positivity of $E$. \subsection{Basic reproductive number, critical transmission parameter $\beta^*$.} Based on the general theory of SEIR type models \cite{Dik}, we introduce the reproductive number \begin{equation} \label{def:R0} \mathcal{R}=\kappa \left( \frac{ \beta_d}{\gamma_d} - \theta \alpha \right) + (1-\kappa) \frac{\beta_u}{\gamma_u}. \end{equation} It determines the stability of the system for $\mathcal{R} <1$ and its instability for $\mathcal{R}>1$ (i.e., the decline or growth of the epidemic).
This quantity not only explains the importance of testing (in terms of $\kappa$) and quarantine (in terms of $\alpha$), but also gives an indication of the optimal levels of testing and contact tracing. We underline that this formula works for the case when the capacity of the contact tracing has not been exceeded $(K(t)<K_{max})$. The details of the derivation of \eqref{def:R0} are provided in the Appendix, section \ref{sec:R}. We emphasize that the formal mathematical derivation holds for the case when $I$ and $E$ are small compared to $S$. Therefore the complete dynamics of the nonlinear system (\ref{seir}) is not fully determined by \eqref{def:R0}. However, in the regime of epidemic suppression, which is the case of the COVID-19 epidemic in Poland, $I$ and $E$ are small compared to $S$, and so the formula (\ref{def:R0}) reasonably describes the spread of infection in the population. The critical value $\mathcal{R}=1$ defines the level of transmission which is admissible, taking into account the existing quarantine policy, in order to control the epidemic. As the level of transmission depends on the level of contacts, this provides information on the necessary level of social distancing measures. The formula \eqref{def:R0} indicates that improving the contact tracing may compensate for a relaxation of contact restrictions. The key quantity is $\theta \alpha$. Indeed, the system with the quarantine has the same stability properties as the one without $K$, but with the new transmission rate $\beta_d^{new}=\beta_d - \theta \alpha \gamma_d$. In order to guarantee the positivity of $E$, $\beta_d^{new}$ must be nonnegative. This generates the constraint \begin{equation} \label{3.1} \theta\alpha\gamma_d \leq \beta_d. \end{equation} The above condition also implies the theoretical maximal admissible level of quarantine. We define it by improving the targeting of the quarantine, i.e.
by the highest possible level of $\theta$: \begin{equation}\label{def:thetamax} \theta_{max} = \frac{\beta_d}{\gamma_d\alpha}. \end{equation} As long as the $K_{max}$ threshold is not exceeded, the effect of an increase in $\theta$ or in $\alpha$ plays the same role at the level of linearization (small $I, E$). However, in general this is not the case, and for the purpose of our analysis we fix $\alpha$. For our analysis we assume $\beta_d=\beta_u=\beta$. The reason is that both $I_d$ and $I_u$ contain a mixture of asymptomatic and symptomatic cases and, although there might be a difference, we lack the information to quantify it. Then using formula \eqref{def:R0} we compute critical values $\beta^*(\kappa,\theta,\alpha)$ defined as \begin{equation} \label{def:betacrit} {\mathcal R}(\beta^*)=1, \mbox{ \ namely \ } \beta^*(\kappa,\theta,\alpha,\gamma_d,\gamma_u)=\frac{(1+\theta\alpha\kappa)\gamma_d\gamma_u}{\gamma_u\kappa+\gamma_d(1-\kappa)}. \end{equation} This gives the upper bound on the transmission rate $\beta$ which still guarantees the suppression of the epidemic. We shall omit the dependence on $\gamma_d,\gamma_u$, as these are fixed in our case, and write briefly $\beta^*(\kappa,\theta,\alpha)$. In the case of the maximal admissible quarantine (\ref{def:thetamax}) we obtain \begin{equation} \label{betamax} \beta^*(\theta_{max},\kappa) = \frac{\gamma_u}{1-\kappa}, \end{equation} which can be regarded as a theoretical upper bound for $\beta$, assuming the ``optimal admissible'' quarantine for fixed $\kappa$, for which the epidemic could still be controlled. It must be kept in mind, though, that the condition (\ref{3.1}) means that we are able to efficiently isolate all persons infected by every diagnosed individual, and is therefore unrealistic. The resulting $\beta^*(\theta_{max},\kappa)$ should therefore be considered as a theoretical limit for the transmission rate. \subsection{Fitting procedure} All simulations are performed using GNU Octave ({\em https://www.gnu.org/software/octave/}).
The underlying tool for all computations is a direct finite difference solver with a 1-day time step. \smallskip {\bf Basic assumptions for data fitting.} We estimate the transmission rates $\beta$ by fitting the model predictions to the data on the cumulative number of confirmed cases. Since people with a confirmed diagnosis are efficiently isolated, they are immediately included into $R_d$. Therefore, the quantity fitted to the data is $R_d(t)$. The crucial assumption behind our approach is that the parameter $\beta$ changes twice during the period of analysis. The reason is that we can distinguish two important time points in the development of the epidemic in Poland. The first is the initial restrictions, including the school closure effective March 12, which was accompanied by restrictions on other social activities. As we do not take migration into account in our model, we assume that the effect of border closing is reflected in $\beta$. The second turning point was the lockdown announced on March 25. For simplicity we summarize the effect of the above measures as two jump changes of $\beta$ at $t \in \{t_1,t_2\}$ and choose $t_1=14, \quad t_2=28$. With $t=1$ corresponding to March 3, this implies a small delay with respect to the above dates, which can be justified by the fact that new cases are reported with a delay of approximately 2 days. \smallskip {\bf Choice of fixed parameters (Tab. \ref{tab0}).} We assume that the parameters $\sigma, \gamma_u$ represent the natural course of infection and their values can be based on the existing literature. The parameter $\sigma$ describes the rate of transition from the non-infectious incubation state $E$ into the infectious states $I_d$ or $I_u$. The value of $\sigma$ takes into account the incubation period and the presymptomatic infectivity period. $\gamma_u$ relates to the period of infectivity, which we select based on the research regarding milder cases, assuming that serious cases are likely diagnosed.
Further, $\kappa$ is a parameter related both to the proportion of asymptomatic infections and to the local testing strategies. Since the literature provides different possible figures, for $\kappa$ we examine three different scenarios. The parameters $\gamma_d, \theta$ and $\alpha$ are fixed in our model for the purpose of data fitting, but informed by available data. One of the scenarios of the future dynamics of the epidemic (section \ref{sec:future}) considers a possible increase of $\theta$. The parameter $\gamma_d$ was estimated based on the time from onset to diagnosis for diagnosed cases, and $\theta$ as the rate of diagnosed cases among the quarantined. Furthermore, we fix the parameter $\alpha$ by comparing the number of quarantined people obtained in simulations with the actual data. The capacity level of the public health services is set in terms of the possible number of quarantined per day, $K_{max}$, as double the level observed so far. A detailed justification of the values of the fixed parameters, collected in the following table, is given in the Appendix, section \ref{sec:par}. \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|} \hline & & \\[-8pt] Parameter & Value & Source \\[4pt] \hline& & \\[-8pt] $\sigma$ & $\frac{1}{3.5}$ & Literature: incubation time \cite{Guan, Li.Q, Lauer} \\ & & + presymptomatic spread \cite{Wei, Tong, Huang} \\[4pt] \hline & & \\[-8pt] $\gamma_d$ & $\frac{1}{5.5}$ & Observed data: appendix section \ref{sec:par} \\[4pt] \hline & & \\[-8pt] $\gamma_u$ & $\frac{1}{10}$ & Literature: \cite{Hu}, WHO mission report from China \\[4pt] \hline & & \\[-8pt] $\kappa$ & \{0.2; 0.5; 0.8\} & Literature: proportion asymptomatic\\ & & or undocumented \cite{Li.R, Nishiura, Day,John} \\[4pt] \hline & & \\[-8pt] $\theta$ & 0.006 & Observed data: appendix section \ref{sec:par} \\[4pt] \hline & & \\[-8pt] $\alpha$ & 75 & Observed data: appendix section \ref{sec:par} \\[4pt] \hline & & \\[-8pt] $K_{max}$ & 50 000 & $2 \,\times$ the maximum level observed so far (arbitrary decision) \\[4pt] \hline \end{tabular} \caption{Fixed parameters used in the model} \label{tab0} \end{table} {\bf Optimization algorithm.} In order to fit the values $\beta_1,\beta_2,\beta_3$ we use a standard gradient descent algorithm. The error function is defined as the mean square difference between the cumulative number of diagnoses and the $R_d(t)$ predicted by the model. For the initial values, the error function is optimized only for a limited number of possible conditions, as these mostly impact $\beta_1$, which is less relevant for future predictions. To estimate confidence intervals we use the method of parametric bootstrap. The optimisation procedures are described in the Appendix, section \ref{sec:opt}, where we also show the precise errors of the data fitting. \smallskip {\bf Dataset.} The data series contains the cumulative number of confirmed cases of COVID-19 in Poland from March 3 (first confirmed case in Poland) till April 26, which amounts to 54 observations. The data are taken from official communications of the Ministry of Health.
As explained in Table \ref{tab0} and in the appendix (section \ref{sec:par}), additional data sources were used for choosing $\theta$, $\alpha$ and $\gamma_d$. \section{Results} \label{s:sym} \subsection{Estimation of parameters and ``no-change'' scenario predictions} In Table \ref{tab2.1} we show the estimated values of $\beta_i$, where $i=1,2,3$ correspond to the time intervals when different measures were in place, and the value of ${\mathcal R}$ for the third time interval. Given the social distancing measures in place in early April 2020, as well as the quarantine levels, the reproductive number was below 1, independently of the value of $\kappa$, which relates to the testing effectiveness. Figure \ref{fig2.0} shows the fit of the models assuming different levels of $\kappa$. A good fit is found for all three models, although the predictions start to differ in the middle-term prognosis. \begin{figure}[h!] \begin{center} \includegraphics[width=12cm]{dopasowanieRdRu.pdf} \end{center} \caption{Results of the model fit to the cumulative diagnosed cases ($R_d$) for $\kappa=0.2,0.5,0.8$ (panel A) and the corresponding predictions for the undiagnosed, recovered compartment $R_u$ (panel B). Coloured shades correspond to 95\% confidence intervals for the respective colour line.} \label{fig2.0} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|c|} \hline & & & \\[-8pt] & $\kappa=0.2$ & $\kappa=0.5$ & $\kappa=0.8$ \\[4pt] \hline & & & \\[-8pt] $\beta_1$ & 0.635 & 0.684 & 0.738 \\[4pt] & (0.569 , 0.701) & (0.611 , 0.744) & (0.672 , 0.812) \\[4pt] \hdashline & & & \\[-8pt] $\beta_2$ & 0.332 & 0.383 & 0.442 \\[4pt] & (0.288 , 0.397) & (0.336 , 0.443) & (0.4 , 0.514) \\[4pt] \hdashline & & & \\[-8pt] $\beta_3$ & 0.099 & 0.132 & 0.175 \\[4pt] & (0.081 , 0.118) & (0.11 , 0.149) & (0.147 , 0.214) \\[4pt] \hdashline & & & \\[-8pt] $\mathcal{R}(\beta_3, 0.006,75)$ & 0.817 & 0.802 & 0.772 \\[4pt] & (0.651 , 0.977) & (0.648 , 0.915) & (0.569 , 0.874) \\[4pt] \hline \end{tabular} \caption{Estimated values of $\beta_i$ and values of $\mathcal{R}$ corresponding to the latest estimation period with 95\% confidence intervals } \label{tab2.1} \end{table} We proceed with predictions assuming that the restrictions are continued, i.e. keeping $\beta=\beta_3$ (note that the estimated $\beta_3$ is different for each $\kappa$). We calculate the epidemic duration ($t_{max}$), the peak number of infected ($I_d^{max}, I_u^{max}$) and the final size of the epidemic ($R_d(t_{max}), R_u(t_{max})$). In order to show the influence of quarantine we compare the situation with quarantine, keeping the same $\theta,\alpha$, to the situation without quarantine, setting $\alpha\theta=0$. The results of the development of the epidemic during the first 120 days are shown in Figure \ref{fig2.2a}. For $\kappa=0.2$ the difference between the scenarios with and without quarantine is visible but not striking. However, for $\kappa=0.5$ and $\kappa=0.8$ a bifurcation in the number of new cases occurs around $t=40$, leading to a huge difference in the total duration of the epidemic and the total number of cases. These values are summarized in Table \ref{tab2.2}.
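For illustration, these scenario runs can be reproduced with a minimal forward-difference sketch of system \eqref{seir}, using the same 1-day time step as our solver. The parameter values below follow the $\kappa=0.5$ scenario ($\beta=\beta_3=0.132$, $\theta=0.006$, $\alpha=75$); the initial state and the quarantine length $T=14$ are purely illustrative, not the fitted ones.

```python
# Forward-difference (Euler) sketch of the SEIRQ system with a 1-day step.
# State order: (S, E, I_d, I_u, R_d, R_u, Q); beta_d = beta_u = beta.
def simulate_seirq(days=120, N=38e6, beta=0.132, sigma=1 / 3.5,
                   gamma_d=1 / 5.5, gamma_u=1 / 10, kappa=0.5,
                   theta=0.006, alpha=75.0, K_max=50_000.0, T=14):
    S, E, Id, Iu, Rd, Ru, Q = N - 2000.0, 1000.0, 500.0, 500.0, 0.0, 0.0, 0.0
    K_hist = [0.0] * T                       # K(t) = 0 before the start
    traj = []
    for t in range(days):
        K = min(S / (S + Ru) * alpha * gamma_d * Id, K_max)  # newly quarantined
        K_back = K_hist[t]                   # K(t - T): released after time T
        new_inf = beta * S * (Id + Iu) / N   # transmission normalized by N
        dS = -new_inf - (1 - theta) * K + (1 - theta) * K_back
        dE = new_inf - sigma * E - theta * K
        dId = kappa * sigma * E - gamma_d * Id
        dIu = (1 - kappa) * sigma * E - gamma_u * Iu
        dRd = gamma_d * Id + theta * K_back
        dRu = gamma_u * Iu
        dQ = K - K_back
        S, E, Id, Iu = S + dS, E + dE, Id + dId, Iu + dIu
        Rd, Ru, Q = Rd + dRd, Ru + dRu, Q + dQ
        K_hist.append(K)
        traj.append((S, E, Id, Iu, Rd, Ru, Q))
    return traj
```

Note that the right-hand sides sum to zero, so the total population $N$ is conserved, and with $\mathcal{R}<1$ the infected compartments decay, as in the scenarios above.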
We note that, given the epidemic state in the first half of April 2020, for all values of $\kappa$ the model predicts epidemic extinction both with and without quarantine. However, since the epidemic is very near to the endemic state, the predicted duration is very long, especially if no quarantine is applied. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & & & & & \\[-8pt] $\kappa$ & quarantine factors & $R_d(t_{max})$ & $R_u(t_{max})$ & $I_d^{max}$ & $I_u^{max}$ & $t_{max}$ \\ [4pt] \hline & & & & & & \\[-8pt] 0.2 & $\theta=0.006, \alpha=75$ & 31 & 85 & 1.9 & 10.8 & 450 \\ & $\theta,\alpha = 0$ & 44 & 175 & 2.1 & 12.5 & 830 \\[4pt] \hline & & & & & & \\[-8pt] 0.5 & $\theta=0.006, \alpha=75$ & 29 & 20 & 1.9 & 2.7 & 330 \\ & $\theta,\alpha = 0$ & 1078 & 1078 & 5.1 & 9.2 & 3200 \\[4pt] \hline & & & & & & \\[-8pt] 0.8 & $\theta=0.006, \alpha=75$ & 24 & 4 & 1.9 & 0.7 & 230 \\ & $\theta,\alpha = 0$ & 6317 & 1579 & 10.6 & 47.6 & 1280 \\[4pt] \hline \end{tabular} \caption{Duration of epidemic ($t_{max}$) in days, the final values of $R_d$ and $R_u$, in thousands ($R_d(t_{max}), R_u(t_{max})$), and peak values of $I_d$ and $I_u$, in thousands ($I_d^{max}, I_u^{max}$), according to quarantine and testing scenarios.} \label{tab2.2} \end{table} \begin{figure}[h!] \centering \includegraphics[width=16cm]{PanelPorPredykcji.pdf} \caption{Predicted values of $R_d,R_u$ (panels A -- C) and $I_d, I_u$ (panels D -- E), depending on the value of $\kappa$ and whether or not the quarantine is implemented. For $t>54$, $\beta=\beta_3$ as estimated for each $\kappa$, with the same quarantine parameters or without quarantine at all.} \label{fig2.2a} \end{figure} \subsection{Critical $\beta^*$ for the current situation} Using the formula \eqref{def:betacrit} we can compute the critical values $\beta^*$. 
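The structure of these quantities can be sketched in code. The decomposition below — diagnosed transmission reduced by the quarantine factor $\theta\alpha\gamma_d$ and weighted by $\kappa$, undiagnosed transmission weighted by $(1-\kappa)$ — is only a reconstruction of the logic of \eqref{def:R0}, not the paper's exact formula, and the rates $\gamma_d=0.18$, $\gamma_u=0.1$ are illustrative values chosen to roughly reproduce the $\kappa=0.5$ row of Table \ref{tab1}:

```python
# Hedged reconstruction (not the paper's exact formula) of the
# reproductive number and the critical beta*:
#   R = kappa*(beta - theta*alpha*gamma_d)/gamma_d + (1-kappa)*beta/gamma_u
# gamma_d, gamma_u below are illustrative, not the paper's fitted values.
GAMMA_D, GAMMA_U = 0.18, 0.1

def reproductive_number(beta, kappa, theta, alpha):
    diagnosed = kappa * (beta - theta * alpha * GAMMA_D) / GAMMA_D
    undiagnosed = (1.0 - kappa) * beta / GAMMA_U
    return diagnosed + undiagnosed

def beta_star(kappa, theta, alpha):
    """Solve R(beta*) = 1 for beta."""
    return (1.0 + kappa * theta * alpha) / (kappa / GAMMA_D + (1.0 - kappa) / GAMMA_U)

def theta_max(beta, alpha):
    """Maximal admissible quarantine of scenario (c): theta = beta/(alpha*gamma_d)."""
    return beta / (alpha * GAMMA_D)

# kappa = 0.5 scenario with quarantine (theta = 0.006, alpha = 75):
print(reproductive_number(0.132, 0.5, 0.006, 75))  # close to the 0.802 in Table tab1
print(beta_star(0.5, 0.006, 75))                   # close to the 0.158 in Table tab1
```

By construction $R(\beta^*)=1$, so values of $\beta$ above `beta_star` correspond to a growing epidemic in this sketch.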
In Table \ref{tab1} we show the values of $\beta^*(\kappa,0.006,75)$ and for convenience recall also the estimated values of $\beta_3$ and ${\mathcal R}$, listed already in Table \ref{tab2.1}. Moreover, we compute $\beta^*(\kappa,0,0)$, i.e. without quarantine, and show the values of ${\mathcal R}$ for our estimated values of $\beta_3$ and the same $\gamma_d, \gamma_u$ but without quarantine. Comparing with Table \ref{tab1}, the estimated values of $\beta_3$ for all cases of $\kappa$ are only slightly below $\beta^*$. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\kappa$ & $\beta_3$ & $\beta^*(\kappa,0.006,75)$ & ${\mathcal R}(\beta_3,\kappa,0.006,75)$ & $\beta^*(\kappa,0,0)$ & ${\mathcal R}(\beta_3,\kappa,0,0)$ & $\beta_3 - \theta \alpha \gamma_d$ \\ \hline 0.2 & 0.099 & 0.12 & 0.817 & 0.11 & 0.907 & 0.018 \\ 0.5 & 0.132 & 0.158 & 0.802 & 0.129 & 1.03 & 0.051 \\ 0.8 & 0.175 & 0.211 & 0.772 & 0.155 & 1.132 & 0.074 \\ \hline \end{tabular} \caption{Values of $\beta^*$ and $\mathcal R(\beta_3)$ with quarantine ($\theta=0.006, \; \alpha=75$) and without quarantine.} \label{tab1} \end{table} Eliminating the quarantine, for the estimated values of $\beta_3$, we have different situations depending on the actual value of $\kappa$. In the case $\kappa=0.2$, so assuming that currently only 20\% of infections are diagnosed, the low values of ${\mathcal R}$ are due to low $\beta_3$ rather than the effect of quarantine (controlling the epidemic by social contact restrictions). In effect, even if we remove the quarantine we still have ${\mathcal R}<1$, but very close to 1. On the other hand, if $\kappa=0.5$ or $\kappa=0.8$ we estimate higher $\beta_3$, which corresponds to the situation of controlling the epidemic by extensive testing and quarantine. For these cases, if we remove the quarantine, we end up with ${\mathcal R}>1$. The quantity $\beta_3-\theta\alpha\gamma_d$ represents the effective transmission rate due to diagnosed cases. 
In particular, it shows by how much the transmission could be reduced by improved contact tracing ($\theta\alpha$) and faster diagnosis ($\gamma_d$). These results confirm that the higher the ratio of undiagnosed infections, the weaker the influence of quarantine. In the next section we verify these results numerically. \subsection{Impact of quarantine at relaxation of social distancing} \label{sec:future} Our second goal is to simulate loosening of restrictions. In particular, we want to verify numerically the critical thresholds $\beta^*$ listed in Table \ref{tab1}. For this purpose we assume that at $t=60$ we change $\beta$. For each value of $\kappa$ we consider 3 scenarios:\\[2pt] {\bf (a)} Current level quarantine: i.e. the quarantine parameters $\theta=0.006, \; \alpha=75$ are maintained;\\[2pt] {\bf (b)} No quarantine is applied starting from $t=60$; \\[5pt] {\bf (c)} The maximal admissible quarantine is applied, meaning that $\theta_{max}=\frac{\beta}{\alpha \gamma_d}$ (see (\ref{def:R0})). In this case $\alpha=75$. As long as the limit $K_{max}$ is not reached, there is no difference whether we increase $\alpha$ or $\theta$; the decisive parameter is $\alpha \theta$. Increasing $\alpha$ would lead to reaching $K=K_{max}$ earlier and hence to worse outcomes.\\[5pt] Figures \ref{fig3.1}-\ref{fig3.3} show the final values of $R=R_d+R_u$ and the time till the end of the epidemic depending on the value of $\beta$ for $t \geq 60$, for the above 3 scenarios and different values of $\kappa$. The theoretical values of $\beta^*$ are shown by black lines. The results confirm that around $\beta^*$ a rapid increase in the total number of infected occurs, coinciding with the peak total epidemic duration. Thus the numerical computations confirm that the critical $\beta^*$ calculated for the linear approximation in Section 2.2 are adequate, with a small bias towards lower values. \begin{figure}[h!] 
\centering \includegraphics[width=14cm]{koniecBYbeta_02.pdf} \caption{Duration of epidemic and the final epidemic size as dependent on $\beta$, for $\kappa=0.2$.} \label{fig3.1} \end{figure} The case $\kappa=0.2$ shows that the influence of quarantine is not high, even in the maximal admissible case, when we are able to efficiently isolate all persons infected by each diagnosed individual. \begin{figure}[h!] \centering \includegraphics[width=14cm]{koniecBYbeta_05.pdf} \caption{Duration of epidemic and the final epidemic size as dependent on $\beta$, for $\kappa=0.5$.} \label{fig3.2} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=14cm]{koniecBYbeta_08.pdf} \caption{Duration of epidemic and the final epidemic size as dependent on $\beta$, for $\kappa=0.8$.} \label{fig3.3} \end{figure} A striking feature in the behaviour of the total number of infected is the jump at a certain critical value of $\beta$, observed for $\kappa=0.5$ and $\kappa=0.8$, both in the case $\theta=0.006$ and $\theta=\theta_{max}$. The values of $R_d$ and $R_u$ before and after these qualitative changes are summarized in Table \ref{tab3.1}. \begin{table}[h!] 
\centering \begin{tabular}{|c|c|c|c|c|} \hline & $\beta$ & $R_d(t_{max})$ & $R_u(t_{max})$ & $t_{max}$ \\ \hline & & & & \\[-8pt] $\kappa=0.5, \; \theta=0.006$ & 0.163 & 1171 & 811 & 4170 \\[3pt] & 0.164 & 6160 & 5875 & 1200 \\[4pt] \hline & & & & \\[-8pt] $\kappa=0.5, \; \theta=\theta_{max}(\beta)$ & 0.211 & 1423 & 666 & 4800 \\[3pt] & 0.212 & 11458 & 10971 & 840 \\[4pt] \hline & & & & \\[-8pt] $\kappa=0.8, \; \theta=0.006$ & 0.218 & 1137 & 236 & 4740 \\[3pt] & 0.219 & 13706 & 3365 & 1060 \\[4pt] \hline & & & & \\[-8pt] $\kappa=0.8, \; \theta=\theta_{max}(\beta)$ & 0.566 & 1762 & 108 & 2850 \\[3pt] & 0.567 & 27602 & 6729 & 570 \\[4pt] \hline \end{tabular} \caption{Critical values of $\beta$ obtained in simulations and corresponding final numbers of diagnosed/undiagnosed (in thousands) and total time of epidemic.} \label{tab3.1} \end{table} A closer investigation of these values of $\beta$ shows that in all 4 cases the jump occurs at the first value of $\beta$ for which the limit on the number of quarantined, $K_{max} = 50\,000$, is reached. Notice that immediately after passing the threshold the values become very close to those without quarantine. Therefore the effect of quarantine is immediately and almost completely cancelled after passing the critical value of $\beta$. The transition is milder in the case $\kappa=0.2$, which can be explained by the fact that the transition takes place at lower values of $\beta$. The results of our simulations confirm the theoretical prediction that the margin in the relaxation of restrictions is very narrow if we want to avoid a blow-up of the number of infections. Strengthening the quarantine allows us to remain in a stable regime while increasing $\beta$. \section{Discussion} We propose a simple SEIR-type model (SEIRQ), which includes the effects of testing and contact tracing. The model formulation allows us to calculate an interpretable formula for the reproductive number $\mathcal{R}$ \eqref{def:R0}. 
As typical for this class of models, $\mathcal{R}$ depends on the transmission parameter $\beta$. Increasing $\beta$, corresponding e.g. to a higher frequency of social contacts, increases $\mathcal{R}$. Decreasing $\beta$, for example as a consequence of widespread use of face masks, has the opposite effect. On the other hand, $\gamma_d$ reflects the time to diagnosis, and the formula indicates that more rapid diagnosis is associated with lower $\mathcal{R}$. In addition, our model offers a clear interpretation of the quarantine effect. The transmission rate due to diagnosed cases, $\beta_d$, is decreased by the factor $\theta\alpha\gamma_d$, indicating that both the number of quarantined per diagnosed individual ($\alpha$) and proper targeting of the quarantine (the infection rate among the quarantined, $\theta$) equally contribute to this factor. Also the parameter related to testing, the delay in diagnosis $\gamma_d^{-1}$, plays a similar role. This quantifies the potential of a wide range of interventions to improve testing and contact tracing, as outlined e.g. in the ECDC recommendations \cite{ECDCCT}. In particular, as the number of people put in quarantine per case and the infection rate among the quarantined impact $\mathcal{R}$ in a similar fashion, our results support the recommendations to focus on the high risk contacts when the resources do not allow following all contacts. Our model takes into consideration only effective contact tracing, i.e. the situation when the infected contacts are identified and put in quarantine before they become infectious. People who are identified later would be modelled as passing through one of the $I$ states to the $R$ states. This means that the number of quarantined in our model can also be increased by faster contact tracing. The timely identification of contacts may be a significant challenge in the quarantine approach given that the incubation time can be as short as 2 days in 25\% of cases \cite{Guan}. 
As mentioned by other authors \cite{Ferretti}, the delays in manual contact tracing are usually at least 3 days, and under such circumstances contact tracing and quarantine alone may be insufficient to control the epidemic. This could be improved with digital contact tracing. Notably, the mixed contact tracing strategies implemented in South Korea indeed helped to control the epidemic at the early stages \cite{Korea}. The use of ``smart contact tracing'' with mobile phone location data and administrative databases was also key to rapid identification and self-quarantine of contacts in Taiwan \cite{Chen}, and implementation of such a strategy helped Singapore to control the epidemic without major disruptions of social activities \cite{Ng}. We note that the quarantine effect relates only to transmission due to diagnosed cases. As expected, in order to control the epidemic the transmission due to undiagnosed cases has to be negligible. This can be controlled by general measures such as {\it lockdown}, which universally decrease the frequency of social contacts and are therefore likely to reduce $\beta_u$. In our model the part of $\mathcal{R}$ representing transmission due to undiagnosed cases is scaled by $(1-\kappa)$, the parameter relating to the efficiency of the testing system. Again, the examples of Singapore as well as the Italian village of Vo' Euganeo show that widespread testing complementing efficient contact tracing was essential to suppress the epidemic. Testing unrelated to epidemiological links decreases the $(1-\kappa)$ factor, thus making the factors impacting transmission due to diagnosed cases, such as quarantine, more powerful in decreasing $\mathcal{R}$. Further, our model allows us to study the effect of the situation in which the contact tracing capacities are exceeded. In this situation the epidemic is likely to quickly develop to the levels observed without quarantine. 
It is therefore quite crucial to implement an aggressive contact tracing system while the epidemic is still at low levels and it is possible to bring the epidemic to the suppression phase. We demonstrate the high impact of contact tracing and quarantine on the observed numbers of cases in Poland. This effect was coupled with a substantial reduction in the transmission parameter $\beta$ resulting from social contact restrictions. Depending on the scenario, $\beta$ decreased by 76\% to 84\%, bringing $\mathcal{R}$ below 1. The estimated effect of the quarantine in Poland would depend on which of the considered scenarios regarding testing efficiency was the most relevant to our situation. In our model the quarantine is estimated to be the most effective for the scenario in which most of the cases are diagnosed ($\kappa=0.8$). Testing strategies that comprise testing of all individuals with symptoms of respiratory illness could theoretically identify up to 82\% of infected, assuming they would all present to medical care. This could be coupled with random screening of high risk individuals, e.g. health care workers, or - in case of high incidence - even random screening of the entire community, to achieve a $\kappa$ of the order of 0.8. The Polish clinical recommendations specifically mention only testing all individuals with severe infections \cite{AOTMIT}. In addition, testing is provided to health care workers. The severe course corresponds to approximately 18\% of all infections \cite{Guan}. Therefore, the $\kappa=0.8$ scenario is unlikely to be realistic in Poland. We believe that the plausible current $\kappa$ in our country lies between 0.2 and 0.5. For these scenarios the model shows that the control of the epidemic is largely achieved through suppression of $\beta$. 
In case of relaxation of social contact restrictions, the efforts should be focused on increasing the level of testing in order to decrease the proportion of undiagnosed cases, as well as on maintaining or increasing the effectiveness of quarantine. For smaller $\kappa$, even substantially increasing the effectiveness of quarantine does not allow going back to the level of social contacts from before the epidemic ($\beta_1$). Finally, the contact tracing effort was manageable in Poland due to the relatively small number of cases. Should the case load increase substantially, longer delays in contact tracing would occur, which can substantially decrease the effects of quarantine \cite{Hellewell, Ferretti}. \smallskip {\bf Limitations and future directions of research.} We do not consider the likely reduced transmission from undiagnosed cases, who are more likely to be asymptomatic or paucisymptomatic $(\beta_u<\beta_d)$. The reduction factor for the infectiousness of asymptomatic cases is still under investigation. One study found 60-fold lower viral loads in asymptomatic cases \cite{Liu}, while another estimated a transmissibility reduction of 50\% \cite{Li.R}. Moreover, we did not have sufficient data to include this additional parameter. We calibrated our model only to diagnosed cases, which were officially available. Calibration to mortality data is another approach, successfully implemented e.g. in \cite{Flaxman}, that potentially removes bias due to different testing policies. As there were relatively few fatalities in Poland and little data on clinical progression, we decided on a simplified model without explicit modelling of the outcomes. Furthermore, we did not consider sub-optimal adherence to quarantine. It is likely that some individuals would not fully comply with strict quarantine rules. However, only anecdotal evidence for such a phenomenon is available at this time. 
In our model it would decrease the effective $\alpha\theta$, which was chosen to fit the observed number of people put in quarantine. Finally, the analysis of $\mathcal{R}$ is suitable for a small epidemic size, when $S \approx N$. For other cases the results are still useful, but the approximation may be biased, as we have shown for $\beta^*$. Due to limited data availability and policy changes we could not determine which $\kappa$ scenario is the most appropriate. \smallskip In conclusion, we present a simple model which allows one to understand the effects of testing, contact tracing and quarantining of contacts. We apply the model to the data from Poland and show that, despite a substantial impact of contact tracing and quarantine, it is unlikely that control of the epidemic could be achieved without any reduction of social contacts. \smallskip {\bf Acknowledgments.} This work was partially supported by the Polish National Science Centre's grant No.\ 2018/30/M/ST1/00340 (HARMONIA).
\section{Introduction and Summary} It is well known that the effective action for a Dp-brane is the Dirac-Born-Infeld (DBI) action \cite{Polchinski:1995mt,Leigh:1989jq} (for a review, see for example \cite{Simon:2011rw}). The problem with this action is that it is non-polynomial in the world-volume coordinates and in the world-volume gauge field, which makes its analysis rather complicated. A similar situation occurs for the fundamental string, whose dynamics is governed by the Nambu-Goto form of the action, which is again non-polynomial in the string coordinates. On the other hand, it is well known that it is possible to replace this action with a classically equivalent form of the action with a world-sheet metric \cite{Brink:1976sc,Deser:1976rb,Howe:1977hp}. An analogous procedure can be performed in the case of a $p$-brane \cite{Howe:1977hp,AbouZeid:1997mt}. The situation is more involved in the case of the DBI action due to the presence of the gauge field, so that the auxiliary metric contains symmetric and anti-symmetric parts \cite{AbouZeid:1997mt,Lindstrom:1987cv}. On the other hand, an interesting form of the action for the Dp-brane was proposed in \cite{AbouZeid:1998he}, which is known as the geometrical form of the action. The main advantage of this action is that it depends on a quadratic combination of the field strength, and hence we can introduce an auxiliary metric which is symmetric. Since this form of the action is very interesting and not well known, we believe that it deserves to be studied further. In this note we focus on the canonical analysis of the geometrical action as introduced in \cite{AbouZeid:1998he}. We would like to see whether the Hamiltonian differs from the Hamiltonian of the DBI action, which is given as a linear combination of $p+1$ first class constraints. In the case of the geometrical action the canonical analysis is more complicated due to the form of the Lagrangian density, but we again find that this theory has $p+1$ first class constraints which have the same form as the constraints derived from the DBI action. This is certainly a nice consistency check demonstrating that different Lagrangian formulations that are in some way equivalent have the same Hamiltonian. 
A well known example is the Nambu-Goto and Polyakov forms of the string action. We also briefly discuss the geometrical form of the unstable D(p+1)-brane action \cite{Sen:1999md,Bergshoeff:2000dq,Kluson:2000iy}, where we easily generalize the analysis that leads to the geometrical action. We also study the tachyon condensation in the form of the tachyon kink, which is a profile of the tachyon field that depends on one world-volume coordinate. We study this problem following the very nice analysis presented in \cite{Erkal:2009xq}, where it was argued that it is possible to interpret a non-BPS D(p+1)-brane as a $(p+1)$-dimensional object moving in an $11$-dimensional space-time where the additional coordinate corresponds to the tachyon field $T$. In fact, such a structure is also manifest in the geometrical form of the unstable D(p+1)-brane action studied here. The tachyon kink then corresponds to a partial gauge fixing where one world-volume coordinate is identified with the tachyon field. Generally, all world-volume fields in this gauge fixed form of the action depend on the tachyon. When we restrict ourselves to the low energy effective action, so that all world-volume fields do not depend on the tachyon, we find that the resulting action corresponds to the geometrical form of the Dp-brane action. The situation when the fluctuation modes depend on $T$ was nicely analyzed in \cite{Erkal:2009xq}, where it was argued that they are not normalizable and hence cannot be considered as open string excitations. Rather, they can be interpreted as excited closed string states, and we recommend the original paper \cite{Erkal:2009xq} for more details. Let us outline our results. We found the canonical structure of the geometrical action and we showed that it has the same form as in the case of the Hamiltonian analysis of the DBI action. 
We also found the geometrical form of the unstable D(p+1)-brane action and studied tachyon condensation on its world-volume, where we showed that it leads to the geometrical form of the stable Dp-brane action, which is a nice consistency check of the tachyon condensation. \section{Canonical Formalism of Geometrical Action for Dp-brane} To begin with, we review the construction of the geometrical action that was performed in \cite{AbouZeid:1998he}. We start with the standard DBI action for a Dp-brane, which has the form \begin{equation} S=-T_p \int d^{p+1}\xi e^{-\phi}\sqrt{-\det (g_{\alpha\beta}+\mF_{\alpha\beta})} \ , \end{equation} where $T_p=\frac{1}{l_s^{p+1}}$ is the Dp-brane tension and $l_s$ is the string length. Further, $g_{\alpha\beta}=G_{MN}\partial_\alpha x^M\partial_\beta x^N$, where $G_{MN}$ is the background metric and $x^M, M,N=0,1,\dots,9$, parameterize the embedding of the Dp-brane in the target space-time. The world-volume of the Dp-brane is parameterized by coordinates $\xi^\alpha, \alpha,\beta=0,1,\dots,p$, where $\partial_\alpha=\frac{\partial}{\partial \xi^\alpha}$. $\phi$ is the background field known as the dilaton, and $\mF_{\alpha\beta}=b_{\alpha\beta}+l_s^2F_{\alpha\beta} \ , F_{\alpha\beta}= \partial_\alpha A_\beta-\partial_\beta A_\alpha$, where $A_\alpha$ is the gauge field that propagates on the world-volume of the Dp-brane. Finally, $b_{\alpha\beta}=B_{MN}\partial_\alpha x^M\partial_\beta x^N$ is the embedding of the background NSNS two form into the world-volume of the Dp-brane. 
The geometrical form of the action was derived in \cite{AbouZeid:1998he} using the important fact that \begin{equation} \det (g_{\alpha\beta}+\mF_{\alpha\beta})=\det (g_{\alpha\beta}-\mF_{\alpha\beta}) \end{equation} so that we can write \begin{eqnarray}\label{defgeom} & &\sqrt{-\det (g_{\alpha\beta}+\mF_{\alpha\beta})}= (-\det (g_{\alpha\beta}+\mF_{\alpha\beta}))^{1/4}(-\det (g_{\alpha\beta}-\mF_{\alpha\beta}))^{1/4}= \nonumber \\ & &= (- g)^{1/4}(-\mG)^{1/4} \ , \quad g=\det g_{\alpha\beta} \ , \quad \mG=\det \mG_{\alpha\beta} \ , \nonumber \\ & &\mG_{\alpha\beta}= g_{\alpha\beta}-\mF_{\alpha\gamma}g^{\gamma\delta}\mF_{\delta\beta} \ , \quad \mG_{\alpha\beta }=\mG_{\beta\alpha} \ . \nonumber \\ \end{eqnarray} As a result we obtain the geometrical form of the action \begin{equation}\label{geoform} S=-T_p \int d^{p+1}\xi e^{-\phi}(-g)^{1/4}(- \mG)^{1/4} \ . \end{equation} It was shown in \cite{AbouZeid:1998he} that the main advantage of this action is that when we introduce an auxiliary world-volume metric we obtain an action that is quadratic in the gauge fields. Further, since the action (\ref{geoform}) is apparently different from the DBI form of the action, it is interesting to study it in more detail. In fact, in this note we focus on the canonical analysis of the action (\ref{geoform}). 
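The determinant factorization in (\ref{defgeom}) rests on the matrix identity $\det(g+\mF)\,\det(g-\mF)=\det g\,\det\mG$ together with $\det(g+\mF)=\det(g-\mF)$, and can be checked numerically. The following sketch (arbitrary $2\times 2$ numerical values, symmetric $g$, antisymmetric $\mF$) also verifies the inverse-matrix component identity used later in the derivation of the Hamiltonian constraint:

```python
# Numerical check (2x2 toy case) of the identity behind the geometrical
# action: det(g+F)^2 = det(g) * det(G) with G = g - F g^{-1} F, where g is
# symmetric and F antisymmetric. All numerical entries are arbitrary.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add2(a, b, s=1):
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

g = [[-1.0, 0.3], [0.3, 2.0]]   # symmetric, Lorentzian-like signature
F = [[0.0, 0.7], [-0.7, 0.0]]   # antisymmetric

G = add2(g, mul2(F, mul2(inv2(g), F)), s=-1)   # G = g - F g^{-1} F

# det(g+F) = det(g-F), hence det(g+F)^2 = det(g) det(G):
print(abs(det2(add2(g, F)) - det2(add2(g, F, s=-1))) < 1e-12)   # True
print(abs(det2(add2(g, F)) ** 2 - det2(g) * det2(G)) < 1e-12)   # True

# Component identity used for the Hamiltonian constraint:
# (G^{-1})^{00} = (spatial cofactor of g+F) / det(g+F).
Ginv = inv2(G)
gF = add2(g, F)
print(abs(Ginv[0][0] - gF[1][1] / det2(gF)) < 1e-12)            # True
```

The first identity follows because $(g+\mF)^T = g-\mF$ for symmetric $g$ and antisymmetric $\mF$; the numerical check simply confirms the algebra for a concrete matrix.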
Now from (\ref{geoform}) we obtain the conjugate momenta \begin{eqnarray}\label{momen} & &p_M=\frac{\delta \mL}{\delta \partial_0 x^M}=\nonumber \\ &&= -\frac{1}{2}T_p e^{-\phi}g_{MN}\partial_\beta x^N g^{\beta 0}(-g)^{1/4} (-\mG)^{1/4} -\frac{1}{2}T_p e^{-\phi}g_{MN}\partial_\beta x^N \mG^{\beta 0}(-g)^{1/4} (-\mG)^{1/4} -\nonumber \\ & &-\frac{1}{2}T_p e^{-\phi}g_{MN}\partial_\beta x^N g^{\beta\sigma} \mF_{\sigma\omega}\mG^{\omega \delta}\mF_{\delta \rho}g^{\rho 0} (-g)^{1/4}(-\mG)^{1/4}+\nonumber \\ & &+\frac{1}{2}T_p e^{-\phi} (-g)^{1/4}(-\mG)^{1/4}(B_{MN}\partial_\beta x^N g^{\beta\rho} \mF_{\rho\sigma}\mG^{\sigma 0}-B_{MN} \partial_\beta x^N g^{0\rho}\mF_{\rho\sigma}\mG^{\sigma\beta}) \ , \nonumber \\ & &\pi^\alpha=\frac{\delta \mL}{\delta \partial_0 A_\alpha}= \frac{l_s^2}{2}T_p e^{-\phi}(g^{\alpha\beta}\mF_{\beta\rho}\mG^{\rho 0}- g^{0\beta}\mF_{\beta\rho}\mG^{\rho\alpha})(-g)^{1/4}(-\mG)^{1/4} \nonumber \\ \end{eqnarray} so that $\pi^0\approx 0$. Further, the bare Hamiltonian density is equal to \begin{eqnarray} \mH=p_M\partial_0 x^M+\pi^iF_{0i}+\pi^i\partial_i A_0-\mL=\pi^i\partial_i A_0 \ . \nonumber \\ \end{eqnarray} To proceed further we use the definition of $\pi^{\alpha}$ given in (\ref{momen}) to introduce $\Pi_M$ defined as \begin{eqnarray} & &\Pi_M=p_M-l_s^{-2}B_{MN}\partial_i x^N \pi^i =\nonumber \\ & &= -\frac{1}{2}T_p e^{-\phi}g_{MN}\partial_\beta x^N g^{\beta 0}(-g)^{1/4} (-\mG)^{1/4} -\frac{1}{2}T_p e^{-\phi}g_{MN}\partial_\beta x^N \mG^{\beta 0}(-g)^{1/4} (-\mG)^{1/4} -\nonumber \\ & &-\frac{1}{2}T_p e^{-\phi}g_{MN}\partial_\beta x^N g^{\beta\sigma} \mF_{\sigma\omega}\mG^{\omega \delta}\mF_{\delta \rho}g^{\rho 0} (-g)^{1/4}(-\mG)^{1/4} \ . 
\nonumber \\ \end{eqnarray} Now using (\ref{momen}), or its equivalent form given above, we get \begin{eqnarray} & &\partial_i x^M \Pi_M+F_{ij}\pi^j= \partial_i x^M p_M+F_{ij}\pi^j= \nonumber \\ & &=-\frac{1}{2}T_p e^{-\phi}(-g)^{1/4}g_{i\beta}\mG^{\beta 0}(-\mG)^{1/4} -\frac{1}{2}T_p e^{-\phi}(-g)^{1/4}(-\mG)^{1/4}F_{i\omega}\mG^{\omega\delta}F_{\delta\rho}g^{\rho 0} +\nonumber \\ & &+\frac{1}{2}T_p e^{-\phi}F_{ij}g^{j\beta}F_{\beta\rho}\mG^{\rho 0}(-g)^{1/4}(-\mG)^{1/4} -\frac{1}{2}T_p e^{-\phi}F_{ij}g^{0\beta}F_{\beta\rho}\mG^{\rho j}(-g)^{1/4}(-\mG)^{1/4}= \nonumber \\ & &=-\frac{1}{2}T_p e^{-\phi}(-g)^{1/4}(-\mG)^{1/4} (g_{i\beta}-F_{i\gamma}g^{\gamma\rho}F_{\rho\beta})\mG^{\beta 0}=0 \nonumber \\ \end{eqnarray} that implies the existence of $p$ primary constraints $\mH_i$ defined as \begin{equation} \mH_i=p_M\partial_i x^M+F_{ij}\pi^j\approx 0 \ \end{equation} which are the standard spatial diffeomorphism constraints. As the next step we should find the Hamiltonian constraint. To do this we have to use crucial properties of the matrix $\mG$ that follow from its definition given in (\ref{defgeom}). Namely, it is easy to see that \begin{equation} \mF_{\mu\nu}g^{\nu\rho}\mG_{\rho\sigma}=\mG_{\mu\nu}g^{\nu\rho}\mF_{\rho\sigma} \end{equation} or in matrix notation \begin{equation}\label{mFgmG} \mF g^{-1}\mG=\mG g^{-1}\mF \ . \end{equation} Using this relation we obtain \begin{eqnarray} g\mG^{-1} \mF g^{-1}\mF=\mF\mG^{-1}\mF \nonumber \\ \end{eqnarray} which, in the end, gives the important relation \begin{eqnarray}\label{mG1} \mG^{-1}-g^{-1}=g^{-1}\mF\mG^{-1}\mF g^{-1} \ , \nonumber \\ \end{eqnarray} where we also used the fact that $\mF g^{-1}\mF=g-\mG$. Further, from (\ref{mFgmG}) we obtain \begin{eqnarray}\label{mFgmG1} \mG^{-1} \mF g^{-1}=g^{-1}\mF \mG^{-1} \ . \nonumber \\ \end{eqnarray} On the other hand, using the antisymmetry of $\mF$ together with the symmetry of $g$ and $\mG$, we get \begin{equation} g^{\mu\nu}\mF_{\nu\rho}\mG^{\rho\sigma}= -\mG^{\sigma\rho}\mF_{\rho\nu}g^{\nu\mu} \ . 
\end{equation} Now, combining this relation with (\ref{mFgmG1}), we obtain \begin{equation}\label{relanti} \mG^{\mu\nu}\mF_{\nu\rho} g^{\rho\sigma}=-\mG^{\sigma\rho}\mF_{\rho\nu}g^{\nu\mu} \ . \end{equation} Then, with the help of these results, we can simplify the expressions for the canonical momenta given in (\ref{momen}) as \begin{eqnarray}\label{PiM} \Pi_M=-T_p e^{-\phi}g_{MN}\partial_\beta x^N\mG^{\beta 0}(-g)^{1/4}(-\mG)^{1/4} \ , \quad \pi^\alpha=T_p e^{-\phi}g^{\alpha\beta}\mF_{\beta\delta}\mG^{\delta 0}(-g)^{1/4}(-\mG)^{1/4} \ . \nonumber \\ \end{eqnarray} Now we can proceed to the search for the Hamiltonian constraint. We expect that it will be quadratic in momenta, so it is natural to consider the following combination: $\Pi_M G^{MN}\Pi_N+\pi^ig_{ij}\pi^j$. Then, using (\ref{PiM}), we obtain \begin{eqnarray}\label{searchcon} \Pi_M G^{MN}\Pi_N+\pi^i g_{ij}\pi^j=-T_p^2 e^{-2\phi}\mG^{00}(-g)^{1/2}(-\mG)^{1/2} \ . \nonumber \\ \end{eqnarray} To proceed further, let us again return to the definition of $\mG$ given in (\ref{defgeom}) and write it in the form \begin{eqnarray} \mG_{\alpha\beta}=(g_{\alpha\gamma}+\mF_{\alpha\gamma})(\delta^\gamma_\beta-g^{\gamma\delta}\mF_{\delta\beta}) \nonumber \\ \end{eqnarray} or in matrix notation \begin{equation} \mG=(g+\mF)(I-g^{-1}\mF) \ . \end{equation} Taking the inverse of this relation and performing further manipulations we get \begin{equation} (I-g^{-1}\mF)\mG^{-1}=(g+\mF)^{-1} \end{equation} that implies the following relation \begin{equation} \mG^{-1}-g^{-1}\mF\mG^{-1}=(g+\mF)^{-1} \ . \end{equation} Now, since $\mG^{0\alpha}\mF_{\alpha\beta}g^{\beta 0}=0$, as follows from (\ref{relanti}) for $\mu=\sigma=0$, we obtain the important result \begin{equation} \mG^{00}=(g+\mF)^{00}=\frac{\det (g_{ij}+\mF_{ij})}{\det (g+\mF)} \ . 
\end{equation} Here we used the fact that the $00$ component of the inverse of the matrix $g+\mF$ is given by the corresponding cofactor, i.e. the determinant of the spatial block $g_{ij}+\mF_{ij}$, divided by $\det(g+\mF)$. Inserting this result into (\ref{searchcon}) we obtain the final form of the Hamiltonian constraint \begin{equation} \mH_\tau=\Pi_M G^{MN}\Pi_N+\pi^ig_{ij}\pi^j+T_p^2 e^{-2\phi}\det (g_{ij}+\mF_{ij})\approx 0 \ . \end{equation} We see that the Hamiltonian constraint has the same form as in the case of the DBI action. In summary, we find that the Hamiltonian formulation of the geometrical action for the Dp-brane consists of $p+1$ primary constraints $\mH_i,\mH_\tau$ that are first class constraints, which simply follows from the fact that they have the same form as the constraints that follow from the DBI action. Further, the requirement of the preservation of the primary constraint $\pi^0\approx 0$ implies the secondary constraint $G\equiv \partial_i \pi^i\approx 0$, again in agreement with the standard DBI action. In other words, despite the apparently different Lagrangian structures of the geometrical and DBI actions, their Hamiltonian formulations are the same. \section{Unstable D(p+1)-brane} The generalization of this approach to the case of an unstable D(p+1)-brane is straightforward. To begin with, we start with the tachyon effective action \cite{Sen:1999md,Bergshoeff:2000dq,Kluson:2000iy} \begin{equation} S=-\tau_{p+1}\int d^{p+2}\xi e^{-\phi}V(T)\sqrt{-\det \bA} \ , \end{equation} where $\bA_{\alpha\beta}=g_{\alpha\beta}+l_s^2\partial_\alpha T\partial_\beta T+ l_s^2\mF_{\alpha\beta}$, where $T$ is the tachyon field and $V(T)$ is the tachyon potential with two minima at $T_{min}=\pm \infty$, where $V(T_{min})=0$, and one local maximum at $T_{max}=0$, where $V(T_{max})=1$ \footnote{For simplicity we restrict ourselves to the case of zero NSNS two form.}. Finally, $\tau_{p+1}$ is the tension of the unstable D(p+1)-brane. 
In order to demonstrate the analogy between the tachyon and an additional target space coordinate, let us introduce the variables $Y^I=(x^M,T)$ and the generalized metric $H_{IJ}, I,J=0,\dots,10$, in the form \begin{equation} H_{IJ}=\left(\begin{array}{cc} G_{MN} & 0 \\ 0 & l_s^2 \\ \end{array}\right) \end{equation} so that $h_{\alpha\beta}=\partial_\alpha Y^IH_{IJ}\partial_\beta Y^J= \partial_\alpha x^M G_{MN}\partial_\beta x^N+l_s^2\partial_\alpha T\partial_\beta T$. Then it is easy to see that the geometrical action for the non-BPS D(p+1)-brane has the form \begin{equation}\label{unstablegeo} S=-\tau_{p+1}\int d^{p+2}\xi e^{-\phi}V(-h)^{1/4}(-\mH)^{1/4} \ , \quad \mH_{\alpha\beta}=h_{\alpha\beta}-l_s^4 F_{\alpha\gamma}h^{\gamma\delta}F_{\delta\beta} \ . \end{equation} It is clear that the Hamiltonian analysis of this D(p+1)-brane is the same as in the case of the stable Dp-brane, so we will not repeat it here. On the other hand, we would like to see that the tachyon kink solution corresponds to a stable Dp-brane. We study this problem following \cite{Erkal:2009xq}. Explicitly, the tachyon kink solution corresponds to the tachyon profile $T=f(z)$, where $z=\xi^{p+1}$ and $f(z)$ is a function with $\frac{df}{dz}>0$ for all $z$. The simplest possibility is $f(z)=z$, and hence the tachyon kink solution corresponds to a gauge fixing in the extended space-time with the metric $H_{IJ}$. Clearly, in general all world-volume fields still depend on $T$ through the inverse relation $z=f^{-1}(T)$. Further, we can set $A_z=0$ by $T$-dependent gauge transformations. Following \cite{Erkal:2009xq} and also \cite{Sen:2003tm}, we consider the situation when all world-volume fields do not depend on $T$ \footnote{For a general analysis, see \cite{Erkal:2009xq,Sen:2003tm}. Roughly speaking, it was argued in \cite{Erkal:2009xq} that $T$-dependent fluctuations are non-normalizable and hence cannot correspond to open string excitations. Rather, they should be interpreted as creating non-trivial closed string states.}. 
Let us denote the remaining world-volume coordinates as $\xi^{\balpha} , \balpha=0,1,\dots,p$, so that the matrix $h_{\alpha\beta}$ has the form \begin{equation}\label{mhkink} h_{zz}=l_s^2f'^2(z) \ , \quad h_{\balpha\bbeta}=g_{\balpha\bbeta} \ , \quad h_{\balpha z}=0 \ . \end{equation} Further, the matrix $g^{\alpha\beta}$ is equal to \begin{equation} g^{\alpha\beta}=\left(\begin{array}{cc} g^{\balpha\bbeta} & 0 \\ 0 & \frac{1}{f'^2} \\ \end{array}\right) \end{equation} so that we obtain \begin{eqnarray}\label{mHkink} & &\mH_{zz}=l_s^2f'^2 \ , \quad \mH_{z\balpha}=0 \ , \nonumber \\ & & \mH_{\balpha\bbeta}=g_{\balpha\bbeta}-l_s^4 F_{\balpha\bgamma}g^{\bgamma\bdelta} F_{\bdelta \bbeta}\equiv \mG_{\balpha\bbeta} \ . \nonumber \\ \end{eqnarray} Inserting (\ref{mhkink}) and (\ref{mHkink}) into (\ref{unstablegeo}) we get \begin{equation} S_{non}^{fixed}(T=f(z))=-\tau_{p+1}l_s\int dz V(f(z))f'(z) \int d^{p+1}\xi e^{-\phi}(-g)^{1/4}(-\mG)^{1/4} \ \end{equation} so that when we identify \begin{equation} T_p=\tau_{p+1}l_s\int dm V(m) \end{equation} we obtain the geometrical form of the action for the stable Dp-brane, which is again a nice consistency check of the tachyon condensation.
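The factorisation used in the last step can be spelled out explicitly: since by (\ref{mhkink}) and (\ref{mHkink}) both $h_{\alpha\beta}$ and $\mH_{\alpha\beta}$ are block diagonal in the $z$ and $\balpha$ directions, their determinants factorise as \begin{equation} \det h_{\alpha\beta}=l_s^2f'^2\det g_{\balpha\bbeta} \ , \quad \det \mH_{\alpha\beta}=l_s^2f'^2\det \mG_{\balpha\bbeta} \ , \end{equation} so that $(-h)^{1/4}(-\mH)^{1/4}=l_sf'(z)\,(-g)^{1/4}(-\mG)^{1/4}$, which is precisely the origin of the factor $l_sf'(z)$ in $S_{non}^{fixed}$.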
\section{Introduction} The M5-brane is an interesting and important object in M-Theory for a variety of reasons. Its dynamics are described by a six-dimensional field theory with $(2,0)$ supersymmetr
y. For multiple M5-branes this is an interacting, strongly coupled superconformal field theory. However, we currently lack a satisfactory understanding of this theory. Nevertheless, a particularly fruitful application of M5-branes involves compactifying them on a manifold to obtain lower dimensional field theories. In this way, many novel field theories have been identified as well as relations/dualities between them. Recently, we have studied null reductions of the M5-brane (a related abelian construction already appeared in \cite{Bandos:2008fr} as well as in newer work \cite{Townsend:2019ils}). In the simplest cases, this leads to the construction of novel non-abelian field theories in (4+1)-dimensions with 24 (conformal) supersymmetries \cite{Lambert:2018lgt,Lambert:2019jwi}. Due to the fact that one has fixed a particular null direction in the six-dimensional theory, the Lorentz group has been reduced from $SO(1,5)$ to $SO(4)$. However, they still admit a large bosonic spacetime symmetry, including a Lifshitz scaling, coming from the six-dimensional conformal group \cite{Lambert:2019fne}. In this paper we extend this discussion to general null reductions of the M5-brane on a curved manifold. Non-Lorentzian theories with Lifshitz scaling have received a great deal of attention, primarily from the perspective of their AdS dual geometry (for a review see \cite{Taylor:2015glc}). While some supersymmetric Lifshitz theories have been explicitly constructed (for example see \cite{Xue:2010ih,Chapman:2015wha}) these often involve higher derivative terms, as is common in condensed matter systems. In contrast, the field theories we obtain do not have higher derivatives but involve Lagrange multiplier constraints that reduce the dynamics to motion on a moduli space of anti-self-dual gauge fields \cite{Lambert:2011gb,Mouland:2019zjr}, in line with the DLCQ description of the M5-brane \cite{Aharony:1997an,Aharony:1997pm}. 
Other classes of theories without Lorentz invariance but related to String/M-Theory have recently received attention in works such as \cite{Harmark:2018cdl,Bergshoeff:2019pij,Harmark:2019zkn,Bergshoeff:2020xhv,deBoer:2020xlc,Bergshoeff:2020baa}. These more general null reductions should provide DLCQ-type descriptions of the field theories obtained by reducing M5-branes on other manifolds such as the Gaiotto theories \cite{Gaiotto:2009we}. Since there is no six-dimensional action based on non-abelian fields, the standard construction is to reduce the abelian theory and then find a suitable non-abelian extension that is compatible with supersymmetry. For example, this was performed in \cite{Linander_2012} for the case of a general spacelike circle fibration. This was then followed by \cite{cordova2017five}, who generalised this construction to include additional non-dynamical supergravity background fields. In this paper we will apply these constructions to the case of a null reduction. We will not consider the full background supergravity fields that were discussed in \cite{cordova2017five}; however, we will extend our results to backgrounds coming from fluxes in M-theory and a non-trivial connection on the normal bundle. Although conceptually similar, reduction on a null direction is technically distinct from a spatial reduction and involves some interesting features. In particular, in a spatial reduction on a coordinate $x^5$, the self-duality constraint on the closed three-form $H$ is solved by eliminating $H_{MNP}$ in terms of $F_{MN}\sim H_{MN 5}$ for $M,N,P\ne 5$. One then introduces a gauge potential for $F$ in order to formulate a Lagrangian. In contrast, in a null reduction along $x^+$, $H$ leads to self-dual and anti-self-dual two-forms $G_{ij}\sim H_{ij-}$ and $F_{ij}\sim H_{ij+}$ in the remaining spatial directions $i,j=1,2,3,4$. To construct the action we impose the self-duality of $G$ and introduce a gauge potential for $F$ off-shell.
The equations of motion then imply the anti-self-duality of $F$ along with a suitable closure condition on $G$. This paper is organised as follows. In section two we perform the general reduction of the abelian $(2,0)$ theory equations on a general spacetime with a null isometry. While the $(2,0)$ theory is based on a tensor multiplet, upon reduction we obtain vector fields. We then generalise the resulting action to a supersymmetric non-abelian gauge theory in section three. In section four we examine some special cases of the general reduction, and in section five we include couplings to background flux terms. Section six contains our conclusions and comments. Our conventions are summarised in the appendix, along with some formulae for the geometry. \section{The Abelian Dimensional Reduction} In this section we will reduce the equations of motion and supersymmetry variations of the abelian $(2,0)$ tensor multiplet on a six-dimensional manifold with metric $\hat g_{MN}$ which admits a null Killing direction $\hat{k}^M$. We will use hats to denote six-dimensional geometrical objects throughout. \subsection{The Background} Consider a fixed curved background, {\it i.e.} there is no back-reaction on the metric from the matter fields. We will further only consider six-dimensional Lorentzian manifolds which admit a null Killing vector field \begin{align} \hat k = \frac{\partial}{\partial x^{+}} \ . \end{align} In coordinates $(x^{+}, x^{-}, x^{i})$, $i \in \{1, \dots,4\}$, adapted to this isometry, it can be shown that the metric takes the general form (see also \cite{Julia_1995}) \begin{align}\label{gdef} \hat{g}_{M N} = \begin{pmatrix} 0 & -1 & -u_j \\ -1 & -2\sigma & -v_j - 2\sigma \, u_j \\ -u_i & -v_i -2\sigma \, u_i & g_{i j} -2 u_{(i}v_{j)} - 2\sigma \, u_i u_j \end{pmatrix}\ . \end{align} Here $g_{i j}$ is a Euclidean signature metric on a four-dimensional submanifold of the full six-dimensional spacetime.
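The form (\ref{gdef}) can be cross-checked symbolically. The following sketch (using sympy, and restricting to two spatial dimensions with a diagonal $g_{ij}$ purely as a simplifying assumption) verifies that its inverse is the simple matrix quoted in the text:

```python
import sympy as sp

# Illustrative check: invert the null-reduction metric \hat{g}_{MN}.
# For speed we take two spatial dimensions with g = diag(g1, g2);
# this restriction is an assumption of the sketch, not of the result.
sigma, g1, g2 = sp.symbols('sigma g1 g2', positive=True)
u = sp.Matrix(sp.symbols('u1 u2'))
v = sp.Matrix(sp.symbols('v1 v2'))
g = sp.diag(g1, g2)
ginv = g.inv()

# \hat{g}_{MN} in coordinates (x^+, x^-, x^1, x^2), as in (gdef)
ghat = sp.zeros(4, 4)
ghat[0, 1] = ghat[1, 0] = -1
ghat[1, 1] = -2*sigma
for i in range(2):
    ghat[0, 2+i] = ghat[2+i, 0] = -u[i]
    ghat[1, 2+i] = ghat[2+i, 1] = -v[i] - 2*sigma*u[i]
    for j in range(2):
        ghat[2+i, 2+j] = g[i, j] - u[i]*v[j] - u[j]*v[i] - 2*sigma*u[i]*u[j]

# Claimed inverse; spatial indices are raised with g^{ij}
u_up, v_up = ginv*u, ginv*v
ghat_inv = sp.zeros(4, 4)
ghat_inv[0, 0] = (v.T*ginv*v)[0] + 2*sigma
ghat_inv[0, 1] = ghat_inv[1, 0] = (u.T*ginv*v)[0] - 1
ghat_inv[1, 1] = (u.T*ginv*u)[0]
for i in range(2):
    ghat_inv[0, 2+i] = ghat_inv[2+i, 0] = -v_up[i]
    ghat_inv[1, 2+i] = ghat_inv[2+i, 1] = -u_up[i]
    for j in range(2):
        ghat_inv[2+i, 2+j] = ginv[i, j]

# The product reduces to the identity
assert (ghat*ghat_inv).applyfunc(sp.simplify) == sp.eye(4)
```

The same computation goes through with a general (non-diagonal) $g_{ij}$ and four spatial dimensions, only more slowly.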
All components of $\hat g_{MN}$ are allowed to depend on $x^-$ and $x^i$. The metric component $g_{+-}=-1$ has been fixed using a suitable choice of the coordinate $x^-$. This somewhat contrived choice of metric was chosen as it leads to the simpler inverse metric \begin{align} \hat{g}^{M N} = \begin{pmatrix} |v|^2 + 2 \sigma & \underline{u}\cdot \underline{v} -1 & -v^j \\ \underline{u}\cdot \underline{v} -1 & |u|^2 & - u^j \\ -v^i & -u^i & g^{ \, i j} \end{pmatrix}\ . \end{align} It is important to note that this geometry is distinct from that invoked in \cite{Bilal:1999ff}, in which a spacelike circle is infinitely Lorentz boosted. Even if limits are examined carefully in that paper, one finds that, as the boost parameter goes to zero, the length of the Killing vector is always positive. In contrast, our Killing vector has length zero, as would be expected from a null reduction. For the time being we do not consider any background fields other than the metric; in section \ref{sec:flux} off-brane fluxes are added. \subsection{Tensor Multiplet} The six-dimensional abelian $\mathcal{N} = (2,0)$ tensor multiplet contains a self-dual 3-form, \begin{align} H = \hat{\star} H, \end{align} along with five scalar fields, $X^I$, and a symplectic Majorana-Weyl spinor $\psi$. These fields transform in the trivial, fundamental, and spinor representations of the $R$-symmetry group $SO(5)$ (or equivalently $USp(4)$) respectively. The supersymmetry transformations \begin{align} \label{eq:susytrans} \begin{split} &\delta X^I = i \bar{\epsilon} \hat \Gamma^I \psi \\ &\delta H_{MNP} = 3i\partial_{[M}( \bar{\epsilon} \hat\Gamma_{NP]} \psi) \\ &\delta \psi = \hat D_M X^I \hat\Gamma^M \hat\Gamma^I \epsilon + \frac{1}{2 \cdot 3!} H_{MNP}\hat\Gamma^{MNP} \epsilon\ , \end{split} \end{align} close up to the equations of motion: \begin{align} H = \hat{\star} H\ ,\qquad \hat{d} H = 0\ ,\qquad \hat D^M\hat D_M X^I=0\ ,\qquad \hat\Gamma^M \hat D_M \psi=0\ .
\end{align} Here the supersymmetry parameter $\epsilon$ has opposite chirality under $\Gamma_{012345}$ to $\psi$; we make the choice $\Gamma_{012345}\epsilon = \epsilon$ and $\Gamma_{012345} \psi = - \psi$. \subsection{Reducing $H = \hat{\star} H$} Let us first define the (4+1)-dimensional fields: \begin{align} F_{ij} = H_{ij+}, \qquad F_{i-} = H_{i-+}, \qquad G_{ij} = H_{ij-} \ . \end{align} In a trivial geometry these three fields are the independent components of the six-dimensional 3-form $H$, and $F$ and $G$ satisfy simple (anti-)self-duality constraints. Our task here is to see the implications of the six-dimensional self-duality condition for a general background. In what follows we use the geometrical quantities associated to the four-dimensional manifold with metric $g_{ij}$. In particular we define the fields $F^{ij}$, $G^{ij}$ and $F_{-}^{i}$ to have their indices raised by $g^{\,ij}$. We also take \begin{align} \varepsilon_{+-ijkl}=\varepsilon_{ijkl}\ , \end{align} with $\varepsilon_{1234}=1$. Along with the metric $g_{ij}$, this allows us to define a four-dimensional Hodge star operator $\star$. To proceed it is convenient to work with forms: we define the one-forms $v = v_i dx^i$, $u = u_i dx^i$ and $F_{-} = F_{i-} dx^i$, as well as the two-forms $F = \frac{1}{2} F_{ij} dx^i \wedge dx^j$ {\it etc.}. We also define the 3-form $H = \frac{1}{3!}H_{ijk}dx^i\wedge dx^j \wedge dx^k$. \\ \\ Written in forms the self-duality condition on $F_-$ is \begin{align} \label{eq:F-form} F_{-} = \star(v\wedge u \wedge F_{-}) + \star(v \wedge F) + \star(u \wedge G ) - \star H\ . \end{align} Applying $\star$ allows us to solve for $H$ \begin{align} \label{eq:Hform} H = \star F_{-} + v\wedge u \wedge F_{-} + v \wedge F + u \wedge G \ . \end{align} Eliminating $H$ from the other relations we obtain two equations that depend only on $F_{-}, F, G$ along with the background fields $\sigma,u,v$ and $g$. In particular we find \begin{align} \begin{split} F = - \star \!
F + F_{-} \wedge u + \star (F_{-} \wedge u) \end{split} \nonumber\\ \begin{split} {G} = \star {G} - 2 \sigma \star \! F - F_{-} \wedge v + \star (F_{-} \wedge v) + 2\sigma \star(F_{-} \wedge u) \ . \end{split} \end{align} Defining \begin{align} \mathcal{F} &= F - F_{-} \wedge u \nonumber\\ \mathcal{G} &= G - \sigma F - F_-\wedge (v+\sigma u) \ , \end{align} these expressions simplify further to \begin{align} \begin{split} \mathcal{F} = - \star \! \mathcal{F} \end{split} \nonumber \\ \begin{split} {\cal G} = \star {\cal G}\ . \end{split} \end{align} (The anti-self-duality of $\mathcal{F}$ follows directly from the relation for $F$ above, using $\star\star=1$ on two-forms in Euclidean signature.) \subsection{Decomposing $\hat{d}H = 0$} The exterior derivative is metric independent, so the results will hold for all backgrounds. In components \begin{align} \partial_{[M}H_{NPQ]} = 0\ . \end{align} Our construction has an $x^{+}$ isometry, so all fields are independent of $x^+$. This gives an expression for each of the combinations of indices $+\! -\! ij, \ +ijk, \ -ijk, \ ijkl$ \begin{align} \partial_{[+}H_{-ij]} = 0 \quad \implies& \quad \partial_{-}F + dF_- = 0 \nonumber \\ \partial_{[+} H_{ijk]} = 0 \quad \implies& \quad dF = 0 \nonumber\\ \partial_{[-}H_{ijk]} = 0 \quad \implies& \quad d {G} = \partial_{-} H \nonumber\\ \partial_{[i} H_{jkl]} = 0 \quad \implies& \quad dH = 0\ , \end{align} where we have written the four-dimensional exterior derivative as $d$. The first and second expressions can be combined to give a simple five-dimensional Bianchi identity \begin{align} \label{eq:dF=0} \quad d_{(5)}F_{(5)} = 0 \ ,\qquad F_{(5)} = F + F_-\wedge dx^-\ . \end{align} This implies that locally there exists $(A_-, A_i)$ such that \begin{align}\label{eq:Adef} F_{ij} = \partial_{i} A_{j} - \partial_{j} A_{i}\ , \qquad F_{i -} = \partial_{i} A_{-} - \partial_{-}A_{i}\ .
\end{align} The equations for $\mathcal {G}$ and $\mathcal{F}$ become \begin{align} \begin{split} \label{eq:dH} d(\mathcal {G} + \sigma\mathcal{F} - F_{-}\wedge v ) = \partial_{-} ( \star F_{-} + v\wedge u \wedge F_{-} + v \wedge F + u \wedge (\mathcal {G} + \sigma \mathcal{F} - F_{-} \wedge v)) \end{split} \nonumber \\ \begin{split} d(\star F_{-} + v\wedge u \wedge F_{-} + v \wedge F + u \wedge (\mathcal {G} + \sigma \mathcal{F} - F_{-} \wedge v)) = 0\ . \end{split} \end{align} Using the duality properties of $\mathcal {F}$ and $\mathcal {G}$ we can rewrite these equations in component form as \begin{align} \begin{split} \label{eq:delta A} D_j \mathcal{G}^{i j} + D_j \big( \sigma \star \mathcal{F}^{ij} \big) - D_j \big( \star ( F_{-} \wedge v) ^{ij} \big) + D_{-} F^{i}_{\ -} - D_{-} \big( \star F^{ij}v_j \big) - D_{-} \big( \sigma \star \mathcal{F}^{ij} u_j \big) \\ - D_{-} \big( \mathcal {G}^{ij}u_j \big) = 0 \nonumber \end{split} \end{align} \begin{align} -D_i F^{i}_{\ -} + D_i \big( \star \! F^{ij}v_j \big) + D_i \big( \mathcal {G}^{ij} u_j \big) + D_i \big( \sigma \star \! \mathcal{F}^{ij} u_j \big) = 0\ , \end{align} respectively. \subsection{An Action} Lastly we wish to construct an action that reproduces these equations of motion, along with those of the scalars and fermions. In the latter cases a six-dimensional action already exists which can be trivially reduced to find an appropriate five-dimensional action. Somewhat remarkably the equations for $F_-, F$ and $\mathcal {G}$ can be derived from a Lagrangian density on a four-dimensional manifold with Euclidean signature, whose fields also depend on the `time' coordinate $x^-$. To this end we assume that $F_-$ and $F$ arise from a potential $(A_-,A_i)$ as in (\ref{eq:Adef}). 
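That the ansatz (\ref{eq:Adef}) automatically solves the five-dimensional Bianchi identity (\ref{eq:dF=0}) is easy to confirm symbolically. The sketch below (sympy, with hypothetical component functions $A_-$, $A_i$ depending on $(x^-,x^i)$) verifies both $dF=0$ and $\partial_-F + dF_-=0$ componentwise:

```python
import itertools
import sympy as sp

# Coordinates (x^-, x^1, ..., x^4) and an arbitrary potential (A_-, A_i)
xm = sp.Symbol('xm')                        # x^-
x = sp.symbols('x1:5')                      # x^1, ..., x^4
Am = sp.Function('Am')(xm, *x)              # A_-
A = [sp.Function('A%d' % (i + 1))(xm, *x) for i in range(4)]  # A_i

# Field strengths as in (eq:Adef)
F = [[sp.diff(A[j], x[i]) - sp.diff(A[i], x[j]) for j in range(4)]
     for i in range(4)]
Fm = [sp.diff(Am, x[i]) - sp.diff(A[i], xm) for i in range(4)]

# dF = 0: the antisymmetrised derivative \partial_{[i} F_{jk]} vanishes
for i, j, k in itertools.combinations(range(4), 3):
    assert sp.simplify(sp.diff(F[j][k], x[i]) + sp.diff(F[k][i], x[j])
                       + sp.diff(F[i][j], x[k])) == 0

# \partial_- F + dF_- = 0, using commutativity of partial derivatives
for i, j in itertools.combinations(range(4), 2):
    assert sp.simplify(sp.diff(F[i][j], xm) + sp.diff(Fm[j], x[i])
                       - sp.diff(Fm[i], x[j])) == 0
```

Both identities reduce to the commutativity of mixed partial derivatives, which is the content of the local Poincar\'e lemma used in the text.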
However, we do not impose a potential for $\mathcal {G}$ but rather impose $\mathcal{G}=\star\mathcal{G}$ \footnote{Note that this is a legitimate imposition, as self-dual tensors are an irreducible representation of the Lorentz group in even dimensions}. Some trial and error shows that the equations of motion (\ref{eq:delta A}) then arise from the Lagrangian \begin{align} \mathcal{L}_H = \frac{1}{2} \star \! F_{-} \wedge F_{-} - \frac{1}{2} \sigma \star \! \mathcal{F} \wedge \mathcal{F} + \frac{1}{2} \mathcal{F} \wedge {\mathcal G} - \frac{1}{2} F \wedge F_{-} \wedge v \ , \end{align} where \begin{align} F_{ij} &= \partial_i A_j -\partial_j A_i\nonumber\\ F_{i-} & = \partial_i A_--\partial_-A_i\nonumber\\ \mathcal{F}_{ij} & = F_{ij} + 2u_{[i}F_{j]-}\nonumber\\ \mathcal{G}_{ij} &= \frac12 \sqrt{g}\varepsilon_{ijkl}\mathcal{G}^{kl}\ , \end{align} and the $k,l$ indices are raised with respect to $g^{ij}$. Variation with respect to $\mathcal{G}$ immediately gives the anti-self-dual condition ${\cal F}=-\star{\cal F}$. On the other hand, varying $A_{i}$ and $A_{-}$ gives the two equations in (\ref{eq:delta A}) respectively. Inclusion of the scalars and fermions is easier, as there is a Lagrangian formulation for the free conformal case in any dimension: \begin{align} \mathcal{L}_{matter} = -\sqrt{-\hat g}\left(\frac12\hat g^{MN} \partial_M X^I\partial_N X^I +\frac{1}{8}\frac{d-2}{d-1} \hat R X^IX^I- \frac i2 \bar\psi\hat\Gamma^M \hat D_M\psi\right) \ . \end{align} Performing the reduction by assuming $\partial_+=0$, and inserting $d=6$, we find \begin{align} \mathcal{L}_{matter} = &- \sqrt{g} \left(\frac12\partial_i X^I \partial^i X^I - \frac12 |u|^2 \partial_{-}X^I \partial_{-} X^I + u^i \partial_i X^I \partial_{-}X^I - \frac{1}{10} \hat{R} X^I X^I\right. \nonumber\\ &\qquad \quad -\left.
\frac{i}{2} \bar{\psi} \Gamma^{-}\hat{D}_{-} \psi - \frac{i}{2} \bar{\psi}\hat \Gamma^{i}\hat{D}_{i} \psi -\frac{1}{2}i\bar\psi\hat M\psi \right)\ , \end{align} where \begin{align} \hat M &= \frac14\hat \Gamma^{+} \hat\omega_{+ MN}\hat \Gamma^{MN}\nonumber\\ & = \frac14\partial_-u_i \hat \Gamma^{+} \hat\Gamma^{-i} + \frac14 \partial_iu_j\hat \Gamma^{+} \hat\Gamma^{ij} \ . \end{align} Note that we keep the fermionic terms and $\hat R$ in their six-dimensional form. In principle these can be computed from the expressions (\ref{eq:viel}), (\ref{SpinC}) and (\ref{GammaRel}) found in the appendix. However, expanding everything out in full detail for a general background leads to rather unwieldy expressions. Rather, we will provide more explicit expressions in various special cases below. It is helpful to introduce \begin{align} \nabla_i = \partial_i -u_i \partial_-\ . \end{align} This derivative generally has torsion: \begin{align} \nabla_{[i} \nabla_{j]} X^I = - 2 \nabla_{[i}u_{j]} \partial_{-}X^I\ . \end{align} One also finds that \begin{align} \label{eq:bianchi} \nabla_{[i}\mathcal{F}_{jk]} = F_{- [i}(\partial_{j} u_{k]} - u_{j} \partial_{-} u_{k]}) \ . \end{align} Putting all these together we can write the full abelian action as \begin{align} \begin{split}\label{AAction} S = \frac{1}{{g^2_{\text{YM}}}}\int d x^{-} d^4 x \sqrt{g}\Big\{ \frac{1}{2} F_{i -} F^{i}_{-} - \frac{1}{4} \sigma \mathcal{F}_{ij}\mathcal{F}^{ij} + \frac{1}{2} {\cal G}_{ij}\mathcal{F}^{ij} - \frac{1}{2\sqrt{g}} \varepsilon^{ijkl} F_{i-} v_j F_{kl}\\ - \frac{1}{2} {\nabla}_{i} X^I {\nabla}^i X^I - \frac{1}{10}\hat{R} X^I X^I + \frac{1}{2}i \bar{\psi} \Gamma^{-}\hat{D}_{-} \psi + \frac{1}{2}i \bar{\psi} \Gamma^{i}\hat{\nabla}_{i} \psi + \frac{1}{2}i\bar\psi\hat M\psi \Big\}\ . \end{split} \end{align} \section{Supersymmetry and Non-Abelian Generalization} Next we want to show that the action (\ref{AAction}) is supersymmetric.
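Before doing so, we note that the claim in the previous subsection — that varying the self-dual field $\mathcal{G}$ enforces $\mathcal{F}=-\star\mathcal{F}$ — can be checked componentwise. The sketch below (sympy, flat Euclidean $g_{ij}=\delta_{ij}$, an illustrative assumption) parametrises the self-dual $\mathcal{G}$ by an unconstrained two-form $P$ via $\mathcal{G}=P+\star P$ and shows that stationarity of the coupling $\frac12\mathcal{G}_{ij}\mathcal{F}^{ij}$ in $P$ forces the self-dual part of $\mathcal{F}$ to vanish:

```python
import itertools
import sympy as sp
from sympy import LeviCivita

# Independent i<j components of the two-forms F (for \mathcal{F}) and of
# an unconstrained two-form P parametrising the self-dual \mathcal{G}.
pairs = list(itertools.combinations(range(4), 2))
Fc = {p: sp.Symbol('F%d%d' % p) for p in pairs}
Pc = {p: sp.Symbol('P%d%d' % p) for p in pairs}

def comp(c, i, j):
    """Antisymmetric components from the independent i<j entries."""
    if i == j:
        return sp.Integer(0)
    return c[(i, j)] if i < j else -c[(j, i)]

def star(c, i, j):
    """Flat Hodge dual on two-forms: (star c)_ij = (1/2) eps_ijkl c_kl."""
    return sp.Rational(1, 2) * sum(LeviCivita(i, j, k, l) * comp(c, k, l)
                                   for k in range(4) for l in range(4))

# Self-dual G = P + star P, and the coupling (1/2) G_ij F^ij = sum_{i<j} G_ij F_ij
G = {p: comp(Pc, *p) + star(Pc, *p) for p in pairs}
L = sum(G[p] * Fc[p] for p in pairs)

# dL/dP_ij = F_ij + (star F)_ij, so stationarity gives F = -star F
for p in pairs:
    assert sp.expand(sp.diff(L, Pc[p]) - (Fc[p] + star(Fc, *p))) == 0
```

The general-metric statement in the text follows from the same projection argument, with $\varepsilon_{ijkl}$ replaced by the curved-space density.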
To this end we assume that there exists a solution to the conformal Killing spinor equation \begin{align} \hat D_M\epsilon = \hat\Gamma_M\eta \ , \end{align} with $\partial_+\epsilon=0$. In particular this implies \begin{align}\label{econ} \hat{D}_{+} \epsilon = \frac{1}{4} \hat{\omega}_{+MN}\hat{\Gamma}^{MN} \epsilon = \hat{\Gamma}_{+} \eta\ , \end{align} which is a further condition that we must impose on the geometry. As it stands the action (\ref{AAction}) is not invariant under the transformations that follow directly from (\ref{eq:susytrans}). One problem is that the variation $\delta \mathcal{G}_{ij}$ obtained from (\ref{eq:susytrans}) is not self-dual off-shell. Thus, we must adjust the algebra in a way that ensures $\delta \mathcal{G}_{ij}$ is self-dual. A deeper issue is that although we impose the isometry $\partial_+\psi=0$, this does not imply that $\hat{D}_{+}\psi=0$. For the bosonic fields, this distinction does not cause a problem as neither $X^I$ nor $H_{MNP}$ couples to the spacetime connection (due to the fact that $H_{MNP}$ is anti-symmetric). But for $\psi$ this leads to the Scherk-Schwarz-like mass term $\frac{i}{2}\bar\psi\hat M\psi$ in (\ref{AAction}). On-shell this is also not a problem as $\delta H_{MNP}$ in (\ref{eq:susytrans}) contains terms involving $\hat D_+\psi $ which lead to the closure of the algebra and invariance of the equations of motion. However, we find that the $\bar \psi\hat M\psi $ term can only be made supersymmetric in general by modifying the variation of $F_{-i}$ and $F_{ij}$ in a way that means they are no longer closed. This in turn implies that a suitable expression for the supersymmetry variation of the gauge field cannot be defined. Since the existence of such a gauge field was crucial for the construction of the action, having no definable variation is not tenable.
Alternatively, one might question why we start with the supersymmetry algebra (\ref{eq:susytrans}) and not simply \begin{align} \begin{split} &\delta X^I = i \bar{\epsilon} \hat \Gamma^I \psi \\ &\delta B_{MN} = 2i \bar{\epsilon} \hat\Gamma_{MN} \psi \\ &\delta \psi = \hat D_M X^I \hat\Gamma^M\hat\Gamma^I \epsilon + \frac{1}{2 \cdot 2!} \partial_MB_{NP}\hat\Gamma^{MNP} \epsilon\ , \end{split} \end{align} identify $H = dB$ and impose $H=\hat\star H$ as an equation of motion. However, in this case one finds that $G_{ij} = 2\partial_{[i} B_{j]-} + \partial_-B_{ij}$ and hence imposing an off-shell self-duality constraint on $\mathcal{G}_{ij}$ and $\delta \mathcal{G}_{ij}$ becomes non-trivial. Thus, to obtain a supersymmetric action after reduction on $x^+$ we find ourselves in a balancing act of finding off-shell expressions for $\delta A_-,\delta A_i$ and $\delta \mathcal{G}_{ij}=\star \delta \mathcal{G}_{ij}$ when $\hat D_+\psi\ne0$. \subsection{Correcting $\delta\mathcal {G}$} The next problem is that $\delta\mathcal {G}$ is not self-dual off-shell, but to write the action we require that $\mathcal {G}$ is self-dual. A short calculation shows that \begin{align} \delta \mathcal{G}_{ij} - \star \delta \mathcal{G}_{ij} = i \bar{\epsilon} \Gamma_{-} \Gamma_{ij} E(\psi)\ , \end{align} where $E(\psi)$ denotes the fermion equation of motion. Therefore, we simply shift $\delta \mathcal{G}_{ij} \longrightarrow \delta'\mathcal{G}_{ij} = \delta \mathcal{G}_{ij} - \frac{1}{2} i \bar{\epsilon} \Gamma_{-} \Gamma_{ij} E(\psi) $, resulting in \begin{align} \delta' \mathcal{G}_{ij} - \star \delta' \mathcal{G}_{ij} = \delta \mathcal{G}_{ij} - \frac{1}{2} i \bar{\epsilon} \Gamma_{-} \Gamma_{ij} E(\psi) - \star \big( \delta \mathcal{G}_{ij} - \frac{1}{2} i \bar{\epsilon} \Gamma_{-} \Gamma_{ij} E(\psi) \big ) = 0\ , \end{align} relabelling $\delta' \mathcal{G}_{ij}$ to $\delta \mathcal{G}_{ij}$ gives us a self-dual $\delta\mathcal{G}$. 
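Note that this shift is simply a projection onto the self-dual part: using the identity above together with $\star\star = 1$ on two-forms in four Euclidean dimensions, \begin{align} \delta' \mathcal{G}_{ij} = \delta \mathcal{G}_{ij} - \frac{1}{2}\big(\delta \mathcal{G}_{ij} - \star\delta \mathcal{G}_{ij}\big) = \frac{1}{2}\big(\delta \mathcal{G}_{ij} + \star\delta \mathcal{G}_{ij}\big)\ , \end{align} which is manifestly self-dual for any $\delta\mathcal{G}_{ij}$; the fermion equation of motion enters only through the identification $\delta \mathcal{G}_{ij} - \star \delta \mathcal{G}_{ij} = i \bar{\epsilon} \Gamma_{-} \Gamma_{ij} E(\psi)$.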
\\ \\ Next the $\sigma\mathcal{F}_{ij} \mathcal{F}^{ij}$ term, not present in the flat theory, must be accounted for in the supersymmetry transformations. This leads to a variation of the form \begin{align}\label{newdL} \delta {\cal L} = -\frac12 \sigma \mathcal{F}^{ij}\delta \mathcal{F}_{ij} \ . \end{align} We must use properties of $\mathcal{F}_{ij}$ to shift $\delta \mathcal{G}_{ij}$ in such a way as to cancel the effects of this new term, whilst ensuring $\delta \mathcal{G}_{ij}$ remains self-dual. \\ \\ It is useful to note that a fermionic term of definite duality, {\it e.g.} $\bar{\epsilon} \Gamma_{+} \Gamma_{ij} \psi$, can be used to build other terms of either the same or opposite duality (see Appendix A for the origin of these dualities). For instance, inserting an additional $\Gamma_{k}$ will result in either a term of the same duality, $\bar{\epsilon} \Gamma_{+} \Gamma_{k} \Gamma_{ij} \psi$, or of the opposite duality, $\bar{\epsilon} \Gamma_{+} \Gamma_{ij} \Gamma_{k} \psi$. With this in mind we choose the shift \begin{align} \label{eq:Gshift} \delta \mathcal{G}_{ij} \longrightarrow \delta \mathcal{G}_{ij} + i \bar{\epsilon} \sigma \Gamma_{+} \Gamma_{ij} \Gamma_{k} \hat{\nabla}^{k} \psi \ , \end{align} which is self-dual by construction. A simple Gamma matrix manipulation shows the overall change to $\delta \mathcal{L}$ is \begin{align} \delta \mathcal{L} \longrightarrow \delta \mathcal{L} + \frac{1}{2} \mathcal{F}^{ij}( i\bar{\epsilon} \sigma \Gamma_{+}\Gamma_{ijk} \hat{\nabla}^{k} \psi + 2i \bar{\epsilon} \sigma \Gamma_{+[i} \hat{\nabla}_{j]} \psi )\ . \end{align} The last term cancels (\ref{newdL}), so we require the first term to vanish for supersymmetry. This is trivially true if $\sigma = 0$. Looking at the form of (\ref{eq:bianchi}) for $\mathcal{F}$ under $\nabla$, the first term is a total derivative if $du - u \wedge \partial_{-} u = 0$ and $\partial_i\sigma=0$.
These three possibilities: $\sigma=0$ or $du - u \wedge \partial_{-} u = 0$ and $\partial_i\sigma=0$ or $\Gamma_+\epsilon=0$, arise in the two natural cases studied below. Note that $\delta \mathcal{G}_{ij}$ also has a term proportional to $\eta$, to account for terms arising from integration by parts. \\ \\ Our corrected supersymmetry transformations read \begin{align} \begin{split} &\delta X^I = i \bar{\epsilon}\Gamma^{I} \psi \\ &\delta A_i \; = - i \bar{\epsilon}(\Gamma_{+-}u_i + \Gamma_{+i}) \psi \\ &\delta A_{-} = - i \bar{\epsilon} \Gamma_{+-} \psi \\ &\delta \mathcal{G}_{ij} = - \frac{1}{2} i \bar{\epsilon} \Gamma_{+} \Gamma_{-} \Gamma_{ij} \hat{D}_{-} \psi - \frac{1}{2 }i \bar{\epsilon} \Gamma_{-}\Gamma^{k} \Gamma_{ij} \hat{\nabla}_{k} \psi + i \bar{\epsilon} \sigma \Gamma_{+} \Gamma_{ij} \Gamma^{k} \hat{\nabla}_{k} \psi - 3 i \bar{\eta}\Gamma_{-} \Gamma_{ij} \psi \\ & \delta {\psi} \ \ \: \! = -F_{i-} \Gamma^{+-i}{\epsilon}+ \frac{1}{4} \mathcal{F}_{ij} \Gamma^{+ij}{\epsilon} + \frac{1}{4}\mathcal{G}_{ij} \Gamma^{-ij}{\epsilon} + \Gamma^{-} \Gamma^I {D}_{-}X^I {\epsilon} + \Gamma^i \Gamma^I \hat{\nabla}_i X^I{\epsilon} - 4 X^I \Gamma^I{\eta} \ . \end{split} \end{align} Again we have kept many of the fermionic terms in their six-dimensional form for notational simplicity. With these supersymmetry transformations we find that the action (\ref{AAction}) is invariant up to terms arising from the $\bar\psi\hat M\psi$ term, and terms from the shift to $\delta G$. 
In other words we find $\delta S=0$ if \begin{align}\label{Mconstraint} \delta\bar \psi \hat M\psi =0 \end{align} and \begin{align}\label{OtherConstraints} \sigma &= 0 \nonumber\\ \text{or} \quad du - u \wedge \partial_{-} u &= 0\ {\rm and}\ \partial_i\sigma=0\\ \text{or} \quad du - u \wedge \partial_{-} u &= 0\ {\rm and}\ \Gamma_+\epsilon=0\ .\nonumber \end{align} The implications of these constraints are explored in section \ref{sec:M}, and as we will see there is a remarkable amount of redundancy between them. \subsection{Non-Abelian Theory} Our next task is to find a non-abelian extension of the abelian action found above which remains supersymmetric. After some trial and error we find that, assuming (\ref{Mconstraint}) and (\ref{OtherConstraints}) hold, the non-abelian extension is \begin{align} \begin{split} S = \frac{1}{{g^2_{\text{YM}}}}\text{tr} \int d x^{-} d^4 x \sqrt{g}\Big\{& \frac{1}{2} F_{i -} F^{i}_{-} - \frac{1}{4} \sigma \mathcal{F}_{ij}\mathcal{F}^{ij} + \frac{1}{2} \mathcal{G}_{ij}\mathcal{F}^{ij} - \frac{1}{2\sqrt{g}} \varepsilon^{ijkl} F_{i-} v_j F_{kl} \\ & - \frac{1}{2} {\nabla}_{i} X^I {\nabla}^i X^I - \frac{1}{10}\hat{R} X^I X^I+\frac{i}{2}\bar\psi\hat M\psi \\ &+ \frac{1}{2}i \bar{\psi} \Gamma^{-}\hat{D}_{-} \psi + \frac{1}{2}i \bar{\psi} \Gamma^{i}\hat{\nabla}_{i} \psi + \frac{1}{2} \bar{\psi} \Gamma_{+} \Gamma^I \big [ X^I, \psi \big] \Big\}\ , \end{split} \end{align} where all the fields now live in the adjoint of some gauge group.
The supersymmetry transformations are \begin{align} \begin{split} &\delta X^I = i \bar{\epsilon}\Gamma^{I} \psi \\ &\delta A_i \; = - i \bar{\epsilon}(\Gamma_{+-}u_i + \Gamma_{+i}) \psi \\ &\delta A_{-} = - i \bar{\epsilon} \Gamma_{+-} \psi \\ &\delta \mathcal{G}_{ij} = - \frac{1}{2} i \bar{\epsilon} \Gamma_{+} \Gamma_{-} \Gamma_{ij} \hat{D}_{-} \psi - \frac{1}{2 }i \bar{\epsilon} \Gamma_{-}\Gamma^{k} \Gamma_{ij} \hat{\nabla}_{k} \psi + i \bar{\epsilon} \sigma \Gamma_{+} \Gamma_{ij} \Gamma^{k} \hat{\nabla}_{k} \psi \\ & \qquad \quad -\frac{1}{2}i \bar{\epsilon} \Gamma_+\Gamma_- \Gamma_{ij} \Gamma^{I} \big[ X^I, \psi \big] - 3 i \bar{\eta}\Gamma_{-} \Gamma_{ij} \psi \\ & \delta {\psi} \ \ \: \! = -F_{i-} \Gamma^{+-i}{\epsilon}+ \frac{1}{4} \mathcal{F}_{ij} \Gamma^{+ij}{\epsilon} + \frac{1}{4}\mathcal{G}_{ij} \Gamma^{-ij}{\epsilon} + \Gamma^{-} \Gamma^I {D}_{-}X^I {\epsilon} + \Gamma^i \Gamma^I \hat{\nabla}_i X^I{\epsilon} \\ & \qquad \quad + \frac{i}{2} \Gamma_{+} \Gamma^{IJ} \big [X^I, X^J \big]{\epsilon} - 4 X^I \Gamma^I{\eta} \ , \end{split} \end{align} where again we have left $\hat R$ and the fermion derivatives in their six-dimensional form. \subsection{Twisting} We can also introduce a non-zero connection on the R-symmetry of the form \begin{align} \hat {\cal D}_M X^I &= \hat \partial_M X^I + \hat {A}_M(X^I)\nonumber\\ \hat {\cal D}_M\psi & = \hat D_M\psi + \frac14 \hat \Omega_M^{IJ}\Gamma^{IJ}\psi\ , \end{align} and similarly for $\hat {\cal D}_M\epsilon$. This will allow us to introduce a twisting of the normal bundle. Here $\hat A_M$ acts on $X^I$ in a representation of some subgroup $Q$ of $SO(5)$ and $\hat \Omega^{IJ}_M$ provides a spinor embedding of $Q$ into $Spin(5)$.
Since this modification only affects the dynamics through derivatives of the scalars and fermions, we can see its effect by modifying the matter part of the six-dimensional action to \begin{align} S_{matter} = {\rm tr}\int d^6x \sqrt{-\hat g}\left( -\frac12 {\cal D}_M X^I {\cal D}^M X^I + \frac{i}{2}\bar\psi\hat\Gamma^M{\cal D}_M\psi -\frac{1}{10}\hat R X^IX^I - \frac{1}{2} T^{IJ}X^IX^J\right)\ , \end{align} where $T^{IJ}$ is an invariant tensor of $Q$. This modification leads to \begin{align} \delta S_{matter} = {\rm tr}\int d^6x \sqrt{-\hat g}\Big(\frac{i}{10}\bar\psi \Gamma^{IJK}\Gamma^{MN}\epsilon \hat {\cal R}_{MN}{}^{JK} X^I &+ \frac{3i}{10}\bar\psi \Gamma^I\Gamma^{MN}\epsilon \hat {\cal R}_{MN}{}^{IJ} X^J\nonumber\\ & - iT^{IJ}\bar\psi \Gamma^IX^J\Big)\ , \end{align} where $\hat {\cal R}_{MN}{}^{IJ}$ is the curvature of $\hat \Omega^{IJ}_M$. Thus, to obtain a supersymmetric reduction we must ensure $\hat {\cal D}_M\epsilon=\hat\Gamma_M\eta$, $\partial_+\epsilon=0$ and arrange for suitable choices of curvature and $ T^{IJ}$ so that the terms in $\delta S_{matter}$ cancel. Indeed, the usual role of twisting is to allow for solutions to $\hat {\cal D}_M\epsilon=0$ on manifolds with non-vanishing curvature. For example, in the case of a Riemann surface along $x^3,x^4$ with normal directions $X^6,X^{7}$ the first term vanishes and we can arrange to cancel the last two by taking \begin{align} T^{67} = \mp\frac{3}{5}\hat {\cal R}_{34}{}^{67}\ , \end{align} and projecting on to spinors with $\hat \Gamma_{34}\hat\Gamma^{67}\epsilon =\pm \epsilon$, where the sign is chosen to correspond to solutions of $\hat {\cal D}_M\epsilon=0$. \section{Examples} In the previous section we constructed the non-abelian extension of the reduced M5-brane equations and their supersymmetry transformations. We left the fermion terms in a six-dimensional form as the complete expression in full generality is quite complicated and unenlightening. 
In this section we will evaluate some general classes of examples explicitly. \subsection{Obstruction from $\hat M$} \label{sec:M} In order to obtain a supersymmetric reduction we require in addition that (\ref{Mconstraint}), {\it i.e.} $\delta\bar\psi\hat M\psi=0$, is satisfied. Further, the condition (\ref{econ}) ensures that \begin{align}\label{e+conditions} \frac14\partial_-u_i\Gamma^{-i}\epsilon_- + \frac14(\partial_iu_j-u_i\partial_-u_j)\Gamma^{ij}\epsilon_+ & = \Gamma_+\eta\nonumber\\ \partial_iu_j\Gamma^{ij}\epsilon_- & = 0\ . \end{align} We do not propose to give the general solutions to these conditions, which place various restrictions on both $\epsilon$ and the background fields $\sigma,u,v$. For example, if $du$ is not anti-self-dual then the second equation implies that $\epsilon_-=0$. Since there are no mass terms for the scalars (beyond the usual conformal coupling to the curvature), a physically well-motivated class of backgrounds that ensures (\ref{Mconstraint}) consists of those for which there is also no mass term for the fermions: \begin{align} \bar\psi\hat M\psi =0\ . \end{align} This leads to the following conditions on the background fields \begin{align} du - u\wedge \partial_-u & =-\star(du - u\wedge \partial_-u)\nonumber\\ \partial_-u & = -2i_v(du - u\wedge \partial_-u) \nonumber\\ \sigma(du - u\wedge \partial_-u) &= \frac12 (1-\star)(v\wedge \partial_-u)\ , \end{align} where $i_v(\cdot)$ denotes contraction with $v$. We also had a further constraint with two choices, arising from our requirement to shift $\delta G$ only by self-dual terms. Recall \begin{align} \sigma = 0 \quad \text{or} \quad du - u \wedge \partial_{-} u = 0 \ . \end{align} There are two natural solutions to these constraints:\footnote{Note that in case 2 we could consider the weaker conditions $du=0$ and $\partial_-u=0$.
But this implies $u=df$ in which case we can set $u=0$ by a diffeomorphism $x^-\to x^-+f$.} \begin{align} {\rm case }\ &1: u\ne 0\ ,\partial_{-}u =0 \quad\implies \quad v = \sigma = 0\ ,\ du= -\star du \nonumber\\ {\rm case }\ &2: u = 0\ ,\ v ,\sigma \ne 0\ .\end{align} Therefore, from (\ref{e+conditions}) we find \begin{align} {\rm case\ 1}&: \epsilon_-\ne 0\quad \eta = - \frac18\partial_iu_j\Gamma_-\Gamma^{ij}\epsilon_+ \nonumber\\ {\rm case\ 2}&: \eta=0\ . \end{align} In what follows we will only focus on these two cases so that we can be as explicit as possible. We emphasize that other solutions to the constraints (\ref{econ}) and (\ref{Mconstraint}) might also be possible. \subsection{Case 1: $\partial_{-}u = v = \sigma = 0\quad du= -\star du$} Here the action is \begin{align} \begin{split} S = \frac{1}{{g^2_{\text{YM}}}}\int d x^{-} d^4 x \sqrt{g}\Big\{ \frac{1}{2} F_{i -} F^{i}{}_{-} + \frac{1}{2} \mathcal{G}_{ij}\mathcal{F}^{ij} - \frac{1}{2} \nabla_{i} X^I \nabla^i X^I - \frac{1}{10}\hat{R} X^I X^I \\ +\frac{1}{2}i\bar{\psi}\Gamma^{-}D_{-} \psi + \frac{1}{2}i \bar{\psi} \Gamma^{i} \nabla_i \psi - \frac{1}{4} e^{\underline{i}}_{\ [i} \partial_{-}e_{j]\underline{i}} \bar{\psi}(\Gamma^{-} - u_k \Gamma^k) \Gamma^{ij} \psi \\ + \frac{1}{2} \bar{\psi} \Gamma_{+} \Gamma^{I} \big [ X^I, \psi \big ] \Big\}, \end{split} \end{align} which is invariant under \begin{align} \begin{split} &\delta X^I = i \bar{\epsilon}\Gamma^{I} \psi \\ &\delta A_i \; = - i \bar{\epsilon}(\Gamma_{+-}u_i + \Gamma_{+i}) \psi \\ &\delta A_{-} = - i \bar{\epsilon} \Gamma_{+-} \psi \\ &\delta \mathcal{G}_{ij} = - \frac{1}{2} i \bar{\epsilon} \Gamma_{+} \Gamma_{-} \Gamma_{ij} D_{-} \psi - \frac{1}{2 }i \bar{\epsilon} \Gamma_{-}\Gamma^{k} \Gamma_{ij}( D_{k} - u_k D_{-}) \psi - \frac{1}{2}i \partial_{-} g_{kl} \bar{\epsilon} \Gamma^k \Gamma_{ij} \Gamma^l \psi_{-} \\ & \qquad \quad \, - \frac{1}{4}i e^{\underline{i}}_{\ [k} \partial_{-} e_{l] \underline{i}} \bar{\epsilon} \Gamma_{ij} \Gamma^{kl} 
\psi_{+} - \frac{1}{8}i e^{\underline{i}}_{\ [k} \partial_{-} e_{l] \underline{i}} u_p \bar{\epsilon} \Gamma^p \Gamma_{ij} \Gamma^{+} \Gamma^{kl} \psi - 3 i \bar{\eta}\Gamma_{-} \Gamma_{ij} \psi \\ & \qquad \quad \, -\frac{1}{2}i \bar{\epsilon} \Gamma_+\Gamma_- \Gamma_{ij} \Gamma^{I} \big[ X^I, \psi \big] \\ & \delta {\psi} \ \ \: \! = -F_{i-} \Gamma^{+-i}{\epsilon}+ \frac{1}{4} \mathcal{F}_{ij} \Gamma^{+ij}{\epsilon} + \frac{1}{4}\mathcal{G}_{ij} \Gamma^{-ij}{\epsilon} + \Gamma^{-} \Gamma^I {D}_{-}X^I {\epsilon} + \Gamma^i \Gamma^I {\nabla}_i X^I{\epsilon} \\ & \qquad \quad \, + \frac{i}{2} \Gamma_{+} \Gamma^{IJ} \big [X^I, X^J \big]{\epsilon} - 4 X^I \Gamma^I{\eta} \ . \end{split} \end{align} For brevity we have left the six-dimensional Ricci scalar unexpanded, for completeness in terms of four-dimensional objects only this is \begin{align} \begin{split} \hat{R} = R &- \frac{1}{2}g^{ij}\big( \partial_{-}^2 g_{ij} + \frac{1}{2}|u|^2 g^{kl} \partial_{-}g_{ik} \partial_{-}g_{jl} - g^{kl} \partial_{-} g_{ik} u_m \gamma^m_{\ jl}\big) \\ &-u^i \big( \partial_j g^{jk} \partial_{-} g_{ki} + g^{jk}\partial_{-}g_{ik} \gamma^{l}_{\ kl} - g^{jk} \partial_{-} g_{kl} \gamma^{l}_{\ ji} - \partial_{-} \gamma^{j}_{\ ij} + \frac{1}{2}\partial_i (g^{jk}\partial_{-} g_{jk}) \big), \end{split} \end{align} with $ \gamma^{i}_{ \ jk}$ the Christoffel symbols of the 4d metric. In the specific case of this metric being independent of $x^{-}$ this reduces to \begin{align} \hat{R} = R\ . 
\end{align} \subsection{Case 2: $u=0$} \begin{align} \begin{split} S = \frac{1}{{g^2_{\text{YM}}}}\int d x^{-} d^4 x \sqrt{g}\Big\{ \frac{1}{2} F_{i -} F^{i}{}_{-} - \frac{1}{4} \sigma F_{ij}F^{ij} + \frac{1}{2} {\mathcal G}_{ij}F^{ij} - \frac{1}{2\sqrt{g}} \varepsilon^{ijkl} F_{i-} v_j F_{kl} \\ - \frac{1}{2} D_{i} X^I D^i X^I + \frac{1}{2} i \bar{\psi} \Gamma^{-}D_{-} \psi + \frac{1}{2} i \bar{\psi} \Gamma^{i} D_{i} \psi - \frac{1}{4}(\partial_{[i} v_{j]} + e^{\underline{i}}_{\ [i} \partial_{-}e_{j]\underline{i}}) \bar{\psi} \Gamma^{-ij} \psi \\ + \frac{1}{2} \bar{\psi} \Gamma_{+} \Gamma^{I} \big [ X^I, \psi \big ] \Big \}, \end{split} \end{align} since $u$ is now zero $\mathcal{F} = F$. Note also that since $\eta = 0$, we have $\hat D_M\epsilon=0$ and hence $\hat R=0$. This action is invariant under the following transformations \begin{align} \begin{split} &\delta X^I = i \bar{\epsilon}\Gamma^{I} \psi \\ &\delta A_i \; = - i \bar{\epsilon} \Gamma_{+i} \psi \\ &\delta A_{-} = - i \bar{\epsilon} \Gamma_{+-} \psi \\ &\delta \mathcal{G}_{ij} = - \frac{1}{2} i \bar{\epsilon} \Gamma_{+} \Gamma_{-} \Gamma_{ij} D_{-} \psi + \frac{1}{2}i (\partial_{-} v_k - \partial_{k} \sigma) \bar{\epsilon} \Gamma_{ij} \Gamma^{-k} \psi - \frac{1}{8} i (\partial_k v_l - e^{\underline{i}}_{\ k} \partial_{-} e_{\underline{i} l}) \bar{\epsilon} \Gamma_{ij} \Gamma^{kl} \Gamma_{+} \Gamma_{-} \psi \\ & \qquad \quad - \frac{1}{2 }i \bar{\epsilon} \Gamma_{-}\Gamma^{k} \Gamma_{ij} D_{k} \psi - \frac{1}{4} i (\partial_k v_l - \frac{1}{2} \partial_{-} g_{kl} ) \bar{\epsilon} \Gamma^{k}\Gamma_{ij} \Gamma^{l} \Gamma_{-} \Gamma_{+} \psi + i \bar{\epsilon} \sigma \Gamma_{+} \Gamma_{ij} \Gamma^{k} D_{k} \psi \\ & \qquad \quad -\frac{1}{2}i \bar{\epsilon} \Gamma_+\Gamma_- \Gamma_{ij} \Gamma^{I} \big[ X^I, \psi \big] \\ & \delta {\psi} \ \ \: \! 
= -F_{i-} \Gamma^{+-i}{\epsilon}+ \frac{1}{4} {F}_{ij} \Gamma^{+ij}{\epsilon} + \frac{1}{4}\mathcal{G}_{ij} \Gamma^{-ij}{\epsilon} + \Gamma^{-} \Gamma^I {D}_{-}X^I {\epsilon} + \Gamma^i \Gamma^ID_i X^I{\epsilon} \\ & \qquad \quad + \frac{i}{2} \Gamma_{+} \Gamma^{IJ} \big [X^I, X^J \big]{\epsilon} \ . \end{split} \end{align} However, we remind the reader that, due to (\ref{OtherConstraints}), the action is only invariant under $\epsilon_+\ne 0$ if $\sigma$ is independent of $x^i$. \section{Flux Terms} \label{sec:flux} In \cite{cordova2017five} the reduced M5-brane action is coupled to background supergravity fields such as a non-zero M-theory 4-form $\hat G_{\mu\nu\rho\sigma}$.\footnote{The authors of \cite{cordova2017five} use a $USp(4)$ notation where the flux terms are denoted by $S^{mn}$ and $T^{mn}_{ab}$ with $m,n=1,2,3,4$ and $a,b=0,1,2,3,4,5$.} The presence of such a flux leads to Myers-like terms in the M5-brane effective action. In addition the fluxes modify the Killing spinor condition to: \begin{align}\label{MKSE} 0=\hat D_\mu \epsilon + \frac{1}{288}\left(\hat G_{\nu\lambda\rho\sigma}\hat \Gamma^{\nu\lambda\rho\sigma}{}_\mu+ 8\hat G_{\mu\nu\lambda\rho}\hat \Gamma^{\nu\lambda\rho}\right)\epsilon\ . \end{align} We need to find fluxes that are compatible with the condition $\partial_+\epsilon=0$. In particular, applying the condition $\partial_+\epsilon=0$ to (\ref{MKSE}) for the choice $\mu=+$ leads to a purely algebraic constraint. For simplicity we will restrict our attention here to cases where this constraint is trivial: {\it i.e.} $\hat D_+\epsilon=0$ and there is no contribution in (\ref{MKSE}) from the fluxes for $\mu=+$. Non-trivial cases arise in case 1 and require a cancellation between $\hat D_+\epsilon$ and the fluxes or twisting (perhaps including additional restrictions on $\epsilon$). These are better addressed on a case-by-case basis rather than in our general discussion.
Thus, we restrict to case 2 ($u=0$), where $\hat D_+\epsilon=\partial_+\epsilon=0$. From the form of the above Killing spinor equation, it is easy to see that we need only consider constant fluxes of the form \begin{align} \hat G_{\mu\nu\lambda-} = C_{\mu\nu\lambda} \ , \end{align} with $\mu,\nu,\lambda\ne +,-$. In particular we find the possibilities $C_{IJK},C_{IJk}, C_{Ijk}$ and $C_{ijk}$, with all other combinations identically zero. These are expected to lead to additional terms in the M5-brane effective action of the form: \begin{align} S' \sim \frac{1}{{g^2_{\text{YM}}}}{\rm tr}\int d x^- d^4 x\sqrt{g} \Big(& C^{IJK} X^I[X^J,X^K] + C^{IJi} X^ID_iX^J + C^{Iij} X^IF_{ij} \nonumber \\ & + C^{ijk}\left(A_i\partial_j A_k - \frac{2i}{3}A_iA_jA_k\right) - \frac{1}{2}m^2_{IJ}X^IX^J +\frac{i}{2}\bar \psi m\psi \Big)\ , \end{align} where $m$ and $m_{IJ}$ are masses which are linear in the fluxes. Starting with a general ansatz, we find only the following corrections to the action can be made supersymmetric: \begin{align}\label{Sprime} S'= \frac{1}{{g^2_{\text{YM}}}}{\rm tr}\int d x^- d^4 x\sqrt{g}\left(\ \frac{1}{6}C^{Iij} X^IF_{ij}+\frac{i}{144}\bar \psi \left( -\Gamma_+C^{IJK}\Gamma^{IJK}+ 3 \Gamma_+C^{Ijk}\Gamma^{I}{}\Gamma_{jk} \right)\psi\right)\ .
\end{align} Along with this there are additional terms in the supersymmetry transformations: $\delta\to \delta+\delta'$ with \begin{align} \delta' \psi & = -\frac{1}{12} C^{JKL}\Gamma^{IJKL}\Gamma_+X^I\epsilon -\frac{1}{6} C^{IJK}\Gamma^{JK}\Gamma_+X^I \epsilon \nonumber\\ &\qquad -\frac13 C^{Ijk}\Gamma_{jk}\Gamma_+X^I\epsilon -\frac14 C^{Ijk}\Gamma_+\Gamma_{jk}\Gamma^{IJ}X^J\epsilon \nonumber\\ \delta'G_{ij} & = -\frac{7i}{144} C^{IJK} \bar\epsilon\Gamma_+\Gamma_{ij}\Gamma_- \Gamma^{IJK}\psi + \frac{i}{12} (C^I+\star C^I)_{ij}\bar\epsilon \Gamma_-\Gamma_+ \Gamma^I\psi \nonumber\\ &\qquad - \frac{5i}{24}C^{Ikl}\bar\epsilon\Gamma_+\Gamma_- \Gamma^{I} \Gamma_{kl}\Gamma_{ij}\psi -\frac{i}{48} C^{Ikl} \bar\epsilon\Gamma_+\Gamma_-\Gamma^{I}\Gamma_{ij}\Gamma_{kl}\psi \ , \end{align} and furthermore the Killing spinor equation is modified to \begin{align} \hat{D}_i\epsilon &= \frac{1}{72} C^{IJK}\Gamma^{IJK}\Gamma_+\Gamma_i\epsilon -\frac16 C^I{}_{ik}\Gamma^{I}\Gamma^k \Gamma_+\epsilon - \frac{1}{24} C^{Ijk}\Gamma^{I}\Gamma_{ijk} \Gamma_+\epsilon \nonumber\\ \hat{D}_-\epsilon &= \frac{1}{72} C^{IJK}\Gamma^{IJK}\Gamma_{+ -}\epsilon+\frac{1}{36} C^{IJK}\Gamma^{IJK} \epsilon + \frac{1}{24} C^{Ijk}\Gamma^I\Gamma_{jk}\Gamma_{+ -}\epsilon +\frac{1}{12} C^{Ijk}\Gamma^I\Gamma_{jk}\epsilon \nonumber\\ \hat{D}_+\epsilon &= 0\ , \end{align} which is in agreement with the eleven-dimensional supergravity Killing spinor equation (\ref{MKSE}). At first glance our result is somewhat surprising: we find no supersymmetric corrections possible for fluxes of the form $C^{ijk}$ or $C^{IJk}$, no Myers-type flux term for $C^{IJK}$ and no bosonic mass terms at all. One way to see this strange behaviour is to note that the null theory can be obtained from a non-Lorentzian rescaling of familiar five-dimensional Yang-Mills theory \cite{Lambert2019rishi}. 
Here one makes the rescaling of space and time according to \begin{align} x^i \to \zeta^{-1/2} x^i , \qquad x^0\to \zeta^{-1} x^0 \ , \end{align} and the matter fields by \begin{align} X^I \to \zeta X^I\ ,\qquad \psi_+\to \zeta^{3/2}\psi_+\ ,\qquad \psi_-\to \zeta \psi_-\ , \end{align} and then takes the limit $\zeta\to 0$, carefully removing divergent terms. One then makes the identification $x^-=x^0$ (but note that $\Gamma_{-} = (\Gamma_0-\Gamma_5)/\sqrt{2}$). The scaling of the supersymmetry parameter $\epsilon$ is fixed by requiring that the fields scale the same way as their supersymmetry variations; this leads to \cite{Lambert2019rishi} \begin{align} \epsilon_+ \to \epsilon_+\ ,\qquad \epsilon_-\to \zeta^{-1/2}\epsilon_- \ . \end{align} Let us now consider the form of $S'$ that would arise from a spacelike reduction of the M5-brane in a non-vanishing supergravity flux ({\it e.g.} as in \cite{cordova2017five}): \begin{align} S'_{SYM} \sim \frac{1}{{g^2_{\text{YM}}}}{\rm tr}&\int d^5 x \sqrt{g}\Big( C^{IJK} X^I[X^J,X^K]+ C^{IJM}X^ID_MX^J + C^{IMN} X^IF_{MN}\nonumber\\ &+ C^{MNP}\left(A_M\partial_N A_P - \frac{2i}{3}A_MA_NA_P\right) - \frac{1}{2}m^2_{IJ}X^IX^J +\frac{i}{2}\bar\psi m \psi \Big)\ , \end{align} where again $m_{IJ}$ and $m$ are linear in the fluxes. Examining the Killing spinor equation (\ref{MKSE}) one sees that we must scale the fluxes according to \begin{align} C_{\mu\nu\lambda} \to \zeta^{-1} C_{\mu\nu\lambda}\ , \end{align} otherwise we encounter divergences or the fluxes are scaled away.
As a result, the deformed action scales as, schematically, \begin{align} S'_{SYM} \sim &\frac{1}{{g^2_{\text{YM}}}}{\rm tr}\int d x^- d^4 x \sqrt{g}\Big( \zeta C^{IJK} X^I[X^J,X^K] + C^{Iij} X^IF_{ij}\nonumber\\ & + \zeta^{ 1/2} C^{IJi}X^ID_iX^J+ \zeta^{-1/2} C^{ijk}\left(A_i\partial_j A_k - \frac{2i}{3}A_iA_jA_k\right)\nonumber\\ &+ i \psi_-^T C_{\mu\nu\lambda} \Gamma^{\mu\nu\lambda}\psi_- + i\zeta \psi_+^T C_{\mu\nu\lambda} \Gamma^{\mu\nu\lambda}\psi_+ - \zeta C^{I\nu\lambda}C^J{}_{\nu\lambda}X^IX^J \Big)\ . \end{align} Thus, in the limit $\zeta\to 0$, the only terms in $S'$ that survive are precisely those in (\ref{Sprime}). The only exception is the Chern-Simons-like term, which diverges and is therefore not consistent with taking the limit. \\ \section{Conclusions and Comments} In this paper we performed a general reduction of the M5-brane along a null Killing direction. We then extended the result to a non-abelian theory. The result is a class of supersymmetric gauge theories in 4+1 dimensions but without Lorentz invariance. We also explored the effect of coupling background supergravity fluxes to the M5-brane and twistings of the normal bundle. The results presented above include and generalise earlier results. In particular, simply setting $u = v = \sigma = 0$ and $g_{ij} = \delta_{ij}$ recovers the flat space case \cite{Lambert:2018lgt}, and setting $u_i = \frac{1}{2} \Omega_{ij}x^j$ recovers the metric and action of \cite{Lambert:2019jwi}. An interesting feature of this construction is how the information of $H$ is encoded in a consistent way into the Lagrangian. Our isometry creates a natural split of the field: $H_{ij+} = F_{ij}$, $H_{i-+}=F_{i-}$ and $H_{ij-} = {G}_{ij}$. $H$ is self-dual and closed, which is problematic for a Lagrangian. But here we find ${F}$ is closed but with no self-duality constraint off-shell, whereas $ { G}$ satisfies a self-duality constraint but is not closed.
On-shell the self-duality of $ {G}$ enforces an anti-self-duality condition on ${ F}$ as its equation of motion. In effect we have introduced a Lagrange multiplier, but without adding any new unphysical fields to our Lagrangian; $H$ provides its own Lagrange multiplier. It would be interesting to explore how this construction ties in with the six-dimensional Lagrangian approach of \cite{Sen:2019qit,Lambert:2019diy,Andriolo:2020ykk}. In case 2 ${\mathcal G}$ imposes the constraint $F = -\star F$, and therefore the dynamics is restricted to the space of anti-self-dual gauge fields on the four-dimensional submanifold. Such field configurations are then solved for by the ADHM construction in terms of moduli. The remaining part of the action leads to one-dimensional motion on the instanton moduli space \cite{Lambert:2011gb,Lambert2019rishi}. This is in keeping with the various DLCQ proposals such as \cite{Aharony:1997an,Aharony:1997pm}. In case 1, ${\mathcal G}$ imposes the constraint ${\mathcal F} = -\star {\mathcal F}$, but here there are time-derivative terms and hence there is no simple reduction to motion on a moduli space, though it would be interesting to explore the resulting constraint. The general form of the action includes an $F \wedge F_{-} \wedge v$ term which we can think of as a mixed Chern-Simons term between diffeomorphisms and gauge transformations. In particular, for case 1 this term vanishes, but in case 2 we have $u=0$ and so ${\cal F}_{ij} =F_{ij}$.
In this case if we let $v_{(5)} = v_i dx^i + \sigma dx^-$ then the metric admits a diffeomorphism $x^+\to x^+ + \omega$ which has the effect of mapping $v_{(5)}\to v_{(5)} +d_{(5)}\omega$ where $\omega$ depends on $x^i$ and $x^-$.\footnote{Curiously this diffeomorphism allows us to set $\sigma=0$ even though manifest supersymmetry of the action can be affected by the choice of $\sigma$.} We can rewrite the terms involving $F$ as \begin{align} \mathcal{L}_{F} = \frac12 {\rm tr}(F_-\wedge \star F_-) -\frac18 \sigma {\rm tr}\big((F-\star F)\wedge \star (F -\star F)\big) + \frac12 {\rm tr} (F\wedge {\cal G}) +\mathcal{L}_{\text{cs}} \ , \end{align} where \begin{align} \label{eq:CS2} \mathcal{L}_{\text{cs}} = - \frac{1}{4} {\rm tr}( F_{(5)}\wedge F_{(5)}) \wedge v_{(5)}\ , \end{align} and $ F_{(5)} = F + F_{-} \wedge dx^-$. Thus, under a diffeomorphism $v_{(5)} \to v_{(5)}+ d_{(5)}\omega$ the Lagrangian shifts by a total derivative. Alternatively we can write \begin{align} \mathcal{L}_{\text{cs}} = \frac{1}{4}{\rm tr}\left( A_{(5)}\wedge dA_{(5)} - \frac{2i}{3} A_{(5)} \wedge A_{(5)}\wedge A_{(5)}\right) \wedge dv_{(5)}\ , \end{align} in which case the gauge symmetry is only preserved up to a boundary term. We cannot write this term in a way which makes explicit both of these invariances simultaneously. Thus, we see that $\mathcal{L}_{\text{cs}} $ mixes a five-dimensional diffeomorphism with the $U(1)$ part of the gauge symmetry. We hope that the results will be of use in studying the $(2,0)$ and related theories reduced on non-trivial manifolds through DLCQ-type constructions \cite{Aharony:1997an,Aharony:1997pm}. For example, one could consider theories of class ${\cal S}$ \cite{Gaiotto:2009we} obtained by reduction of M5-branes on a Riemann surface $\Sigma$. 
Our results here should allow for a systematic construction in terms of motion on the moduli space of instantons on ${\mathbb R}^2\times \Sigma$, {\it i.e.} Hitchin systems, coupled to scalars, fermions and possible additional data associated with singularities of $\Sigma$. \section*{Acknowledgements} We would like to thank Rishi Mouland for discussions. N.L. was supported in part by STFC grant ST/L000326/1 and would like to thank the CERN Theory Division for hospitality. T.O. is supported by the STFC studentship ST/S505468/1. \section*{Appendix A: Conventions} Our conventions are as follows: we use $\mu,\nu=0,1,2,...,10$ and consider an M5-brane with worldvolume coordinates $x^M$, $M=0,1,2,...,5$. However, we also introduce light-cone coordinates \begin{align} x^{+} = \frac{1}{\sqrt{2}} (x^0 + x^5)\ ,\qquad x^{-} = \frac{1}{\sqrt{2}} (x^0 - x^5)\ ,\qquad x^i\ , \ i=1,2,3,4\ . \end{align} We will use hats to denote six-dimensional geometrical quantities. Fermions are dealt with by using Gamma matrices that satisfy a flat Clifford algebra in eleven dimensions (again with light-cone Minkowski metric). All other Gamma matrices appearing in our work are derived from this basis as outlined below. Underlined indices refer to the tangent space.
\begin{center} \begin{tabular}{ |c|c|c|c| } \hline Notation & Definition & Description & Indices\\ \hline & & & \\ $\Gamma^{\underline{\mu}}$ & $\{ \Gamma^{\underline{\mu}}, \Gamma^{\underline{\nu}} \} = 2 \eta^{\, \underline{\mu \nu}}$ & Matrices of Spin(1,10) &$\underline{\mu} \in \{0, \dots, 10\}$ \\ & & & \\ $\Gamma^{\underline{M}}$ & $\{ \Gamma^{\underline{M}}, \Gamma^{\underline{N}} \} = 2 \eta^{\, \underline{MN}}$ & On the brane &$\underline{M} \in \{+,-,1, \dots, 4\}$ \\ & & & \\ $\Gamma^{I}$ & $\{ \Gamma^{I}, \Gamma^{J} \} = 2 \delta^{\, \underline{IJ}}$ & Off the brane &$I \in \{6, \dots, 10\}$ \\ & & & \\ $\hat{\Gamma}^M$ & $\hat{e}^{M}_{\ \underline{M}} \Gamma^{\underline{M}}$ & 6d curved index Gamma matrices & $M \in \{+,-,1,\dots,4$\} \\ & & & \\ $\Gamma^{i}$ & $e^{i}_{\ \underline{i}} \Gamma^{\underline{i}}$ & 4d curved index Gamma matrices & $i \in \{1,\dots,4\}$\\ & & & \\ \hline \end{tabular} \end{center} To avoid the confusion of whether or not $\Gamma^{\underline{+}}$ means $\Gamma^{\underline{\text{plus}}}$ or $ \Gamma^{\text{plus minus}}$, we will only use \begin{align} \Gamma^{+} &= \frac{\Gamma^{\underline 0}+\Gamma^{\underline 5}}{\sqrt{2}}\ , \qquad \Gamma^{-} = \frac{\Gamma^{\underline 0}-\Gamma^{\underline 5}}{\sqrt{2}}\nonumber\\ \Gamma_{+} &= \frac{\Gamma_{\underline 0}+\Gamma_{\underline 5}}{\sqrt{2}}\ , \qquad \Gamma_{-} = \frac{\Gamma_{\underline 0}-\Gamma_{\underline 5}}{\sqrt{2}}\ . \end{align} The relations \begin{align} \begin{split}\label{GammaRel} &\hat{\Gamma}^+ = \Gamma^{+} - \sigma \Gamma^{-} - v_{i} \Gamma^{i}, \qquad \hat{\Gamma}^{-} = \Gamma^{-} - u_{i}\Gamma^{i} \qquad \hat{\Gamma}^{i} = \Gamma^{i} \\ &\hat{\Gamma}_{+} = \Gamma_{+}, \qquad \hat{\Gamma}_{-} = \sigma \Gamma_{+} + \Gamma_{-}, \qquad \hat{\Gamma}_{i} = (v_i + \sigma u_i) \Gamma_{+} + u_i \Gamma_{-} + \Gamma_i\ , \end{split} \end{align} will be repeatedly used. 
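As a small consistency check of these definitions (assuming the mostly-plus signature $\eta^{\underline{00}}=-1$, $\eta^{\underline{55}}=+1$ implied by the Euclidean four-dimensional submanifold), the light-cone matrices obey the expected Clifford relations:

```latex
\begin{align}
\{\Gamma^{+},\Gamma^{-}\} &= \tfrac12\big\{\Gamma^{\underline 0}+\Gamma^{\underline 5},\,
\Gamma^{\underline 0}-\Gamma^{\underline 5}\big\}
= (\Gamma^{\underline 0})^2-(\Gamma^{\underline 5})^2
= \eta^{\underline{00}}-\eta^{\underline{55}} = -2\ ,\nonumber\\
(\Gamma^{+})^2 &= \tfrac12\big(\Gamma^{\underline 0}+\Gamma^{\underline 5}\big)^2
= \tfrac12\big(\eta^{\underline{00}}+\eta^{\underline{55}}\big) = 0\ ,
\end{align}
```

so that $\eta^{+-}=\eta^{-+}=-1$ and $\eta^{++}=\eta^{--}=0$, as required for the light-cone Minkowski metric.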
\\ \\ The subscript $\pm$ on spinors labels their eigenvalue under $\Gamma_{\underline{05}}$, {\it e.g.}: \begin{align} \Gamma_{\underline{05}}\epsilon_\pm = \pm\epsilon_\pm \ . \end{align} In addition we always have that $\Gamma_{\underline{012345}}\epsilon=\epsilon$ and $\Gamma_{\underline{012345}}\psi=-\psi$. This has the crucial consequence of giving certain spinor bilinears definite duality under the 4d Hodge star. Consider the following spinor bilinear \begin{align} \bar{\epsilon} \Gamma_{ij} \psi \ . \end{align} Since $\Gamma_{\underline{012345}}\psi=-\psi$, it follows that \begin{align} \Gamma_{\underline{12}}\psi = \Gamma_{\underline{34}} \Gamma_{\underline{05}} \psi \ , \end{align} or in general \begin{align} \Gamma_{\underline{ij}}\psi = \frac{1}{2} \varepsilon_{\underline{ijkl}}\Gamma^{\underline{kl}} \Gamma_{\underline{05}} \psi \ . \end{align} From this it is easy to see that $\Gamma_{ij}\psi_{+}$ is self-dual, while $\Gamma_{ij}\psi_{-}$ is anti-self-dual under the four-dimensional Hodge star. Since $\epsilon$ has the opposite chirality under $\Gamma_{\underline{012345}}$, these are reversed: $\Gamma_{ij}\epsilon_{+}$ is anti-self-dual, $\Gamma_{ij}\epsilon_{-}$ is self-dual. \section*{Appendix B: The Background} The vielbein (and inverse) for the metric are given by $\hat{e}^{\ \underline{M}}_{M} \, \hat \eta_{\underline{M N}} \, \hat{e}^{\underline{N}}_{\ N} = \hat{g}_{M N} $, with $\eta_{\underline{MN}}$ the light-cone Minkowski metric in six dimensions. This results in \begin{align} \label{eq:viel} \hat{e}^{\, \underline{M}}_{\ M} = \begin{pmatrix} 1 & \sigma & v_i + \sigma \, u_i \\ 0 & 1 & u_i \\ 0 & 0 & e^{\, \underline{i}}_{\ i} \end{pmatrix}\ , \qquad \hat{e}^{\, M}_{\ \underline{M}} = \begin{pmatrix} 1 & -\sigma & -v_{\underline{i}} \\ 0 & 1 & -u_{\underline{i}} \\ 0 & 0 & e^{\, i}_{\ \underline{i}} \end{pmatrix}, \end{align} with $e^{\, \underline{i}}_{\ j}$ being the vielbein for the four-dimensional metric $g_{\, ij}$.
Here $u^i$ and $v^i$ have their indices raised by $g_{\, ij}$, so that dot products are also defined with $g_{\, ij}$. We also note that \begin{align} \hat{g} = \det(\hat{g}_{MN}) = \det(\hat{e}^{\underline M}{}_{N})^2 \det(\hat\eta_{\underline{MN}}) = -\det(g_{ij})\ . \end{align} \\ Adding the fermions requires knowledge of the spin connection, the non-zero terms of which are \begin{align} \begin{split}\label{SpinC} \hat{\omega}_{+-i} &= \frac{1}{2}\partial_{-}u_i \\ \hat{\omega}_{+ij} &= \partial_{[i}u_{j]} \\ \hat{\omega}_{-+i} &= \frac{1}{2} \partial_{-}u_i \\ \hat{\omega}_{--i} &= -\partial_{i} \sigma + u_i \partial_{-} \sigma + 2 \sigma \partial_{-} u_i + \partial_{-} v_i\\ \hat{\omega}_{-ij} &= \partial_{[i} (v_{j]} + 2 \sigma u_{j]}) + u_{[i} \partial_{-} v_{j]} - v_{[i} \partial_{-} u_{j]} - e^{\underline{i}}_{\ [i} \partial_{-} e_{|\underline{i}| j]} \\ \hat{\omega}_{i+-} &= - \frac{1}{2}\partial_{-}u_i \\ \hat{\omega}_{i+j} &= \partial_{[i}u_{j]} \\ \hat{\omega}_{i-j} &= \partial_{[i}( v_{j]} + 2 \sigma u_{j]}) + 2 u_{(i} \partial_{j)} \sigma + \partial_{-} (u_{(i}(v_{j)} + \sigma u_{j)})) - \frac{1}{2}\partial_{-}g_{ij} \\ \hat{\omega}_{ijk} &= \omega_{ijk} + \partial_{j} ( u_{(i}(v_{k)} + \sigma u_{k)})) - \partial_{k} ( u_{(i}(v_{j)} + \sigma u_{j)})) + \partial_{i}( u_{[j} v_{k]}) + 2 (v_{[j} + \sigma u _{[j}) \partial_{|i|} u_{k]}\ , \end{split} \end{align} where $\omega_{ijk}$ is the four-dimensional spin connection for $D_i$, the Levi-Civita connection for $g_{ij}$ on our Euclidean submanifold. \bibliographystyle{JHEP}
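The determinant identity $\hat g = -\det(g_{ij})$ stated in Appendix B can be spot-checked numerically from the explicit vielbein (\ref{eq:viel}). The following is a minimal sketch with randomly chosen background fields; the sign convention $\hat\eta_{+-}=\hat\eta_{-+}=-1$ is our assumption, although the determinant relation only uses $\det\hat\eta=-1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numerical values for the background fields sigma, u_i, v_i
# and a random positive-definite four-dimensional metric g_ij.
sigma = rng.normal()
u = rng.normal(size=4)
v = rng.normal(size=4)
a = rng.normal(size=(4, 4))
g = a @ a.T + 4.0 * np.eye(4)      # g_ij, positive definite
e = np.linalg.cholesky(g)          # e e^T = g, so (e^T)^T (e^T) = g

# Six-dimensional vielbein e^{\underline M}_M in the (+, -, i) block form.
E = np.zeros((6, 6))
E[0, 0] = 1.0; E[0, 1] = sigma; E[0, 2:] = v + sigma * u
E[1, 1] = 1.0; E[1, 2:] = u
E[2:, 2:] = e.T

# Light-cone Minkowski metric: eta_{+-} = eta_{-+} = -1, eta_{ij} = delta_{ij}.
eta = np.eye(6)
eta[0, 0] = eta[1, 1] = 0.0
eta[0, 1] = eta[1, 0] = -1.0

ghat = E.T @ eta @ E               # \hat g_{MN}
assert np.isclose(np.linalg.det(ghat), -np.linalg.det(g))
```

Since the vielbein is block upper-triangular, $\det\hat e = \det e = \sqrt{\det g}$, so $\det\hat g = (\det\hat e)^2\det\hat\eta = -\det g$ for any choice of $\sigma$, $u_i$, $v_i$.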
\section{Introduction} \label{sec:intro} Speech enhancement aims to separate clean speech from noisy speech~\cite{se_book}. It is an essential branch of speech signal processing and has been widely studied in the past few decades. It can be used in hearing aids, voice recorders, and smart speakers, as well as in the front end of tasks such as speech recognition~\cite{speech_se} and speaker recognition~\cite{speaker_se}. In recent years, a large number of speech enhancement methods based on deep learning have been proposed~\cite{se_overview}~\cite{regression_approch}~\cite{crn}~\cite{wavenet}~\cite{iconip_2019}, showing stronger robustness than traditional signal-based methods. These methods can generally be divided into time-domain methods and frequency-domain methods. The time-domain methods~\cite{unetgan}~\cite{tcnn} use a neural network to directly map the noisy speech waveform to the clean speech waveform and usually do not require any preprocessing. The frequency-domain methods generally use the short-time Fourier transform (STFT) to convert the noisy speech from the time domain to the frequency domain. They then use a neural network to map the magnitude spectrum of the noisy speech to a masking target~\cite{ibm_as_goal_wang_2005} or to the magnitude spectrum of the clean speech~\cite{regression_approch}. Compared with the chaotic time-domain sampling points, the magnitude spectrum contains more geometric information, which makes it easier to compute losses and analyze frequency components. As the SNR of noisy speech decreases, the correct phase becomes more and more important for speech intelligibility and quality~\cite{important_phase}. However, since the phase spectrum is difficult to map (it has no obvious geometric structure), time-domain speech enhancement methods are also widely used. With the increasing demand for speech-related services in recent years, the scenarios that speech enhancement needs to address are becoming more and more challenging. A noticeable trend is that the range of SNRs of the noisy speech is significantly expanded. Typical scenarios include stations, factories, subways, and shopping malls.
The increased range of SNRs means that speech enhancement must cope with background noise of widely varying intensity. When the background noise is low, the main goal of speech enhancement is to improve the perceptual quality of the speech. When the background noise is high (in extreme cases, below -10dB, the speech can barely be heard), speech enhancement must turn speech that is nearly inaudible into clearly intelligible speech. To better perform speech enhancement in the above scenarios, this paper proposes a speech enhancement method based on knowledge distillation~\cite{hinton2015distilling} and a time-domain U-Net. Our motivation is as follows. Speech enhancement can be viewed as a particular case of speech separation. In speech separation, Wave-U-Net~\cite{wave_u_net}, a time-domain model, has achieved state-of-the-art performance. It performs feature mapping directly in the time domain and thereby avoids processing the phase. Inspired by it, we build a powerful time-domain model suitable for speech enhancement. To enable a speech enhancement model to handle both high and low SNRs, there are usually two solutions. The first is to provide a large amount of training data at each SNR and train a large-scale neural network. The second is to integrate multiple models, each specialized for a specific SNR range, to process the noisy speech in parallel or serially. However, the disadvantage of the former is that large-scale neural networks consume a lot of computing resources, memory, and time. The problem with the latter is that it requires training multiple models, which is troublesome to deploy and severely limits the application. To deal with the above problems, we introduce knowledge distillation, which has been widely used in image recognition~\cite{object_detection_knowledge_distillation} and speech recognition~\cite{danpovey_distilling}.
Knowledge distillation can extract knowledge from a large teacher model and improve the performance of a small student model. It is also called the teacher-student technique. In this paper, we extend the traditional teacher-student technique and propose an SNR-based teachers-student technique. We first build multiple teacher speech enhancement models and train them independently, each on a dataset covering a small SNR range. Then we build a student model. To give the student model the ability to handle both high and low SNRs, we use different teacher models to guide its training according to the SNR of the training data. To evaluate the proposed method, we construct a challenging speech enhancement dataset that covers a wide range of SNRs (-20dB to 20dB). We experimentally analyze the effectiveness of the SNR-based teachers-student technique and compare the proposed method with several state-of-the-art (SOTA) methods. \section{Method} \subsection{SNR-based teachers-student technique} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/teacher_and_student.png} \caption{An illustration of the standard knowledge distillation.} \label{fig:teacher_and_student} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/teachers.png} \caption{An illustration of the SNR-based teachers-student technique.} \label{fig:teachers_and_student} \end{figure} We show the standard knowledge distillation in Figure~\ref{fig:teacher_and_student}. The idea behind knowledge distillation is that the soft probability output $p'$ of a pre-trained teacher model contains more information than the correct class label $p''$. If the teacher model gives higher probabilities for specific categories, this indicates that the true class of the input image is close to these categories.
Knowledge distillation forces the student model to minimize the difference between its probability output $p$ and the teacher's probability output $p'$, thereby extracting the additional knowledge that the teacher model obtained when learning the correct probabilities. Knowledge distillation is usually used to distill the information learned by large networks and pass it to networks with fewer parameters and weaker learning capabilities. In this paper, we extend the standard knowledge distillation. The workflow is shown in Figure~\ref{fig:teachers_and_student}. First, we train multiple teacher models, each proficient in speech enhancement over a specific small SNR range. Then we use the teacher models to guide the training of the student model so that the student model can perform speech enhancement under both high and low SNRs. Below we describe the entire process. \textbf{Train the teacher models.} We use $ f_T = \{ f_{t_1}, f_{t_2}, ... \}$ to represent the set of teacher models, and $f_{t_i}$ to represent the $i$-th teacher model. The dataset of each teacher model covers only a small range of SNRs, and the SNR ranges covered by the teachers' datasets do not overlap. After sufficient training, we obtain multiple teacher models, each good at dealing with the SNRs covered by its dataset. We then fix the weights of the teacher models. \textbf{Forward calculation.} We use $x$ to represent the noisy speech and $y$ to represent the corresponding clean speech. We input $x$ to the SNR-based teacher models. According to the SNR of $x$, the training algorithm selects a teacher model $f_{t_i}$ to obtain the enhanced speech $f_{t_i} (x)$. We also input $x$ to the student model $f_s$ to get the enhanced speech $f_s (x)$. \textbf{Distill knowledge from the teacher model}. We use an $L_2$ loss function to minimize the difference between the output of the student model and that of the teacher model.
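The forward calculation and the distillation loss can be sketched as follows. This is a minimal numpy sketch rather than the authors' implementation: the SNR bands, the placeholder `teachers`, and the use of the clean reference to compute the mixture SNR are illustrative assumptions.

```python
import numpy as np

def snr_db(noisy, clean):
    """SNR of a training mixture, computed against its clean reference."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def select_teacher(snr, teachers, bands):
    """Pick the teacher whose (non-overlapping) SNR band contains `snr`."""
    for teacher, (lo, hi) in zip(teachers, bands):
        if lo <= snr < hi:
            return teacher
    return teachers[-1]  # fall back to the last (highest-SNR) teacher

def distillation_loss(student_out, teacher_out, clean, alpha):
    """Weighted sum of the teacher-matching and clean-target L2 terms."""
    return (alpha * 0.5 * np.sum((student_out - teacher_out) ** 2)
            + (1.0 - alpha) * 0.5 * np.sum((student_out - clean) ** 2))

# Hypothetical example: two teachers with bands [-20, 0) dB and [0, 20) dB.
teachers = [lambda x: 0.9 * x, lambda x: x]   # placeholder enhancers
bands = [(-20.0, 0.0), (0.0, 20.0)]
y = np.ones(16)                               # clean speech
x = y + 0.5                                   # noisy speech, about 6 dB SNR
f_t = select_teacher(snr_db(x, y), teachers, bands)
loss = distillation_loss(x, f_t(x), y, alpha=0.5)   # x stands in for f_s(x)
```

With `alpha=0` the loss reduces to the plain clean-target loss, so the standard (teacher-free) training is recovered as a special case.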
This distillation loss extracts additional knowledge from the teacher model during the student training process and strengthens the student model's ability to enhance speech at a specific SNR. We also retain the loss between the student's enhanced speech $f_s (x)$ and the clean speech $y$. The overall loss $L$ is as follows: \begin{eqnarray} L = \alpha \frac{1}{2} {|| f_s (x) - f_{t_i} (x) ||}^2 + (1 - \alpha) \frac{1}{2} {|| f_s (x) - y||}^2 \label{eq:loss} \end{eqnarray} where $\alpha$ is the weight of the knowledge obtained from the teachers, which is determined based on the validation dataset. \begin{figure*}[t] \centerline{\includegraphics[width=\textwidth]{figures/basic_network.png}} \caption{The network structure of the time-domain U-Net model. $C$ represents the number of convolution kernels, $K$ represents the size of the convolution kernels, and $S$ represents the stride. All convolutional layers of the model use same-padding.} \label{fig:basic_network} \end{figure*} \subsection{Model structure} As the SNR of noisy speech decreases, the correct phase becomes more and more critical for speech enhancement. However, since the phase spectrum mapping is complicated, we conduct speech enhancement in the time domain. Inspired by Wave-U-Net~\cite{wave_u_net}, we design a time-domain speech enhancement model based on one-dimensional (1D) convolutional neural networks. The teacher models and the student model both use this network structure. The input of the model is a fixed-length noisy speech signal, and the output is an enhanced speech signal. The model contains three parts: Encoder, Bottleneck, and Decoder, and its structure is shown in Figure~\ref{fig:basic_network}. Below we describe it in detail. The Encoder part consists of 12 consecutive encoder blocks. The first seven encoder blocks use the same structure: ``1D convolutional layer + batch normalization + leaky ReLU + downsampling layer".
1D convolutional layers are used for feature extraction and integration. The parameters of the convolutional layers are listed above each encoder block. These seven encoder blocks each contain a downsampling layer, which keeps one of every two adjacent sampling points as output, so the size of the input signal in the time dimension gradually becomes smaller. The structure of the remaining five encoder blocks in the Encoder is similar to that of the previous seven, but they do not contain a downsampling layer. For the time-domain model, since the first convolution layer plays a critical role~\cite{sincnet}, we set the number of convolution kernels in the first convolution layer to a larger value (48) so that the model can better extract features of the speech waveform. The number of convolution kernels in the remaining convolutional layers increases in steps of 24 as the network deepens. The structure of the Bottleneck part is the same as that of the Encoder. The Decoder part contains 12 consecutive decoder blocks. Each decoder block begins with a concatenation operation, which concatenates the low-level features passed through the skip-connection. In order to maintain symmetry, the last seven decoder blocks all include an upsampling layer. The upsampling layer doubles the size of the feature map in the time dimension by linear interpolation, finally restoring the size of the input speech waveform. The remaining structure of the decoder blocks is similar to that of the encoder blocks.
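The training step described above, selecting a teacher by the SNR of the input and combining the two terms of Equation~\ref{eq:loss}, can be sketched as follows. This is a minimal numpy illustration; the function names and the exact SNR boundaries separating the four teachers are our assumptions, not the authors' code:

```python
import numpy as np

def snr_db(clean, noise):
    """Signal-to-noise ratio in dB between a clean signal and additive noise."""
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def select_teacher(snr, boundaries=(-10.0, 0.0, 10.0)):
    """Pick the index of the teacher trained on the SNR range containing `snr`.

    The boundaries roughly separate the four training ranges used later in
    the paper: about [-20, -10), [-10, 0), [0, 10), and [10, 20] dB (assumed).
    """
    return int(np.searchsorted(boundaries, snr, side="right"))

def overall_loss(student_out, teacher_out, clean, alpha=0.5):
    """L = alpha*1/2*||f_s(x)-f_t(x)||^2 + (1-alpha)*1/2*||f_s(x)-y||^2."""
    return (alpha * 0.5 * np.sum((student_out - teacher_out) ** 2)
            + (1.0 - alpha) * 0.5 * np.sum((student_out - clean) ** 2))
```

With `alpha = 0.5`, the value chosen on the validation set later in the paper, the student weighs the teacher's output and the clean target equally.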
\begin{table*}[!t] \centering \scriptsize \caption{The PESQ and STOI of the proposed method at different SNRs and different noises.} \renewcommand\arraystretch{1} \setlength{\tabcolsep}{1.2mm}{ \begin{tabular}{cccccccccccccccccccc} \toprule \multirow{3}[0]{*}{Noise} & \multirow{3}[0]{*}{Target} & \multicolumn{9}{c}{PESQ} & \multicolumn{9}{c}{STOI} \\ \cmidrule(r){3-11}\cmidrule(lr){12-20} & & \multicolumn{5}{c}{Seen} & \multicolumn{4}{c}{Unseen} & \multicolumn{5}{c}{Seen} & \multicolumn{4}{c}{Unseen} \\ \cmidrule(r){3-7}\cmidrule(r){8-11}\cmidrule(lr){12-16}\cmidrule(r){17-20} & & -20dB & -10dB & 0dB & 10dB & 20dB & -15dB & -5dB & 5dB & 15dB & -20dB & -10dB & 0dB & 10dB & 20dB & -15dB & -5dB & 5dB & 15dB \\ \midrule \multirow{2}[0]{*}{Babble} & Noisy & 0.973 & 1.025 & 1.438 & 2.215 & 2.900 & 0.703 & 1.079 & 1.823 & 2.573 & 0.326 & 0.443 & 0.641 & 0.815 & 0.912 & 0.362 & 0.547 & 0.740 & 0.871 \\ & Enhanced & 1.410 & 1.495 & 2.258 & 2.816 & 3.227 & 1.084 & 1.783 & 2.577 & 3.018 & 0.336 & 0.552 & 0.808 & 0.901 & 0.941 & 0.377 & 0.671 & 0.861 & 0.923 \\ \midrule Destroyer & Noisy & 0.924 & 0.886 & 1.326 & 2.087 & 2.777 & 0.783 & 1.050 & 1.716 & 2.442 & 0.361 & 0.462 & 0.646 & 0.839 & 0.937 & 0.399 & 0.548 & 0.750 & 0.894 \\ engine & Enhanced & 1.230 & 1.727 & 2.400 & 2.790 & 3.143 & 1.315 & 2.083 & 2.574 & 2.966 & 0.423 & 0.637 & 0.831 & 0.899 & 0.941 & 0.482 & 0.739 & 0.868 & 0.923 \\ \midrule Destroyer & Noisy & 0.411 & 0.844 & 1.525 & 2.285 & 2.997 & 0.432 & 1.077 & 1.926 & 2.640 & 0.355 & 0.472 & 0.669 & 0.817 & 0.908 & 0.400 & 0.572 & 0.753 & 0.868 \\ ops & Enhanced & 1.140 & 1.762 & 2.471 & 2.941 & 3.342 & 1.461 & 2.169 & 2.727 & 3.130 & 0.449 & 0.644 & 0.840 & 0.904 & 0.941 & 0.498 & 0.754 & 0.872 & 0.928 \\ \midrule \multirow{2}[0]{*}{Factory floor 1} & Noisy & 0.855 & 0.831 & 1.338 & 2.137 & 2.894 & 0.906 & 0.989 & 1.744 & 2.516 & 0.322 & 0.437 & 0.624 & 0.817 & 0.920 & 0.373 & 0.525 & 0.727 & 0.874 \\ & Enhanced & 1.070 & 1.631 & 2.322 & 2.811 & 3.191 & 1.296 
& 1.993 & 2.538 & 2.975 & 0.368 & 0.577 & 0.803 & 0.898 & 0.941 & 0.415 & 0.708 & 0.857 & 0.921 \\ \midrule Speech shaped & Noisy & 0.640 & 0.547 & 1.307 & 2.116 & 2.850 & 0.424 & 0.921 & 1.724 & 2.490 & 0.349 & 0.443 & 0.628 & 0.817 & 0.922 & 0.384 & 0.525 & 0.733 & 0.877 \\ noise & Enhanced & 0.995 & 1.588 & 2.356 & 2.820 & 3.152 & 1.232 & 1.989 & 2.591 & 2.984 & 0.444 & 0.553 & 0.804 & 0.896 & 0.943 & 0.460 & 0.700 & 0.860 & 0.922 \\ \midrule Pedestrian & Noisy & 0.924 & 0.774 & 1.407 & 2.178 & 2.902 & 0.616 & 1.061 & 1.813 & 2.518 & 0.333 & 0.461 & 0.646 & 0.815 & 0.909 & 0.390 & 0.550 & 0.737 & 0.873 \\ area & Enhanced & 1.185 & 1.370 & 2.222 & 2.802 & 3.235 & 1.244 & 1.803 & 2.534 & 2.965 & 0.341 & 0.523 & 0.796 & 0.898 & 0.937 & 0.407 & 0.681 & 0.856 & 0.924 \\ \midrule \multirow{2}[0]{*}{On the bus} & Noisy & 0.636 & 1.377 & 2.038 & 2.906 & 3.608 & 0.859 & 1.753 & 2.474 & 3.198 & 0.513 & 0.703 & 0.784 & 0.860 & 0.907 & 0.610 & 0.751 & 0.832 & 0.885 \\ & Enhanced & 1.242 & 2.277 & 2.746 & 3.287 & 3.728 & 1.835 & 2.569 & 3.039 & 3.457 & 0.517 & 0.785 & 0.874 & 0.910 & 0.938 & 0.643 & 0.839 & 0.892 & 0.924 \\ \midrule \multirow{2}[0]{*}{Cafe} & Noisy & 0.665 & 0.911 & 1.645 & 2.404 & 3.234 & 0.704 & 1.211 & 2.049 & 2.775 & 0.344 & 0.495 & 0.691 & 0.853 & 0.934 & 0.422 & 0.577 & 0.783 & 0.894 \\ & Enhanced & 0.981 & 1.584 & 2.378 & 2.954 & 3.445 & 1.185 & 1.934 & 2.721 & 3.140 & 0.357 & 0.619 & 0.819 & 0.909 & 0.948 & 0.477 & 0.722 & 0.875 & 0.929 \\ \midrule \multirow{2}[0]{*}{Street} & Noisy & 0.806 & 0.776 & 1.715 & 2.453 & 3.139 & 0.492 & 1.205 & 2.106 & 2.799 & 0.399 & 0.545 & 0.726 & 0.841 & 0.919 & 0.508 & 0.634 & 0.800 & 0.878 \\ & Enhanced & 1.286 & 1.844 & 2.589 & 3.044 & 3.489 & 1.591 & 2.179 & 2.851 & 3.217 & 0.402 & 0.675 & 0.857 & 0.908 & 0.941 & 0.554 & 0.778 & 0.888 & 0.927 \\ \bottomrule \end{tabular}% } \label{tab:model_performance}% \end{table*}% \section{Experiment} \subsection{Dataset} We use public datasets to evaluate the proposed method. 
\textbf{For SNR-based teacher models:} In this paper, we set up four teacher models. In order to train the teacher models for different SNR ranges, we randomly select 950 clean speeches from the TIMIT~\cite{timit} training dataset and mix them with speech shaped noise, babble, destroyer engine, destroyer ops, and factory floor 1 (from the NoiseX-92 dataset~\cite{noisex92}) at different SNRs to generate the noisy speech dataset. The SNRs of the four teacher models are \{-20, -17, -13, -11\}dB, \{-10, -7, -3, 1\}dB, \{0, 3, 7, 9\}dB, and \{10, 13, 17, 20\}dB. In the end, we generated 19,000 noisy speeches for each teacher model, of which 18,000 were used as the training dataset and the rest as the validation dataset.

\textbf{For student model:} We randomly selected 950 speeches from the TIMIT training dataset and mixed them with the five noises that appeared in the training dataset of the teacher models at \{-20, -10, 0, 10, 20\}dB. This produces 23,750 noisy speeches. Among them, 22,000 were used as the training dataset and 1,750 as the validation dataset. To evaluate the student model, we randomly selected 100 speeches from the TIMIT test dataset and mixed them with the five noises that appeared in the above two datasets and four noises from the CHiME-4~\cite{chime4} dataset (pedestrian area noise, on the bus noise, cafe noise, and street noise) at \{-20, -15, -10, -5, 0, 5, 10, 15, 20\}dB. The resulting 8,100 noisy speeches act as the test dataset. It is worth mentioning that the test dataset of the student model contains noises, SNRs, and speakers that have not appeared in the training dataset, which makes speech enhancement very challenging. We use PESQ~\cite{pesq} and STOI~\cite{stoi} to measure speech quality and intelligibility, respectively. \subsection{Implementation and training details} The sampling rate of all speeches is 16,000 Hz. The input length of all models is fixed.
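Mixing a clean utterance with a noise segment at a prescribed SNR, as in the dataset construction of the previous subsection, amounts to scaling the noise before adding it. A minimal numpy sketch (the function name is ours, not the authors'):

```python
import numpy as np

def mix_at_snr(clean, noise, target_snr_db):
    """Return clean + g * noise, with the gain g chosen so that the mixture
    has the requested signal-to-noise ratio (in dB)."""
    clean_power = np.sum(clean ** 2)
    noise_power = np.sum(noise ** 2)
    # Solve 10 * log10(clean_power / (g^2 * noise_power)) = target_snr_db for g.
    gain = np.sqrt(clean_power / (noise_power * 10.0 ** (target_snr_db / 10.0)))
    return clean + gain * noise
```

At -20 dB the scaled noise carries 100 times the energy of the speech, which is why enhancement at the low end of the range is so challenging.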
Before each training epoch, we select 16,384 consecutive sampling points (1.024 seconds) from a random position of the noisy speech as the input of the model. Except for the learning rate, the experimental parameters are the same for all models. We use the Adam optimizer~\cite{adam} (decay rates $\beta_{1} = 0.9$, $\beta_{2} = 0.999$) and set the batch size to 16. We set the negative slope of leaky ReLU to 0.1. We set the learning rate of the teacher models to a small constant of 0.0002. We set the initial learning rate of the student model to 0.002, which is halved every 300 epochs until the validation loss no longer decreases. According to the validation dataset, we set $\alpha$ in Equation~\ref{eq:loss} to $0.5$. \subsection{Baseline methods} We compare with three methods: Wavenet-denoising~\cite{wavenet}, PSM-BiLSTM~\cite{phase_sensitive_bilstm}, and CRN~\cite{crn}. The first operates in the time domain, and the last two in the frequency domain. \begin{itemize} \item Wavenet-denoising: an end-to-end method for speech denoising based on wavenet~\cite{wavenet_sn}. It retains wavenet's powerful acoustic modeling capabilities while significantly reducing time complexity by eliminating its autoregressive nature. We used its official implementation. \item PSM-BiLSTM: a bidirectional LSTM network for speech enhancement using a phase-sensitive spectrum approximation (PSA) cost function. We implemented it with the same hyperparameters as in the original paper. \item CRN: it contains a convolutional encoder-decoder and an LSTM bottleneck. We implemented it with the same hyperparameters as in the original paper. \end{itemize} \section{Result} \subsection{Performance of the proposed method} Table~\ref{tab:model_performance} presents the PESQ and STOI of the noisy speech and the enhanced speech from -20dB to 20dB.
The ``Noisy'' lines in the table represent the noisy speech, and the ``Enhanced'' lines represent the speech enhanced by the proposed method. The ``Seen'' columns denote SNRs that exist in the training dataset of the student model, while the ``Unseen'' columns denote SNRs that do not. Comparing the noisy and enhanced speech, we can see that the PESQ and STOI are improved after speech enhancement with the proposed method, even for noises that do not appear in the training dataset (the last four noises). It is also clear that our method improves the PESQ and STOI for both ``Seen'' and ``Unseen'' SNRs. SNRs lower than -5dB are generally considered extremely low, and speech enhancement under such conditions is very challenging. However, our method works well, even though the SNR range of the dataset is very wide (-20dB to 20dB). The average improvement of the PESQ and STOI is 38.71\% and 12.73\%, respectively. This improvement indicates that the proposed method is effective. \subsection{Effectiveness of SNR-based teachers} To demonstrate the effectiveness of the SNR-based teacher models, Table~\ref{tab:compare_student} lists the performance of the four SNR-based teacher models (T1, T2, T3, T4), the student model trained without teacher supervision (S1), and the proposed method (S2). As shown in Table~\ref{tab:compare_student}, we only test the teacher models on their trained small ranges of SNRs. For clarity, the PESQ and STOI of the noisy speech are also listed in the table. Compared with the noisy speech, the teacher models show notable improvements in PESQ and STOI on their corresponding SNRs. This suggests that the teacher models perform well on a small range of SNRs. S1 and S2 use the same training dataset and network structure. Compared to all the above models, S1 performs the worst at every SNR.
We suspect that this is because the SNR range of the dataset is too large: with the learning ability of S1, it is very difficult to handle all the SNRs from low to high. The PESQ and STOI of the proposed method S2 approximate those of the teacher models at the corresponding SNRs and are far better than those of S1 at all SNRs. These results indicate that the proposed SNR-based teachers-student technique helps the student model improve its ability to handle high SNR and low SNR at the same time. We calculated the PESQ and STOI on the validation dataset every ten epochs during the training of S1 and S2. The growth curves are shown in Figure~\ref{fig:validation_metrics}. The horizontal axis indicates the training epoch, and the vertical axis is the average metric. We can clearly see that from about the 200th epoch, the PESQ and STOI of S2 exceed those of S1. In the subsequent training, the performance gap between the two models keeps increasing.
\begin{table} \centering \scriptsize \caption{An Illustration of average performance on noisy speech (Noisy), the SNR-based teacher models (T1, T2, T3, and T4), the student without teacher (S1), and the student with SNR-based teachers (S2).} \renewcommand\arraystretch{1} \setlength{\tabcolsep}{1.2mm}{ \begin{tabular}{ccccccccccc} \toprule & & -20dB & -15dB & -10dB & -5dB & 0dB & 5dB & 10dB & 15dB & 20dB \\ \midrule \multirow{7}[2]{*}{\begin{sideways}PESQ\end{sideways}} & Noisy & 0.761 & 0.660 & 0.892 & 1.155 & 1.531 & 1.936 & 2.314 & 2.666 & 3.039 \\ \cmidrule(lr){2-11} & T1 & 1.132 & \textbf{1.400} & \textbf{1.720} & - & - & - & - & - & - \\ & T2 & - & - & 1.719 & \textbf{2.121} & \textbf{2.461} & - & - & - & - \\ & T3 & - & - & - & - & 2.387 & \textbf{2.761} & \textbf{3.109} & - & - \\ & T4 & - & - & - & - & - & - & 2.891 & \textbf{3.228} & \textbf{3.561} \\ \cmidrule(lr){2-11} & S1 & 1.018 & 1.177 & 1.581 & 1.957 & 2.324 & 2.642 & 2.885 & 3.016 & 3.257 \\ & S2 & \textbf{1.172} & 1.360 & 1.700 & 2.057 & 2.417 & 2.686 & 2.921 & 3.098 & 3.332 \\ \midrule \multirow{7}[2]{*}{\begin{sideways}STOI\end{sideways}} & Noisy & 0.367 & 0.428 & 0.497 & 0.582 & 0.674 & 0.762 & 0.831 & 0.879 & 0.919 \\ \cmidrule(lr){2-11} & T1 & \textbf{0.414} & \textbf{0.498} & 0.626 & - & - & - & - & - & - \\ & T2 & - & - & \textbf{0.639} & \textbf{0.755} & \textbf{0.833} & - & - & - & - \\ & T3 & - & - & - & - & 0.827 & \textbf{0.888} & 0.918 & - & - \\ & T4 & - & - & - & - & - & - & \textbf{0.964} & \textbf{0.982} & \textbf{0.987} \\ \cmidrule(lr){2-11} & S1 & 0.312 & 0.441 & 0.586 & 0.693 & 0.810 & 0.865 & 0.901 & 0.915 & 0.935 \\ \cmidrule(lr){2-11} & S2 & 0.404 & 0.480 & 0.619 & 0.733 & 0.826 & 0.876 & 0.903 & 0.925 & 0.941 \\ \bottomrule \end{tabular}% } \label{tab:compare_student}% \end{table}% \begin{figure} % \begin{minipage}[h]{0.48\linewidth} \centering \centerline{\includegraphics[width=1\linewidth]{./figures/metric_PESQ.png}} \end{minipage} \hfill 
\begin{minipage}[h]{0.49\linewidth} \centering \centerline{\includegraphics[width=1\linewidth]{./figures/metric_STOI.png}} \end{minipage} % \caption{The effectiveness of the SNR-based teachers-student technique.} \label{fig:validation_metrics} % \end{figure} \subsection{Comparison of our method with the baselines} Table~\ref{tab:compare_with} reports the performance of the proposed method and the baseline methods on the test dataset (-20dB to 20dB). We can easily notice that the proposed method (S2) achieves the best performance in terms of both PESQ and STOI. Compared to CRN, the strongest baseline, our method still improves PESQ by 0.143 and STOI by 0.021. In scenarios containing both high-SNR and low-SNR noisy speech, our method, which combines the SNR-based teachers-student technique and a time-domain U-Net, is clearly more advantageous. \begin{table}[h] \centering \footnotesize \caption{Comparison of the proposed method and the baselines.} \label{tab:compare_with} \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{5mm}{ \begin{tabular}{lrr} \toprule Method & PESQ & STOI \\ \midrule Noisy & 1.661 & 0.660 \\ Wavenet-denoising & 1.948 & 0.711 \\ PSM-BiLSTM & 2.020 & 0.719 \\ CRN & 2.161 & 0.723 \\ S2 & \textbf{2.304} & \textbf{0.744} \\ \bottomrule \end{tabular} } \end{table} \section{Conclusion} Speech enhancement for both low SNR and high SNR is a very challenging task. This paper proposes a method that integrates an SNR-based teachers-student technique and a time-domain U-Net to deal with this problem. The student model is trained with the supervision of multiple teacher models. Each teacher model is well trained in an SNR-based way, meaning each is responsible only for speech enhancement within a small SNR range. The experimental results show that our method achieves state-of-the-art performance and suggest the effectiveness of SNR-based knowledge distillation in speech enhancement.
The results also show that our method is robust to ``Seen'' and ``Unseen'' noises and SNRs. To the best of our knowledge, this is the first time that knowledge distillation has been investigated in speech enhancement. \section{ACKNOWLEDGMENTS} This work was funded by the National Natural Science Foundation of China (Grant No. 61762069, 61773224), the Natural Science Foundation of Inner Mongolia Autonomous Region (Grant No. 2017BS0601), and the Inner Mongolia University Research and Innovation Project (Grant No. 10000-15010109). \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:introduction} \input{introduction} \section{Notation} \input{notation} \section{DCTN's description} \subsection{Input preprocessing} \label{sec:input_preprocessing
} \input{input_preprocessing} \subsection{Entangled plaquette states} \label{sec:entangled_plaquette_states} \input{entangled_plaquette_states} \subsection{Description of the whole model} \label{sec:description_of_the_whole_model} \input{description_of_the_whole_model} \subsection{Optimization} \input{optimization} \section{Experiments} \label{sec:experiments} \input{experiments} \section{Conclusion} \input{conclusion} \section*{Acknowledgements} The work of Anh-Huy Phan was supported by the Ministry of Education and Science of the Russian Federation under Grant 14.756.31.0001. \bibliographystyle{plainnat} \subsection{MNIST} We tested DCTN with one EPS, \(\nu=0.5\) in \cref{eq:phi_definition}, \(K_1=4, Q_1=4\), \(\text{lr} = 3 \cdot 10^{-3}\), \(\lambda = 0\) in \cref{eq:opt_problem_whole_tn_l2_reg}, batch size 128 on the MNIST dataset with a 50000/10000/10000 training/validation/test split. We got 98.75\% test accuracy. MNIST is considered relatively easy and doesn't represent modern computer vision tasks \citep{fashionmnist_readme}. \subsection{FashionMNIST} FashionMNIST \citep{xiao2017fashion} is a dataset fully compatible with MNIST: it contains 70000 grayscale \(28 \times 28\) images. Each image belongs to one of 10 classes of clothes. We split the 70000 images into a 50000/10000/10000 training/validation/test split and experimented with models with one, two, and three EPSes. The more EPSes we used, the more overfitting DCTN experienced and the worse validation accuracy became, so we didn't experiment with more than three EPSes. For one, two, and three EPSes, we chose hyperparameters by a combination of grid search and manual tuning and present the best result (chosen by validation accuracy before being evaluated on the test dataset) in \Cref{table:fashionmnist_experiments}. In \Cref{sec:how_hyperparameters_affect_optimization_and_generalization}, we describe more experiments and discuss how various hyperparameters affect optimization and generalization of DCTN.
\begin{table}[h] \caption{Comparison of our best models (top 3 rows, with 1, 2, and 3 EPSes, respectively) against the best existing models (by a combination of accuracy and parameter count) on the FashionMNIST dataset. DCTN with one EPS wins against existing models with similar parameter count. Adding more EPSes makes test accuracy worse due to overfitting. All 3 of our models eventually reach nearly 100\% training accuracy if not stopped early. We trained all DCTNs with batch size 128.} \label{table:fashionmnist_experiments} \resizebox{\textwidth}{!}{% \begin{tabular}{p{13cm}lr} Model & Accuracy & Parameter count \\ \hline One EPS, \(K_1{=}4\), \(Q_1{=}4\), \(\nu{=}0.5\), \(E_1 {\sim} \mathcal{N}(\mu{=}0, \sigma{=}0.25)\), \(A,b {\sim} U[-(H_1 W_1 Q_1)^{-0.5}, (H_1 W_1 Q_1)^{-0.5}]\), \(\text{lr}{=}3\cdot10^{-3}\), \(\lambda{=}0\), \cref{eq:opt_problem_whole_tn_l2_reg} & 89.38\% & \(2.9 \cdot 10^5\) \\ \hline Two EPSes, \(K_1{=}4, Q_1{=}4, K_2{=}3, Q_2{=}6\), \(\nu {\approx} 1.46\), EPSes initialized from \(\mathcal{N}(\mu{=}0, \sigma{=}Q_\text{in}^{-0.5 K^2})\), \(A {\sim} \mathcal{N}(\mu{=}0, \sigma{=}0.25(H_2 W_2 Q_2)^{-0.5})\), \(b {\sim} U[-(H_2 W_2 Q_2)^{-0.5}, (H_2 W_2 Q_2)^{-0.5}]\), \(\text{lr}{=}1.11\cdot10^{-4}\), \(\lambda{=}10^{-2}\), \cref{eq:opt_problem_epswise_l2_reg} & 87.65\% & \(1.8 \cdot 10^6\) \\ \hline Three EPSes, \(K_1{=}4, Q_1{=}4, K_2{=}3, Q_2{=}12, K_3{=}2, Q_3{=}24\), \(\nu {\approx} 1.46\), EUSIR initialization of EPSes (see \Cref{sec:initialization_and_scaling_of_input}), \(A {\sim} \mathcal{N}(\mu{=}0, \sigma{=}0.25 (H_3 W_3 Q_3)^{-0.5})\), \(b {\sim} U[-(H_3 W_3 Q_3)^{-0.5}, (H_3 W_3 Q_3)^{-0.5}]\), \(\text{lr}{=}10^{-7}\), \(\lambda {=} 10^{-1}\), \cref{eq:opt_problem_whole_tn_l2_reg} & 75.94\% & \(4\cdot10^6\) \\ \hline GoogleNet + Linear SVC & 93.7\% & \(6.8\cdot 10^6\) \\ \hline VGG16 & 93.5\% & \(2.6 \cdot 10^7\) \\ \hline CNN: 5x5 conv -\textgreater 5x5 conv -\textgreater linear -\textgreater linear & 91.6\% & \(3.3 \cdot 10^6\) \\ \hline
AlexNet + Linear SVC & 89.9\% & \(6.2 \cdot 10^7\) \\ \hline Matrix tensor train in snake pattern (Glasser 2019) & 89.2\% & ? \\ \hline Multilayer perceptron & 88.33\% & \(2.3 \cdot 10^5\) \end{tabular}} \end{table} \subsection{CIFAR10} CIFAR10~\citep{cifar10} is a dataset of colored 32 by 32 images in 10 classes. We used a 45000/5000/10000 train/validation/test split. We evaluated DCTN on the colored version using the YCbCr color scheme and on a grayscale version which mimics MNIST and FashionMNIST. The results are in \Cref{table:cifar10_experiments}. DCTN overfits and performs poorly -- barely better than a linear classifier. Our hypotheses for why DCTN performs poorly on CIFAR10, in contrast to MNIST and FashionMNIST, are: (a)~CIFAR10 images have far fewer zero values; (b)~classifying CIFAR10 is a much more difficult problem; (c)~making CIFAR10 grayscale loses too much useful information, while the non-grayscale version has too many features, which leads to overfitting. In future work, we are going to check these hypotheses with intensive numerical experiments. \begin{table}[h] \caption{DCTN results on CIFAR10. For each number of color channels and each number of EPSes, we chose the kernel sizes \(K_n\), the quantum dimension sizes \(Q_n\), and the learning rate using grid search (excluding models whose training didn't fit in 8 Gb of the videocard's RAM) and show the best model in the table. All of these models can reach almost 100\% training accuracy if not stopped early.
The two bottom rows show the accuracy of a linear classifier and of one of the state-of-the-art CNNs for comparison.} \label{table:cifar10_experiments} \centering \begin{tabular}{ccc} Channels & Model & Accuracy\\ \hline Grayscale & One EPS, \(K_1{=}4, Q_1{=}4\) & 49.5\%\\ \hline Grayscale & Two EPSes, \(K_1{=}4, Q_1{=}4, K_2{=}3, Q_2{=}6\) & 54.8\%\\ \hline YCbCr & One EPS, \(K_1{=}2, Q_1{=}24\) & 51\%\\ \hline YCbCr & Two EPSes, \(K_1{=}2, Q_1{=}23, K_2{=}2, Q_2{=}24\) & 38.6\%\\ \hline RGB & Linear classifier & 41.73\%\\ \hline RGB & EfficientNet-B7~\citep{efficientnet} & 98.9\%\\ \end{tabular} \end{table} \subsection{Properties of successful neural networks} \label{sec:properties} Nowadays, neural networks (\emph{NNs}) achieve outstanding results in many machine learning tasks~\citep{paperswithcode_sota}, including computer vision, language modeling, game playing (e.g. Checkers, Go), and automated theorem proving~\citep{gpt_f}. There are three properties many (but not all) NNs enjoy, which are thought to be responsible for their success. For example, \citep{cohen2016expressive} discusses the importance of these properties for deep CNNs. \begin{itemize} \item \emph{Parameter sharing}, i.e., applying the same transformation multiple times in parallel or sequentially. A layer of a convolutional neural network (\emph{CNN}) applies the same function, defined by a convolution kernel, to all sliding windows of an input. A recurrent neural network (RNN) applies the same function to the input token and the hidden state at each time step. A self-attention layer in a transformer applies the same query-producing, the same key-producing, and the same value-producing function to each token~\citep{illustrated_transformer}. \item \emph{Locality}. Interactions between nearby parts of an input are modeled more accurately, while interactions between far away parts are modeled less accurately or not modeled at all. This property makes sense only for some types of input.
For images, this is similar to receptive fields in a human's visual cortex. For natural language, nearby tokens are usually more related than tokens far away from each other. CNNs and RNNs enjoy this property. \item \emph{Depth}. Most successful NNs, including CNNs and transformers, are deep, which allows them to learn complicated transformations. \end{itemize} \subsection{The same properties in tensor networks} Tensor networks (\emph{TNs}) are linear algebraic representations of quantum many-body states based on their entanglement structure. They've found applications in signal processing. People are exploring their applications to machine learning, e.g. tensor regression -- a class of machine learning models based on contracting (connecting the edges) an input tensor with a parametrized TN. Since NNs with the three properties mentioned in \Cref{sec:properties} are so successful, it would make sense to try to devise a tensor regression model with the same properties. That is what we do in our paper. As far as we know, some existing tensor networks have one or two out of the three properties, but none have all three. \begin{itemize} \item MERA (see Ch. 7 of \citep{bridgeman2017handwaving_and_interpretive_dance}) is a tree-like tensor network used in quantum many-body physics. It's deep and has locality. \item Deep Boltzmann machine can be viewed as a tensor network. (See Sec. 4.2 of \citep{cichocki_part_2} or \citep{glasser2018probabilistic} for discussion of how restricted Boltzmann machine is actually a tensor network. It's not difficult to see a DBM is a tensor network as well). For supervised learning, it can be viewed as tensor regression with depth, but without locality or weight sharing. \item \citep{glasser2018probabilistic} introduced Entangled plaquette states (\emph{EPS}) with weight sharing for tensor regression. They combined one EPS with a linear classifier or a matrix tensor train. Such a model has locality and parameter sharing but isn't deep. 
\item \citep{cohen2016expressive} introduced a tensor regression model called Deep convolutional arithmetic circuit. However, they used it only theoretically to analyze the expressivity of deep CNNs and compare it with the expressivity of tensor regression with tensor in CP format (canonical polyadic / CANDECOMP PARAFAC). Their main result is a theorem about the typical canonical rank of a tensor network used in Deep convolutional arithmetic circuit. The tensor network is very similar to the model we propose, with a few small modifications. We conjecture that the proof of their result about the typical canonical rank being exponentially large can be modified to apply to our tensor network as well. \item \citep{miller_2020_umps_psm} did language modeling by contracting an input sequence with a matrix tensor train with all cores equal to each other. It has locality and parameter sharing. \item \citep{liu2019ml_by_unitary_tn_of_hierarchical_tree_structure} used a tree-like tensor regression model with all cores being unitary. Their model has locality and depth, but no weight sharing. \item \citep{stoudenmire1605supervised} and \citep{novikov2016exponential} performed tensor regression on MNIST images and tabular datasets, respectively. They encoded input data as rank-one tensors like we do in \Cref{sec:input_preprocessing} and contracted it with a matrix tensor train to get predictions. Such a model has locality if you order the matrix tensor train cores in the right way. \end{itemize} \subsection{Contributions} The main contributions of our article are: \begin{itemize} \item We devise a novel tensor regression model called Deep convolutional tensor network (\emph{DCTN}). It has all three properties listed in \Cref{sec:properties}. It is based on the (functional) composition of TNs called Entangled plaquette state (\emph{EPS}). DCTN is similar to a deep CNN. We apply it to image classification, because that's the most straightforward application of deep CNNs. 
(\Cref{sec:description_of_the_whole_model}) \item We show how EPS can be implemented as a backpropagatable function/layer that can be used in neural networks or other backpropagation-based models (\Cref{sec:entangled_plaquette_states}). \item Using common techniques for training deep neural networks, we train and evaluate DCTN on the MNIST, FashionMNIST, and CIFAR10 datasets. A shallow model based on one EPS works well on MNIST and FashionMNIST and has a small parameter count. Unfortunately, increasing the depth of DCTN by adding more EPSes hurts its accuracy by increasing overfitting. Also, our model works very badly on CIFAR10 regardless of depth. We discuss hypotheses for why this is the case. (\Cref{sec:experiments}). \item We show how various hyperparameters affect the model's optimization and generalization (\Cref{sec:how_hyperparameters_affect_optimization_and_generalization}). \end{itemize}
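To make the second contribution concrete, the forward pass of a single weight-shared EPS layer can be sketched as below. This is our own minimal numpy illustration under simplifying assumptions (non-overlapping \(2 \times 2\) plaquettes with stride 2, and a plaquette output given by contracting a shared core of shape \((Q_\text{out}, Q_\text{in}, Q_\text{in}, Q_\text{in}, Q_\text{in})\) with the four feature vectors in the window); it is not the actual DCTN implementation:

```python
import numpy as np

def eps_layer(features, core):
    """Forward pass of a weight-shared EPS layer with 2 x 2 plaquettes.

    features: (H, W, Q_in) array, one Q_in-dimensional feature vector per pixel.
    core:     (Q_out, Q_in, Q_in, Q_in, Q_in) plaquette tensor, shared across
              all plaquettes (this is the parameter-sharing property).
    Returns a (H // 2, W // 2, Q_out) array; stacking such layers yields a
    deep, local, weight-shared tensor network.
    """
    H, W, _ = features.shape
    out = np.empty((H // 2, W // 2, core.shape[0]))
    for i in range(H // 2):
        for j in range(W // 2):
            # Contract the shared core with the four feature vectors
            # of the (i, j)-th non-overlapping 2 x 2 plaquette.
            out[i, j] = np.einsum(
                "oabcd,a,b,c,d->o",
                core,
                features[2 * i, 2 * j],
                features[2 * i, 2 * j + 1],
                features[2 * i + 1, 2 * j],
                features[2 * i + 1, 2 * j + 1],
            )
    return out
```

Because the layer is just a multilinear contraction, it is differentiable in `core`, so it can be trained by backpropagation like a convolutional layer.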
\section{Introduction} Blockchain technology is maturing at a fast pace. The development of real-world applications shows real interest from both industry and academia \cite{zohar2015,UlHassan2019}.
For instance, applications have been developed in the areas of public administration \cite{Belchior2019, Belchior2019_Audits}, access control \cite{rouhani2019,ssibac}, and others \cite{casino2019}. Additionally, blockchain is progressing towards the performance of centralized systems: for example, the Hyperledger Fabric blockchain is predicted to achieve 50,000 \emph{transactions per second} \cite{Gorenflo2019,Gorenflo2020}. Figure \ref{fig:stats} depicts the number of search results per year for ``blockchain interoperability'' that Google Scholar returned. In 2015, only two documents were related to blockchain interoperability. In 2016, 2017, 2018, 2019, and 2020, the results were 8, 15, 64, 130, and 207, respectively, showing a steep increase in interest in this research area. \begin{wrapfigure}{r}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{figures/gs_interest.pdf} \caption{Research trends on blockchain interoperability} \label{fig:stats} \end{wrapfigure} Serving multiple use cases and stakeholders requires various blockchain features and capabilities \cite{wef2020}. The need for adaptability is a motivating factor for creating different blockchains, leading to a heterogeneous ecosystem \cite{Xu2017,hardjono2021,pilai2020}. Choosing new blockchains allows researchers and developers to implement new use case scenarios and keep up with recent endeavors. However, each blockchain has its own security risks, as the technology is still maturing, the user base is limited (e.g., in comparison to the web or databases), and there are uncovered bugs and security flaws \cite{sok_attacks}. Therefore, developers and researchers have to choose between novelty and stability, leading to a vast diversity of choices \cite{Anceaume2018}. This diversity leads to \emph{fragmentation}: there are many \emph{immature} blockchain solutions (e.g., without extensive testing).
Until recently, blockchains did not consider the need for interoperability, as each one focused on resolving specific challenges, leading to \emph{data and value silos} \cite{jin2018, Abebe2019,tasca2017}. Moreover, what if the blockchain in which a particular service is running becomes obsolete, vulnerable, or is shut down? If user requirements or circumstances change over time, a different blockchain might be more appropriate for a specific use case \cite{agileLikoebe}. What if the service is so crucial that it requires seamless dependability? Furthermore, if we want to reproduce our use case on another blockchain, how can we increase \emph{portability}? In 1996, Wegner stated that ``interoperability is the ability of two or more software components to cooperate despite differences in language, interface, and execution platform'' \cite{wegner96}. In that context, Wegner established a bridge between the concept of interoperability and existing standards. Just as authors then were influenced by the standards existing at the time, authors nowadays are influenced by the Internet's architecture and concepts when it comes to blockchain interoperability \cite{Hardjono2019,vo2018}. Thus, reflecting on the Internet's architecture seems like a good starting point to understand how blockchains can interoperate. It is therefore important to solve the \emph{blockchain interoperability} challenge, i.e., to provide interoperability between blockchains in order to explore synergies between different solutions, scale the existing ones, and create new use cases (see Section \ref{subsec:bid}). For example, a user should be able to transfer assets from one blockchain to another or build \emph{cross-blockchain decentralized applications}. As information systems evolve, so do the meaning and scope of interoperability.
According to the National Interoperability Framework Observatory (NIFO), endorsed by the European Commission, there are several interoperability layers \cite{NIFO_inter}: \emph{technical interoperability}, \emph{semantic interoperability}, \emph{organizational interoperability}, \emph{legal interoperability}, \emph{integrated public service governance}, and \emph{interoperability governance}. For instance, technical interoperability regards the technical mechanisms that enable integration among blockchains, while semantic interoperability concerns whether the application-specific semantics can be preserved across blockchains. Although interoperability has an extensive scope, we mainly focus on \emph{technical interoperability} and \emph{semantic interoperability}, where most blockchain interoperability work is concentrated. We leave the study of other interoperability layers for future work. Interoperability does not only provide flexibility and application portability. It also has the potential to solve some of the biggest blockchain research challenges. In particular, interoperability promotes blockchain \emph{scalability}, as it provides a way to offload transactions to other blockchains, e.g., via sharding \cite{scotm,bcsharding}; it can promote privacy (by allowing the end-user to use different blockchains for data objects with different privacy requirements); and it creates new business opportunities. Given the complexity of this research area, we attempt to answer three research questions: \textbf{RQ1}: What is the current landscape concerning blockchain interoperability, both in industry and academia? \textbf{RQ2}: Are technological requirements for blockchain interoperability currently satisfied? \textbf{RQ3}: Are there real use cases requiring blockchain interoperability?
\subsection{Contributions} As a systematization of knowledge on blockchain interoperability, this paper makes four contributions: \begin{itemize} \item Introduce the blockchain interoperability research area, presenting the necessary background and highlighting definitions tailored both for industry and academia. We define blockchain interoperability and discuss different blockchain interoperability architectures and standards. \item Propose the Blockchain Interoperability Framework (BIF), a framework defining criteria to assess blockchain interoperability solutions. \item Present a \emph{systematic literature review}, where we identify and discuss blockchain interoperability solutions, according to BIF, in three categories: \emph{Public Connectors}, \emph{Blockchain of Blockchains}, and \emph{Hybrid Connectors}. In particular, our analysis is based on several sources (e.g., peer-reviewed papers, whitepapers, blog posts, technical reports), enabling an in-depth understanding of each solution's current state and its \emph{roadmap}, i.e., its creator's plans. To achieve this, \emph{we systematically contacted the authors of grey literature papers and industrial solutions}: this is our innovative attempt to provide the reader with high-quality information in this rapidly emerging research area. This method allows us to obtain up-to-date, reliable information that is often cumbersome to obtain. \item Identify and propose use cases that benefit from a multiple blockchain approach, pinpoint challenges and obstacles to the development of blockchain interoperability solutions and standards, and propose future research directions, paving the way for systematic research in this area. \end{itemize} \subsection{Organization} Section \ref{sec:back} provides background on blockchain consensus algorithms, previous results on blockchain interoperability, and blockchain interoperability definitions and architecture.
Next, Section \ref{sec:related_literature_reviews} presents and discusses related literature reviews, while Section \ref{sec:bif} introduces the Blockchain Interoperability Framework. Then, a systematic review and analysis of blockchain interoperability solutions is conducted, organized across three categories, in Section \ref{sec:solutions}: Public Connectors (Section \ref{sec_crypto}), Blockchain of Blockchains (Section \ref{sec:be}), and Hybrid Connectors (Section \ref{subsec:blockchain_connectors}). For each category, we provide a detailed analysis and discussion. To provide a holistic view of the blockchain interoperability landscape, we promote a general discussion in Section \ref{sec:discussion_sec}. This discussion compares solutions across categories (Section \ref{sec:discussion}), presents standardization efforts (Section \ref{sec:technologies_standards}), informs readers regarding use case scenarios with multiple blockchains (Section \ref{sec:use_cases}), answers the research questions (Section \ref{sec:res_q_a}), and indicates challenges related to interoperability (Section \ref{sec:issues}). We present research directions (Section \ref{sec:research_directions}), and, finally, we conclude the paper (Section \ref{sec:concl}). Six appendices complement this survey. Appendix \ref{a:method} presents the methodology employed. Appendix \ref{a:architecture} presents an architecture for blockchain interoperability, reviewing the various efforts on that topic. Appendices \ref{a:crypto}, \ref{a:be}, and \ref{a:connectors} present descriptions of the surveyed public connector, blockchain of blockchains, and hybrid connector approaches, respectively. Finally, Appendix \ref{a:use_cases} complements the use case section by presenting more cross-blockchain use cases. \section{Background} \label{sec:back} In this section, we provide the necessary background for the understanding of this survey.
\subsection{A Primer on Blockchain Technology} \label{sec:primer} The term \emph{blockchain} has at least two different meanings: a type of system and a type of data structure. In this paper, we use the term blockchain to denominate a class of distributed systems. A blockchain maintains a shared state, specifically a replicated data structure that we denominate \emph{distributed ledger}. This ledger is maintained by a set of machines with computational and storage resources, called nodes (or peers or participants). Nodes are not trusted individually to maintain the distributed ledger; they are trusted as a group due to their number and diversity \cite{vukolic_bcpitw}. A blockchain can also be considered a \emph{deterministic state machine} that provides a certain service, given existing incentives that the network can reward. The first blockchain was part of the Bitcoin system; the service it provides is the execution of transactions of a cryptocurrency (a digital currency) also designated Bitcoin \cite{nakamoto2008}. Most blockchains are programmable, i.e., their state machine is extensible with user programs. These programs are often designated \emph{smart contracts} \cite{szabo1997paper,ethereum-white-paper}, and their execution is triggered by calls also designated \emph{transactions}. Smart contracts are executed in a virtual machine, e.g., in the Ethereum Virtual Machine (EVM) in Ethereum and in other blockchains that adopted the EVM for compatibility (which we designate \emph{EVM-based blockchains}). Smart contracts are often used to implement \emph{tokens}, i.e., blockchain-based abstractions that can be owned and represent currency, resources, assets, access, equity, identity, collectibles, etc.~\cite{antonopoulos2018mastering}. There are several standard token formats, e.g., ERC-20 and ERC-721, which define fungible and non-fungible assets, respectively.
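To make the token abstraction concrete, the following is a minimal sketch of a fungible-token ledger in Python, loosely inspired by the balance-tracking core of the ERC-20 interface mentioned above. The names (\texttt{TokenLedger}, \texttt{mint}, \texttt{transfer}) are illustrative assumptions, not part of any standard implementation.

```python
# Minimal sketch of a fungible-token ledger, loosely inspired by the
# ERC-20 interface mentioned above. All names here are illustrative
# assumptions, not part of any standard implementation.

class TokenLedger:
    """Tracks fungible-token balances per account address."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}  # address -> token amount

    def mint(self, address: str, amount: int) -> None:
        """Create new tokens and credit them to an address."""
        self.balances[address] = self.balances.get(address, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        """Move tokens between accounts; fail if funds are insufficient."""
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True
```

A non-fungible (ERC-721-style) ledger would instead map unique token identifiers to owner addresses.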
A fungible asset is interchangeable with another asset of the same type. Conversely, a non-fungible asset is an asset that is unique and has specific properties. In many blockchains, transactions are aggregated in \emph{blocks}, linked by the previous block's cryptographic hash. Hence, those data structures are also called blockchains -- often viewed as deterministic state machines. Blockchain systems ought to be resilient to faults (e.g., \emph{crash fault-tolerant} or \emph{Byzantine fault-tolerant}), as there may be crashes or malicious nodes on the network \cite{correia2019byzantine}. They run a consensus algorithm to reach agreement on a global ledger state in the presence of Byzantine faults. Consensus algorithms are important because they define the behavior of blockchain nodes and their interaction \cite{correia2019byzantine,zheng2017}, as well as the security assumptions of each blockchain. They, therefore, affect how blockchain peers communicate and operate with each other: in Bitcoin's Proof-of-Work (PoW), peers have to solve a cryptographic challenge to validate transactions, competing with each other. Another blockchain, Tendermint, uses a Byzantine fault-tolerant state machine replication (BFT) algorithm \cite{Kwon2016}, tolerating fewer than one-third of faulty participants. In Hyperledger Fabric, a widely-used private blockchain platform, the consensus algorithm allows higher transaction throughput than PoW by letting a subset of nodes execute and endorse transactions (the so-called endorser peers) and by typically using a weaker consensus (only crash fault-tolerant). The variety of blockchain infrastructures makes it challenging to categorize blockchains and their interoperability solutions, as there are no \emph{de facto} blockchain interoperability or blockchain architecture standards. Apart from differences in consensus, blockchains can be deemed \emph{public} (also called permissionless) or \emph{private} (also called permissioned).
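The block linkage described above (each block carrying the hash of its predecessor) can be sketched in a few lines of Python; the field names (\texttt{prev\_hash}, \texttt{transactions}) are illustrative assumptions.

```python
# Sketch of the hash-linked block structure described above: tampering
# with any block invalidates the links of all subsequent blocks.
# Field names (prev_hash, transactions) are illustrative assumptions.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block that links to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    """Check that every block references the hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Modifying any historical block breaks all subsequent links, which is what makes the ledger tamper-evident.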
Permissionless blockchains do not require authentication for participants to access the ledger. \emph{Bitcoin} \cite{nakamoto2008} and \emph{Ethereum} \cite{ethereum-white-paper,ethereum_yellow_paper} are examples of such blockchains. Permissioned blockchains are blockchains in which users are authenticated and can be held accountable according to a governance model suitable for enterprise and governmental needs. Hyperledger Fabric \cite{fabric}, Corda \cite{r3}, Quorum \cite{quorum}, Tendermint \cite{Kwon2016}, and Multichain \cite{multichain} are examples of permissioned blockchains. \begin{wrapfigure}{r}{0.6\textwidth} \centering \includegraphics[scale=0.32]{figures/Fig1.pdf} \caption{Representation of two blockchains, Hyperledger Fabric \cite{fabric} and Bitcoin \cite{nakamoto2008}.} \label{fig:blockchains} \end{wrapfigure} Figure \ref{fig:blockchains} depicts two blockchains: Hyperledger Fabric, a permissioned blockchain; and Bitcoin, a permissionless blockchain. The supporting layers (e.g., networking, storage, encryption) \cite{Kan2018} provide a basis for the consensus engine, which orders transactions and appends them to the chain of blocks. In Hyperledger Fabric, the consensus is modular, based on endorsement policies. In Fabric, a client (C) sends a transaction proposal to the peer nodes (P) and obtains a signed transaction, called an endorsement (steps 1 and 2). An orderer validates the endorsements and builds a block with valid transactions, appending it to the ledger (steps 3 and 4). In Bitcoin, the consensus is based on the notion of Proof-of-Work (PoW), a cryptographic puzzle that mining nodes need to solve in order to build a valid block. This corresponds roughly to Fabric's steps 1--3. After a node finds a solution to PoW, it can then propose a block of transactions to be appended to the ledger (step 4). Blockchain trust is based on the incentive models that guide the behavior of the nodes.
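The PoW puzzle described above can be illustrated with a toy miner: search for a nonce whose hash meets a difficulty target, while verification takes a single hash. This is a didactic sketch only; Bitcoin actually applies double SHA-256 to an 80-byte block header and compares it against a 256-bit target.

```python
# Toy Proof-of-Work sketch: mining is an expensive search for a nonce,
# while verifying a solution costs a single hash computation.
# Didactic only; real Bitcoin mining differs in header format and target.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_pow(block_data: str, nonce: int, difficulty: int) -> bool:
    """Cheap verification: recompute one hash and check the target."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Each extra hex zero multiplies the expected mining work by 16, while verification cost stays constant; this asymmetry is what lets every node cheaply check a proposed block (step 4 above).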
For instance, in Bitcoin, nodes have the incentive to produce blocks of transactions and support the network because they are rewarded with bitcoins. Conversely, nodes do not have the incentive to disrespect the protocol, as attacks are expensive and nodes can get punished \cite{Conti2018}. In Hyperledger Fabric, where nodes are identified, they have the business incentive to follow the protocol because parties cooperate towards a common goal, and misbehavior can be punished according to the law or applicable governance model. Decentralization, different goals, and incentives support trust in the blockchain -- parties can share the ledger without relying on a trusted, centralized party. The ability to distribute trust on a global state fostered the appearance of \emph{decentralized applications} (\emph{dApps}) \cite{antonopoulos2018mastering}. A dApp is a computer program running on a decentralized peer-to-peer network. For example, Steemit\footnote{https://steemit.com/} is a social blogging dApp that rewards content-creators with cryptocurrency. Thus, dApps are based on smart contracts running on a blockchain, but they also have other components that should equally be decentralized. \subsection{Cross-Blockchain Communication} \label{sec:ccc} Cross-blockchain communication involves two blockchains: a \emph{source blockchain} and a \emph{target blockchain}. The source blockchain is the one on which a transaction is initiated, to be executed on the target blockchain. While general-purpose interoperability comes down to a blockchain exposing its internal state to others, cross-chain asset transfers rely on an atomic three-phase procedure: 1) locking (or extinguishing) of an asset on a source blockchain; 2) blockchain transfer commitment; and 3) creation of a representation of the asset on a target blockchain \cite{odap_draft_01,hermes-middleware-2021,scotm}. This procedure, later explained in detail, relies on a \emph{cross-chain communication protocol} (CCCP).
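The three-phase procedure above (lock, commit, mint a representation) can be sketched as follows. This is a heavily simplified, hypothetical model: the \texttt{Chain} class and the hash-based \texttt{proof} stand in for what, in a real protocol, would be a ledger plus a light-client or notary-signed proof of the lock.

```python
# Hedged sketch of the three-phase cross-chain asset transfer described
# above: (1) lock on the source chain, (2) carry a transfer commitment
# across, (3) mint a representation on the target chain. The Chain
# class and the hash "proof" are illustrative assumptions; real
# protocols use light-client proofs or notary signatures instead.
import hashlib

class Chain:
    def __init__(self) -> None:
        self.assets: dict[str, dict] = {}  # asset_id -> {"owner", "locked"}

    def lock(self, asset_id: str) -> str:
        """Phase 1: lock the asset and emit a commitment to the transfer."""
        self.assets[asset_id]["locked"] = True
        return hashlib.sha256(f"lock:{asset_id}".encode()).hexdigest()

    def mint_representation(self, asset_id: str, owner: str, proof: str) -> bool:
        """Phase 3: mint a representation only if the commitment checks out."""
        expected = hashlib.sha256(f"lock:{asset_id}".encode()).hexdigest()
        if proof != expected:
            return False
        self.assets[asset_id] = {"owner": owner, "locked": False}
        return True

def cross_chain_transfer(source: Chain, target: Chain,
                         asset_id: str, new_owner: str) -> bool:
    """Phase 2: a (trusted) relay carries the commitment between chains."""
    proof = source.lock(asset_id)
    return target.mint_representation(asset_id, new_owner, proof)
```

The asset stays locked on the source chain while its representation circulates on the target chain, preserving the total supply across both ledgers.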
A CCCP defines the process by which a pair of blockchains interact to synchronize cross-chain transactions correctly. Hence, a CCCP allows \emph{homogeneous} blockchains to communicate. For instance, sidechains typically use a CCCP (e.g., Zendoo allows communication between Bitcoin-like blockchain systems \cite{zendo}). Conversely, a \emph{cross-blockchain communication protocol} (CBCP) defines the process by which a pair of blockchains interact to synchronize cross-blockchain transactions correctly. CBCPs allow \emph{heterogeneous} blockchains to communicate (e.g., the Interledger Protocol allows any blockchains that implement the protocol to exchange ``money packets'' \cite{ILPv4}). The differentiation between CCCPs and CBCPs is important because CCCPs can typically leverage the interoperating blockchains' constructs and functionality (e.g., utilize smart contracts to implement a relay \cite{peacerelay}), whereas CBCPs normally require blockchains to be adapted. CBCPs may, however, leverage specific functionalities of both blockchains \cite{btcrelay}. Cross-blockchain (or cross-chain) communication is a requirement for blockchain interoperability. This section provides a few theoretical results regarding cross-blockchain communication, and thus also blockchain interoperability. Zamyatin et al.~\cite{sok_cdl} prove that ``there exists no asynchronous CCC [cross-chain communication] protocol tolerant against misbehaving nodes''. The authors use a reduction to the fair exchange problem \cite{fair_exchange} to prove that correct cross-chain communication is as hard as fair exchange. As a consequence of the presented theorem, the authors state that ``there exists no CCC protocol tolerant against misbehaving nodes without a trusted third party''. A trusted third party can be centralized or decentralized. Centralized trusted parties are, for example, trusted validators \cite{hyperledger_cactus}.
A decentralized trusted party can be another blockchain, whose participants agree on the global ledger state via a consensus algorithm. However, the trusted party has to ensure that most participants are honest, thus guaranteeing the correctness of the process. Cross-chain protocols, therefore, ``use the consensus of the distributed ledgers as an abstraction for a trusted third party'' \cite{sok_cdl}. Borkowski et al.~\cite{tast_paper7} derive the ``lemma of rooted blockchains'', which states that a source blockchain cannot verify the existence of data on a target blockchain with practical effort. In particular, the source blockchain would need to be able to mimic the consensus of the target blockchain, and it would have to store a (potentially large) subset of the target blockchain's block history. In a recent endeavor, Lafourcade and Lombard-Platet \cite{pascal2020} formalize the blockchain interoperability problem, arguing that fully decentralized blockchain interoperability is not possible. More specifically, there is no protocol assuming a full client that can realize its interoperability functions, such as asset transfer, without a third party's aid. However, a blockchain with two ledgers offers the possibility of interoperability (there is, in fact, the possibility of moving assets from one ledger to the other). This study applies mainly to public blockchains. The results above are relevant because they lead to an important consideration: \emph{cross-blockchain transactions are not feasible in practice without the participation of a trusted third party}. In other words, although trust assumptions vary greatly from permissionless to permissioned networks, cross-blockchain transactions, as well as cross-chain transactions, require a trusted third party to assure the correctness of the underlying protocol. Most solutions presented throughout this paper present at least one decentralized trust anchor.
\subsection{Blockchain Interoperability Definitions} \label{subsec:bid} In this section, we define additional technical terms needed for an understanding of this study. Vernadat defines interoperability among enterprise systems as \cite{Vernadat2006}: ``a measure of the ability to perform interoperation between [...] entities (software, processes, systems, business units...). The challenge relies on facilitating communication, cooperation, and coordination among these processes and units''. Abebe et al.~propose a general communication protocol as an alternative approach to the ``point-to-point'' blockchain interoperability approach \cite{Abebe2019}. They define interoperability as ``the semantic dependence between distinct ledgers to transfer or exchange data or value, with assurances of validity''. Pillai and Biswas state that ``cross-communication is not intended to make direct state changes to another blockchain system. Instead, cross-communication should trigger some set of functionalities on the other system that is expected to operate within its own network'' \cite{Pillai2019}. A technical report from the National Institute of Standards and Technology (NIST) defines blockchain interoperability as ``a composition of distinguishable blockchain systems, each representing a unique distributed data ledger, where atomic transaction execution may span multiple heterogeneous blockchain systems, and where data recorded in one blockchain are reachable, verifiable, and \emph{referable} by another possibly foreign transaction in a semantically compatible manner'' \cite{NISTIR}.
Hardjono et al.~define blockchain survivability as ``the completion (confirmation) of an application-level transaction [composed of subtransactions] independent of blockchain systems involved in achieving the completion of the transaction'' \cite{Hardjono2019}. The concept of transactions and subtransactions relates to ``\emph{best effort delivery}'', with which applications must comply by ensuring that transactions and their \emph{subtransactions} are completed (i.e., committed) within a certain time frame. Regarding types of blockchain interoperability, Besançon et al. highlight three \cite{tbig}: interoperability between different blockchains, interoperability between dApps using the same blockchain, and interoperability between blockchain and other technologies (such as integration with enterprise systems). While different definitions tackle different dimensions of interoperability, there is room for improvement. We define several terms that encompass the whole scope of technical interoperability to later provide a holistic definition of technical interoperability (see Figure \ref{fig:concept_map}). To recall the definition presented in Section \ref{sec:ccc}, a {source blockchain} is a blockchain that issues transactions against a {target blockchain}. A \emph{source node} is a node from the source blockchain, and a target node belongs to the target blockchain. When several participants elect a source node and a target node, we achieve decentralization in the context of interoperability \cite{jin2018}. A \ac{CC-Tx}, where ``CC'' stands for \emph{cross-chain} and ``Tx'' for transaction, is a transaction between different chains that belong to the same blockchain system (homogeneous blockchains), for example, between EVM-based blockchains. We use the terms \ac{CC-Tx}, \emph{inter-chain transaction}, and \emph{inter-blockchain transaction} interchangeably.
A \ac{CB-Tx} is a transaction between different blockchains (heterogeneous blockchains), for example, between Hyperledger Fabric and Bitcoin. Note that the terms \ac{CC-Tx} and \ac{CB-Tx} are used as synonyms in the industry, as most solutions currently connect homogeneous blockchains. A \ac{CC-dApp} is a dApp that leverages cross-blockchain transactions to implement its business logic. We use the terms \ac{CC-dApp} and \emph{cross-blockchain decentralized application} (CB-dApp) interchangeably. Other terms with the same meaning in the literature are inter-chain decentralized application and inter-blockchain decentralized application. An \ac{IoB} is a system ``where homogeneous and heterogeneous decentralized networks communicate to facilitate cross-chain transactions of value'' \cite{vo2018}. We use this definition of IoB throughout this paper. The term \ac{BoB} is not used consistently \cite{Verdian2018, hyperservice}. Verdian et al.~use it to describe the structure that aggregates blocks from different blockchains into ``meta blocks'', organized through a consensus mechanism using \emph{posets} (partially ordered sets) and total order theory \cite{Verdian2018}, thus producing a blockchain of blockchains. A poset consists of a set of elements and their binary relationships, ordered according to a specific set of rules \cite{lattice}. Influenced by those authors, we define a \ac{BoB} \emph{as a system in which a consensus protocol organizes blocks that contain a set of transactions belonging to CC-dApps, spread across multiple blockchains. Such a system should provide accountability for the parties issuing transactions on the various blockchains and provide a holistic, updated view of each underlying blockchain}. Note that BoB solutions belong to the category with the same name. Therefore, the notion of \ac{IoB} refers directly to the connection relationships among blockchains, whereas the term \ac{BoB} refers to an architecture made possible by the IoB.
BoB approaches are concerned with the validation and management of cross-blockchain transactions. \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[scale=0.22]{figures/concept_map_4.pdf} \caption{Concept map, illustrating the relationship between different concepts related to blockchain interoperability} \label{fig:concept_map} \end{wrapfigure} Figure \ref{fig:concept_map} shows the relationship between the different concepts concerning blockchain interoperability. A \ac{CC-dApp} realizes the blockchain of blockchains approach. This approach can provide the semantic-level interoperability (i.e., concerned with transmitting the meaning of the data, which corresponds to value-level interoperability) required by organizations, mappable by the application layer. However, it relies on the existence of an IoB -- a network of blockchains. For an IoB to exist, technical interoperability (or mechanical interoperability) is required. In the context of a CC-dApp, cross-chain transactions are ordered by a \emph{cross-chain dApp protocol}. Such protocols should assure transaction atomicity and resolve possible conflicts in transactions spanning across homogeneous and heterogeneous blockchains. From the several definitions we encountered during our research, we envision \emph{blockchain interoperability} as: \emph{the ability of a source blockchain to change the state of a target blockchain (or vice-versa), enabled by cross-chain or cross-blockchain transactions, spanning across a composition of homogeneous and heterogeneous blockchain systems, the IoB.} IoB transactions are delivered via a cross-blockchain communication protocol, thereby granting technical interoperability and enabling \ac{CC-dApp}s. \ac{CC-dApp}s provide semantic interoperability via the BoB. The BoB approach is realized by a cross-blockchain dApp protocol, which provides consensus over a set of cross-chain transactions, thus enabling cross-chain dApps.
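The poset-based ordering underlying the BoB definition above can be made concrete with a small sketch: cross-chain transactions form a partial order (a transaction may depend on transactions committed on other blockchains), and any linearization that respects it is a valid meta-block ordering. The dependency sets below are hypothetical; the example simply uses the Python standard library's topological sorter.

```python
# Sketch: cross-chain transactions as a poset. Each transaction maps to
# the set of transactions (possibly on other blockchains) it depends
# on; a topological sort yields one valid total order for a meta block.
from graphlib import TopologicalSorter

def order_cross_chain_txs(deps: dict[str, set[str]]) -> list[str]:
    """Return a linearization consistent with the partial order `deps`."""
    return list(TopologicalSorter(deps).static_order())
```

For instance, if a transfer on chain B depends on a lock on chain A, the sorter emits the lock before the transfer, regardless of which blockchain each transaction originated from.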
\section{Related Literature Reviews} \label{sec:related_literature_reviews} Due to the novelty and large breadth of this research area, few updated surveys cover aspects of blockchain interoperability. We compare existing surveys based on the \emph{criteria} and \emph{sub-criteria} shown in Table \ref{tab:related_survey_criteria}. For example, in the first row, the criterion ``public connector'' evaluates whether a study addresses its sub-criteria: work on sidechains, hash-lock time contracts, and notary schemes. In the second row, the criterion Blockchain of Blockchains evaluates whether a study describes BoB solutions and whether it performs a detailed comparison, including consensus, security, validators, and performance. \begin{table}[] \centering \resizebox{\linewidth}{!}{% \begin{tabular}{@{}lllll@{}} \toprule \textbf{Criteria} & \textbf{Description} & \textbf{Sub-criteria 1} & \textbf{Sub-criteria 2} & \textbf{Sub-criteria 3} \\ \midrule \textbf{Public Connectors (PC)} & Addresses public connectors & Sidechains & Hash lock contracts & Notary Schemes \\ \midrule \textbf{Blockchain of Blockchains (BoB)} & Addresses BoBs & Describes solutions & Detailed comparison & N/A \\ \midrule \textbf{Hybrid Connectors (HC)} & Addresses Hybrid Connectors & Trusted Relays & Blockchain agnostic protocols & Blockchain migrators \\ \midrule \textbf{Architecture (AR)} & Addresses architectures enabling CCCPs & Proposes architecture & Presents related work & N/A \\ \midrule \textbf{Cross-chain Standards (ST)} & Addresses standards for interoperability & Present standards & Relate standards to solutions & N/A \\ \midrule \textbf{Cross-analysis (CC)} & Compares across categories & Compare categories & Compare sub-categories & N/A \\ \midrule \textbf{Use Cases (UC)} & Presents use cases using an IoB or BoB & Existing use cases & Predicted use cases & N/A \\ \midrule \textbf{Open Issues (OI)} & Challenges on interoperability & Research directions & Relate interoperability to other issues & N/A
\\ \bottomrule \end{tabular}% } \caption{Survey comparison criteria, description, and sub-criteria. } \label{tab:related_survey_criteria} \end{table} Buterin presents a survey on public connector solutions, including notary schemes, sidechains, and hash-time locking techniques \cite{vitalik2016}. Similarly, other surveys focus on public connectors \cite{sok_cdl,tast_paper8,koens2019,singh2020}, particularly sidechains and hash-lock time contracts. Vo et al. present work mostly on architecture for interoperability, presenting some BoB and HC solutions \cite{vo2018}, while Qasse et al. organize solutions across sidechains, blockchain routers, smart contracts, and industrial solutions \cite{Qasse2019}. Johnson et al. focus on Ethereum as the infrastructure enabling interoperability across several categories of solutions \cite{Johnson2019}. Siris et al.~\cite{inter_approaches}, Kannengießer et al. \cite{niclas2020}, and Bishnoi et al. \cite{Bishnoi2020} tackle a wider range of solutions.
\begin{wraptable}{r}{0.7\textwidth} \centering \footnotesize \begin{tabular}{@{}lcccccccc@{}} \toprule & \multicolumn{3}{c}{\textbf{Solution category}} & \multicolumn{5}{c}{\textbf{Detailed Analysis}} \\ \midrule \multicolumn{1}{c}{} & & & \multicolumn{1}{c|}{} & & & & & \\ \multicolumn{1}{c}{\multirow{-2}{*}{\textbf{Reference}}} & \multirow{-2}{*}{PC} & \multirow{-2}{*}{BoB} & \multicolumn{1}{c|}{\multirow{-2}{*}{HC}} & \multirow{-2}{*}{AR} & \multirow{-2}{*}{ST} & \multirow{-2}{*}{CC} & \multirow{-2}{*}{UC} & \multirow{-2}{*}{OI} \\ \midrule \multicolumn{1}{l|}{Buterin \cite{vitalik2016}, 2016} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \multicolumn{1}{c|}{\cellcolor[HTML]{BF504D}-} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ \\ \midrule \multicolumn{1}{l|}{Vo et al.\cite{vo2018}, 2018} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \multicolumn{1}{c|}{\cellcolor[HTML]{F79545}$\pm$} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ \\ \midrule \multicolumn{1}{l|}{Borkowski et al. \cite{tast_paper5}, 2018} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \multicolumn{1}{c|}{\cellcolor[HTML]{BF504D}-} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ \\ \midrule \multicolumn{1}{l|}{Quasse et al. \cite{Qasse2019}, 2019} & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \multicolumn{1}{c|}{\cellcolor[HTML]{F79545}$\pm$} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ \\ \midrule \multicolumn{1}{l|}{Johnson et al. 
\cite{Johnson2019}, 2019} & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \multicolumn{1}{c|}{\cellcolor[HTML]{F79545}$\pm$} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- \\ \midrule \multicolumn{1}{l|}{Zamyatin et al. \cite{sok_cdl}, 2019} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \multicolumn{1}{c|}{\cellcolor[HTML]{BF504D}-} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ \\ \midrule \multicolumn{1}{l|}{Siris et al. \cite{inter_approaches}, 2019} & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \multicolumn{1}{c|}{\cellcolor[HTML]{F79545}$\pm$} & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- \\ \midrule \multicolumn{1}{l|}{Koens et al. \cite{koens2019}, 2019} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ \\ \midrule \multicolumn{1}{l|}{Singh et al. \cite{singh2020}, 2020} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ \\ \midrule \multicolumn{1}{l|}{Kannengießer et al., \cite{niclas2020}, 2020} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \multicolumn{1}{c|}{\cellcolor[HTML]{F79545}$\pm$} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- \\ \midrule \multicolumn{1}{l|}{Bishnoi et al. 
\cite{Bishnoi2020}, 2020} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- \\ \midrule \multicolumn{1}{l|}{\emph{this survey}} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \multicolumn{1}{c|}{\cellcolor[HTML]{D8E3BB}+} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ \\ \midrule & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \bottomrule \end{tabular} \caption{Comparison of related literature reviews: PC (Public Connectors), Blockchain of Blockchains (BoB), HC (Hybrid Connectors), AR (architectures for blockchain interoperability), ST (standards), CC (cross-comparison), UC (use cases), OI (open issues). Each criterion can be ``fulfilled'' (``+'' in green background), ``partially fulfilled'' (``$\pm$'' in orange background) or ``not fulfilled'' (``-'' in red background), if it addresses all, between one and all, or none of its sub-criteria, respectively. } \label{tab:related_slr} \end{wraptable} We aim to provide a solid, thorough, and comprehensive foundation on which researchers can rely as a starting point in this field, including a description of the related surveys, which illuminated our research. In contrast to most of the works mentioned above, this paper provides a holistic view of blockchain interoperability by focusing not only on public connectors but also on BoBs and hybrid connectors. By including updated grey literature and focusing on private blockchain interoperability, a comprehensive discussion on standards, use cases, and architecture for interoperability was possible.
\section{Blockchain Interoperability Framework} \label{sec:bif} This section presents the Blockchain Interoperability Framework (BIF), a framework classifying solutions collected through our methodology. To derive criteria for assessing the categories (and specific solutions) of blockchain interoperability, we analyzed the solution space using the six ``W'' questions: Who, What, Where, When, Why, and How. The ``Why'' was determined irrelevant to our analysis because its purpose is constant -- connecting different chains (CC-Txs), different blockchains (CB-Txs), or even arbitrary systems (e.g., enterprise legacy systems). This is instead addressed by the ``where'' question. \subsection{Deriving Evaluation Criteria} The ``what'' refers to the \emph{assets} exchanged. An interoperability solution can handle different data objects or assets; hence, it is important to know which data representations a solution supports \cite{wegner96}. Assets can be treated as data (arbitrary payloads), as fungible assets, or as non-fungible assets \cite{barnes2020,pilai2020,hyperledger_cactus}. Arbitrary data is often represented via a key-value pair, the preferred representation of some blockchains \cite{sawtooth,fabric,vukolic_bcpitw}. The key-value representation is also useful to represent the contents of account-based blockchains \cite{eth2_wiki,algorand,quorum}. Payment tokens are fungible tokens \cite{Pillai2019}. Utility tokens include tokens used to access a service or application, such as non-fungible tokens (e.g., ERC721 tokens). Finally, asset tokens represent real-world physical or digital instruments, such as blockchain-based promissory notes, regulated by the {Swiss Financial Market Supervisory Authority} \cite{draft-sardon-blockchain-gateways-usecases-00} (see more details in Section \ref{sec:use_cases}), or bonds \cite{barnes2020}. An asset has different maturity levels.
In particular, an asset may be standardized (e.g., ERC tokens \cite{erc20}, a standardized schema for utility tokens, or ERC1400, a security token standard \cite{erc1400s,erc1400}) and/or {regulated} \cite{oecd2021,finma,usexchcomm}. Regulated digital assets are backed by legal frameworks. We consider all asset tokens to be regulated. We envision utility tokens as standardized and asset tokens as standardized and regulated (i.e., asset tokens are emitted by legal entities). The ``who'' question refers to who controls the CC-Tx process and thus accounts for trust establishment \cite{sidechains_pos,sok_cdl}. It can be the end-user (e.g., \cite{hyperledger_cactus,Frauenthaler2019}), a consortium (e.g., \cite{peggedsidechains, Schulte2019}), or a trusted third party (e.g., cloud services, centralized notary schemes). Some solutions allow different levels of control. The ``where'' refers to the source and target ledgers, as well as to the support for conducting the CC process. Solutions can support public blockchains (P) or non-public blockchains (NP). We use NP to designate private blockchains, other decentralized ledger technology (DLT) systems, and centralized systems (e.g., the VISA payment network). The systems supported by each solution matter since communication may happen unidirectionally or bidirectionally \cite{hyperledger_cactus}. Blockchain oracles aside, it is often not feasible to have a solution based on a blockchain system connected to a centralized system (e.g., one providing insurance data). A smart contract may conduct an asset transfer (an on-chain channel, with on-chain CC-Tx validation), or settlement may occur off-chain, e.g., via techniques using commitment schemes \cite{abebe2020,zendo} or via a (semi-)centralized system (an off-chain channel). Typically, on-chain channels offer more resiliency, but off-chain channels are more scalable. Combinations between off-chain and on-chain channels also exist (e.g., payment networks \cite{LN}).
Off-chain channels depend on different proof-generation mechanisms \cite{abebe2020, sidechains_pos, zendo}. The ``when'' refers to the set of processes (e.g., executing CC-Txs) that are defined at \emph{design-time} or \emph{run-time}. \emph{Design-time customization} decisions affect the punctual behavior of a CC-dApp concerning when it is executed. At design-time, a user defines the behavior of the solution \emph{a priori}. If a change is needed, a new instance of the solution needs to be deployed. Conversely, \emph{run-time customization} decisions are flexible, allowing the end-user to adjust the conditions defined by business logic as needed. Solutions in which business logic is changed at run-time are called \emph{flexible approaches}: they allow adjusting the business logic and the conditions that trigger the execution of a CB-Tx or CC-Tx by a CC-dApp. Most literature reviews focus on design-time approaches and public blockchains, leaving a vast range of recent solutions out of scope. In this survey, we also consider private-private and public-private blockchain interoperability, focusing on flexible approaches. The ``how'' regards the realization of cross-chain transactions: how are CC-Txs realized on the underlying DLTs? Often, these transactions can be performed using \emph{cross claims}, i.e., by locking/burning an asset on the source blockchain and unlocking/creating its representation on the target blockchain. Cross-claims require two nodes from different blockchains, where one performs an operation in a source blockchain in exchange for its counterparty performing other operations on a target blockchain -- each party logs the operation in case a dispute is needed. Typically, cross-claims operate in semi-trusted environments (e.g., a private blockchain, a regulated blockchain), and can be operated via a (semi-)trusted third party \cite{hyperledger_cactus,odap_draft_01,crp_draft_00}.
Escrowed cross-claims are the standard mechanism for asset transfers, operating similarly to cross-claims, but in an untrusted environment, leveraging dispute-resolution mechanisms (e.g., via smart contracts requiring inclusion proofs \cite{abebe2020}) or parties holding custody of assets and collateral \cite{xclaim,dextt,plasma_vitalik}. Inclusion proofs apply Merkle tree proofs to block headers obtained via a coordinating blockchain or via direct block header transfer, or rely on direct signing \cite{robinson2020}. Collateralization is the process in which a party performing the transfer of assets provides a certain amount of their assets as a guarantee of following the protocol (e.g., not stealing assets from the end-user). If a party misbehaves (e.g., steals assets), the deposit is given to the victim party. Finally, a mediated CC-Tx includes an (offline) trusted party \cite{hyperledger_cactus}. In case of a dispute about an asset transfer between a public blockchain and a private blockchain (P-NP) or a public blockchain and an enterprise system (also P-NP), there needs to be a dispute-resolution mechanism. This is due to NP systems' private nature, although several mechanisms exist to prove internal state belonging to private blockchains. Hence, CC-Txs have a risk-performance trade-off: the less centralization there is in the CC-Tx settlement, the worse the performance, but the lower the risk. The ``how'' also relates to the extent to which the implementation of the solution is tested. Solutions might be implemented, tested, and validated (i.e., applied to a real-world scenario). Testing regards \emph{correctness guarantees}: \emph{behavioral correctness} or \emph{formal correctness}. Behavioral correctness is the ability to guarantee that CC-Txs are issued as intended, without unintended consequences (e.g., asset lock, asset theft).
Although, in practice, behavioral correctness depends on formal correctness, we say a solution has behavioral correctness if it has a suite of test cases \cite{testing}. Formal correctness assures that an algorithm is correct with respect to a specification. Formal verification checks the correctness of algorithms against a specification using, for instance, formal methods. Smart contract verification tools allow developers to reduce the probability of introducing bugs, which can be costly, as smart contracts are generally difficult to update once deployed \cite{smartbugs}. Another way to provide trust to the user is for the solution to have an open-source implementation, whose code can be peer-reviewed and corrected if needed. \subsection{Evaluation Criteria} \label{subsec:criteria_bif} Having discussed the survey's scope, we next define the set of criteria we use to characterize the interoperability solutions. Similarly to Section \ref{sec:related_literature_reviews}, each criterion can be ``fulfilled'', ``partially fulfilled'', or ``not fulfilled''. If a criterion is a yes/no question (e.g., does the solution support asset type ``data''?), we do not explicitly refer to the fulfillment conditions as they are evident. Next, we detail the criteria type (first level), criteria sub-type (second level), and criteria from the BIF: \begin{itemize}\small \item Asset: this category refers to properties of an asset involved in a CC-Tx. \begin{itemize} \item Type: what type of assets does the solution support? \begin{enumerate} \item Data: can the solution manipulate arbitrary data? \item Payment tokens: can the solution manipulate cryptocurrencies? This criterion is partially fulfilled if the asset is only used as collateral or to reward a service's operational maintenance. \item Utility tokens: can the solution manipulate utility tokens? This criterion is partially fulfilled if the asset is only used as collateral or to reward a service's operational maintenance.
\item Asset tokens: can the solution manipulate asset tokens? \end{enumerate} \item Infrastructure: what are the systems involved? \begin{enumerate} \item P: This criterion is fully fulfilled if more than two public blockchains are supported. It is partially fulfilled if one or two public blockchains are supported. \item NP: This criterion is fully fulfilled if more than two non-public blockchains are supported. It is partially fulfilled if one or two non-public blockchains are supported. \end{enumerate} \end{itemize} \item Trust Establishment: this category refers to how a solution provides trust to the users. \begin{itemize} \item Decentralization: who operates the solution instance? \begin{enumerate} \item End-user \item Consortium \item Trusted (third) party \end{enumerate} If multiple criteria are selected, the solution supports more than one mode of operation. \item Channel: where are CC-Txs validated? \begin{enumerate} \item On-chain: This criterion is partially fulfilled if proofs are created on-chain but validation occurs off-chain. \item Off-chain: This criterion is partially fulfilled if proofs are created off-chain but validation occurs on-chain. \end{enumerate} \end{itemize} \item CC-Tx Realization: this category refers to how and where a CC-Tx is settled. \begin{itemize} \item Mechanism: how are CC-Txs agreed upon by multiple parties? \begin{enumerate} \item Cross-claim \item Escrowed cross-claim \item Mediated \end{enumerate} \end{itemize} \item Extra-functional: this category refers to the design of the solution itself. \begin{enumerate} \item Tests: the approach provides a set of test cases. \item Implementation: the approach provides an open-source implementation and is validated in the industry. This criterion is partially fulfilled if the implementation is closed-source. \item Validation: the approach is validated in an actual use case scenario. \item Run-time: the business logic of the solution can be changed dynamically, as needed.
This criterion is considered not fulfilled if logic is settled when the solution is instantiated, i.e., changing logic requires a new instance. \end{enumerate} \end{itemize} \section{Overview of Blockchain Interoperability Approaches} \label{sec:solutions} We conducted a systematic literature review following the protocol described in Appendix A, yielding 80 relevant documents out of the initial 330. By grouping the publications and grey literature, a pattern arises: these works are either about interoperability across public blockchains holding cryptocurrencies, application-specific blockchain generators with interoperability capabilities, or protocols connecting heterogeneous blockchains. We thus classify each study into one of the following categories: \emph{Public Connectors} (Section \ref{sec_crypto}), \emph{Blockchain of Blockchains} (Section \ref{sec:be}), and \emph{Hybrid Connectors} (Section \ref{subsec:blockchain_connectors}). Each category is further divided into sub-categories. Table \ref{tab:existing_sols} summarizes the work conducted. 
\begin{table}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{@{}lcccccccccccccl@{}} \toprule & \multicolumn{5}{c}{\textbf{Asset}} & \multicolumn{8}{c}{\textbf{Trust Establishment}} & \\ \midrule \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{\textbf{Type}} & \multicolumn{2}{c|}{\textbf{Infra.}} & \multicolumn{3}{c|}{\textbf{Decentral.}} & \multicolumn{2}{c|}{\textbf{Channel}} & \multicolumn{3}{c|}{\textbf{CC-Realization}} & \\ \midrule \multicolumn{1}{l|}{\textbf{Sub-Category}} & \multicolumn{1}{c|}{D} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{NP} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{C} & \multicolumn{1}{c|}{TTP} & \multicolumn{1}{c|}{OC} & \multicolumn{1}{c|}{OF} & \multicolumn{1}{c|}{CC} & \multicolumn{1}{c|}{ECC} & \multicolumn{1}{c|}{M} & \multicolumn{1}{c}{\textbf{References}} \\ \midrule {\color[HTML]{ADD73F} \begin{tabular}[c]{@{}l@{}}Sidechains\\ \& Relays\end{tabular}} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{plasma,wang-cc_elec-2020} \\ {\color[HTML]{000000} } & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{btcrelay, peacerelay, testimonium,waterloo2019,Frauenthaler2020} \\ {\color[HTML]{000000} } & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & 
\cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{POA,blockCollider} \\ {\color[HTML]{000000} } & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{peggedsidechains,loom,liquid,blockstream,nocust} \\ {\color[HTML]{000000} } & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{blocknet,rsk,zendo,horizon2021} \\ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{Shlomovits2020,cctxnarges,belotti2020,Robinson2019,Deshpande2020,gugger-bmcc-2020} \\ {\color[HTML]{000000} } & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & 
\cellcolor[HTML]{D8E3BB}+ & \cite{ILPv4,quilt} \\ \midrule {\color[HTML]{ADD73F} } & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & See Section \ref{sec:crypto_notaries} \\ \multirow{-2}{*}{{\color[HTML]{ADD73F} \begin{tabular}[c]{@{}l@{}}Notary\\ Scheme\end{tabular}}} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{0x,uniswaptokrex,Tian2020} \\ \midrule {\color[HTML]{ADD73F} HLTC} & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{dextt,xclaim,Lu2017,comit,fusion,Dai-cctm-2020,Rueegger-ccas-2020} \\ \midrule {\color[HTML]{F79545} \begin{tabular}[c]{@{}l@{}}Blockchain\\ of Blockchains\end{tabular}} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{Kwon2016,Wood2017,komodo} \\ 
{\color[HTML]{000000} } & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cite{ark2019,aion2017,overledger03} \\ \midrule {\color[HTML]{BF504D} \begin{tabular}[c]{@{}l@{}}Trusted \\ Relays\end{tabular}} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cite{nissl2020,falazi2020,Kan2018,crossfabric2020} \\ {\color[HTML]{BF504D} } & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cite{Hardjono2019,hermes-middleware-2021,odap_draft_01,crp_draft_00,zhao2020,wang2020,xiao2020} \\ \midrule {\color[HTML]{BF504D} B. 
Agnostic} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{hyperledger_cactus,abebe2020,Abebe2019,acctp2020} \\ {\color[HTML]{BF504D} Protocols} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{hyperservice,ion2018,pilai2020,robinson2020,cantonwhitepaper,pang2020} \\ \midrule {\color[HTML]{BF504D} } & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{9B9B9B}N/A & \cellcolor[HTML]{9B9B9B}N/A & \cellcolor[HTML]{9B9B9B}N/A & \cite{Frauenthaler2019,scheid2019,Westerkamp-scmob-2019} \\ \multirow{-2}{*}{{\color[HTML]{BF504D} \begin{tabular}[c]{@{}l@{}}Blockchain\\ Migrators\end{tabular}}} & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{F79545}$\pm$ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{BF504D}- & \cellcolor[HTML]{D8E3BB}+ & \cellcolor[HTML]{BF504D}- & \cite{scotm} \\ \bottomrule \end{tabular}% } \caption{ Evaluation of blockchain interoperability solutions by subcategory 
according to the Blockchain Interoperability Framework. N/A stands for not applicable. Public connectors are represented in green, Blockchain of Blockchains in orange, and Hybrid connectors in red. } \label{tab:existing_sols} \end{table} \subsection{Public Connectors} \label{sec_crypto} The first family of blockchain interoperability solutions aimed to provide interoperability between cryptocurrency systems, as stated by Vitalik Buterin \cite{vitalik2016}. This category identifies and defines different chain interoperability strategies across public blockchains supporting cryptocurrencies, including sidechain approaches, notary schemes, and hashed time-lock contracts. Some solutions share characteristics of more than one sub-category, and thus they can be considered hybrid. We introduce each sub-category, presenting only two illustrative examples of each one for the sake of space. Appendix \ref{a:crypto} depicts a complete list of Public Connectors approaches. After that, a summarized evaluation table is presented using the BIF; these tables are later discussed in Section \ref{sec:disc_crypto}. \subsubsection{\underline{Sidechains \& Relays}} \label{subsec:crypto_side} A \emph{sidechain} (or \emph{secondary chain}, \emph{satellite chain}, or \emph{child chain}) is a mechanism for two existing blockchains to interoperate \cite{peggedsidechains, sidechains_pos}, scale (e.g., via blockchain sharding \cite{omniledger}), and be upgraded \cite{velvet}, in which one blockchain (the \emph{main chain} or mainchain) considers another blockchain as an extension of itself (the sidechain). The mainchain maintains a ledger of assets and is connected to the sidechain, a separate system attached to the main chain via a cross-chain communication protocol \cite{zendo}. An example is a \emph{two-way peg}, a mechanism for transferring assets between the main chain and the sidechain \cite{singh2020}.
Main chains can be sidechains of each other \cite{vitalik2016}, making it possible for each chain to connect to the others. Sidechains are considered layer-1 solutions (built on top of layer-0 solutions, i.e., blockchains) that can be used to implement layer-2 solutions, such as payment channels \cite{nocust}. The second layer allows off-chain transactions between users through the exchange of messages tethered to a sidechain \cite{sok_layers}. A sidechain is thus a construct that offloads transactions from the mainchain, processes them, and can redirect the outcome of such processing back to the main chain. For instance, state channels are off-chain sidechains used to implement, for example, payment channels, by offloading transactions from the blockchain \cite{LN}. In a payment channel, participants interact, collecting cryptographically signed messages. Those messages update the current state without publishing it to the mainchain. When the payment channel is closed, the final state is published onto the main chain, where an on-chain dispute/closure phase may occur \cite{nocust}. Payment channels are appropriate for use cases requiring several transactions that can be combined into a single one. Main chains communicate with sidechains via a CCP, often tightly coupled with the functionality of both chains. The basic components of sidechain design are the mainchain consensus protocol, the sidechain consensus protocol, and the cross-chain communication protocol \cite{zendo}. Sidechains allow different interactions between participating blockchains, the most common being the transfer of assets between the main chain and the sidechain (two-way peg) \cite{pow_side,singh2020}. A \emph{two-way peg} works in the following manner: a user, operating on the mainchain, sends X tokens to a custom address that locks assets. Those funds are locked on the mainchain, and a corresponding number of tokens are created on the sidechain. The user can now use the tokens on the sidechain.
Eventually, the user can transfer the tokens back to the main chain, which causes the assets on the sidechain to be locked or destroyed, depending on the implementation. There are three major types of two-way pegs: simplified payment verification, centralized two-way pegs, and federated two-way pegs. \emph{Simplified payment verification} (SPV) \cite{nakamoto2008,spvbwiki} is done by \emph{light clients}, which are blockchain clients that can verify transactions on the blockchain without having its entire state. The SPV client only needs the block headers; verifying that a transaction is in a block amounts to requesting a Merkle tree proof \cite{merkle} that includes that transaction. In particular, transactions are represented as Merkle tree leaves. Given a target leaf node and a path comprising the sibling nodes from the target to the root, verifying a Merkle proof of inclusion of the target amounts to reconstructing a partial Merkle tree up to the root. A relay solution is an SPV client for a source blockchain running on a target blockchain, enabling verification of transactions \cite{testimonium}. This verification enables conditional logic to occur on a target blockchain. Since relays sit between blockchains that use each other's behavior (bidirectionally or unidirectionally), relays imply the presence of sidechains; that is, without a sidechain, there are no relay solutions. \emph{Centralized two-way pegs}, on the contrary, trust a central entity, which benefits efficiency. An example is an \emph{exchange}, an organization, typically a company, that trades cryptocurrencies on behalf of its clients. However, exchanges are a notary scheme, so we defer their explanation to Section \ref{sec:crypto_notaries}. Disadvantages include a single point of failure and centralization. \emph{Federated two-way pegs} try to decentralize the previous solution: a group of entities, instead of a single one, is responsible for locking and unlocking funds.
Standard implementations rely on multi-signature schemes, in which a quorum of entities must sign transactions for them to be deemed valid by the network. Although a better option, it does not eliminate centralization. \begin{wrapfigure}{r}{0.55\textwidth} \centering \includegraphics[scale=0.40]{figures/Fig6.pdf} \caption{A general sidechain system \cite{btcrelay}} \label{fig:sidechains} \end{wrapfigure} Figure \ref{fig:sidechains} depicts a system based on the BTC Relay \cite{btcrelay}. In \emph{BTC Relay}, parties called \emph{relayers} keep track of the block headers of the main chain (the Bitcoin network in the figure) and input them to the BTC Relay smart contract, hosted on Ethereum. This procedure builds a pool of Bitcoin headers that can be used (via their stored Merkle roots) to verify on-chain information, including the presence of transactions. This way, any party can request a transaction to be verified by the smart contract that holds knowledge of the headers (via SPV). Transaction validation can be relayed to deployed Ethereum smart contracts, enabling several use cases, for example, the issuance of tokens. \emph{Zendoo} is a cross-chain transfer protocol that realizes a decentralized, verifiable blockchain system for payments \cite{zendo}. The authors consider a parent-child relationship, where nodes from the sidechain can observe the mainchain's state, but the main chain can only observe the sidechains via cryptographically authenticated certificates. zk-SNARKs enable the authentication, validation, and integrity of the information provided by the sidechains via verifiable proofs \cite{zksnarks}. Such proofs are used to generate certificate proofs for the mainchain, enabling a secure verification scheme.
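The Merkle inclusion check that SPV clients and relays such as BTC Relay perform can be sketched as follows; the Bitcoin-style double SHA-256 and the function names are our own illustrative assumptions, not BTC Relay's actual interface.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256, used for block and transaction hashes.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(leaf: bytes, path: list, root: bytes) -> bool:
    """Reconstruct the Merkle root from a leaf and its sibling path.

    `path` is a list of (sibling_hash, sibling_is_left) pairs ordered
    from the leaf up to the root.
    """
    current = leaf
    for sibling, sibling_is_left in path:
        if sibling_is_left:
            current = sha256d(sibling + current)
        else:
            current = sha256d(current + sibling)
    return current == root

# Toy two-transaction block: the root commits to both leaves.
tx_a, tx_b = sha256d(b"tx-a"), sha256d(b"tx-b")
merkle_root = sha256d(tx_a + tx_b)
assert verify_merkle_proof(tx_a, [(tx_b, False)], merkle_root)
assert verify_merkle_proof(tx_b, [(tx_a, True)], merkle_root)
```

An on-chain relay contract would run the same reconstruction against the Merkle root stored in a previously submitted block header.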
\subsubsection{\underline{Notary Schemes}} \label{sec:crypto_notaries} A notary scheme involves a \emph{notary} that is an entity that monitors multiple chains, triggering transactions in a chain upon an event (e.g., a smart contract is deployed) taking place on another chain \cite{vitalik2016}. Notary schemes are, in practice, instantiated as centralized exchanges (EXs) or decentralized exchanges (DEXs). The most popular centralized exchanges, by volume, as of the 8th of February 2021 are Binance\footnote{https://www.binance.com/en}, Coinbase\footnote{https://www.coinbase.com/}, and Huobi Global\footnote{https://www.huobi.com/}. Exchanges facilitate signaling between market participants by managing an order book and matching buyers and sellers. If the trust anchor is put on a centralized party, where it holds users' private keys or funds, the notary is a centralized exchange. Otherwise, if exchanges do not execute the trades on behalf of the users, only providing a matching service, they are considered decentralized exchanges. We present the protocols of two decentralized exchanges: 0x \cite{0x}, and Uniswap \cite{uniswap}. \emph{0x} implements a decentralized exchange as a set of smart contracts (called automated market makers), replacing an on-chain order book with a real-time price-adjustment model. 0x uses a hybrid implementation, ``off-chain order relay with on-chain settlement'', combining the idea of a state channel with settlement smart contracts. Two parties participate: \emph{makers} and \emph{takers}. Makers place orders on the exchange, providing liquidity for the network (a set of decentralized exchanges), while takers place orders matched with the makers' orders. 0x uses the ZRX token and the Ethereum blockchain to incentivize users to host and maintain order books (provide liquidity). 
In exchange, 0x makers choose the rewards they obtain for each trade -- although they have to comply with the DEX policies under the possibility of the order not being disseminated. This approach relies on a set of exchange smart contracts and on several smart contracts representing the different tokens supported. First, a maker creates an order to exchange token A for token B, at a given rate, right after it approves a DEX to access its balance of token A. A taker discovers this order and wishes to trade its tokens B for tokens A. The taker grants permission to the DEX to access its tokens, and the DEX performs the exchange after several validations (e.g., that the order has not expired and that it has not been filled). \emph{Uniswap} is a set of smart contracts implementing an automated liquidity pool, serving as a decentralized exchange \cite{uniswap}. Each Uniswap pool provides liquidity for two assets, keeping the product of the reserves constant. Prices for each asset are provided by an on-chain price oracle smart contract. Uniswap can support ERC-20 to ERC-20 trades and even flash loans, a theme explored in the decentralized finance area. A flash loan is a type of loan that does not require collateral, as the debt is repaid within the same transaction. Flash loans work because the borrowed asset has to be repaid within the transaction requesting it \cite{uniswap}. \subsubsection{\underline{Hashed Time-Lock Contracts}} \label{subsec:crypto_hashed} Hashed time-lock contracts (HTLCs) initially appeared as an alternative to centralized exchanges, as they enable cross-chain atomic operations \cite{htl}. HTLC techniques use hashlocks \cite{hashlock} and timelocks \cite{timelock} to enforce the atomicity of operations, normally between two parties. A trader commits to making the transaction by providing a cryptographic proof to the other party before a timeout. This scheme allows for the creation of multiple outputs (such as multiple payments), depending solely on one hashlock.
HTLCs are used in Bitcoin for conditional payments or cross-chain payments (Bitcoin-Ethereum), i.e., \emph{atomic swaps} \cite{atomicswaps,Black2019,accs2015}. Atomic swaps can be thought of as a form of distributed commitment resilient to Byzantine adversaries. Thus, an atomic cross-chain swap is a distributed atomic transaction \cite{Herlihy2018}, settled on-chain. Several projects implement HTLCs differently, providing different correctness guarantees. However, the general algorithm is quite similar in most of the solutions. Let us consider an HTLC-supported atomic swap between Alice (holding assets of type $a$ in blockchain $\mathcal{B}_a$) and Bob (holding assets of type $b$ in blockchain $\mathcal{B}_b$). An atomic swap can be realized as follows \cite{belotti2020,zapala2020}: 1) Alice generates and hashes a secret $s$, yielding $h$. Protecting a smart contract with hash $h$ is called a hashlock because it locks the contract: only parties with knowledge of the secret $s$ can unlock it, since secure hash functions are pre-image resistant (i.e., the hash cannot feasibly be inverted). Alice also creates a timelock $t_b$, corresponding to an upper bound by which the created hashlock can be unlocked, i.e., Bob can unlock the smart contract up to $t_b$, where $t_b$ corresponds to a specified future time or block height; 2) Alice publishes the smart contract in $\mathcal{B}_a$. Bob verifies the deployment and records $h$ and $t_b$; 3) Bob publishes a smart contract in $\mathcal{B}_b$ locking $b$ with hashlock $h$, but with timelock $t_a$ such that $t_a < t_b$, i.e., Alice can claim $b$ before $t_a$; 4) Alice checks that Bob's smart contract has been published and gives as input the secret $s$, before $t_a$, acquiring asset $b$. In practice, this triggers a transfer; 5) Bob now sends $s$ to Alice's smart contract in the interval $[t_a,t_b]$, acquiring $a$. Note that if Bob issues the transaction after $t_b$, Bob will not obtain access to $a$.
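The five steps above can be simulated with a toy model. The \texttt{HTLC} class, asset labels, and block heights below are illustrative assumptions rather than part of any cited protocol; the sketch only shows how the shared hashlock and the ordering $t_a < t_b$ make the swap atomic.

```python
import hashlib
import secrets

class HTLC:
    """Toy hashed time-lock contract: an asset locked under a hashlock,
    claimable only before a timelock (modeled as a block height)."""
    def __init__(self, asset, hashlock, timelock):
        self.asset = asset
        self.hashlock = hashlock      # h = sha256(s)
        self.timelock = timelock      # claims allowed only while height < timelock
        self.claimed_by = None

    def claim(self, secret, claimant, height):
        if (self.claimed_by is None and height < self.timelock
                and hashlib.sha256(secret).digest() == self.hashlock):
            self.claimed_by = claimant
            return True
        return False

# 1) Alice generates secret s, hashlock h, and timelock t_b.
s = secrets.token_bytes(32)
h = hashlib.sha256(s).digest()
t_b = 200
# 2) Alice locks asset a on chain B_a under (h, t_b).
contract_a = HTLC("a", h, t_b)
# 3) Bob locks asset b on chain B_b under the same h, with t_a < t_b.
t_a = 100
contract_b = HTLC("b", h, t_a)
# 4) Alice reveals s before t_a, acquiring b (s becomes public on B_b).
assert contract_b.claim(s, "Alice", height=90)
# 5) Bob reuses the revealed s before t_b, acquiring a.
assert contract_a.claim(s, "Bob", height=150)
# A claim by Bob after t_b would have failed:
assert not HTLC("a", h, t_b).claim(s, "Bob", height=250)
```

The ordering $t_a < t_b$ matters: Alice must reveal $s$ first, which guarantees Bob still has time to reuse it before his window closes.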
Some solutions utilize the notion of HTLCs and enhance it, providing an additional on-chain trust anchor. In particular, two solutions are presented: XCLAIM \cite{xclaim} and the \ac{LN} \cite{LN}. \emph{XClaim} uses a combination of HTLCs, collateralization, and escrow parties, realizing non-interactive cross-chain atomic swaps \cite{xclaim}. This protocol includes several actors: the requester, the sender, the receiver, the redeemer, the backing vault, and the issuing smart contract. Requesters lock coins to issue tokens, while redeemers burn tokens to receive coins. The sender sends tokens, while the receiver receives them. The vault smart contract fulfills requests of asset backing and ensures correct redeeming. An issuing smart contract issues and exchanges representations of a token (cryptocurrency-backed assets, CBAs) and enforces the vault's correct behavior. Considering a transaction between Bitcoin and Ethereum, firstly, the vault locks collateral in Ethereum smart contracts. This collateral defines the amount of CBAs that the vault can issue. A user that wants to issue Bitcoin-backed tokens sends Bitcoin to the vault. The user then sends a proof of the transaction submitted to the Bitcoin mainchain to a chain relay, e.g., BTC Relay. The chain relay verifies the submitted transaction and alerts the issuing smart contract. The smart contract releases the Bitcoin-backed assets to the user. To redeem, a user issues a transaction against the smart contract, locking/burning its backed tokens. The vault releases the Bitcoin to the user, and it submits a proof of the involved operations to the chain relay. The chain relay verifies the proof and only then releases the collateral to the vault. XClaim currently supports exchanges between Bitcoin and Ethereum\footnote{https://github.com/crossclaim/xclaim-sol}. The protocol execution consumes substantially less Ether than traditional HTLCs.
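The collateral-capped issue/redeem cycle described above can be sketched minimally. The class and method names (\texttt{IssuingContract}, \texttt{issue}, \texttt{redeem}), the collateral ratio, and the boolean stand-in for the chain relay's verification are our simplifying assumptions; XCLAIM's actual contracts are considerably more involved.

```python
class IssuingContract:
    """Toy model of XCLAIM-style issuance: a vault's locked collateral caps
    how many cryptocurrency-backed assets (CBAs) may be issued; redeeming
    burns CBAs, obliging the vault to release the backing coins."""
    def __init__(self, vault_collateral, collateral_ratio):
        self.collateral = vault_collateral   # locked on the issuing chain
        self.ratio = collateral_ratio        # collateral required per CBA unit
        self.issued = 0.0

    def issue(self, amount, chain_relay_proof):
        # The chain relay must confirm the backing BTC transfer to the vault.
        if not chain_relay_proof:
            return 0.0
        if (self.issued + amount) * self.ratio > self.collateral:
            return 0.0                       # issuance would under-collateralize the vault
        self.issued += amount
        return amount                        # CBA tokens minted to the user

    def redeem(self, amount):
        burned = min(amount, self.issued)
        self.issued -= burned
        return burned                        # amount of BTC the vault must release

ic = IssuingContract(vault_collateral=15.0, collateral_ratio=1.5)
assert ic.issue(5.0, chain_relay_proof=True) == 5.0
assert ic.issue(6.0, chain_relay_proof=True) == 0.0   # would exceed the collateral cap
assert ic.redeem(2.0) == 2.0
```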
{\ac{LN}} enables high-volume, low-latency micro-payments on the Bitcoin network \cite{LN}. LN is a payment scheme (i.e., an off-chain sidechain). LN allows several parties to open a payment channel, transact amongst them, and, when all the intermediary payments are completed, send the final output to the mainchain. LN works as follows: 1) funds are placed into a multi-signature Bitcoin address (a two-party multi-signature address if only two people are transacting). For funds to be spent, two signatures are required. After that, the funds are managed off-chain via commitment transactions (i.e., commitments to pay part of the available funds to the other party); 2) parties can now transact offline under the regime they choose; 3) to settle the payments performed off-chain, both parties sign a new exit transaction. Note that parties can unilaterally close the payment channel in case of conflict. LN is closely tied to HTLCs because its bi-directional payment channels allow payments to be routed across multiple payment channels using HTLCs. \subsubsection{\underline{Discussion on Public Connectors}} \label{sec:disc_crypto} Public Connectors started emerging as early as 2015 \cite{ZyskindOz2015}, when researchers and practitioners alike saw the potential in cross-chain transactions to support, for instance, atomic swaps \cite{atomicswaps,Black2019,accs2015} and payment channels \cite{LN}. Sidechains are solutions that increase the main network's scalability by processing and batching large amounts of transactions before submission to the main blockchain \cite{singh2020, plasma, rsk}. Relays can fetch block headers from sidechains, enabling data verification \cite{btcrelay,peacerelay,waterloorelay}. While sidechains are mainly used on public blockchains, there are also permissioned blockchain sidechains \cite{sidechain_fabric}.
We note that some sidechains may have a cross-chain mechanism realizing HTLCs, making them a solution belonging to multiple categories (e.g., \cite{LN}). Most sidechains use Ethereum and have a sidechain consensus mechanism, which allows bidirectional transfers \cite{zendo}. Simple relay schemes, which verify transactions on other chains, such as BTC Relay, have a simple sidechain consensus, as the information flow is unidirectional \cite{btcrelay}. In particular, validators can sign events that happened on the source chain (if validation happens across EVM-based chains) or transfer block headers (via users or aggregation chains) \cite{robinson2020}. Liquid \cite{liquid} and POA \cite{POA} rely on a consortium of validators running trusted hardware to execute smart contracts and validate transactions. Other solutions, such as Wanchain \cite{wanchainroadmap}, rely on a trusted consortium without running trusted hardware. However, sidechains suffer from several limitations. Safe cross-chain interactions are rooted in the assumption that the main chain is secure, i.e., that the network cannot be successfully attacked. Compromising the main chain would invalidate the sidechain logic. Conversely, centralization tends to exist to a higher degree in sidechains than in mainchains, because there is typically a decentralization-performance trade-off (e.g., fewer validating nodes versus higher throughput). Consequently, if an attacker can obtain control of a (potentially small) set of validators, funds can be stolen from users. Therefore, it is important to have different stakeholders with different incentives, to diminish the likelihood of collusion, and to rely on a reasonable quorum of validators to sign each transaction (e.g., 8 out of 11 versus 3 out of 10). If a sidechain has a strong security model, it may lead to slow transaction settlement, stalling assets and lowering liquidity.
For example, the RSK sidechain \cite{rsk} takes approximately the time needed to confirm 100 Bitcoin blocks (around 15 hours) to convert BTC to RBTC\footnote{https://developers.rsk.co/rsk/}. Finally, sidechains typically do not allow arbitrary code to specify conditions on the pegging mechanism, thus not empowering developers to build more complex applications. Notaries in the Public Connectors category are cryptocurrency exchanges. EXs have the majority of the market share, compared to DEXs. While EXs provide convenient services to the end-user, decentralized exchanges tend to provide better exchange fees and security. The trade-off is, therefore, between comfort and speed on one side and security on the other. This subcategory provides great flexibility at run-time because both EXs and the smart contracts that DEXs rely on support triggers (e.g., stop-loss orders). Notary schemes have to capture the logic of smart contracts in both chains. Although they can capture the full spectrum of interoperability, both at the value and mechanical levels (see Section \ref{subsec:blockchain_connectors}), practical applications are limited. In summary, notary schemes are intermediaries between blockchains. EXs are notaries because they execute actions on behalf of the end-user (e.g., buy cryptocurrencies conditionally). DEXs are notaries because they provide matching for end-users by pinning and advertising trade offers encoded in smart contracts. The HTLC category was the first to allow asset exchange in a trustless way. HTLCs allow atomic swaps between different blockchains, funding bidirectional payment channels. HTLCs are flexible because they can be chained one after another \cite{zapala2020}, and therefore enable trades even if there is no direct connection between the trading parties. As they serve as programmable escrows, they represent the most trustless and practical approach of the three.
However, hashed timelocks might lead to capital retention and unfair trades, as the trader issuing a cross-blockchain asset transfer may only provide the secret under specific conditions (exploiting the spread of the cryptocurrency exchange rate) \cite{xclaim}. Many solutions are hybrid, sharing characteristics of HTLCs and sidechains, either exploring collateralization-punishment schemes rooted in smart contracts \cite{cctxnarges,sai2019,xclaim} or locking-in and locking-out assets \cite{dextt,fusion,metronome_faq,metronome}. HTLCs are practical solutions across public blockchains. HTLCs could also provide asset transfers between private blockchains, but only with the participation of a third-party blockchain and in a semi-trusted environment \cite{hardjono2021}, or if both parties belong to both private blockchains. Current efforts to address these limitations include Hyperledger Cactus \cite{hyperledger_cactus}. In conclusion, Public Connectors are the best approach for performing cryptocurrency trades and moving fungible and non-fungible assets across public blockchains. We encourage the reader to refer to related surveys focusing on sidechains to complement this survey (see Section \ref{sec:related_literature_reviews}). \subsection{Blockchain of Blockchains} \label{sec:be} \emph{Blockchain of Blockchains} are frameworks that \emph{provide reusable data, network, consensus, incentive, and contract layers for the creation of application-specific blockchains (customized blockchains) that interoperate with each other}. We briefly present Polkadot \cite{Wood2017,polkadot_2} and Cosmos \cite{Kwon2016}, the most widely adopted Blockchain of Blockchains in terms of market capitalization\footnote{USD 22.1B and USD 3.6B respectively, as of February 2021}. A detailed comparison between Polkadot, Cosmos, and Ethereum 2.0 (the baseline) is deferred to Appendix \ref{a:be}. Other Blockchain of Blockchains include Ark \cite{ark2019}, Komodo \cite{komodo}, and AION \cite{aion2017}.
Wood proposes \emph{Polkadot}, a network that aims to connect blockchain networks \cite{Wood2017}. Polkadot provides the foundation for \emph{parachains}, i.e., ``globally-coherent dynamic data structures'' hosted side-by-side. Parachains are, thus, the parallelized chains that participate in the Polkadot network. Specialized parachains called bridges link independent chains \cite{Wood2017}. Polkadot is based on \emph{Substrate}, a framework for creating cryptocurrencies and other decentralized systems. It guarantees cross-language support with WebAssembly, a light client, and off-chain workers, allowing for integration with other technologies. Polkadot enables interoperability based on state transition validation, performed by the relay chain validators. Parachains communicate through the Cross-chain Message Passing protocol (XCMP), a queuing communication mechanism based on a Merkle tree \cite{xcmp}. Communicating state transition proofs from parachain to relay chain is achieved via an erasure-coding scheme. Polkadot scales by connecting up to 100 parachains directly to the relay chain in the short-medium term. A long-term solution is being studied, where second- and third-level parachains are added in parallel. \emph{Cosmos} is a decentralized network of independent parallel blockchains, called \emph{zones} \cite{Kwon2016}. The zones are essentially Tendermint blockchains \cite{tendermint}. Zones can transfer data to other zones directly or via \emph{hubs}. Hubs minimize the number of connections between zones and avoid double spending. For example, zone A can connect to zone B via Hub C and receive tokens from zone B. Zone A would need to trust the tokens from zone B and Hub C. This scheme allows zones to maintain a reduced number of connections. Both ways utilize the Inter-Blockchain Communication protocol (IBC) \cite{ibc}. IBC resembles the Internet network layer, as it routes arbitrary data packets to a target blockchain.
A target blockchain can know that a certain ordered packet with arbitrary data came from another blockchain. By handling transportation and order, the protocol achieves cross-zone transactions in several steps. First, each chain involved tracks the headers of the others, acting as a light client. When a transfer is initiated, the protocol locks the assets on the origin chain. After that, a proof is sent to the target blockchain, which then represents the locked assets. A similar mechanism is used to recover the original tokens. This scheme allows for interoperability among Tendermint blockchains. Other kinds of blockchains can interoperate with a Cosmos chain via peg zones. Peg zones resemble the pegged sidechain mechanism \cite{peggedsidechains}, in which a representation of the locked token of the source blockchain is created on the target blockchain. Cosmos abstracts the development of a blockchain into three layers: networking, consensus, and application. Tendermint BFT realizes the networking and consensus layers. The Tendermint BFT engine is connected to the application layer by a protocol called the Application Blockchain Interface (ABCI). The Cosmos SDK realizes the application layer, allowing developers to write smart contracts in languages that can be compiled to WASM\footnote{https://blog.cosmos.network/announcing-the-launch-of-cosmwasm-cc426ab88e12}. \subsubsection{\underline{Discussion on Blockchain of Blockchains}} \label{sec:block_engine_disc} Blockchain of Blockchains implementations are similar to relays and sidechains, as there is typically a main chain (often called the relay chain) that connects the secondary chains, which can be application-specific blockchains. This scheme allows high throughput and flexibility for end-users, providing interoperability capabilities between platform instances.
For example, Cosmos's Tendermint-based blockchains interoperate (instant finality assured), while Polkadot provides interoperability on Substrate-based blockchains (for instance, via Cumulus\footnote{https://wiki.polkadot.network/docs/en/build-cumulus}, a tool for connecting a blockchain to Polkadot). To connect to other chains, Cosmos, Polkadot, and AION utilize a mechanism similar to pegged sidechains, while ARK \cite{ark2019} uses hashed time-lock contracts; these connection mechanisms are commonly called bridges.
\begin{table*}[]
\caption{Comparison of Blockchain Engine interoperability solutions \cite{polkadot_comparison,Kwon2016}}
\label{tab:be}
\centering
\normalsize
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllllllllll@{}}
\toprule
& \multicolumn{2}{c}{\textbf{Communication}} & \multicolumn{6}{c}{\textbf{Properties}} & \multicolumn{2}{c}{\textbf{Community}} \\ \midrule
\multicolumn{1}{l|}{} & Cross-chain & \multicolumn{1}{l|}{Cross-blockchain} & Consensus & Security & Validator & Maximum & Number of & \multicolumn{1}{l|}{Smart} & \multicolumn{1}{c}{Launch} & \multicolumn{1}{c}{Roadmap} \\
\multicolumn{1}{l|}{} & Protocol & \multicolumn{1}{l|}{interoperability} & Mechanism & assumption & number & Throughput & instances & \multicolumn{1}{l|}{Contracts} & & \\ \midrule
\multicolumn{1}{l|}{Polkadot \cite{Wood2017} \mbox{}\hfill \checkmark} & XCMP & \multicolumn{1}{l|}{$\CIRCLE$} & BABE and GRANDPA & SM & 197 & $10^{3}$ & 200 & \multicolumn{1}{l|}{WASM} & November 2019 & Main network launch \\
\multicolumn{1}{l|}{Cosmos \cite{Kwon2016} \mbox{}\hfill \checkmark} & IBC Protocol & \multicolumn{1}{l|}{$\LEFTcircle$} & Tendermint & SM & 125 & $10^{3}$ & $\textgreater$ 70 & \multicolumn{1}{l|}{WASM} & March 2019 & Governance updates \\
\multicolumn{1}{l|}{ARK \cite{ark2019} \mbox{}\hfill \checkmark} & SmartBridge & \multicolumn{1}{l|}{$\LEFTcircle$} & Delegated proof of stake & M & 51 & 18.5 & Unlimited & \multicolumn{1}{l|}{WASM$^{\ast}$} & May 2019 & ARK Swap Market
\\
\multicolumn{1}{l|}{AION \cite{aion2017} \mbox{}\hfill \checkmark} & Interchain transactions & \multicolumn{1}{l|}{$\Circle$} & Proof of intelligence & M & $\times$ & $\times$ & $\times$ & \multicolumn{1}{l|}{Aion Language} & April 2018 & Market assimilation \\ \midrule
& & & & & & & & & & \\
\multicolumn{11}{l}{\begin{tabular}[c]{@{}l@{}}\checkmark our description was endorsed by the authors/team \\ $\times$ not known \\ $\ast$ some languages compilable to WASM, such as Go and .NET, but not all of them\end{tabular}} \\
\multicolumn{11}{l}{\begin{tabular}[c]{@{}l@{}}$\CIRCLE$ can interoperate with instances of the same blockchain engine. Interoperate with more than two heterogeneous blockchains\\ $\LEFTcircle$ can interoperate with instances of the same blockchain engine. Interoperate with up to two heterogeneous blockchains \\ $\Circle$ can interoperate with instances of the same blockchain engine\end{tabular}}
\end{tabular}%
}
\end{table*}
Table \ref{tab:be} maps out the current blockchain engine landscape by extracting and evaluating their main characteristics. Some information was not possible to obtain due to the lack of details in the whitepapers. It is possible to observe that Blockchain of Blockchains are very recent: Polkadot's test network, Kusama \cite{kusama}, was released in November 2019; Cosmos' main network was launched in March 2019; ARK launched in May 2019; AION launched in April 2018. Each Blockchain of Blockchains has a different cross-chain communication protocol, e.g., in Polkadot, {cross-chain message passing}\footnote{https://wiki.polkadot.network/docs/en/learn-crosschain}; in Cosmos, the inter-blockchain communication protocol \cite{Kwon2016}. Cosmos and Polkadot differ in their approach: in Cosmos, the idea is to provide blockchains tailored to specific applications. IBC is more generic than XCMP, letting users customize their zones with greater freedom: security and validation are decided per zone.
Polkadot restricts this customization but offers a shared security layer, trading customization for security. The security assumption criterion depicts the number of nodes assumed to be honest. A supermajority (SM) assumes that more than two-thirds of the nodes are honest ($> \frac{2}{3}n$), a common condition required by Byzantine fault-tolerant consensus algorithms, while a majority (M) assumes that more than half of the nodes are honest ($> \frac{1}{2}n$). The validator number of a network comes with a trade-off: while a higher number is generally better for decentralization and robustness, it comes with an increase in block-production latency -- and consequently lower throughput. Polkadot currently has around 197 validators, and this number is gradually increasing in the short term to support up to 100 first-level parachains. At the time of writing, Polkadot is developing bridges for Bitcoin \cite{xclaim}, Tendermint, Libra \cite{libra}, and Ethereum. Interoperability between parachains is provided by Substrate. Currently, Cosmos has 125 validators. The number of validators can rise to 300. Currently, there are around 70 zones, and ``the number is growing''. Cosmos does not impose a limit on the number of zones (as each zone is self-sovereign), and there is no limit on how many zones can be attached to a Hub. Cosmos can interoperate with Ethereum. The Cosmos SDK provides interoperability between zones. Cosmos supports multiple peg zone implementations for Bitcoin and one for Ethereum. ARK has 51 validators, which can validate the transactions of a number of blockchains bound to the company's physical resources (instances managed by ARK). ARK can send and receive ERC-20 tokens to and from the Ethereum blockchain. We found no information regarding AION's validator number, throughput, or maximum sub-chains \cite{aion2017}.
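The SM and M assumptions can be made concrete with a small helper. This is an illustrative sketch (the function name is ours): it computes the minimum honest-validator count each model assumes, using validator numbers from Table \ref{tab:be}.

```python
import math

def min_honest(n, model):
    """Smallest honest-node count satisfying the security assumption:
    SM requires strictly more than 2n/3 honest validators,
    M requires strictly more than n/2."""
    if model == "SM":
        return math.floor(2 * n / 3) + 1
    if model == "M":
        return math.floor(n / 2) + 1
    raise ValueError(f"unknown model: {model}")

# Validator counts taken from Table tab:be.
assert min_honest(197, "SM") == 132   # Polkadot
assert min_honest(125, "SM") == 84    # Cosmos
assert min_honest(51, "M") == 26      # ARK
```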
The theoretical throughput of the presented solutions varies: Polkadot's relay chain supports around 1,000 transactions per second, considering that a block can include around 7,000 transactions at a 6-second block time (considering current weights, March 2021). Cosmos' theoretical throughput can reach dozens of thousands of transactions per second (tps) with two validators. With 64 validators, it falls to several thousand transactions per second. ARK can achieve around 18.5 transactions per second, relying on a delegated proof-of-stake consensus. The number of validators is set to 51. ARK is not a completely decentralized solution, as it manages instances of ARK blockchains. There is no theoretical limit on bridge chains, except the service provider's resources. Several optimizations are being done in Cosmos, Polkadot, and ARK to increase throughput. The AION project looks deprecated and stalled. As stated, the ``white paper is both ambitious and experimental'' \cite{aion2017}. AION is now part of a larger project called the \emph{Open Application Network} (OAN). Cosmos and Polkadot support smart contracts in languages compilable to WASM (WebAssembly), which means developers can write them in languages such as Go, C++, and JavaScript. AION would support a domain-specific language, the Aion language. Blockchain of Blockchains instances achieve inter-chain interoperability through a common point of contact, the ``connector'', analogous to Hyperledger Fabric channels \cite{fabric}. The connectors are the relay chain, the Cosmos Hub, the AION-1 blockchain, and the ARK main net for Polkadot, Cosmos Network, AION, and ARK, respectively. In Polkadot, the connector provides shared security. The relay chain (the chain that coordinates consensus and communication between parachains and external blockchains) connects parachains to each other and to bridges. In Cosmos, the connector is loosely coupled to blockchains, providing greater flexibility than Polkadot.
We could not extract meaningful considerations about AION's connector. In ARK, the connector appears centralized, trading decentralization for developability and ease of use. Concerning cross-blockchain interoperability, all solutions rely on \emph{bridges} or \emph{adapters} that route transactions from a particular blockchain type to another. While the provided features can be desirable for end-users, blockchain engines do not interoperate with each other. In light of this fact, end-users are obligated to choose between existing solutions, leading to sub-optimal leveraging of available resources. Therefore, participant networks have constraints on interoperability, ending up relying on a single blockchain engine solution. Some authors defend that blockchain engine approaches are not universally accepted and cannot eliminate fragmentation \cite{Abebe2019}. Some solutions are even centralized, in the sense that their code is not open-source and the end-user needs to use an SDK to access core functionalities (e.g., \cite{ark2019,aion2017}). However, ongoing work on building a Tendermint light client for GRANDPA, which would allow Polkadot to interact with Cosmos, may enable blockchain engine interoperability in the short-medium term. Thus, in theory, interoperability across Blockchain of Blockchains can also be achieved via the relay chain technique (i.e., a blockchain engine can be a sidechain of other blockchain engines; validation can happen via SPV). Moreover, Blockchain of Blockchains require transaction fees to keep the network operating. Given enterprise blockchain systems, a question can be posed: at which point shall an organization pay fees to sustain its business model across several blockchains? While Cosmos provides flexibility in configuring a zone, on Polkadot this can be harder. Therefore, Blockchain of Blockchains can provide optimal leveraging of public infrastructures, but that is not necessarily the case for private blockchains.
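As a closing illustration of this category, the lock-and-mint pattern that underlies pegged sidechains and Cosmos peg zones can be sketched as follows. The \texttt{PegZone} class and its methods are hypothetical simplifications (real pegs involve light-client proofs and validator signatures); the sketch only shows the invariant that the wrapped supply never exceeds the escrowed backing.

```python
class PegZone:
    """Toy lock-and-mint peg: tokens locked on the source chain back a
    wrapped representation minted on the target chain; burning the wrapped
    tokens releases the originals."""
    def __init__(self):
        self.locked = 0.0     # escrowed on the source chain
        self.wrapped = 0.0    # minted on the target chain

    def transfer_out(self, amount, proof_of_lock):
        # A light client on the target chain must verify the lock transaction.
        if proof_of_lock:
            self.locked += amount
            self.wrapped += amount
        return self.wrapped

    def transfer_back(self, amount):
        burned = min(amount, self.wrapped)
        self.wrapped -= burned
        self.locked -= burned   # originals released on the source chain
        return burned

peg = PegZone()
peg.transfer_out(10.0, proof_of_lock=True)
assert peg.wrapped == 10.0 and peg.locked == 10.0
assert peg.transfer_back(4.0) == 4.0
# Invariant: wrapped supply never exceeds the escrowed backing.
assert peg.wrapped <= peg.locked
```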
\subsection{Hybrid Connectors} \label{subsec:blockchain_connectors} The \emph{Hybrid Connector} category comprises interoperability solutions that are neither Public Connectors nor Blockchain of Blockchains. Directed at both public and private blockchains, Hybrid Connectors attempt to deliver a ``blockchain abstraction layer'' \cite{wef2020}, capable of exposing a set of uniform operations that allows a dApp to interact with blockchains without the need to use different APIs \cite{falazi2020}. We derived a set of sub-categories from the studies available: \emph{Trusted Relays}, \emph{Blockchain-Agnostic Protocols} (including \emph{Blockchain of Blockchains}), and \emph{Blockchain Migrators}. Trusted relays are directed at environments where a blockchain registry facilitates the discovery of the target blockchains. Typically, such a scheme appears in a permissioned blockchain environment, where trusted escrow parties route cross-blockchain transactions. As the name suggests, blockchain-agnostic protocols provide technology-agnostic protocols for interoperation between distributed ledger systems but do not guarantee backward compatibility. In other words, existing blockchains have to change their source code to use such protocols. Solutions from the blockchain of blockchains category aim to provide mechanisms for developers to build cross-chain dApps. The blockchain migrators sub-category aggregates solutions that perform data migration across blockchains, which resemble the notary schemes discussed in Section \ref{sec:crypto_notaries} (as there is typically a centralized party mediating the migration process). We introduce each sub-category, presenting only one illustrative example of each for the sake of space. Appendix \ref{a:connectors} depicts a complete list of Hybrid Connectors. Evaluation tables for each sub-category are discussed in Section \ref{sec:disc_connect}.
\subsubsection{\underline{Trusted Relays}} \label{sec:block_connector_trusted_relay} Trusted relays are trusted parties that redirect transactions from a source blockchain to a target blockchain, allowing end-users to define arbitrary business logic. These solutions imply managing different APIs, in which cross-chain consensus may be modular. \emph{Hyperledger Cactus} (Cactus), previously known as the Blockchain Integration Framework, uses an interoperability validator network that validates cross-chain transactions, optionally using a trusted escrow party \cite{hyperledger_cactus}. However, decentralized validators are implemented as well -- making this project move towards a decentralized trusted relay. Cactus allows a party or a set of parties to issue transactions against several ledgers, similarly to some notary scheme solutions \cite{timothesis,timo2019}. Interoperability is enabled through a set of \emph{interoperability validators}, which are participants from the source and target blockchains. Such validators collect cross-chain transaction requests, sign them, and deliver them. A CB-Tx is deemed valid if a quorum of validators signs it. It is then assumed that the blockchains participating in the network know how to address each other. However, trusted escrows can be replaced by decentralized parties. Currently, Hyperledger Cactus\footnote{https://github.com/hyperledger/cactus} supports Hyperledger technologies (e.g., Fabric, Besu), Corda, and Quorum. The roadmap predicts integration with public blockchains and blockchain migration capabilities. \subsubsection{\underline{Blockchain-Agnostic Protocols}} \label{sec:block_connector_blockagnosticprotocol} Blockchain-agnostic protocols enable cross-blockchain or cross-chain communication between arbitrary distributed ledger technologies by providing a blockchain abstraction layer.
These solutions enable BoBs, ``a system in which a consensus protocol organizes blocks that contain a set of transactions belonging to CC-dApps, spread across multiple blockchains. Such system should provide accountability for the parties issuing transactions on the various blockchains and providing a holistic, updated view of each underlying blockchain'' (Section \ref{subsec:bid}). Typically, the cross-chain consensus is fixed, and business logic is more restricted. The \emph{\ac{ILP}} can be considered a decentralized, peer-to-peer payment network \cite{Thomas2015}. It firstly adopted a generalized hash-locking scheme to enable asset transfers and was directed at cryptocurrency transfers. Nowadays, ILP is technology-agnostic, defining a ``lowest unit common denominator'' across distributed ledgers, blockchains, and fiat payment networks: the ILP packet. ILP sends payment information in packets by leveraging a network of connectors, which route such packets. At the core of Interledger is the Interledger Protocol (ILPv4) \cite{ILPv4}, which defines how senders, routers (also called nodes or connectors), and receivers interact. Typically, the connector is a money-packet router. The root of trust is then the connector, which has to be trusted: companies can settle payments via the routers, given that clearance of such payments is done afterward, protected by the law. A sender is an entity that initiates a value transfer. A router applies currency exchange and forwards packets of value. The receiver obtains the value transmitted. ILPv4 is a request/response protocol enabled by ILPv4 packets. Each packet contains transaction information and can be a \emph{prepare}, \emph{fulfill}, or \emph{reject} packet. A sender node initiates an exchange of value by sending a \emph{prepare} ILPv4 packet to a receiver. When a receiver obtains the prepare packet, it sends the response back to the sender via routers.
The response may be a \emph{fulfill} packet, whereby a transaction has been successfully executed, or a \emph{reject} packet. Several specifications for Interledger and related protocols are available\footnote{https://github.com/interledger/rfcs}. The Interledger Protocol is discussed by a W3C community group\footnote{https://www.w3.org/community/interledger/} and has a proposal that ``describes data structures and formats, and a simple processing model, to facilitate payments on the Web''\footnote{https://w3c.github.io/webpayments/proposals/interledger/}. The Interledger protocol cannot integrate with existing blockchains as-is: each one must be adapted to use ILP. A disadvantage is that Interledger does not support the transfer of non-fungible tokens (such as ERC-721\footnote{http://erc721.org/} tokens). \subsubsection{\underline{Blockchain Migrators}} \label{sec:block_connector_bm} Blockchain migrators allow an end-user to migrate the state of a blockchain to another. Currently, it is only possible to migrate data across blockchains, although moving smart contracts is also predicted \cite{hyperledger_cactus}. \emph{Fynn et al.} present an abstraction for smart contracts to switch to another blockchain consistently, moving the state required by the transaction to the target blockchain and executing it there \cite{scotm}. The authors call this abstraction the \emph{Move} operation. The operation works as follows: first, it locks a smart contract on the source blockchain; next, the Move protocol recreates the smart contract on the target blockchain. This method allows arbitrary states to be transferred between blockchains. For example, it allows transferring cryptocurrencies by creating tokens on the target blockchain backed up by locked tokens on the source blockchain (similarly to pegged sidechains). This method was tested on Ethereum and Hyperledger Burrow (based on Ethereum).
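The two-step Move operation described above can be sketched minimally. The \texttt{Chain} structure, the contract address, and the state layout are hypothetical; the sketch only shows the lock-then-recreate ordering that keeps the contract active on exactly one chain at a time.

```python
class Chain:
    """Toy blockchain holding contracts as address -> {state, locked}."""
    def __init__(self, name):
        self.name = name
        self.contracts = {}

def move(src, dst, address):
    """Sketch of the Move abstraction (Fynn et al.): lock the contract on
    the source chain, then recreate its state on the target chain. The
    paper assumes both chains run the same virtual machine."""
    contract = src.contracts[address]
    if contract["locked"]:
        raise RuntimeError("contract already moved")
    contract["locked"] = True                       # step 1: lock on source
    dst.contracts[address] = {"state": dict(contract["state"]),
                              "locked": False}      # step 2: recreate on target
    return dst.contracts[address]

eth = Chain("ethereum")
burrow = Chain("burrow")
eth.contracts["0xabc"] = {"state": {"balance": 42}, "locked": False}
moved = move(eth, burrow, "0xabc")
assert eth.contracts["0xabc"]["locked"] is True     # frozen on the source
assert moved["state"]["balance"] == 42              # state recreated on the target
```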
The solution assumes that the cross-blockchain smart contracts utilize the same virtual machine, which can be limiting. Furthermore, for such a solution to be deployed, it requires Solidity changes and possibly a soft fork on Ethereum. \subsubsection{\underline{Discussion on Hybrid Connectors}} \label{sec:disc_connect} This section defined the hybrid connector category and its sub-categories: trusted relays, blockchain-agnostic protocols, and blockchain migrators. Regarding centralization, almost all adopt a decentralized model. Permissioned blockchain solutions are less flexible, as all involved participants are identified. In particular, trusted relays endorse connections made in a peer-to-peer fashion, upon previous agreement \cite{Abebe2019,Ghaemi2021}. However, Abebe et al.'s work poses some limitations: interoperating networks require a priori knowledge of each other's identities and configurations, hence being static. A discovery service could be implemented using a blockchain registry or a pub-sub mechanism \cite{Ghaemi2021}, in which networks could be added and removed. In trusted relays, the mechanisms to minimize malicious relay services are not completely clear, apart from replication (whereby the risk of a censorship attack is reduced but not eliminated). Hyperledger Cactus could be a true enabler of interoperability, provided that a (decentralized) trusted blockchain registry were deployed and public escrow parties replaced the overlay of trusted parties. Cactus could, therefore, make the transition from a trusted relay to a semi-trusted or even a trustless relay. Blockchain-agnostic protocols will be better positioned to offer interoperability to existing and yet-to-exist blockchains, but most do not grant backward compatibility and lack the flexibility to define business logic.
This inflexibility is inherent to the provided homogeneous interfaces (containing, for instance, roles, methods, data, and message formats \cite{falazi2020}); at best, such solutions scale slowly, as adding methods compatible with all the supported blockchains incurs a polynomial effort. However, this category might resemble some of the trusted relay solutions. In particular, both Cactus \cite{hyperledger_cactus} and SCIP \cite{falazi2020} rely on connectors, validators, and gateways to access the underlying blockchains. The gateway paradigm implies a (semi) trusted gateway having read/write access to the shared ledger of the blockchain, and gateways are often expected to participate in the consensus mechanism of the blockchain \cite{hardjono2021}. While there is a higher trust requirement, gateway approaches might be the most suitable to solve interoperability across private blockchains if gateways are framed in a legal and regulatory framework. To be proper solutions for enterprises, gateways need infrastructure comprising, for example, public identifiers, a set of connectors, and validators (which Cactus could provide), among others. From the blockchain of blockchains sub-category, we highlight Hyperservice, a peer-reviewed work, and Overledger. Hyperservice tries to achieve full dApp atomicity by introducing the concept of \emph{stateless smart contracts}. Using a stateless smart contract, a \ac{CC-dApp} can load a clean state for a contract, using a valid block. While it can only partially solve forks in the underlying blockchains a \ac{CC-dApp} utilizes, the application of this concept paves a direction to decouple smart contract execution from the consensus layer \cite{hyperservice}. In Overledger, a sorted list of messages is interpreted as the state of a cross-blockchain application. While this is an exciting approach to blockchain interoperability, the solution is proprietary, hindering community efforts for more complex solutions.
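The idea of interpreting a sorted message list as application state can be sketched minimally as follows. The message fields and the fold function are illustrative assumptions, not Overledger's actual (proprietary) format.

```python
# State as the interpretation of a sorted message list: folding the
# messages, in sequence order, yields the application state.
messages = [
    {"seq": 2, "op": "credit", "account": "bob", "amount": 5},
    {"seq": 1, "op": "credit", "account": "alice", "amount": 10},
    {"seq": 3, "op": "debit", "account": "alice", "amount": 4},
]

def derive_state(msgs):
    """Replay the messages in sequence order to derive the current state."""
    state = {}
    for m in sorted(msgs, key=lambda m: m["seq"]):
        delta = m["amount"] if m["op"] == "credit" else -m["amount"]
        state[m["account"]] = state.get(m["account"], 0) + delta
    return state

assert derive_state(messages) == {"alice": 6, "bob": 5}
```

Because the state is fully determined by the ordered messages, any party holding the same list derives the same state, regardless of which underlying blockchain each message was anchored on.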
Blockchain migrators respond to an enterprise need: migration in case of disaster or performance issues \cite{belchior2020_bpvi,bandara2019}. The two presented solutions can only provide data portability across a small set of public blockchains. It is currently impossible to reproduce the chain of events via smart contracts, as that requires a smart-contract translator functionality. A limitation that we identified in the context of Hybrid Connectors is that most solutions neither support hard forks (i.e., the separation of a blockchain into two different blockchains) nor propose a solution for eventual forks, unlike some public connectors (most HTLCs and notary schemes). Forks do not happen regularly, and some solutions offer a quick analysis of the problem and acknowledge their importance \cite{hyperledger_cactus, blockCollider, Verdian2018}. However, this is still a problem that can affect the dependability of cross-chain dApps; dealing with forks is still an open issue. For instance, the protocol used in Hyperservice is unable to revert any state update to smart contracts when a dApp terminates prematurely, i.e., it does not grant atomicity. Without atomicity guarantees, a fork can force the cross-blockchain application into an inconsistent state. This can put at risk the purpose of the project: functional cross-blockchain applications. The same problem applies to, for instance, Overledger \cite{overledger03}. While one might be tempted to conclude that standardization could improve cross-blockchain API design, some argue that APIs are unlikely to generalize well across radically different technologies. Blockchain-agnostic protocols are more likely to be standardized than APIs, as shown historically by successful standards efforts such as HTTP or the TCP/IP family. Finally, solutions that provide cross-smart contract capabilities are emerging but are still in development \cite{scheid2019,Verdian2018,hyperservice,blockCollider,Abebe2019}.
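The atomicity requirement discussed above can be illustrated with an all-or-nothing, two-phase update across ledgers: either every chain applies its update, or every tentative update is rolled back. The Ledger class and its failure flag are illustrative assumptions for exposition, not the protocol of any surveyed solution.

```python
# All-or-nothing cross-ledger update: prepare on every ledger first,
# commit only if all prepares succeed, otherwise abort the tentative ones.
class Ledger:
    def __init__(self, name, will_fail=False):
        self.name = name
        self.state = {}
        self.pending = None
        self.will_fail = will_fail  # simulates e.g. a fork invalidating the update

    def prepare(self, key, value):
        if self.will_fail:
            return False
        self.pending = (key, value)
        return True

    def commit(self):
        key, value = self.pending
        self.state[key] = value
        self.pending = None

    def abort(self):
        self.pending = None

def atomic_update(ledgers, key, value):
    prepared = []
    for ledger in ledgers:
        if ledger.prepare(key, value):
            prepared.append(ledger)
        else:
            for p in prepared:      # roll back every tentative update
                p.abort()
            return False
    for ledger in ledgers:
        ledger.commit()
    return True

a, b = Ledger("A"), Ledger("B", will_fail=True)
assert not atomic_update([a, b], "x", 1)
assert a.state == {} and a.pending is None  # no partial state survives
```

A protocol without this rollback step is exactly what the text criticizes: one ledger applies the update, the other does not, and the cross-blockchain application ends up in an inconsistent state.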
\section{Discussion, Use Cases and Research Questions} \label{sec:discussion_sec} This section presents a comprehensive summary of each blockchain interoperability category we extracted and our considerations about blockchain interoperability. It then presents use cases and finishes with answers to the research questions we proposed. \subsection{Discussion} \label{sec:discussion} Although blockchain interoperability is a complex technology, connecting blockchains ends up being a manageable approach, despite differences in, for example, data structures, digital signature schemes, transmission protocols, verification mechanisms, consensus mechanisms, token issue mechanisms, and smart contract languages. However, ``there is a scant effort today to address the standardization of the various infrastructure building blocks – messages, data formats, and flows – to support the interoperability across blockchains'' \cite{hardjono2021}. Different categories of solutions approach the interoperability problem differently. Our paper first introduced Public Connectors in Section \ref{sec_crypto} and stressed their importance. Token exchange is arguably no longer the whole scope of blockchain interoperability \cite{hyperservice}. Instead, various interoperability approaches have emerged in recent years, many of them aiming at generalizing blockchain interoperability. In particular, emerging solutions can be categorized as Hybrid Connectors, which provide cross-blockchain communication, and Blockchain of Blockchains, which allow an end-user to create customized, interoperable blockchains at the expense of vendor lock-in. Public connectors are the most cited among industry and academia, as they provide practical solutions to a real-world problem: asset transfers. As these were the first solutions to emerge, not surprisingly, some may not succeed.
It seems that the merger of sidechains and protocols relying on an escrow party (enforced by smart contracts) yields the most suitable solutions for interoperability among public blockchains. We argue that the flexibility, decentralization, and security of such proposals can be utilized for secure interoperability. However, creating and maintaining a decentralized application using several blockchains was difficult - and hence the Blockchain of Blockchains solutions appeared. Those can facilitate blockchain adoption while providing built-in interoperability among instances of the same platform, whereas variations of the solutions mentioned above can be used to bridge Blockchain of Blockchains to other blockchains. While Blockchain of Blockchains platforms, such as Cosmos or Polkadot, provide a consensus engine and a security infrastructure to build blockchains, Hybrid Connectors aim at developing solutions using different infrastructures. In particular, Cosmos and Polkadot might progress towards \emph{homogeneity}, as they support only the creation of Tendermint-based blockchains and Substrate-based blockchains, respectively. While they provide interoperability capabilities, mainly among the chains relying on their technology, and other desirable features (a shared layer of security, decentralization, governance, better scalability), the end-user's choice will be tied to specific implementations. Paradoxically, such solutions might contribute to data and value silos, as solutions built with them cannot connect with an arbitrary blockchain \cite{Abebe2019}. Despite this fact, one could argue that this problem can be alleviated by building bridges/adapters. These solutions are promising but are challenging to integrate with legacy systems and, generally, private blockchains - and hence the hybrid connectors started appearing.
Hybrid Connectors, specifically blockchain migrators and blockchain of blockchains, progress towards a user-centric, blockchain-agnostic view, enabling enterprise-connected CC-dApps. Arguably, the most suitable solution for connecting private blockchains is the usage of blockchain-agnostic protocols; however, they do not grant backward compatibility (as all existing solutions have to be adapted to integrate the adopted communication protocol). To overcome this fact, the short-to-medium-term solution would be using trusted relays. An interesting direction for trusted relays is decentralizing the escrow party: from a set of trusted validators to a network of public nodes. It then follows from this survey that one could perceive trusted relays and blockchain-agnostic protocols to be good solutions to link private blockchains, and sidechains and smart-contract-based protocols to be suitable for interoperability among public blockchains. A network of blockchain engine-powered blockchains can be leveraged using Hybrid Connectors. For instance, there is a possible synergy between Cosmos and the Interledger Protocol: when a user wants to make an in-app payment with fiat currency (e.g., dollars) within a Cosmos zone, he or she can rely on the Interledger Protocol as a payment rail. If using cryptocurrencies to pay (e.g., Bitcoin), the Interledger router can route the transactions to a payment channel (e.g., the Lightning Network), providing a more trustworthy interaction. To connect this ecosystem to private blockchains, bridges have to be developed. To make such bridges trustable, a possible solution would be to elect a group of validator nodes, via an overlay network, that participates in the consensus of both public and private blockchains. This way, cross-chain and cross-blockchain transactions can be endorsed. It is worth mentioning that several cross-chain programming languages are appearing, such as the Hyperservice Language \cite{Liu2019} and DAML \cite{daml}.
DAML provides a unified blockchain programming model by abstracting the underlying blockchains and exposing a higher-level abstract ledger on top, similarly to HSL. DAML supports different degrees of integration: DAML as an application on the target platform, or a deeper integration where the DAML runtime engine validates transactions. Programs compiled in such languages can run on top of a BoB platform. To conclude this discussion, we remind the reader that blockchain development has been done in silos since its inception. New solutions for blockchain interoperability started emerging as of 2017, and, perhaps not surprisingly, such solutions are also being adopted in silos. While Public Connector methods are commonly used nowadays, we focus on Blockchain of Blockchains and Hybrid Connectors. Blockchain of Blockchains and Hybrid Connectors allow interoperability between blockchains, other distributed ledger technologies, and enterprise systems in the medium term. This promotes the development of blockchain interoperability standards. While blockchain matures, industries will tend to incorporate this technology into their business processes. Then, we predict that mass adoption will follow. \subsection{Supporting Technologies and Standards} \label{sec:technologies_standards} Besides the presented solutions, there is work towards the support and standardization of blockchain interoperability. Blockchain interoperability standards attempt to create, first, a ``standardized transaction format and syntax'' common to all blockchains and, secondly, a ``standardized minimal operations set'' common to all blockchains \cite{Hardjono2019}. In particular, a standardized format is important because, while fungible and non-fungible assets have a single, well-defined representation in each blockchain, arbitrary data can be manipulated freely. First, we introduce indirect contributions that promote blockchain interoperability and then the existing standards.
Recent efforts are visible in enabling heterogeneous smart contract integration through service-orientation \cite{falazi_scip}, allowing external consumer applications to invoke smart contract functions uniformly. A cross-blockchain data storage solution is another feasible way to achieve application interoperability, whereby applications, each relying on one blockchain, share a common storage. Some dApps\footnote{https://ethlance.com/} already leverage the InterPlanetary File System (IPFS) \cite{Benet2014} to create a common storage, adjacent to the blockchain. The InterPlanetary File System provides a peer-to-peer network for storing and delivering arbitrary data in a distributed file system, potentially facilitating the transfer of data across blockchains \cite{belchior2021-bungee}. Organizations are working on standardizing digital assets. The Token Taxonomy Initiative\footnote{https://tokentaxonomy.org/} is a consortium dedicated to digital token standardization. It proposes a standard to identify tokens' behavior, properties, and control interfaces according to a token classification hierarchy. This project allows application developers to utilize a standard code set for interacting with tokens regardless of the blockchain platform, thus incentivizing blockchain interoperability. In the context of general interoperability, the Ethereum ERCs are a \emph{de facto} standard\footnote{https://eips.ethereum.org/erc}. Oracles are mechanisms by which software systems provide an external source of truth for blockchains \cite{oracles2020}, and they can be centralized or decentralized \cite{Fan2018}. Typically, centralized oracles are not as dependable as decentralized oracles, as they constitute a single point of failure. Hyperledger Avalon \cite{hyperledger_avalon} defers intensive processing from the main blockchain to an off-chain channel to support centralized yet trustable oracles (by using trusted execution environments).
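The content-addressed storage pattern that makes IPFS useful as a cross-blockchain common storage can be sketched as follows. The in-memory store and the on-chain records below are illustrative assumptions, not the IPFS API: the point is that each chain anchors only a content hash, so any chain can verify it fetched the same bytes.

```python
import hashlib
import json

store = {}  # content hash -> raw bytes (stand-in for the shared storage network)

def put(data: bytes) -> str:
    """Store data under its own hash, returning a content identifier."""
    cid = hashlib.sha256(data).hexdigest()
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    """Fetch data and verify it against its identifier (self-verifying)."""
    data = store[cid]
    assert hashlib.sha256(data).hexdigest() == cid
    return data

payload = json.dumps({"invoice": 42, "amount": 100}).encode()
cid = put(payload)

# Two different blockchains anchor the same reference on-chain ...
chain_a_record = {"doc": cid}
chain_b_record = {"doc": cid}

# ... and both resolve to identical, verified content.
assert get(chain_a_record["doc"]) == get(chain_b_record["doc"]) == payload
```

Because the identifier is derived from the content itself, no chain needs to trust the storage layer: tampered data simply fails verification.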
Since multiple blockchains can use the same data, it fosters interoperability. Open source projects like Hyperledger Indy\footnote{https://www.hyperledger.org/projects/hyperledger-indy} and Hyperledger Aries\footnote{https://www.hyperledger.org/projects/hyperledger-aries} operate in the field of digital identity and self-sovereign identity. Central concepts of self-sovereign identity are decentralized identifiers (DIDs) \cite{did} and verifiable credentials \cite{vc2017}. Decentralized identifiers can be created, managed, and shared using Zero-Knowledge Proof (ZKP) mechanisms, even allowing the creation of new access control models \cite{ssibac}. Such technologies allow for identity portability, enabling cross-blockchain identities \cite{Hyland-Wood2018}. The standards presented so far are called DLT/Blockchain Enabling Technology Standards because they focus on standardizing elements that blockchains can use, as opposed to DLT/Blockchain Generic Framework Standards \cite{lima2018}. The latter refer to the standardization of blockchain interoperability data and metadata formats, identity, and protocols, and are pursued by bodies such as the IETF, ISO, the Enterprise Ethereum Alliance, IEEE, The EU Blockchain Observatory \& Forum, and the W3C. At the Internet Engineering Task Force (IETF), work is being done on a set of drafts that guide the implementation of ODAP, a protocol using gateways \cite{odap_draft_01,crp_draft_00,draft-hardjono-blockchain-interop-arch-01}. The ISO Technical Committee 307 works towards the ``standardization of blockchain and distributed ledger technologies''\footnote{https://www.iso.org/committee/6266604.html}, but has not produced any standard yet. Subgroup 7 (ISO/TC/SG7) focuses specifically on interoperability.
The Enterprise Ethereum Client Specification, currently in its seventh version, ``defines the implementation requirements for Enterprise Ethereum clients, including the interfaces to external-facing components of Enterprise Ethereum and how they are intended to be used'', including cross-chain interoperability \cite{eeav7}. The IEEE Blockchain Initiative\footnote{https://blockchain.ieee.org/standards} and the IEEE Standards Association\footnote{https://standards.ieee.org/}, through the IEEE Standards P3203, P3204, and P3205\footnote{https://blockchain.ieee.org/standards}, work on providing ``interfaces and protocols of data authentication and communication for homogeneous and heterogeneous blockchain interoperability''. The EU Blockchain Observatory \& Forum by the European Commission aims at 1) monitoring blockchain activities in Europe, 2) managing a source of blockchain knowledge, 3) creating a forum for sharing information, and 4) creating recommendations on the role the EU could play in blockchain \cite{eublock}. The same entity points out the likelihood of an increasing number of standards and adoption within governments \cite{eu_forum}. The W3C, via the Interledger Payments Community Group\footnote{https://www.w3.org/community/interledger/}, is connecting payment networks, including decentralized ledgers. Other organizations working in the area include BIA, BiTA, BRIBA, BSI, CESI, DCSA, EBP, GS1, and MOBI \cite{wef2020}. Standardization efforts focused on a specific blockchain (DLT/Blockchain Platform-Specific Standards) are, for example, the 0302 Aries Interop Profile\footnote{https://github.com/hyperledger/aries-rfcs/tree/master/concepts/0302-aries-interop-profile} and the Hyperledger Fabric Interoperability working group\footnote{https://wiki.hyperledger.org/display/fabric/Fabric+Interop+Working+Group}.
Multiple standards will likely arise and be used for each vertical industry, as there is a lack of generalized interoperability standards. Standards are then reused across industries (e.g., IEEE P2418.5). Solving interoperability in a specific sector would then pave the way for standards in other industries because the main requirement is domain expertise (ontologies are good starting points for standardization) \cite{lima2018}. The heterogeneity created by standards will pose a regulation challenge, as blockchains may spread across different jurisdictions \cite{hermes-middleware-2021}. \subsection{Use Cases with Multiple Blockchains} \label{sec:use_cases} In this section, we present use cases with multiple blockchains. More use cases can be found in Appendix \ref{a:use_cases}. The industry is still applying blockchain to use cases using only one blockchain. Consequently, it is expected that use cases with multiple blockchains are rare. Notwithstanding, according to the existing resources, it seems that there is considerable interest in use cases using multiple blockchains. As the technologies mature, novel, disruptive use cases may be found. For the sake of space, we present some general use cases involving an IoB \cite{draft-sardon-blockchain-gateways-usecases-00}. We refer readers to Appendix F for more use cases relative to Public Connectors, Hybrid Connectors, and Blockchain of Blockchains. The first big IoB use case is asset transfers, where users can transfer assets from one blockchain to another. While some approaches implement this use case in an ad-hoc way, the emergence of central bank digital currencies (CBDCs) \cite{mbcb,oecd2021} requires further efforts and standardization \cite{visa2020}. A CBDC is a digital version of a sovereign currency of a nation. A CBDC is issued by a central bank, where each unit represents a claim on the value held by that central bank.
Many blockchain features are appealing for implementing CBDCs, particularly the offered immutability, transparency, and trust distribution. Some central banks are already experimenting with blockchain, including the Monetary Authority of Singapore and the Bank of Canada \cite{draft-sardon-blockchain-gateways-usecases-00}. As each CBDC can be implemented with a blockchain, and each central bank might choose a different technology, interoperability between them is achieved using an IoB or even a BoB. Another major use case is interoperability across supply chains \cite{draft-sardon-blockchain-gateways-usecases-00, hyperledger_cactus}. A supply chain is a chain of value transfer between parties, from the raw product (physical or intellectual) to its finalized version. Managing a supply chain is a complex process because it includes many non-trusting stakeholders (e.g., enterprises, regulators). As many markets are open and fluid, enterprises do not take the time to build trust - and instead rely on a paper trail that logs the state of an object in the supply chain. This paper trail is needed for auditability and typically can be tampered with, which makes blockchain suitable to address these problems \cite{wef2020}. A key challenge of blockchain-based supply chains is to interoperate with other DLT systems. Interoperability grants that each participant of the supply chain (e.g., supplier, manufacturer, retailer) can participate in several supply chains (and thus several blockchains) using a single endpoint, simplifying the interaction process while reducing costs. Other use cases comprise connecting Hyperledger Fabric and Ethereum with the Singapore Exchange and the Monetary Authority of Singapore via node integration, and EVRYTHNG, a product connecting multiple chains via API to digitize products \cite{wef2020}. Finally, identity and data portability can be provided by an IoB approach.
Identity paradigms like self-sovereign identity \cite{ssibac} can increase identity portability by providing users control over their identities. Typically, this is achieved by rooting user credentials in a blockchain. Hence, if blockchains can communicate with identity providers that are blockchains, one can use the same identity in different blockchains. Data portability complements blockchains, allowing blockchain users to use their data outside of a blockchain without requiring significant effort. \subsection{Answers to the Research Questions} \label{sec:res_q_a} In this section, we provide answers to the presented research questions (further elaborated in Section \ref{sec:res_q}). \begin{enumerate} \item \textbf{What is the current landscape concerning blockchain interoperability solutions, both from the industry and the academia? i.e., what is the current state of blockchain interoperability solutions?} The first step towards blockchain interoperability has been creating mechanisms allowing the exchange of tokens (e.g., cryptocurrencies). We categorized such solutions as Public Connectors (Section \ref{sec_crypto}). Such category comprises Sidechains and Relays (Section \ref{subsec:crypto_side}), Notary Schemes (Section \ref{sec:crypto_notaries}), and Hash Time Lock Contracts (Section \ref{subsec:crypto_hashed}). This category provides an idea of the emergence of blockchain interoperability - but this area no longer applies solely to token exchanges between homogeneous blockchains. Novel blockchain interoperability approaches are Blockchain of Blockchains (see Section \ref{sec:be}) and Hybrid Connectors (Section \ref{subsec:blockchain_connectors}). Hybrid Connectors fall into three sub-categories: trusted relays (Section \ref{sec:block_connector_trusted_relay}), blockchain-agnostic protocols (Section \ref{sec:block_connector_blockagnosticprotocol}), and blockchain migrators (Section \ref{sec:block_connector_bm}).
We also analyzed related literature reviews on blockchain interoperability in Section \ref{sec:related_literature_reviews}. \item \textbf{Is the set of technological requirements for blockchain interoperability currently satisfied?} There are two requirements for realizing technical interoperability \cite{vitalik2016}: a pair of sufficiently mature blockchains to build artifacts that promote interoperability and ``some application or need that cannot be served by implementing it on a single blockchain.'' There are several blockchains that can be considered mature enough to support applications built on top of them \cite{fabric, Gorenflo2019,Wood2017,Kwon2016}. On the other hand, blockchain interoperability needs the necessary infrastructure and facilitating technologies. In particular, the production of standards \cite{Hyland-Wood2018,token_taxonomy_initiative}, technologies like decentralized identifiers \cite{did}, verifiable credentials \cite{vc2017}, cross-blockchain communication protocols \cite{dextt,xclaim,sok_cdl}, and the representation of blockchain smart contracts \cite{Hyland-Wood2018} can increase the likelihood of blockchain interoperability standards and solutions, as they remove considerable barriers to blockchain interoperability. Moreover, there is a set of cross-blockchain use cases that validate the need for interoperability, which will inevitably foster it \cite{hermes-middleware-2021}. In conclusion, the set of critical requirements for blockchain interoperability is currently satisfied, but there is still work to be done on standardization and on interoperability across public-private and private-private blockchains.
\item \textbf{Are there real use cases enabling a value chain coming from blockchain interoperability?} Regarding the third research question, some authors argue that blockchain interoperability is important and crucial for the survivability of this technology \cite{Hardjono2019,Pillai2019,hermes-middleware-2021,hyperservice}. Standards are paving the way for blockchain adoption \cite{Hyland-Wood2018, Deshpande2017}. It is likely that ``forward-looking interoperability standards are most likely to result in successful standards creation and facilitate industry growth'' \cite{Hyland-Wood2018}. Moreover, standardization, a requirement for mass adoption, is being developed. Given the multiple blockchain interoperability solutions, both Hybrid Connectors and Blockchain of Blockchains, some of them with considerable weight in the industry, we believe this is a very likely scenario. In Section \ref{sec:use_cases}, we expose multiple use cases that may benefit from cross-blockchain technology, which can foster adoption by enthusiasts and enterprises. In conclusion, we envision reliable solutions and standards emerging in the following years and a steady increase in blockchain adoption by enterprises and individuals alike. \end{enumerate} As a value enhancer and maturing key factor, interoperability will ultimately decide the survival of this technology. Based on the evidence collected and analyzed, we foresee increased attention to this research area, with blockchain interoperability gaining traction among academia and the industry. \subsection{Open Issues and Challenges} \label{sec:issues} In this section, we present open issues and challenges regarding blockchain interoperability and, in a more general sense, the adoption of blockchain. Nowadays, solutions available to build decentralized applications lack interoperability, thwarting scalability \cite{tbig}.
As Liu et al.~note, ``it is very challenging to enforce correct executions in a full trust-free manner where no trusted authority is allowed to coordinate the executions on different blockchains'' \cite{hyperservice}. Although interesting and notable recent advances in this area make interoperability a reality, there is still a gap between theory and practice, as much of the existing work is mostly conceptual. Given the vast number of blockchain platforms, fragmentation regarding solutions and their approach to interoperability is strongly present, for example, in IoT scenarios \cite{Zhu2019}. A combination of multiple platforms tailored for specific purposes, which can be public, private, or consortium, adds overhead to managing workflows. In particular, this concern is intensified when multiple blockchains are serving a specific application. Concerning blockchain scalability, the internet of blockchains can be realized upon improvements to current performance, both in public and private blockchains. Techniques such as implicit consensus and data sharding can improve transaction throughput and storage \cite{omniledger}. However, blockchain sharding requires solving cross-blockchain transaction routing and retrieval and asset referencing (also known as the discoverability problem). It is challenging to coordinate transactions from different blockchains to support a cross-chain dApp, as different blockchains have different properties (e.g., architecture, protocols \cite{Abebe2019}, service discovery, access control, among others). In particular, reverting a transaction that depended on another can be cumbersome, especially given the different transaction finalities of different blockchains. Some solutions have proposed a mechanism to overcome such a challenge (blockchain of blockchains) \cite{Verdian2018,hyperservice}. Although a promising approach, the applicability of these solutions to arbitrarily complex cross-blockchain dApps is still unclear.
More research is required to confirm the feasibility of this approach. Some authors \cite{vo2018} highlight problems related to the GDPR, such as security, trust, confidentiality, and data privacy issues. In particular, security threats are exacerbated by the presence of multiple blockchains and possibly multiple administrators. Regarding privacy, the authors underline problems with the right to be forgotten, in which a user can ask for his or her data to be deleted from the blockchain. Currently, most blockchains do not provide effective mechanisms that can respond to this request. Blockchain fine-grain access control is appointed as a key requirement to minimize information leakage and confidentiality risk. While blockchain interoperability reduces dependencies on a single blockchain and, consequently, risk (e.g., if the blockchain is attacked) \cite{dextt}, it does not eliminate the inherent risks. It is worth underscoring that the multiple blockchain approach is more complicated than the sum of its parts, as there is extra complexity underlying the cross-chain communication. This adds challenges to governance: whereas a private consortium can use Hybrid Connectors at will to interoperate systems (centralized and/or decentralized), the governance model is not straightforward within community projects, supported by public blockchains. In short, the most relevant open issues towards blockchain interoperability are: \begin{itemize} \item The gap between theory and practice, including the lack of standardization and implementations \cite{hardjono2021,Zhu2019}, \item Discoverability \cite{Verdian2018,hyperservice, Abebe2019}, \item Privacy and Security \cite{Thomas2015,vo2018,Wood2017,sok_cdl}, \item Governance \cite{Hardjono2019, Hardjono2019a,wef2020,Qasse2019}.
\end{itemize} Notwithstanding, security \cite{surv_sec, sok_crpyto}, privacy \cite{casino2019}, and scalability (e.g., using sharding \cite{surv_shar} or novel blockchain systems \cite{appendableblock}) remain the most prominent areas to be improved in the blockchain space. \section{Research Directions} \label{sec:research_directions} New tools, frameworks, standard proposals, and even programming models are emerging and need further development. Programming models such as Polkadot and Cosmos offer developers a way to create their blockchains effectively and connect them to other blockchains. Protocols such as ILP and UIP allow cross-blockchain transactions. Programming languages such as HSL and DAML aim at embedding different blockchain models, providing an abstraction for cross-blockchain dApps. Although one can have good reasons to utilize blockchain interoperability solutions for public or private blockchains, few solutions are available for connecting them. The problem of effectively obtaining state from permissioned blockchains \cite{abebe2020} makes interoperating with private blockchains a challenge \cite{wef2020,iiconsortium2020}. Thus, connecting public and private blockchains bidirectionally remains an open problem. One of the problems that bidirectional communication across permissioned and permissionless ledgers poses is semantic compatibility. Technical interoperability provides the technical foundation that realizes interoperability but does not grant semantic interoperability per se \cite{Hardjono2019}. There is, therefore, a gap: how can we effectively combine both blockchain types to enable new use cases? How can we make sure a solution complies with the goals of all involved stakeholders? Disciplines such as view integration can help to provide an answer \cite{belchior2020_bpvi}. 
View integration is the process of merging the views of stakeholders participating in different blockchains into a consolidated view of the same business process. Another considerable obstacle for blockchain adoption is its fast-paced development. The development of blockchain interoperability standards may provide a way for more flexibility regarding backward compatibility. In light of the present study and the identified open issues and challenges, we propose research directions based on some sections of our survey: research on architecture for enabling blockchain interoperability, Public Connectors, Blockchain of Blockchains, Hybrid Connectors, and supporting technologies, standards, use cases, and others. \emph{Architecture for Blockchain Interoperability} (Section \ref{a:architecture}): \begin{itemize} \item Define a blockchain interoperability maturity model, modeling interoperability at its various layers (e.g., technological, semantic, organizational). \item Model the different views on the various types of interoperability, according to different stakeholders (e.g., the provider's technical view on a cross-blockchain dApp \emph{versus} the semantic view of the end-user on the same cross-blockchain dApp). \item Study blockchain interoperability semantics by exploring, for example, the research area of view integration \cite{view_int}. \end{itemize} Public Connectors (Section \ref{sec_crypto}): \begin{itemize} \item Research on how permissioned blockchains can benefit from sidechains to improve scalability and privacy. \item Develop protocols to allow fiat money exchange and higher liquidity on decentralized exchanges. Conversely, improve the level of privacy and security of centralized exchanges. \end{itemize} Blockchain of Blockchains (Section \ref{sec:be}): \begin{itemize} \item Integration of existing blockchain systems with Blockchain of Blockchains. 
\item Study how Blockchain of Blockchains can provide a reliable interoperability scheme bridging permissioned blockchains and permissionless blockchains. \item Connect Blockchain of Blockchains to both centralized systems and decentralized ledger systems (e.g., connect Polkadot to Visa). \end{itemize} Hybrid Connectors (Section \ref{subsec:blockchain_connectors}): \begin{itemize} \item Decentralize the trust of trusted relays by integrating them with public blockchains (e.g., by submitting the state periodically to a public blockchain); \item Study how blockchain-agnostic protocols can be easily adapted to existing ledgers. \item Explore the blockchain of blockchains approach as an advance in dependable blockchain-based applications. \item Improve atomicity and consistency guarantees on cross-blockchain decentralized applications. \item Explore blockchain migration across public and permissioned ledgers. Such migration schemes can be decentralized and adapt to functional and non-functional requirements imposed by stakeholders. \item Explore blockchain migration via non-trusted relays (e.g., using a set of public escrow nodes following a protocol). \item Develop frameworks for multiple blockchain management. Such frameworks should respond to multiple stakeholder needs, decentralizing trust. \item Model integration abstraction layers that enable the development of universally connected contracts. \item Research on the visualization of CC-Txs. 
\end{itemize} Supporting technologies and standards, use cases, and others (Section \ref{sec:discussion}): \begin{itemize} \item Work along with regulators and standardizing bodies to come up with blockchain interoperability standards across industries; \item Research on blockchain interoperability programming languages, supporting tools, and standards, including but not limited to cross-blockchain programming languages and frameworks, decentralized identifiers and verifiable credentials, and blockchain interoperability standards for enterprise blockchains; \item Explore new use cases using multiple blockchains, the ``value-level'' interoperability \cite{hyperledger_cactus}. \item Research synergies between cryptocurrency-based interoperability approaches, Blockchain of Blockchains, and Hybrid Connectors. \item Study security aspects of blockchain interoperability. \item Understand the implications of the different interoperability layers (value, semantic, organizational, among others). \end{itemize} \section{Conclusion} \label{sec:concl} In this paper, we performed a systematic literature review on blockchain interoperability. We systematically analyzed, compared, and discussed 80 documents, corresponding to 45 blockchain interoperability solutions. By including grey literature, we expect to thwart intrinsic limitations of the blockchain interoperability research area, such as a considerable presence of the industry. By exploring each solution methodologically, this study provides interesting insights, distributed across three categories: Public Connectors, Blockchain of Blockchains, and Hybrid Connectors. Although sidechain and HTLC solutions are gaining traction in the industry, blockchain interoperability is not solely about Public Connector solutions. New approaches have been emerging since 2017. Hybrid Connectors provide a varied landscape of solutions, adapted for the majority of the use cases. They are likely to be used to produce cross-blockchain dApps. 
Blockchain of Blockchains solutions are likely to be adopted by the industry in the short-to-medium term, by leveraging easy-to-produce, customizable blockchains. Our findings allow us to conclude that the conditions for research on blockchain interoperability are met, allowing a multitude of new use cases. Thus, we expect interest in this research area to rise considerably. This work is a step towards making the blockchain ecosystem more practical, by easing the work of developers and researchers. We expect that this study provides a robust and dependable starting point whereby developers and researchers can work in the blockchain interoperability research area. \begin{acks} The authors would like to thank the anonymous reviewers that constructively provided suggestions that significantly improved this paper. Thanks to Peter Somogyvari, Paul DiMarzio, Jag Sidhu, Sergio Lerner, Andy Leung, Travis Walker, Bill Laboon, Josh Lee, Austin King, Oliver Birch, Thomas Hardjono, and Miguel Matos for fruitful discussions regarding blockchain interoperability. We thank Daniel Hardman and Ken Elbert for constructive discussions about DIDs and verifiable credentials. Special thanks go to Iulia Mihaiu, Cláudio Correia, Benedikt Putz, Francisco Braga, Gavin Wood, João Ferreira, Miguel Pardal, Jonas Gehrlein, and Dilum Bandara for constructive comments and suggestions that greatly contributed to improving the paper. The authors express their gratitude to the Linux Foundation for providing the Diversity \& Inclusion scholarship. This work was partially supported by the EC through project 822404 (QualiChain), and by national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UIDB/50021/2020 (INESC-ID). \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Picture1_v3ga.png} \caption{\small D0$_3$ (space group Fm$\bar3$m no. 225, prototype BiF$_3$) and $\beta$-W A15 (space group Pm$\bar3$n no. 223, prototype Cr$_3$Si) crystal structures of V$_3$Ga.} \label{fig1} \end{figure} Combining superconductivity and antiferromagnetism could find useful applications in quantum spintronic devices. Superconductivity and magnetism were once thought to be mutually exclusive, as magnetic fields are efficient at closing the superconducting gap. Nevertheless, it was found that superconducting materials could contain 3$d$ magnetic transition metal atoms and magnetic lattices as well \cite{Canfield-1998}. Following that, high-$T_c$ cuprate superconductors were found to have exceedingly strong magnetic exchange \cite{RevModPhys.78.17}, while superconducting Fe-pnictides were found to have large Fe moments of several Bohr magnetons \cite{Mizuguchi,Lumsden_2010}. Of interest here are binary vanadium compounds such as V$_3$Al that belong to a class of simple superconductor materials with the A15 crystal structure \cite{Du_2013,Bansil1999,Testardi-1997,Ohshima_1989}. Interestingly, V$_3$Al was also synthesized in a non-superconducting D0$_3$ Heusler phase with antiferromagnetic (AFM) order \cite{Jamer4}. This D0$_3$ phase of V$_3$Al was predicted to be a gapless semiconductor \cite{Gao2013,V3AlGalanakis}, and was found experimentally \cite{Jamer4} to be a G-type antiferromagnet with a N\'{e}el temperature of $T_N=600$~K. One can draw the conclusion that V$_3$Z-type compounds represent a class of hybrid materials with superconducting and magnetic properties at the same temperature, which could have applications in possible fault-tolerant quantum computers hosting Majorana modes and in other important quantum technology applications. Another well-known compound in this family is V$_3$Ga, which has been used in superconducting applications for many years \cite{markiewicz_1977}. 
This material has been known since 1956 to have remarkable low-temperature properties related to elastic constants, Knight shifts, electrical resistance, magnetic susceptibility, and superconductivity, and it has been investigated extensively both experimentally and theoretically (see e.g. Refs.~\cite{Matthias-1956,Weger-1964, Izyumov_Kurmaev, Testardi-1975, Klein-1978, Jarlborg-1979,MaterialsProj}). The critical temperature of superconducting V$_3$Ga in the A15 phase is 14~K. V$_3$Ga can exist in two near-equilibrium phases, the A15 superconducting phase and the AFM D0$_3$ phase, an interesting and potentially useful consequence of their similar formation energies. Since the arrangement of atoms in binary V$_3$Ga can accommodate both D0$_3$ and A15 structures, shown in Fig.~\ref{fig1}, one must study the stability of these two phases. Density functional theory~(DFT) was used here to compute the formation energies as a function of crystalline and magnetic structures. Previous theoretical calculations for the D0$_3$ structure of V$_3$Ga by Galanakis {\it{et al.}} \cite{V3AlGalanakis} have predicted a Heusler G-type AFM phase with a N\'{e}el temperature well above room temperature, which makes the compound attractive for applications~\cite{Jamer4,JamerCrCoGa,JamerPRA2017}. A recent study also reported on the similar formation energies of the two structures \cite{He-2019}. The present magnetization measurements on bulk samples show a strong Meissner effect indicating a superconducting transition temperature of 14~K. \section{Experimental Details and Results} Bulk samples of V$_3$Ga were synthesized via arc-melting using an Edmund Buehler MAM-1. The ingots were subsequently annealed at 1050$^\circ$C for 48 hours in an argon environment to promote homogeneity and quenched in an ice bath. The composition was confirmed using energy dispersive spectroscopy (EDS) to be within $\pm$2\% of the nominal composition. 
Magnetic measurements were performed in a Quantum Design MPMS XL-5 SQUID magnetometer in magnetic fields up to 5~T and at temperatures from $T$~=~2 to 400~K. Synchrotron X-ray diffraction was done using beamline 11-BM at the Advanced Photon Source (APS). The structure was refined using TOPAS.\cite{TOPAS} Overall, the structural results determined that the sample contained a mixed phase of both A15 and D0$_3$ V$_3$Ga. The Rietveld refinement analysis determined that the A15 phase was predominant at 81\%, with the D0$_3$ phase accounting for the remaining 19\% (Supplemental Information). Results of the magnetic measurements are shown in Figs.~\ref{fig2} and \ref{fig3}. In particular, Fig.~\ref{fig2} shows a magnetic hysteresis from the Meissner effect at low temperature, which is characteristic of flux pinning of the A15 type-II superconducting phase. The superconducting transition temperature $T_c=13.6$~K is deduced from the plot of the magnetization as a function of temperature, as shown in the inset of Fig.~\ref{fig2}. The dimensionless magnetic susceptibility is shown in the Supplemental Information, which indicates a $\sim$90\% superconducting volume fraction, in line with expectations from the compositional analysis of the X-ray diffraction data through Rietveld refinement. The inset of Fig.~\ref{fig3} shows the magnetic moment as a function of applied field at $T$~=~300~K. The moment is linear up to at least $\mu_0$H~=~5~T, where the moment is only $\mu~=~0.007~\mu_B$ per formula unit (f.u.). This linear-in-H moment is similar to that found in the AFM D0$_3$ phase of V$_3$Al \cite{Jamer4}. A focus on the superconducting properties of the A15 phase is presented in the Supplemental Information, which shows an upper critical field H$_{c2}$ of about 3.5~T for the isothermal magnetization at 10~K. This value is significantly lower than the previously accepted value of 15~T at 10~K for the A15 V$_{3}$Ga phase. 
\cite{Foner, Decker} The decrease in H$_{c2}$ is most likely due to the antiferromagnetic phase from the D0$_3$ component, which contributes a significant magnetic signal at higher fields. Previous reports of off-stoichiometry A15 V$_{3}$Ga \cite{Foner} did not alter H$_{c2}$ as drastically as that observed in the current system; therefore, it is reasonable to attribute this significant decrease in H$_{c2}$ to the antiferromagnetic signal of another magnetic V$_{3}$Ga phase present in the as-synthesized compound mixture. At higher temperatures there is a notable peak in the temperature dependence of the low-field (H~=~500~Oe) moment at $T$~=~360~K, shown in Fig.~\ref{fig3}. A similar feature was seen in the temperature-dependent resistivity of V$_3$Ga at 350~K \cite{He-2019}; however, the peak was not assigned to a magnetic transition. The similar temperature-dependent features in both the magnetization and the resistivity could arise from either a structural or a magnetic transition. A small peak in the magnetization of an AFM is generally characteristic of a N\'{e}el temperature \cite{Jamer4}, so there is a reasonable case to assign the small peak observed here in the magnetization of V$_3$Ga to some type of magnetic transition. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figure2.png} \caption{\small Magnetization of superconducting V$_3$Ga versus magnetic field at low temperatures under zero-field cooling (ZFC), showing a hysteretic peak around H~=~0 characteristic of flux pinning in a type-II superconductor. The inset shows the temperature-dependent Meissner flux exclusion below $T_C=13.6$~K taken in a field of H~=~500~Oe.} \label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=0.38\textwidth]{Figure3.png} \caption{\small Magnetic properties of AFM V$_3$Ga versus temperature and magnetic field. Magnetization versus temperature taken at H~=~500~Oe shows a peak at 360~K, indicating a magnetic transition. 
(inset) Magnetic moment versus magnetic field taken at $T$~=~300~K, showing a small moment of only $\mu$~=~0.007~$\mu_B$/f.u. at $\mu_0$H~=~5~T.} \label{fig3} \end{figure} \begin{table*}[tbp] \caption{The calculated equilibrium magnetic states (\textit{Mag. st.}), lattice constants ($a_0$ in \AA), atom-resolved and total magnetic moments ($\mu_{V_i}$ and $\mu_{tot}$ in $\mu_B$/f.u.), total energy ($E_0$ in eV/atom), as well as the energy difference between the D0$_3$ and A15 structures ($\Delta E_{D0_3-A15}$ in eV/atom) for V$_3$Ga. The results are shown for various exchange-correlation approaches described in the text. For the A15 structure, the SCAN solutions for FM and AFM-III are almost degenerate and within 5 meV/atom.} \label{table-1} \begin{tabular}{|l|ccccccc|ccccccc|c|} \hline \multicolumn{1}{|c}{} & \multicolumn{7}{|c|}{D0$_3$} & \multicolumn{7}{c|}{A15} & \multirow{2}{*}{$\Delta E$} \\ & \textit{Mag. st.} & $a_0$ & $\mu_{V_1}$ & $\mu_{V_2}$ & $\mu_{V_3}$ & $\mu_{tot}$ & $E_0$ & \textit{Mag. st.} & $a_0$ & $\mu_{V_1}$ & $\mu_{V_2}$ & $\mu_{V_3}$ & $\mu_{tot}$ & $E_0$ & \\ \hline LDA & AFM & 5.902 & 0.429 & -0.429 & 0.0 & 0.0 & -8.487 & FM & 4.678 & 0.089 & 0.164 & 0.112 & 0.368 & -8.572 & 0.085 \\ GGA & AFM & 6.064 & -1.314 & 1.314 & 0.0 & 0.0 & -7.600 & FM & 4.788 & 0.222 & 0.390 & 0.303 & 0.916 & -7.645 & 0.045 \\ GGA+$U$ & AFM & 6.130 & -1.916 & 1.917 & 0.0 & 0.0 & -6.647 & AFM-III & 4.879 & $\pm$1.351 & $\pm$1.508 & $\pm$1.502 & 0.0 & -6.632 & -0.015 \\ SCAN & AFM & 6.035 & -1.848 & 1.848 & 0.0 & 0.0 & -17.486 & FM & 4.744 & 0.308 & 0.523 & 0.414 & 1.245 & -17.442 & -0.044 \\ & & & & & & & & AFM-III & 4.744 & $\pm$0.268 & $\pm$0.326 & $\pm$0.330 & 0.0 & -17.437 & - \\ \hline \end{tabular}% \end{table*} \section{Computational Details and Results} Density functional theory (DFT) with the projector augmented wave~(PAW) method as implemented in VASP~\cite{Kresse-1996,paw} was used for the electronic structure calculations. 
Various approximations were considered for the exchange-correlation~(XC) energy, such as the local density approximation~(LDA)~\cite{Perdew_LDA}, the generalized gradient approximation~(GGA)~\cite{Perdew-1991,Burke-1997,Perdew1996}, GGA+$U$ (with the Hubbard $U$ correction), and the strongly constrained and appropriately normed~(SCAN)~meta-GGA~\cite{Perdew-1999,Tao-2003,Sun-2015}. We allowed full lattice and spin relaxation. The calculations were converged to an accuracy of 10$^{-8}$~eV, while the convergence criterion for the residual forces in the structural optimization was 10$^{-7}$~eV/\AA. For GGA+$U$, we used the Coulomb parameters $U$ = 2.0 eV and $J$ = 0.67 eV provided by He \textit{et al.}~\cite{He-2019}. The main results are summarized in Table~\ref{table-1}, while more details are contained in the Supplemental Material. The general trend of correlation effects beyond GGA is to stabilize the D0$_3$ solution with respect to the A15 one. The ground state (GS) in both LDA and GGA is the ferromagnetic (FM) phase with the A15 structure. When correlation effects are included, the GS becomes the AFM G-type D0$_3$ solution. Regarding the A15 solution, the effect of correlations is to stabilize an AFM-III solution~\cite{He-2019,Supplemental} with respect to the FM one. Within SCAN, the FM and AFM-III solutions in the A15 structure are almost degenerate, within 5 meV/atom (FM is marginally more stable). However, GGA+$U$ fully stabilizes the AFM-III solution. The AFM-III solution, having a net zero magnetic moment, is more compatible with the superconducting properties of the A15 structure. \begin{figure} \centering \includegraphics[width=\columnwidth]{DOS_total.png} \vfill \includegraphics[width=\columnwidth]{DOS_additional.png} \caption{\small Total DOS for the AFM D0$_3$ structure calculated with the LDA, GGA, GGA+$U$ and SCAN methods. 
The lower figure shows the gapless region near the Fermi level on an expanded energy scale.} \label{fig4} \end{figure} Given the observation of the dual phase in the present V$_3$Ga samples, SCAN may exaggerate the stabilization of the D0$_3$ solution, while GGA+$U$ gives almost degenerate A15 and D0$_3$ solutions within 15 meV/atom, a value found previously in DFT calculations \cite{He-2019} that is in better accord with the experiment. SCAN is also known to exaggerate the magnetic moments of transition metal atoms, which are well described within GGA~\cite{isaacs2018,fu2018,ekholm2018}. In order to alleviate this problem, a modification of SCAN with deorbitalization has been suggested recently~\cite{deorbit}. The calculated electronic structure is represented by the density of states (DOS) shown in Fig.~\ref{fig4}. The LDA and GGA schemes give an almost gapless semiconductor, while the effects of correlation beyond GGA within GGA+$U$ and SCAN lead to the opening of a gap of about 0.2~eV. A similar gap opening has been observed by Buchelnikov~\textit{et~al.}~\cite{Buchelnikov-2019} in other Heusler alloys. In order to estimate the magnetic transition temperature for the AFM G-type D0$_3$ phase, the GGA solution is more appropriate. Using Monte Carlo simulations with \textit{ab initio} exchange integrals and a Heisenberg model~\cite{Supplemental}, we obtain the N\'{e}el temperature $T_N = 590$~K, which is somewhat higher than the peak in the experimental M(T) data at 360~K that was extracted from Fig.~\ref{fig3}. However, the N\'{e}el temperature is found to be strongly affected by disorder. Effects of disorder on the N\'{e}el temperature are calculated using the mean field approximation implemented in the SPR-KKR package. Figure~\ref{fig5} illustrates how $T_N$ collapses for increasing disorder when the Ga atoms are exchanged with the nonmagnetic V atoms (those V atoms lying between the two antiferromagnetically coupled magnetic V atoms). 
It is seen that $T_N$ goes to zero at 20\% exchange. Thus, a reduction of the N\'{e}el temperature to $T_N$~=~360~K would require only 6\% of the Ga atoms exchanging with nonmagnetic V atoms. However, we note that there is a sizeable energy barrier for V-Ga exchange. Nevertheless, these results illustrate the importance of the nonmagnetic V atoms to the exchange between the other two antiferromagnetically coupled V atoms. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig_TN.png} \caption{\small Dependence of the N\'{e}el temperature of V$_3$Ga on disorder, using calculations based on the SPR-KKR package. Disorder was obtained by exchanging a fraction x of the Ga atoms with the nonmagnetic V atoms. Note that $T_N$ at x~=~0 here is somewhat larger than that found using the more accurate simulation described in the text.} \label{fig5} \end{figure} \section{Conclusions} We have studied the dual-phase superconducting A15 phase and the semiconducting AFM D0$_3$ phase of V$_3$Ga. To rationalize our results, we have considered several models within DFT. We find that the effect of more accurate XC corrections within DFT is to eliminate or to weaken the total FM moment given by the simpler LDA and GGA schemes in the A15 cell. This FM moment could jeopardize the A15 superconducting properties. Correlation effects also tend to stabilize the D0$_3$ solution. However, this trend could be exaggerated within SCAN. Concerning the formation energies, we believe that the most accurate results lie between GGA and SCAN; therefore, we deduce that the A15 and D0$_3$ phases are degenerate within an uncertainty of about 10 meV/atom. Finally, assuming that GGA gives the best description of the AFM magnetic moments of the V atoms in the D0$_3$ solution, we estimated the N\'{e}el temperature to be $T_N = 590$~K, which becomes lower with disorder in the atomic sublattices. 
The present results indicate the possibility of using V$_3$Ga for quantum technology devices that require both superconductivity and antiferromagnetism at the same temperature. \begin{acknowledgements} Use of the Advanced Photon Source at Argonne National Laboratory was supported by the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. The work at Northeastern University was supported by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences Grant No. DE-SC0019275 (materials discovery for QIS applications) and benefited from Northeastern University's Advanced Scientific Computation Center and the National Energy Research Scientific Computing Center through DOE Grant No. DE-AC02-05CH11231. The work of Chelyabinsk State University was supported by RSF-Russian Science Foundation project No.~17-72-20022 (computational studies). This work was supported by NSF Grants DMR-1904446 (M.E.J) and DMR-1905662 (D.H.). V.B. acknowledges support from the NUST ``MISiS'' grant No.~K2-2020-018. B.B. acknowledges support from the COST Action CA16218. \end{acknowledgements} \label{References} \bibliographystyle{aip}
\section{Introduction} Tensors, as a generalization of vectors and matrices, arise in many data processing applications and have attracted great interest. For instance, they appear in video inpainting \cite{tensor_video}, magnetic resonance imaging (MRI) data recovery \cite{tensor_MRI, MRI_tensor}, 3D image reconstruction \cite{tensor_3dimage, tensor_3DReconstruction}, high-order web link analysis [16], hyperspectral image (HSI) or multispectral image recovery \cite{tensor_HSI, chen2018tensor}, personalized web search \cite{tensor_web}, and seismic data reconstruction \cite{tensor_seismic_data}. Tensor completion tries to recover a low-rank tensor from its partially observed entries. A large number of tensor completion methods have been proposed and successfully used in many applications. Among them, the tensor rank minimization based methods are considered state-of-the-art methods with promising performance, and their robustness to noisy and missing data has also been proven \cite{lu2019TRPCA}. However, due to the nonunique definitions of the tensor rank, it is extremely hard to directly solve the tensor rank minimization problem. To overcome this issue, many researchers have been devoted to defining the tensor rank based on different decomposition methods, such as matrix factorization \cite{LMaFit}, CANDECOMP/PARAFAC (CP) decomposition \cite{CP}, Tucker decomposition \cite{tucker_1, ZENG_HSI_tensor}, tensor singular value decomposition (t-SVD) \cite{t_SVD}, tensor train \cite{tensor_train_TingzhuHuang} and tensor ring \cite{tensor_ring_YipengLiu}. The commonly used definitions of tensor rank are the CP-rank, the Tucker-rank, and the multi-rank and tubal-rank based on t-SVD. However, it is NP-hard to solve the CP-rank minimization problem, which has no tractable relaxation and has certain limitations in applications. Although the Tucker-rank, relying on matrix ranks, is relatively simple, it is also NP-hard to directly minimize the Tucker-rank. To tackle this difficulty, the sum of nuclear norms (SNN) \cite{HaLRTC} is introduced as a convex relaxation of the Tucker-rank. 
Specifically, SNN is defined as the sum of the nuclear norms of the unfolding matrices along all dimensions of the tensor. Due to its similar approximation to the matrix case and convenient calculation algorithms, SNN is widely used in tensor completion tasks \cite{tensor_Qrank}. Besides, the tensor multi-rank and tubal-rank induced from t-SVD are computable. As the tightest convex surrogate for the tensor multi-rank, the tensor nuclear norm (TNN) \cite{t_SVD} is defined as the sum of the matrix nuclear norms of the frontal slices of the Fourier-transformed tensor. TNN has shown its effectiveness in keeping the intrinsic structure of tensors and has attracted extensive attention for tensor completion problems in recent years \cite{lu2019TRPCA}. Due to the convexity of matrix nuclear norms, SNN and TNN have limitations in the accuracy of their approximation to the tensor rank function. Recently, a number of studies \cite{list_nonconvex,Original_BFMN}, both practical and theoretical, have shown that nonconvex approximations of the rank function can provide better estimation accuracy and variable selection consistency than the nuclear norm. For example, a partial sum of the tensor nuclear norm (PSTNN) is proposed as a nonconvex surrogate of the tensor rank by Jiang et al. \cite{PSTNN}; Xue et al. \cite{xue2019nonconvex} unfold the underlying tensor along all its modes and use a nonconvex logarithmic surrogate function to refine the rank approximation. Actually, beyond the logarithmic function used by Xue et al., a series of nonconvex surrogates have been proposed to better approximate the rank function, such as the minimax concave function \cite{Folded_concave}, log-sum function \cite{logsum_penalty}, log-determinant function \cite{ji_nonlogDet}, $L_p$ norm for $p\in (0, 1)$ \cite{lysaker_noise_2004_10_27,burger_nonlinear_2006_10_28}, $L_{1/2}$ norm \cite{lou_L1L2} and $\gamma$ norm \cite{Non_LRMA}. 
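As a concrete illustration of the TNN computation described above, the following NumPy sketch sums the matrix nuclear norms of the frontal slices of the Fourier-transformed tensor. This is a minimal sketch: normalization conventions for TNN vary across papers (some divide by the third dimension), and the `tnn` helper name is ours.

```python
import numpy as np

def tnn(X):
    """Tensor nuclear norm of a 3-way array: FFT along the third mode,
    then sum the matrix nuclear norms ('nuc' = sum of singular values)
    of the frontal slices. Normalization conventions vary across papers."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xf[:, :, k], 'nuc') for k in range(X.shape[2]))

X = np.random.default_rng(0).standard_normal((5, 4, 3))
assert tnn(np.zeros((5, 4, 3))) == 0.0     # norm of the zero tensor
assert np.isclose(tnn(2 * X), 2 * tnn(X))  # absolute homogeneity of a norm
```

Because the FFT mixes the frontal slices, TNN is generally not the same as summing nuclear norms of the original slices; this is what lets it capture low-rankness along the third mode.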
In addition, TNN and PSTNN involve the singular value decompositions (SVDs) of many matrices, which are time-consuming. To cope with this issue, Xu et al. \cite{Tmac} propose a parallel matrix factorization low-rank tensor completion model (TMac), which obtains promising results with less running time than TNN and PSTNN. Further, combined with total variation (TV) regularization, Ji et al. \cite{MFTV} propose a TV-regularized low-rank matrix factorization method (MF-TV) for low-rank tensor completion problems. \begin{figure*}[!t] \centering \subfloat[Original image]{\includegraphics[width=0.23\linewidth]{Image_video_b94.png}}% \hfil \subfloat[Our model]{\includegraphics[width=0.23\linewidth]{Image_video_b94_sr005LxLxNN}}% \hfil \subfloat[TMac]{\includegraphics[width=0.23\linewidth]{Image_video_b94_sr005Tmac}}% \hfil \subfloat[TNN]{\includegraphics[width=0.23\linewidth]{Image_video_b94_sr005TNN}}% \caption{The completed results of Suzie with 95\% missing entries by different methods. From (a) to (d): the original image and the results by our model, TMac, and TNN, respectively.} \label{TNN_Tamc_our-model_figure_video_sr0.05} \end{figure*} Although the above-mentioned low-rank tensor completion methods show great success in dealing with various issues, three major open questions have yet to be addressed. Firstly, in the tensor rank approximations based on tensor decomposition or matrix factorization, the low-rank priors of the underlying tensor are only explored through convex or nonconvex relaxations of the original tensor rank function, while the low-rank priors of the factors obtained by the decomposition are not investigated further. Secondly, TNN or PSTNN based methods \cite{TNN, PSTNN} need to compute many SVDs, which becomes very slow or even not applicable for large-scale problems \cite{Tmac}. 
Thirdly, the aforementioned methods adopt a single surrogate of the tensor rank function, which would cause suboptimal solutions of the low-rank tensor completion problem \cite{T_Sp} and cannot fully explore the low-rank priors in all modes, especially when the tensor data is heavily contaminated or the sampling rate is very low. One can see an example in Fig. \ref{TNN_Tamc_our-model_figure_video_sr0.05}. In this paper, motivated by the much better performance of models that utilize the low-ranknesses in all modes of tensors \cite{HaLRTC, Tmac}, instead of using a single surrogate of the tensor rank function to represent the low-rank prior of the underlying tensor directly, we first apply parallel matrix factorization to all modes of the underlying tensor. Further, a novel double $L_{\gamma}$ norm, a kind of nonconvex penalty, is designed to represent the underlying joint-manifold drawn from the mode factorization factors. By exploiting this auxiliary information, our method leverages both low-rank decomposition and low-rank approximation, which helps to accurately estimate the mode factors and missing entries. A block successive upper-bound minimization based algorithm is designed to efficiently solve the proposed model, and it can be demonstrated that our numerical scheme converges to the coordinatewise minimizers. The proposed model has been evaluated on three types of public tensor datasets, which show that our algorithm can recover a variety of low-rank tensors with significantly fewer samples than the compared methods. The rest of this paper is organized as follows. Section \ref{notation} introduces some notations for tensors and their operations. Section \ref{related works} reviews the related works. In Section \ref{the proposed model}, the proposed model is presented and its optimization is deduced in detail. In Section \ref{Numerical experiments}, the proposed model is evaluated on several public tensor datasets. 
Section \ref{conclusion} gives the conclusions. \section{Preliminary} \label{notation} \subsection{Notations} In this paper, following \cite{Tmac}, vectors, matrices and tensors are denoted by bold lower-case letters $\mathbf{x}$, bold upper-case letters $\mathbf{X}$ and calligraphic letters $\mathcal{X}$, respectively. Let $x_{i_{1} \ldots i_{N}}$ represent the $\left(i_{1}, \ldots, i_{N}\right)$-th component of an $N$-way tensor $\mathcal{X}$. Then, for $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_{1} \times \ldots \times I_{N}},$ their inner product is defined as \begin{equation} \label{equation:inner_product} \langle\mathcal{X}, \mathcal{Y}\rangle=\sum_{i_{1}=1}^{I_{1}} \cdots \sum_{i_{N}=1}^{I_{N}} x_{i_{1} \cdots i_{N}} y_{i_{1} \cdots i_{N}}. \end{equation} Based on (\ref{equation:inner_product}), the \textbf{Frobenius norm} of a tensor $\mathcal{X}$ is defined as $\|\mathcal{X}\|_{\text{F}}=\sqrt{\langle\mathcal{X}, \mathcal{X}\rangle}$. \textbf{Fibers} of a tensor $\mathcal{X}$ are the vectors obtained by fixing all indices of $\mathcal{X}$ except one, and \textbf{slices} of $\mathcal{X}$ are the matrices obtained by fixing all indices of $\mathcal{X}$ except two. The \textbf{mode-$n$ matricization}/\textbf{unfolding} of $\mathcal{X} \in \mathbb{R}^{I_{1} \times \ldots \times I_{N}}$ is the matrix $\mathbf{X}_{(n)} \in \mathbb{R}^{I_{n} \times \Pi_{j \neq n} I_{j}}$ whose columns are the mode-$n$ fibers of $\mathcal{X}$ in lexicographical order. Furthermore, to clearly represent the matricization process, we define \textbf{unfold}$_{n}(\mathcal{X})=\mathbf{X}_{(n)}$, and \textbf{fold}$_{n}$ is the inverse of \textbf{unfold}$_{n}$, i.e., \textbf{fold}$_{n}\left(\text {\textbf{unfold}}_{n}(\mathcal{X})\right)=\mathcal{X}$.
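As a concrete check of these conventions, the following Python/NumPy sketch (the helper names are ours, and one common lexicographical ordering of the mode-$n$ fibers is assumed) implements \textbf{unfold}$_n$ and \textbf{fold}$_n$ and verifies the round-trip identity as well as the invariance of the Frobenius norm under matricization:

```python
import numpy as np

def unfold(X, n):
    # Mode-n matricization: bring mode n to the front, then flatten the
    # remaining modes into the columns (one common ordering convention).
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    # Inverse of unfold: reshape back, then move mode n into place.
    lead = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(lead), 0, n)

X = np.arange(24, dtype=float).reshape(2, 3, 4)
for n in range(X.ndim):
    # X_(n) is I_n-by-(product of the other dimensions).
    assert unfold(X, n).shape == (X.shape[n], X.size // X.shape[n])
    # fold_n(unfold_n(X)) = X.
    assert np.array_equal(fold(unfold(X, n), n, X.shape), X)
# The Frobenius norm of the tensor equals that of any unfolding.
assert np.isclose(np.linalg.norm(X), np.linalg.norm(unfold(X, 1)))
```

Whether the columns follow forward or reverse lexicographical order is a matter of convention; any fixed convention preserves ranks and norms, which is all the derivations in this paper rely on.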
The $n$-rank of an $N$-way tensor $\mathcal{X},$ denoted as $\operatorname{rank}_{n}(\mathcal{X}),$ is the rank of $\mathbf{X}_{(n)},$ and the rank of $\mathcal{X}$ is defined as an array: \begin{equation} \operatorname{rank}(\mathcal{X})=\left(\operatorname{rank}\left(\mathbf{X}_{(1)}\right), \cdots, \operatorname{rank}\left(\mathbf{X}_{(N)}\right)\right). \end{equation} \subsection{Operators} \label{operators} The \textbf{Proximal Operator} of a function $f$ is defined as follows: \begin{equation} \label{equation:PPA_0} \operatorname{prox}_{f}(v):=\arg \min _{u} f(u)+\frac{\rho}{2}\|u-v\|^{2}, \end{equation} where $f(u)$ is convex and $\rho$ is the proximal parameter. Then, minimizing $f(u)$ is equivalent to the iterations \begin{equation} \arg \min _{u} f(u)+\frac{\rho}{2}\|u-u^k\|^{2}, k=1,2,\cdots, \end{equation} where $u^k$ is the last update of $u$. We define the \textbf{Projection Operator} as follows: \begin{equation} \left(\mathcal{P}_{\Omega}(\mathcal{Y})\right)_{i_{1} \cdots i_{N}}=\left\{\begin{array}{ll} {y_{i_{1} \cdots i_{N}},} & {\left(i_{1}, \cdots, i_{N}\right) \in \Omega} \\ {0,} & {\text { otherwise }} \end{array}\right. \end{equation} where $\Omega$ is the index set of observed entries. The operator $\mathcal{P}_{\Omega}$ keeps the entries in $\Omega$ and sets the others to zero. \section{Related Works} \label{related works} We first introduce related tensor completion methods based on tensor rank minimization. Given a partially observed tensor $\mathcal{F} = \mathcal{P}_{\Omega}(\mathcal{Y}) \in \mathbb{R}^{I_{1} \times I_{2} \times \cdots \times I_{N}}$, the tensor completion task is to recover a low-rank tensor $\mathcal{Y}$ from $\mathcal{F}$, according to the priors of the underlying tensor $\mathcal{Y}$. In the past decade, TNN induced by t-SVD \cite{t_SVD} has been widely used for 3-order low-rank tensor completion \cite{TNN}.
The TNN based method aims to recover a low-rank tensor by penalizing the nuclear norm of each frontal slice in the Fourier-transformed domain, \begin{equation} \label{TNN} \arg\min_{\mathcal{Y}} \frac{1}{I_{3}} \sum_{i=1}^{I_{3}}\left\|\bar{\mathbf{Y}}^{(i)}\right\|_{*}, s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}, \end{equation} where $\bar{\mathbf{Y}}^{(i)} \in \mathbb{C}^{I_{1} \times I_{2}}$ denotes the $i$-th frontal slice of $\bar{\mathcal{Y}}$, and $\bar{\mathcal{Y}}=\operatorname{fft}(\mathcal{Y},[],3)$ denotes the fast Fourier transform of $\mathcal{Y}$ along the third dimension. \begin{figure*} \centering \includegraphics[width=1\linewidth]{3-model-lowrank-Lx-norm} \caption{Flowchart of the proposed low-rank tensor approximation: nonconvex tensor $L_{\gamma}$ norm.} \label{fig:3-model-lowrank-lx-norm} \end{figure*} Then, to alleviate the bias phenomenon of the TNN minimization in tensor completion tasks, Jiang et al. \cite{PSTNN} represent the low-rank prior of the underlying tensor by using the PSTNN. The PSTNN regularized tensor completion model is formulated as follows: \begin{equation} \label{PSTNN} \arg\min_{\mathcal{Y}} \frac{1}{I_{3}} \sum_{i=1}^{I_{3}}\left\|\bar{\mathbf{Y}}^{(i)}\right\|_{p=M}, s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}, \end{equation} where $$\|\bar{\mathbf{Y}}^{(i)}\|_{p=M} := \sum_{j=M+1}^{\min (I_1, I_2)} \sigma_{j}(\bar{\mathbf{Y}}^{(i)}),$$ and $\sigma_{j}(\bar{\mathbf{Y}}^{(i)})$ denotes the $j$-th largest singular value of $\bar{\mathbf{Y}}^{(i)}$. To reduce the burden of calculating SVDs in TNN and PSTNN, Liu et al. \cite{HaLRTC} unfold an $N$-order tensor into multiple modal matrices along the direction of each mode, and then use the sum of the nuclear norms of these modal matrices, i.e., SNN, to describe the low-rank structure of the underlying tensor.
With that definition, the completion model is formulated as follows: \begin{equation} \arg\min_{\mathcal{Y}} \sum_{n=1}^{N} \alpha_n \left\|\mathbf{Y}_{(n)}\right\|_{*}, s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}. \end{equation} Furthermore, Xu et al. \cite{Tmac} propose the TMac model by using parallel matrix factorization, which obtains promising results with less computational complexity than TNN and PSTNN, \begin{equation} \label{Tmac} \begin{aligned} &\arg\min _{\mathcal{Y}, \mathbf{X}, \mathbf{A}} \sum_{n=1}^{N} \frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{F}^{2},\\ &\text { s.t. } \quad \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}, \end{aligned} \end{equation} where $\alpha_n$ are weights satisfying $\sum_{n=1}^{N}\alpha_n=1$. Although the above-mentioned low-rank tensor completion methods report success in dealing with a large variety of tasks, there are several open issues which have yet to be addressed. Firstly, the above approaches either explore the low-rank prior lying in only one mode of the underlying tensor or directly represent the low-rank prior of the original tensor by using low-rank decomposition. They do not further explore the priors of the factors (e.g., $\mathbf{A}_n, \mathbf{X}_n$ in (\ref{Tmac})) obtained by low-rank decomposition. Secondly, TNN based methods \cite{TNN, PSTNN, lu2019TRPCA} need to compute many SVDs, which is time-consuming or even not applicable for large-scale problems \cite{Tmac}. Thirdly, these methods adopt a single surrogate of the tensor rank function, which may lead to suboptimal solutions of low-rank tensor completion problems \cite{T_Sp} and cannot fully explore the low-rank priors in all modes, especially when the tensor data is heavily contaminated or the sampling rate is very low. One can see an example in Fig.
\ref{TNN_Tamc_our-model_figure_video_sr0.05}, which shows the video "suzie" with 95\% missing entries: our model restores most of the structural information of the video, while the videos restored by the single-surrogate methods contain only the outline of the images. \section{Double nonconvex $L_{\gamma}$ norm based low-rank approximation for tensor completion } \label{the proposed model} In the following, a novel double nonconvex $L_{\gamma}$ norm based low-rank approximation of tensor multi-modes (LRATM) is first introduced. Then, the optimization of the proposed LRATM model is deduced in detail. \subsection{LRATM Model} For a tensor $\mathcal{Y}\in \mathbb{R}^{I_{1} \times I_{2} \times \cdots \times I_{N}}$, to enhance the flexibility for handling different correlations along different modes in the underlying tensor, while effectively exploring the underlying joint manifold drawn from the mode factorization factors, we first formulate a novel nonconvex tensor $L_{\gamma}$ norm, \begin{equation} \begin{aligned} &\left\|\mathcal{Y}\right\|_{\gamma}= \sum_{n=1}^{N} (\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}), \end{aligned} \end{equation} where $\mathbf{Y}_{(n)}=\mathbf{A}_{n} \mathbf{X}_{n}$, and $$\left\|\mathbf{X}\right\|_{\gamma}:=\sum_{t=1}^{\min \{p, q\}}\left(1-e^{-\sigma_{t}(\mathbf{X}) / \gamma}\right)$$ is a nonconvex approximation of $rank(\mathbf{X})$, and $\sigma_{t}(\mathbf{X})$ is the $t$-th largest singular value of $\mathbf{X} \in \mathbb{R}^{p \times q}$. $\tau_n$ and $\lambda_n$ are non-negative constants that balance the two terms.
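To make the behaviour of the $L_{\gamma}$ surrogate concrete, the following NumPy sketch (the function name and test matrix are our own, for illustration only) evaluates $\|\mathbf{X}\|_{\gamma}=\sum_{t}(1-e^{-\sigma_{t}(\mathbf{X})/\gamma})$ on a rank-2 matrix; for small $\gamma$ the value approaches $\operatorname{rank}(\mathbf{X})$ and, unlike the nuclear norm, is nearly insensitive to the scale of the entries:

```python
import numpy as np

def l_gamma_norm(X, gamma):
    # Nonconvex surrogate of rank(X): sum of 1 - exp(-sigma_t / gamma)
    # over the singular values sigma_t of X.
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(1.0 - np.exp(-sigma / gamma)))

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # rank-2 matrix
assert np.linalg.matrix_rank(X) == 2
# For small gamma the surrogate is close to rank(X) = 2 ...
assert abs(l_gamma_norm(X, 0.01) - 2.0) < 0.5
# ... and stays there when X is rescaled, whereas the nuclear norm
# (the sum of singular values) would grow tenfold.
assert abs(l_gamma_norm(10.0 * X, 0.01) - 2.0) < 0.5
```

Each term of the sum lies in $[0,1)$ and saturates quickly in $\sigma_t$, which is why the surrogate tracks the $L_0$ norm of the singular values rather than their magnitudes.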
Then, our tensor $L_{\gamma}$ norm based low-rank approximation model for low-rank tensor completion, i.e., the proposed LRATM model, is written as \begin{equation} \label{BMF_LRTC} \begin{aligned} \arg\min_{\mathcal{Y},\mathbf{X},\mathbf{A}}{\left\|\mathcal{Y}\right\|_{\gamma}}=&\arg\min_{\mathcal{Y},\mathbf{X},\mathbf{A}} \sum_{n=1}^{N} (\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}), \\ &s.t.~ \mathbf{Y}_{(n)}=\mathbf{A}_{n} \mathbf{X}_{n}, \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}. \end{aligned} \end{equation} To better understand the proposed LRATM model, we plot the flowchart of the proposed low-rank tensor approximation in Fig. \ref{fig:3-model-lowrank-lx-norm}. It can be seen that the video, MRI and HSI in the first column can essentially be viewed as the 3-order tensors in the second column. When we unfold a 3-order tensor in three directions, the low-rank structure of all the $N$ modes can be explored by using parallel matrix decomposition, i.e., $\mathbf{Y}_{(n)}=\mathbf{A}_{n} \mathbf{X}_{n}, n=1,2,\cdots, N$, which is computationally much cheaper than SVD. The decomposed factors have practical physical meaning: $\mathbf{A}_n$ represents a library (each column contains a signature of the $n$-th mode direction), and $\mathbf{X}_n$ is called an encoding. For example, in the unmixing problem for HSI \cite{HSI_unmixing}, each column of $\mathbf{A}_3$ contains a spectral signature, and each row of $\mathbf{X}_3$ contains the fractional abundances of a given endmember. This interpretation is also valid for the mode-3 unfolding of video and MRI. The above parallel matrix decomposition can effectively explore the low-rank structure of the underlying tensor, but the prior information contained in the factor matrices ($\mathbf{A}_{n}, \mathbf{X}_{n}$) is not explored at all.
Therefore, to further enhance the potential capacity of tensor completion models, it is necessary to design new and reasonable formulas to explore the priors in the factor matrices. Here, we propose the novel nonconvex double $L_{\gamma}$ norm to formulate the underlying joint manifold drawn from the mode factorization factors $\mathbf{A}_n$ and $\mathbf{X}_n$. The superiority of the $L_{\gamma}$ norm over the nuclear norm and other nonconvex penalties is shown in the fifth column of Fig. \ref{fig:3-model-lowrank-lx-norm}. It is obvious that the red curve of the $L_{\gamma}$ norm is closer to the green curve of the $L_0$ norm (rank function) than the other nonconvex penalties. \subsection{Optimization Procedure of LRATM} In this section, the proposed model is solved by using the block successive upper-bound minimization (BSUM) \cite{BSUM} method. The objective function of the proposed LRATM model (\ref{BMF_LRTC}) can be formulated as follows: \begin{equation} \begin{aligned} f(\mathbf{X}, \mathbf{A}, \mathcal{Y})=\sum_{n=1}^{N}& \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}\right.\\ &\left.+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}\right). \end{aligned} \end{equation} According to the proximal operator (\ref{equation:PPA_0}), the update can be written as: \begin{equation} \label{equation_original_PPA} p(\mathcal{S}, \mathcal{S}^k)=f\left(\mathcal{S}, \mathcal{S}^k\right)+\frac{\rho}{2}\left\|\mathcal{S}-\mathcal{S}^{k}\right\|_{\text{F}}^{2}, \end{equation} where $\rho>0$ is a positive constant, $\mathcal{S}=(\mathbf{X}, \mathbf{A}, \mathcal{Y})$ and $\mathcal{S}^{k}=\left(\mathbf{X}^{k}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right)$.
Let \begin{equation} \left\{\begin{array}{l} g_{1}\left(\mathbf{X}, \mathcal{S}_{1}^{k}\right)=f\left(\mathbf{X}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right)+\frac{\rho}{2}\left\|\mathbf{X}-\mathbf{X}^{k}\right\|_{\text{F}}^{2}, \\ g_{2}\left(\mathbf{A}, \mathcal{S}_{2}^{k}\right)=f\left(\mathbf{X}^{k+1}, \mathbf{A}, \mathcal{Y}^{k}\right)+\frac{\rho}{2}\left\|\mathbf{A}-\mathbf{A}^{k}\right\|_{\text{F}}^{2}, \\ g_{3}\left(\mathcal{Y}, \mathcal{S}_{3}^{k}\right)=f\left(\mathbf{X}^{k+1}, \mathbf{A}^{k+1}, \mathcal{Y}\right)+\frac{\rho}{2}\left\|\mathcal{Y}-\mathcal{Y}^{k}\right\|_{\text{F}}^{2}, \end{array}\right. \end{equation} where \begin{equation} \left\{\begin{array}{l} \mathcal{S}_{1}^{k}=\left(\mathbf{X}^{k}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right),\\ \mathcal{S}_{2}^{k}=\left(\mathbf{X}^{k+1}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right),\\ \mathcal{S}_{3}^{k}=\left(\mathbf{X}^{k+1}, \mathbf{A}^{k+1}, \mathcal{Y}^{k}\right). \end{array}\right. \end{equation} Then, problem (\ref{equation_original_PPA}) can be rewritten as follows: \begin{equation} \label{equation:XAY} \left\{\begin{array}{l} \displaystyle \mathbf{X}^{k+1}=\arg\min_{\mathbf{X}} g_{1}\left(\mathbf{X}, \mathcal{S}_{1}^{k}\right), \\ \displaystyle \mathbf{A}^{k+1}=\arg\min_{\mathbf{A}} g_{2}\left(\mathbf{A}, \mathcal{S}_{2}^{k}\right), \\ \displaystyle \mathcal{Y}^{k+1}=\arg\min_{\mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}} g_{3}\left(\mathcal{Y}, \mathcal{S}_{3}^{k}\right). \end{array}\right.
\end{equation} \subsubsection{Update $\mathbf{X}_n$} With the other variables fixed, the optimization subproblem with respect to $\mathbf{X}_n, n=1,2,\cdots,N$, in (\ref{equation:XAY}) can be written as follows: \begin{equation} \label{equation_X} \begin{aligned} \arg\min_{\{\mathbf{X}_n\}_{n=1}^{N}} \sum_{n=1}^{N}& \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}\right.\\ &\left.+\frac{\rho_n}{2}\left\|\mathbf{X}_n-\mathbf{X}_n^{k}\right\|_{\text{F}}^{2}\right). \end{aligned} \end{equation} To efficiently solve the above optimization, we first introduce auxiliary variables $\mathbf{X}_{n}=\mathbf{Z}_{n}, n=1,2,\cdots,N$. Then, by the augmented Lagrangian multiplier (ALM) method, the optimization subproblem (\ref{equation_X}) can be rewritten as: \begin{equation} \label{equation:X_alm} \begin{aligned} &\arg\min_{\{\mathbf{X}_n, \mathbf{Z}_n\}_{n=1}^{N}} \sum_{n=1}^{N} \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\tau_n \left\|\mathbf{Z}_{n}\right\|_{\gamma}\right.\\ &\left.+\frac{\rho_n}{2}\left\|\mathbf{X}_n-\mathbf{X}_n^{k}\right\|_{\text{F}}^{2}+\left\langle\Gamma_{n}^\mathbf{X}, \mathbf{X}_n-\mathbf{Z}_n\right\rangle +\frac{\rho_{n}}{2}\left\|\mathbf{X}_{n}-\mathbf{Z}_{n}\right\|_{\text{F}}^{2}\right), \end{aligned} \end{equation} where $\Gamma_{n}^\mathbf{X}$ are Lagrangian multipliers. With the other variables fixed, the minimization subproblem for $\mathbf{Z}_n$ can be deduced from (\ref{equation:X_alm}) as follows: \begin{equation} \label{equation:Zn} \mathbf{Z}_n^{k+1}= \arg \min_{\mathbf{Z}_n} \left\|\mathbf{Z}_{n}\right\|_{\gamma}+\frac{\hat{\rho}_{n}}{2}\left\|\mathbf{Z}_{n}-\mathbf{P}_n^k\right\|_{\text{F}}^{2}, \end{equation} where $\hat{\rho}_{n}=\rho_{n}/\tau_n$, $\mathbf{P}_n^k=\mathbf{X}_{n}^{k}+\Gamma_{n}^{\mathbf{X}}/\rho_n$.
Let $\sigma_{1}^{k} \geq \sigma_{2}^{k} \geq \cdots \geq \sigma_{t_n}^{k}$ represent the singular values of $\mathbf{Z}_n^{k} \in \mathbb{R}^{r_n \times s_n}$ with $t_n=\min \left\{r_n, s_n\right\}$, and let $\nabla \phi\left(\sigma_{i}^{k}\right)$ denote the gradient of $\phi(x) = 1-e^{-x/\gamma}$ at $\sigma_{i}^{k}.$ Let $$f(\mathbf{Z}_n)=\frac{1}{2}\left\|\mathbf{Z}_n-\mathbf{P}_n^k\right\|_{\text{F}}^{2}.$$ It is easy to prove that the gradient of $f(\mathbf{Z}_n)$ is Lipschitz continuous with Lipschitz constant $1.$ As stated in \cite{Non_LRMA}, considering the nonascending order of the singular values and the antimonotone property of the gradient of our nonconvex function, we have \begin{equation} \begin{aligned} & 0 \leq \nabla \phi\left(\sigma_{1}^{k}\right) \leq \nabla \phi\left(\sigma_{2}^{k}\right) \leq \cdots \leq \nabla \phi\left(\sigma_{t_n}^{k}\right), \\ & \phi\left(\sigma_{i}(\mathbf{Z}_n)\right) \leq \phi\left(\sigma_{i}^{k}\right)+\nabla \phi\left(\sigma_{i}^{k}\right)\left(\sigma_{i}(\mathbf{Z}_n)-\sigma_{i}^{k}\right), \end{aligned} \label{equ:gradient} \end{equation} where $i=1,2,\cdots,t_n$. Following (\ref{equ:gradient}), the subproblem (\ref{equation:Zn}) with respect to $\mathbf{Z}_n$ admits the following relaxation: \begin{equation} \begin{split} &\arg \min _{\mathbf{Z}_n} \frac{1}{\hat{\rho}_{n}} \sum_{i=1}^{t_n} \phi\left(\sigma_{i}^{k}\right)+\nabla \phi\left(\sigma_{i}^{k}\right)\left(\sigma_{i}(\mathbf{Z}_n)-\sigma_{i}^{k}\right)+f(\mathbf{Z}_n) \\ &=\arg \min _{\mathbf{Z}_n} \frac{1}{\hat{\rho}_{n}} \sum_{i=1}^{t_n} \nabla \phi\left(\sigma_{i}^{k}\right) \sigma_{i}(\mathbf{Z}_n)+\frac{1}{2}\left\|\mathbf{Z}_n-\mathbf{P}_{n}^{k}\right\|_{\text{F}}^{2}. \end{split} \label{equ:relaxation} \end{equation} Then, based on \cite{lu_nonconvex, Non_LRMA}, the solution of (\ref{equ:relaxation}) can be efficiently obtained by generalized weighted singular value thresholding (WSVT) \cite{WSVT}, as shown in Lemma 1.
\noindent\textbf{Lemma 1}: For any $1 / \hat{\rho}_{n}>0$, the given data $\mathbf{P}_n^k=\mathbf{X}_{n}^{k}+\Gamma_{n}^{\mathbf{X}}/\rho_n$, and $0 \leq \nabla \phi\left(\sigma_{1}^{k}\right) \leq \nabla \phi\left(\sigma_{2}^{k}\right) \leq \cdots \leq \nabla \phi\left(\sigma_{t_n}^{k}\right)$, a globally optimal solution $\mathbf{Z}_n^{*}$ to (\ref{equ:relaxation}) is given as follows: \begin{equation} \label{equation_solution_Zn} \quad \mathbf{Z}_n^{k+1}= \operatorname{WSVT}\left(\mathbf{P}_n^{k}, \frac{\nabla \phi}{\hat{\rho}_{n}}\right) =\mathbf{U} \mathbf{S}_{\frac{\nabla \phi}{\hat{\rho}_{n}}}(\mathbf{\Sigma}) \mathbf{V}^{T}, \end{equation} where $\mathbf{P}_n^{k}=\mathbf{U} \mathbf{\Sigma} \mathbf{V}^{T}$ is the SVD of $\mathbf{P}_n^{k}$; $$\mathbf{S}_{\frac{\nabla \phi}{\hat{\rho}_{n}}}(\mathbf{\Sigma})=\operatorname{Diag}\left\{\max \left(\mathbf{\Sigma}_{ii}-\frac{\nabla \phi \left( \mathbf{\sigma}_{i}^{k}\right)}{\hat{\rho}_{n}}, 0\right)\right\},$$ and $i=1,2,\cdots,t_n$. With the other variables fixed, the minimization subproblem for $\mathbf{X}_n$, $n=1,2,\cdots, N$, can be deduced from (\ref{equation:X_alm}) as follows: \begin{equation} \label{equationforX} \begin{aligned} \mathbf{X}_n^{k+1}&= \arg\min_{\mathbf{X}_n}\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n}^{k} \mathbf{X}_{n}\right\|_{\text{F}}^{2}\\ &+\rho_{n}\left\|\mathbf{X}_{n}-\frac{\mathbf{Z}_{n}^{k+1}-\Gamma_{n}^{\mathbf{X}}/\rho_n+\mathbf{X}_n^{k}}{2}\right\|_{\text{F}}^{2}. \end{aligned} \end{equation} These subproblems are convex and have the following closed-form solutions: \begin{equation} \label{equation:solution_Xn} \begin{aligned} \mathbf{X}_n^{k+1}=&\left(\alpha_{n}(\mathbf{A}_n^{k})^T\mathbf{A}_n^{k}+2\rho_n \mathbf{I}_n\right)^{-1}\left[\alpha_{n}(\mathbf{A}_n^{k})^T\mathbf{Y}_{(n)}\right.\\ &\left. +\rho_n \left(\mathbf{Z}_{n}^{k+1}-\Gamma_{n}^{\mathbf{X}}/\rho_n+\mathbf{X}_n^{k}\right)\right].
\end{aligned} \end{equation} The Lagrangian multipliers $\Gamma_{n}^{\mathbf{X}}$ can be updated by the following equation \begin{equation} \label{equation:Lambda_2} \Gamma_{n}^{\mathbf{X}} = \Gamma_{n}^{\mathbf{X}} + \mathbf{X}_n-\mathbf{Z}_n. \end{equation} \subsubsection{Update $\mathbf{A}_n$} With the other variables fixed, the optimization subproblem with respect to $\mathbf{A}_n$, $n=1,2,\cdots,N$, in (\ref{equation:XAY}) can be written as follows: \begin{equation} \label{equation_A_aux} \begin{aligned} \arg\min_{\{\mathbf{A}_n\}_{n=1}^{N}} \sum_{n=1}^{N} & \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}\right.\\ &\left.+\frac{\rho_n}{2}\left\|\mathbf{A}_n-\mathbf{A}_n^{k}\right\|_{\text{F}}^{2}\right). \end{aligned} \end{equation} To efficiently solve the above optimization, we first introduce auxiliary variables $\mathbf{A}_{n}=\mathbf{J}_{n}, n=1,2,\cdots,N$. Then, by the ALM method, the problem (\ref{equation_A_aux}) can also be reformulated as \begin{equation} \label{equation:A_alm} \begin{aligned} & \arg\min_{\{\mathbf{A}_n,\mathbf{J}_n\}_{n=1}^{N}} \sum_{n=1}^{N} \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\lambda_n \left\|\mathbf{J}_{n}\right\|_{\gamma}\right.\\ & \left. +\frac{\rho_n}{2}\left\|\mathbf{A}_n-\mathbf{A}_n^{k}\right\|_{\text{F}}^{2} +\left\langle\Gamma_{n}^\mathbf{A},\mathbf{A}_n-\mathbf{J}_n\right\rangle +\frac{\rho_{n}}{2}\left\|\mathbf{A}_{n}-\mathbf{J}_{n}\right\|_{\text{F}}^{2}\right), \end{aligned} \end{equation} where $\Gamma_{n}^\mathbf{A}$ are the Lagrangian multipliers.
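The generalized WSVT step of Lemma 1, used above for $\mathbf{Z}_n$ and reused below for $\mathbf{J}_n$, can be sketched in NumPy as follows (the data, $\gamma$ and $\hat{\rho}_n$ values here are illustrative stand-ins, not taken from the experiments):

```python
import numpy as np

def wsvt(P, weights):
    # Generalized weighted singular value thresholding (Lemma 1): shrink
    # each singular value of P by its own threshold, then rebuild.
    # `weights` must be nondecreasing (the largest sigma gets the smallest
    # threshold) for this to be a globally optimal solution.
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return (U * np.maximum(s - weights, 0.0)) @ Vt

def phi_grad(s, gamma):
    # Gradient of phi(x) = 1 - exp(-x/gamma); it decreases in s, so the
    # resulting weights penalize small singular values more than large
    # ones, which reduces the shrinkage bias of plain SVT.
    return np.exp(-s / gamma) / gamma

rng = np.random.default_rng(1)
P = rng.standard_normal((8, 6))               # stand-in for P_n^k
s_prev = np.linalg.svd(P, compute_uv=False)   # previous iterate's sigmas
Z = wsvt(P, phi_grad(s_prev, gamma=1.0) / 5.0)  # illustrative rho_hat = 5
# Every singular value is shrunk, never amplified.
assert np.all(np.linalg.svd(Z, compute_uv=False)
              <= np.linalg.svd(P, compute_uv=False) + 1e-9)
```

Since the weights derived from $\nabla\phi$ on the nonascending singular values are automatically nondecreasing, the monotonicity condition of Lemma 1 holds by construction.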
With the other variables fixed, the minimization subproblem for $\mathbf{J}_n$ can be deduced from (\ref{equation:A_alm}) as follows: \begin{equation} \displaystyle \mathbf{J}_n^{k+1}= \arg \min_{\mathbf{J}_n} \left\|\mathbf{J}_{n}\right\|_{\gamma}+\frac{\tilde{\rho}_{n}}{2}\left\|\mathbf{J}_{n}-\mathbf{Q}_n^k\right\|_{\text{F}}^{2}, \end{equation} where $\tilde{\rho}_{n}=\rho_{n}/\lambda_n$ and $\mathbf{Q}_n^k=\mathbf{A}_{n}^{k}+\Gamma_{n}^{\mathbf{A}}/\rho_n$. Its solution can also be obtained by \textbf{Lemma 1} as follows: \begin{equation} \label{equation_solution_Jn} \quad \mathbf{J}_n^{k+1}=\operatorname{WSVT}\left(\mathbf{Q}_n^{k}, \frac{\nabla \phi}{\tilde{\rho}_{n}}\right). \end{equation} With the other variables fixed, the minimization subproblem for $\mathbf{A}_n$, $n=1,2,\cdots, N$, can be deduced from (\ref{equation:A_alm}) as follows: \begin{equation} \label{equationforA} \begin{aligned} \mathbf{A}_n^{k+1}= &\arg\min_{\mathbf{A}_n} \frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}\\ &+\rho_{n}\left\|\mathbf{A}_{n}-\frac{\mathbf{J}_{n}^{k+1}-\Gamma_{n}^\mathbf{A}/\rho_n+\mathbf{A}_n^{k}}{2}\right\|_{\text{F}}^{2}. \end{aligned} \end{equation} These subproblems are also convex and have the following closed-form solutions: \begin{equation} \label{equation_solution_An} \begin{aligned} \mathbf{A}_{n}^{k+1}=&\left(\alpha_{n}\mathbf{Y}_{(n)}^{k}\left(\mathbf{X}_{n}^{k+1}\right)^{T}+\rho_n (\mathbf{J}_{n}^{k+1}-\Gamma_{n}^\mathbf{A}/\rho_n+\mathbf{A}_n^{k})\right)\\ &\left(\alpha_{n}\mathbf{X}_{n}^{k+1}\left(\mathbf{X}_{n}^{k+1}\right)^{T}+2\rho_{n} \mathbf{I}_{n}\right)^{\dagger}. \end{aligned} \end{equation} Finally, the Lagrangian multipliers $\Gamma_{n}^{\mathbf{A}}$ can be updated by the following equation \begin{equation} \label{equation:Lambda_1} \Gamma_{n}^{\mathbf{A}} = \Gamma_{n}^{\mathbf{A}} + \mathbf{A}_n-\mathbf{J}_n.
\end{equation} \subsubsection{Update $\mathcal{Y}$} With the other variables fixed, the minimization subproblem with respect to $\mathcal{Y}$ in (\ref{equation:XAY}) can be written as \begin{equation} \begin{aligned} & \arg\min_{\{\mathbf{Y}_{(n)}\}_{n=1}^{N}} \sum_{n=1}^{N} \frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\frac{\rho}{2}\left\|\mathcal{Y}-\mathcal{Y}^{k}\right\|_{\text{F}}^{2} \\ & s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}. \end{aligned} \end{equation} Then, the update of $\mathcal{Y}^{k+1}$ can be written explicitly as \begin{equation} \label{equation_solution_Y} \begin{array}{l} \displaystyle \mathcal{Y}^{k+1}=\mathcal{P}_{{\Omega}^c}\left(\sum_{n=1}^{N} \alpha_{n} \text { fold }_{n}\left(\frac{\mathbf{A}_{n}^{k+1} \mathbf{X}_{n}^{k+1}+\rho \mathbf{Y}_{(n)}^{k}}{1+\rho}\right)\right)+\mathcal{F}, \end{array} \end{equation} where $\Omega^{c}$ is the complementary set of $\Omega$. \begin{algorithm}[!t] \caption{Algorithm for the LRATM model.} \label{algorithm:A1} \begin{algorithmic}[1] \Require The observed tensor $\mathcal{F}$; the index set of observed entries $\Omega$; the stopping criterion $\varepsilon$; the given $n$-rank, $r=\left(r_{1}, r_{2}, r_{3}\right)$. \Ensure The completed tensor. \State Initialize: $\mathbf{X}_n=\mathbf{Z}_n=0, \mathbf{A}_n=\mathbf{J}_n=0,\Gamma_{n}^\mathbf{X}=0, \Gamma_{n}^\mathbf{A}=0, n=1,2, \cdots, N$; $k=0$.
\State Repeat until convergence: \State \quad 1st step: Update $\mathbf{Z}_n$ via (\ref{equation_solution_Zn}) \State \quad 2nd step: Update $\mathbf{X}_n$ via (\ref{equation:solution_Xn}) \State \quad 3rd step: Update $\mathbf{J}_n$ via (\ref{equation_solution_Jn}) \State \quad 4th step: Update $\mathbf{A}_n$ via (\ref{equation_solution_An}) \State \quad 5th step: Update $\mathcal{Y}$ via (\ref{equation_solution_Y}) \State \quad 6th step: Update the multipliers via (\ref{equation:Lambda_2}) and (\ref{equation:Lambda_1}) \State Check the convergence condition. \end{algorithmic} \end{algorithm} \subsection{Complexity and Convergence Analysis} The proposed algorithm for our LRATM model is summarized in Algorithm \ref{algorithm:A1}. Further, we discuss the complexity and convergence of the proposed algorithm. \subsubsection{Complexity analysis} The cost of computing $\mathbf{X}_{n}$ is $O\left(I_{n} r_{n}^{2}+I_{n} r_{n} s_{n}+r_{n}^{2} s_{n}\right)$; calculating $\mathbf{Z}_n$ has a complexity of $O\left( \Pi_{j \neq n} I_{j} \times r_{n}^2 \right)$; the complexity of updating $\mathbf{J}_n$ is $O\left(I_{n} r_{n}^2\right)$; calculating $\mathbf{A}_{n}$ has a complexity of $O\left(I_{n} r_{n}^{2}+I_{n} r_{n} s_{n}+r_{n}^{2} s_{n}\right)$; and calculating $\mathcal{Y}$ has a complexity of $O\left(r_{1} I_{1} s_{1}+\cdots+r_{N} I_{N} s_{N}\right)$. Therefore, the total complexity of the proposed algorithm can be obtained by counting the complexity of the above variables, i.e., \begin{equation} \label{equation:complexity_model1} O\left(\sum_{n}(3I_{n} r_{n}^2+\Pi_{j \neq n} I_{j} \times r_{n}^2+3 I_{n} r_{n} s_{n}+2 r_{n}^{2} s_{n})\right). \end{equation} \subsubsection{Convergence analysis} In this section, we theoretically analyze the convergence of the proposed algorithm by using the BSUM method \cite{BSUM}.
\noindent \textbf{Lemma 2} \cite{BSUM}: Let us assume that the feasible set $\mathcal{X}$ is the Cartesian product of $n$ closed convex sets: $\mathcal{X}=\mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_{n}$. Given the problem \begin{equation} \min f(x), s.t. ~ x \in \mathcal{X}, \end{equation} assume $h\left(x, x^{k-1}\right)$ is an approximation of $f(x)$ at the $(k-1)$-th iteration, which satisfies the following conditions: \begin{equation} \begin{array}{l} 1) \quad h_{i}\left(y_{i}, y\right)=f(y), \forall y \in \mathcal{X}, \forall i; \\ 2) \quad h_{i}\left(x_{i}, y\right) \geq f\left(y_{1}, \ldots, y_{i-1}, x_{i}, y_{i+1}, \ldots, y_{n}\right), \\ \quad \quad \forall x_{i} \in \mathcal{X}_{i}, \forall y \in \mathcal{X}, \forall i;\\ 3) \quad \left.h_{i}^{\prime}\left(x_{i}, y ; d_{i}\right)\right|_{x_i=y_i}=f^{\prime}(y ; d), \forall d=\left(0, \cdots, d_{i}, \cdots, 0\right) \\ \quad \quad \text { s.t. } y_{i}+d_{i} \in \mathcal{X}_{i}, \forall i;\\ 4) \quad h_{i}\left(x_{i}, y\right) \text { is continuous in } \left(x_{i}, y\right), \forall i; \end{array} \end{equation} where $h_{i}\left(x_{i}, y\right)$ is the sub-problem with respect to the $i$-th block and $f^{\prime}(y ; d)$ is the directional derivative of $f$ at the point $y$ in direction $d$. Suppose $h_{i}\left(x_{i}, y\right)$ is quasi-convex in $x_{i}$ for $i=1,2, \cdots, n$. Furthermore, assume that each sub-problem $\arg\min h_i\left(x_{i}, x^{k-1}\right), s.t. ~ x \in \mathcal{X}_{i}$ has a unique solution for any point $x^{k-1} \in \mathcal{X}.$ Then, the iterates generated by the BSUM algorithm converge to the set of coordinatewise minimizers of $f$. \noindent \textbf{Theorem 1.} The iterates generated by (\ref{equation_original_PPA}) converge to the set of coordinatewise minimizers.
\textbf{Proof.} It is easy to verify that $g\left(\mathcal{S}, \mathcal{S}^{k}\right)$ is an approximation and a global upper bound of $f(\mathcal{S})$ at the $k$-th iteration, which satisfies the following conditions: \begin{equation} \begin{array}{l} 1) \quad g_{i}\left(\mathcal{S}_{i}, \mathcal{S}\right)=f(\mathcal{S}), \forall \mathcal{S}, i=1,2,3; \\ 2) \quad g_{i}\left(\bar{\mathcal{S}}_{i}, \mathcal{S}\right) \geq f\left(\mathcal{S}_{1}, \cdots, \bar{\mathcal{S}_{i}}, \cdots, \mathcal{S}_{3}\right), \\ \quad\quad \forall \bar{\mathcal{S}}_{i}, \forall \mathcal{S}, i=1,2,3; \\ 3) \quad \left.g_{i}^{\prime}\left(\bar{\mathcal{S}}_{i}, \mathcal{S} ; \mathbf{M}_{i}\right)\right|_{\bar{\mathcal{S}}_{i}=\mathcal{S}_{i}}=f^{\prime}\left(\mathcal{S} ; \mathbf{M}^i\right), \\ \quad \quad \forall \mathbf{M}^{i}=\left(0, \ldots, \mathbf{M}_{i},\ldots, 0\right); \\ 4) \quad g_{i}\left(\bar{\mathcal{S}}_{i}, \mathcal{S}\right) \text { is continuous in } \left(\bar{\mathcal{S}}_{i}, \mathcal{S} \right), i=1,2,3; \end{array} \end{equation} where $\mathcal{S}=(\mathbf{X}, \mathbf{A}, \mathcal{Y}),$ and $\mathcal{S}_{i}$ equals $\mathbf{X}, \mathbf{A}, \mathcal{Y}$ for $i=1,2,3,$ respectively. In addition, the subproblem $g_{i}$ $(i=1,2,3)$ is quasi-convex with respect to $\mathbf{X}, \mathbf{A}$ and $\mathcal{Y}$, respectively, and each sub-problem of $g_{i}$ has a unique solution. Therefore, all assumptions in \textbf{Lemma 2} are satisfied. According to the conclusion of \textbf{Lemma 2}, \textbf{Theorem 1} holds, and the proposed algorithm is theoretically convergent. \section{Numerical experiments} \label{Numerical experiments} In this section, the proposed model is evaluated on three types of public tensor datasets, i.e., video datasets, an MRI dataset and an HSI dataset, which have been frequently used to assess the tensor completion performance of different models.
To demonstrate its effectiveness, we compare the proposed model with the TMac method \cite{Tmac}, the MF-TV method \cite{MFTV}, the TNN based method \cite{lu2019TRPCA} and the PSTNN based method \cite{PSTNN}. \begin{table}[t] \caption{The averaged PSNR, SSIM, FSIM, ERGAS and MSAM of the completed results on the video "suzie" by Tmac, MF-TV, TNN, PSTNN and our model with different sampling rates. The best values are highlighted in bold fonts.} \centering \label{table_video_suzie} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccccccc} \hline \hline &&&SR = 0.05&&& \\ PQI & noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR & 7.259 & \textbf{29.464} & 13.801 & 23.385 & 17.447 & 22.005 \\ SSIM & 0.009 & \textbf{0.807} & 0.094 & 0.622 & 0.192 & 0.563 \\ FSIM & 0.454 & \textbf{0.885} & 0.42 & 0.792 & 0.59 & 0.776 \\ ERGAS & 1057.282 & \textbf{83.571} & 501.117 & 167.927 & 327.678 & 194.844 \\ MSAM & 77.324 & \textbf{3.622} & 24.095 & 6.927 & 13.775 & 7.797 \\ \hline &&&SR = 0.1&&&\\ PQI & noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR& 7.493& \textbf{32.056}& 22.356& 26.189& 26.647& 26.032\\ SSIM& 0.014& \textbf{0.878}& 0.605& 0.74& 0.68& 0.692\\ FSIM& 0.426& \textbf{0.924}& 0.758& 0.838& 0.843& 0.846\\ ERGAS& 1029.096& \textbf{62.314}& 196.059& 124.369 &117.104& 124.923\\ MSAM& 71.725& \textbf{2.764}& 6.99& 5.423& 5.171& 5.405 \\ \hline &&&SR = 0.2&&&\\ PQI & noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR& 8.005& \textbf{34.378}& 32.064& 27.274& 30.566& 30.561\\ SSIM& 0.02& \textbf{0.918}& 0.872& 0.782& 0.829& 0.831\\ FSIM& 0.391& \textbf{0.948}& 0.916& 0.853& 0.91& 0.911\\ ERGAS& 970.285& \textbf{47.877}& 66.692& 109.627& 75.472& 75.598\\ MSAM& 63.522& \textbf{2.183}& 2.81& 4.812& 3.399& 3.395\\ \hline \hline \end{tabular}} \end{table} To accurately evaluate the performance of the test models, two types of standards are employed to quantitatively evaluate the quality of the completed tensors.
The first one is the visual evaluation of the completed data, which is a qualitative standard. The second one consists of five quantitative picture quality indices (PQIs): the peak signal-to-noise ratio (PSNR) \cite{PSNR}, the structural similarity index (SSIM) \cite{SSIM}, the feature similarity (FSIM) \cite{FSIM}, the erreur relative globale adimensionnelle de synth\`ese (ERGAS) \cite{EGRAS} and the mean spectral angle mapper (MSAM) \cite{SAM}. The larger the PSNR, SSIM and FSIM, and the smaller the ERGAS and MSAM, the better the completion performance of the corresponding model. Since the experimental datasets are all third-order tensors, the PQIs are first calculated for each frontal slice of the completed tensor, and their means are then used to evaluate the performance of the models. All experiments are performed in MATLAB 2019b on a computer with an Intel Core i7 CPU @ 2.2 GHz and 64 GB of memory. The code will be posted at the following URL: https://github.com/NavyZeng/LRATM. \subsection{Video Data} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_video_b94}}% \hfil \subfloat[95\% Masked]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005TNN}}% \caption{One slice of the recovered video ``suzie'' by our model, MF-TV, Tmac, PSTNN and TNN.
The sampling rate is 5\%.} \label{figure_video_sr0.05} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01suzie_b10}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01TNN}}% \caption{One slice of the recovered video ``suzie'' by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 10\%.} \label{figure_video_sr0.1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02suzie_b10}}% \hfil \subfloat[80\% Masked]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02TNN}}% \caption{One slice of the recovered video ``suzie'' by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 20\%.} \label{figure_video_sr0.2} \end{figure*} In this subsection, we compare our model with MF-TV, Tmac, TNN and PSTNN on two video datasets, ``suzie'' and ``hall''\footnote{http://trace.eas.asu.edu/yuv/}, both of which are encoded in the YUV format; two slices of them are shown in Fig.
\ref{figure_video_sr0.05} and Fig. \ref{figure_hall_sr0.05}, respectively. Both are of size 144 $\times$ 176 $\times$ 150. We test all five methods at a series of sampling rates (SR): 5\%, 10\% and 20\%, and all the tested models are evaluated in terms of quantitative comparison and visual evaluation. In addition, the $n$-rank is approximated using the number of the largest 0.5\% of singular values. For quantitative comparison, Table \ref{table_video_suzie} and Table \ref{table_video_hall} report the PQIs of the results completed by the different methods, with the best result for each PQI marked in bold. From Table \ref{table_video_suzie} and Table \ref{table_video_hall}, it can be found that our model obtains the highest indices among the five tested models in all SR cases; TMac obtains the second-best PQIs when the SR is 5\% or 10\%, while MF-TV obtains the second-best PQIs when the SR is 20\%. In terms of PSNR, the margins between the results of our model and the second-best results are more than 5 dB. For visual evaluation, we illustrate one frontal slice of the completed results at different sampling rates in Fig. \ref{figure_video_sr0.05}, Fig. \ref{figure_video_sr0.1}, Fig. \ref{figure_video_sr0.2}, Fig. \ref{figure_hall_sr0.05} and Fig. \ref{figure_hall_sr0.1}. It is clear from the figures that the results of our model are closer to the ground truth than those of the other tested models, especially at low sampling rates. Specifically, as shown in Fig. \ref{figure_hall_sr0.1}, Fig. \ref{figure_video_sr0.05} and Fig. \ref{figure_video_sr0.1}, when the sampling rates are 0.05 and 0.1, the advantages of our model are most obvious: our model restores most of the structural information of the videos, while the videos restored by the competing models contain only the outlines of the images. At a higher sampling rate, as shown in Fig. \ref{figure_video_sr0.2} and Fig.
\ref{figure_hall_sr0.1}, both our model and the competing models recover the main structural information of the images, but our model recovers more texture and detail information. \begin{table} [t] \centering \caption{The averaged PSNR, SSIM, FSIM, ERGAS and MSAM of the recovered results on video ``hall'' by TMac, MF-TV, TNN, PSTNN and our model with different sampling rates. The best values are highlighted in bold.} \label{table_video_hall} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{cccccccc} \hline \hline &&&SR = 0.05&&&& \\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 4.82 & \textbf{28.347} & 13.539 & 22.101 & 16.075 & 20.78 \\ SSIM & 0.007 & \textbf{0.894} & 0.412 & 0.675 & 0.36 & 0.636 \\ FSIM & 0.387 & \textbf{0.920} & 0.612 & 0.789 & 0.672 & 0.792 \\ ERGAS & 1225.779 & \textbf{83.146} & 452.351 & 168.866 & 335.52 & 195.315 \\ MSAM & 77.299 & \textbf{2.360} & 12.865 & 3.818 & 8.64 & 4.299 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 5.055 & \textbf{31.804} & 24.855 & 26.936 & 29.014 & 28.433 \\ SSIM & 0.013 & \textbf{0.935} & 0.829 & 0.854 & 0.892 & 0.905 \\ FSIM & 0.393 & \textbf{0.950} & 0.873 & 0.888 & 0.934 & 0.936 \\ ERGAS & 1193.075 & \textbf{56.998} & 131.422 & 97.185 & 77.395 & 82.259 \\ MSAM & 71.7 & \textbf{1.904} & 3.669 & 2.404 & 2.417 & 2.46 \\ \hline &&&SR = 0.2&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 5.567 & \textbf{33.941} & 33.006 & 27.648 & 33.629 & 33.691 \\ SSIM & 0.025 & \textbf{0.952} & 0.94 & 0.869 & 0.961 & 0.962 \\ FSIM & 0.403 & \textbf{0.964} & 0.954 & 0.897 & 0.973 & 0.974 \\ ERGAS & 1124.737 & \textbf{44.581} & 50.971 & 89.271 & 46.123 & 45.851 \\ MSAM & 63.507 & \textbf{1.574} & 1.779 & 2.226 & 1.584 & 1.565 \\ \hline \hline \end{tabular}} \end{table} \subsection{Magnetic resonance imaging data} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005Original}}% \hfil \subfloat[95\%
Masked]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005TNN}}% \caption{One slice of the recovered video ``hall'' by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 5\%.} \label{figure_hall_sr0.05} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01Original}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01TNN}}% \caption{One slice of the recovered video ``hall'' by our model, MF-TV, Tmac, PSTNN and TNN.
The sampling rate is 10\%.} \label{figure_hall_sr0.1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_suzie_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_suzie_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_suzie_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_suzie_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_suzie_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_suzie_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_suzie_sr005}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_suzie_sr01}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_suzie_sr02}}% \caption{The PSNR, SSIM and FSIM of the recovered video ``suzie'' by MF-TV, Tmac, TNN, PSTNN and our model for all slices, respectively.} \label{PSNR and SSIM of video} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01MRI_b7}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01TNN}}% \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN.
The sampling rate is 10\%.} \label{figure_MR_sr0.1_1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01MRI_b83}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01TNN}}% \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 10\%.} \label{figure_MR_sr0.1_2} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01MRI_b95}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01TNN}}% \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN.
The sampling rate is 10\%.} \label{figure_MR_sr0.1_3} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01MRI_b118}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01TNN}}% \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 10\%.} \label{figure_MR_sr0.1_4} \end{figure*} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_mri_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_mri_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_mri_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_mri_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_mri_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_mri_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_mri_sr005}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_mri_sr01}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_mri_sr02}}% \caption{The PSNR, SSIM and FSIM of the recovered MRI by MF-TV, Tmac, TNN, PSTNN and our model for all slices, respectively.} \label{PSNR and SSIM of MRI} \end{figure*} To further verify the versatility of our model on different datasets, in this subsection we compare our model with MF-TV, Tmac, TNN and PSTNN on the MRI dataset, i.e., the cubical MRI data\footnote{http://brainweb.bic.mni.mcgill.ca/brainweb/selection$\_$normal.html}.
One slice of it is shown in Fig. \ref{figure_MR_sr0.1_1}. The size of the dataset is 181 $\times$ 217 $\times$ 150. We test all five methods at a series of sampling rates: 5\%, 10\% and 20\%. In addition, the $n$-rank is approximated using the number of the largest 0.5\% of singular values. For quantitative comparison, Table \ref{table_MRI} reports the PQIs of the results completed by the different methods, with the best result for each PQI marked in bold. From Table \ref{table_MRI}, it can be found that our method obtains the highest indices among the five tested methods in all SR cases. Further, the same advantage of our model can also be seen in Fig. \ref{PSNR and SSIM of MRI}, which reports the PSNR, SSIM and FSIM of each slice. For visual comparison, Fig. \ref{figure_MR_sr0.1_1}, Fig. \ref{figure_MR_sr0.1_2}, Fig. \ref{figure_MR_sr0.1_3} and Fig. \ref{figure_MR_sr0.1_4} show slices with 90\% missing values and the corresponding slices completed by all the tested methods. From the results, we see again that our model retains more of the local details and texture information of the images and restores their main structure more effectively than the other compared methods. Therefore, the data recovered by our model has the best visual quality. \begin{table}[t] \centering \caption{The averaged PSNR, SSIM, FSIM, ERGAS and MSAM of the recovered results on MRI by TMac, MF-TV, TNN, PSTNN and our model with different sampling rates.
The best values are highlighted in bold.} \label{table_MRI} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccccccccc} \hline \hline &&&SR = 0.05&&&& \\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 10.258 & \textbf{26.414} & 12.332 & 20.51 & 15.859 & 18.218 \\ SSIM & 0.228 & \textbf{0.722} & 0.099 & 0.45 & 0.224 & 0.27 \\ FSIM & 0.473 & \textbf{0.834} & 0.52 & 0.711 & 0.642 & 0.646 \\ ERGAS & 1030.203 & \textbf{184.279} & 814.747 & 339.385 & 545.77 & 434.774 \\ MSAM & 76.54 & \textbf{20.411} & 55.603 & 31.367 & 36.355 & 31.11 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 10.492 & \textbf{32.652} & 15.406 & 21.411 & 22.061 & 22.535 \\ SSIM & 0.241 & \textbf{0.912} & 0.25 & 0.531 & 0.482 & 0.536 \\ FSIM & 0.511 & \textbf{0.926} & 0.587 & 0.732 & 0.764 & 0.78 \\ ERGAS & 1002.8 & \textbf{89.116} & 584.827 & 308.655 & 275.473 & 266.753 \\ MSAM & 70.986 & \textbf{14.637} & 41.826 & 29.345 & 24.585 & 24.6 \\ \hline &&&SR = 0.2&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 11.003 & \textbf{36.529} & 27.062 & 22.33 & 29.152 & 28.571 \\ SSIM & 0.271 & \textbf{0.962} & 0.737 & 0.586 & 0.804 & 0.802 \\ FSIM & 0.564 & \textbf{0.963} & 0.84 & 0.754 & 0.895 & 0.891 \\ ERGAS & 945.583 & \textbf{57.037} & 173.636 & 276.269 & 127.133 & 136.182 \\ MSAM & 62.887 & \textbf{11.559} & 21.792 & 27.267 & 17.513 & 17.855 \\ \hline \hline \end{tabular}} \end{table} \subsection{Hyperspectral Image Data} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72.png}}% \hfil \subfloat[97.5\% Masked]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025Tmac}}% \hfil
\subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025TNN}}% \caption{One slice of the recovered HSI ``Cuprite'' by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 2.5\%.} \label{figure_HSI_sr0.025} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72.png}}% \hfil \subfloat[95\% Masked]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005TNN}}% \caption{One slice of the recovered HSI ``Cuprite'' by our model, MF-TV, Tmac, PSTNN and TNN.
The sampling rate is 5\%.} \label{figure_HSI_sr0.05} \end{figure*} \begin{figure*}[!t] \centering \subfloat[PSNR]{\includegraphics[width=0.15\linewidth]{psnr_hsi_sr0025}}% \hfil \subfloat[SSIM]{\includegraphics[width=0.15\linewidth]{ssim_hsi_sr0025}}% \hfil \subfloat[FSIM]{\includegraphics[width=0.15\linewidth]{fsim_hsi_sr0025}}% \hfil \subfloat[PSNR]{\includegraphics[width=0.15\linewidth]{psnr_hsi_sr005}}% \hfil \subfloat[SSIM]{\includegraphics[width=0.15\linewidth]{ssim_hsi_sr005}}% \hfil \subfloat[FSIM]{\includegraphics[width=0.15\linewidth]{fsim_hsi_sr005}}% \caption{The PSNR, SSIM and FSIM of the recovered HSI ``Cuprite'' by MF-TV, Tmac, TNN, PSTNN and our model for all slices, respectively. (a)-(c): 97.5\% entries missing; (d)-(f): 95\% entries missing.} \label{PSNR and SSIM of HSI} \end{figure*} In this subsection, we compare our model with MF-TV, Tmac, TNN and PSTNN on one HSI dataset: the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite data\footnote{http://aviris.jpl.nasa.gov/html/aviris.freedata.html}. The size of the AVIRIS Cuprite data is 150 $\times$ 150 $\times$ 188, and one slice of it is shown in Fig. \ref{figure_HSI_sr0.05}. We test all five methods at a series of sampling rates: 2.5\%, 5\% and 10\%. In addition, the $n$-rank is approximated using the number of the largest 0.3\% of singular values. For quantitative comparison, Table \ref{table_HSI} reports the average PQIs of each tested method at the three sampling rates. For sampling rates 0.025 and 0.05, Fig. \ref{PSNR and SSIM of HSI} reports the PSNR, SSIM and FSIM of each frontal slice of the completed results. For visual comparison, Fig. \ref{figure_HSI_sr0.025} and Fig. \ref{figure_HSI_sr0.05} show slices of the sampled data with 97.5\% and 95\% missing values, respectively, and the corresponding slices recovered by the tested methods.
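The slice-wise evaluation protocol used throughout (compute each PQI on every frontal slice, then average over the third mode) can be sketched as follows. This is an illustration with the standard PSNR formula on synthetic stand-in tensors, not the paper's MATLAB code:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """PSNR (dB) of a single frontal slice against its reference."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_psnr(ref_tensor, est_tensor):
    """Average the per-slice PSNR over all frontal slices (third mode),
    mirroring how the averaged PQIs in the tables are obtained."""
    return float(np.mean([psnr(ref_tensor[:, :, k], est_tensor[:, :, k])
                          for k in range(ref_tensor.shape[2])]))

rng = np.random.default_rng(0)
ref = rng.random((32, 32, 10))
est = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(round(mean_psnr(ref, est), 1))   # roughly 40 dB for noise std 0.01
```

SSIM and FSIM would be averaged over the frontal slices in exactly the same way.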
From the results, we see again that our model not only obtains the highest PQIs, but also recovers more structural information and spatial details of the images than the compared methods, especially at low sampling rates. \begin{table} \caption{The averaged PSNR, SSIM, FSIM, ERGAS and MSAM of the recovered results on the hyperspectral image ``Cuprite'' by TMac, MF-TV, TNN, PSTNN and our model with different sampling rates. The best values are highlighted in bold.} \centering \label{table_HSI} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccccccccc} \hline \hline &&&SR = 0.025&&&& \\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 7.666 & \textbf{34.595} & 26.115 & 21.25 & 13.387 & 22.783 \\ SSIM & 0.007 & \textbf{0.861} & 0.539 & 0.412 & 0.124 & 0.554 \\ FSIM & 0.48 & \textbf{0.916} & 0.765 & 0.755 & 0.613 & 0.775 \\ ERGAS & 1043.633 & \textbf{50.383} & 237.074 & 235.594 & 539.574 & 245.333 \\ MSAM & 81.221 & \textbf{1.662} & 12.913 & 7.842 & 17.98 & 9.156 \\ \hline &&&SR = 0.05&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 7.779 & \textbf{38.202} & 34.684 & 28.945 & 20.621 & 26.579 \\ SSIM & 0.01 & \textbf{0.928} & 0.845 & 0.712 & 0.31 & 0.663 \\ FSIM & 0.471 & \textbf{0.960} & 0.915 & 0.846 & 0.735 & 0.836 \\ ERGAS & 1030.139 & \textbf{41.898} & 89.372 & 93.352 & 234.445 & 154.292 \\ MSAM & 77.268 & \textbf{1.559} & 4.386 & 3.278 & 7.886 & 5.413 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 8.013 & 39.056 & \textbf{40.888} & 35.627 & 35.51 & 35.015 \\ SSIM & 0.014 & 0.939 & \textbf{0.957} & 0.885 & 0.907 & 0.897 \\ FSIM & 0.451 & 0.966 & \textbf{0.978} & 0.931 & 0.951 & 0.943 \\ ERGAS & 1002.75 & 34.544 & \textbf{34.263} & 44.518 & 54.421 & 57.537 \\ MSAM & 71.695 & \textbf{1.299} & 1.46 & 1.445 & 2.072 & 2.192 \\ \hline \hline \end{tabular}} \end{table} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=005_PSNR}}% \hfil
\subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=01_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=02_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=005_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=01_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=02_SSIM}}% \caption{Sensitivity analysis of the parameter $\gamma$ on the video ``hall'' dataset. (a)-(c): change in the MPSNR value for SR = 0.05, 0.1 and 0.2; (d)-(f): change in the MSSIM value for SR = 0.05, 0.1 and 0.2, respectively.} \label{parameter_analysis_hall} \end{figure*} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=005_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=01_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=02_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=005_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=01_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=02_SSIM}}% \caption{Sensitivity analysis of the parameter $\gamma$ on the MRI dataset. (a)-(c): change in the MPSNR value for SR = 0.05, 0.1 and 0.2; (d)-(f): change in the MSSIM value for SR = 0.05, 0.1 and 0.2, respectively.} \label{parameter_analysis_MRI} \end{figure*} \begin{figure*}[!t] \centering \subfloat[SR=0.025]{\includegraphics[width=0.15\linewidth]{HSI_SR=0025_PSNR}}% \hfil \subfloat[SR=0.05]{\includegraphics[width=0.15\linewidth]{HSI_SR=005_PSNR}}% \hfil \subfloat[SR=0.1]{\includegraphics[width=0.15\linewidth]{HSI_SR=01_PSNR}}% \hfil \subfloat[SR=0.025]{\includegraphics[width=0.15\linewidth]{HSI_SR=0025_SSIM}}% \hfil \subfloat[SR=0.05]{\includegraphics[width=0.15\linewidth]{HSI_SR=005_SSIM}}% \hfil \subfloat[SR=0.1]{\includegraphics[width=0.15\linewidth]{HSI_SR=01_SSIM}}% \caption{Sensitivity analysis of the parameter $\gamma$ on the HSI dataset.
(a)-(c): change in the MPSNR value for SR = 0.025, 0.05 and 0.1; (d)-(f): change in the MSSIM value for SR = 0.025, 0.05 and 0.1, respectively.} \label{parameter_analysis_HSI} \end{figure*} \begin{table*} \centering \caption{Parameter settings in the proposed algorithm} \label{table:parameter_setting} \begin{tabular}{ccccc ccccc ccc} \hline & &Video-suzie& & &Video-hall& & &MRI dataset& & &HSI dataset& \\ \hline SR& 0.05& 0.1&0.2 &0.05& 0.1&0.2 &0.05& 0.1&0.2 &0.025& 0.05&0.1\\ $\gamma$& 2.3&2.5&2.7&1.7&2.6&2.6&1.3&2.5&3.0&3.5&5.5&5.7\\ \hline \end{tabular} \end{table*} \subsection{Discussion} Considering that the $\gamma$ in $\mathbf{X}_n$ and that in $\mathbf{A}_n$ have a certain proportional relationship, we simply set the $\gamma$ in $\mathbf{X}_n$ to 0.1 and tune the $\gamma$ in $\mathbf{A}_n$ carefully. For the experiments on the video, MRI and HSI datasets, Fig. \ref{parameter_analysis_hall}, Fig. \ref{parameter_analysis_MRI} and Fig. \ref{parameter_analysis_HSI} show the effect of the parameter $\gamma$ on the PSNR and SSIM at different sampling rates, respectively. To enhance the reproducibility of our model, we list the optimal $\gamma$ for the various datasets at different sampling rates in Table \ref{table:parameter_setting}. In addition, we manually set both $\tau_n$ and $\lambda_n$ to 0.01. \section{Conclusions} \label{conclusion} In this paper, we propose a novel double low-rank tensor model based on multi-mode matrix decomposition for tensor completion. Instead of using the traditional single nuclear norm or its relaxation to represent the low-rank prior of the underlying tensor directly, we first apply parallel matrix factorization to all modes of the underlying tensor; then, a novel double nonconvex $L_{\gamma}$ norm is designed to represent the underlying joint manifold drawn from the mode factorization factors. A BSUM-based algorithm is designed to efficiently solve the proposed model, and it can be demonstrated that our numerical scheme converges to the coordinatewise minimizers.
The proposed model has been evaluated on three types of public tensor datasets, and the results show that our algorithm can complete a variety of low-rank tensors with significantly fewer samples than the compared methods. \section*{Acknowledgment} The authors would like to express their thanks to Dr. C. Lu, Dr. Y. Xu, Dr. T. Ji and Dr. T. Jiang for sharing the codes of the tested methods. In addition, this research is supported by the Fundamental Research Funds for the Central Universities under Grant No. 2452019073 and the National Natural Science Foundation of China under Grant No. 61876153. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Introduction} Tensors, as a generalization of vectors and matrices, arise in many data processing applications and have attracted great interest. Typical applications include video inpainting \cite{tensor_video}, magnetic resonance imaging (MRI) data recovery \cite{tensor_MRI, MRI_tensor}, 3D image reconstruction \cite{tensor_3dimage, tensor_3DReconstruction}, high-order web link analysis [16], hyperspectral image (HSI) or multispectral image recovery \cite{tensor_HSI, chen2018tensor}, personalized web search \cite{tensor_web} and seismic data reconstruction \cite{tensor_seismic_data}. Tensor completion aims to recover a low-rank tensor from its partially observed entries. A large number of tensor completion methods have been proposed and successfully used in many applications. Among them, the tensor rank minimization based methods are considered state-of-the-art with promising performance, and their robustness to noisy and missing data has also been proven \cite{lu2019TRPCA}. However, due to the non-unique definition of the tensor rank, it is extremely hard to directly solve the tensor rank minimization problem.
To overcome this issue, many researchers have devoted themselves to defining the tensor rank based on different decomposition methods, such as matrix factorization \cite{LMaFit}, CANDECOMP/PARAFAC (CP) decomposition \cite{CP}, Tucker decomposition \cite{tucker_1, ZENG_HSI_tensor}, tensor singular value decomposition (t-SVD) \cite{t_SVD}, tensor train \cite{tensor_train_TingzhuHuang} and tensor ring \cite{tensor_ring_YipengLiu}. The commonly used definitions of tensor rank are the CP-rank, the Tucker-rank, and the multi-rank and tubal-rank based on t-SVD. However, it is NP-hard to minimize the CP-rank, which has no tractable relaxation and therefore certain limitations in applications. Although the Tucker-rank, relying on matrix ranks, is relatively simple, directly minimizing it is also NP-hard. To tackle this difficulty, the sum of nuclear norms (SNN) \cite{HaLRTC} is introduced as a convex relaxation of the Tucker-rank. Specifically, SNN is defined as the sum of the nuclear norms of the unfolding matrices along all dimensions of a tensor. Due to its similarity to the matrix case and its convenient calculation, SNN is widely used in tensor completion tasks \cite{tensor_Qrank}. Besides, the tensor multi-rank and tubal-rank induced from t-SVD are computable. As the tightest convex surrogate for the tensor multi-rank, the tensor nuclear norm (TNN) \cite{t_SVD} is defined as the sum of the matrix nuclear norms of the frontal slices of the Fourier-transformed tensor. TNN has shown its effectiveness in keeping the intrinsic structure of tensors and has attracted extensive attention for tensor completion problems in recent years \cite{lu2019TRPCA}. However, due to the convexity of the matrix nuclear norm, SNN and TNN are limited in the accuracy of their approximation to the tensor rank function.
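As a concrete illustration of the SNN surrogate just described, the following sketch computes the sum of nuclear norms over all mode unfoldings of a synthetic rank-1 tensor. The equal mode weights $1/N$ are an assumption made for this example, not the cited papers' choice:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: the mode-n fibers become the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def snn(tensor):
    """Sum of nuclear norms of all mode unfoldings, equally weighted."""
    n = tensor.ndim
    return sum(np.linalg.norm(unfold(tensor, k), ord='nuc')
               for k in range(n)) / n

# A rank-1 tensor built from a unit-norm factor: every unfolding has
# rank 1 with singular value 1, so the weighted SNN equals 1.
a = np.ones(4) / 2.0
x = np.einsum('i,j,k->ijk', a, a, a)
print(round(snn(x), 6))  # → 1.0
```

For higher-rank tensors the unfoldings generally have different ranks, which is exactly the multi-mode low-rankness that SNN (and Tucker-rank) captures.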
Recently, a number of studies \cite{list_nonconvex,Original_BFMN}, both practical and theoretical, have shown that nonconvex approximations of the rank function can provide better estimation accuracy and variable selection consistency than the nuclear norm. For example, a partial sum of the tensor nuclear norm (PSTNN) is proposed as a nonconvex surrogate of the tensor rank by Jiang et al. \cite{PSTNN}; Xue et al. \cite{xue2019nonconvex} unfold the underlying tensor along all its modes and use a nonconvex logarithmic surrogate function to refine the rank approximation. Besides the logarithmic function used by Xue et al., a series of nonconvex surrogates have been proposed to approximate the rank function better, such as the minimax concave function \cite{Folded_concave}, the log-sum function \cite{logsum_penalty}, the log-determinant function \cite{ji_nonlogDet}, the $L_p$ norm for $p\in (0, 1)$ \cite{lysaker_noise_2004_10_27,burger_nonlinear_2006_10_28}, the $L_{1/2}$ norm \cite{lou_L1L2} and the $\gamma$ norm \cite{Non_LRMA}. In addition, TNN and PSTNN involve the singular value decompositions (SVDs) of many matrices, which is time-consuming. To cope with this issue, Xu et al. \cite{Tmac} propose a parallel matrix factorization low-rank tensor completion model (TMac), which obtains promising results with less running time than TNN and PSTNN. Further, combined with total variation (TV) regularization, Ji et al. \cite{MFTV} propose a TV-regularized low-rank matrix factorization method (MF-TV) for low-rank tensor completion problems.
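To see why such nonconvex penalties approximate the rank function more faithfully than the nuclear norm, consider how each one weighs two singular values that differ by a factor of two. This is a toy numerical check; the smoothing parameter of the log-sum penalty is an assumption of the example:

```python
import numpy as np

s_large, s_small = 10.0, 5.0   # two nonzero singular values
eps = 1e-3                     # smoothing parameter of the log-sum penalty

# The nuclear norm charges proportionally to magnitude: the ratio is 2.
nuclear_ratio = s_large / s_small

# The log-sum penalty log(1 + s/eps) charges both values almost
# equally, behaving like the rank, which counts each of them as 1.
log_sum_ratio = np.log(1 + s_large / eps) / np.log(1 + s_small / eps)
print(round(nuclear_ratio, 2), round(log_sum_ratio, 2))
```

The nuclear norm thus over-penalizes large singular values (which usually carry the dominant structure), while a nonconvex surrogate shrinks them far less, yielding a closer proxy for the rank.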
\begin{figure*}[!t] \centering \subfloat[Original image]{\includegraphics[width=0.23\linewidth]{Image_video_b94.png}}% \hfil \subfloat[Our model]{\includegraphics[width=0.23\linewidth]{Image_video_b94_sr005LxLxNN}}% \hfil \subfloat[TMac]{\includegraphics[width=0.23\linewidth]{Image_video_b94_sr005Tmac}}% \hfil \subfloat[TNN]{\includegraphics[width=0.23\linewidth]{Image_video_b94_sr005TNN}}% \caption{The completed results on ``suzie'' with 95\% missing entries by different methods. From (a) to (d): the original image and the results by our model, TMac and TNN, respectively.} \label{TNN_Tamc_our-model_figure_video_sr0.05} \end{figure*} Although the above-mentioned low-rank tensor completion methods show great success in dealing with various issues, three major open questions have yet to be addressed. Firstly, in tensor rank approximations based on tensor or matrix decomposition, the low-rank priors of the underlying tensor are explored only through convex or nonconvex relaxations of the original tensor rank function, while the low-rank priors of the factors obtained by the decomposition are not investigated further. Secondly, TNN or PSTNN based methods \cite{TNN, PSTNN} need to compute many SVDs, which becomes very slow or even inapplicable for large-scale problems \cite{Tmac}. Thirdly, the aforementioned methods adopt a single surrogate of the tensor rank function, which may cause suboptimal solutions of the low-rank tensor completion problem \cite{T_Sp} and cannot fully explore the low-rank priors in all modes, especially when the tensor data is heavily contaminated or the sampling rate is very low. One can see an example in Fig. \ref{TNN_Tamc_our-model_figure_video_sr0.05}.
In this paper, motivated by the much better performance of models that utilize the low-rankness in all modes of a tensor \cite{HaLRTC, Tmac}, instead of using a single surrogate of the tensor rank function to represent the low-rank prior of the underlying tensor directly, we first apply parallel matrix factorization to all modes of the underlying tensor. Further, a novel double $L_{\gamma}$ norm, a kind of nonconvex penalty, is designed to represent the underlying joint manifold drawn from the mode factorization factors. By exploiting this auxiliary information, our method leverages both low-rank decomposition and low-rank approximation, which helps to accurately estimate the mode factors and the missing entries. A block successive upper-bound minimization (BSUM) based algorithm is designed to efficiently solve the proposed model, and we demonstrate that the numerical scheme converges to the set of coordinatewise minimizers. The proposed model has been evaluated on three types of public tensor datasets, and the results show that our algorithm can recover a variety of low-rank tensors with significantly fewer samples than the compared methods. The rest of this paper is organized as follows. Section \ref{notation} introduces some notations for tensors and the related operations. Section \ref{related works} reviews the related works. In Section \ref{the proposed model}, the proposed model is presented and its optimization is deduced in detail. In Section \ref{Numerical experiments}, the proposed model is evaluated on several public tensor datasets. Section \ref{conclusion} gives the conclusions. \section{Preliminary} \label{notation} \subsection{Notations} In this paper, following \cite{Tmac}, a vector, a matrix and a tensor are denoted by a bold lower-case letter $\mathbf{x}$, a bold upper-case letter $\mathbf{X}$ and a calligraphic letter $\mathcal{X}$, respectively. Let $x_{i_{1} \ldots i_{N}}$ represent the $\left(i_{1}, \ldots, i_{N}\right)$-th component of an $N$-way tensor $\mathcal{X}$.
Then, for $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_{1} \times \ldots \times I_{N}},$ their inner product is defined as \begin{equation} \label{equation:inner_product} \langle\mathcal{X}, \mathcal{Y}\rangle=\sum_{i_{1}=1}^{I_{1}} \cdots \sum_{i_{N}=1}^{I_{N}} x_{i_{1} \cdots i_{N}} y_{i_{1} \cdots i_{N}}. \end{equation} Based on (\ref{equation:inner_product}), the \textbf{Frobenius norm} of a tensor $\mathcal{X}$ is defined as $\|\mathcal{X}\|_{\text{F}}=\sqrt{\langle\mathcal{X}, \mathcal{X}\rangle}$. A \textbf{fiber} of a tensor $\mathcal{X}$ is a vector obtained by fixing all indices of $\mathcal{X}$ except one, and a \textbf{slice} of $\mathcal{X}$ is a matrix obtained by fixing all indices of $\mathcal{X}$ except two. The \textbf{mode-$n$ matricization}/\textbf{unfolding} of $\mathcal{X} \in \mathbb{R}^{I_{1} \times \ldots \times I_{N}}$ is the matrix $\mathbf{X}_{(n)} \in \mathbb{R}^{I_{n} \times \Pi_{j \neq n} I_{j}}$ whose columns are the mode-$n$ fibers of $\mathcal{X}$ in lexicographical order. Furthermore, to clearly represent the matricization process, we define \textbf{unfold}$_{n}(\mathcal{X})=\mathbf{X}_{(n)}$, and \textbf{fold}$_{n}$ as the inverse of \textbf{unfold}$_{n}$, i.e., \textbf{fold}$_{n}\left(\text {\textbf{unfold}}_{n}(\mathcal{X})\right)=\mathcal{X}$. The $n$-rank of an $N$-way tensor $\mathcal{X},$ denoted as $\operatorname{rank}_{n}(\mathcal{X}),$ is the rank of $\mathbf{X}_{(n)},$ and the rank of $\mathcal{X}$ is defined as the array: \begin{equation} \operatorname{rank}(\mathcal{X})=\left(\operatorname{rank}\left(\mathbf{X}_{(1)}\right), \cdots, \operatorname{rank}\left(\mathbf{X}_{(N)}\right)\right). \end{equation} \subsection{Operators} \label{operators} The \textbf{Proximal Operator} of a function $f$ is defined as follows: \begin{equation} \label{equation:PPA_0} \operatorname{prox}_{f}(v):=\arg \min _{u} f(u)+\frac{\rho}{2}\|u-v\|^{2}, \end{equation} where $f(u)$ is convex and $\rho>0$ is the proximal parameter.
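As a minimal scalar illustration of the proximal operator (the choice $f(u)=\lambda|u|$ here is an assumed example for intuition, not the paper's nonconvex penalty), $\operatorname{prox}_{f}$ reduces to soft thresholding:

```python
import numpy as np

def prox_abs(v, lam, rho):
    """Proximal operator of f(u) = lam*|u| with proximal parameter rho:
    argmin_u lam*|u| + (rho/2)*(u - v)**2, i.e. soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)

# Cross-check against a brute-force minimization on a fine grid.
v, lam, rho = 1.7, 1.0, 2.0
grid = np.linspace(-5.0, 5.0, 200001)
objective = lam * np.abs(grid) + rho / 2.0 * (grid - v) ** 2
assert abs(grid[np.argmin(objective)] - prox_abs(v, lam, rho)) < 1e-3
```

The same pattern (penalty plus a quadratic proximity term) is what each block update below solves.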
Then, the minimization of $f(u)$ is equivalent to the iteration \begin{equation} \arg \min _{u} f(u)+\frac{\rho}{2}\|u-u^k\|^{2}, \quad k=1,2,\cdots, \end{equation} where $u^k$ is the last update of $u$. We define the \textbf{Projection Operator} as follows: \begin{equation} \left(\mathcal{P}_{\Omega}(\mathcal{Y})\right)_{i_{1} \cdots i_{N}}=\left\{\begin{array}{ll} {y_{i_{1} \cdots i_{N}},} & {\left(i_{1}, \cdots, i_{N}\right) \in \Omega}, \\ {0,} & {\text { otherwise, }} \end{array}\right. \end{equation} where $\Omega$ is the index set of the observed entries. The operator $\mathcal{P}_{\Omega}$ keeps the entries in $\Omega$ and zeros out the others. \section{Related Works} \label{related works} We first introduce related tensor completion methods based on tensor rank minimization. Given a partially observed tensor $\mathcal{F} = \mathcal{P}_{\Omega}(\mathcal{Y}) \in \mathbb{R}^{I_{1} \times I_{2} \times \cdots \times I_{N}}$, the tensor completion task is to recover the low-rank tensor $\mathcal{Y}$ from $\mathcal{F}$ according to the priors of the underlying tensor $\mathcal{Y}$. In the past decade, the TNN induced by the t-SVD \cite{t_SVD} has been widely used for 3-order low-rank tensor completion \cite{TNN}. The TNN based method aims to recover a low-rank tensor by penalizing the nuclear norm of each frontal slice in the Fourier transformed domain, \begin{equation} \label{TNN} \arg\min_{\mathcal{Y}} \frac{1}{I_{3}} \sum_{i=1}^{I_{3}}\left\|\bar{\mathbf{Y}}^{(i)}\right\|_{*}, \quad s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}, \end{equation} where $\bar{\mathbf{Y}}^{(i)} \in \mathbb{C}^{I_{1} \times I_{2}}$ denotes the $i$-th frontal slice of $\bar{\mathcal{Y}}$, and $\bar{\mathcal{Y}}=\operatorname{fft}(\mathcal{Y},[],3)$ denotes the fast Fourier transform of $\mathcal{Y}$ along the third dimension.
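The TNN defined above can be sketched in a few lines of NumPy (an assumed illustration of the definition only, not a solver of the completion model):

```python
import numpy as np

def tnn(Y):
    """Tensor nuclear norm of a 3-order tensor Y:
    (1/I3) * sum of the nuclear norms of the frontal slices
    of the FFT of Y along the third dimension."""
    Y_bar = np.fft.fft(Y, axis=2)
    I3 = Y.shape[2]
    return sum(np.linalg.norm(Y_bar[:, :, i], 'nuc') for i in range(I3)) / I3
```

For a tensor whose frontal slices all equal a matrix $\mathbf{M}$, only the zero-frequency slice of the FFT is nonzero and the TNN reduces to $\|\mathbf{M}\|_{*}$.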
\begin{figure*} \centering \includegraphics[width=1\linewidth]{3-model-lowrank-Lx-norm} \caption{Flowchart of the proposed low-rank tensor approximation: the nonconvex tensor $L_{\gamma}$ norm.} \label{fig:3-model-lowrank-lx-norm} \end{figure*} Then, to alleviate the bias phenomenon of the TNN minimization in tensor completion tasks, Jiang et al. \cite{PSTNN} represent the low-rank prior of the underlying tensor by the PSTNN. The PSTNN regularized tensor completion model is formulated as follows: \begin{equation} \label{PSTNN} \arg\min_{\mathcal{Y}} \frac{1}{I_{3}} \sum_{i=1}^{I_{3}}\left\|\bar{\mathbf{Y}}^{(i)}\right\|_{p=M}, \quad s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}, \end{equation} where $$\|\bar{\mathbf{Y}}^{(i)}\|_{p=M} := \sum_{j=M+1}^{\min (I_1, I_2)} \sigma_{j}(\bar{\mathbf{Y}}^{(i)}),$$ and $\sigma_{j}(\bar{\mathbf{Y}}^{(i)})$ denotes the $j$-th largest singular value of $\bar{\mathbf{Y}}^{(i)}$. To reduce the burden of calculating SVDs in TNN and PSTNN, Liu et al. \cite{HaLRTC} unfold the $N$-order tensor into modal matrices along each mode, and then use the sum of the nuclear norms of these modal matrices, i.e., the SNN, to describe the low-rank structure of the underlying tensor. With that definition, the completion model is formulated as follows: \begin{equation} \arg\min_{\mathcal{Y}} \sum_{n=1}^{N} \alpha_n \left\|\mathbf{Y}_{(n)}\right\|_{*}, \quad s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}. \end{equation} Furthermore, Xu et al. \cite{Tmac} propose the TMac model based on parallel matrix factorization, which obtains promising results with lower computational complexity than TNN and PSTNN: \begin{equation} \label{Tmac} \begin{aligned} &\arg\min _{\mathcal{Y}, \mathbf{X}, \mathbf{A}} \sum_{n=1}^{N} \frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{F}^{2},\\ &\text { s.t. } \quad \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}, \end{aligned} \end{equation} where the weights $\alpha_n$ satisfy $\sum_{n=1}^{N}\alpha_n=1$. Although the above-mentioned low-rank tensor completion methods report success in dealing with a large variety of tasks, several open issues have yet to be addressed. Firstly, the above approaches either explore the low-rank prior lying in only one mode of the underlying tensor or directly represent the low-rank prior of the original tensor by low-rank decomposition. They do not further explore the priors of the factors (e.g., $\mathbf{A}_n, \mathbf{X}_n$ in (\ref{Tmac})) obtained by the low-rank decomposition. Secondly, TNN based methods \cite{TNN, PSTNN, lu2019TRPCA} need to compute many SVDs, which is time-consuming or even not applicable for large-scale problems \cite{Tmac}. Thirdly, these methods adopt a single surrogate of the tensor rank function, which can cause suboptimal solutions of low-rank tensor completion problems \cite{T_Sp} and cannot fully explore the low-rank priors in all modes, especially when the tensor data is heavily contaminated or the sampling rate is very low. One can see an example in Fig. \ref{TNN_Tamc_our-model_figure_video_sr0.05}, in which the video ``suzie'' has 95\% missing entries; our model restores most of the structural information of the video, while the videos restored by the single-surrogate methods contain only the outlines of the images. \section{Double nonconvex $L_{\gamma}$ norm based low-rank approximation for tensor completion} \label{the proposed model} In the following, the novel double nonconvex $L_{\gamma}$ norm based low-rank approximation of tensor multi-modes (LRATM) is introduced first. Then, the optimization of the proposed LRATM model is deduced in detail.
\subsection{LRATM Model} For a tensor $\mathcal{Y}\in \mathbb{R}^{I_{1} \times I_{2} \times \cdots \times I_{N}}$, to enhance the flexibility for handling different correlations along different modes of the underlying tensor, and to effectively explore the underlying joint manifold drawn from the mode factorization factors, we first formulate a novel nonconvex tensor $L_{\gamma}$ norm, \begin{equation} \begin{aligned} &\left\|\mathcal{Y}\right\|_{\gamma}= \sum_{n=1}^{N} (\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}), \end{aligned} \end{equation} where $\mathbf{Y}_{(n)}=\mathbf{A}_{n} \mathbf{X}_{n}$, and $$\left\|\mathbf{X}\right\|_{\gamma}:=\sum_{t=1}^{\min \{p, q\}}\left(1-e^{-\sigma_{t}(\mathbf{X}) / \gamma}\right)$$ is a nonconvex approximation of $\operatorname{rank}(\mathbf{X})$, where $\sigma_{t}(\mathbf{X})$ is the $t$-th largest singular value of $\mathbf{X} \in \mathbb{R}^{p \times q}$, and $\tau_n$ and $\lambda_n$ are non-negative constants that balance the two terms. Then, our tensor $L_{\gamma}$ norm based low-rank approximation model for low-rank tensor completion, i.e., the proposed LRATM model, is written as \begin{equation} \label{BMF_LRTC} \begin{aligned} \arg\min_{\mathcal{Y},\mathbf{X},\mathbf{A}} \left\|\mathcal{Y}\right\|_{\gamma}=&\arg\min_{\mathcal{Y},\mathbf{X},\mathbf{A}} \sum_{n=1}^{N} (\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}), \\ &s.t.~ \mathbf{Y}_{(n)}=\mathbf{A}_{n} \mathbf{X}_{n}, \quad \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}. \end{aligned} \end{equation} To better understand the proposed LRATM model, we plot the flowchart of the proposed low-rank tensor approximation in Fig. \ref{fig:3-model-lowrank-lx-norm}. It can be seen that the video, MRI and HSI data in the first column can essentially be viewed as 3-order tensors, as shown in the second column.
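As a side sketch, the matrix $L_{\gamma}$ norm $\|\mathbf{X}\|_{\gamma}$ defined above can be transcribed in a few lines of NumPy (an assumed illustration using a plain SVD, not part of the MATLAB implementation); each nonzero singular value contributes at most $1$, so the norm is bounded by the rank and approaches it as $\gamma \to 0$:

```python
import numpy as np

def l_gamma_norm(X, gamma):
    """||X||_gamma = sum_t (1 - exp(-sigma_t(X)/gamma)): a nonconvex
    surrogate of rank(X). Zero singular values contribute 0, and each
    nonzero one tends to 1 as gamma -> 0."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(1.0 - np.exp(-sigma / gamma)))
```

For example, for a rank-2 matrix, `l_gamma_norm(X, gamma)` stays below 2 for any $\gamma>0$ and tends to 2 as $\gamma$ shrinks.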
When we unfold the 3-order tensor in the three directions, the low-rank structure of all $N$ modes can be explored by parallel matrix decomposition, i.e., $\mathbf{Y}_{(n)}=\mathbf{A}_{n} \mathbf{X}_{n}, n=1,2,\cdots, N$, which is computationally much cheaper than the SVD. The decomposed factors have practical physical meaning: $\mathbf{A}_n$ represents a library (each column contains a signature of the $n$-th mode direction), and $\mathbf{X}_n$ is called an encoding. For example, in the unmixing problem for HSI \cite{HSI_unmixing}, each column of $\mathbf{A}_3$ contains a spectral signature, and each row of $\mathbf{X}_3$ contains the fractional abundances of a given endmember. This interpretation is also valid for the mode-3 unfolding of video and MRI data. The above parallel matrix decomposition can effectively explore the low-rank structure of the underlying tensor, but the prior information contained in the factor matrices ($\mathbf{A}_{n}, \mathbf{X}_{n}$) has not been explored at all. Therefore, to further enhance the potential capacity of tensor completion models, it is necessary to design new and reasonable formulas to explore the priors in the factor matrices. Here, we propose the novel nonconvex double $L_{\gamma}$ norm to formulate the underlying joint manifold drawn from the mode factorization factors $\mathbf{A}_n$ and $\mathbf{X}_n$. The superiority of the $L_{\gamma}$ norm over the nuclear norm and other nonconvex penalties is illustrated in the fifth column of Fig. \ref{fig:3-model-lowrank-lx-norm}. It is obvious that the red curve of the $L_{\gamma}$ norm is closer to the green curve of the $L_0$ norm (rank function) than the other nonconvex penalties. \subsection{Optimization Procedure of LRATM} In this section, the proposed model is solved by the block successive upper-bound minimization (BSUM) \cite{BSUM} method.
The objective function of the proposed LRATM model (\ref{BMF_LRTC}) can be formulated as follows: \begin{equation} \begin{aligned} f(\mathbf{X}, \mathbf{A}, \mathcal{Y})=\sum_{n=1}^{N}& \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}\right.\\ &\left.+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}\right). \end{aligned} \end{equation} According to the proximal operator (\ref{equation:PPA_0}), the update can be written as: \begin{equation} \label{equation_original_PPA} p(\mathcal{S}, \mathcal{S}^k)=f\left(\mathcal{S}\right)+\frac{\rho}{2}\left\|\mathcal{S}-\mathcal{S}^{k}\right\|_{\text{F}}^{2}, \end{equation} where $\rho>0$ is a positive constant, $\mathcal{S}=(\mathbf{X}, \mathbf{A}, \mathcal{Y})$ and $\mathcal{S}^{k}=\left(\mathbf{X}^{k}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right)$. Let \begin{equation} \left\{\begin{array}{l} g_{1}\left(\mathbf{X}, \mathcal{S}_{1}^{k}\right)=f\left(\mathbf{X}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right)+\frac{\rho}{2}\left\|\mathbf{X}-\mathbf{X}^{k}\right\|_{\text{F}}^{2}, \\ g_{2}\left(\mathbf{A}, \mathcal{S}_{2}^{k}\right)=f\left(\mathbf{X}^{k+1}, \mathbf{A}, \mathcal{Y}^{k}\right)+\frac{\rho}{2}\left\|\mathbf{A}-\mathbf{A}^{k}\right\|_{\text{F}}^{2}, \\ g_{3}\left(\mathcal{Y}, \mathcal{S}_{3}^{k}\right)=f\left(\mathbf{X}^{k+1}, \mathbf{A}^{k+1}, \mathcal{Y}\right)+\frac{\rho}{2}\left\|\mathcal{Y}-\mathcal{Y}^{k}\right\|_{\text{F}}^{2}, \end{array}\right. \end{equation} where \begin{equation} \left\{\begin{array}{l} \mathcal{S}_{1}^{k}=\left(\mathbf{X}^{k}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right),\\ \mathcal{S}_{2}^{k}=\left(\mathbf{X}^{k+1}, \mathbf{A}^{k}, \mathcal{Y}^{k}\right),\\ \mathcal{S}_{3}^{k}=\left(\mathbf{X}^{k+1}, \mathbf{A}^{k+1}, \mathcal{Y}^{k}\right). \end{array}\right.
\end{equation} Then, problem (\ref{equation_original_PPA}) can be rewritten as follows: \begin{equation} \label{equation:XAY} \left\{\begin{array}{l} \displaystyle \mathbf{X}^{k+1}=\arg\min_{\mathbf{X}} g_{1}\left(\mathbf{X}, \mathcal{S}_{1}^{k}\right), \\ \displaystyle \mathbf{A}^{k+1}=\arg\min_{\mathbf{A}} g_{2}\left(\mathbf{A}, \mathcal{S}_{2}^{k}\right), \\ \displaystyle \mathcal{Y}^{k+1}=\arg\min_{\mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}} g_{3}\left(\mathcal{Y}, \mathcal{S}_{3}^{k}\right). \end{array}\right. \end{equation} \subsubsection{Update $\mathbf{X}_n$} With other variables fixed, the optimization subproblem with respect to $\mathbf{X}_n, n=1,2,\cdots,N$, in (\ref{equation:XAY}) can be written as follows: \begin{equation} \label{equation_X} \begin{aligned} \arg\min_{\{\mathbf{X}_n\}_{n=1}^{N}} \sum_{n=1}^{N}& \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\tau_n \left\|\mathbf{X}_{n}\right\|_{\gamma}\right.\\ &\left.+\frac{\rho_n}{2}\left\|\mathbf{X}_n-\mathbf{X}_n^{k}\right\|_{\text{F}}^{2}\right). \end{aligned} \end{equation} To efficiently solve the above optimization problem, we first introduce the auxiliary variables $\mathbf{X}_{n}=\mathbf{Z}_{n}, n=1,2,\cdots,N$. Then, by the augmented Lagrangian multiplier (ALM) method, the optimization subproblem (\ref{equation_X}) can be rewritten as \begin{equation} \label{equation:X_alm} \begin{aligned} &\arg\min_{\{\mathbf{X}_n, \mathbf{Z}_n\}_{n=1}^{N}} \sum_{n=1}^{N} \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\tau_n \left\|\mathbf{Z}_{n}\right\|_{\gamma}\right.\\ &\left.+\frac{\rho_n}{2}\left\|\mathbf{X}_n-\mathbf{X}_n^{k}\right\|_{\text{F}}^{2}+\left\langle\Gamma_{n}^\mathbf{X}, \mathbf{X}_n-\mathbf{Z}_n\right\rangle +\frac{\rho_{n}}{2}\left\|\mathbf{X}_{n}-\mathbf{Z}_{n}\right\|_{\text{F}}^{2}\right), \end{aligned} \end{equation} where $\Gamma_{n}^\mathbf{X}$ are the Lagrangian multipliers.
With other variables fixed, the minimization subproblem for $\mathbf{Z}_n$ can be deduced from (\ref{equation:X_alm}) as follows: \begin{equation} \label{equation:Zn} \mathbf{Z}_n^{k+1}= \arg \min_{\mathbf{Z}_n} \left\|\mathbf{Z}_{n}\right\|_{\gamma}+\frac{\hat{\rho}_{n}}{2}\left\|\mathbf{Z}_{n}-\mathbf{P}_n^k\right\|_{\text{F}}^{2}, \end{equation} where $\hat{\rho}_{n}=\rho_{n}/\tau_n$ and $\mathbf{P}_n^k=\mathbf{X}_{n}^{k}+\Gamma_{n}^{\mathbf{X}}/\rho_n$. Let $\sigma_{1}^{k} \geq \sigma_{2}^{k} \geq \cdots \geq \sigma_{t_n}^{k}$ denote the singular values of $\mathbf{Z}_n^{k} \in \mathbb{R}^{r_n \times s_n}$ with $t_n=\min \left\{r_n, s_n\right\}$, and let $\nabla \phi\left(\sigma_{i}^{k}\right)$ denote the gradient of $\phi(x) = 1-e^{-x/\gamma}$ at $\sigma_{i}^{k}$. Let $$f(\mathbf{Z}_n)=\frac{1}{2}\left\|\mathbf{Z}_n-\mathbf{P}_n^k\right\|_{\text{F}}^{2}.$$ It is easy to verify that the gradient of $f(\mathbf{Z}_n)$ is Lipschitz continuous with Lipschitz constant $1$. As stated in \cite{Non_LRMA}, considering the nonascending order of the singular values and the antimonotone property of the gradient of our nonconvex function, we have \begin{equation} \begin{aligned} & 0 \leq \nabla \phi\left(\sigma_{1}^{k}\right) \leq \nabla \phi\left(\sigma_{2}^{k}\right) \leq \cdots \leq \nabla \phi\left(\sigma_{t_n}^{k}\right), \\ & \phi\left(\sigma_{i}(\mathbf{Z}_n)\right) \leq \phi\left(\sigma_{i}^{k}\right)+\nabla \phi\left(\sigma_{i}^{k}\right)\left(\sigma_{i}(\mathbf{Z}_n)-\sigma_{i}^{k}\right), \end{aligned} \label{equ:gradient} \end{equation} where $i=1,2,\cdots,t_n$.
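The two relations above (nondecreasing gradients over nonascending singular values, and the tangent upper bound from the concavity of $\phi$) can be sanity-checked numerically; this is an assumed check, using $\nabla\phi(x)=e^{-x/\gamma}/\gamma$:

```python
import numpy as np

gamma = 0.5
phi = lambda x: 1.0 - np.exp(-x / gamma)          # phi(x) = 1 - e^{-x/gamma}
grad_phi = lambda x: np.exp(-x / gamma) / gamma   # positive, decreasing gradient

sigma = np.array([3.0, 2.0, 1.0, 0.2])            # nonascending singular values
g = grad_phi(sigma)
assert np.all(g >= 0) and np.all(np.diff(g) >= 0)  # 0 <= grad(s1) <= ... <= grad(st)

# Concavity of phi on [0, inf): phi(x) <= phi(s) + grad_phi(s) * (x - s).
x = np.linspace(0.0, 10.0, 1001)
for s in sigma:
    assert np.all(phi(x) <= phi(s) + grad_phi(s) * (x - s) + 1e-12)
```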
Following (\ref{equ:gradient}), the subproblem (\ref{equation:Zn}) with respect to $\mathbf{Z}_n$ can be relaxed as follows: \begin{equation} \begin{split} &\arg \min _{\mathbf{Z}_n} \frac{1}{\hat{\rho}_{n}} \sum_{i=1}^{t_n}\left[ \phi\left(\sigma_{i}^{k}\right)+\nabla \phi\left(\sigma_{i}^{k}\right)\left(\sigma_{i}(\mathbf{Z}_n)-\sigma_{i}^{k}\right)\right]+f(\mathbf{Z}_n) \\ &=\arg \min _{\mathbf{Z}_n} \frac{1}{\hat{\rho}_{n}} \sum_{i=1}^{t_n} \nabla \phi\left(\sigma_{i}^{k}\right) \sigma_{i}(\mathbf{Z}_n)+\frac{1}{2}\left\|\mathbf{Z}_n-\mathbf{P}_{n}^{k}\right\|_{\text{F}}^{2}. \end{split} \label{equ:relaxation} \end{equation} Then, based on \cite{lu_nonconvex, Non_LRMA}, the solution of (\ref{equ:relaxation}) can be efficiently obtained by the generalized weighted singular value thresholding (WSVT) \cite{WSVT}, as shown in Lemma 1. \noindent\textbf{Lemma 1}: For any $1 / \hat{\rho}_{n}>0$, the given data $\mathbf{P}_n^k=\mathbf{X}_{n}^{k}+\Gamma_{n}^{\mathbf{X}}/\rho_n$, and $0 \leq \nabla \phi\left(\sigma_{1}^{k}\right) \leq \nabla \phi\left(\sigma_{2}^{k}\right) \leq \cdots \leq \nabla \phi\left(\sigma_{t_n}^{k}\right)$, a globally optimal solution $\mathbf{Z}_n^{k+1}$ to (\ref{equ:relaxation}) is given as follows: \begin{equation} \label{equation_solution_Zn} \mathbf{Z}_n^{k+1}= \operatorname{WSVT}\left(\mathbf{P}_n^{k}, \frac{\nabla \phi}{\hat{\rho}_{n}}\right) =\mathbf{U} \mathbf{S}_{\frac{\nabla \phi}{\hat{\rho}_{n}}}(\mathbf{\Sigma}) \mathbf{V}^{T}, \end{equation} where $\mathbf{P}_n^{k}=\mathbf{U} \mathbf{\Sigma} \mathbf{V}^{T}$ is the SVD of $\mathbf{P}_n^{k}$, $$\mathbf{S}_{\frac{\nabla \phi}{\hat{\rho}_{n}}}(\mathbf{\Sigma})=\operatorname{Diag}\left\{\max \left(\mathbf{\Sigma}_{ii}-\frac{\nabla \phi \left( \sigma_{i}^{k}\right)}{\hat{\rho}_{n}}, 0\right)\right\},$$ and $i=1,2,\cdots,t_n$.
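A minimal sketch of the WSVT step in Lemma 1 (an assumed NumPy transcription; the vector `weights` plays the role of $\nabla\phi(\sigma_{i}^{k})/\hat{\rho}_{n}$ and should be nondecreasing):

```python
import numpy as np

def wsvt(P, weights):
    """Generalized weighted singular value thresholding:
    take the SVD of P, shrink the i-th singular value by weights[i]
    (clipped at zero), and reconstruct."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

With all-zero weights the operator returns $\mathbf{P}$ unchanged, and with sufficiently large weights it returns the zero matrix, matching the shrinkage in Lemma 1.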
With other variables fixed, the minimization subproblem for $\mathbf{X}_n$, $n=1,2,\cdots, N$, can be deduced from (\ref{equation:X_alm}) as follows: \begin{equation} \label{equationforX} \begin{aligned} \mathbf{X}_n^{k+1}&= \arg\min_{\mathbf{X}_n}\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n}^{k} \mathbf{X}_{n}\right\|_{\text{F}}^{2}\\ &+\rho_{n}\left\|\mathbf{X}_{n}-\frac{\mathbf{Z}_{n}^{k+1}-\Gamma_{n}^{\mathbf{X}}/\rho_n+\mathbf{X}_n^{k}}{2}\right\|_{\text{F}}^{2}. \end{aligned} \end{equation} This subproblem is convex and has the following closed-form solution: \begin{equation} \label{equation:solution_Xn} \begin{aligned} \mathbf{X}_n^{k+1}=&\left(\alpha_{n}(\mathbf{A}_n^{k})^T\mathbf{A}_n^{k}+2\rho_n \mathbf{I}_n\right)^{-1}\left[\alpha_{n}(\mathbf{A}_n^{k})^T\mathbf{Y}_{(n)}\right.\\ &\left. +\rho_n \left(\mathbf{Z}_{n}^{k+1}-\Gamma_{n}^{\mathbf{X}}/\rho_n+\mathbf{X}_n^{k}\right)\right]. \end{aligned} \end{equation} The Lagrangian multipliers $\Gamma_{n}^{\mathbf{X}}$ can be updated by the following equation: \begin{equation} \label{equation:Lambda_2} \Gamma_{n}^{\mathbf{X}} = \Gamma_{n}^{\mathbf{X}} + \mathbf{X}_n-\mathbf{Z}_n. \end{equation} \subsubsection{Update $\mathbf{A}_n$} With other variables fixed, the optimization subproblem with respect to $\mathbf{A}_n$, $n=1,2,\cdots,N$, in (\ref{equation:XAY}) can be written as follows: \begin{equation} \label{equation_A_aux} \begin{aligned} \arg\min_{\{\mathbf{A}_n\}_{n=1}^{N}} \sum_{n=1}^{N} & \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\lambda_n \left\|\mathbf{A}_{n}\right\|_{\gamma}\right.\\ &\left.+\frac{\rho_n}{2}\left\|\mathbf{A}_n-\mathbf{A}_n^{k}\right\|_{\text{F}}^{2}\right). \end{aligned} \end{equation} To efficiently solve the above optimization problem, we first introduce the auxiliary variables $\mathbf{A}_{n}=\mathbf{J}_{n}, n=1,2,\cdots,N$.
Then, by the ALM method, problem (\ref{equation_A_aux}) can also be reformulated as \begin{equation} \label{equation:A_alm} \begin{aligned} & \arg\min_{\{\mathbf{A}_n,\mathbf{J}_n\}_{n=1}^{N}} \sum_{n=1}^{N} \left(\frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\lambda_n \left\|\mathbf{J}_{n}\right\|_{\gamma}\right.\\ & \left. +\frac{\rho_n}{2}\left\|\mathbf{A}_n-\mathbf{A}_n^{k}\right\|_{\text{F}}^{2} +\left\langle\Gamma_{n}^\mathbf{A},\mathbf{A}_n-\mathbf{J}_n\right\rangle +\frac{\rho_{n}}{2}\left\|\mathbf{A}_{n}-\mathbf{J}_{n}\right\|_{\text{F}}^{2}\right), \end{aligned} \end{equation} where $\Gamma_{n}^\mathbf{A}$ are the Lagrangian multipliers. With other variables fixed, the minimization subproblem for $\mathbf{J}_n$ can be deduced from (\ref{equation:A_alm}) as follows: \begin{equation} \displaystyle \mathbf{J}_n^{k+1}= \arg \min_{\mathbf{J}_n} \lambda_n \left\|\mathbf{J}_{n}\right\|_{\gamma}+\frac{\rho_{n}}{2}\left\|\mathbf{J}_{n}-\mathbf{Q}_n^k\right\|_{\text{F}}^{2}, \end{equation} where $\tilde{\rho}_{n}=\rho_{n}/\lambda_n$ and $\mathbf{Q}_n^k=\mathbf{A}_{n}^{k}+\Gamma_{n}^{\mathbf{A}}/\rho_n$. Its solution can also be obtained by \textbf{Lemma 1} as follows: \begin{equation} \label{equation_solution_Jn} \mathbf{J}_n^{k+1}=\operatorname{WSVT}\left(\mathbf{Q}_n^{k}, \frac{\nabla \phi}{\tilde{\rho}_{n}}\right). \end{equation} With other variables fixed, the minimization subproblem for $\mathbf{A}_n$, $n=1,2,\cdots, N$, can be deduced from (\ref{equation:A_alm}) as follows: \begin{equation} \label{equationforA} \begin{aligned} \mathbf{A}_n^{k+1}= &\arg\min_{\mathbf{A}_n} \frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}\\ &+\rho_{n}\left\|\mathbf{A}_{n}-\frac{\mathbf{J}_{n}^{k+1}-\Gamma_{n}^\mathbf{A}/\rho_n+\mathbf{A}_n^{k}}{2}\right\|_{\text{F}}^{2}.
\end{aligned} \end{equation} This subproblem is also convex and has the following closed-form solution: \begin{equation} \label{equation_solution_An} \begin{aligned} \mathbf{A}_{n}^{k+1}=&\left(\alpha_{n}\mathbf{Y}_{(n)}^{k}\left(\mathbf{X}_{n}^{k+1}\right)^{T}+\rho_n (\mathbf{J}_{n}^{k+1}-\Gamma_{n}^\mathbf{A}/\rho_n+\mathbf{A}_n^{k})\right)\\ &\left(\alpha_{n}\mathbf{X}_{n}^{k+1}\left(\mathbf{X}_{n}^{k+1}\right)^{T}+2\rho_{n} \mathbf{I}_{n}\right)^{\dagger}. \end{aligned} \end{equation} Finally, the Lagrangian multipliers $\Gamma_{n}^{\mathbf{A}}$ can be updated by the following equation: \begin{equation} \label{equation:Lambda_1} \Gamma_{n}^{\mathbf{A}} = \Gamma_{n}^{\mathbf{A}} + \mathbf{A}_n-\mathbf{J}_n. \end{equation} \subsubsection{Update $\mathcal{Y}$} With other variables fixed, the minimization subproblem with respect to $\mathcal{Y}$ in (\ref{equation:XAY}) can be written as \begin{equation} \begin{aligned} & \arg\min_{\{\mathbf{Y}_{(n)}\}_{n=1}^{N}} \sum_{n=1}^{N} \frac{\alpha_{n}}{2}\left\|\mathbf{Y}_{(n)}-\mathbf{A}_{n} \mathbf{X}_{n}\right\|_{\text{F}}^{2}+\frac{\rho}{2}\left\|\mathcal{Y}-\mathcal{Y}^{k}\right\|_{\text{F}}^{2}, \\ & s.t.~ \mathcal{P}_{\Omega}(\mathcal{Y})=\mathcal{F}. \end{aligned} \end{equation} Then, the update of $\mathcal{Y}^{k+1}$ can be written explicitly as \begin{equation} \label{equation_solution_Y} \begin{array}{l} \displaystyle \mathcal{Y}^{k+1}=\mathcal{P}_{{\Omega}^c}\left(\sum_{n=1}^{N} \alpha_{n} \text { fold }_{n}\left(\frac{\mathbf{A}_{n}^{k+1} \mathbf{X}_{n}^{k+1}+\rho_n \mathbf{Y}_{(n)}^{k}}{1+\rho_n}\right)\right)+\mathcal{F}, \end{array} \end{equation} where $\Omega^{c}$ is the complementary set of $\Omega$. \begin{algorithm}[!t] \caption{Algorithm for the LRATM model.} \label{algorithm:A1} \begin{algorithmic}[1] \Require The observed tensor $\mathcal{F}$; the index set of observed entries $\Omega$; the stopping criterion $\varepsilon$; the given $n$-rank $r=\left(r_{1}, r_{2}, r_{3}\right)$. \Ensure The completed tensor.
\State Initialize: $\mathbf{X}_n=\mathbf{Z}_n=0, \mathbf{A}_n=\mathbf{J}_n=0,\Gamma_{n}^\mathbf{X}=0, \Gamma_{n}^\mathbf{A}=0, n=1,2, \cdots, N$; $k=0$. \State Repeat until convergence: \State \quad 1st step: Update $\mathbf{Z}_n$ via (\ref{equation_solution_Zn}). \State \quad 2nd step: Update $\mathbf{X}_n$ via (\ref{equation:solution_Xn}). \State \quad 3rd step: Update $\mathbf{J}_n$ via (\ref{equation_solution_Jn}). \State \quad 4th step: Update $\mathbf{A}_n$ via (\ref{equation_solution_An}). \State \quad 5th step: Update $\mathcal{Y}$ via (\ref{equation_solution_Y}). \State \quad 6th step: Update the multipliers via (\ref{equation:Lambda_2}) and (\ref{equation:Lambda_1}). \State Check the convergence condition. \end{algorithmic} \end{algorithm} \subsection{Complexity and Convergence Analysis} The proposed algorithm for our LRATM model is summarized in Algorithm \ref{algorithm:A1}. In the following, we discuss the complexity and convergence of the proposed algorithm. \subsubsection{Complexity analysis} The cost of computing $\mathbf{X}_{n}$ is $O\left(I_{n} r_{n}^{2}+I_{n} r_{n} s_{n}+r_{n}^{2} s_{n}\right)$, calculating $\mathbf{Z}_n$ has a complexity of $O\left( \Pi_{j \neq n} I_{j} \times r_{n}^2 \right)$, the complexity of updating $\mathbf{J}_n$ is $O\left(I_{n} r_{n}^2\right)$, calculating $\mathbf{A}_{n}$ has a complexity of $O\left(I_{n} r_{n}^{2}+I_{n} r_{n} s_{n}+r_{n}^{2} s_{n}\right)$, and calculating $\mathcal{Y}$ has a complexity of $O\left(r_{1} I_{1} s_{1}+\cdots+r_{N} I_{N} s_{N}\right)$. Therefore, the total complexity of the proposed algorithm is obtained by summing the complexities of the above updates, i.e., \begin{equation} \label{equation:complexity_model1} O\left(\sum_{n}(3I_{n} r_{n}^2+\Pi_{j \neq n} I_{j} \times r_{n}^2+3 I_{n} r_{n} s_{n}+2 r_{n}^{2} s_{n})\right).
\end{equation} \subsubsection{Convergence analysis} In this section, we theoretically analyze the convergence of the proposed algorithm by using the BSUM method \cite{BSUM}. \noindent \textbf{Lemma 2} \cite{BSUM}: Assume that the feasible set $\mathcal{X}$ is the Cartesian product of $n$ closed convex sets: $\mathcal{X}=\mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_{n}$. Given the problem \begin{equation} \min f(x), \quad s.t. ~ x \in \mathcal{X}, \end{equation} assume that $h\left(x, x^{k-1}\right)$ is an approximation of $f(x)$ at the $(k-1)$-th iteration, which satisfies the following conditions: \begin{equation} \begin{array}{l} 1) \quad h_{i}\left(y_{i}, y\right)=f(y), \forall y \in \mathcal{X}, \forall i; \\ 2) \quad h_{i}\left(x_{i}, y\right) \geq f\left(y_{1}, \ldots, y_{i-1}, x_{i}, y_{i+1}, \ldots, y_{n}\right), \\ \quad \quad \forall x_{i} \in \mathcal{X}_{i}, \forall y \in \mathcal{X}, \forall i;\\ 3) \quad \left.h_{i}^{\prime}\left(x_{i}, y ; d_{i}\right)\right|_{x_i=y_i}=f^{\prime}(y ; d), \forall d=\left(0, \cdots, d_{i}, \cdots, 0\right) \\ \quad \quad \text { s.t. } y_{i}+d_{i} \in \mathcal{X}_{i}, \forall i;\\ 4) \quad h_{i}\left(x_{i}, y\right) \text{ is continuous in } \left(x_{i}, y\right), \forall i; \end{array} \end{equation} where $h_{i}\left(x_{i}, y\right)$ is the sub-problem with respect to the $i$-th block and $f^{\prime}(y ; d)$ is the directional derivative of $f$ at the point $y$ in the direction $d$. Suppose $h_{i}\left(x_{i}, y\right)$ is quasi-convex in $x_{i}$ for $i=1,2, \cdots, n$. Furthermore, assume that each sub-problem $\arg\min h_i\left(x_{i}, x^{k-1}\right), s.t. ~ x_i \in \mathcal{X}_{i},$ has a unique solution for any point $x^{k-1} \in \mathcal{X}$. Then, the iterates generated by the BSUM algorithm converge to the set of coordinatewise minimizers of $f$. \noindent \textbf{Theorem 1.} The iterates generated by (\ref{equation_original_PPA}) converge to the set of coordinatewise minimizers.
\textbf{Proof.} It is easy to verify that $g_{i}\left(\cdot, \mathcal{S}^{k}\right)$ is an approximation and a global upper bound of $f(\mathcal{S})$ at the $k$-th iteration, which satisfies the following conditions: \begin{equation} \begin{array}{l} 1) \quad g_{i}\left(\mathcal{S}_{i}, \mathcal{S}\right)=f(\mathcal{S}), \forall \mathcal{S}, i=1,2,3; \\ 2) \quad g_{i}\left(\bar{\mathcal{S}}_{i}, \mathcal{S}\right) \geq f\left(\mathcal{S}_{1}, \cdots, \bar{\mathcal{S}_{i}}, \cdots, \mathcal{S}_{3}\right), \\ \quad\quad \forall \bar{\mathcal{S}}_{i}, \forall \mathcal{S}, i=1,2,3; \\ 3) \quad \left.g_{i}^{\prime}\left(\bar{\mathcal{S}}_{i}, \mathcal{S} ; \mathbf{M}_{i}\right)\right|_{\bar{\mathcal{S}}_{i}=\mathcal{S}_{i}}=f^{\prime}\left(\mathcal{S} ; \mathbf{M}^i\right), \\ \quad \quad \forall \mathbf{M}^{i}=\left(0, \ldots, \mathbf{M}_{i},\ldots, 0\right); \\ 4) \quad g_{i}\left(\bar{\mathcal{S}}_{i}, \mathcal{S}\right) \text{ is continuous in } \left(\bar{\mathcal{S}}_{i}, \mathcal{S} \right), i=1,2,3; \end{array} \end{equation} where $\mathcal{S}=(\mathbf{X}, \mathbf{A}, \mathcal{Y}),$ and $\mathcal{S}_{i}$ equals $\mathbf{X}, \mathbf{A}, \mathcal{Y}$ for $i=1,2,3,$ respectively. In addition, each subproblem $g_{i}$ $(i=1,2,3)$ is quasi-convex with respect to $\mathbf{X}, \mathbf{A}$ and $\mathcal{Y}$, respectively, and each sub-problem of $g_{i}$ has a unique solution. Therefore, all assumptions in \textbf{Lemma 2} are satisfied. According to the conclusion of \textbf{Lemma 2}, \textbf{Theorem 1} holds, and the proposed algorithm is theoretically convergent. \section{Numerical experiments} \label{Numerical experiments} In this section, the proposed model is evaluated on three types of public tensor datasets, i.e., video datasets, an MRI dataset and an HSI dataset, which have been frequently used to evaluate the tensor completion performance of different models.
To demonstrate its effectiveness, we compare the proposed model with the TMac method \cite{Tmac}, the MF-TV method \cite{MFTV}, the TNN based method \cite{lu2019TRPCA} and the PSTNN based method \cite{PSTNN}. \begin{table}[t] \caption{The averaged PSNR, SSIM, FSIM, ERGAS and MSAM of the completed results on the video ``suzie'' by TMac, MF-TV, TNN, PSTNN and our model with different sampling rates. The best values are highlighted in bold font.} \centering \label{table_video_suzie} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{cccccccc} \hline \hline && &SR = 0.05 &&&& \\ PQI & noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR & 7.259 & \textbf{29.464} & 13.801 & 23.385 & 17.447 & 22.005 \\ SSIM & 0.009 & \textbf{0.807} & 0.094 & 0.622 & 0.192 & 0.563 \\ FSIM & 0.454 & \textbf{0.885} & 0.42 & 0.792 & 0.59 & 0.776 \\ ERGAS & 1057.282 & \textbf{83.571} & 501.117 & 167.927 & 327.678 & 194.844 \\ MSAM & 77.324 & \textbf{3.622} & 24.095 & 6.927 & 13.775 & 7.797 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR& 7.493& \textbf{32.056}& 22.356& 26.189& 26.647& 26.032\\ SSIM& 0.014& \textbf{0.878}& 0.605& 0.74& 0.68& 0.692\\ FSIM& 0.426& \textbf{0.924}& 0.758& 0.838& 0.843& 0.846\\ ERGAS& 1029.096& \textbf{62.314}& 196.059& 124.369 &117.104& 124.923\\ MSAM& 71.725& \textbf{2.764}& 6.99& 5.423& 5.171& 5.405 \\ \hline &&&SR = 0.2&&&&\\ PQI & noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR& 8.005& \textbf{34.378}& 32.064& 27.274& 30.566& 30.561\\ SSIM& 0.02& \textbf{0.918}& 0.872& 0.782& 0.829& 0.831\\ FSIM& 0.391& \textbf{0.948}& 0.916& 0.853& 0.91& 0.911\\ ERGAS& 970.285& \textbf{47.877}& 66.692& 109.627& 75.472& 75.598\\ MSAM& 63.522& \textbf{2.183}& 2.81& 4.812& 3.399& 3.395\\ \hline \hline \end{tabular}} \end{table} To accurately evaluate the performance of the tested models, two types of standards are employed to quantitatively evaluate the quality of the completed tensors.
The first one is the visual evaluation of the completed data, which is a qualitative standard. The second one is a set of five quantitative picture quality indices (PQIs): the peak signal-to-noise ratio (PSNR) \cite{PSNR}, the structural similarity index (SSIM) \cite{SSIM}, the feature similarity (FSIM) \cite{FSIM}, the erreur relative globale adimensionnelle de synth\`ese (ERGAS) \cite{EGRAS}, and the mean spectral angle mapper (SAM) \cite{SAM}. The larger the PSNR, SSIM and FSIM, and the smaller the ERGAS and SAM, the better the completion performance of the corresponding model. Since the experimental datasets are all third-order tensors, the PQIs for each frontal slice of the completed tensor are first calculated, and their mean is then used to evaluate the performance of the models. All experiments are performed in MATLAB 2019b on a computer with an Intel Core i7 @ 2.2 GHz CPU and 64 GB of memory. The code will be posted at the following URL: https://github.com/NavyZeng/LRATM. \subsection{Video Data} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_video_b94}}% \hfil \subfloat[95\% Masked]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_video_b94_sr005TNN}} \caption{One slice of the recovered video for “suzie” by our model, MF-TV, Tmac, PSTNN and TNN.
The sampling rate is 5\%.} \label{figure_video_sr0.05} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01suzie_b10}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr01TNN} \caption{One slice of the recovered video for “suzie” by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 10\%.} \label{figure_video_sr0.1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02suzie_b10}}% \hfil \subfloat[80\% Masked]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_video_b10_sr02TNN} \caption{One slice of the recovered video for “suzie” by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 20\%.} \label{figure_video_sr0.2} \end{figure*} In this subsection, we compare our model with MF-TV, Tmac, TNN and PSTNN on two video datasets: "suzie" and “hall”\footnote{http://trace.eas.asu.edu/yuv/}, both of which are colored using YUV format, and two slices of them are shown in Fig. 
\ref{figure_video_sr0.05} and Fig. \ref{figure_hall_sr0.05}, respectively. Both are of size 144 $\times$ 176 $\times$ 150. We test all five methods on a series of sampling rates (SR): 5\%, 10\% and 20\%, and all the tested models are evaluated in terms of quantitative comparison and visual evaluation. In addition, the $n$-rank is approximated by the number of the largest 0.5\% singular values. For quantitative comparison, Table \ref{table_video_suzie} and Table \ref{table_video_hall} report the PQIs of the results completed by the different methods. The best result for each PQI is marked in bold. From Table \ref{table_video_suzie} and Table \ref{table_video_hall}, it can be found that our model obtains the highest indices among the five tested models in all SR cases; Tmac obtains the second best PQIs when the SR is 5\% or 10\%, while MF-TV obtains the second best PQIs when the SR is 20\%. The margins between the results of our model and the second best results exceed 5 dB in terms of PSNR. For visual evaluation, we illustrate one frontal slice of the completed results with different sampling rates in Fig. \ref{figure_video_sr0.05}, Fig. \ref{figure_video_sr0.1}, Fig. \ref{figure_video_sr0.2}, Fig. \ref{figure_hall_sr0.05} and Fig. \ref{figure_hall_sr0.1}. It is clear from the figures that the results of our model are closer to the ground truth than those of the other tested models, especially at low sampling rates. Specifically, as shown in Fig. \ref{figure_hall_sr0.1}, Fig. \ref{figure_video_sr0.05} and Fig. \ref{figure_video_sr0.1}, when the sampling rates are 0.05 and 0.1, the advantages of our model are most obvious. Our model restores most of the structural information of the videos, while the videos restored by the competing models contain only the outlines of the images. At a higher sampling rate, as shown in Fig. \ref{figure_video_sr0.2} and Fig.
\ref{figure_hall_sr0.1}, both our model and the competing models recover the main structural information of the images, but our model recovers more texture and detail information. \begin{table} [t] \centering \caption{The averaged PSNR, SSIM, FSIM, ERGAS and SAM of the recovered results on video "hall" by Tmac, MF-TV, TNN, PSTNN and our model with different sampling rates. The best values are highlighted in bold font.} \label{table_video_hall} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{cccccccc} \hline \hline &&&SR =0.05 &&&& \\ PQI& noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR & 4.82 & \textbf{28.347} & 13.539 & 22.101 & 16.075 & 20.78 \\ SSIM & 0.007 & \textbf{0.894} & 0.412 & 0.675 & 0.36 & 0.636 \\ FSIM & 0.387 & \textbf{0.920} & 0.612 & 0.789 & 0.672 & 0.792 \\ ERGA & 1225.779 & \textbf{83.146} & 452.351 & 168.866 & 335.52 & 195.315 \\ MSAM & 77.299 & \textbf{2.360} & 12.865 & 3.818 & 8.64 & 4.299 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 5.055 & \textbf{31.804} & 24.855 & 26.936 & 29.014 & 28.433 \\ SSIM & 0.013 & \textbf{0.935} & 0.829 & 0.854 & 0.892 & 0.905 \\ FSIM & 0.393 & \textbf{0.950} & 0.873 & 0.888 & 0.934 & 0.936 \\ ERGA & 1193.075 & \textbf{56.998} & 131.422 & 97.185 & 77.395 & 82.259 \\ MSAM & 71.7 & \textbf{1.904} & 3.669 & 2.404 & 2.417 & 2.46 \\ \hline &&&SR = 0.2&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 5.567 & \textbf{33.941} & 33.006 & 27.648 & 33.629 & 33.691 \\ SSIM & 0.025 & \textbf{0.952} & 0.94 & 0.869 & 0.961 & 0.962 \\ FSIM & 0.403 & \textbf{0.964} & 0.954 & 0.897 & 0.973 & 0.974 \\ ERGA & 1124.737 & \textbf{44.581} & 50.971 & 89.271 & 46.123 & 45.851 \\ MSAM & 63.507 & \textbf{1.574} & 1.779 & 2.226 & 1.584 & 1.565 \\ \hline \hline \end{tabular}} \end{table} \subsection{Magnetic Resonance Imaging Data} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005Original}}% \hfil \subfloat[95\%
Masked]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b21_sr005TNN} \caption{One slice of the recovered video “hall” by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 5\%.} \label{figure_hall_sr0.05} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01Original}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_hall_b104_sr01TNN} \caption{One slice of the recovered video “hall” by our model, MF-TV, Tmac, PSTNN and TNN. 
The sampling rate is 10\%.} \label{figure_hall_sr0.1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_suzie_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_suzie_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_suzie_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_suzie_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_suzie_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_suzie_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_suzie_sr005}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_suzie_sr01}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_suzie_sr02}}% \caption{The PSNR, SSIM and FSIM of the recovered video "suzie" by MF-TV, Tmac, TNN, PSTNN and our model for all slices, respectively.} \label{PSNR and SSIM of video} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01MRI_b7}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b7_sr01TNN} \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN. 
The sampling rate is 10\%.} \label{figure_MR_sr0.1_1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01MRI_b83}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b83_sr01TNN} \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 10\%.} \label{figure_MR_sr0.1_2} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01MRI_b95}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b95_sr01TNN} \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN. 
The sampling rate is 10\%.} \label{figure_MR_sr0.1_3} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01MRI_b118}}% \hfil \subfloat[90\% Masked]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_MRI_b118_sr01TNN} \caption{One slice of the recovered MRI by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 10\%.} \label{figure_MR_sr0.1_4} \end{figure*} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_mri_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_mri_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{psnr_mri_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_mri_sr005}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_mri_sr01}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{ssim_mri_sr02}} \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_mri_sr005}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_mri_sr01}}% \hfil \subfloat[]{\includegraphics[width=0.2\linewidth]{fsim_mri_sr02}}% \caption{The PSNR, SSIM and FSIM of the recovered MRI by MF-TV, Tmac, TNN, PSTNN and our model for all slices, respectively.} \label{PSNR and SSIM of MRI} \end{figure*} To further verify the versatility of our model for different datasets, in this subsection we compare our model with MF-TV, Tmac, TNN and PSTNN on MRI dataset, i.e., the cubical MRI data\footnote{http://brainweb.bic.mni.mcgill.ca/brainweb/selection$\_$normal.html}. 
One slice of it is shown in Fig. \ref{figure_MR_sr0.1_1}. The size of the dataset is 181 $\times$ 217 $\times$ 150. We test all five methods on a series of sampling rates: 5\%, 10\%, 20\% and 30\%. In addition, the $n$-rank is approximated by the number of the largest 0.5\% singular values. For quantitative comparison, Table \ref{table_MRI} reports the PQIs of the results completed by the different methods. The best result for each PQI is marked in bold. From Table \ref{table_MRI}, it can be found that our method obtains the highest indices among the five tested methods in all SR cases. Further, the same advantage of our model can also be seen in Fig. \ref{PSNR and SSIM of MRI}, which reports the PSNR, SSIM and FSIM of each slice. For visual comparison, Fig. \ref{figure_MR_sr0.1_1}, Fig. \ref{figure_MR_sr0.1_2}, Fig. \ref{figure_MR_sr0.1_3} and Fig. \ref{figure_MR_sr0.1_4} show slices with 90\% missing values and the corresponding slices completed by all the tested methods. From the results, we see again that our model retains the local details and texture information of the images better, and restores the main structure of the images more effectively, than the other compared methods. Therefore, the data recovered by our model has the best visual quality. \begin{table}[t] \centering \caption{The averaged PSNR, SSIM, FSIM, ERGAS and SAM of the recovered results on MRI by Tmac, MF-TV, TNN, PSTNN and our model with different sampling rates.
The best values are highlighted in bold font.} \label{table_MRI} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccccccccc} \hline \hline &&&SR =0.05 &&&& \\ PQI& noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR & 10.258 & \textbf{26.414} & 12.332 & 20.51 & 15.859 & 18.218 \\ SSIM & 0.228 & \textbf{0.722} & 0.099 & 0.45 & 0.224 & 0.27 \\ FSIM & 0.473 & \textbf{0.834} & 0.52 & 0.711 & 0.642 & 0.646 \\ ERGA & 1030.203 & \textbf{184.279} & 814.747 & 339.385 & 545.77 & 434.774 \\ MSAM & 76.54 & \textbf{20.411} & 55.603 & 31.367 & 36.355 & 31.11 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 10.492 & \textbf{32.652} & 15.406 & 21.411 & 22.061 & 22.535 \\ SSIM & 0.241 & \textbf{0.912} & 0.25 & 0.531 & 0.482 & 0.536 \\ FSIM & 0.511 & \textbf{0.926} & 0.587 & 0.732 & 0.764 & 0.78 \\ ERGA & 1002.8 & \textbf{89.116} & 584.827 & 308.655 & 275.473 & 266.753 \\ MSAM & 70.986 & \textbf{14.637} & 41.826 & 29.345 & 24.585 & 24.6 \\ \hline &&&SR = 0.2&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 11.003 & \textbf{36.529} & 27.062 & 22.33 & 29.152 & 28.571 \\ SSIM & 0.271 & \textbf{0.962} & 0.737 & 0.586 & 0.804 & 0.802 \\ FSIM & 0.564 & \textbf{0.963} & 0.84 & 0.754 & 0.895 & 0.891 \\ ERGA & 945.583 & \textbf{57.037} & 173.636 & 276.269 & 127.133 & 136.182 \\ MSAM & 62.887 & \textbf{11.559} & 21.792 & 27.267 & 17.513 & 17.855 \\ \hline \hline \end{tabular}} \end{table} \subsection{Hyperspectral Image Data} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72.png}}% \hfil \subfloat[97.5\% Masked]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025Tmac}}% \hfil
\subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr0025TNN} \caption{One slice of the recovered HSI "Cuprite" by our model, MF-TV, Tmac, PSTNN and TNN. The sampling rate is 2.5\%.} \label{figure_HSI_sr0.025} \end{figure*} \begin{figure*}[!t] \centering \subfloat[Original]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72.png}}% \hfil \subfloat[95\% Masked]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005Nosiy}}% \hfil \subfloat[our model]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005LxLxNN}}% \hfil \subfloat[MF-TV]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005MF_TV}}% \hfil \subfloat[Tmac]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005Tmac}}% \hfil \subfloat[PSTNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005PSTNN}}% \hfil \subfloat[TNN]{\includegraphics[width=0.13\linewidth]{Image_HSI_b72_sr005TNN} \caption{One slice of the recovered HSI "Cuprite" by our model, MF-TV, Tmac, PSTNN and TNN. 
The sampling rate is 5\%.} \label{figure_HSI_sr0.05} \end{figure*} \begin{figure*}[!t] \centering \subfloat[PSNR]{\includegraphics[width=0.15\linewidth]{psnr_hsi_sr0025}}% \hfil \subfloat[SSIM]{\includegraphics[width=0.15\linewidth]{ssim_hsi_sr0025}}% \hfil \subfloat[FSIM]{\includegraphics[width=0.15\linewidth]{fsim_hsi_sr0025}}% \hfil \subfloat[PSNR]{\includegraphics[width=0.15\linewidth]{psnr_hsi_sr005}}% \hfil \subfloat[SSIM]{\includegraphics[width=0.15\linewidth]{ssim_hsi_sr005}}% \hfil \subfloat[FSIM]{\includegraphics[width=0.15\linewidth]{fsim_hsi_sr005}}% \caption{The PSNR, SSIM and FSIM of the recovered HSI "Cuprite" by MF-TV, Tmac, TNN, PSTNN and our model for all slices, respectively.(a)-(c): 97.5\% entries missing, (d)-(f): 95\% entries missing.} \label{PSNR and SSIM of HSI} \end{figure*} In this section, we compare our model with MF-TV, Tmac, TNN and PSTNN on one HSI dataset: Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite data\footnote{http://aviris.jpl.nasa.gov/html/aviris.freedata.html}. The size of AVIRIS Cuprite data is 150 $\times$ 150 $\times$ 188. Its one slice is shown in Fig. \ref{figure_HSI_sr0.05}. We test all five methods on a series of sampling rates: 2.5\%, 5\% and 10\%. In addition, the $n$-rank is approximated by using the number of the largest 0.3\% singular values. For quantitative comparison, Table \ref{table_HSI} reports the average PQIs of each tested method with three different sampling rates. At sampling rates 0.025 and 0.05, Fig. \ref{PSNR and SSIM of HSI} reports the PSNR, SSIM and FSIM of each frontal slice in the completed results. For visual comparison, Fig. \ref{figure_HSI_sr0.05} and Fig. \ref{figure_HSI_sr0.025} show slices of the sampled data with 97.5\% and 95\% missing values and the corresponding recovered slices by the tested methods. 
From the results, we see again that our model not only obtains the highest PQIs, but also recovers more structural information and spatial details of the images than the compared methods, especially at low sampling rates. \begin{table} \caption{The averaged PSNR, SSIM, FSIM, ERGAS and SAM of the recovered results on hyperspectral image "Cuprite" by Tmac, MF-TV, TNN, PSTNN and our model with different sampling rates. The best values are highlighted in bold font.} \centering \label{table_HSI} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccccccccc} \hline \hline &&&SR =0.025&&&& \\ PQI& noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR & 7.666 & \textbf{34.595} & 26.115 & 21.25 & 13.387 & 22.783 \\ SSIM & 0.007 & \textbf{0.861} & 0.539 & 0.412 & 0.124 & 0.554 \\ FSIM & 0.48 & \textbf{0.916} & 0.765 & 0.755 & 0.613 & 0.775 \\ ERGA & 1043.633 & \textbf{50.383} & 237.074 & 235.594 & 539.574 & 245.333 \\ MSAM & 81.221 & \textbf{1.662} & 12.913 & 7.842 & 17.98 & 9.156 \\ \hline &&&SR = 0.05&&&&\\ PQI& noisy& our model& MF-TV& TMac& PSTNN& TNN\\ PSNR & 7.779 & \textbf{38.202} & 34.684 & 28.945 & 20.621 & 26.579 \\ SSIM & 0.01 & \textbf{0.928} & 0.845 & 0.712 & 0.31 & 0.663 \\ FSIM & 0.471 & \textbf{0.960} & 0.915 & 0.846 & 0.735 & 0.836 \\ ERGA & 1030.139 & \textbf{41.898} & 89.372 & 93.352 & 234.445 & 154.292 \\ MSAM & 77.268 & \textbf{1.559} & 4.386 & 3.278 & 7.886 & 5.413 \\ \hline &&&SR = 0.1&&&&\\ PQI & noisy & our model & MF-TV & TMac & PSTNN & TNN \\ PSNR & 8.013 & 39.056 & \textbf{40.888} & 35.627 & 35.51 & 35.015 \\ SSIM & 0.014 & 0.939 & \textbf{0.957} & 0.885 & 0.907 & 0.897 \\ FSIM & 0.451 & 0.966 & \textbf{0.978} & 0.931 & 0.951 & 0.943 \\ ERGA & 1002.75 & 34.544 & \textbf{34.263} & 44.518 & 54.421 & 57.537 \\ MSAM & 71.695 & \textbf{1.299} & 1.46 & 1.445 & 2.072 & 2.192 \\ \hline \hline \end{tabular}} \end{table} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=005_PSNR}}% \hfil
\subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=01_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=02_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=005_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=01_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{video-Hall_SR=02_SSIM}}% \caption{Sensitivity analysis of parameter $\gamma$ in video "hall" dataset. (a-c) Change in the MPSNR value: SR= 0.05, 0.1 and 0.2. (e-f) Change in the MSSIM value: 0.05, 0.1 and 0.2, respectively.} \label{parameter_analysis_hall} \end{figure*} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=005_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=01_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=02_PSNR}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=005_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=01_SSIM}}% \hfil \subfloat[]{\includegraphics[width=0.15\linewidth]{MRI_SR=02_SSIM}}% \caption{Sensitivity analysis of parameter $\gamma$ in MRI dataset. (a-c) Change in the MPSNR value: SR= 0.05, 0.1 and 0.2. (e-f) Change in the MSSIM value: 0.05, 0.1 and 0.2, respectively.} \label{parameter_analysis_MRI} \end{figure*} \begin{figure*}[!t] \centering \subfloat[SR=0.025]{\includegraphics[width=0.15\linewidth]{HSI_SR=0025_PSNR}}% \hfil \subfloat[SR=0.05]{\includegraphics[width=0.15\linewidth]{HSI_SR=005_PSNR}}% \hfil \subfloat[SR=0.1]{\includegraphics[width=0.15\linewidth]{HSI_SR=01_PSNR}}% \hfil \subfloat[SR=0.025]{\includegraphics[width=0.15\linewidth]{HSI_SR=0025_SSIM}}% \hfil \subfloat[SR=0.05]{\includegraphics[width=0.15\linewidth]{HSI_SR=005_SSIM}}% \hfil \subfloat[SR=0.1]{\includegraphics[width=0.15\linewidth]{HSI_SR=01_SSIM}}% \caption{Sensitivity analysis of parameter $\gamma$ in HSI dataset. 
(a-c) Change in the MPSNR value: SR = 0.025, 0.05 and 0.1. (d-f) Change in the MSSIM value: 0.025, 0.05 and 0.1, respectively.} \label{parameter_analysis_HSI} \end{figure*} \begin{table*} \centering \caption{Parameter settings in the proposed algorithm} \label{table:parameter_setting} \begin{tabular}{ccccc ccccc ccc} \hline & &Video-suzie& & &Video-hall& & &MRI dataset& & &HSI dataset& \\ \hline SR& 0.05& 0.1&0.2 &0.05& 0.1&0.2 &0.05& 0.1&0.2 &0.025& 0.05&0.1\\ $\gamma$& 2.3&2.5&2.7&1.7&2.6&2.6&1.3&2.5&3.0&3.5&5.5&5.7\\ \hline \end{tabular} \end{table*} \subsection{Discussion} Since the $\gamma$ in $\mathbf{X}_n$ and the $\gamma$ in $\mathbf{A}_n$ are roughly proportional, we simply fix the $\gamma$ in $\mathbf{X}_n$ at 0.1 and tune the $\gamma$ in $\mathbf{A}_n$ carefully. For the experiments on the video, MRI and HSI datasets, Fig. \ref{parameter_analysis_hall}, Fig. \ref{parameter_analysis_MRI} and Fig. \ref{parameter_analysis_HSI} show the effect of the parameter $\gamma$ on PSNR and SSIM at different sampling rates, respectively. To enhance the reproducibility of our model, we list the optimal $\gamma$ for the various datasets at different sampling rates in Table \ref{table:parameter_setting}. In addition, we manually set both $\tau_n$ and $\lambda_n$ to 0.01. \section{Conclusions} \label{conclusion} In this paper, we propose a novel double low-rank tensor model based on multi-mode matrix decomposition for tensor completion. Instead of using the traditional single nuclear norm or its relaxation to represent the low-rank prior of the underlying tensor directly, we first apply parallel matrix factorization to all modes of the underlying tensor; then, a novel double nonconvex $L_{\gamma}$ norm is designed to represent the underlying joint manifold drawn from the mode factorization factors. A BSUM-based algorithm is designed to efficiently solve the proposed model, and it can be demonstrated that our numerical scheme converges to the coordinatewise minimizers.
The proposed model has been evaluated on three types of public tensor datasets, and the results show that our algorithm can complete a variety of low-rank tensors with significantly fewer samples than the compared methods. \section*{Acknowledgment} The authors would like to express their thanks to Dr. C. Lu, Dr. Y. Xu, Dr. T. Ji and Dr. T. Jiang for sharing their codes for the tested methods. In addition, this research is supported by the Fundamental Research Funds for the Central Universities under Grant No. 2452019073 and the National Natural Science Foundation of China under Grant No. 61876153. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} A Bayesian network (BN) is a probabilistic graphical model consisting of a labeled directed acyclic graph (DAG) $\mathcal{G} = (V, E)$, in which the vertex set $V = \{V_1, \dots, V_m\}$ corresponds to $m$ random variables, and the edge set $E$ prescribes a decomposition of the joint probability distribution of the random variables based on their parents in $\mathcal{G}$. The edge set $E$ encodes Markov relations on the nodes in the sense that each node is conditionally independent of its non-descendants given its parents. BNs have been used in knowledge discovery \citep{spirtes2000causation, chen2018causal}, classification \citep{aliferis2010local}, feature selection \citep{gao2015structured}, latent variable discovery \citep{lazic2013structural} and genetics \citep{ott2003finding}. They also play a vital part in causal inference \citep{pearl2009causal}. In this paper, we propose mixed-integer quadratic programming (MIQP) formulations for learning the optimal DAG structure of BNs given $n$ continuous observations from a system of linear structural equation models (SEMs). While there exist exact integer-programming (IP) formulations for learning DAG structure with \emph{discrete} data \citep{cussens2010maximum, cussens2012bayesian, hemmecke2012characteristic, studeny2013polyhedral, bartlett2013advances, JMLR:v17:14-479, oates2016exact, bartlett2017integer, cussens2017polyhedral, cussens2017bayesian}, the development of tailored computational tools for learning the optimal DAG structure from \emph{continuous} data has received less attention. In principle, exact methods developed for discrete data can be applied to continuous data. However, such methods result in exponentially sized formulations in terms of the number of binary variables. A common practice to circumvent the exponential number of binary variables is to limit the in-degree of each node \citep{cussens2012bayesian, cussens2017bayesian, bartlett2017integer}, but this may result in sub-optimal solutions. By contrast, MIQP formulations for learning DAGs corresponding to linear SEMs require a \textit{polynomial} number of binary variables.
This is because for BNs with linear SEMs, the score function --- i.e., the penalized negative log-likelihood (PNL) --- can be explicitly written as a function of the coefficients of linear SEMs \citep{shojaie2010penalized, van2013ell, park2017bayesian, manzour2019integer}. Continuous BNs with linear SEMs have witnessed a growing interest in the statistics and computer science communities \citep{van2013ell, raskutti2013learning, loh2014high, ghoshal2016information, solus2017consistency}. In particular, it has been shown that the solution obtained from solving the PNL augmented by $\ell_0$ regularization achieves desirable statistical properties \citep{peters2013identifiability, van2013ell, loh2014high}. Moreover, if the model is \emph{identifiable} \citep{peters2013identifiability, loh2014high}, such a solution is guaranteed to uncover the true causal DAG when the sample size $n$ is large enough. However, given the difficulty of obtaining exact solutions, existing approaches for learning DAGs from linear SEMs have primarily relied on \emph{heuristics}, using techniques such as coordinate descent \citep{fu2013learning, aragam2015concave, han2016estimation} and non-convex continuous optimization \citep{zheng2018dags}. Unfortunately, these heuristics are not guaranteed to achieve the desirable properties of the global optimal solution. Moreover, it is difficult to evaluate the statistical properties of a sub-optimal solution with no optimality guarantees \citep{koivisto2012advances}. To bridge this gap, in this paper we develop mathematical formulations for learning optimal BNs from linear SEMs using a PNL objective with $\ell_0$ regularization. By connecting the optimality gap of the mixed-integer program to the statistical properties of the solution, we also establish an \emph{early stopping criterion} under which we can terminate the branch-and-bound procedure and attain a solution which asymptotically recovers the true parameters with high probability. 
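To make the score concrete, the following is a minimal sketch of the PNL objective with $\ell_0$ regularization for a candidate coefficient matrix. It assumes equal, known error variances, under which the negative log-likelihood reduces (up to additive constants) to a least-squares loss; the function name and this uniform-variance simplification are illustrative, not the paper's exact formulation.

```python
import numpy as np

def pnl_l0_score(X, B, lam):
    """Penalized negative log-likelihood for the linear SEM X = X B + eps.

    Sketch only: with equal, known error variances the NLL is proportional
    to the least-squares loss up to additive constants. X is the n x m data
    matrix, B[j, k] = beta_jk, and lam weights the l0 penalty ||B||_0.
    """
    n = X.shape[0]
    loss = np.linalg.norm(X - X @ B, "fro") ** 2 / (2 * n)
    return loss + lam * np.count_nonzero(B)
```

A branch-and-bound solver would minimize this score over matrices $B$ whose support forms a DAG; the $\ell_0$ term charges a fixed cost $\lambda$ per arc.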
Our work is related to recent efforts to develop exact tailored methods for DAG learning from continuous data.\ \cite{xiang2013lasso} show that $A^{\ast}$-lasso algorithm tailored for DAG structure learning from continuous data with $\ell_1$-regularization is more effective than the previous approaches based on dynamic programming \citep[e.g.,][]{silander2006simple} that are suitable for both discrete and continuous data.\ \cite{park2017bayesian} develop a mathematical program for DAG structure learning with $\ell_1$ regularization. \cite{manzour2019integer} improve and extend the formulation by \cite{park2017bayesian} for DAG learning from continuous data with both $\ell_0$ and $\ell_1$ regularizations. The numerical experiments by \cite{manzour2019integer} demonstrate that as the number of nodes grows, their MIQP formulation outperforms $A^{\ast}$-lasso and the existing IP methods; this improvement is both in terms of reducing the IP optimality gap, when the algorithm is stopped due to a time limit, and in terms of computational time, when the instances can be solved to optimality. In light of these recent efforts, the current paper makes important contributions to this problem at the intersection of statistics and optimization. \begin{itemize} \item The statistical properties of \textit{optimal} PNL with $\ell_0$ regularization have been studied extensively \citep{loh2014high,van2013ell}. However, it is often difficult to obtain an optimal solution and no results have been established on the statistical properties of approximate solutions. In this paper, we give an early stopping criterion for the branch-and-bound process under which the approximate solution gives consistent estimates of the true coefficients of the linear SEM. 
Our result leverages the statistical consistency of the PNL estimate with $\ell_0$ regularization \citep{van2013ell, peters2013identifiability} along with the properties of the branch-and-bound method wherein both lower and upper bound values on the objective function are available at each iteration. By connecting these two properties, we obtain a concrete early stopping criterion, as well as a simple proof of consistency of the approximate solution. To the best of our knowledge, this result is the first of its kind for DAG learning. \item In spite of recent progress, a key challenge in learning DAGs from linear SEMs is enforcing bounds on arc weights. This is commonly modeled using the standard ``big-$M$ constraint" approach \citep{park2017bayesian, manzour2019integer}. As shown by \cite{manzour2019integer}, this strategy leads to poor continuous relaxations for the problem, which in turn results in slow lower bound improvement in the branch-and-bound tree. In particular, \cite{manzour2019integer} establish that all existing big-$M$ formulations achieve the same continuous relaxation objective function under a mild condition (see Proposition~\ref{Prop2}). To circumvent this issue, we present a mixed-integer second-order cone program (MISOCP), which gives a tighter continuous relaxation than existing big-$M$ formulations. This formulation can be solved by powerful state-of-the-art optimization packages. Our numerical results show the superior performance of MISOCP compared to the existing big-$M$ formulations in terms of improving the lower bound and reducing the optimality gap. \end{itemize} The rest of the paper is organized as follows. In Section~\ref{Sec: SEMs}, we define the DAG structure learning problem corresponding to linear SEMs, and give a general framework for the problem. In Section~\ref{Cons}, we present our early stopping criterion and establish the asymptotic properties of the solution obtained under this stopping rule. 
We review existing mathematical formulations in Section~\ref{Sec: Previous work}, and present our proposed mathematical formulations in Section~\ref{Sec: Math models}. Results of comprehensive numerical studies are presented in Section~\ref{Sec: Computational}. We end the paper with a summary in Section~\ref{Sec: Conclusion}. \raggedbottom \section{Problem setup: Penalized DAG estimation with linear SEMs} \label{Sec: SEMs} Let $\mathcal{M} = (V, E)$ be an undirected and possibly cyclic super-structure graph with node set $V=\{1,2,\dots,m\}$ and edge set $E \subseteq V \times V$; let $\overrightarrow{\mathcal{M}} = (V, \overrightarrow{E})$ be the corresponding bi-directional graph with $\overrightarrow{E} =\{(j,k), (k,j) | (j,k) \in E\}$. We refer to undirected edges as \emph{edges} and directed edges as \emph{arcs}. We assume that causal effects of continuous random variables in a DAG $\mathcal{G}_0$ are represented by $m$ linear regressions of the form \begin{equation} \label{LSLM} X_k = \sum_{j \in pa^{\mathcal{G}_0}_k} \beta_{jk} X_j + \epsilon_k, \quad k=1,\dots, m, \end{equation} \noindent where $X_k$ is the random variable associated with node $k$, $pa^{\mathcal{G}_0}_k$ represents the parents of node $k$ in $\mathcal{G}_0$, i.e., the set of nodes with arcs pointing to $k$; the latent random variable $\epsilon_k$ denotes the unexplained variation in node $k$; and BN parameter $\beta_{jk}$ specifies the effect of node $j$ on $k$ for $j \in pa^{\mathcal{G}_0}_k$. The above model is known as a linear SEM \citep{pearl2009causal}. Let $\mathcal{X}=(\mathcal{X}_1, \dots , \mathcal{X}_m)$ be the $n \times m$ data matrix with $n$ rows representing i.i.d.\ samples from each random variable, and $m$ columns representing random variables $X_1, \ldots, X_m$. 
The linear SEM \eqref{LSLM} can be compactly written in matrix form as $\mathcal{X} = \mathcal{X}{B} + \mathcal{E}$, where ${B} = [\beta] \in \mathbb{R}^{m \times m}$ is a matrix with $\beta_{kk}=0$ for $k=1,\dots,m$, $\beta_{jk}=0$ for all $(j,k) \notin E$, and $\mathcal{E}$ is the $n\times m$ `noise' matrix. Then, $\mathcal{G}(B)$ denotes the directed graph on $m$ nodes such that arc $(j,k)$ appears in $\mathcal{G}(B)$ if and only if $\beta_{jk} \neq 0$. Throughout the paper, we will use $B$ and $\beta$ to denote the matrix of coefficients and its vectorized version. A key challenge when estimating DAGs by minimizing the loss function \eqref{eqn:lklhd} is that the true DAG is generally not identifiable from observational data. However, for certain SEM distributions, the true DAG is in fact identifiable. Two important examples are linear SEMs with possibly non-Gaussian homoscedastic noise variables \citep{peters2013identifiability}, as well as linear SEMs with unequal noise variances that are known up to a constant \citep{loh2014high}. In these special cases, the true DAG can be identified from observational data without requiring the (strong) `faithfulness' assumption, which is known to be restrictive in high dimensions \citep{uhler2013geometry, sondhi2019reduced}. Given these important implications, in this paper we focus on learning Bayesian networks corresponding to the above \emph{identifiable} linear SEMs. The negative log-likelihood for an identifiable linear SEM \eqref{LSLM} with equal noise variances is proportional to \begin{equation}\label{eqn:lklhd} l(\beta; \mathcal{X}) =n\,\text{tr}\left\{(I-{B})(I-{B})^\top \widehat{\Sigma}\right\}; \end{equation} here $\widehat{\Sigma} =n^{-1} \mathcal{X}^\top \mathcal{X}$ is the empirical covariance matrix, and $I$ is the identity matrix \citep{shojaie2010penalized, van2013ell}.
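As a concrete sanity check of these definitions, the following sketch (ours, not the paper's code; the graph and coefficients are illustrative) simulates data from a small linear SEM with a fixed upper-triangular $B$ and verifies numerically that the likelihood term $n\,\text{tr}\{(I-B)(I-B)^\top \widehat{\Sigma}\}$ equals the residual sum of squares $\|\mathcal{X} - \mathcal{X}B\|_F^2$:

```python
# Sketch (not the paper's code): simulate X = X B + E by drawing E and setting
# X = E (I - B)^{-1}, then compare the trace form of the loss with the
# residual sum of squares.
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 500

B = np.zeros((m, m))                   # strictly upper triangular => a DAG
B[0, 1], B[0, 2], B[1, 3], B[2, 3] = 0.8, -0.5, 0.7, 0.3

E = rng.normal(size=(n, m))            # equal-variance Gaussian noise
X = E @ np.linalg.inv(np.eye(m) - B)   # samples satisfying X = X B + E

Sigma_hat = X.T @ X / n                # empirical covariance
loss_trace = n * np.trace((np.eye(m) - B) @ (np.eye(m) - B).T @ Sigma_hat)
loss_rss = np.sum((X - X @ B) ** 2)    # ||X - X B||_F^2
```

The identity follows from $\mathcal{X} - \mathcal{X}B = \mathcal{X}(I-B)$ and the cyclic property of the trace.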
To learn \textit{sparse} DAGs, \citet{van2013ell} propose to augment the negative log-likelihood with an $\ell_0$ regularization term. Given a super-structure $\mathcal{M}$, the optimization problem corresponding to this penalized negative log-likelihood (PNL$\mathcal{M}$) is given by \begin{subequations} \label{eq:PNLMform} \begin{align} \textbf{PNL$\mathcal{M}$} \quad & \underset{B \in {\mathbb R}^{m \times m}}{\min} \quad \Score(\beta):= l(\beta; \mathcal{X}) + \lambda_n \|\beta\|_0 \label{Eq: Opt} \\ \text{s.t.} \, \, & \mathcal{G}(B) \, \, \text{induces a DAG from} \, {\overrightarrow{\mathcal{M}}}, \label{Eq: DAG const} \end{align} \end{subequations} where the tuning parameter $\lambda_n$ controls the degree of the $\ell_0$ regularization, and the constraint \eqref{Eq: DAG const} stipulates that the resulting directed subgraph is a DAG induced from $\overrightarrow{\mathcal{M}}$. When $\mathcal{M}$ corresponds to a complete graph, PNL$\mathcal{M}$ reduces to the original PNL of \citet{van2013ell}. The choice of $\ell_0$ regularization in \eqref{eq:PNLMform} is deliberate. Although $\ell_1$ regularization has attractive computational and statistical properties in high-dimensional regression \citep{bulmann2011statistics}, many of these advantages disappear in the context of DAG structure learning \citep{fu2013learning, aragam2015concave}. By considering $\ell_0$ regularization, \cite{van2013ell} establish the consistency of PNL under appropriate assumptions. More specifically, for a Gaussian SEM, they show that the estimated DAG has (asymptotically) the same number of edges as the DAG with minimal number of edges (minimal-edge I-MAP), and establish the consistency of PNL for learning sparse DAGs. These results are formally stated in Proposition~\ref{prop:van} in the next section.
\begin{remark}\label{rem:L2} A Tikhonov ($\ell_2$) regularization term, $\mu \|\beta\|_2^2$, for a given $\mu > 0$, can also be added to the objective \eqref{Eq: Opt} to obtain more stable solutions \citep{bertsimas2016best}. \end{remark} In our earlier work \citep{manzour2019integer}, we observe that existing mathematical formulations are slow to converge to a provably optimal solution, $\beta^\star$, of \eqref{eq:PNLMform} using state-of-the-art optimization solvers. Therefore, the solution process needs to be terminated early to yield a feasible solution, $\hat \beta$, with a positive optimality gap, i.e., a positive difference between the upper bound on $\Score(\beta^\star)$ provided by $\Score(\hat \beta)$ and a lower bound on $\Score(\beta^\star)$ provided by the best continuous relaxation obtained by the branch-and-bound algorithm upon termination. However, the statistical properties of such a sub-optimal solution are not well-understood. As a result, there is a gap between theory and computation: while the optimal solution has nice statistical properties, the properties of the solutions obtained from approximate computational algorithms are not known. Moreover, due to the non-convex and complex nature of the problem, characterizing the properties of the solutions provided by heuristics is especially challenging. In the next section, we bridge this gap by developing a concrete early stopping criterion and establishing the consistency of the solution obtained using this criterion. \section{Early stopping criterion for DAG learning} \label{Cons} In this section, we establish a sufficient condition for the approximate solution of PNL$\mathcal{M}$, $\hat{\beta}$, to be consistent for the true coefficients, $\beta^{0}$; that is, $\|\beta^{0} - \hat{\beta}\|_2^2 = \mathcal{O}\left(s^0\log(m) / n \right)$, where $s^0$ is the number of arcs in the true DAG, and $x \asymp y$ means that $x$ and $y$ are of the same order asymptotically.
This result is obtained by leveraging an important property of the branch-and-bound process for integer programming that provides both lower and upper bounds on the objective function $\Score(\beta^\star)$ upon early stopping, as well as the consistency results of the PNL estimate with $\ell_0$ regularization. Using the insight from this new result, we then propose a concrete stopping criterion for terminating the branch-and-bound process that results in consistent parameter estimates. Let $LB$ and $UB$ respectively denote the lower and upper bounds on the optimal objective function value \eqref{Eq: Opt} obtained from solving \eqref{eq:PNLMform} under an early stopping criterion (i.e., when the obtained solution is not necessarily optimal). We define the difference between the upper and lower bounds as the \emph{absolute} optimality gap: $GAP = UB - LB$. Let $\hat{\mathcal{G}}$ and $\hat{\beta}$ denote the structure of the DAG and the coefficients of the arcs from optimization model \eqref{eq:PNLMform} under the early stopping condition with sample size $n$ and regularization parameter $\lambda_n$. Let ${\mathcal{G}^{\star}}$ and $\beta^{\star}$ denote the DAG structure and coefficients of arcs obtained from the optimal solution of \eqref{eq:PNLMform}, and $\mathcal{G}^{0}$ and $\beta^{0}$ denote the true DAG structure and the coefficients of arcs, respectively. We denote the number of arcs in $\hat{\mathcal{G}}$, $\mathcal{G}^{0}$, and ${\mathcal{G}^{\star}}$ by $\hat{s}$, $s^0$, and $s^{\star}$, respectively. The score value in \eqref{Eq: Opt} of each solution is denoted by $\Score(\phi)$ where $\phi \in \{\beta^{\star}, \hat{\beta}, \beta^0\}$. Next, we present our main result. Our result extends \citeauthor{van2013ell}'s result on consistency of PNL$\mathcal{M}$ for the optimal, but computationally unattainable, estimator, $\beta^{\star}$, to an approximate estimator, $\hat\beta$, obtained from early stopping.
In the following (including the statement of our main result, Proposition~\ref{EarlyProp}), we assume that the super-structure $\mathcal{M}$ is known \emph{a priori}. The setting where $\mathcal{M}$ is estimated from data is discussed at the end of the section. We begin by stating the key result from \cite{van2013ell} and the required assumptions. Throughout, we consider a Gaussian linear SEM of the form \eqref{LSLM}. We denote the variance of the error terms, $\epsilon_j$, by $\sigma_{jj}^2$ and the true covariance matrix of the set of random variables, $(X_1,\ldots, X_m)$, by the $m \times m$ matrix $\Sigma$. \begin{cond}\label{cond:1} For some constant $\sigma_0^2$, it holds that $\max_{j=1,\ldots,m}\sigma_{jj}^2 \leq \sigma_0^2$. Moreover, the smallest eigenvalue of $\Sigma$, $\kappa_{\min}(\Sigma)$, is nonzero. \end{cond} \begin{cond}\label{cond:2} Let, as in \cite{van2013ell}, $\widetilde\Omega(\pi)$ be the precision matrix of the vector of noise variables for an SEM given permutation $\pi$ of nodes. Denoting the diagonal entries of this matrix by $\tilde \omega_{jj}$, there exists a constant $\omega_0 > 0$ such that if $\widetilde\Omega(\pi)$ is not a multiple of the identity matrix, then \[ m^{-1} \sum_{j=1}^m\left( (\tilde\omega_{jj})^2 -1 \right)^2 > 1/ \omega_0. \] \end{cond} \begin{prop} (Theorem 5.1 in \cite{van2013ell}) \label{prop:van} Suppose Assumptions~\ref{cond:1} and \ref{cond:2} hold. Let $\alpha_0:= \min\{\frac{4}{m}, 0.05\}$. Then for an $\ell_0$ regularization parameter $\lambda \asymp \log(m)/n$, it holds with probability at least $1-\alpha_0$ that \[ \|\beta^{\star}-\beta^{0}\|_2^2 + \lambda s^{\star} = \mathcal{O}\left(\lambda s^0\right). \] \end{prop} Here, $\lambda=\lambda_n/n$, because the loss function \eqref{eqn:lklhd} is that of \cite{van2013ell} scaled by the sample size $n$. Before presenting our main result, we state one more condition on the covariance matrix of the random variables generated by the linear SEM.
For a given subset $S \subset \{1,\ldots,m\}$, let $S^c$ denote its complement, i.e., $S^c:= \{1,\ldots,m\}\setminus S.$ \textbf{Definition \citep{raskutti2010restricted}.} Define the set $\mathcal{C}(S; \eta) := \{v \in \mathbb{R}^{m}\, \, | \, \, \| v_{S^c}\|_1 \leq \eta \|v_S\|_1\}$ for a given subset $S \subset \{1,\ldots,m\}$ and constant $\eta \geq 1$. The $m \times m$ sample covariance matrix $\widehat{\Sigma} = n^{-1}\mathcal{X}^\top \mathcal{X}$ satisfies the \textit{restricted eigenvalue (RE) condition over $S$} with parameters $(\eta,\gamma) \in [1,\infty) \times [0,\infty)$ if \[ \frac{1}{n} v^\top \mathcal{X}^{\top} \mathcal{X} v = \frac{1}{n} \|\mathcal{X}v\|_2^2 \geq \gamma^2 \|v\|_2^2, \quad \forall v \in \mathcal{C}(S;\eta). \] If this condition holds for all subsets $S$ with cardinality $s$, we say that $\widehat{\Sigma}$ satisfies a \textit{restricted eigenvalue (RE) condition of order $s$} with parameters $(\eta,\gamma)$. The $m \times m$ population covariance matrix $\Sigma$ is said to satisfy the RE condition if \[ \| \Sigma^{1/2} v \|_2 \ge \gamma \| v \|_2 \quad \forall v \in \mathcal{C}(S;\eta). \] \citet{raskutti2010restricted} show that if $\Sigma$ satisfies the RE condition, then there exist constants $c$ and $c'$ such that with probability at least $1 - c' e^{-c n}$, $\widehat{\Sigma}$ also satisfies the RE condition with parameters $(\eta,\gamma/8)$. More specifically, their proof of Corollary~1 shows that for any $v \in \mathcal{C}(S; \eta)$, \begin{equation}\label{eqn:REdef} \| v \|_2^2 \le c_1 \| \mathcal{X} v \|_2^2, \end{equation} where $c_1 = n^{-1} \left\{\frac{\gamma}{4} - 9(1+\alpha) \sigma_0 \sqrt{\frac{s^0 \log(m)}{n}}\right\}^{-2}$ for $\sigma_0$ defined in Assumption~\ref{cond:1}.
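As a quick numerical illustration of an inequality of the form \eqref{eqn:REdef} (a sketch of ours, not the paper's code): whenever $\mathcal{X}^\top\mathcal{X}$ is nonsingular, the bound holds for \emph{every} $v$ with $c_1$ equal to the reciprocal of the smallest eigenvalue of $\mathcal{X}^\top\mathcal{X}$; interpreting the constant this way is our assumption for the purpose of the check.

```python
# Sanity check (ours): for nonsingular X^T X, ||v||^2 <= c1 * ||X v||^2 holds
# for every v with c1 = 1 / lambda_min(X^T X), since ||X v||^2 >= lambda_min ||v||^2.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 5
X = rng.normal(size=(n, m))

c1 = 1.0 / np.linalg.eigvalsh(X.T @ X).min()   # eigvalsh returns ascending eigenvalues

v = rng.normal(size=m)
lhs = float(v @ v)                      # ||v||^2
rhs = float(c1 * np.sum((X @ v) ** 2))  # c1 * ||X v||^2
```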
In fact, in the low-dimensional setting implied by condition \eqref{eqn:ncond}, the inequality \eqref{eqn:REdef} holds with probability one because, when $m \ll n$, for any $v \in \mathbb{R}^m$ we have $ \|\mathcal{X}v\|_2^2 \geq \kappa_{\min}(\mathcal{X}) \|v\|_2^2. $ Thus, \eqref{eqn:REdef} holds with $c_1 = 1 / \kappa_{\min}(\mathcal{X})$. \begin{prop} \label{EarlyProp} Suppose $\Sigma$ satisfies the RE condition of order $s^0$ with parameters $(\eta,\gamma)$ and that for constants $c_2, c_3 > 0$, \begin{equation}\label{eqn:ncond} n > \max\left\{ c_2 \frac{\sigma_0^2(1+\eta)^2}{\gamma^2}s^0 \log(m), \, c_3 m\log(n) \right\}. \end{equation} Suppose also that Assumptions~\ref{cond:1} and \ref{cond:2} hold. Let $\alpha_{0} = \min \{\frac{4}{m}, 0.05\}$ and $\lambda \asymp \log(m) / n$. Then, the estimator $\hat\beta$ obtained from early stopping of the branch-and-bound process such that GAP $= \mathcal{O}\left(\frac{\log(m)}{n} s^0\right)$ satisfies $ \|\hat\beta - \beta^{0}\|_2^2 = \mathcal{O}\left(\frac{\log(m)}{n}s^0\right)$ with probability at least $\min\{1- \alpha_0, 1- c' e^{-cn}\}$ for the constants $c$ and $c'$ used in the RE condition. \end{prop} \begin{proof} First, by the triangle inequality and the fact that $2ab \leq a^2 + b^2, \forall a,b \in \mathbb{R}$, \begin{equation}\label{eqn:newbnd1} \| \hat\beta - \beta^0 \|_2^2 \leq 2\| \hat\beta - \beta^\star \|_2^2 + 2\| \beta^\star - \beta^0 \|_2^2. \end{equation} Further, by the sparsity of $\beta^\star$ from Proposition~\ref{prop:van}, $\hat\beta - \beta^\star$ belongs to the set $\mathcal{C}(S^0; \eta)$, where $S^0 = \{j: \beta^0_j \ne 0 \}$ and $|S^0| = s^0$. Thus, \begin{equation}\label{eqn:lowerbnd} \| \hat\beta - \beta^\star \|_2^2 \leq c_1 \| \mathcal{X}(\hat\beta - \beta^\star) \|_2^2.
\end{equation} Now, noting that $\ell(\beta;\mathcal{X}) = \| \mathcal{X} - \mathcal{X}\beta \|_2^2$ (see, e.g., the expanded version in Eq.~\eqref{CP-obj}), we can write a Taylor series expansion of $\ell(\hat\beta;\mathcal{X})$ around $\ell(\beta^\star;\mathcal{X})$ to get \begin{align*} \| \mathcal{X}(\hat\beta - \beta^\star) \|_2^2 = \ell(\hat\beta;\mathcal{X}) - \ell(\beta^\star;\mathcal{X}) - 2(\hat\beta - \beta^\star)^\top \mathcal{X}^\top \mathcal{X} (\beta^\star - \beta^0) + 2(\hat\beta - \beta^\star)^\top \mathcal{X}^\top \mathcal{E}. \end{align*} Here, we also use the fact that $\mathcal{X} = \mathcal{X}B^0 + \mathcal{E}$. Thus, using triangle inequality again, we get \begin{align*} \| \hat\beta & - \beta^\star \|_2^2 \leq \\ & c_1\left| \ell(\hat\beta;\mathcal{X}) - \ell(\beta^\star;\mathcal{X}) \right| + 2c_1 \kappa_{\max}(\mathcal{X}^\top \mathcal{X}) \| \hat\beta - \beta^\star \|_2 \| \beta^\star - \beta^0 \|_2 + 2c_1 \| \hat\beta - \beta^\star \|_2 \| \mathcal{X}^\top\mathcal{E} \|_2, \end{align*} where $\kappa_{\max}$ denotes the maximum eigenvalue of the matrix. Let $Z = \| \hat\beta - \beta^\star \|_2$, and denote $ \Pi = 2c_1 \left[ \kappa_{\max}(\mathcal{X}^\top \mathcal{X}) \| \beta^\star - \beta^0 \|_2 + \| \mathcal{X}^\top\mathcal{E} \|_2 \right], $ and $ \Gamma = c_1\left| \ell(\hat\beta;\mathcal{X}) - \ell(\beta^\star;\mathcal{X}) \right|. $ Then, the above inequality can be written as $Z^2 \leq \Pi Z + \Gamma$, which implies that $Z \leq \left(\Pi + \sqrt{\Pi^2 + 4\Gamma} \,\right) / 2$. Let $\mathcal{T}$ be the event under which $\Pi = o(1)$. Then, on the set $\mathcal{T}$, \begin{equation}\label{eqn:newbnd2} \| \hat\beta - \beta^\star \|_2^2 \leq c_1\left| \ell(\hat\beta;\mathcal{X}) - \ell(\beta^\star;\mathcal{X}) \right| + o(1). 
\end{equation} Plugging in \eqref{eqn:newbnd2} into \eqref{eqn:newbnd1}, on the set $\mathcal{T}$ we get \begin{align}\label{eqn:newbnd3} \| \hat\beta - \beta^0 \|_2^2 & \leq 2 c_1\left| \ell(\hat\beta;\mathcal{X}) - \ell(\beta^\star;\mathcal{X}) \right| + 2\| \beta^\star - \beta^ 0 \|_2^2 + o(1) \nonumber\\ & \leq 2 c_1 \left| \ell(\hat\beta; \mathcal{X}) - \ell(\beta^{\star}; \mathcal{X}) + \lambda \hat{s} - \lambda s^\star \right| + 2\| \beta^{\star} - \beta^{0} \|_2^2 + 2 c_1 |\lambda s^{\star} - \lambda \hat{s}| + o(1) \nonumber \\ & \leq 2 c_1 \underset{\Score(\hat{\beta}) - \Score(\beta^{\star})}{\underbrace{ \left| \ell(\hat\beta; \mathcal{X}) - \ell(\beta^{\star}; \mathcal{X}) + \lambda \hat{s} - \lambda s^\star \right| } } + 2\| \beta^{\star} - \beta^{0} \|_2^2 + 2 c_1 \lambda s^{\star} + o(1) \nonumber \\ & \leq 2 c_1 GAP + 2\left(\| \beta^{\star} - \beta^{0} \|_2^2 + c_1 \lambda s^{\star}\right) + o(1), \end{align} where the last inequality follows from the fact that, by definition, $\left\vert \Score(\hat{\beta}) - \Score(\beta^{\star}) \right\vert \leq GAP$. Now, by Proposition~\ref{prop:van}, we know that with probability at least $1 - \alpha_0$, $\| \beta^{\star} - \beta^{0} \|_2^2 = \mathcal{O}\left(s^0\log(m)/n\right)$, and $ c_1 \lambda s^{\star} = \mathcal{O}\left(c_1 s^0\log(m)/n\right)$. Moreover, by the RE condition, with probability at least $1- c' e^{-cn}$, $c_1 = \mathcal{O}(1)$. Finally, using concentration inequalities for the Gaussian SEM noise $\mathcal{E}$ \citep[e.g.][]{bulmann2011statistics}, the probability of the set $\mathcal{T}$ is lower bounded by the probability that $\| \beta^{\star} - \beta^{0} \|_2^2 = \mathcal{O}\left(s^0\log(m)/n\right)$, which is $1- \alpha_0$. Thus, stopping the branch-and-bound algorithm when $GAP = \mathcal{O}(\lambda s^{0})$ guarantees that, with probability at least $\min\{1- \alpha_0, 1- c' e^{-cn}\}$, $\|\hat{\beta}-\beta^0\|_2^2 = \mathcal{O}\left( s^0 \log(m) / n \right)$. 
\end{proof} Proposition~\ref{EarlyProp} suggests that the algorithm can be stopped by setting a threshold $c^{\star} \lambda s^{0}$ on the value of $GAP = | UB - LB |$ for a constant $c^{\star} > 0$, say $c^{\star}=1$. Such a solution will then achieve the same desirable statistical properties as the optimal solution $\beta^{\star}$. However, while $\lambda$ can be chosen data-adaptively (as discussed in Section~\ref{Sec: Computational}), the value of $s^0$ is not known. Nonetheless, one can find an upper bound for $s^0$ based on the number of edges in the super-structure $\mathcal{M}$. In particular, if $\mathcal{M}$ is the moral graph \citep{pearl2009causal} with $s_m$ edges, then $s^0 \leq s_m$. Thus, in this case, a consistent parameter estimate can be obtained if the branch-and-bound process is stopped when $GAP \le s_m \lambda$. The above results, including the specific choice of early stopping criterion, are also valid if the super-structure $\mathcal{M}$ corresponding to the moral graph is not known \emph{a priori}. That is because the moral graph can be consistently estimated from data using recent developments in graphical modeling; see \citet{drton2017structure} for a review of the literature. While some of the existing algorithms based on the $\ell_1$-penalty require an additional \emph{irrepresentability} condition \citep{meinshausen2006high, saegusa2016joint}, this assumption can be relaxed by using instead an adaptive lasso penalty or by thresholding the initial lasso estimates \citep{bulmann2011statistics}. In light of Proposition \ref{EarlyProp}, it is of great interest to develop algorithms that converge to a solution with a small optimality gap expeditiously. To achieve this, one approach is to obtain better lower bounds using the branch-and-bound process from strong mathematical formulations for \eqref{eq:PNLMform}. To this end, we next review existing formulations of \eqref{eq:PNLMform}.
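The resulting rule is simple enough to state in a few lines; the following sketch (the function name and interface are ours, not the paper's) checks the criterion $GAP \le c^{\star}\lambda s_m$ using the moral-graph bound $s_m \ge s^0$:

```python
# Sketch of the stopping rule above: terminate branch-and-bound once
# GAP = UB - LB drops below c_star * lambda * s_m, where s_m (edges in the
# super-structure / moral graph) upper-bounds the true arc count s^0.
def stop_early(ub, lb, lam, s_m, c_star=1.0):
    """True once the absolute optimality gap certifies a consistent estimate."""
    return (ub - lb) <= c_star * lam * s_m
```

In a solver callback, `ub` and `lb` would be the incumbent objective and the best bound reported at the current node.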
\section{Existing Formulations of DAG Learning with Linear SEMs} \label{Sec: Previous work} In this section, we review known mathematical formulations for DAG learning with linear SEMs. We first outline the necessary notation below. \\ \\ \noindent \textbf{Index Sets}\\ $V = \{1,2,\dots,m\}$: index set of random variables;\\ $\mathcal{D}= \{1,2,\dots,n\}$: index set of samples. \vspace{0.1in}\\ \noindent \textbf{Input} \\ $\mathcal{M}=(V,E)$: an undirected super-structure graph (e.g., the moral graph);\\ $\overrightarrow{\mathcal{M}}=(V,\overrightarrow{E})$: the bi-directional graph corresponding to the undirected graph $\mathcal{M}$; \\ $\mathcal{X} = (\mathcal{X}_1, \dots, \mathcal{X}_m)$, where $\mathcal{X}_v = (x_{1v}, x_{2v}, \dots, x_{nv})^{\top}$ and $x_{dv}$ denotes $d$th sample ($d \in \mathcal{D}$) of random variable $X_v$; note $\mathcal{X} \in \mathbb{R}^{n \times m}$; \\ $\lambda_n:$ tuning parameter (penalty coefficient for $\ell_0$ regularization).\\ \noindent \textbf{Continuous optimization variables} \\ $\beta_{jk}$: weight of arc $(j, k)$ representing the regression coefficients $\forall (j,k) \in \overrightarrow{E}$.\\ \noindent\textbf{Binary optimization variables} \\ $z_{jk}=1 \, \, \text{if arc} \, \, (j, k) \, \text{exists in a DAG}; \text{otherwise} \, 0, \, \forall (j,k) \in \overrightarrow{E}$, \\ $g_{jk}=1 \, \, \text{if} \, \, \beta_{jk} \neq 0; \, \text{otherwise} \, \, 0, \, \forall (j,k) \in \overrightarrow{E}$. Let $F(\beta, g)= \sum_{k\in V}\sum_{d\in \mathcal{D}} \Big(x_{dk}-\sum_{(j,k) \in \overrightarrow{E}} \beta_{jk}x_{dj}\Big)^2 + \lambda_n\sum_{(j,k) \in \overrightarrow{E}} g_{jk}$. 
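For concreteness, the following sketch (our helper, not the paper's code) evaluates $F(\beta, g)$ for a candidate pair, storing $\beta$ as an $m \times m$ array and $g$ as a matching 0/1 array:

```python
# Sketch: evaluate the objective F(beta, g) defined above. Column k of
# X - X @ beta is x_{.k} - sum_j beta_jk x_{.j}, so its squared Frobenius
# norm is the double sum over nodes and samples.
import numpy as np

def F(beta, g, X, lam_n):
    """Least-squares fit over all nodes plus the l0 surrogate lam_n * sum(g)."""
    residuals = X - X @ beta
    return np.sum(residuals ** 2) + lam_n * np.sum(g)
```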
The PNL$\mathcal{M}$ can be cast as the following optimization problem: \begin{subequations} \begin{alignat}{3} \label{CP-obj} \quad \quad \underset{}{\min}\quad & \, F(\beta, g), \\ &\label{CP-con2}\mathcal{G}(B) \, \, \text{induces a DAG from} \, {\overrightarrow{\mathcal{M}}}, \\ & \label{CP-con1} \beta_{jk}(1-g_{jk})=0, \quad && \forall (j,k) \in \overrightarrow{E},\\ &\label{CP-con3} g_{jk} \in \{0,1\},\quad && \forall (j,k) \in \overrightarrow{E}. \end{alignat} \label{MIQP1} \end{subequations} \noindent The objective function \eqref{CP-obj} is an expanded version of $\Score(\beta)$ in PNL$\mathcal{M}$, where we use the indicator variable $g_{jk}$ to encode the $\ell_0$ regularization. The constraints in \eqref{CP-con2} rule out cycles. The constraints in \eqref{CP-con1} are non-linear and stipulate that $\beta_{jk} \neq 0$ only if $g_{jk}=1$. There are two sources of difficulty in solving \eqref{CP-obj}-\eqref{CP-con3}: (i) the acyclic nature of the DAG imposed by the combinatorial constraints in \eqref{CP-con2}; (ii) the set of \textit{nonlinear} constraints in \eqref{CP-con1}, which stipulates that $\beta_{jk} \neq 0$ only if there exists an arc $(j,k)$ in $\mathcal{G}(B)$.\ In Section \ref{lit1}, we discuss related studies that address the former, whereas in Section \ref{lit2} we present relevant literature for the latter. \subsection{Linear encodings of the acyclicity constraints \eqref{CP-con2}} \label{lit1} There are several ways to ensure that the estimated graph does not contain any cycles. The first approach is to add a constraint for each cycle in the graph, so that at least one arc in this cycle must not exist in $\mathcal G(B)$.\ A \textit{cutting plane} (CP) method is used to solve such a formulation, which may require generating an exponential number of constraints. Another way to rule out cycles is by imposing constraints such that the nodes follow a topological order \citep{park2017bayesian}.
A topological ordering is an ordering of the nodes of a graph from 1 to $m$ such that the graph contains an arc $(j,k)$ only if node $j$ appears before node $k$ in the order. We refer to this formulation as \textit{topological ordering} (TO). The \textit{layered network} (LN) formulation proposed by \cite{manzour2019integer} improves the TO formulation by reducing the number of binary variables; see \cite{manzour2019integer} for a detailed discussion of these formulations. Let $\mathcal{C}$ be the set of all possible directed cycles and $\mathcal{C}_A \in \mathcal{C}$ be the set of arcs defining a cycle. The CP formulation removes cycles by imposing the following constraints in lieu of \eqref{CP-con2} \begin{equation} \label{CE} \textbf{CP} \quad \sum_{(j,k ) \in \, \mathcal{C}_A} g_{jk} \leq |\mathcal{C}_A|-1, \quad \forall \mathcal{C}_A \in \mathcal{C}. \end{equation} Define decision variables $z_{jk} \in \{0,1\}$ for all $(j,k) \in \overrightarrow{E}$ and $o_{rs} \in \{0,1\}$ for all $r, s \in \{1, \dots, m\}$. The variable $z_{jk}$ takes value 1 if there is an arc $(j,k)$ in the network, and $o_{rs}$ takes value 1 if the topological order of node $r$ equals $s$. The TO formulation rules out cycles in the graph by the following constraints \begin{subequations}\label{TO} \begin{alignat}{3} \textbf{TO} \quad & \label{TO-con3} g_{jk} \leq z_{jk}, \quad && \forall (j,k) \in \overrightarrow{E}, \\ \label{TO-con4} & z_{jk} - m z_{kj} \leq \sum_{s \in V} s \, (o_{ks} - o_{js}), \quad&& \forall (j,k) \in \overrightarrow{E},\\ \label{TO-con5} & \sum_{s \in V} o_{rs} =1 \quad && \forall r \in V, \\ \label{TO-con6} & \sum_{r \in V} o_{rs} =1 \quad &&\forall s \in V. \end{alignat} \end{subequations} The third way to remove cycles is by imposing the condition that the resulting graph is a layered network.
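Returning briefly to the CP constraints \eqref{CE}: in a cutting-plane implementation the exponentially many cycle constraints are generated lazily. Given the support of an incumbent solution, $\{(j,k) : g_{jk} = 1\}$, one searches for a directed cycle and, if one is found, adds the cut $\sum_{(j,k)\in\mathcal{C}_A} g_{jk} \le |\mathcal{C}_A| - 1$. A minimal separation sketch (our helper, not the paper's implementation):

```python
# Sketch (ours): find one directed cycle in the arcs selected by an incumbent
# (g_jk = 1) via depth-first search; its arc set C_A yields a cycle cut.
def find_cycle(m, arcs):
    """Return the arcs of one directed cycle, or None if the graph is acyclic."""
    adj = {j: [] for j in range(m)}
    for j, k in arcs:
        adj[j].append(k)
    color = [0] * m            # 0 = unseen, 1 = on stack, 2 = done
    parent = {}

    def dfs(u):
        color[u] = 1
        for w in adj[u]:
            if color[w] == 0:
                parent[w] = u
                cyc = dfs(w)
                if cyc is not None:
                    return cyc
            elif color[w] == 1:  # back arc (u, w) closes a cycle
                cyc, node = [(u, w)], u
                while node != w:
                    cyc.append((parent[node], node))
                    node = parent[node]
                return cyc
        color[u] = 2
        return None

    for s in range(m):
        if color[s] == 0:
            cyc = dfs(s)
            if cyc is not None:
                return cyc
    return None
```

Applied to the incumbent's arcs, a `None` result certifies that no constraint of the form \eqref{CE} is violated.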
Letting $\psi_k$ denote the \textit{layer value} of node $k$, this can be achieved by the following set of constraints in the LN formulation: \begin{subequations}\label{LN} \begin{alignat}{3} \label{LN-con3} \textbf{LN} \quad & g_{jk} \leq z_{jk} \quad&& \forall (j,k) \in \overrightarrow{E}, \\ \label{LN-con4} & z_{jk} - (m-1) z_{kj} \leq \psi_k - \psi_j \quad &&\forall (j,k) \in \overrightarrow{E}. \end{alignat} \label{eq:LN} \end{subequations} \noindent The set of constraints in \eqref{LN-con4} ensures that if there is an arc from node $j$ to node $k$ (i.e., $z_{jk} = 1$ and $z_{kj} = 0$), then $\psi_k \geq \psi_j + 1$. This rules out any cycles. The set of constraints in \eqref{LN-con4} imposes that if $z_{ij} = 1$ and $z_{jk} = 1$, then $z_{ik} = 1$. Thus, the additional binary vector $g$, along with the set of constraints in \eqref{LN-con3}, is needed to correctly encode the $\ell_0$ regularization. Similar reasoning applies for the TO formulation; see \cite{manzour2019integer}. \subsection{Linear encodings of the non-convex constraints \eqref{CP-con1}} \label{lit2} The nonconvexity of the set of constraints in \eqref{CP-con1} causes challenges in obtaining provably optimal solutions with existing optimization software. Therefore, we consider convex representations of this set of constraints. First, we consider a linear representation of the constraints in \eqref{CP-con1}. Although the existing formulations discussed in Section \ref{lit1} differ in their approach to ruling out cycles, one major commonality among them is that they replace the non-linear constraint \eqref{CP-con1} by so-called \emph{big-$M$ constraints} given by \begin{equation}\label{eq:bigM} -M g_{jk} \leq \beta_{jk} \leq M g_{jk}, \quad \forall (j,k) \in \overrightarrow{E}, \end{equation} for a large enough $M$.
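To see how weak \eqref{eq:bigM} becomes once the binaries are relaxed to $g_{jk} \in [0,1]$, note that the cheapest feasible choice is $g_{jk} = |\beta_{jk}|/M$, so the regularization term collapses to an $\ell_1$ penalty scaled by $1/M$. A small numeric sketch (the values of $\beta$, $\lambda_n$, and $M$ are illustrative, not from the paper):

```python
# Illustration (our numbers): with relaxed g in [0, 1], the tightest g
# feasible for -M g <= beta <= M g is g = |beta| / M, so the penalty
# lam_n * sum(g) equals (lam_n / M) * ||beta||_1 -- much weaker for large M.
import numpy as np

beta = np.array([0.8, -0.5, 0.0, 0.3])
lam_n = 2.0
M = 100.0                               # a conservatively large big-M

g_relaxed = np.abs(beta) / M            # tightest feasible relaxed g
penalty_relaxed = lam_n * np.sum(g_relaxed)
penalty_l1 = (lam_n / M) * np.sum(np.abs(beta))
```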
Unfortunately, these big-$M$ constraints \eqref{eq:bigM} are poor approximations of \eqref{CP-con1}, especially in this problem, because no natural and tight value for $M$ exists. Although a few techniques have been proposed for obtaining the big-$M$ parameter for the sparse regression problem \citep{bertsimas2017sparse,gomez2018mixed}, the resulting parameters are often too large in practice. Further, finding a tight big-$M$ parameter is itself a difficult problem for DAG structure learning. Consider \eqref{CP-obj}-\eqref{CP-con3}, replacing \eqref{CP-con1} with the linear big-$M$ constraints \eqref{eq:bigM} and writing the objective function in matrix form. We denote the resulting formulation, which has a convex quadratic objective and linear constraints, by the following {MIQP}. \begin{subequations}\label{eq:LNform} \begin{alignat}{3} \quad \label{L-obj} \textbf{MIQP}\quad \min & \quad \text{tr}\left[(I- B)(I-B)^{\top}\mathcal{X}^{\top}\mathcal{X}\right] + \lambda_n \sum_{(j,k) \in \overrightarrow{E}} g_{jk}\\ & \eqref{CP-con2}, \eqref{eq:bigM} \label{LN-con1} \\ & \label{LN-con5} g_{jk} \in \{0,1\} \quad \forall (j,k) \in \overrightarrow{E}. \end{alignat} \end{subequations} Depending on which types of constraints are used in lieu of \eqref{CP-con2}, as explained in Section \ref{lit1}, {MIQP} \eqref{eq:LNform} results in three different formulations: {MIQP+CP}, which uses \eqref{CE}; {MIQP+TO}, which uses \eqref{TO}; and {MIQP+LN}, which uses \eqref{LN}. To discuss the challenges of the big-$M$ approach, we give a definition followed by two propositions. \begin{definition}\label{def:2} A formulation $A$ is said to be \emph{stronger} than formulation $B$ if $\mathcal{R}(A) \subset \mathcal{R} (B)$ where $\mathcal{R}(A)$ and $\mathcal{R}(B)$ correspond to the feasible regions of continuous relaxations of $A$ and $B$, respectively.
\end{definition} \begin{prop}{(Proposition 3 in \cite{manzour2019integer})} {\it The {MIQP+TO} and {MIQP+CP} formulations are stronger than the {MIQP+LN} formulation.} \label{Prop:strong} \end{prop} \begin{prop}{(Proposition 5 in \cite{manzour2019integer}) \label{Prop2}} \it{Let $\beta^{\star}_{jk}$ denote the optimal coefficient associated with an arc $(j,k) \in \overrightarrow{E}$ from problem \eqref{eq:PNLMform}.\ For the same variable branching in the branch-and-bound process, the continuous relaxations of the {MIQP+LN} formulation for $\ell_0$ regularizations attain the same optimal objective function value as {MIQP+TO} and {MIQP+CP}, if $M \geq 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^{\star}_{jk}|$.} \label{Prop5: BB} \end{prop} Proposition \ref{Prop:strong} implies that the {MIQP+TO} and {MIQP+CP} formulations are stronger than the {MIQP+LN} formulation. Nonetheless, Proposition \ref{Prop5: BB} establishes that for sufficiently large values of $M$, stronger formulations attain the same continuous relaxation objective function value as the weaker formulation throughout the branch-and-bound tree. The optimal solution to the continuous relaxation of MIQP formulations of DAG structure learning may not be at an extreme point of the convex hull of feasible points. Hence, stronger formulations do not necessarily ensure better lower bounds. This is in contrast to a mixed-integer program (MIP) with linear objective, whose continuous relaxation is a linear program (LP). In that case, there exists an optimal solution that is an extreme point of the corresponding feasible set.\ As a result, a better lower bound can be obtained from a stronger formulation that better approximates the convex hull of a mixed-integer linear program; this generally leads to faster convergence. A prime example is the traveling salesman problem (TSP), for which stronger formulations attain better computational performance \citep{oncan2009comparative}. 
In contrast, the numerical results by \cite{manzour2019integer} show that {MIQP+LN} has better computational performance because it is a compact formulation with the fewest constraints and the same continuous relaxation bounds. Our next result, which is adapted from \cite{dong2015regularization} to the DAG structure learning problem, shows that the continuous relaxation of {MIQP} is equivalent to the problem in which the $\ell_0$-regularization term is replaced with an $\ell_1$-regularization term (i.e., $\|\beta\|_1=\sum_{(j,k) \in \overrightarrow{E}}|\beta_{jk}|$) with a particular choice of the $\ell_1$ penalty parameter. This motivates us to consider tighter continuous relaxations of {MIQP}. Let $(\beta^R, g^R)$ be an optimal solution to the continuous relaxation of {MIQP}. \begin{prop} For $M \geq 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^R_{jk}|$, the continuous relaxation of {MIQP} \eqref{eq:LNform}, in which the binary variables are relaxed, is equivalent to the problem where the $\ell_0$-regularization term is replaced with an $\ell_1$-regularization term with penalty parameter $\tilde{\lambda}=\frac{\lambda_n}{M}$. \label{L1} \end{prop} \begin{proof} For $M \geq 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^R_{jk}|$, we have $g^R_{jk} = \frac{|\beta^R_{jk}|}{M}$ in an optimal solution to the continuous relaxation of {MIQP} \eqref{eq:LNform}; otherwise, we could reduce the value of the decision variable $g^R_{jk}$ without violating any constraints while reducing the objective function. Note that since $M \geq 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^R_{jk}|$, we have $\frac{|\beta_{jk}^R|}{M} \leq 1, \, \forall (j,k) \in \overrightarrow{E}$. To show that the set of constraints in \eqref{CP-con2} is satisfied, we consider the set of CP constraints.
In this case, the set of constraints \eqref{CP-con2} holds, i.e., $\sum_{(j,k ) \in \, \mathcal{C}_A} \frac{|\beta^{R}_{jk}|}{M} \leq |\mathcal{C}_A|-1, \quad \forall \mathcal{C}_A \in \mathcal{C}$, because $M \geq 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^R_{jk}|$. This implies that $g_{jk}^R=\frac{|\beta_{jk}^R|}{M}$ is optimal. Thus, the regularization term in the objective function reduces to an $\ell_1$ penalty with coefficient $\frac{\lambda_n}{M}$. Finally, Proposition \ref{Prop2} establishes that for $M \geq 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^\star_{jk}|$, the objective function values of the continuous relaxations of {MIQP+CP}, {MIQP+LN} and {MIQP+TO} are equivalent. This implies that the continuous relaxations of all formulations are equivalent, which completes the proof. \end{proof} Despite the promising performance of {MIQP+LN}, its continuous relaxation objective function value provides a weak lower bound due to the big-$M$ constraints. To circumvent this issue, a natural strategy is to improve the big-$M$ value. Nonetheless, existing methods which ensure a valid big-$M$ value or heuristic techniques \citep{park2017bayesian,gomez2018mixed} do not lead to tight big-$M$ values. For instance, the big-$M$ values obtained by the heuristic technique of \cite{park2017bayesian} always satisfy the condition in Proposition \ref{Prop2}, and exact techniques are expected to produce even larger big-$M$ values. Therefore, we next directly develop tighter approximations for \eqref{CP-con1}. \section{New Perspectives for Mathematical Formulations of DAG Learning} \label{Sec: Math models} In this section, we discuss improved mathematical formulations for learning the DAG structure of a BN based on convex (instead of linear) encodings of the constraints in \eqref{CP-con1}.
Problem \eqref{MIQP1} is an MIQP with non-convex complementarity constraints \eqref{CP-con1}, a class of problems which has received a fair amount of attention from the operations research community over the last decade \citep{frangioni2006perspective, frangioni2007sdp, frangioni2009computational, frangioni2011projected, gomez2018mixed}. There has also been recent interest in leveraging these developments to solve sparse regression problems with $\ell_0$ regularization \citep{pilanci2015sparse, dong2015regularization, xie2018ccp, atamturk2019rank,wei2019convexification}. Next, we review applications of MIQPs with complementarity constraints of the form \eqref{CP-con1} for solving sparse regression with $\ell_0$ regularization. \cite{frangioni2011projected} develop a so-called projected perspective relaxation method to solve the perspective relaxation of mixed-integer nonlinear programming problems with a convex objective function and complementarity constraints. This reformulation requires that the corresponding binary variables are not involved in other constraints. Therefore, it is suitable for $\ell_0$ sparse regression, but cannot be applied to DAG structure learning. \cite{pilanci2015sparse} show how a broad class of $\ell_0$-regularized problems, including sparse regression as a special case, can be formulated exactly as mixed-integer optimization problems. The authors use the Tikhonov regularization term $\mu\|\beta\|_2^2$ and convex analysis to construct an improved convex relaxation using the reverse Huber penalty. In a similar vein, \cite{bertsimas2017sparse} exploit Tikhonov regularization and develop an efficient algorithm by reformulating the sparse regression problem as a saddle-point optimization problem with an outer linear integer optimization problem and an inner dual quadratic optimization problem, which is capable of solving high-dimensional sparse regressions.
\cite{xie2018ccp} apply the perspective formulation of sparse regression optimization problem with both $\ell_0$ and the Tikhonov regularization. The authors establish that the continuous relaxation of the perspective formulation is equivalent to the continuous relaxation of the formulation given by \cite{bertsimas2017sparse}. \cite{dong2015regularization} propose perspective relaxation for $\ell_0$ sparse regression optimization formulation and establish that the popular sparsity-inducing concave penalty function known as the minimax concave penalty \citep{zhang2010nearly} and the reverse Huber penalty \citep{pilanci2015sparse} can be obtained as special cases of the perspective relaxation -- thus the relaxations of formulations by \cite{zhang2010nearly,pilanci2015sparse, bertsimas2017sparse, xie2018ccp} are equivalent. The authors obtain an optimal perspective relaxation that is no weaker than any perspective relaxation. Among the related approaches, the optimal perspective relaxation by \cite{dong2015regularization} is the only one that does not explicitly require the use of Tikhonov regularization. The perspective formulation, which in essence is a fractional non-linear program, can be cast either as a mixed-integer second-order cone program (MISOCP) or a semi-infinite mixed-integer linear program (SIMILP). Both formulations can be solved directly by state-of-the-art optimization packages. \cite{dong2015regularization} and \cite{atamturk2019rank} solve the continuous relaxations and then use a heuristic approach (e.g., rounding techniques) to obtain a feasible solution (an upper bound). In this paper, we directly solve the MISOCP and SIMILP formulations for learning sparse DAG structures. Next, we present how perspective formulation can be suitably applied for DAG structure learning with $\ell_0$ regularization. We further cast the problem as MISOCP and SIMILP. 
To this end, we express the objective function \eqref{L-obj} in the following way: \begin{subequations}\label{eq:objexpand} \begin{alignat}{3} \label{PR-obj} \quad & \text{tr}[(I- B)(I-B)^{\top}\mathcal{X}^{\top}\mathcal{X}] + \lambda_n \sum_{(j,k) \in \overrightarrow{E}} g_{jk}\\ \label{PR-obj4} \quad & = \text{tr}\left[(I- B-B^\top)\mathcal{X}^{\top}\mathcal{X} + BB^\top \mathcal{X}^{\top}\mathcal{X}\right] + \lambda_n \sum_{(j,k) \in \overrightarrow{E}} g_{jk}. \end{alignat} \end{subequations} \noindent Let $\delta \in \mathbb{R}_{+}^{m}$ be a vector such that $\mathcal{X}^\top\mathcal{X} -{D}_\delta \succeq 0$, where $D_\delta=\text{diag}(\delta_1, \dots, \delta_m)$ and $A \succeq 0$ means that matrix $A$ is positive semi-definite. By splitting the quadratic term $\mathcal{X}^{\top}\mathcal{X}= (\mathcal{X}^{\top}\mathcal{X} -D_\delta)+D_\delta$ in \eqref{PR-obj4}, the objective function can be expressed as \begin{equation}\label{eq:LNform1} \text{tr}\left[(I- B-B^\top)\mathcal{X}^{\top}\mathcal{X} + BB^\top(\mathcal{X}^{\top}\mathcal{X} - D_\delta)\right] + \text{tr}\left(B B^\top D_\delta\right) +\lambda_n \sum_{(j,k) \in \overrightarrow{E}} g_{jk}. \end{equation} Let $Q= \mathcal{X}^{\top}\mathcal{X} - D_{\delta}$. (In the presence of Tikhonov regularization with tuning parameter $\mu> 0$, we let $Q= \mathcal{X}^{\top}\mathcal{X} + \mu I- D_{\delta}$ as described in Remark~\ref{rem:L2}.) Then, a Cholesky decomposition can be applied to factor $Q$ as $q^{\top}q$ (note $Q \succeq 0$). As a result, $\text{tr}\left(B B^\top Q\right) = \text{tr}\left(B B^\top {q^\top} {q}\right) = \sum_{i=1}^{m} \sum_{j=1}^{m} \left(\sum_{(\ell,j) \in \overrightarrow{E}} \beta_{\ell j}q_{i\ell}\right)^2$. The separable component can also be expressed as $\text{tr}\left(B B^\top D_\delta\right) = \sum_{j=1}^{m} \sum_{(j,k) \in \overrightarrow{E}} \delta_j\beta_{jk}^2$.
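As a sanity check, the two trace identities above can be verified numerically. The sketch below uses a small hand-coded instance (the factor $q$, the coefficient matrix $B$, and the vector $\delta$ are illustrative placeholders, not the paper's data) and pure-Python matrix helpers:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# q is a 2 x 3 "Cholesky-like" factor, so Q = q^T q is PSD by construction.
q = [[1.0, 2.0, 0.5],
     [0.0, 1.0, 3.0]]
# beta_{lj} = B[l][j]; zero entries play the role of arcs absent from E.
B = [[0.0, 0.7, 0.0],
     [0.0, 0.0, -1.2],
     [0.4, 0.0, 0.0]]
delta = [0.5, 1.0, 2.0]

Q = matmul(transpose(q), q)
BBt = matmul(B, transpose(B))

# tr(B B^T q^T q) equals the sum over i, j of (sum_l beta_{lj} q_{il})^2.
lhs = trace(matmul(BBt, Q))
rhs = sum(sum(B[l][j] * q[i][l] for l in range(3)) ** 2
          for i in range(2) for j in range(3))

# tr(B B^T D_delta) equals sum_j delta_j * sum_k beta_{jk}^2.
D = [[delta[0], 0.0, 0.0], [0.0, delta[1], 0.0], [0.0, 0.0, delta[2]]]
lhs2 = trace(matmul(BBt, D))
rhs2 = sum(delta[j] * B[j][k] ** 2 for j in range(3) for k in range(3))

assert abs(lhs - rhs) < 1e-9 and abs(lhs2 - rhs2) < 1e-9
```

Both identities follow from the cyclic property of the trace; the check above merely confirms the bookkeeping of indices.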
Using this notation, the objective \eqref{eq:LNform1} can be written as \begin{equation} \nonumber \label{Obj} \text{tr}\left[(I- B-B^\top)\mathcal{X}^{\top}\mathcal{X} + BB^\top Q\right] +\sum_{j=1}^{m} \sum_{(j,k) \in \overrightarrow{E}} \delta_j\beta_{jk}^2 + \lambda_n\sum_{(j,k) \in \overrightarrow{E}} g_{jk}. \end{equation} \noindent The Perspective Reformulation (PRef) of {MIQP} is then given by \begin{subequations}\label{eq:PR} \begin{alignat}{3} \textbf{PRef} \label{PR-Obj} \quad \min \quad &\text{tr}\big[(I- B-B^\top)\mathcal{X}^{\top}\mathcal{X} + BB^\top Q\big] + \\ & \nonumber \sum_{j=1}^{m} \sum_{(j,k) \in \overrightarrow{E}} \delta_j \frac{\beta_{jk}^2}{g_{jk}} + \lambda_n\sum_{(j,k) \in \overrightarrow{E}} g_{jk},\\ & \eqref{LN-con1}-\eqref{LN-con5}. \end{alignat} \end{subequations} The objective function \eqref{PR-Obj} is formally undefined when some $g_{jk} = 0$. More precisely, we use the convention that $\frac{\beta^2_{jk}}{g_{jk}}=0$ when $\beta_{jk} = g_{jk} = 0$ and $\frac{\beta^2_{jk}}{g_{jk}}=+\infty$ when $\beta_{jk} \neq 0$ and $g_{jk}=0$ \citep{frangioni2009computational}. The continuous relaxation of PRef, referred to as the perspective relaxation, is much stronger than the continuous relaxation of MIQP \citep{pilanci2015sparse}. However, an issue with PRef is that its objective function is nonlinear due to the fractional term. There are two ways to reformulate PRef: as a mixed-integer second-order cone program (MISOCP) (see Section \ref{SOCP}), or as a semi-infinite mixed-integer linear program (SIMILP) (see Section \ref{SIP}). \subsection{Mixed-integer second-order conic program} \label{SOCP} Let $s_{jk}$ be additional variables representing $\beta_{jk}^2$.
Then, the MISOCP formulation is given by \begin{subequations} \label{eq:misocp} \begin{alignat}{3} \textbf{MISOCP}\quad \min \quad &\text{tr}\left[(I- B-B^\top)\mathcal{X}^{\top}\mathcal{X} + BB^\top Q\right] + \\ & \nonumber \sum_{j=1}^{m} \sum_{(j,k) \in \overrightarrow{E}} \delta_j s_{jk} + \lambda_n \sum_{(j,k) \in \overrightarrow{E}} g_{jk},\\ & \label{SOCP-C1} s_{jk}g_{jk} \geq \beta_{jk}^2 \quad (j,k) \in \overrightarrow{E},\\ & \label{SOCP-C2} 0\le s_{jk} \leq M^2 g_{jk} \quad (j,k) \in \overrightarrow{E},\\ & \eqref{LN-con1}-\eqref{LN-con5}. \end{alignat} \end{subequations} Here, the constraints in \eqref{SOCP-C1} imply that $\beta_{jk} \neq 0$ only when $g_{jk} = 1$. The constraints in \eqref{SOCP-C1} are second-order conic representable because they can be written in the form $\sqrt{4\beta_{jk}^2+ (s_{jk}-g_{jk})^2}\leq s_{jk}+g_{jk}$. The set of constraints in \eqref{SOCP-C2} is valid since $|\beta_{jk}| \leq Mg_{jk}$ implies $\beta_{jk}^2 \leq M^2g^2_{jk}= M^2g_{jk}$, because $g_{jk}^2=g_{jk}$ for $g_{jk} \in \{0,1\}$. The constraints in \eqref{SOCP-C2} are not required, yet they improve the computational efficiency, especially when we restrict the big-$M$ value. \cite{xie2018ccp} report similar behavior for sparse regression. When we relax $g_{jk}\in \{0,1\}$ to $g_{jk}\in[0,1]$, we obtain the continuous relaxation of {MISOCP} \eqref{eq:misocp}. Let us denote the feasible regions of the continuous relaxations of {MISOCP} \eqref{eq:misocp} and {MIQP} \eqref{eq:LNform} by $\mathcal{R}$MISOCP and $\mathcal{R}$MIQP, and the corresponding objective function values by OFV($\mathcal{R}$MISOCP) and OFV($\mathcal{R}$MIQP), respectively. For a more general problem than ours, \cite{cui2013convex} give a detailed proof establishing that the feasible region of the former is contained in the feasible region of the latter, i.e., $\mathcal{R}$MISOCP $\subset$ $\mathcal{R}$MIQP. This implies that OFV($\mathcal{R}$MISOCP) $\geq$ OFV($\mathcal{R}$MIQP).
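The second-order cone representation of \eqref{SOCP-C1} can be checked directly: since $s_{jk}, g_{jk} \geq 0$, squaring both sides of $\sqrt{4\beta_{jk}^2+(s_{jk}-g_{jk})^2} \leq s_{jk}+g_{jk}$ gives $4\beta_{jk}^2 \leq 4 s_{jk} g_{jk}$. The sketch below verifies this equivalence in exact integer arithmetic on a small grid (the grid bounds are arbitrary):

```python
# Check: for s, g >= 0,
#   s*g >= beta^2   <=>   sqrt(4*beta^2 + (s-g)^2) <= s + g.
# Both sides are compared in squared (exact integer) form.

def rotated_cone(beta, s, g):
    # squared form of the SOC constraint; valid since s + g >= 0
    return 4 * beta**2 + (s - g)**2 <= (s + g)**2

def bilinear(beta, s, g):
    return s * g >= beta**2

checked = 0
for beta in range(-3, 4):
    for s in range(0, 5):
        for g in range(0, 5):
            assert rotated_cone(beta, s, g) == bilinear(beta, s, g)
            checked += 1
assert checked == 7 * 5 * 5
```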
Therefore, we are able to obtain stronger lower bounds using MISOCP than MIQP. \subsection{Semi-infinite mixed-integer linear program} \label{SIP} An alternative approach to reformulate PRef is via \textit{perspective cuts}, developed by \cite{frangioni2006perspective,frangioni2007sdp}. To apply perspective cuts, we use the reformulation idea first proposed in \cite{frangioni2006perspective}: we introduce a dummy decision matrix $D$ to separate the separable and non-separable parts of the objective function, and we add the constraint $d_{jk} = \beta_{jk}$, where $d_{jk}$ is the $(j,k)$ element of the matrix $D$ and $\beta$ is the decision variable in the optimization problem. Following this approach, {MIQP} can be reformulated as an SIMILP: \begin{subequations} \begin{alignat}{3} \textbf{SIMILP}\quad \min \quad &\text{tr}\left[(I- B-B^\top)\mathcal{X}^{\top}\mathcal{X} + DD^\top Q\right] + \\ & \nonumber \sum_{j=1}^{m} \sum_{(j,k) \in \overrightarrow{E}} \delta_j v_{jk} + \lambda_n \sum_{(j,k) \in \overrightarrow{E}} g_{jk}, \\ & \label{SIP-C0} d_{jk} = {\beta}_{jk} \quad (j,k) \in \overrightarrow{E}, \\ & \label{SIP-C1} v_{jk} \geq 2 \bar{\beta}_{jk}\beta_{jk} - \bar{\beta}_{jk}^2g_{jk} \quad \forall \bar{\beta}_{jk} \in [-M, M] \quad \forall (j,k) \in \overrightarrow{E}, \\ & \label{SIP-C2} \eqref{LN-con1}-\eqref{LN-con5}, \\ & v_{jk} \geq 0, \quad (j,k) \in \overrightarrow{E}. \end{alignat} \end{subequations} The constraints in \eqref{SIP-C1} are known as perspective cuts. Note that there are infinitely many such constraints. Although this problem cannot be solved directly, it lends itself to a delayed cut generation approach whereby a (small) finite subset of the constraints in \eqref{SIP-C1} is kept, the current solution $(\beta^{\star}, g^{\star}, v^{\star})$ of the relaxation is obtained, and all the violated inequalities for the relaxation solution are added with $\bar{\beta}_{jk} = \frac{\beta^{\star}_{jk}}{g^{\star}_{jk}}$ (assuming $\frac{0}{0} = 0$).
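A single separation round for one arc can be sketched as follows (the function name, tolerance, and toy numbers are ours for illustration; the returned pair contains the coefficients of $\beta_{jk}$ and $g_{jk}$ in the generated cut):

```python
# One round of perspective-cut separation: given a relaxation solution
# (beta*, g*, v*), set beta_bar = beta*/g* (with 0/0 = 0) and generate the
# cut  v >= 2*beta_bar*beta - beta_bar^2*g  if it is violated at the point.

def separate_perspective_cut(beta_star, g_star, v_star, tol=1e-6):
    beta_bar = 0.0 if g_star == 0 else beta_star / g_star
    violation = 2 * beta_bar * beta_star - beta_bar**2 * g_star - v_star
    if violation > tol:
        # coefficients of (beta_jk, g_jk) in: v_jk >= c1*beta_jk + c2*g_jk
        return (2 * beta_bar, -beta_bar**2)
    return None

# v* = 0 under-estimates beta*^2/g* = 2.0, so a cut is generated ...
cut = separate_perspective_cut(beta_star=1.0, g_star=0.5, v_star=0.0)
assert cut == (4.0, -4.0)
# ... while v* = beta*^2/g* satisfies all perspective cuts at this point.
assert separate_perspective_cut(1.0, 0.5, 2.0) is None
```

In the actual implementation this check would run inside the solver's cut callback over all arcs of $\overrightarrow{E}$.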
This process is repeated until termination criteria are met. This procedure can be implemented using the cut callback function available by off-the-shelf solvers such as Gurobi or CPLEX. \subsection{Selecting $\delta$} \label{deltavalue} In the MISOCP and SIMILP formulations, one important question is how to identify a valid $\delta$. A natural choice is diag$(\delta) = (\lambda_{\min} - \varepsilon)e$ where $\lambda_{\min}$ is the minimum eigenvalue of $\mathcal{X}^\top\mathcal{X}$, $\varepsilon > 0$ is a sufficiently small number to avoid numerical instability of estimating eigenvalues, and $e$ is a column vector of ones. The issue with this approach is that if $\lambda_{\min} =0$, then $\text{diag}({\delta})$ becomes a trivial 0 matrix. If $\text{diag}(\delta)$ turns out to be a zero matrix, then MISOCP formulation reduces to the big-$M$ formulation. \cite{frangioni2007sdp} present an effective approach for obtaining a valid $\delta$ by solving the following semidefinite program (SDP) \begin{subequations} \begin{alignat}{3} \label{delta} \max \left\{\sum_{i \in V} \delta_i \,|\, \mathcal{X}^\top \mathcal{X} - \diag(\delta) \succeq 0, \delta_i \geq 0\right\}. \end{alignat} \end{subequations} This formulation can attain a non-zero $D_{\delta}$ even if $\lambda_{\min}=0$. Numerical results by \cite{frangioni2007sdp} show that this method compares favorably with the minimum eigenvalue approach. \cite{zheng2014improving} propose an SDP approach, which obtains $D_{\delta}$ such that the continuous relaxation of {MISOCP} \eqref{eq:misocp} is as tight as possible. Similar to \cite{dong2015regularization}, our formulation does not require adding a Tikhonov regularization. In this case, PRef is effective when $\mathcal{X}^\top\mathcal{X}$ is sufficiently diagonally dominant. When $n \geq m$ and each row of $\mathcal{X}$ is independent, then $\mathcal{X}^\top\mathcal{X}$ is guaranteed to be a positive semi-definite matrix \citep{dong2015regularization}. 
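The simple minimum-eigenvalue choice $\text{diag}(\delta) = (\lambda_{\min}-\varepsilon)e$ described above can be illustrated on a $2\times 2$ Gram matrix, where the eigenvalues have a closed form and positive semi-definiteness reduces to nonnegative trace and determinant (the matrix $A$ below is an illustrative stand-in for $\mathcal{X}^\top\mathcal{X}$):

```python
# Sketch of the minimum-eigenvalue choice of delta for a 2x2 Gram matrix.
import math

def lambda_min_2x2(A):
    t = A[0][0] + A[1][1]                        # trace
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # determinant
    return (t - math.sqrt(t * t - 4 * d)) / 2

def is_psd_2x2(A, tol=1e-9):
    # a symmetric 2x2 matrix is PSD iff trace >= 0 and determinant >= 0
    t = A[0][0] + A[1][1]
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return t >= -tol and d >= -tol

A = [[4.0, 1.0], [1.0, 3.0]]                     # stand-in for X^T X
eps = 1e-6
delta = lambda_min_2x2(A) - eps                  # same value in every coordinate
shifted = [[A[0][0] - delta, A[0][1]],
           [A[1][0], A[1][1] - delta]]
assert delta > 0                                 # non-trivial only if lambda_min > 0
assert is_psd_2x2(shifted)                       # X^T X - diag(delta) is PSD
```

As noted above, this construction degenerates when $\lambda_{\min}=0$, which is what motivates the SDP \eqref{delta}.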
On the other hand, when $n < m$, $\mathcal{X}^\top\mathcal{X}$ is not full-rank. Therefore, a Tikhonov regularization term should be added with a sufficiently large $\mu$ to make $\mathcal{X}^\top\mathcal{X} + \mu I \succ 0$ \citep{dong2015regularization} in order to benefit from the strengthening provided by PRef. \section{Experiments} \label{Sec: Computational} In this section, we report the results of our numerical experiments that compare different formulations and evaluate the effect of different tuning parameters and estimation strategies. Our experiments are performed on a cluster operating on UNIX with Intel Xeon E5-2640v4 2.4GHz.\ All formulations are implemented in the Python programming language. Gurobi 8.1 is used as the solver. Unless otherwise stated, a time limit of $50m$ (in seconds), where $m$ denotes the number of nodes, and an MIQP relative optimality gap of $0.01$ are imposed across all experiments, after which runs are aborted. The \emph{relative} optimality gap is calculated by RGAP$:=\frac{UB(X)-LB(X)}{UB(X)}$, where UB(X) denotes the objective value associated with the best feasible integer solution (incumbent) and LB(X) represents the best lower bound obtained during the branch-and-bound process for the formulation $X \in \{{MIQP}, {SIMILP}, {MISOCP}\}$. Unless otherwise stated, we assume $\lambda_n=\ln(n)$, which corresponds to the Bayesian information criterion (BIC) score. To select the big-$M$ parameter, $M$, in all formulations we use the proposal of \citet{park2017bayesian}. Specifically, given $\lambda_n$, we solve each problem without cycle prevention constraints and obtain $\beta^R$. We then use $M = 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^R_{jk}|$. Although this value is not guaranteed to be a valid upper bound, the results provided in \cite{park2017bayesian} and \cite{manzour2019integer} computationally confirm that this approach gives a large enough value of $M$.
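The two bookkeeping computations just described can be sketched as small helpers (the $\beta^R$ values below are made-up placeholders, not coefficients from the experiments):

```python
# Sketch: relative optimality gap RGAP = (UB - LB)/UB, and the heuristic
# big-M value M = 2 * max |beta^R_jk| from an unconstrained fit beta^R.

def rgap(ub, lb):
    return (ub - lb) / ub

def big_m(beta_unconstrained):
    # beta_unconstrained maps arcs (j, k) to coefficient estimates beta^R_jk
    return 2.0 * max(abs(b) for b in beta_unconstrained.values())

beta_R = {(1, 2): 0.8, (2, 3): -1.1, (1, 3): 0.05}   # hypothetical beta^R
M = big_m(beta_R)
assert M == 2.2
assert abs(rgap(100.0, 95.0) - 0.05) < 1e-12
```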
The goals of our computational study are twofold. First, we compare the various mathematical formulations to determine which gives us the best performance in Subsection~\ref{sec:synth-data}, compare the sensitivity to the model parameters in Subsection~\ref{lambda}, and the choice of the regularization term in Subsection~\ref{sec:compl2}. Second, in Subsection~\ref{sec:compearly} we use the best-performing formulation to investigate the implications of the early stopping condition on the quality of the solution with respect to the true graph. To be able to perform such a study, we use synthetic data so that the true graph is available. We use the package \texttt{pcalg} in \texttt{R} to generate random graphs. First, we create a DAG by \texttt{randomDAG} function and assign random arc weights (i.e., $\beta$) from a uniform distribution, $\mathcal{U}[0.1, 1]$. Next, the resulting DAG and random coefficients are fed into the \texttt{rmvDAG} function to generate multivariate data based on linear SEMs (columns of matrix $\mathcal X$) with the standard normal error distribution. We consider $m\in\{10,20,30,40\}$ nodes and $n=100$ samples. The average outgoing degree of each node, denoted by $d$, is set to two. We generate 10 random Erd\H{o}s-R\'enyi graphs for each setting ($m$, $n$, $d$). We observe that in our instances, the minimum eigenvalue of $\mathcal{X}^\top \mathcal{X}$ across all instances is 3.26 and the maximum eigenvalue is 14.21. Two types of problem instances are considered: (i) a set of instances with known moral graph corresponding to the true DAG; (ii) a set of instances with a complete undirected graph, i.e., assuming no prior knowledge. We refer to the first class of instances as \textit{moral} instances and to the second class as \textit{complete} instances. The observational data, $\mathcal{X}$, for both classes of instances are the same. 
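The generation pipeline above uses the R functions \texttt{randomDAG} and \texttt{rmvDAG}; a simplified Python stand-in (our own sketch, not the code used in the experiments) illustrates the same two steps, a random DAG with arcs from lower to higher index and weights in $\mathcal{U}[0.1,1]$, followed by linear-SEM sampling with standard normal noise:

```python
# Sketch mimicking randomDAG/rmvDAG: nodes 0..m-1 in topological order.
import random

def random_dag(m, d, rng):
    p = d / (m - 1)                      # arc probability for avg out-degree d
    return {(j, k): rng.uniform(0.1, 1.0)
            for j in range(m) for k in range(j + 1, m) if rng.random() < p}

def rmv_dag(n, m, weights, rng):
    data = []
    for _ in range(n):
        x = [0.0] * m
        for k in range(m):               # generate in topological order
            x[k] = sum(w * x[j] for (j, kk), w in weights.items() if kk == k)
            x[k] += rng.gauss(0.0, 1.0)  # standard normal error
        data.append(x)
    return data

rng = random.Random(0)
W = random_dag(10, 2, rng)
X = rmv_dag(100, 10, W, rng)
assert all(j < k for (j, k) in W)        # arcs respect a topological order
assert len(X) == 100 and all(len(row) == 10 for row in X)
assert all(0.1 <= w <= 1.0 for w in W.values())
```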
The function \texttt{moralize(graph)} in the \texttt{pcalg} \texttt{R}-package is used to generate the moral graph from the true DAG. Although the moral graph can be consistently estimated from data using penalized estimation procedures with polynomial complexity \citep[e.g.,][]{loh2014high}, the quality of the moral graph affects all optimization models. Therefore, we use the true moral graph in our experiments. \subsection{Comparison of Mathematical Formulations} \label{sec:synth-data} We use the following MIQP-based metrics to measure the quality of a solution: relative optimality gap (RGAP), computation time in seconds (Time), Upper Bound (UB), Lower Bound (LB), objective function value (OFV) of the initial continuous relaxation, and the number of explored nodes in the branch-and-bound tree ($\#$ BB). An in-depth analysis comparing the existing mathematical formulations that rely on linear encodings of the constraints in \eqref{CP-con1} for MIQP formulations is conducted by \cite{manzour2019integer}. The authors conclude that the {MIQP+LN} formulation outperforms the other MIQP formulations, and the promising performance of MIQP+LN can be attributed to its size: (1) {MIQP+LN} has fewer binary variables and constraints than {MIQP+TO}, and (2) {MIQP+LN} is a compact (polynomial-sized) formulation, in contrast to {MIQP+CP}, which has an exponential number of constraints. Therefore, in this paper, we analyze the formulations based on the convex encodings of the constraints in \eqref{CP-con1}. \subsubsection{Comparison of MISOCP formulations} \label{sec:MISOCP-experiments} We next experiment with MISOCP formulations. For the set of constraints in \eqref{CP-con2}, we use the LN, TO, and CP constraints discussed in Section \ref{lit1}, resulting in three formulations denoted as {MISOCP+LN}, {MISOCP+TO}, and {MISOCP+CP}, respectively. The {MISOCP+TO} formulation fails to find a feasible solution for instances with 30 and 40 nodes; see Table \ref{Details}.
For moral instances, the optimality gaps for {MISOCP+TO} are 0.000 and 0.021 for instances with 10 and 20 nodes, respectively; for complete instances, the optimality gaps for the {MISOCP+TO} formulation are 0.009 and 0.272 for instances with 10 and 20 nodes, respectively. Moreover, Table \ref{Details} illustrates that {MISOCP+LN} performs better than {MISOCP+TO} even for small instances (i.e., 10 and 20 nodes). \begin{table}[t] \caption{Optimality gaps for {MISOCP+TO} and {MISOCP+LN} formulations} \label{Details} \centering \footnotesize{ \begin{tabular}{l|l|ll|l|l} \hline & \multicolumn{2}{c}{Moral} & & \multicolumn{2}{c}{Complete} \\ \hline $m$ & {MISOCP+TO} & {MISOCP+LN} & & {MISOCP+TO} & {MISOCP+LN} \\ \hline 10 & 0.000 & 0.000 & & 0.009 & 0.008 \\ 20 & 0.021 & 0.006 & & 0.272 & 0.195 \\ 30 & - & 0.010 & & - & 0.195 \\ 40 & - & 0.042 & & - & 0.436 \\ \hline \end{tabular} \\ ``-" denotes that no feasible solution, i.e., UB, is obtained within the time limit, so the optimality gap cannot be computed. } \end{table} For {MISOCP+CP}, instead of incorporating all constraints given by \eqref{CE}, we begin with no constraints of type \eqref{CE}. Given an integer solution with cycles, we detect a cycle and impose a new cycle prevention constraint to remove the detected cycle. Depth First Search (DFS) can detect a cycle in a directed graph with complexity $O(|V|+|E|)$. The Gurobi \texttt{LazyCallback} function is used, which allows adding cycle prevention constraints in the branch-and-bound algorithm whenever an integer solution with cycles is found. The same approach is used by \cite{park2017bayesian} to solve the corresponding MIQP+CP. Note that the Gurobi solver follows a branch-and-cut implementation and adds many general-purpose and special-purpose cutting planes. Figures \ref{Figurea: MISOCP} and \ref{Figureb: MISOCP} show that {MISOCP+LN} outperforms {MISOCP+CP} in terms of relative optimality gap and computational time.
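The DFS-based cycle detection invoked inside the lazy-constraint callback can be sketched as follows (our own minimal version; it returns the arcs of one directed cycle, which would then be cut off via a new constraint of type \eqref{CE}):

```python
# Sketch: find one directed cycle in the arcs selected by an integer
# solution, or return None if the selected graph is a DAG.

def find_cycle(nodes, arcs):
    succ = {v: [] for v in nodes}
    for j, k in arcs:
        succ[j].append(k)
    color = {v: 0 for v in nodes}        # 0 = unvisited, 1 = on stack, 2 = done
    stack = []

    def dfs(v):
        color[v] = 1
        stack.append(v)
        for w in succ[v]:
            if color[w] == 1:            # back arc closes a cycle
                i = stack.index(w)
                cyc = stack[i:] + [w]
                return list(zip(cyc, cyc[1:]))
            if color[w] == 0:
                found = dfs(w)
                if found:
                    return found
        stack.pop()
        color[v] = 2
        return None

    for v in nodes:
        if color[v] == 0:
            found = dfs(v)
            if found:
                return found
    return None

assert find_cycle([1, 2, 3], [(1, 2), (2, 3)]) is None
cycle = find_cycle([1, 2, 3], [(1, 2), (2, 3), (3, 1)])
assert cycle == [(1, 2), (2, 3), (3, 1)]
```

Each DFS pass runs in $O(|V|+|E|)$ time, matching the complexity stated above.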
In addition, {MISOCP+LN} attains better upper and lower bounds than {MISOCP+CP} (see, Figures \ref{Figurec: MISOCP} and \ref{Figured: MISOCP}). {MISOCP+CP} requires the solution of a second-order cone program (SOCP) after each cut, which reduces its computational efficiency and results in higher optimality gaps than {MISOCP+LN}. {MISOCP+TO} requires many binary variables which makes the problem very inefficient when the network becomes denser and larger as shown in Table \ref{Details}. Therefore, we do not illustrate the {MISOCP+TO} results in Figure \ref{Figure: MIQP}. \begin{figure*}[t!] \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MISOCPOptimalityGAP_MIPGAP_} \caption{RGAPs} \label{Figurea: MISOCP} \end{subfigure}% ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MISOCPTime_seconds_} \caption{Time (in seconds)} \label{Figureb: MISOCP} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MISOCPUpperBound} \caption{Best upper bounds} \label{Figurec: MISOCP} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MISOCPLowerBound} \caption{Best lower bounds} \label{Figured: MISOCP} \end{subfigure} ~ \caption{Optimization-based measures for MISOCP+LN (green, left bar) and MISOCP+CP (yellow, right bar) formulations for $n=100$ and $\lambda_n=\ln(n)$.} \label{Figure: MIQP} \end{figure*} \begin{figure*}[t!] 
\begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{SIMILPOptimalityGAP_MIPGAP_} \caption{RGAPs} \label{Figurea: SIMILP} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{SIMILPTime_seconds_} \caption{Time (in seconds)} \label{Figureb: SIMILP} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{SIMILPUpperBound} \caption{Best upper bounds} \label{Figurec: SIMILP} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{SIMILPLowerBound} \caption{Best lower bounds} \label{Figured: SIMILP} \end{subfigure} \caption{Optimization-based measures for \textbf{MISOCP+LN}, \textbf{MIQP+LN}, and \textbf{SIMILP+LN} formulations for $n=100$ and $\lambda_n=\ln(n)$.} \label{Figure: SIMLP} \end{figure*} \begin{figure*}[t!] \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MIQPandMISOCPOptimalityGAP_MIPGAP_} \caption{RGAPs} \label{Figurea: Best} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MIQPandMISOCPTime_seconds_} \caption{Time (in seconds)} \label{Figureb: Best} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MIQPandMISOCPUpperBound} \caption{Best upper bounds} \label{Figurec: Best} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{MIQPandMISOCPLowerBound} \caption{Best lower bounds} \label{Figured: Best} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{BB} \caption{Number of Branch and Bound nodes} \label{Figuree: Best} \end{subfigure} ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.22]{OFV} \caption{Continuous relaxation objective function} \label{Figuref: Best} \end{subfigure} \caption{Optimization-based measures for \textbf{MISOCP+LN} and \textbf{MIQP+LN} formulations for
$n=100$ and $\lambda_n=\ln(n)$.} \label{Figure: MISOCP-MIQP} \end{figure*} \subsubsection{Comparison of MISOCP versus SIMILP} \label{sec:MISOCP ver MIMILP-experiments} Our computational experiments show that the SIMILP formulation generally performs poorly when compared to {MISOCP+LN} and {MIQP+LN} in terms of optimality gap, upper bound, and computational time. We report the results for the {SIMILP+LN}, {MISOCP+LN}, and {MIQP+LN} formulations in Figure \ref{Figure: SIMLP}. We only consider the LN formulation because it is the best performing model among the alternatives for both the MISOCP and MIQP formulations. Figures \ref{Figurea: SIMILP} and \ref{Figureb: SIMILP} show the relative optimality gaps and computational times for these three formulations. Figures \ref{Figurec: SIMILP} and \ref{Figured: SIMILP} demonstrate that {SIMILP+LN} attains lower bounds that are comparable with those of the other two formulations. In particular, for complete instances with a large number of nodes, {SIMILP+LN} attains better lower bounds than {MIQP+LN}. Nonetheless, {SIMILP+LN} fails to obtain good upper bounds. Therefore, the relative optimality gap is considerably larger for {SIMILP+LN}. The poor performance of {SIMILP+LN} might be because state-of-the-art optimization packages (e.g., Gurobi, CPLEX) use many heuristics to obtain a good feasible solution (i.e., upper bound) for a compact formulation. In contrast, SIMILP is not a compact formulation, and we build it gradually by adding violated constraints iteratively. Therefore, a feasible solution to the original formulation is not available while solving the relaxations with a subset of the constraints \eqref{SIP-C1}. Moreover, the optimization solvers capable of solving MISOCP formulations have witnessed noticeable improvements due to theoretical developments in this field. In particular, Gurobi reports 20\% and 38\% improvements in solution time for versions 8 and 8.1, respectively.
In addition, Gurobi v8.1 reports over four times faster solution times than CPLEX for solving MISOCP on their benchmark instances. \subsubsection{Comparison of MISOCP versus MIQP formulations} \label{sec:MISOCP ver MIQP experiments} In this section, we demonstrate the benefit of using the second-order conic formulation {MISOCP+LN} instead of the linear big-$M$ formulation {MIQP+LN}. As before, we only consider the LN formulation for this purpose. Figures \ref{Figurea: Best} and \ref{Figureb: Best} show that {MISOCP+LN} performs better than MIQP+LN in terms of the average relative optimality gap across all number of nodes $m \in \{10,20,30,40\}$. The only exception is $m=40$ for moral instances, for which {MIQP+LN} performs better than {MISOCP+LN}. Nonetheless, we observe that {MISOCP+LN} clearly outperforms {MIQP+LN} for complete instances which are more difficult to solve. Figures \ref{Figurec: Best} and \ref{Figured: Best} show the performance of both formulations in terms of the resulting upper and lower bounds on the objective function. We observe that {MISOCP+LN} attains better lower bounds especially for complete instances. However, {MISOCP+LN} cannot always obtain a better upper bound. In other words, {MISOCP+LN} is more effective in improving the lower bound instead of the upper bound as expected. Figures \ref{Figuree: Best} and \ref{Figuref: Best} show that {MISOCP+LN} uses fewer branch-and-bound nodes and achieves better continuous relaxation values than {MIQP+LN}. \subsection{Analyzing the Choices of $\lambda_n$ and $M$} \label{lambda} We now experiment on different values for $\lambda_n$ and $M$ to assess the effects of these parameters on the performance of {MISOCP+LN} and {MIQP+LN}. First, we change $\lambda_n \in \{\ln{(n)}, 2\ln(n), 4\ln(n)\}$ while keeping the value of $M$ the same (i.e., $M=2\underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^\star_{jk}|$). 
Table \ref{Table: lambda} shows that as $\lambda_n$ increases, {MISOCP+LN} consistently performs better than {MIQP+LN} in terms of the relative optimality gap, computational time, the number of branch-and-bound nodes, and continuous relaxation objective function value. Indeed, the difference becomes even more pronounced for more difficult cases (i.e., complete instances). For instance, for $\lambda_n = 4 \ln(n)=18.4$, the optimality gap reduces from 0.465 to 0.374, an improvement of over 24\%. \begin{table}[t!] \fontsize{60}{30}\selectfont \caption{Computational results for different values of $\lambda_n = t \ln(n)$ for $t \in \{1,2,4\}$} \resizebox{1\textwidth}{!}{ \begin{adjustbox}{}{} \begin{tabular}{llllllllllllllll|lllllllllllllll} \\ \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} &&& \multicolumn{11}{c}{Moral} &&& \multicolumn{11}{c}{Complete} \\ \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} & \multicolumn{2}{c}{Instances} & & \multicolumn{2}{c}{RGAP} & & \multicolumn{2}{c}{Time} & & \multicolumn{2}{c}{$\#$ nodes} & & \multicolumn{2}{c}{Relaxation OFV} & & \multicolumn{2}{c}{RGAP} & & \multicolumn{2}{c}{Time} & & \multicolumn{2}{c}{$\#$ nodes} & & \multicolumn{2}{c}{Relaxation OFV} \\ & $m$ &$\lambda_n$ & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP \\ \Xhline{2\arrayrulewidth} &10 & 4.6 & & * & * & & 3 & 2 & & 1306 & 3715 & & 738.7 & 664.9 & & * & * & & 65 & 74 & & 38850 & 114433 & & 724.4 & 629.3 \\ &10 & 9.2 & & * & * & & 4 & 3 & & 1116 & 2936 & & 784.6 & 693.5 & & * & * & & 31 & 39 & & 15736 & 55543 & & 772.5 & 662.2 \\ &10 & 18.4 & & * & * & & 3 & 2 & & 1269 & 2457 & & 857.0 & 747.5 & & * & * & & 26 & 29 & & 18223 & 41197 & & 844.5 & 720.2 \\ &20 & 4.6 & & * & * & & 69 & 51 & & 46513 & 76261 & & 1474.2 & 1325.8 & &\textbf{0.195} & 0.275 & & 1000 & 1000 & & 101509 & 238765 & & 1404.9 & 1144.5 \\ &20 & 9.2 & & * & * & & 26 &
27 & & 10695 & 31458 & & 1589.6 & 1406.8 & & \textbf{0.152} & 0.250 & & 1000 & 1000 & & 152206 & 274514 & & 1526.9 & 1238.6 \\ &20 & 18.4 & & * & * & & 24 & 36 & & 9574 & 33788 & & 1763.7 & 1552.7 & &\textbf{0.113 }& 0.208 & & 944 & 1000 & & 159789 & 277687 & & 1697.1 & 1395.0 \\ &30 & 4.6 & & \textbf{0.010} & 0.011 & & 378 & 527 & & 121358 & 514979 & & 2230.1 & 2037.7 & & \textbf{0.298} & 0.441 & & 1500 & 1500 & & 38474 & 64240 & & 2024.0 & 1569.7 \\ &30 & 9.2 & & * & * & & 104 & 291 & & 33371 & 248190 & & 2392.4 & 2168.5 & & \textbf{0.239} & 0.395 & & 1500 & 1500 & & 59034 & 71475 & & 2217.5 & 1741.5 \\ &30 & 18.4 & & * & * & & 48 & 74 & & 15649 & 57909 & & 2608.3 & 2383.8 & & \textbf{0.215} & 0.318 & & 1500 & 1500 & & 74952 & 96586 & & 2449.2 & 2006.9 \\ &40 & 4.6 & & 0.042 & \textbf{0.037 }& & 1551 & 1615 & & 664496 & 2565247 & & {2979.3} & 2748.6 & & \textbf{0.436} & 0.545 & & 2000 & 2000 & & 23083 & 49050 & & 2582.0 & 1946.3 \\ &40 & 9.2 & & \textbf{0.024} & 0.036 & & 1125 & 1336 & & 353256 & 1347702 & & 3200.7 & 2923.5 & & \textbf{0.397} & 0.473 & & 2000 & 2000 & & 29279 & 73917 & & 2869.9 & 2216.9 \\ &40 & 18.4 & & \textbf{0.024} & 0.035 & & 1099 & 1375 & & 434648 & 1137666 & & 3521.8 & 3225.4 & & \textbf{0.374} & 0.465 & & 2000 & 2000 & & 31298 & 60697 & & 3240.1 & 2633.1 \\ \Xhline{2\arrayrulewidth} \multicolumn{14}{l}{\huge{* indicates that the problem is solved to the optimality tolerance.}} && \multicolumn{11}{c}{} \\ \multicolumn{14}{l}{\huge{Better RGAPs are in bold.}} && \multicolumn{11}{c}{} \end{tabular} \end{adjustbox}} \vskip 2ex \label{Table: lambda} \end{table} \begin{table}[t!] 
\fontsize{60}{30}\selectfont \caption{Computational results for different values of $\gamma$} \label{Table: IP} \resizebox{1\textwidth}{!}{ \begin{adjustbox}{}{} \begin{tabular}{llllllllllllllll|lllllllllllllll} \\ \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} &&& \multicolumn{11}{c}{Moral} &&& \multicolumn{11}{c}{Complete} \\ \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} & \multicolumn{2}{c}{Instances} & & \multicolumn{2}{c}{RGAP} & & \multicolumn{2}{c}{Time} & & \multicolumn{2}{c}{$\#$ nodes} & & \multicolumn{2}{c}{Relaxation OFV} & & \multicolumn{2}{c}{RGAP} & & \multicolumn{2}{c}{Time} & & \multicolumn{2}{c}{$\#$ nodes} & & \multicolumn{2}{c}{Relaxation OFV} \\ & $m$ &$\gamma$ & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP \\ \Xhline{2\arrayrulewidth} &10 & 2 & & * & * & & 3 & 2 & & 1306 & 3715 & & 738.7 & 664.9 & & * & * & & 65 & 74 & & 38850 & 114433 & & 724.4 & 629.3 \\ &10 & 5 & & * & * & & 5 & 2 & & 1433 & 3026 & & 717.9 & 647.1 & & * & * & & 81 & 82 & & 42675 & 130112 & & 705.1 & 607.8 \\ &10 & 10 & & * & * & & 5 & 2 & & 1523 & 2564 & & 712.5 & 641.1 & & * & * & & 74 & 100 & & 35576 & 174085 & & 699.8 & 600.3 \\ &20 & 2 & & * & * & & 69 & 51 & & 46513 & 76261 & & 1474.2 & 1325.8 & & \textbf{0.195} & 0.275 & & 1000 & 1000 & & 101509 & 238765 & & 1404.9 & 1144.5 \\ &20 & 5 & & * & * & & 103 & 156 & & 65951 & 209595 & & 1438.2 & 1274.2 & & \textbf{0.211} & 0.308 & & 1000 & 1000 & & 97940 & 225050 & & 1375.3 & 1080.9 \\ &20 & 10 & & * & * & & 215 & 207 & & 150250 & 349335 & & 1427.7 & 1256.6 & & \textbf{0.230} & 0.310 & & 1000 & 1000 & & 90864 & 257998 & & 1366.3 & 1058.2 \\ &30 & 2 & & \textbf{0.010} & 0.011 & & 378 & 527 & & 121358 & 514979 & & 2230.1 & 2037.7 & & \textbf{0.298} & 0.441 & & 1500 & 1500 & & 38474 & 64240 & & 2024.0 & 1569.7 \\ &30 & 5 & & \textbf{0.011} & 0.014 & & 571 & 620 & & 164852 & 527847 & & 2173.9 & 1950.3 & & \textbf{0.336} & 
0.474 & & 1501 & 1500 & & 33120 & 64339 & & 1969.4 & 1448.4 \\ &30 & 10 & & 0.024 & \textbf{0.014} & & 630 & 638 & & 202635 & 585234 & & 2156.5 & 1919.6 & & \textbf{0.349} & 0.480 & & 1500 & 1500 & & 30579 & 77100 & & 1951.2 & 1404.0 \\ &40 & 2 & & 0.042 & \textbf{0.037} & & 1551 & 1615 & & 664496 & 2565247 & & 2979.3 & 2748.6 & & \textbf{0.436} & 0.545 & & 2000 & 2000 & & 23083 & 49050 & & 2582.0 & 1946.3 \\ &40 & 5 & & \textbf{0.045} & 0.047 & & 1643 & 1634 & & 638323 & 1347868 & & 2895.6 & 2635.0 & & \textbf{0.579} & 0.580 & & 2000 & 2000 & & 12076 & 30858 & & 2488.0 & 1751.7 \\ &40 & 10 & & \textbf{0.056} & 0.057 & & 1639 & 1632 & & 599281 & 1584187 & & 2869.2 & 2595.6 & & \textbf{0.585} & 0.594 & & 2000 & 2000 & & 11847 & 30222 & & 2456.1 & 1679.6 \\ \Xhline{2\arrayrulewidth} \multicolumn{14}{l}{\huge{* indicates that the problem is solved to the optimality tolerance.}} && \multicolumn{11}{c}{} \\ \multicolumn{14}{l}{\huge{Better RGAPs are in bold.}} && \multicolumn{11}{c}{} \end{tabular} \end{adjustbox}} \vskip 2ex \label{Table: M} \end{table} Finally, we study the influence of the big-$M$ parameter. Instead of a coefficient $\gamma=2$ as in \cite{park2017bayesian}, we experiment with $M = \gamma \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^{R}_{jk}|$ for $\gamma \in \{2, 5, 10\}$ in Table \ref{Table: M}, where $\beta^{R}_{jk}$ denotes the optimal solution of each optimization problem without the cycle-prevention constraints. The larger the big-$M$ parameter, the worse the effectiveness of both models. However, {MISOCP+LN} tightens the formulation using the conic constraints, whereas {MIQP+LN} has no means to tighten the formulation other than the big-$M$ constraints, which have a poor relaxation. For $M > 2 \underset{(j,k) \in \overrightarrow{E}}{\max} \, |\beta^{R}_{jk}|$, {MISOCP+LN} outperforms {MIQP+LN} in all measures, in most cases.
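The weakness of large big-$M$ values can be made concrete with a small back-of-the-envelope sketch (our own illustration with made-up numbers; `relaxed_penalty` is not part of any formulation in the paper): in the continuous relaxation of an indicator constraint $|\beta_{jk}| \le M z_{jk}$, the binary $z_{jk}$ may shrink to the fractional value $|\beta_{jk}|/M$, so the relaxed regularization cost $\lambda_n z_{jk}$ decays toward zero as $M$ grows.

```python
# Toy illustration (ours, not the paper's formulation): an arc indicator z_jk
# must satisfy |beta_jk| <= M * z_jk with z_jk in {0,1}. In the LP relaxation
# z_jk may take any value in [0,1], so the smallest feasible value is
# |beta_jk| / M, and the relaxed penalty lambda_n * z_jk weakens as M grows.

def relaxed_penalty(beta, M, lam):
    """Smallest lambda_n * z attainable in the relaxation for a fixed beta."""
    z_min = min(abs(beta) / M, 1.0)  # fractional z permitted by |beta| <= M*z
    return lam * z_min

beta, lam = 1.5, 4.6  # illustrative coefficient magnitude and lambda_n = ln(100)
for gamma in (2, 5, 10):
    M = gamma * abs(beta)
    print(gamma, relaxed_penalty(beta, M, lam))  # penalty decays as 1/gamma
```

This is one intuition for why the conic constraints of {MISOCP+LN}, which do not depend on $M$, retain their tightening effect while the big-$M$ relaxation deteriorates.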
\subsection{The Effect of Tikhonov Regularization} \label{sec:compl2} In this subsection, we consider the effect of adding a Tikhonov regularization term to the objective (see Remark \ref{rem:L2}) by considering $\mu \in \{0, \ln(n), 2\ln(n)\}$ while keeping the values of $\lambda_n = \ln(n)$ and $M$ the same as before. Table \ref{Table: mu} demonstrates that for all instances with $\mu>0$, {MISOCP+LN} outperforms {MIQP+LN}. For instance, for $m=40$ and $\mu=9.2$, {MISOCP+LN} improves the optimality gap from 0.445 to 0.366, an improvement of over 21\%. The reason for this improvement is that $\mu>0$ makes the matrix more diagonally dominant; therefore, it makes the conic constraints more effective in tightening the formulation and obtaining a better optimality gap. \begin{table}[t!] \fontsize{60}{30}\selectfont \caption{Computational results for different values of $\mu$} \resizebox{1\textwidth}{!}{ \begin{adjustbox}{}{} \begin{tabular}{llllllllllllllll|lllllllllllllll} \\ \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} &&& \multicolumn{11}{c}{Moral} &&& \multicolumn{11}{c}{Complete} \\ \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} & \multicolumn{2}{c}{Instances} & & \multicolumn{2}{c}{RGAP} & & \multicolumn{2}{c}{Time} & & \multicolumn{2}{c}{$\#$ nodes} & & \multicolumn{2}{c}{Relaxation OFV} & & \multicolumn{2}{c}{RGAP} & & \multicolumn{2}{c}{Time} & & \multicolumn{2}{c}{$\#$ nodes} & & \multicolumn{2}{c}{Relaxation OFV} \\ & $m$ &$\mu$ & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP & & MISOCP & MIQP \\ \Xhline{2\arrayrulewidth} &10 & 0 & & * & * & & 3 & 2 & & 1306 & 3715 & & 738.7 & 664.9 & & * & * & & 65 & 74 & & 38850 & 114433 & & 724.4 & 629.3 \\ &10 & 4.6 & & * & * & & 4 & 2 & & 1043 & 2758 & & 802.0 & 708.5 & & * & * & & 69 & 72 & & 38778 & 119825 & & 789.3 & 675.7 \\ &10 & 9.2 & & * & * & & 4 & 2 & & 1067 & 2231 & & 858.0 & 748.1 & & * & * & & 72 & 74 &
& 36326 & 114383 & & 843.2 & 712.3 \\ &20 & 0 & & * & * & & 69 & 51 & & 46513 & 76261 & & 1474.2 & 1325.8 & & \textbf{0.195} & 0.2752 & & 1000 & 1000 & & 101509 & 238765 & & 1404.9 & 1144.5 \\ &20 & 4.6 & & * & * & & 45 & 45 & & 15111 & 55302 & & 1604.1 & 1426.5 & & \textbf{0.1666} & 0.2416 & & 1000 & 1000 & & 102467 & 249490 & & 1551.7 & 1267.1 \\ &20 & 9.2 & & * & * & & 43 & 55 & & 15384 & 62297 & & 1716.8 & 1515.7 & &\textbf{0.1422} & 0.2228 & & 1000 & 1000 & & 94360 & 258194 & & 1668.3 & 1355.1 \\ &30 & 0 & & \textbf{0.010} & 0.011 & & 378 & 527 & & 121358 & 514979 & & 2230.1 & 2037.7 & & \textbf{0.298} & 0.4408 & & 1500 & 1500 & & 38474 & 64240 & & 2024.0 & 1569.7 \\ &30 & 4.6 & & \textbf{0.008} & 0.011 & & 310 & 392 & & 76668 & 358544 & & 2432.5 & 2187.7 & & \textbf{0.2368} & 0.387 & & 1500 & 1500 & & 45473 & 69258 & & 2286.4 & 1788.5 \\ &30 & 9.2 & & \textbf{0.009} & 0.010 & & 67 & 377 & & 12410 & 320632 & & 2612.6 & 2311.4 & & \textbf{0.2092} & 0.3666 & & 1500 & 1500 & & 41241 & 68661 & & 2484.3 & 1915.7 \\ &40 & 0 & & 0.042 & \textbf{0.037} & & 1551 & 1615 & & 664496 & 2565247 & & 2979.3 & 2748.6 & & \textbf{0.4364} & 0.5452 & & 2000 & 2000 & & 23083 & 49050 & & 2582.0 & 1946.3 \\ &40 & 4.6 & & \textbf{0.027} & 0.029 & & 1331 & 1620 & & 422654 & 1303301 & & 3281.6 & 2972.8 & & \textbf{0.3538} & 0.4708 & & 2000 & 2000 & & 13209 & 30995 & & 2985.4 & 2261.3 \\ &40 & 9.2 & & \textbf{0.020} & 0.028 & & 870 & 1507 & & 239214 & 1762210 & & 3575.4 & 3165.3 & & \textbf{0.3668} & 0.4454 & & 2000 & 2000 & & 13884 & 54638 & & 3321.7 & 2468.7 \\ \Xhline{2\arrayrulewidth} \multicolumn{14}{l}{\huge{* indicates that the problem is solved to the optimality tolerance.}} && \multicolumn{11}{c}{} \\ \multicolumn{14}{l}{\huge{Better RGAPs are in bold.}} && \multicolumn{11}{c}{} \end{tabular} \end{adjustbox}} \vskip 2ex \label{Table: mu} \end{table} \subsection{Practical Implications of Early Stopping}\label{sec:compearly} In this subsection, we evaluate the quality of the 
estimated DAGs obtained from {MISOCP+LN} by comparing them with the ground truth DAG. To this end, we use the average structural Hamming distance $(\mathrm{SHD})$, which counts the number of arc differences (additions, deletions, or reversals) required to transform the estimated DAG into the true DAG. Since Gurobi sets a minimum relative gap RGAP $=10^{-4}$, the solution obtained within this relative gap is considered optimal. Finally, because the convergence of the branch-and-bound process may be slow in some cases, we set a time limit of 100$m$. To test the quality of the solution obtained with an early stopping criterion, we set the absolute optimality gap parameter as $GAP=\frac{\log(m)}{n}s_m$ and the $\ell_0$ regularization parameter as $\lambda_n=\log m$, as suggested by Proposition \ref{EarlyProp} for achieving a consistent estimate. We compare the resulting suboptimal solution to the truly optimal solution obtained by setting $GAP= UB -LB=0$. Table \ref{Early} shows the numerical results for the average solution time (in seconds) for instances that are solved within the time limit, the number of instances that were not solved within the time limit, the actual absolute optimality gap at termination, the average SHD of the resulting DAGs, and, in parentheses, the standard deviation of the SHD scores, across 10 runs for moral instances. Table \ref{Early} indicates that the average SHD for $GAP=\frac{\log(m)}{n}s_m$ is close to that of the truly optimal solution. Note that a lower GAP does not necessarily lead to a better SHD score; see, e.g., $m=20$. From a computational standpoint, we observe that by using the early stopping criterion, we are able to obtain consistent solutions before reaching the time limit for more instances.
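The SHD metric used in this evaluation can be sketched in a few lines (a minimal version, assuming each DAG is represented as a set of directed `(parent, child)` arcs; the representation and function name are ours, not the paper's):

```python
def shd(estimated, true):
    """Structural Hamming distance between two DAGs given as sets of arcs.

    Counts the additions, deletions and reversals needed to turn the
    estimated DAG into the true one; a reversed arc counts once.
    """
    extra = estimated - true      # arcs to delete (or reverse)
    missing = true - estimated    # arcs to add (or reverse)
    # An arc present in opposite directions in the two graphs is a single
    # reversal, not one deletion plus one addition.
    reversals = {(u, v) for (u, v) in extra if (v, u) in missing}
    return len(extra) + len(missing) - len(reversals)

true_dag = {(1, 2), (2, 3), (1, 3)}
est_dag = {(2, 1), (2, 3)}        # one reversed arc, one missing arc
print(shd(est_dag, true_dag))     # one reversal + one addition = 2
```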
In particular, four instances reach the time limit for GAP=0 before finding the optimal solution, as opposed to only two for early stopping. Note that we only report the average solution time if the algorithm terminates before hitting the time limit, which explains why the average time appears smaller for the optimal setting than for early stopping. Taking into account the time to obtain the best integer solution, the average time for $m=40$ is 1678.485 seconds for GAP=0, whereas it is 954.79 seconds for the early stopping setting. Furthermore, stopping early does not sacrifice the quality of the resulting DAG, as can be seen from the SHD scores. \begin{table}[t!] \centering{ \caption{Structural Hamming distances (SHD) for early stopping with $n= 100, \lambda_n = \log m$, GAP $\leq \tau$ for moral instances. The superscripts $^{i}$ indicate that out of ten runs, $i$ instances reach the time limit of $100m$.} \footnotesize{ \begin{tabular}{ll|ccc|cccc} \hline & & \multicolumn{3}{c|}{$\tau= 0$} & \multicolumn{3}{c}{$\tau=\frac{\log (m)}{n}s_m$} & \\ \hline $m$ & $s_m$ & Time & GAP & SHD (std) & Time & GAP & SHD (std) \\ \hline 10 & 19 & 0.71 & 0.002 & 0 (0) & 0.64 & 0.002 & 0 (0) \\ 20 & 58 & 31.99 & 0.062 & 0.80 (1.23) & 16.84 & 0.165 & 0.55 (1.01) \\ 30 & 109 & $51.41^{2}$ & 0.210 & 1.25 (0.89) & $28.27^{2}$ & 0.557 & 1.29 (0.95) \\ 40 & 138 & $784.85^{4}$ & 0.370 & 0.67 (0.52) & $1547.90^{2}$ & 1.411 & 0.71 (0.49) \\ \hline \end{tabular} \label{Early}} } \end{table} \section{Conclusion} \label{Sec: Conclusion} In this paper, we study the problem of learning an optimal directed acyclic graph (DAG) from continuous observational data, where the causal effect among the random variables is linear. The central problem is a quadratic optimization problem with regularization. We present a mixed-integer second-order conic program ({MISOCP}) which entails a tighter relaxation than existing formulations with linear constraints.
Our results show that {MISOCP} can successfully improve the lower bound and results in a better optimality gap when compared with other formulations based on big-$M$ constraints, especially for dense and large instances. Moreover, we establish an early stopping criterion under which we can terminate branch-and-bound and achieve a solution that is asymptotically optimal. \section*{Acknowledgments} Simge K\"u\c{c}\"ukyavuz and Linchuan Wei were supported, in part, by ONR grant N00014-19-1-2321. Ali Shojaie was supported by NSF grant DMS-1561814 and NIH grant R01GM114029. \bibliographystyle{plain}
\section{Introduction} Neural conversation systems which treat dialogue response generation as a sequence generation task \cite{neural_conversation_model} often produce generic and incoherent responses \cite{shao-etal-2017-generating}. The primary reason for this is that, unlike humans, such systems do not have any access to background knowledge about the topic of conversation. For example, while chatting about movies, we use our background knowledge about the movie in the form of plot details, reviews and comments that we might have read. To enrich such neural conversation systems, some recent works \cite{moghe-etal-2018-towards,wizard_of_wikipedia,zhou-etal-2018-dataset} incorporate external knowledge in the form of documents which are relevant to the current conversation. For example, \cite{moghe-etal-2018-towards} released a dataset containing conversations about movies where every alternate utterance is extracted from a background document about the movie. This background document contains plot details, reviews and Reddit comments about the movie. The focus thus shifts from sequence generation to identifying relevant snippets from the background document and modifying them suitably to form an appropriate response given the current conversational context. Intuitively, any model for this task should exploit semantic, structural and sequential information from the conversation context and the background document. For illustration, consider the chat shown in Figure \ref{dataset_example} from the Holl-E movie conversations dataset \cite{moghe-etal-2018-towards}. In this example, Speaker 1 nudges Speaker 2 to talk about how James's wife was irritated because of his career. The right response to this conversation comes from the line beginning at \textit{``His wife Mae \dots''}. However, to generate this response, it is essential to understand that (i) \textit{His} refers to James from the previous sentence; (ii) \textit{quit boxing} is a contiguous phrase; and (iii) \textit{quit} and \textit{he would stop} mean the same.
We need to exploit (i) \textbf{structural} information, such as the co-reference edge between \textit{His-James}, (ii) the \textbf{sequential} information in \textit{quit boxing}, and (iii) the \textbf{semantic} similarity (or synonymy relation) between \textit{quit} and \textit{he would stop}. \begin{figure} \centering \begin{tikzpicture} \node[draw, text width=7.3cm] at (-6,0) {\textbf{Source Doc:} ... At this point James Braddock (Russel Crowe) was a light heavyweight boxer, who was forced to retired from the ring after breaking his hand in his last fight. \textbf{His wife Mae had prayed for years that he would quit boxing, before becoming permanently injured}. ... \textbf{Conversation:} \\ Speaker 1(N): Yes very true, this is a real rags to riches story. Russell Crowe was excellent as usual. \\ Speaker 2(R): Russell Crowe owns the character of James Bradock, the unlikely hero who makes the most of his second chance. He's a good fighter turned hack. \\ Speaker 1(N): Totally! Oh by the way do you remember his wife ... how she wished he would stop\\ Speaker 2(P): His wife Mae had prayed for years that he would quit boxing, before becoming permanently injured.\\ }; \node[draw] at (-6,-6.2){\includegraphics[scale=0.37]{dep.png}}; \end{tikzpicture} \caption{Sample conversation from the Holl-E Dataset. For simplicity, we show only a few of the edges. The edge in blue corresponds to a co-reference edge, the edges in green are dependency edges and the edge in red is the entity edge. } \label{dataset_example} \end{figure} To capture such multi-faceted information from the document and the conversation context, we propose a new architecture which combines BERT with explicit sequence and structure information. We start with the deep contextualized word representations learnt by BERT which capture distributional semantics.
We then enrich these representations with sequential information by allowing the words to interact with each other by passing them through a bidirectional LSTM, as is the standard practice in many NLP tasks. Lastly, we add explicit structural information in the form of dependency graphs, co-reference graphs, and entity co-occurrence graphs. To allow interactions between words related through such structures, we use GCNs which essentially aggregate information from the neighborhood of a word in the graph. Of course, combining BERT with LSTMs in itself is not new and has been tried in the original work \cite{devlin-etal-2019-bert} for the task of Named Entity Recognition. Similarly, the work in \cite{bastings-etal-2017-graph} combines LSTMs with GCNs for the task of machine translation. To the best of our knowledge, this is the first work which combines BERT with explicit structural information. We investigate several interesting questions in the context of dialogue response generation. For example, (i) Are BERT-based models best suited for this task? (ii) Should BERT representations be enriched with sequential information first or structural information? (iii) Are dependency graph structures more important for this task or entity co-occurrence graphs? (iv) Given the recent claims that BERT captures syntactic information, does it help to explicitly enrich it with syntactic information using GCNs? To systematically investigate such questions, we propose a simple plug-and-play \textit{Semantics-Sequences-Structures} (\textit{SSS}) framework which allows us to combine different semantic representations (GloVe, BERT, ELMo) with different structural priors (dependency graphs, co-reference graphs, \textit{etc.}). It also allows us to use different ways of combining structural and sequential information, \textit{e.g.}, LSTM first followed by GCN or vice versa or both in parallel.
Using this framework, we perform a series of experiments on the \textit{Holl-E} dataset and make some interesting observations. First, we observe that the conventional adaptation of GCNs for NLP tasks, where contextualized embeddings obtained through LSTMs are fed as input to a GCN, exhibits poor performance. To overcome this, we propose some simple alternatives and show that they lead to better performance. Second, we observe that while BERT performs better than GloVe and ELMo, it still benefits from explicit structural information captured by GCNs. We find this interesting because some recent works \cite{tenney-etal-2019-bert,jawahar-etal-2019-bert,hewitt-manning-2019-structural} suggest that BERT captures syntactic information, but our results suggest that there is still more information to be captured by adding explicit structural priors. Third, we observe that certain graph structures are more useful for this task than others. Lastly, our best model, which uses a specific combination of semantic, sequential and structural information, improves over the baseline by 7.95\%. \section{Related work} There is active interest in using external knowledge to improve the informativeness of responses for goal-oriented as well as chit-chat conversations \cite{lowe2015,KGNCM,moghe-etal-2018-towards,wizard_of_wikipedia}. Even the teams participating in the annual Alexa Prize competition \cite{alexa_prize} have benefited from using several knowledge resources. This external knowledge can be in the form of knowledge graphs or unstructured texts such as documents. Many NLP systems, including conversation systems, use RNNs as their basic building block, which typically captures $n$-gram or sequential information. Adding structural information through tree-based structures \cite{tai-etal-2015-improved} or graph-based structures \cite{marcheggiani-titov-2017-encoding} on top of this has shown improved results on several tasks.
For example, GCNs have been used to improve neural machine translation \cite{marcheggiani-etal-2018-exploiting} by exploiting the semantic structure of the source sentence. Similarly, GCNs have been used with dependency graphs to incorporate structural information for semantic role labelling \cite{marcheggiani-titov-2017-encoding}, neural machine translation \cite{bastings-etal-2017-graph} and entity relation information in question answering \cite{de-cao-etal-2019-question} and temporal information for neural dating of documents \cite{vashishth-etal-2018-dating}. There have been advances in learning deep contextualized word representations \cite{peters-etal-2018-deep,devlin-etal-2019-bert} with a hope that such representations will implicitly learn structural and relational information with interaction between words at multiple layers \cite{jawahar-etal-2019-bert,peters-etal-2018-dissecting}. These recent developments have led to many interesting questions about the best way of exploiting rich information from sentences and documents. We try to answer some of these questions in the context of background aware dialogue response generation. \section{Background} \label{sec:background} In this section, we provide a background on how GCNs have been leveraged in NLP to incorporate different linguistic structures. \if 0 \subsection{Deep Contextualized Representations} \noindent \textbf{ELMo} \cite{elmo} computes deep, contextualized word representations by training a neural network on a 1 Billion Word dataset with a language modeling objective. The architecture consists of a context-independent CNN layer followed by two layers of Bi-LSTMs. The final ELMo embedding of a word is the linear combination of its embeddings learned at these 3 layers. The embeddings learned at each layer can be further fine-tuned for a particular task. 
\\ \noindent \textbf{BERT} \cite{bert} uses a multi-layer bidirectional Transformer \cite{transformer} as its base architecture and is trained using a corpus containing 3.3B tokens. It uses a masked language modelling objective and next sentence prediction to learn deep bidirectional representations. \\ Due to space constraints, we refer the reader to the respective papers for more details. \fi \label{sec:gcn-background} The Syntactic-GCN proposed in \cite{marcheggiani-titov-2017-encoding} is a GCN \cite{kipf2016semi} variant which can model multiple edge types and edge directions. It can also dynamically determine the importance of an edge. Syntactic-GCNs only work with one graph structure at a time, with the most popular structure being the dependency graph of a sentence. For convenience, we refer to Syntactic-GCNs as GCNs from here on. Let $G$ denote a graph defined on a text sequence (sentence, passage or document) with nodes as words and edges representing a directed relation between words. Let $\mathcal{N}$ denote a dictionary of lists of neighbors, with $\mathcal{N}(v)$ referring to the neighbors of a specific node $v$, including itself (self-loop). Let $dir(u,v) \in \{in, out, self\}$ denote the direction of the edge, $(u,v)$. Let $\mathcal{L}$ be the set of different edge types and let $L(u,v) \in \mathcal{L}$ denote the label of the edge, $(u,v)$. The $(k+1)$-hop representation of a node $v$ is computed as \begin{equation} h^{(k+1)}_v=\sigma\Big(\sum_{u \in\mathcal{N}(v)}g^{(k)}_{(u,v)}\big(W_{dir(u,v)}^{(k)}{h}_u^{(k)}+ b_{L(u,v)}^{(k)}\big)\Big) \label{eqn:final-syntactic-gcn} \end{equation} where $\sigma$ is the activation function, $g_{(u,v)} \in \mathbb{R}$ is the predicted importance of the edge $(u,v)$ and $h_v \in \mathbb{R}^m$ is node $v$'s embedding. $W_{dir(u,v)} \in \{W_{in},W_{out}, W_{self}\}$ depending on the direction $dir(u,v)$, and $W_{in}$, $W_{self}$ and $W_{out} \in \mathbb{R}^{m \times m}$.
The importance of an edge $g_{(u,v)}$ is determined by an edge gating mechanism w.r.t. the node of interest, $u$, as given below: \begin{equation} g_{(u,v)} = \mathrm{sigmoid} \big( h_{u} \cdot W_{dir(u,v)} + b_{L(u,v)} \big) \label{eqn:edge-gating} \end{equation} In summary, a GCN computes a new representation of a node $v$ by aggregating information from its neighborhood $\mathcal{N}(v)$. When $k=0$, the aggregation happens only from immediate neighbors, i.e., 1-hop neighbors. As the value of $k$ increases, the aggregation implicitly happens from a larger neighborhood. \section{Proposed Model} \label{sec:proposed} Given a document $D$ and a conversational context $Q$, the task is to generate the response $\mathbf{y} = y_1, y_2, \dots, y_m$. This can be modeled as the problem of finding a $\mathbf{y}^*$ that maximizes the probability $P(\mathbf{y} \mid D,Q)$, which can be further decomposed as \begin{align*} \mathbf{y}^*= \arg\max_\mathbf{y} \prod_{t=1}^{m} P(y_t | y_1,...,y_{t-1},Q,D) \end{align*} As has become standard practice in most NLG tasks, we model the above probability using a neural network comprising an encoder, a decoder, an attention mechanism and a copy mechanism. The copy mechanism essentially helps to directly copy words from the document $D$ instead of predicting them from the vocabulary. Our main contribution is in improving the document encoder, where we use a plug-and-play framework to combine semantic, structural and sequential information from different sources. This enriched document encoder could be coupled with any existing model. In this work, we couple it with the popular GTTP model \cite{see-etal-2017-get} as used by the authors of the \textit{Holl-E} dataset. In other words, we use the same attention mechanism, decoder and copy mechanism as GTTP but augment it with an enriched document encoder. Below, we first describe the document encoder and then very briefly describe the other components of the model.
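To make the gated graph convolution of the Background section concrete, here is a minimal pure-Python sketch of one aggregation hop (our own simplification of the Syntactic-GCN update: a single edge label, a scalar gate bias and ReLU as the activation $\sigma$; all parameter shapes and toy values are illustrative, not the paper's):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gcn_layer(h, neighbors, W, b, w_gate, b_gate):
    """One gated graph-convolution hop (simplified sketch).

    h:         dict node -> feature vector
    neighbors: dict node v -> list of (u, direction) pairs with direction in
               {'in', 'out', 'self'} (self-loops included)
    W:         dict direction -> weight matrix (list of rows); b: bias vector
    w_gate:    dict direction -> gate weight vector; b_gate: scalar gate bias
    """
    def matvec(M, x):
        return [sum(m * xj for m, xj in zip(row, x)) for row in M]

    h_new = {}
    for v, neigh in neighbors.items():
        acc = [0.0] * len(b)
        for u, d in neigh:
            # scalar gate predicting how important this edge is
            gate = sigmoid(sum(w * x for w, x in zip(w_gate[d], h[u])) + b_gate)
            msg = [gate * (m + bi) for m, bi in zip(matvec(W[d], h[u]), b)]
            acc = [a + m for a, m in zip(acc, msg)]
        h_new[v] = [max(0.0, a) for a in acc]  # ReLU
    return h_new

# Toy 2-node graph with 1-d features: an 'in' edge from node 1 to node 2.
h0 = {1: [1.0], 2: [2.0]}
W = {'in': [[0.5]], 'out': [[0.5]], 'self': [[1.0]]}
gates = {'in': [0.0], 'out': [0.0], 'self': [0.0]}   # all gates = sigmoid(0) = 0.5
neighbors = {1: [(1, 'self')], 2: [(2, 'self'), (1, 'in')]}
h1 = gcn_layer(h0, neighbors, W, [0.0], gates, 0.0)
print(h1)  # h1[2] = 0.5*2.0 + 0.5*(0.5*1.0) = 1.25
```

Stacking this layer $k$ times lets information flow along $k$-hop paths in the graph, which is the sense in which the $(k+1)$-hop representation aggregates from a growing neighborhood.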
We also refer the reader to the supplementary material for more details. \subsection{Encoder} Our encoder contains a semantics layer, a sequential layer and a structural layer to compute a representation for the document, which is a sequence of words $w_1$, $w_2$, \dots, $w_m$. We refer to this as a plug-and-play document encoder simply because it allows us to plug in different semantic representations, different graph structures and different simple but effective mechanisms for combining structural and semantic information. \noindent\textbf{Semantics Layer:} Similar to almost all NLP models, we capture semantic information using word embeddings. In particular, we utilize the ability of BERT to capture deep contextualized representations and later combine it with explicit structural information. This allows us to evaluate (i) whether BERT is better suited for this task as compared to other embeddings such as ELMo and GloVe and (ii) whether BERT already captures syntactic information completely (as claimed by recent works) or whether it can benefit from additional syntactic information as described below. \noindent\textbf{Structure Layer:} To capture structural information, we propose multi-graph GCN, \textit{M-GCN}, a simple extension of GCN to extract relevant multi-hop multi-relational dependencies from multiple structures/graphs efficiently. In particular, we generalize $G$ to denote a labelled multi-graph, \textit{i.e.}, a graph which can contain multiple (parallel) labelled edges between the same pair of nodes. Let $\mathcal{R}$ denote the set of different graphs (structures) considered and let $G=\{\mathcal{N}_1, \mathcal{N}_2, \dots, \mathcal{N}_{|\mathcal{R}|}\}$ be the set of neighbor dictionaries from the $|\mathcal{R}|$ graphs. We extend the Syntactic GCN defined in Eqn: \ref{eqn:final-syntactic-gcn} to multiple graphs by having $|\mathcal{R}|$ graph convolutions at each layer as given in Eqn: \ref{eqn:m-gcn}.
Here, $g\_conv(\mathcal{N})$ is the graph convolution defined in Eqn: \ref{eqn:final-syntactic-gcn} with $\sigma$ as the identity function. Further, we remove the individual node (or word) $i$ from the neighbourhood list $\mathcal{N}(i)$ and model the node information separately using the parameter $W_{self}$. \begin{equation} h^{(k+1)}_i = \mathrm{ReLU} \big( h^{(k)}_iW^{(k)}_{self} + \sum_{\mathcal{N} \in G } g\_conv(\mathcal{N}) \big) \label{eqn:m-gcn} \end{equation} This formulation is advantageous over having $|\mathcal{R}|$ different GCNs as it can extract information from multi-hop pathways and can use information across different graphs with every GCN layer (hop). Note that $h^{0}_i$ is the embedding obtained for word $i$ from the semantic layer. For ease of notation, we use the following functional form to represent the final representation computed by M-GCN after $k$ hops starting from the initial representation $h^{0}_i$, given $G$. \begin{equation*} h_i = M\textnormal{-}GCN(h_i^0, G, k) \end{equation*} \noindent\textbf{Sequence Layer:} The purpose of this layer is to capture sequential information. Once again, following standard practice, we pass the word representations computed by the previous layer through a bidirectional LSTM to compute a sequence contextualized representation for each word. As described in the next subsection, depending upon the manner in which we combine these layers, the previous layer could either be the structure layer or the semantics layer. \begin{figure*} \centering \resizebox{\textwidth}{7.7 cm}{\includegraphics{SSSFramework.png}} \caption{The \textit{SSS} framework} \label{img:sss-design} \end{figure*} \subsection{Combining structural and sequential information} \label{sec:SSS Framework} As mentioned earlier, for a given document $D$ containing words $w_1, w_2, w_3, \dots, w_m$, we first obtain word representations $x_1, x_2, x_3, \dots, x_m$ using BERT (or ELMo or GloVe).
At this point we have three different choices for enriching the representations using structural and sequential information: (i) structure first followed by sequence, (ii) sequence first followed by structure, or (iii) structure and sequence in parallel. We depict these three choices pictorially in Figure \ref{img:sss-design} and describe them below with appropriate names for future reference. \subsubsection{Sequence contextualized GCN (\textit{Seq-GCN})} \textit{Seq-GCN} is similar to the model proposed in \cite{bastings-etal-2017-graph,marcheggiani-titov-2017-encoding} where the word representations $x_1, x_2, x_3, \dots, x_m$ are first fed through a BiLSTM to obtain sequence contextualized representations as shown below. \begin{equation*} h_{i}^{seq} = BiLSTM(h_{i-1}^{seq}, x_i) \label{eqn:c-gcns-a} \end{equation*} These representations $h_1^{seq}, h_2^{seq}, \dots, h_m^{seq}$ are then fed to the M-GCN along with the graph $G$ to compute a $k$-hop aggregated representation as shown below: \begin{equation*} h_i^{str} = M\textnormal{-}GCN(h_{i}^{seq}, G, k) \label{eqn:c-gcns-b} \end{equation*} This final representation $h_i^{final} = h_i^{str}$ for the $i$-th word thus combines semantic, sequential and structural information in that order. This is a popular way of combining GCNs with LSTMs, but our experiments suggest that it does not work well for our task. We thus explore two other variants as explained below. \subsubsection{Structure contextualized LSTM (\textit{Str-LSTM})} Here, we first feed the word representations $x_1, x_2, x_3, \dots, x_m$ to M-GCN to obtain structure-aware representations as shown below.
\begin{equation*} h_i^{str} = M\textnormal{-}GCN(x_{i}, G, k) \label{eqn:g-lstms-a} \end{equation*} These structure-aware representations are then passed through a BiLSTM to capture sequence information as shown below: \begin{align*} h_{i}^{seq} = BiLSTM(h_{i-1}^{seq}, h_i^{str}) \label{eqn:g-lstms-b} \end{align*} This final representation $h_i^{final} = h_i^{seq}$ for the $i$-th word thus combines semantic, structural and sequential information in that order. \subsubsection{Parallel GCN-LSTM (\textit{Par-GCN-LSTM})} Here, both M-GCN and BiLSTM are fed the word embeddings $x_i$ as input to aggregate structural and sequential information independently as shown below: \begin{align*} h_i^{str} &= M\textnormal{-}GCN(x_{i}, G, k) \\ h_{i}^{seq} &= BiLSTM(h_{i-1}^{seq}, x_i) \end{align*} The final representation, $h_i^{final}$, for each word is computed as $h_i^{final} = h_{i}^{str} + h_{i}^{seq}$ and combines structural and sequential information in parallel as opposed to the serial combination in the previous two variants. \subsection{Decoder, Attention and Copy Mechanism} Once the final representation for each word is computed, an attention-weighted aggregation, $c_t$, of these representations is fed to the decoder at each time step $t$. The decoder itself is an LSTM which computes a new state vector $s_t$ at every timestep $t$ as \begin{align*} s_t = LSTM (s_{t-1}, c_t) \end{align*} The decoder then uses this $s_t$ to compute a distribution over the vocabulary where the probability of the $i$-th word in the vocabulary is given by $p_i = softmax(Vs_t + Wc_t + b)_i$. In addition, the decoder also has a copy mechanism wherein, at every timestep $t$, it can either choose the word with the highest probability $p_i$ or copy the word from the input which was assigned the highest attention weight at timestep $t$. Such a copy mechanism is useful in tasks such as ours where many words in the output are copied from the document $D$.
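To make the encoder concrete, a rough numpy sketch of the M-GCN update of Eqn: \ref{eqn:m-gcn} is given below. This is a simplified illustration, not the exact model: the gating and bias terms of the Syntactic GCN are omitted, the weights are shared across hops (the paper indexes $W_{self}$ by hop), and the per-graph weight matrices, neighbor dictionaries and shapes are our own illustrative assumptions.

```python
import numpy as np

def g_conv(H, neighbors, W):
    # Simplified graph convolution with identity sigma: node i sums the
    # linearly transformed representations of its neighbors in one graph.
    out = np.zeros((H.shape[0], W.shape[1]))
    for i, nbrs in neighbors.items():
        for j in nbrs:
            out[i] += H[j] @ W
    return out

def m_gcn_layer(H, G, W_self, W_rel):
    # One M-GCN hop: h_i' = ReLU(h_i W_self + sum over graphs of g_conv).
    out = H @ W_self
    for name, nbrs in G.items():
        out += g_conv(H, nbrs, W_rel[name])
    return np.maximum(out, 0.0)  # ReLU

def m_gcn(H0, G, k, W_self, W_rel):
    # k hops starting from the semantic-layer embeddings H0 (m x d).
    # W_self / W_rel are shared across hops here for brevity.
    H = H0
    for _ in range(k):
        H = m_gcn_layer(H, G, W_self, W_rel)
    return H
```

Feeding the $k$-hop output of this sketch into a BiLSTM would correspond to the \textit{Str-LSTM} variant, while running the BiLSTM first and using its outputs as $H_0$ would correspond to \textit{Seq-GCN}.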
We refer the reader to the GTTP paper for more details on the standard copy mechanism. \if 0 \noindent \textbf{Parallel v/s Sequential combination}: In sequential combination (\textit{Str-C-LSTM}, \textit{Seq-C-GCN}) , the last layer takes in a particular contextual word representation and modifies it by incorporating additional information. Whereas with the parallel model (\textit{Par-GCN-LSTM}), both the sequence and structure contextual representation are directly used. In sequential combination, a latter layer might modify the information from the preceding layer beyond recovery through direct means. It is a concerning issue when a layer adds additional information rather than merely performing a (non) linear projection as it is susceptible to significantly modifying the original input. This is termed as the node information morphing in the GCN literature \cite{vijayan2018hopf, li2018deeper}, where the information of the node gets morphed by neighborhood information with every new layer. This is not an issue with the parallel model but a weighted combination is not powerful enough to learn complex correlations between outputs. \fi \section{Experimental setup} \label{sec:experiments} In this section, we briefly describe the dataset and task setup, followed by the pre-processing steps we carried out to obtain different linguistic graph structures on this dataset. We then describe the different baseline models. \subsection{Dataset description} We evaluate our models using Holl-E, an English-language movie conversation dataset \cite{moghe-etal-2018-towards} which contains $\sim$9k movie chats and $\sim$90k utterances. Every chat in this dataset is associated with a specific background knowledge resource from among the plot of the movie, the review of the movie, comments about the movie, and occasionally a fact table. Every even utterance in the chat is generated by copying and/or modifying sentences from this unstructured background knowledge.
The task here is to generate/retrieve a response using the conversation history and the appropriate background resource. Here, we focus only on the \textit{oracle} setup where the correct resource from which the response was created is provided explicitly. We use the same train, test, and validation splits as provided by the authors of the paper. \subsection{Construction of linguistic graphs} We consider leveraging three different graph-based structures for this task. Specifically, we evaluate the popular syntactic word dependency graph (\textit{Dep-G}), the entity co-reference graph (\textit{Coref-G}) and the entity co-occurrence graph (\textit{Ent-G}). Unlike the word dependency graph, the two entity-level graphs can capture dependencies that may span across sentences in a document. We use the dependency parser provided by SpaCy\footnote{https://spacy.io/} to obtain the dependency graph (\textit{Dep-G}) for every sentence. For the construction of the co-reference graph (\textit{Coref-G}), we use the NeuralCoref model\footnote{https://github.com/huggingface/neuralcoref. Code at https://github.com/nikitacs16/horovod\_gcn\_pointer\_generator } integrated with SpaCy. For the construction of the entity graph (\textit{Ent-G}), we first perform named-entity recognition using SpaCy and connect all the entities that lie in a window of $k=20$. \subsection{Baselines} We categorize our baseline methods as follows: \\ \textbf{Without Background knowledge}: We consider the simple Sequence-to-Sequence (S2S) \cite{neural_conversation_model} architecture that conditions the response generation only on the \textit{previous} utterance and completely ignores the other utterances as well as the background document. We also consider HRED \cite{HRED}, a hierarchical variant of the S2S architecture which conditions the response generation on the entire conversation history in addition to the last utterance.
Of course, we do not expect these models to perform well as they completely ignore the background knowledge, but we include them for the sake of completeness.\\ \textbf{With Background Knowledge}: To the S2S architecture we add an LSTM encoder to encode the document. The output is now conditioned on this representation in addition to the previous utterance. We refer to this architecture as S2S-D. Next, we use GTTP \cite{see-etal-2017-get}, which is a variant of the S2S-D architecture with a copy-or-generate decoder; at every time-step, the decoder decides to copy from the background knowledge or generate from the fixed vocabulary. We also report the performance of the BiRNN + GCN architecture that uses only the dependency graph, as discussed in \cite{marcheggiani-titov-2017-encoding}. Finally, we note that in our task many words in the output need to be copied sequentially from the input background document, which makes it very similar to the task of span prediction as used in Question Answering. We thus also evaluate BiDAF \cite{BIDAF}, a popular question-answering architecture, that extracts a span from the background knowledge as a response using complex attention mechanisms. For a fair comparison, we evaluate the spans retrieved by the model against the ground truth responses. We use BLEU-4 and ROUGE (1/2/L) as the evaluation metrics as suggested in the dataset paper. Using automatic metrics is more reliable in this setting than in the open-domain conversational setting as the variability in responses is limited to the information in the background document. We provide implementation details in Appendix A. \section{Results and Discussion} \label{sec:analysis} In Table \ref{table:baselines_and_best_model}, we compare our architecture against the baselines as discussed above. \textit{SSS}(BERT) is our proposed architecture in terms of the \textit{SSS} framework.
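For intuition, the ROUGE-1 F-score reported in our evaluation can be approximated with a few lines of Python. This is a simplified sketch (plain unigram-overlap F1 over lowercased whitespace tokens, with none of the stemming or multi-reference handling of the official scorer), not the evaluation code actually used for the experiments.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    # Unigram-overlap F1 between candidate and reference token multisets.
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    prec = overlap / sum(c.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)
```

For example, a candidate that reproduces four of the five reference tokens scores precision 1.0 and recall 0.8, giving an F1 of about 0.89.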
We report the best results within \textit{SSS}, chosen across 108 configurations comprising four different graph combinations, three different contextual and structural infusion methods, three M-GCN layers, and three embeddings. \begin{table} \begin{center} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|c|c|c|} \hline Model & BLEU & \multicolumn{3}{c|}{ROUGE} \\ \hline & & 1 & 2 & L \\ \hline S2S & 4.63 & 26.91 & 9.34 & 21.58 \\ HRED & 5.23 & 24.55 & 7.61 & 18.87 \\ \hline S2S-D & 11.71 & 26.36 & 13.36 & 21.96 \\ GTTP & 13.97 & 36.17 & 24.84 & 31.07 \\ BiRNN+GCN & 14.70 & 36.24 & 24.60 & 31.29 \\ BiDAF & 16.79 & 26.73 & 18.82 & 23.58 \\ \hline \textit{SSS}(GloVe) & 18.96 & 38.61 & 26.92 & 33.77 \\ \textit{SSS}(ELMo) & 19.32 & 39.65 & 27.37 & 34.86 \\ \textit{SSS}(BERT) & \textbf{22.78} & \textbf{40.09} & \textbf{27.83} & \textbf{35.20} \\ \hline \end{tabular} } \caption{Results of automatic evaluation. Our proposed architecture \textit{SSS}(BERT) outperforms the baseline methods.} \label{table:baselines_and_best_model} \end{center} \end{table} The best model was chosen based on performance on the validation set. From Table \ref{table:baselines_and_best_model}, it is clear that incorporating structural and sequential information with BERT in the \textit{SSS} encoder framework significantly outperforms all other models. \subsection{Qualitative Evaluation} We conducted human evaluation for the \textit{SSS} models from Table \ref{table:baselines_and_best_model} against the generated responses of GTTP. We presented 100 randomly sampled outputs to three different annotators. The annotators were asked to pick from four options: A, B, both, and none. The annotators were told these were conversations between friends. Tallying the majority vote, we obtain win/loss/both/none for \textit{SSS}(BERT) as 29/25/29/17, \textit{SSS}(GloVe) as 24/17/47/12 and \textit{SSS}(ELMo) as 22/23/41/14. This suggests a qualitative improvement using the \textit{SSS} framework.
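The majority-vote tally described above can be reproduced with a short helper. The tie-breaking rule here (counting three-way splits under ``none'') is our own simplifying assumption, since the protocol for resolving ties among the three annotators is not stated:

```python
from collections import Counter

def majority_tally(annotations):
    # annotations: one tuple of labels per example, one label per
    # annotator, each label in {"A", "B", "both", "none"}.
    # Returns counts of examples per majority label; three-way ties
    # fall back to "none" (an assumption, not the paper's stated rule).
    tally = Counter()
    for labels in annotations:
        label, votes = Counter(labels).most_common(1)[0]
        tally[label if votes >= 2 else "none"] += 1
    return tally

tally = majority_tally([
    ("A", "A", "both"),     # majority: A
    ("B", "none", "B"),     # majority: B
    ("both", "both", "A"),  # majority: both
    ("A", "B", "none"),     # three-way tie -> none
])
```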
We also provide some generated examples in Appendix B1. We found that the \textit{SSS} framework had less confusion in generating the opening responses than the GTTP baseline. These ``conversation starters'' have a unique template for every opening scenario and thus differ in their syntactic structures. We hypothesize that the presence of dependency graphs over these respective sentences helps to alleviate the confusion as seen in Example 1. The second example illustrates why incorporating structural information is important for this task. We also observed that the \textit{SSS} encoder framework does not improve on the aspects of human creativity such as diversity, initiating a context-switch, and common sense reasoning as seen in Example 3. \begin{table} \begin{center} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline Emb & Paradigm & BLEU & \multicolumn{3}{c|}{ROUGE} \\ \hline & & & 1 & 2 & L \\ \hline \multirow{3}{*}{GloVe} & Sem & 4.4 & 29.72 & 11.72 & 22.99 \\ & Sem+Seq & 14.83 & 36.17 & 24.84 & 31.07 \\ & \textit{SSS}& 18.96 & 38.61 & 26.92 & 33.77 \\ \hline \multirow{3}{*}{ELMo} & Sem & 14.36 & 32.04 & 18.75 & 26.71 \\ & Sem+Seq & 14.61 & 35.54 & 24.58 & 30.71 \\ & \textit{SSS}& 19.32 & 39.65 & 27.37 & 34.86 \\ \hline \multirow{3}{*}{BERT} & Sem & 11.26 & 33.86 & 16.73 & 26.44 \\ & Sem+Seq & 18.49 & 37.85 & 25.32 & 32.58 \\ & \textit{SSS}& 22.78 & 40.09 & 27.83 & 35.2 \\ \hline \end{tabular} } \end{center} \caption{Performance of components within the \textit{SSS} framework.} \label{table:SSS-Framework} \end{table} \subsection{Ablation studies on the \textit{SSS} framework} We report the component-wise results for the \textit{SSS} framework in Table \ref{table:SSS-Framework}. The \textit{Sem} models condition the response generation directly on the word embeddings. We observe that ELMo and BERT perform much better than GloVe embeddings.
The \textit{Sem+Seq} models condition the decoder on the representation obtained after passing the word embeddings through the LSTM layer. These models outperform their respective \textit{Sem} models. The gain with ELMo is not significant because the underlying architecture already has two BiLSTM layers which are anyway being fine-tuned for the task. Hence, the addition of one more LSTM layer may not contribute to learning any new sequential word information. It is clear from Table \ref{table:SSS-Framework} that the \textit{SSS} models, which use structural information as well, obtain a significant boost in performance, validating the need for incorporating all three types of information in the architecture. \begin{table*} \begin{center} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|cccc|cccc|cccc|} \hline Emb & \multicolumn{4}{c|}{Seq-GCN} & \multicolumn{4}{c|}{Str-LSTM} & \multicolumn{4}{c|}{Par-GCN-LSTM} \\ \hline & \multicolumn{1}{c|}{BLEU} & \multicolumn{3}{c|}{ROUGE} & \multicolumn{1}{|c|}{BLEU} & \multicolumn{3}{c|}{ROUGE} & \multicolumn{1}{c|}{BLEU} & \multicolumn{3}{c|}{ROUGE} \\ \hline & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{L} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{L} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{L} \\ \hline GloVe & 15.61 & 36.6 & 24.54 & 31.68 & 18.96 & 38.61 & 26.92 & 33.77 & 17.1 & 37.04 & 25.70 & 32.2 \\ \hline ELMo & 18.44 & 37.92 & 26.62 & 33.05 & 19.32 & 39.65 & 27.37 & 34.86 & 16.35 & 37.28 & 25.67 & 32.12 \\ \hline BERT & 20.43 & 40.04 & 26.94 & 34.85 & 22.78 & 40.09 & 27.83 & 35.20 & 21.32 & 39.9 & 27.60 & 34.87 \\ \hline \end{tabular}} \end{center} \caption{Performance of different hybrid architectures to combine structural information with sequence information} \label{table:SSS-Models} \end{table*} \subsection{Combining structural and sequential information} The
response generation task of our dataset is a span-based generation task where phrases of text are expected to be copied or generated as they are. The sequential information is thus crucial to reproduce these long phrases from the background knowledge. This is strongly reflected in Table \ref{table:SSS-Models}, where \textit{Str-LSTM}, which has the LSTM layer on top of the GCN layers, performs the best across the hybrid architectures discussed in Figure \ref{img:sss-design}. The \textit{Str-LSTM} model can better capture sequential information with structurally and syntactically rich representations obtained through the initial GCN layer. The \textit{Par-GCN-LSTM} model performs second best. However, in the parallel model, the LSTM cannot leverage the structural information directly and relies only on the word embeddings. The \textit{Seq-GCN} model performs the worst among the three as the GCN layer at the top is likely to modify the sequence information from the LSTMs. \subsection{Understanding the effect of structural priors} While a combination of intra-sentence and inter-sentence graphs is helpful across all the models, the best-performing model with BERT embeddings relies only on the dependency graph. In the case of the GloVe-based experiments, the entity and co-reference relations were not independently useful with the \textit{Str-LSTM} and \textit{Par-GCN-LSTM} models, but when used together gave a significant performance boost, especially for \textit{Str-LSTM}. However, most of the BERT-based and ELMo-based models achieved competitive performance with individual entity and co-reference graphs. There is no clear trend across the models. Hence, probing these embedding models is essential to identify which structural information is captured implicitly by the embeddings and which structural information needs to be added explicitly. For the quantitative results, please refer to Appendix B2.
\subsection{Structural information in deep contextualised representations} Earlier work has suggested that deep contextualized representations capture syntax and co-reference relations \cite{peters-etal-2018-dissecting,jawahar-etal-2019-bert,tenney-etal-2019-bert, hewitt-manning-2019-structural}. We revisit Table \ref{table:SSS-Framework} and consider the \textit{Sem+Seq} models with ELMo and BERT embeddings as two architectures that \textit{implicitly} capture structural information. We observe that the \textit{SSS} model using the simpler GloVe embedding outperforms the ELMo \textit{Sem+Seq} model and performs slightly better than the BERT \textit{Sem+Seq} model. Given that the \textit{SSS} models outperform the corresponding \textit{Sem+Seq} models, the extent to which the deep contextualized word representations learn the syntax and other linguistic properties implicitly is questionable. Also, this calls for better loss functions for learning deep contextualised representations that can incorporate structural information explicitly. More importantly, all the configurations of \textit{SSS}(GloVe) have a smaller memory footprint in comparison to both the ELMo-based and BERT-based models. Validation and training of the GloVe models require one-half, and sometimes even one-fourth, of the computing resources. Thus, the simple addition of structural information through the GCN layer to the established Sequence-to-Sequence framework, which can perform comparably to stand-alone expensive models, is an important step towards Green AI \cite{green_ai}. \section{Conclusion} We demonstrated the usefulness of incorporating structural information for the task of background aware dialogue response generation. We infused the structural information explicitly into the standard semantic+sequential model and observed a performance boost. We studied different structural linguistic priors and different ways to combine sequential and structural information.
We also observe that explicit incorporation of structural information helps even the richer architectures based on deep contextualized representations. We believe that the analysis presented in this work will serve as a blueprint for analysing future work on GCNs, ensuring that the gains reported are robust and evaluated across different configurations.
\section{Introduction} Throughout this article, we shall assume that all schemes have an ample family of line bundles. For example, every quasi-projective scheme over a commutative ring has an ample family of line bundles. Usually, $K$-theory of vector bundles behaves well under this assumption. By a quasi-projective scheme we will always mean a scheme which is quasi-projective over some commutative ring. Let $\mathcal{E}$ be a vector bundle of rank $r+1$ over a quasi-projective scheme $X,$ and let $\mathbb{P}(\mathcal{E})$ denote the associated projective space bundle with the structure map $\pi: \mathbb{P}(\mathcal{E}) \to X.$ Then the projective bundle theorem for algebraic $K$-theory says that (see Theorem V.1.5 of \cite{wei 1}) for all $q \in \mathbb{Z},$ there is an isomorphism of groups $$ K_{q}(X)^{r+1} \stackrel{\cong}\to K_{q}(\mathbb{P}(\mathcal{E})).$$ The goal of this article is to present a relative version of the above-stated result for Heller's relative $K_{0}.$ Given a map of schemes $f: X \to S,$ let $K(f)$ denote the homotopy fibre of $K(S) \to K(X).$ Here $K(X)$ denotes the non-connective Bass $K$-theory spectrum of the scheme $X.$ Then $K_{n}(f),$ the $n$-th relative $K$-group of $f,$ is defined as $\pi_{n}K(f).$ In \cite{Hel}, Heller defined the relative $K_{0}$-group for a functor between certain categories. In this article, we are interested in Heller's relative $K_{0}$-groups for the pullback functor between the categories of vector bundles associated with a map of schemes. Given a map of schemes $f: X \to S,$ Heller's relative $K_{0}$ of $f^{*}$ is generated by triples $(V_{1}, \alpha, V_{2})$ with suitable relations, where $V_{1}, V_{2}$ are vector bundles over $S$ and $\alpha: f^{*}V_{1}\to f^{*}V_{2}$ is an isomorphism (see Section \ref{k group definition}). We write $K_{0}^{He}(f)$ for the Heller relative $K_{0}$ of $f^{*}.$ It is natural to ask the following: Is there any group isomorphism between $K_{0}(f)$ and $K_{0}^{He}(f)$? It is known that there is a group isomorphism if $f: X \to S$ is a map of schemes with $X$ affine.
More precisely, if $f: X \to S$ is a map of schemes with $X$ affine, then there is an isomorphism of groups $K_{0}^{He}(f) \stackrel{\cong}\to K_{0}(f)$ (see \cite[Theorem 1.5 and Example 1.16]{RI}). For an arbitrary $X,$ we do not know whether the above-mentioned result is true or not. Therefore, it is interesting to study the group $K_{0}^{He}(\mathbb{P}_{X}^{r} \to \mathbb{P}_{S}^{r}).$ Given a map of quasi-projective schemes $f: X \to S,$ we have the following commutative diagram \begin{equation}\label{basicdiagram1} \begin{CD} \mathbb{P}(\mathcal{E})\times_{S} X =\mathbb{P}(f^{*}\mathcal{E}) @> \mathbb{P}(f) >> \mathbb{P}(\mathcal{E})\\ @V \pi_{X} VV @V \pi_{S} VV \\ X @> f >> S, \end{CD} \end{equation} where $\mathcal{E}$ is a vector bundle over $S.$ Note that if $\mathcal{E}= \mathcal{O}_{S}^{r+1}$ then $\mathbb{P}(f)$ is just the map $\mathbb{P}_{X}^{r} \to \mathbb{P}_{S}^{r}.$ In this situation, it is not hard to see that there is a group isomorphism (see Lemma \ref{projective bundle formula for K_n}) $$K_{n}(f)^{r+1} \stackrel{\cong}\to K_{n}(\mathbb{P}(f)).$$ We prove a similar result for relative $K_{0}^{He}$ in Section \ref{first main theorem}. More precisely, we prove the following theorem (see Theorem \ref{proj formula for k heller}). \begin{theorem}\label{projective bundle for relative K_0} Let $\mathbb{P}(f): \mathbb{P}(f^{*}\mathcal{E}) \to \mathbb{P}(\mathcal{E})$ be a map as in diagram (\ref{basicdiagram1}). Assume that $f: X \to S$ is a flat map of quasi-projective schemes and ${\rm {rank}}(\mathcal{E})= r+1.$ Then there is an isomorphism of groups $$K_{0}^{He}(f)^{r+1} \stackrel{\cong}\to K_{0}^{He}(\mathbb{P}(f)).$$ \end{theorem} Using Theorem \ref{projective bundle for relative K_0}, we also prove the following (see Corollary \ref{He=k}): \begin{corollary} Suppose $\mathbb{P}(f),$ $f$ and $\mathcal{E}$ are as in Theorem \ref{projective bundle for relative K_0}. Further, we assume that $X$ is an affine scheme.
Then there is an isomorphism of groups $K_{0}^{He}(\mathbb{P}(f))\cong K_{0}(\mathbb{P}(f)).$ \end{corollary} {\bf Acknowledgements:} The author would like to thank Charles Weibel for his helpful comments during the preparation of this article. He would also like to thank the referee for valuable comments and suggestions. \section{Preliminaries} For a scheme $X,$ let ${\bf{Mod}}(X)$ denote the category of $\mathcal{O}_{X}$-modules and ${\bf{VB}}(X)$ denote the category of vector bundles on $X$. It is well known that ${\bf{Mod}}(X)$ is an abelian category and ${\bf{VB}}(X)$ is an exact subcategory of ${\bf{Mod}}(X).$ Given a map of schemes $f: X \to S,$ we define a category ${\bf{Mod}}(f)$ whose objects are triples $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})$ with $\mathcal{F}_{1}, \mathcal{F}_{2} \in {\bf{Mod}}(S)$ and $\alpha: f^{*}\mathcal{F}_{1}\cong f^{*}\mathcal{F}_{2}$ in ${\bf{Mod}}(X).$ A morphism $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \to (\mathcal{F}_{1}^{'}, \alpha^{'}, \mathcal{F}_{2}^{'})$ is a pair of maps $u: \mathcal{F}_{1} \to \mathcal{F}_{1}^{'}$, $v: \mathcal{F}_{2} \to \mathcal{F}_{2}^{'}$ in ${\bf{Mod}}(S)$ such that $\alpha^{'} f^{*}u= f^{*}v~ \alpha$. In a similar way, we can define ${\bf{VB}}(f)$ by replacing $\mathcal{O}_{S}$-modules with vector bundles on $S$. If $f$ is a flat map then $(\operatorname{ker}(u), \tilde{\alpha}, \operatorname{ker}(v))\in {\bf{Mod}}(f),$ where $$\tilde{\alpha}: f^{*}\operatorname{ker}(u)\cong \operatorname{ker}(f^{*}u)\cong \operatorname{ker}(f^{*}v)\cong f^{*}\operatorname{ker}(v).$$ Hereafter, we assume that $f: X \to S$ is a flat map of schemes. 
Given a morphism $(u, v): (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \to (\mathcal{F}_{1}^{'}, \alpha^{'}, \mathcal{F}_{2}^{'})$ in ${\bf{Mod}}(f),$ we define $$\operatorname{ker}(u, v):= (\operatorname{ker}(u), \tilde{\alpha}, \operatorname{ker}(v)) ~{\rm {and}}~ \operatorname{coker} (u, v):= (\operatorname{coker}(u), \tilde{\alpha{'}}, \operatorname{coker}(v)),$$ where $\tilde{\alpha{'}}$ is induced by $\alpha^{'}.$ Under this definition, ${\bf{Mod}}(f)$ is an abelian category. The following lemma is an easy observation. \begin{lemma}\label{exact seq} Let \begin{equation}\label{exactness in Mod} (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \stackrel{(u,v)}\to (\mathcal{F}_{1}^{'}, \alpha^{'}, \mathcal{F}_{2}^{'})\stackrel{(u^{'}, v^{'})}\to (\mathcal{F}_{1}^{''}, \alpha^{''}, \mathcal{F}_{2}^{''}) \end{equation} be a sequence in ${\bf{Mod}}(f).$ Then (\ref{exactness in Mod}) is exact if and only if $$ \mathcal{F}_{1} \stackrel{u}\to \mathcal{F}_{1}^{'} \stackrel{u^{'}}\to \mathcal{F}_{1}^{''}$$ and $$\mathcal{F}_{2} \stackrel{v}\to \mathcal{F}_{2}^{'} \stackrel{v^{'}}\to \mathcal{F}_{2}^{''}$$ both are exact in ${\bf{Mod}}(S).$ \end{lemma} Suppose that the sequence $$0 \to (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \stackrel{(u,v)}\to (\mathcal{F}_{1}^{'}, \alpha^{'}, \mathcal{F}_{2}^{'})\stackrel{(u^{'}, v^{'})}\to (\mathcal{F}_{1}^{''}, \alpha^{''}, \mathcal{F}_{2}^{''})\to 0$$ is exact in ${\bf{Mod}}(f)$ with $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}), (\mathcal{F}_{1}^{''}, \alpha^{''}, \mathcal{F}_{2}^{''}) \in {\bf{VB}}(f).$ Since ${\bf{VB}}(S)$ is an exact subcategory of ${\bf{Mod}}(S),$ there exist isomorphisms $p: \mathcal{F}_{1}^{'}\cong V_{1}$ and $q: \mathcal{F}_{2}^{'}\cong V_{2},$ where $V_{1}, V_{2} \in {\bf{VB}}(S)$ (using Lemma \ref{exact seq}).
Then $(p, q): (\mathcal{F}_{1}^{'}, \alpha^{'}, \mathcal{F}_{2}^{'})\cong (V_{1}, \beta, V_{2})$ in ${\bf{VB}}(f),$ where $\beta= f^{*}q \circ \alpha^{'} \circ f^{*}p^{-1}.$ This proves the following: \begin{lemma}\label{exact subcategory} ${\bf{VB}}(f)$ is an exact subcategory of ${\bf{Mod}}(f).$ \end{lemma} \section{Relative K-theory}\label{k group definition} Let $f: X \to S$ be a map of schemes. Let $K(f)$ be the homotopy fibre of $K(S) \to K(X).$ Here $K(X)$ denotes the non-connective Bass $K$-theory spectrum of the scheme $X.$ Then $$K_{n}(f):= \pi_{n}K(f)$$ for $n\in \mathbb{Z}.$ In \cite{Hel}, Heller introduced the relative $K_{0}$-groups for a functor between certain categories (see also \cite{Bass-Tata}). Following Heller \cite{Hel}, very recently R. Iwasa in \cite{RI} defined the relative $K_{0}$-groups for an exact functor between small exact categories. We now recall the definition from \cite{RI} in a special situation. For more details, we refer to Section 1 of \cite{RI}. Consider ${\bf{VB}}(f)$ as the relative category associated to the pullback functor $f^{*}: {\bf{VB}}(S) \to {\bf{VB}}(X).$ We define $K_{0}(f^{*})$ to be the abelian group generated by $[(V_{1}, \alpha, V_{2})],$ where $(V_{1}, \alpha, V_{2}) \in {\bf{VB}}(f).$ The relations are: \begin{itemize} \item $[V^{'}] + [V^{''}] = [V]$ for every exact sequence $0 \to V^{'} \to V \to V^{''}\to 0$ in ${\bf{VB}}(f);$ \item $[(V_{1},\alpha,V_{2})] + [(V_{2},\beta,V_{3})] = [(V_{1},\beta\alpha,V_{3})]$ for every pair $(V_{1},\alpha,V_{2}), (V_{2},\beta,V_{3})$ of objects in ${\bf{VB}}(f).$ \end{itemize} We prefer to write $K_{0}^{He}(f)$ for $K_{0}(f^{*}).$ Next, we discuss the relationship between $K_{0}(f)$ and $K_{0}^{He}(f).$ By applying Theorem 1.5 of \cite{RI} to the functor $f^{*}: {\bf{VB}}(S) \to {\bf{VB}}(X)$ with $X$ affine, we get the following: \begin{lemma}\label{K^b= K for affine} Let $f: X \to S$ be a map of schemes with $X$ affine.
Then there is an isomorphism $K_{0}^{He}(f)\stackrel{\cong}\to K_{0}(f).$ \end{lemma} The next result is the projective bundle formula for relative $K$-theory. \begin{lemma}\label{projective bundle formula for K_n} Let $\mathbb{P}(f): \mathbb{P}(f^{*}\mathcal{E}) \to \mathbb{P}(\mathcal{E})$ be a map as in diagram (\ref{basicdiagram1}). Assume that the rank of $\mathcal{E}$ is $r+1.$ Then there is a natural isomorphism of groups $K_{n}(f)^{r+1}\stackrel{\cong}\to K_{n}(\mathbb{P}(f))$ for all $n \in \mathbb{Z}.$ \end{lemma} \begin{proof} By Theorem V.1.5 of \cite{wei 1}, we have an equivalence $K(S)^{r+1}\simeq K(\mathbb{P}(\mathcal{E})).$ Consider the following commutative diagram of $K$-theory spectra $$\begin{CD} K(f)^{r+1} @>>> K(S)^{r+1} @>>> K(X)^{r+1}\\ @VVV @VV\simeq V @VV\simeq V \\ K(\mathbb{P}(f)) @>>> K(\mathbb{P}(\mathcal{E})) @>>> K(\mathbb{P}(f^{*}\mathcal{E})). \end{CD}$$ This induces a commutative diagram of long exact sequences \small $$\begin{CD} \dots @>>> K_{n+1}(X)^{r+1} @>>> K_{n}(f)^{r+1} @>>> K_{n}(S)^{r+1} @>>> K_{n}(X)^{r+1} @>>> \dots \\ @. @VV\cong V @VVV @VV\cong V @VV\cong V @. \\ \dots @>>> K_{n+1}(\mathbb{P}(f^{*}\mathcal{E})) @>>> K_{n}(\mathbb{P}(f)) @>>> K_{n}(\mathbb{P}(\mathcal{E})) @>>> K_{n}(\mathbb{P}(f^{*}\mathcal{E})) @>>> \dots. \end{CD}$$ \small Hence the assertion. \end{proof} The rest of the paper is dedicated to proving the projective bundle formula for $K_{0}^{He}.$ \section{Mumford-regular bundles} Let $\mathcal{E}$ be a vector bundle of finite rank over a quasi-projective scheme $X.$ Let $\mathbb{P}= \mathbb{P}(\mathcal{E})$ be the associated projective space bundle. There is a natural map $\pi=\pi_{X}: \mathbb{P}(\mathcal{E}) \to X.$ Some relevant details can be found in \cite[Chapter 8]{GT}. 
A quasi-coherent $\mathcal{O}_{\mathbb{P}}$-module $\mathcal{F}$ is said to be {\it Mumford-regular} if for all $q>0$ the higher derived sheaves $R^{q}\pi_{*}(\mathcal{F}(-q))=0.$ Here $\mathcal{F}(n)$ is the twisted sheaf $\mathcal{F}\otimes \mathcal{O}_{\mathbb{P}}(n).$ We now recall some known results pertaining to Mumford-regular modules. For details, we refer to \cite[Section 8]{DQ} and \cite[Chapter II.8]{wei 1}. \begin{lemma}\label{besic1} If $\mathcal{F}$ is Mumford-regular, then: \begin{enumerate} \item The twists $\mathcal{F}(n)$ are Mumford-regular for all $n\geq 0.$ \item The canonical map $\varepsilon: \pi^{*}\pi_{*}(\mathcal{F}) \to \mathcal{F}$ is onto. \end{enumerate} \end{lemma} \begin{proof} See Proposition II.8.7.3 of \cite{wei 1}. \end{proof} \begin{lemma}\label{basic3} The functor $\pi_{*}$ is exact from Mumford-regular modules to $\mathcal{O}_{X}$-modules. \end{lemma} \begin{proof} See Lemma II.8.7.4 of \cite{wei 1}. \end{proof} \begin{lemma}\label{basic2} Let $\mathcal{F}$ be a vector bundle on $\mathbb{P}.$ \begin{enumerate} \item $\mathcal{F}(n)$ is a Mumford-regular vector bundle on $\mathbb{P}$ for all large enough $n.$ \item If $\mathcal{F}$ is Mumford-regular, then $\pi_{*}\mathcal{F}$ is a vector bundle on $X.$ \end{enumerate} \end{lemma} \begin{proof} See Lemma II.8.7.5 of \cite{wei 1}. \end{proof} Let $f: X \to S$ be a map of quasi-projective schemes and $\mathcal{E}$ be a vector bundle of finite rank on $S.$ Then we have a commutative diagram (\ref{basicdiagram1}). Let ${\bf {MR}(\mathbb{P}(\mathcal{E}))}$ denote the category of Mumford-regular vector bundles.
Now, we define a category ${\bf{MR}}(\mathbb{P}(f))$ whose objects are triples $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})$ with $\mathcal{F}_{1}, \mathcal{F}_{2} \in {\bf {MR}(\mathbb{P}(\mathcal{E}))}$ and $\alpha: \mathbb{P}(f)^{*}\mathcal{F}_{1}\cong \mathbb{P}(f)^{*}\mathcal{F}_{2}$ in ${\bf {MR}(\mathbb{P}(\mathcal{E}))}.$ A morphism $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \to (\mathcal{F}_{1}^{'}, \alpha^{'}, \mathcal{F}_{2}^{'})$ is a pair of maps $u: \mathcal{F}_{1} \to \mathcal{F}_{1}^{'}$, $v: \mathcal{F}_{2} \to \mathcal{F}_{2}^{'}$ in ${\bf {MR}(\mathbb{P}(\mathcal{E}))}$ such that $\alpha^{'} \mathbb{P}(f)^{*}u= \mathbb{P}(f)^{*}v~ \alpha$. Assume that $f: X \to S$ is a flat map. Let $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2} )\in {\bf {MR}}(\mathbb{P}(f)).$ By Lemma \ref{besic1}(2), there are canonical onto maps $\varepsilon_{i}: \pi_{S}^{*}\pi_{S*}(\mathcal{F}_{i}) \to \mathcal{F}_{i}$ for $i=1, 2.$ We also have $$\mathbb{P}(f)^{*}\pi_{S}^{*}\pi_{S*}(\mathcal{F}_{i})= \pi_{X}^{*}f^{*}\pi_{S*}\mathcal{F}_{i}\cong \pi_{X}^{*}\pi_{X*}\mathbb{P}(f)^{*}\mathcal{F}_{i} ~~{\rm {for}}~~ i=1,2,$$ where the first equality follows from the commutativity of diagram (\ref{basicdiagram1}) and the second isomorphism from the flat base change theorem (see Lemma 30.5.2 of \cite{sp}). Thus we get an isomorphism $\mathbb{P}(f)^{*}\pi_{S}^{*}\pi_{S*}(\mathcal{F}_{1})\cong \mathbb{P}(f)^{*}\pi_{S}^{*}\pi_{S*}(\mathcal{F}_{2}),$ which we denote by $\pi_{X}^{*}\pi_{X*}\alpha.$ Since $\pi_{S}$ is quasi-compact and separated, $\pi_{S}^{*}\pi_{S*}(\mathcal{F}_{i})\in {\bf {MR}(\mathbb{P}(\mathcal{E}))}$ for $i= 1, 2$ by Example II.8.7.2 of \cite{wei 1}.
This implies that $$(\pi_{S}^{*}\pi_{S*}\mathcal{F}_{1}, \pi_{X}^{*}\pi_{X*}\alpha, \pi_{S}^{*}\pi_{S*}\mathcal{F}_{2}) \in {\bf{MR}}(\mathbb{P}(f)).$$ Note that the canonical map $\varepsilon: \pi^{*}\pi_{*}(\mathcal{F}) \to \mathcal{F}$ is natural in $\mathcal{F}.$ So, the diagram \begin{equation}\label{basicdiagram} \begin{CD} \mathbb{P}(f)^{*}\pi_{S}^{*}\pi_{S*}\mathcal{F}_{1}\cong \pi_{X}^{*}\pi_{X*}\mathbb{P}(f)^{*}\mathcal{F}_{1} @> \mathbb{P}(f)^{*}\varepsilon_{1} >> \mathbb{P}(f)^{*}\mathcal{F}_{1}\\ @V \pi_{X}^{*}\pi_{X*}\alpha VV @V \alpha VV \\ \mathbb{P}(f)^{*}\pi_{S}^{*}\pi_{S*}\mathcal{F}_{2}\cong \pi_{X}^{*}\pi_{X*}\mathbb{P}(f)^{*}\mathcal{F}_{2} @> \mathbb{P}(f)^{*}\varepsilon_{2} >> \mathbb{P}(f)^{*}\mathcal{F}_{2} \end{CD} \end{equation} is commutative by the naturality of $\mathbb{P}(f)^{*}\varepsilon$. Hence we get the following (by Lemma \ref{exact seq}): \begin{lemma}\label{surjection} $(\varepsilon_{1}, \varepsilon_{2}): (\pi_{S}^{*}\pi_{S*}\mathcal{F}_{1}, \pi_{X}^{*}\pi_{X*}\alpha, \pi_{S}^{*}\pi_{S*}\mathcal{F}_{2}) \to (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})$ is a morphism in ${\bf{MR}}(\mathbb{P}(f))$ and it is onto. Here $f: X \to S$ is a flat map. \end{lemma} \section{Relative version of Quillen's Resolution Theorem} In this section, we prove a relative version of Quillen's resolution theorem which will play an important role in the later part of the paper. Throughout this section, $f: X \to S$ is a flat map of quasi-projective schemes. First, we recall some notations from \cite{DQ} and \cite{wei 1}. Given a Mumford-regular $\mathcal{O}_{\mathbb{P}}$-module $\mathcal{F},$ we define a sequence of $\mathcal{O}_{X}$-modules $T_{n}= T_{n}\mathcal{F}$ and $\mathcal{O}_{\mathbb{P}}$-modules $Z_{n}= Z_{n}\mathcal{F}$ as follows. 
We start with $T_{0}\mathcal{F}=\pi_{*}\mathcal{F}$ and $Z_{-1}\mathcal{F}= \mathcal{F}.$ Since $\mathcal{F}$ is Mumford-regular, there is a canonical onto map $\varepsilon: \pi^{*}\pi_{*}(\mathcal{F}) \to \mathcal{F}$ (see Lemma \ref{besic1}). Let $Z_{0}\mathcal{F}= \operatorname{ker} \varepsilon.$ So, we get an exact sequence $$ 0 \to Z_{0}\mathcal{F} \to \pi^{*}T_{0}\mathcal{F} \to Z_{-1}\mathcal{F}\to 0.$$ Inductively, we define $$ T_{n}\mathcal{F}= \pi_{*}Z_{n-1}(n),~ Z_{n}\mathcal{F}= \operatorname{ker}(\varepsilon)(-n),$$ where $\varepsilon$ is the canonical map $\pi^{*}\pi_{*}Z_{n-1}(n) \to Z_{n-1}(n).$ Therefore, we have sequences \begin{equation}\label{Resolution seq} 0 \to Z_{n}(n) \to \pi^{*}T_{n}\mathcal{F} \to Z_{n-1}(n)\to 0, \end{equation} which are exact except possibly at $Z_{n-1}(n).$ Now we state a result known as Quillen's Resolution Theorem. \begin{theorem}\label{QRT} Let $\mathcal{F}$ be a vector bundle on $\mathbb{P}(\mathcal{E})$ with rank$(\mathcal{E})= r+1.$ If $\mathcal{F}$ is Mumford-regular, then $Z_{r}=0,$ and the sequences (\ref{Resolution seq}) are exact for $n\geq 0,$ so there is an exact sequence \begin{equation} 0 \to (\pi^{*}T_{r}\mathcal{F})(-r) \stackrel{\varepsilon(-r)}\to \dots \to (\pi^{*}T_{i}\mathcal{F})(-i)\stackrel{\varepsilon(-i)}\to \dots \stackrel{\varepsilon(-1)}\to \pi^{*}T_{0}\mathcal{F} \to Z_{-1}\mathcal{F}\to 0. \end{equation} \end{theorem} \begin{proof} See Theorem 8.7.8 of \cite{wei 1}. \end{proof} Next, our goal is to prove a relative version of the above theorem. To do this let us first fix some notation.
For $n \in \mathbb{Z},$ we define the relative twist functor $(n)^{rel}: {\bf{Mod}}(\mathbb{P}(f)) \to {\bf{Mod}}(\mathbb{P}(f))$ by $$(n)^{rel}(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})= (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})(n):= (\mathcal{F}_{1}(n), \alpha(n), \mathcal{F}_{2}(n)),$$ where $\mathbb{P}(f)$ is as in diagram (\ref{basicdiagram1}) and $\alpha(n):= \alpha \otimes id: \mathbb{P}(f)^{*}\mathcal{F}_{1}\otimes \mathcal{O}_{\mathbb{P}(f^{*}\mathcal{E})}(n)\cong \mathbb{P}(f)^{*}\mathcal{F}_{2}\otimes \mathcal{O}_{\mathbb{P}(f^{*}\mathcal{E})}(n).$ \begin{lemma}\label{shift exact} For $n \in \mathbb{Z},$ $(n)^{rel}$ is an exact functor on ${\bf{Mod}}(\mathbb{P}(f)).$ \end{lemma} \begin{proof} Since $\mathcal{O}_{\mathbb{P}(\mathcal{E})}(n)$ is flat over $\mathcal{O}_{\mathbb{P}(\mathcal{E})},$ the twist $(n)$ is an exact functor on ${\bf{Mod}}(\mathbb{P}(\mathcal{E})).$ Hence the result follows by Lemma \ref{exact seq}. \end{proof} Let $\mathcal{F}:= (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \in {\bf{MR}}(\mathbb{P}(f)).$ Since $f$ is a flat map, $$\pi_{X*}\alpha: f^{*}\pi_{S*}\mathcal{F}_{1}\cong \pi_{X*}\mathbb{P}(f)^{*}\mathcal{F}_{1}\cong \pi_{X*}\mathbb{P}(f)^{*}\mathcal{F}_{2}\cong f^{*}\pi_{S*}\mathcal{F}_{2}.$$ Let $$ T_{0}\alpha= \pi_{X*}\alpha, ~~ Z_{-1}\alpha= \alpha.$$ Then we define $$\mathcal{T}_{0}(\mathcal{F})= (T_{0}\mathcal{F}_{1}, T_{0}\alpha, T_{0}\mathcal{F}_{2}) ~~{\rm and}~~ \mathcal{Z}_{-1}\mathcal{F}=(Z_{-1}\mathcal{F}_{1}, Z_{-1}\alpha, Z_{-1}\mathcal{F}_{2})= \mathcal{F},$$ where $T_{0}\mathcal{F}_{i}= \pi_{S*}\mathcal{F}_{i}$ for $i=1, 2.$ Clearly, $\mathcal{T}_{0}(\mathcal{F})\in {\bf{Mod}}(f).$ Let $\mathcal{Z}_{0}\mathcal{F}= (Z_{0}\mathcal{F}_{1}, Z_{0}\alpha, Z_{0}\mathcal{F}_{2}),$ where $Z_{0}\alpha= \pi_{X}^{*}T_{0}\alpha.$ Since $\mathbb{P}(f)$ is a flat map, $\mathcal{Z}_{0}\mathcal{F}\in {\bf{Mod}}(\mathbb{P}(f)).$ Inductively, we define $$\mathcal{T}_{n}(\mathcal{F})= (T_{n}\mathcal{F}_{1}, T_{n}\alpha,
T_{n}\mathcal{F}_{2}) ~~{\rm and}~~ \mathcal{Z}_{n}\mathcal{F}=(Z_{n}\mathcal{F}_{1}, Z_{n}\alpha, Z_{n}\mathcal{F}_{2}),$$ where $T_{n}\alpha= \pi_{X*}((Z_{n-1}\alpha)(n))$ and $Z_{n}\alpha=(\pi_{X}^{*}T_{n}\alpha)(-n).$ One can easily check that for each $n \in \mathbb{N},$ $\mathcal{T}_{n}(\mathcal{F}) \in {\bf{Mod}}(f)$ and $\mathcal{Z}_{n}\mathcal{F} \in {\bf{Mod}}(\mathbb{P}(f)).$ Define $$\pi_{S}^{*}\mathcal{T}_{n}(\mathcal{F}):= (\pi_{S}^{*}T_{n}\mathcal{F}_{1}, \pi_{X}^{*}T_{n}\alpha, \pi_{S}^{*}T_{n}\mathcal{F}_{2}).$$ Thus we have sequences \begin{equation}\label{relative Resolution seq} 0 \to \mathcal{Z}_{n}\mathcal{F}(n) \to \pi_{S}^{*}\mathcal{T}_{n}\mathcal{F} \stackrel{(\varepsilon_{1}, \varepsilon_{2})}\longrightarrow \mathcal{Z}_{n-1}\mathcal{F}(n) \end{equation} in ${\bf{Mod}}(\mathbb{P}(f)).$ We are now ready to prove a relative version of Theorem \ref{QRT}. \begin{theorem}\label{relative Q Resolution} Let $\mathcal{F}:= (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \in {\bf{MR}}(\mathbb{P}(f)),$ where $\mathbb{P}(f)$ is as in diagram (\ref{basicdiagram1}) with rank$(\mathcal{E})= r+1.$ Then there is an exact sequence \tiny \begin{equation}\label{relative exact Q Resolution} 0 \to (\pi_{S}^{*}\mathcal{T}_{r}\mathcal{F})(-r) \stackrel{(\varepsilon_{1}(-r),\varepsilon_{2}(-r))} \longrightarrow \dots \to (\pi_{S}^{*}\mathcal{T}_{i}\mathcal{F})(-i)\stackrel{(\varepsilon_{1}(-i),\varepsilon_{2}(-i))}\longrightarrow \dots \stackrel{(\varepsilon_{1}(-1),\varepsilon_{2}(-1))}\longrightarrow \pi_{S}^{*}\mathcal{T}_{0}\mathcal{F} \to \mathcal{F}\to 0 \end{equation} \normalsize in ${\bf{Mod}}(\mathbb{P}(f)).$ Moreover, each $\mathcal{F}\mapsto \mathcal{T}_{i}\mathcal{F}$ is an exact functor from ${\bf{MR}}(\mathbb{P}(f))$ to ${\bf{VB}}(f).$ \end{theorem} \begin{proof} Since $\mathcal{F}_{1}, \mathcal{F}_{2}$ both are Mumford-regular, so are $Z_{n-1}\mathcal{F}_{1}(n), Z_{n-1}\mathcal{F}_{2}(n)$ (see the proof of Theorem II.8.7.8 of \cite{wei 1}).
By Lemma \ref{surjection}, the sequences (\ref{relative Resolution seq}) are exact at $\mathcal{Z}_{n-1}\mathcal{F}(n),$ i.e., we have exact sequences \begin{equation}\label{relative exact Q} 0 \to \mathcal{Z}_{n}\mathcal{F}(n) \to \pi_{S}^{*}\mathcal{T}_{n}\mathcal{F} \stackrel{(\varepsilon_{1}, \varepsilon_{2})}\longrightarrow \mathcal{Z}_{n-1}\mathcal{F}(n)\to 0. \end{equation} Now the twists of the sequences (\ref{relative exact Q}) fit together into a sequence of the form (\ref{relative exact Q Resolution}). For the second part, each $\mathcal{F}\mapsto T_{i}\mathcal{F}$ is an exact functor from ${\bf {MR}(\mathbb{P}(\mathcal{E}))}$ to ${\bf{VB}}(X)$ by Corollary II.8.7.9 of \cite{wei 1}. Hence the assertion follows from Lemma \ref{exact seq}. \end{proof} \section{Projective bundle formula for relative $K_{0}^{He}$}\label{first main theorem} In this section, we prove that the projective bundle formula holds for Heller's relative $K_{0}^{He}$ of a flat map. Throughout this section, $f: X \to S$ is a flat map of quasi-projective schemes. Also, $\mathbb{P}(f)$ always means the map $\mathbb{P}(f^{*}\mathcal{E}) \to \mathbb{P}(\mathcal{E})$ as in diagram (\ref{basicdiagram1}) with rank$(\mathcal{E})= r+1.$ We observe in Lemma \ref{exact subcategory} that ${\bf{VB}}(f)$ is an exact subcategory of ${\bf{Mod}}(f).$ So we can define $K_{0}({\bf{VB}}(f))$ in the sense of Quillen's absolute $K_{0}$ of exact categories.
By definition, $K_{0}({\bf{VB}}(f))$ is the abelian group generated by the classes $[(V_{1}, \alpha, V_{2})],$ where $(V_{1}, \alpha, V_{2}) \in {\bf{VB}}(f),$ subject to the relations $[V^{'}] + [V^{''}] = [V]$ for every exact sequence $0 \to V^{'} \to V \to V^{''}\to 0$ in ${\bf{VB}}(f).$ It is denoted by $K_{0}^{Q}(f).$ Clearly, there is a natural surjection \begin{equation} \eta^{f}: K_{0}^{Q}(f) \to K_{0}^{He}(f) \end{equation} and $\operatorname{ker} (\eta^{f})$ is generated by $[(V_{1},\alpha,V_{2})] + [(V_{2},\beta,V_{3})] - [(V_{1},\beta\alpha,V_{3})]$ for every pair $(V_{1},\alpha,V_{2}), (V_{2},\beta,V_{3})$ of objects in ${\bf{VB}}(f).$ The $n$-th twist of ${\bf{MR}}(\mathbb{P}(f)),$ denoted ${\bf{MR}}(\mathbb{P}(f))(n),$ is the category consisting of objects $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})$ of ${\bf{VB}}(\mathbb{P}(f))$ such that $(\mathcal{F}_{1}(-n), \alpha(-n), \mathcal{F}_{2}(-n))$ is in ${\bf{MR}}(\mathbb{P}(f)).$ Each ${\bf{MR}}(\mathbb{P}(f))(n)$ is an exact category because the relative twisting is an exact functor (see Lemma \ref{shift exact}) and the Mumford-regular modules are closed under extensions (see Lemma II.8.7.4 of \cite{wei 1}). By Lemma \ref{besic1}(1), we have \small \begin{equation*} {\bf{MR}}(\mathbb{P}(f))={\bf{MR}}(\mathbb{P}(f))(0)\subseteq {\bf{MR}}(\mathbb{P}(f))(-1)\subseteq \dots \subseteq {\bf{MR}}(\mathbb{P}(f))(n)\subseteq {\bf{MR}}(\mathbb{P}(f))(n-1)\subseteq \dots \end{equation*} \normalsize \begin{theorem}\label{MR=VB} For all $n\leq 0,$ $K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))\cong K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))(n)\cong K_{0}^{Q}(\mathbb{P}(f))$ induced by the inclusion ${\bf{MR}}(\mathbb{P}(f))(n)\subset {\bf{VB}}(\mathbb{P}(f)).$ \end{theorem} \begin{proof} Let $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})\in {\bf{VB}}(\mathbb{P}(f)).$ By Lemma \ref{basic2}(1), $\mathcal{F}_{1}(n),$ $\mathcal{F}_{2}(n) \in {\bf {MR}(\mathbb{P}(\mathcal{E}))}$ for $n\geq 0$ large enough.
Then $$(\mathcal{F}_{1}(n), \alpha(n), \mathcal{F}_{2}(n))\in {\bf{MR}}(\mathbb{P}(f))(-n)$$ for $n\geq 0$ large enough. So, it is clear that $\cup_{n\leq 0} {\bf{MR}}(\mathbb{P}(f))(n)= {\bf{VB}}(\mathbb{P}(f)).$ Since $K_{0}^{Q}$ commutes with filtered colimits (see Example II.7.1.7 of \cite{wei 1}), we have $K_{0}^{Q}{\bf{VB}}(\mathbb{P}(f))= \varinjlim_{n} K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))(n).$ For each inclusion $${\bf{MR}}(\mathbb{P}(f))(n)\subseteq {\bf{MR}}(\mathbb{P}(f))(n-1),$$ we have the induced map $l_{n}:K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))(n) \to K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))(n-1).$ So, it is enough to show that each such $l_{n}$ is an isomorphism. For each $i>0,$ $$(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})\mapsto (\mathcal{F}_{1}(i)\otimes \pi_{S}^{*}\wedge^{i}\mathcal{E}, \alpha(i)\otimes id, \mathcal{F}_{2}(i)\otimes \pi_{S}^{*}\wedge^{i}\mathcal{E})$$ defines an exact functor from ${\bf{MR}}(\mathbb{P}(f))(n-1)$ to ${\bf{MR}}(\mathbb{P}(f))(n).$ It induces a homomorphism $\lambda_{i}: K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))(n-1) \to K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f))(n).$ For a vector bundle $\mathcal{F}$ in ${\bf{VB}}(\mathbb{P}(\mathcal{E})),$ we have the Koszul resolution (see the proof of Lemma 1.3 in Section 8 of \cite{DQ}) $$ 0 \to \mathcal{F}\to \mathcal{F}(1)\otimes \pi^{*}\wedge \mathcal{E}^{\vee} \to \dots \to \mathcal{F}(r+1) \otimes \pi^{*}\wedge^{r+1}\mathcal{E}^{\vee} \to 0.$$ Here $\mathcal{E}^{\vee}$ denotes the dual of $\mathcal{E}.$ Similarly, a relative version of the Koszul resolution is (by Lemma \ref{exact seq}) \begin{multline*} 0 \to (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \to (\mathcal{F}_{1}(1)\otimes \pi_{S}^{*}\wedge \mathcal{E}^{\vee}, \alpha(1)\otimes id, \mathcal{F}_{2}(1)\otimes \pi_{S}^{*}\wedge \mathcal{E}^{\vee}) \\ \to \dots \to (\mathcal{F}_{1}(r+1)\otimes \pi_{S}^{*}\wedge^{r+1}\mathcal{E}^{\vee}, \alpha(r+1)\otimes id, \mathcal{F}_{2}(r+1)\otimes \pi_{S}^{*}\wedge^{r+1}\mathcal{E}^{\vee}) \to 0.
\end{multline*} By the additivity theorem (see Theorem 2, Corollary 3 of \cite{DQ}), the map $\sum_{i>0} (-1)^{i-1} \lambda_{i}$ is an inverse to the map $l_{n}.$ Hence the assertion. \end{proof} Let $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})\in {\bf{VB}}(f).$ Then the assignment $$u_{i}: (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})\mapsto (\pi_{S}^{*}\mathcal{F}_{1}, \pi_{X}^{*}\alpha, \pi_{S}^{*}\mathcal{F}_{2})(-i)$$ defines an exact functor from ${\bf{VB}}(f)$ to ${\bf{VB}}(\mathbb{P}(f)).$ Let $u_{i*}$ denote the induced map $ K_{0}^{Q}(f) \to K_{0}^{Q}(\mathbb{P}(f)).$ For notational convenience, we prefer to write $\mathcal{F}_{k}$ (resp. $\pi_{S}^{*}\mathcal{F}_{k}$) instead of $(\mathcal{F}_{k1}, \alpha_{k}, \mathcal{F}_{k2})$ (resp. $(\pi_{S}^{*}\mathcal{F}_{k1}, \pi_{X}^{*}\alpha_{k}, \pi_{S}^{*}\mathcal{F}_{k2})$). We can now define a group homomorphism $$u^{Q}: K_{0}^{Q}(f)^{r+1} \to K_{0}^{Q}(\mathbb{P}(f))$$ by sending $([\mathcal{F}_{k}])_{k=0, 1, \dots, r}$ to $\sum_{k=0}^{r} u_{k*}[\mathcal{F}_{k}]=\sum_{k=0}^{r} [u_{k}\mathcal{F}_{k}]= \sum_{k=0}^{r}[\pi_{S}^{*}\mathcal{F}_{k}(-k)].$ Note that each $u_{i}$ also induces a map $K_{0}^{He}(f) \to K_{0}^{He}(\mathbb{P}(f)).$ Therefore, in a similar way we can define a group homomorphism $$u^{He}: K_{0}^{He}(f)^{r+1} \to K_{0}^{He}(\mathbb{P}(f)).$$ \begin{theorem}\label{Proj formula for bass k} The map $u^{Q}: K_{0}^{Q}(f)^{r+1} \to K_{0}^{Q}(\mathbb{P}(f))$ is an isomorphism.
\end{theorem} \begin{proof} By Theorem \ref{relative Q Resolution}, each $\mathcal{T}_{n}$ is an exact functor from ${\bf{MR}}(\mathbb{P}(f))$ to ${\bf{VB}}(f).$ Hence we can define a group homomorphism $$\varphi: K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f)) \to K_{0}^{Q}(f)^{r+1}, [\mathcal{F}] \mapsto ([\mathcal{T}_{0}\mathcal{F}], -[\mathcal{T}_{1}\mathcal{F}], \dots, (-1)^{r}[\mathcal{T}_{r}\mathcal{F}]),$$ where $\mathcal{F}$ denotes the triple $(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}).$ Then the composition map $$u^{Q}\varphi: K_{0}^{Q}(\mathbb{P}(f))\stackrel{\cong}\leftarrow K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f)) \stackrel{\varphi}\to K_{0}^{Q}(f)^{r+1} \stackrel{u^{Q}}\to K_{0}^{Q}(\mathbb{P}(f))$$ sends $[\mathcal{F}]$ to $\sum_{k=0}^{r}(-1)^{k}[(\pi^{*}\mathcal{T}_{k}\mathcal{F})(-k)],$ which is equal to $[\mathcal{F}]$ by Theorem \ref{relative Q Resolution} and the additivity theorem (see Theorem 2, Corollary 3 of \cite{DQ}). This shows that $u^{Q}$ is onto. The assignment $$v_{i}: \mathcal{F}:=(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}) \mapsto \pi_{S*}(\mathcal{F}(i)):=(\pi_{S*}(\mathcal{F}_{1}(i)), \pi_{X*}(\alpha (i)), \pi_{S*}(\mathcal{F}_{2}(i)))$$ is also an exact functor from ${\bf{MR}}(\mathbb{P}(f))$ to ${\bf{VB}}(f)$ by Lemmas \ref{basic3} and \ref{basic2}. Let $v_{i*}$ denote the induced map $$ K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f)) \to K_{0}^{Q}(f), [\mathcal{F}] \mapsto [v_{i}\mathcal{F}].$$ Using these $v_{i}$'s, we can define a group homomorphism $$v^{Q}: K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f)) \to K_{0}^{Q}(f)^{r+1}, [\mathcal{F}] \mapsto ([v_{0}\mathcal{F}], [v_{1}\mathcal{F}], \dots, [v_{r}\mathcal{F}]).$$ Then the composition map (using Theorem \ref{MR=VB}) $$v^{Q}u^{Q}: K_{0}^{Q}(f)^{r+1} \to K_{0}^{Q}(f)^{r+1}$$ is given by the matrix $(v_{i*}u_{j*}).$ Recall from Example II.
8.7.2 of \cite{wei 1} that for a quasi-coherent $\mathcal{O}_{X}$-module $\mathcal{N},$ we have $\pi_{*}\pi^{*} \mathcal{N}= \mathcal{N},$ $\pi_{*}\pi^{*} \mathcal{N}(n)= 0$ for $n<0$ and $\pi_{*}\pi^{*} \mathcal{N}(n)= {\rm {Sym}}_{n}\mathcal{E} \otimes \mathcal{N}$ for $n>0.$ Thus $$v_{i*}u_{j*}[(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})]= [(\pi_{S*}((\pi_{S}^{*}\mathcal{F}_{1})(i-j)), \pi_{X*}((\pi_{X}^{*}\alpha)(i-j)), \pi_{S*}((\pi_{S}^{*}\mathcal{F}_{2})(i-j)))].$$ Since the diagram $$\begin{CD} f^{*}\pi_{S*}\pi_{S}^{*}\mathcal{F}_{1}\cong \pi_{X*}\pi_{X}^{*}f^{*}\mathcal{F}_{1} @> = >> f^{*}\mathcal{F}_{1}\\ @V \pi_{X*}\pi_{X}^{*}\alpha VV @V \alpha VV \\ f^{*}\pi_{S*}\pi_{S}^{*}\mathcal{F}_{2}\cong \pi_{X*}\pi_{X}^{*}f^{*}\mathcal{F}_{2} @> = >> f^{*}\mathcal{F}_{2} \end{CD}$$ is commutative, $[(\pi_{S*}\pi_{S}^{*}\mathcal{F}_{1}, \pi_{X*}\pi_{X}^{*}\alpha, \pi_{S*}\pi_{S}^{*}\mathcal{F}_{2})]= [(\mathcal{F}_{1}, \alpha, \mathcal{F}_{2})]$ in $K_{0}^{Q}(f).$ This implies that $v_{i*}u_{j*}=0$ for $i<j$ and $v_{i*}u_{j*}= {\rm{id}}$ for $i=j.$ We get that $ (v_{i*}u_{j*})$ is a lower triangular matrix with all of its diagonal entries equal to ${\rm{id}}.$ Therefore $v^{Q}u^{Q}$ is an isomorphism and hence $u^{Q}$ is injective. \end{proof} Next, we prove a similar result for $K_{0}^{He}.$ \begin{theorem}\label{proj formula for k heller} The map $u^{He}: K_{0}^{He}(f)^{r+1} \to K_{0}^{He}(\mathbb{P}(f))$ is an isomorphism. \end{theorem} \begin{proof} We consider the following commutative diagram $$\begin{CD} 0 @>>> \operatorname{ker}(\eta^{MR}) @>>> K_{0}^{Q}{\bf{MR}}(\mathbb{P}(f)) @ > \eta^{MR} >> K_{0}^{He}(\mathbb{P}(f)) @>>> 0 \\ @. @VVV @V \cong VV @V = VV \\ 0 @>>> \operatorname{ker}(\eta^{\mathbb{P}(f)}) @>>> K_{0}^{Q}(\mathbb{P}(f)) @ >\eta^{\mathbb{P}(f)} >> K_{0}^{He}(\mathbb{P}(f)) @>>> 0, \end{CD}$$ where the middle map is an isomorphism by Theorem \ref{MR=VB}.
So we get $\operatorname{ker}(\eta^{MR})\cong \operatorname{ker}(\eta^{\mathbb{P}(f)}).$ Let $\mathcal{F}_{12}:= (\mathcal{F}_{1}, \alpha, \mathcal{F}_{2}),$ $\mathcal{F}_{23}:= (\mathcal{F}_{2}, \beta, \mathcal{F}_{3})$ and $\mathcal{F}_{13}:= (\mathcal{F}_{1}, \beta\alpha, \mathcal{F}_{3})$ be in ${\bf{MR}}(\mathbb{P}(f)).$ Note that $\varphi([\mathcal{F}_{12}] + [\mathcal{F}_{23}]- [\mathcal{F}_{13}])\in \operatorname{ker}((\eta^{f})^{r+1})$ whenever $ ([\mathcal{F}_{12}] + [\mathcal{F}_{23}]- [\mathcal{F}_{13}]) \in \operatorname{ker}(\eta^{MR})$ because each $\mathcal{T}_{n}$ is an exact functor from ${\bf{MR}}(\mathbb{P}(f))$ to ${\bf{VB}}(f)$ (see Theorem \ref{Proj formula for bass k} for the map $\varphi$). Then the composition map $$u^{Q}\varphi: \operatorname{ker}(\eta^{\mathbb{P}(f)})\stackrel{\cong}\leftarrow \operatorname{ker}(\eta^{MR}) \stackrel{\varphi}\to \operatorname{ker}((\eta^{f})^{r+1}) \stackrel{u^{Q}}\to \operatorname{ker}(\eta^{\mathbb{P}(f)})$$ sends $[\mathcal{F}_{12}] + [\mathcal{F}_{23}]- [\mathcal{F}_{13}]$ to $$\sum_{k=0}^{r}(-1)^{k}[(\pi_{S}^{*}\mathcal{T}_{k}\mathcal{F}_{12})(-k)] + \sum_{k=0}^{r}(-1)^{k}[(\pi_{S}^{*}\mathcal{T}_{k}\mathcal{F}_{23})(-k)] - \sum_{k=0}^{r}(-1)^{k}[(\pi_{S}^{*}\mathcal{T}_{k}\mathcal{F}_{13})(-k)]$$ which is equal to $[\mathcal{F}_{12}] + [\mathcal{F}_{23}]- [\mathcal{F}_{13}]$ by the additivity theorem (see Theorem 2, Corollary 3 of \cite{DQ}). This shows that $u^{Q}|_{\operatorname{ker}((\eta^{f})^{r+1})}$ is onto. Therefore, we get the desired isomorphism from the following commutative diagram $$\begin{CD} 0 @>>> \operatorname{ker}((\eta^{f})^{r+1}) @>>> K_{0}^{Q}(f)^{r+1} @ > (\eta^{f})^{r+1} >> K_{0}^{He}(f)^{r+1} @>>> 0 \\ @.
@VVV @V u^{Q} VV @V u^{He} VV \\ 0 @>>> \operatorname{ker}(\eta^{\mathbb{P}(f)}) @>>> K_{0}^{Q}(\mathbb{P}(f)) @ >\eta^{\mathbb{P}(f)} >> K_{0}^{He}(\mathbb{P}(f)) @>>> 0, \end{CD}$$ where $u^{Q}$ is an isomorphism by Theorem \ref{Proj formula for bass k} and the first vertical arrow is also an isomorphism by the above observation. \end{proof} \begin{corollary}\label{He=k} Suppose $\mathbb{P}(f),$ $f$ and $\mathcal{E}$ are as in Theorem \ref{proj formula for k heller}. Further, we assume that $X$ is an affine scheme. Then there is an isomorphism of groups $K_{0}^{He}(\mathbb{P}(f))\cong K_{0}(\mathbb{P}(f)).$ \end{corollary} \begin{proof} By Lemma \ref{projective bundle formula for K_n}, $K_{0}(f)^{r+1} \stackrel{\cong}\to K_{0}(\mathbb{P}(f)).$ We also have an isomorphism $K_{0}^{He}(f)^{r+1}\stackrel{\cong}\to K_{0}(f)^{r+1}$ by Lemma \ref{K^b= K for affine}. Hence we get $$K_{0}(\mathbb{P}(f))\stackrel{\cong}\leftarrow K_{0}(f)^{r+1} \stackrel{\cong}\leftarrow K_{0}^{He}(f)^{r+1} \stackrel{\cong}\to K_{0}^{He}(\mathbb{P}(f)),$$ where the last isomorphism is by Theorem \ref{proj formula for k heller}. \end{proof}
\section{Introduction} Intracranial aneurysms (IAs) are abnormal vessel wall dilatations in the cerebral vasculature and, according to a study that included $94,912$ subjects~\cite{vlak2011prevalence
}, have a high 3.2\% prevalence in the general population. Most IAs are small and it is estimated that 50-80\% do not rupture during a person's lifetime. Still, rupture of IA is one of the most common causes of subarachnoid hemorrhage (SAH) \cite{van2007subarachnoid}, a condition with 50\% mortality rate~\cite{etminan2019worldwide}. For small IAs with diameter $<5$ mm the chances of rupture are below 1\%, but increase with ageing and potential IA growth, whereas rupture risk is generally higher for larger IAs. Early detection of IAs is thus necessary to open a window of opportunity to mitigate rupture risk and/or to determine the best time and type of treatment. Current clinical practice is to search for IAs by visual inspection of 3D angiographic images like CTA, MRA and DSA. Such visual inspection is time consuming (10-15 minutes per case) and is prone to human error. Even skilled experts have a sensitivity of 88\% for small IAs on the CTA~\cite{yang_small_2017}. This is among the reasons why in recent years many researchers have worked extensively on computer-assisted IA detection. \begin{figure}[!t] \begin{center} \includegraphics[width=4.5in]{DSA_CTA_MRA.pdf} \end{center} \caption{\small Vascular 3D surface extracted from different modalities: MRA \textit{on the left}, DSA \textit{in the middle} and CTA \textit{on the right}. A skilled neurosurgeon marked the aneurysms, \textit{shown in yellow}.} \label{modalities} \end{figure} \subsection{Background} Deep machine learning has become the most successful technique for IA detection. Nakao \textit{et al.}~\cite{nakao2018deep} utilized a 6-layer 3D convolutional neural network (CNN) to find IAs in TOF-MRA images. Similarly, Ueda \textit{et al.}~\cite{ueda2019deep} used ResNet-18, pre-trained to detect four vascular pathologies and then fine-tuned to detect IAs in TOF-MRA images. 
Sichtermann \textit{et al.}~\cite{sichtermann2019deep} employed the "Deepmedic" dual-pathway CNN with 11 layers and validated it on 1.5T and 3T 3D TOF-MRA images. Jin \textit{et al.}~\cite{jin2019fully} trained and tested a combined U-net and BiConvLSTM on 2D DSA images. Duan \textit{et al.} \cite{duan2019automatic} proposed a two-stage algorithm: the first stage involved a model to detect candidate regions on the posterior communicating artery (PCoA) in 2D DSA images, while the second-stage model was used to detect IAs. Dai~\textit{et al.}~\cite{dai2020deep} trained a CNN model to detect aneurysms on select 2D projections from 3D CTA images. Deep learning methods are capable of detecting IAs regardless of their shape and size, but require access to a massive annotated dataset. Another limitation is their use of intensity information, which renders these methods applicable only to the modalities they were trained on and prevents them from aggregating information from different modalities during training. Modality-independent IA detection methods generally employ hand-crafted shape mapping and feature extraction. For instance, Jerman \textit{et al.}~\cite{jerman2015miccai} applied blobness and vesselness filters to 3D DSAs to locate potential IA locations and then encoded the filters' responses into rotation and scale invariant features using a spherical harmonics based local neighborhood representation. The authors hypothesized that the filters yield a similar response on other angiographic modalities and that the trained model is transferable to other modalities, but this was not demonstrated. Another approach by the same authors~\cite{jerman2017aneurysm} requires vessel segmentation and applies intra-vascular distance mapping to the extracted vascular surface to cast unstructured 3D information into 2D gridded maps. Based on such maps, CNNs were used for aneurysm site detection. The approach achieved a sensitivity of 100\% at 2.4 false positive IA detections per image.
While results were promising, validation was limited to only 15 3D DSA images containing 21 IAs. Zhou \textit{et al.}~\cite{zhou2019intracranial} cast IA detection as shape analysis. The segmented 3D cerebrovascular mesh model was parameterized into a planar flat-torus, and local and global geometric features such as Gaussian curvature, shape diameter function and wave kernel signature were computed and input to three GoogleNet Inception V3 models. All three models were used to jointly detect IAs with adaptively learned weights. However, the authors selected for validation only bifurcations on the anterior cerebral artery (ACA) and internal carotid artery (ICA). A common disadvantage of the three mentioned methods is that hand-crafted routines may be suboptimal in terms of IA detection performance and are often computationally demanding. A computer-assisted method that can detect IAs in all angiographic modalities is required, since 3DRA and DSA are used in interventional suites during acute states, while CTA and MRA are used in population screening and prevention. In addition, methods should aggregate information from different modalities during training, because the amount of annotated data available for validation is generally limited due to ethical reasons. A major limitation of some studies is that they focused only on certain areas of the cerebral vasculature (ICA, PCoA or ACA), even though IAs can develop elsewhere. For instance, 61\% of IAs are located on the ICA and ACA, while the remaining 39\% are located in other areas of the cerebral vasculature~\cite{brown2010unruptured}. Hence, it is not justified to focus only on certain locations, as this may bias the detection results. \subsection{Contributions} In this paper we aim to detect aneurysms from 3D meshes obtained from CTA, MRA and DSA images as shown in Figure~\ref{modalities}.
Unstructured point clouds were sampled from those meshes as input to a deep neural network (DNN), which was trained to classify each point as either IA or vessel. In the test stage, the responses across locally sampled point clouds were aggregated into an accumulated heatmap, which was output as the basis for IA detection. To demonstrate that our approach can be applied and transferred across different modalities, we trained our model on meshes obtained from one modality (DSA) and validated it on meshes of the other modalities (CTA, MRA). This paper has three major contributions: (1) a novel IA detection algorithm that is applicable to different angiographic modalities; (2) our approach achieved state-of-the-art detection sensitivity on the test and cross-modality validation datasets; (3) our approach achieved a significantly lower false positive rate per image compared to current state-of-the-art methods. \section{Data and Methods} \subsection{Data acquisition} Angiographic brain scans of 67 subjects were acquired at the University Medical Center Ljubljana, where the institutional ethics committee approved this study. Informed consent was obtained from each subject included in the study. Imaging followed a standard clinical imaging protocol and included 56 DSA, 5 CTA and 5 MRA angiographic scans of the cerebral vasculature. Sixty-three subjects had one or more unruptured IAs (76 in total), while the remaining 3 cases were aneurysm-free. The mean diameter of the observed IAs was $9.22$ mm, with 8\% small (diameter $<5$ mm), 66\% medium ($5$ mm $<$ diameter $<10$ mm) and 26\% large (diameter $>10$ mm) aneurysms.
\subsection{Aneurysm detection} This section presents a novel modality-agnostic method for the detection of IAs, which involves (1) triangulated surface mesh extraction from 3D angiographic images, (2) parcellation of the surface mesh into local patches of unstructured point clouds, (3) learning their invariant representation using a DNN for classification of points into either aneurysm or vessel and, in the testing stage, (4) aggregation of the predicted point classifications obtained on local surface patches into IA detection heatmaps across the entire vascular surface. For training and validating the IA detection method, the extracted surface meshes were presented to a skilled neurosurgeon, who annotated each IA by painting its surface. The flowchart of the proposed method is shown in Figure~\ref{flowchart}. The four steps are detailed in the next subsections. \begin{figure}[!t] \begin{center} \includegraphics[width=3.5in]{flow.pdf} \end{center} \caption{\small Flowchart of the proposed aneurysm detection method.} \label{flowchart} \end{figure} \subsubsection{(1) Surface mesh extraction.} Cerebrovascular angiographic modalities are 3D images that depict vascular structures with high intensity. Other anatomical structures may also be depicted, such as cranial bones in CTA and soft brain tissue in MRA. From the CTAs the cranial bone was removed using simple image cropping. Then we used interactive thresholding followed by the application of marching cubes and smooth non-shrinking algorithms \cite{larrabide2011three,cebral2001medical}. The resulting meshes were input to the aneurysm detection method. \begin{figure}[!t] \begin{center} \includegraphics[width=4.7in]{extraction.pdf} \end{center} \caption{\small Cerebral angiogram (\textit{left}) was interactively thresholded, and marching cubes and smooth non-shrinking algorithms were applied to extract a surface mesh (\textit{middle}). Yellow sections on the extracted mesh mark the aneurysm annotated by a skilled neurosurgeon.
The obtained mesh was further used to extract unstructured point clouds (\textit{right}), each containing 3000 points.} \label{heatmap} \end{figure} \subsubsection{(2) Surface mesh parcellation.} The surface mesh was transformed into unstructured point clouds by randomly selecting seed points on the whole 3D vascular surface mesh and, for each seed, sampling the 3000 closest mesh vertices according to geodesic distance. Hence, each point cloud contained exactly $N=3000$ points. For prediction purposes, the surface was repeatedly parcellated, with random seed selection, until every point on the 3D vascular mesh was sampled at least once. For training purposes, each 3D mesh was parcellated into 170 point clouds, with a 50:120 ratio of clouds containing the aneurysm and vessel classes, respectively. \subsubsection{(3) Point cloud representation learning for classification.} We employ the PointNet architecture~\cite{qi2017pointnet} to learn an ordering-, scale- and rotation-invariant representation of the input point clouds $\{x_i;\, i=1,\ldots,N\}$, $x_i\in\mathbb{R}^n$. The idea is to approximate a general function $f(\{x_1,\ldots,x_N\})\approx g(h(x_1),\ldots,h(x_N))$ by finding two mapping functions, i.e.\ $h: \mathbb{R}^n \rightarrow \mathbb{R}^K$ and a symmetric $g: \mathbb{R}^K \times \cdots \times \mathbb{R}^K \rightarrow \mathbb{R}$. Functions $h(\cdot)$ and $g(\cdot)$ are modeled as DNNs that take the $N$ points as input, apply input and feature transformations, and then aggregate point features by max pooling. In this way, the input shape is summarized by a sparse set of key points encoded in the $K$-dimensional global feature. The network outputs classification scores for the two classes, aneurysm and vessel. For DNN training, the point clouds were input with the manual annotations as target output; we used the negative log-likelihood loss with the Adam optimizer run for 100 epochs, a learning rate of 0.001, a decay rate of 0.5 and a decay step of 20.
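The symmetric-function construction above can be illustrated with a minimal numpy sketch. The random weights and the feature dimension ($K=8$) are illustrative stand-ins for the trained PointNet parameters, not the actual network; the point is that max pooling over per-point features makes the composite function invariant to the ordering of the input cloud:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-point feature map h: R^3 -> R^K (a fixed random two-layer MLP
# stands in for the trained network; K = 8 here for brevity).
W1, b1 = rng.normal(size=(3, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 8)), rng.normal(size=8)

def h(points):                      # points: (N, 3) -> (N, K)
    return np.maximum(points @ W1 + b1, 0.0) @ W2 + b2

def g(global_feature):              # R^K -> R, a stand-in classifier head
    return np.tanh(global_feature).sum()

def f(points):
    # Max pooling over the point axis makes f invariant to point ordering.
    return g(h(points).max(axis=0))

cloud = rng.normal(size=(3000, 3))          # one parcellated point cloud
shuffled = rng.permutation(cloud, axis=0)   # same points, different order
assert np.allclose(f(cloud), f(shuffled))   # permutation invariance holds
```

The same argument is why the learned classifier does not depend on the arbitrary order in which the 3000 mesh vertices of a patch are sampled.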
\begin{figure} \begin{center} \includegraphics[width=4.5in]{heatmap.pdf} \end{center} \caption{\small Predictions on local point clouds were aggregated into an aneurysm heatmap and superimposed onto the 3D mesh (\textit{left}). Exemplar predictions (\textit{right}) on DSA, MRA and CTA (\textit{top to bottom}).} \label{heat} \end{figure} \subsubsection{(4) Prediction aggregation.} After extracting the surface mesh from the input image and parcellating it into unstructured point clouds, the trained DNN was used to predict IA point labels for each of the point clouds. Then, the obtained soft class predictions were aggregated across all point clouds and their values normalized by the number of point predictions. The final output was a heatmap for the aneurysm class as in Figure \ref{heat}, which indicates potential IA locations. \subsubsection{Implementation.} The methods were implemented in Python 3.6 and PyTorch 1.4.0, and executed on a workstation with an Intel i7 CPU, 32 GB RAM and an NVidia GPU. \section{Experiments \& Results} The performance of the proposed aneurysm detection method was evaluated by comparing the obtained heatmaps to the manual annotation of IAs on the same surface as made by the skilled neurosurgeon. A simple threshold was applied to the heatmap to get binary results, and then all surface sections larger than 50 connected points were labeled. If a labeled section overlapped with the manually annotated section, this was considered a true positive (TP); if not, it was considered a false positive (FP). A false negative (FN) was noted in case no labeled section overlapped with the manual annotation. For threshold-free assessment we used the free receiver operating characteristic (FROC) curve, with the number of FPs per image (FP/I) on the horizontal axis and the TP rate (sensitivity; TPR=TP/(TP+FN)) on the vertical axis, to evaluate detection performance. The area under the FROC curve (AUC) was also computed.
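The per-image counting scheme above can be sketched as follows, under the simplifying (hypothetical) assumption that each labeled or annotated section is represented as a set of surface-point indices, with two sections overlapping when they share at least one point:

```python
# TP/FP/FN counting for one image: predicted_sections and annotated_sections
# are lists of sets of mesh-point indices (a simplified stand-in for the
# connected surface regions described in the text).
def evaluate_image(predicted_sections, annotated_sections):
    # An annotated aneurysm is a TP if some predicted section overlaps it.
    tp = sum(any(p & a for p in predicted_sections) for a in annotated_sections)
    # A predicted section is an FP if it overlaps no annotation.
    fp = sum(not any(p & a for a in annotated_sections) for p in predicted_sections)
    fn = len(annotated_sections) - tp
    return tp, fp, fn

# One toy image: one aneurysm hit, one spurious detection.
pred = [{1, 2, 3}, {40, 41}]
gt = [{3, 4, 5}]
tp, fp, fn = evaluate_image(pred, gt)
tpr = tp / (tp + fn)       # sensitivity for this image
```

Sweeping the heatmap threshold and repeating these counts over all images yields the (FP/I, TPR) pairs that trace out the FROC curve.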
\subsubsection{Same-modality detection performance.} The 57 DSA scans were used for method training and evaluation based on three-fold cross-validation (3$\times$19). In each fold, the model was trained on 38 and tested on 19 images. On the test set, the model successfully detected 63 out of 64 aneurysms (across all folds), while detecting only 13 FPs (0.2 FP/I). The sensitivity with respect to FP/I, as shown in the FROC curve in Figure~\ref{FROC}, \textit{left}, was higher than 90\% already at 0.02 FP/I. The AUC was 0.91. \subsubsection{Cross-modality detection performance.} The cross-modality validation set included surface meshes extracted from CTA and MRA images. These were used to validate the cross-modality performance using models trained only on the DSAs. The method detected 12 out of 12 aneurysms, with 5 FPs (0.5 FP/I). The corresponding FROC curve, shown in Figure~\ref{FROC}, \textit{right}, indicates 100\% sensitivity at 0.15 FP/I and an AUC of 0.91. These results are consistent with the performance of the method in the case of same-modality detection. \begin{table}[!t] \caption{Comparison of state-of-the-art and the proposed methods.} \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Method} & \textbf{Input Modality} & \textbf{Num. cases} & \textbf{FPs/image} & \textbf{TPR (\%)} \\ \hline\hline Dai et al. (2020) \cite{dai2020deep} & 2D CTA & 208 & 8.6 & 91.8 \\ \hline Zhou et al. (2019) \cite{zhou2019intracranial} & \begin{tabular}[c]{@{}c@{}}mesh extracted from 3DRA,\\flat torus map with features \end{tabular} & 121 & 0.8 & 94.8 \\ \hline Sichtermann et al. (2019) \cite{sichtermann2019deep} & 3D TOF-MRA & 85 & 8.14 & 87 \\ \hline Ueda et al. (2019) \cite{ueda2019deep} & 3D TOF-MRA & 748 & 10 & 91 \\ \hline Jin et al. (2019) \cite{jin2019fully} & 2D DSA & 493 & 3.77 & 89.3 \\ \hline Nakao et al. (2018) \cite{nakao2018deep} & 3D TOF-MRA & 450 & 2.9 & 94.2 \\ \hline Jerman et al.
(2017) \cite{jerman2017aneurysm} & 3D DSA & 15 & 2.4 & \bf 100 \\ \hline Proposed & \begin{tabular}[c]{@{}c@{}}mesh extracted from\\ 3D DSA, CTA and MRA \end{tabular} & 67 & \textbf{0.2} & 98.6 \\ \hline \end{tabular} \label{table} \end{table} \begin{figure}[!t] \begin{center} \includegraphics[width=4in]{free_roc.pdf} \end{center} \caption{\small Free receiver operating characteristic curve (FROC) across all modalities (\textit{left}) and for the cross-modality experiment (\textit{right}).} \label{FROC} \end{figure} \subsubsection{Comparison to state-of-the-art.} Across the DSA (using cross-validation), CTA and MRA images, the proposed method achieved an overall sensitivity of 98.6\% (75/76 aneurysms detected). The method's execution time was less than one minute per image. Table~\ref{table} summarizes the results of the state-of-the-art and the proposed methods. Comparing our approach to state-of-the-art methods showed that ours achieved the lowest number of FP/I among all methods while, at the same time, achieving a higher or comparable level of sensitivity. The CNN-based method by Jerman \textit{et al.}~\cite{jerman2017aneurysm} achieved a slightly higher sensitivity compared to the proposed method; however, their database was rather small, with only 15 3D DSA images, and their method had substantially more false positives (2.4 FP/I). While their method could in principle accommodate multi-modal detection, its performance in such a scenario was not verified. \section{Discussion \& Conclusion} A novel method to detect aneurysms from 3D meshes obtained from CTA, MRA and DSA images was proposed. The method applies 3D mesh parcellation into unstructured point clouds, a DNN for point classification and aggregation of the predictions onto the original 3D mesh. Since it is independent of intensity, the method is applicable to different angiographic modalities. The evaluated performance surpassed that of state-of-the-art detection methods (Table~\ref{table}).
Namely, the proposed method achieved high sensitivity on the same-modality (98.6\%) and cross-modality (100.0\%) test datasets, with the fewest false positives ($<0.2$ per image) compared to all other methods. More importantly, training was executed only with DSA cases, whilst the sensitivity (TPR) and specificity (FP/I) were consistent when testing the method on DSA or on CTA and MRA. To the best of our knowledge, our method was the only one tested in the most difficult multi-modal scenario, i.e.\ training on one angiographic modality and testing on others. In our case, we used DSA scans for training and tested on CTA and MRA scans. Note that the CTA and MRA scans had much lower resolution ($<$1/3 the resolution of DSA) and additional artifacts, such as overlapping bony and tissue structures, that adversely impact segmentation quality. Nevertheless, the observed IA detection performance was excellent and consistent with the performance in the case of same-modality detection. One limitation of this study is the number of angiographic scans used for validation. With an increasing number of scans from different modalities, more challenging cases, such as small aneurysms, would inevitably arise. Our dataset contained 7 small aneurysms (diameter $<$5 mm). The benefit of our approach, however, is that it accommodates training datasets aggregated across modalities and, albeit not tested here due to the limited number of cases, we expect this would prove beneficial to the method's performance. Since our method was applied and validated to reliably detect intracranial aneurysms on the three most common cerebrovascular angiographic modalities (CTA, DSA and MRA), it seems suitable for application in computer-assisted detection systems. The output prediction, in the form of a 3D heatmap superimposed on the extracted vascular surface, can be used to help the neurosurgeon detect aneurysms.
Currently, visual inspection and detection may take from 5 to 15 minutes per image, depending on the image modality and the position and size of the aneurysm. Because the search for aneurysms is still done manually, a tool to visualize the 3D vascular system and propose possible aneurysm locations would render (small) aneurysm detection more sensitive (visual sensitivity on CTAs was 88\%~\cite{yang_small_2017}) and more reliable. Using our method, it takes less than a minute to detect and visualize aneurysm locations, saving a substantial amount of manual effort and reducing inspection time. \bibliographystyle{splncs04}
\section{Comparison with experimental quantities} Figure \ref{f:experiment} illustrates the main experimental spectroscopic results on the superlattices (SLs) \cite{gu2018,laverock2017}, in which transport measurements established that the 2:7 and 3:6 SLs were insulating, whereas the 13:4 SL was metallic. The 6:5 SL was found to be metallic at room temperature, with a metal-insulator transition (MIT) at low temperature. The experimental results (performed at room temperature) show the evolution in correlated electron behavior extracted from x-ray absorption spectroscopy (XAS) and resonant inelastic x-ray scattering (RIXS). \begin{figure}[b] \centerline{\includegraphics[width=0.5\linewidth]{vopars_dmft.pdf}} \caption{Evolution of correlated electron behavior from experimental x-ray absorption spectroscopy (XAS) and resonant inelastic x-ray scattering (RIXS) measurements of SrVO$_3$/SrTiO$_3$ superlattices, reproduced from Ref.~\onlinecite{laverock2017}. From top to bottom, the evolution in (a)~metallicity of the SLs, (b)~quasiparticle (QP) bandwidth, (c)~QP spectral weight and (d)~the energy of the upper Hubbard band is shown.} \label{f:experiment} \end{figure} For completeness, we briefly outline the experimental properties and how they were extracted. The metallicity [Fig.~\ref{f:experiment}(a)] was extracted from both XAS and RIXS, as the leading edge of the O {\em K} edge XAS and from the intensity of the quasi-elastic peak in V {\em L} edge RIXS. The quasiparticle (QP) bandwidth [Fig.~\ref{f:experiment}(b)] was extracted from the SrVO$_3$ layer contribution to the O {\em K} edge XAS as the full-width at half-maximum of the QP peak. The QP spectral weight [Fig.~\ref{f:experiment}(c)] was also extracted from the SrVO$_3$ layer contribution to the O {\em K} edge XAS, as the ratio of the area under the QP peak to the total area under the unoccupied V $3d$ states (i.e.\ the sum of the QP, upper Hubbard band (UHB) and $e_g$ spectral weights). Occupied states are not accessible in O {\em K} edge XAS. Finally, the UHB energy [Fig.~\ref{f:experiment}(d)] is accessible to both XAS and RIXS.
From O {\em K} edge XAS, the UHB peak is directly observed, and its center is shown here. From V {\em L} edge RIXS, the UHB energy is available from transitions from occupied QP states to the unoccupied UHB. Both show equivalent evolution with SL structure. \begin{figure}[b!] \centerline{\includegraphics[width=0.9\linewidth]{exp_dia2.pdf}} \caption{A schematic illustration showing how the variables in the theory-experimental comparison were determined.} \label{f:exp_dia} \end{figure} Figure \ref{f:exp_dia} shows a schematic illustration comparing the extracted experimental quantities with their DFT+DMFT (density functional theory with dynamical mean-field theory) definitions. The QP bandwidth has been extracted from the DMFT spectral function by obtaining the width defined by the minima around the central QP peak. The QP ratio was determined by taking the ratio of the quasiparticle weight (labeled $Q$ in Fig.\ \ref{f:exp_dia}) and the UHB weight (labeled $U$). Finally, the energy of the UHB was obtained by locating the peak in the DMFT spectral function, referenced to $\omega=0$. In the experimental RIXS process, the UHB peak energy represents the peak in the joint QP and UHB density of states, and therefore is referenced to an energy $\omega < 0$. To compare the theoretical and experimental quantities, we therefore shift all theoretical quantities to match for the 2:7 SL (the shift is $-0.584$~eV). \begin{figure*}[t] \centerline{\includegraphics[width=0.8\linewidth]{DoS2.pdf}} \caption{The DFT V $t_{2g}$ partial density of states (PDOS) of the bulk SrVO$_3$, 2:7, 3:6 and 6:5 SLs. The dashed borders outline which plots belong to the corresponding structure. Each panel shows the PDOS of the inequivalent V atoms in each structure, labeled by their impurity number in the DMFT cycles (imp 1 refers to the interface). The arrows indicate the contributions of quantized states to each inequivalent V atom.
The greatest contributions are from the relatively flat bands along $\Gamma$-$X$.} \label{f:pdos} \end{figure*} \section{Density Functional Theory calculations} DFT calculations were performed using the {\sc Elk} FP-LAPW (full potential linearized augmented plane wave) code within the local density approximation (LDA) \cite{elk}. The results are in excellent agreement with previous pseudopotential calculations within the generalized gradient approximation (GGA) of the same SLs \cite{laverock2017}. Previous PES work \cite{yoshimatsu2010} shows how the dimensionality of SrVO$_3$ influences the MIT. In their results, a film of ten SrVO$_3$ layers closely resembles bulk behavior. From this, we approximate the 13:4 SL in Ref.\ \onlinecite{laverock2017} with bulk DFT+DMFT calculations. Self-consistency was achieved on a $12 \times 12 \times 4$ mesh in the full Brillouin zone (BZ) for relatively low computational cost with sufficient sampling, corresponding to 84 k-points in the irreducible ($1/16^{\rm th}$) BZ. To stabilize the DFT self-consistent cycles, small values of mixing of the new potentials were used, at the cost of computational time. For bulk SrVO$_3$, a k-mesh of $12 \times 12 \times 12$ was used (84 k-points in the $1/48^{\rm th}$ irreducible BZ). The partial densities of states (PDOS) of the $t_{2g}$ orbitals are shown in Fig.~\ref{f:pdos} for the bulk and the 2:7, 3:6 and 6:5 SLs. Sharp peaks in the PDOS reflect the quantized electronic structure along $c$. For the inner layers of the thicker SLs, the PDOS more closely resembles that of bulk SrVO$_3$, e.g.\ impurity 3 of the 6:5 SL. Near the interface, the $xz(yz)$ PDOS extends to higher energies as a result of mixing of these states with Ti states in the SrTiO$_3$ layer. \begin{figure*}[t] \centerline{\includegraphics[width=0.92\linewidth]{characters.pdf}} \caption{The DFT V and Ti $t_{2g}$ band characters of the 6:5 superlattice (SL).
The thickness of the lines indicates the total character of each V (left) and Ti (right) site. The top row shows the band characters at the interface, while the bottom row shows the character in the center of each layer. On the right, a schematic illustration of the real-space probability distribution of the quantized subbands in the out-of-plane direction of the SrTiO$_3$ layers is shown. The edge of the box does not coincide with the interface Ti ion or its neighboring SrO layer due to the finite phase accumulated at the interface.} \label{f:char} \end{figure*} The characters of the subbands are shown in Fig.\ \ref{f:char} for each of the different V and Ti sites, using the 6:5 SL as an example. As expected, the V bands dominate the character at the Fermi level, with weak contributions from interfacial Ti ions. The spatial distribution of the subband wavefunctions of the SrVO$_3$ quantum wells can be seen directly in the characters. The V $n=0$ subband, with the greatest amplitude in the center of the well, has strong character in the central V ion and weak character at the interface. Correspondingly, the V $n=2$ subband has the strongest character at the interface and is almost absent in the second layer, close to where a node is expected in the quantum well wavefunction. At higher energies, the quantized V $e_g$ subbands appear above 1~eV. The Ti band characters shed light on the broadening of the Ti bands in the DFT+DMFT calculations shown in Fig.\ 3 of the main manuscript. As above, the central Ti ion contributes strongly to the Ti $n=0$ and $n=2$ subbands. On the other hand, the interfacial Ti ion contributes significantly to the Ti $n=1$, 2 and 3 subbands, with the largest contribution to the Ti $n=2$ subband. The interfacial Ti ion mixes most strongly with the V orbitals, which are the correlated orbitals in the subsequent DMFT cycle.
This demonstrates how the spatial penetration of the Ti $n=1$ and $n=2$ subbands into the correlated SrVO$_3$ layers leads to substantial broadening of these subbands in the subsequent DMFT cycle. In contrast, the Ti $n=0$ subband is spatially deep within the SrTiO$_3$ layer and does not feel the effects of the correlated SrVO$_3$ orbitals very strongly, remaining reasonably sharp even in the insulating phase [Fig.\ 3(j) of the main manuscript]. \section{Quantized tight-binding model} \subsection{Bulk tight-binding bands} The tight-binding (TB) model was constructed up to 12$^{\rm th}$ nearest neighbors, consisting of 24 hopping terms, $t_i$, up to $[l,m,n] = [2,2,2]$. For the $xy$ band the TB dispersion, $\varepsilon_{xy}$, is given by, \begin{equation} \varepsilon_{xy} = E_{xy}^{0} + \sum_{lmn} t_{xy}^{lmn} \cos \left( l k_x + m k_y + n k_z \right), \label{e:tb} \end{equation} where the band energy, $E_{xy}^0$, corresponds to the crystal field energy. Since the purpose of our model is to accurately describe the bulk 3D DFT band structure, we do not attempt to analyze individual parameters, as has been done before \cite{liebsch2003}. Although terms corresponding to the 5$^{\rm th}$ nearest neighbor and higher had a magnitude of less than 10~meV, these terms were found to be necessary to adequately describe the FP-LAPW band structure. After fitting this model to the bulk LDA band structure in the full cubic Brillouin zone, we find the r.m.s.\ difference is less than 11~meV, with a maximum difference of 70~meV. 
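As a concrete illustration, the tight-binding dispersion above can be evaluated literally on a k-grid. The hopping amplitudes below are illustrative placeholders, not the 24 fitted values; symmetry-related neighbor vectors are listed explicitly so the code is a direct transcription of the cosine sum:

```python
import numpy as np

# Illustrative hopping amplitudes t_xy^{lmn} (eV) for a toy xy band, with
# symmetry partners listed explicitly. The lattice constant is set to 1, so
# k is in units of 1/a over the cubic Brillouin zone [-pi, pi]^3.
E0 = 0.6
t = {(1, 0, 0): -0.28, (-1, 0, 0): -0.28,
     (0, 1, 0): -0.28, (0, -1, 0): -0.28,
     (0, 0, 1): -0.03, (0, 0, -1): -0.03}

def eps_xy(kx, ky, kz):
    """Literal evaluation of eps = E0 + sum_lmn t^{lmn} cos(l kx + m ky + n kz)."""
    return E0 + sum(tj * np.cos(l*kx + m*ky + n*kz)
                    for (l, m, n), tj in t.items())

# Bandwidth estimate on a coarse k-grid (band extrema sit at Gamma and R here)
ks = np.linspace(-np.pi, np.pi, 25)
E = np.array([eps_xy(kx, ky, kz) for kx in ks for ky in ks for kz in ks])
W = E.max() - E.min()   # equals 2 * sum|t| for this all-negative toy set
```

In the same spirit, the fitted model multiplies each $t_i$ by a bandwidth factor $W_i$ and adjusts $E^0_i$ per band, which only rescales and shifts this dispersion.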
\subsection{Quantum confinement} In order to account for the effects of quantum confinement of the V $3d$ electrons in the SrVO$_3$ layers, we apply the Bohr-Sommerfeld phase accumulation model \cite{chiang2000}, \begin{equation} 2k_z^n(E)L + \delta(E) = 2\pi n, \end{equation} where $n = 0, 1, 2, \dots$ is the quantum number, $2k_z^n(E)L$ is the total phase accumulated in traveling through the SrVO$_3$ layer and back, $k_z^n(E)$ is the quantized out-of-plane wavevector, $L = mc$ is the SrVO$_3$ layer thickness ($m$ and $c$ are the number of SrVO$_3$ layers and $c$-axis lattice parameter of the SrVO$_3$ layers, respectively). $\delta(E)$ is the total phase acquired due to reflection at both SrVO$_3$/SrTiO$_3$ interfaces. For asymmetric quantum wells, e.g.\ thin overlayers with a vacuum interface, $\delta = \phi_1 + \phi_2$ is composed of different individual phase shifts at each reflection; in our case of symmetric barriers, $\delta = 2\phi$, where $\phi$ is the phase at a single SrVO$_3$/SrTiO$_3$ interface. In general, $\delta = \delta(E)$ is explicitly dependent on the energy of the confined state. However, in order to simplify the fitting, and avoid unnecessary degrees of freedom, we instead implicitly include the energy dependence through different phases for each quantum number, $\delta = \delta_n$. With this, the quantization condition reduces to, \begin{equation} k_z^n = \frac{2 \pi n - \delta_n}{2 m c}, \end{equation} from which the quantized TB dispersion, $E_n(k_x,k_y,k_z^n)$, may be evaluated. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{plotphase.pdf} \caption{The phase shift at the SrVO$_3$/SrTiO$_3$ interface for each quantized state in the SLs (note $\phi_n=\delta_n/2$ is shown), shown against the mean band energy.} \label{f:phases} \end{figure} \begin{table}[b] \begin{tabular}{c|c|ccc|ccc} SL & CF splitting & \multicolumn{3}{c|}{Intrinsic} & \multicolumn{3}{c}{Quantized bands} \\ \hspace*{0.30in} & (meV) & $W_{xy}$ & $W_{yz}$ & anis. 
& $W_{xy}$ & $W_{yz}$ & anis. \\ \hline 6:5 & 33 & 0.966 & 0.959 & 0.993 & 0.950 & 0.900 & 0.948 \\ 3:6 & 40 & 0.971 & 0.960 & 0.989 & 0.935 & 0.798 & 0.853 \\ 2:7 & 51 & 0.963 & 0.952 & 0.988 & 0.911 & 0.713 & 0.782 \\ \end{tabular} \caption{\label{t:qwbands} Results of fitting the FP-LAPW {\sc Elk} bands to a quantized tight-binding model. The crystal field (CF) splitting is the energy difference, $E^0_{yz} - E^0_{xy}$. The bandwidth (relative to bulk SrVO$_3$), $W_i$, of the $xy$ and $yz$ bands are shown for both intrinsic bands (before quantization) and for the quantized bands, alongside their anisotropy ($W_{yz}/W_{xy}$).} \end{table} \subsection{Full quantization parameters} For each SL, four parameters were fitted to describe the ``intrinsic'' band structure, and $n$ parameters described the confined bands. The quantized TB dispersion was fitted to the FP-LAPW {\sc Elk} band structure of the SLs. The four intrinsic parameters consist of band centers ($E_i^0$ in Eqn.~\ref{e:tb}) and band widths for the $xy$ and $xz(yz)$ bands. The band width parameter, $W_i$, is a multiplicative factor to the hopping terms, $t_i$ (the hopping terms themselves were fixed to the cubic bulk parameters determined above, effectively fixing the shape of the band). In addition to the intrinsic parameters, the phase shifts for each confined state, $\delta_n$, were also fitted. The fitted phases are shown in Fig.~\ref{f:phases} against the mean energy of each state, and closely follow the same roughly linear relationship with energy for all SLs. The results of fitting the FP-LAPW bands to the quantized TB model are shown in Table~\ref{t:qwbands}, separated into contributions from the underlying bulk ``intrinsic'' bands and after quantizing these bands. An example of the fitted band structure is shown in Fig.\ 1 of the main paper for the 2:7 SL. 
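For illustration, the quantization condition above reduces to a one-line evaluation of the allowed $k_z^n$. The layer count, lattice parameter and phases below are hypothetical placeholders rather than the fitted values:

```python
import numpy as np

# k_z^n = (2*pi*n - delta_n) / (2*m*c) from the phase-accumulation model.
# Illustrative inputs (NOT the fitted values): a 2-layer SrVO3 well with an
# assumed c-axis parameter and two hypothetical reflection phases delta_n.
m, c = 2, 3.9                      # number of SrVO3 layers, c in angstrom
delta = {0: -1.0, 1: -0.5}         # hypothetical delta_n in radians

kz = {n: (2*np.pi*n - d) / (2*m*c) for n, d in delta.items()}
```

Inserting each $k_z^n$ into the tight-binding dispersion then yields the discrete set of quantized subbands $E_n(k_x,k_y,k_z^n)$.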
Since its wavefunction is perpendicular to the quantization axis, the $xy$ bandwidth is hardly affected by confinement, but the $xz(yz)$ bands are significantly narrowed compared with their intrinsic (bulk-like) counterparts. The confinement leads to the preferential filling of the quantized $xz(yz)$ out-of-plane bands as their $k_z$ dispersion is suppressed and they become 1D-like, which also pulls the Fermi level down slightly. We note that confinement alone is capable of reproducing the SL band structure to a large extent, correctly describing the narrowing of the quantized bandwidth and its variation with SrVO$_3$ layer thickness. This has been checked by restricting the ``intrinsic'' bands in the fit to the bulk bands (i.e.\ setting $E^0_i$ and $W_i$ to the bulk ones). This provides additional support that the band narrowing that eventually drives the MIT is primarily due to quantization effects rather than crystal field (CF) effects. \begin{figure*}[th!] \centerline{\includegraphics[width=0.85\linewidth]{U_SL.pdf}} \caption{The effect of $U$ on the orbital charge $n_e$ (top), the quasiparticle residue $Z$ (middle) and the spectral function around the Fermi level (bottom) for each SL from one-shot and fully charge self-consistent (FCSC) DFT+DMFT calculations. The dashed line represents the bulk degenerate orbital charge.} \label{f:Uimp} \end{figure*} \section{Dynamical Mean-Field Theory calculations} \begin{figure}[t] \centerline{\includegraphics[width=1.0\linewidth]{sc_Aw.pdf}} \caption{Spectral functions of the 2:7 (top), 3:6 (middle) and 6:5 (bottom) SLs from fully charge self-consistent calculations, showing $xy$ (left) and $xz(yz)$ (right) orbitals.
The spectra of the insulating 2:7 SL have been shifted such that the Fermi level lies at the center of the band gap of the $xy$ spectrum.} \label{f:Aw} \end{figure} The output from the {\sc Elk} DFT calculation was imported to the TRIQS library \cite{triqs} via an in-house interface with the dmftproj \cite{aichhorn2016} application. As in the literature \cite{zhong2015,bhandary2016,schuler2018}, only the V $t_{2g}$ bands were projected (using Wannier projectors) \cite{aichhorn2009} to construct the LDA Hamiltonian in Wannier space. These projectors were constructed in the following correlated energy windows: 2:7 SL: $[-1.36, 2.0]$~eV, 3:6 SL: $[-1.29, 2.0]$~eV, 6:5 SL: $[-1.29, 2.0]$~eV and bulk: $[-1.50, 1.90]$~eV. These windows were constructed such that all of the V $t_{2g}$ bands are included and the valence charge above the lower bound, corresponding to the charge in the V $t_{2g}$ orbitals, is equal to 1 per V impurity. Each DMFT cycle calculation used $84 \times 10^6$ Monte Carlo sweeps. In order to avoid potential complications from the ill-posed problem of analytic continuation, quantities were determined from the Green's function and self-energy on the imaginary time ($\tau$) or frequency axis as much as possible. The charge of each orbital ($n_{e}$) was determined by, \begin{equation} n_{e} = \frac{1}{\beta}\sum_n G(i\omega_n)e^{i\omega_n 0^{+}} \label{e:chg} \end{equation} within the TRIQS library. As there is negligible inter-orbital overlap on the impurity, $n_{e}$ is diagonal. The spectral function at the Fermi level, $A(\omega=0)$, is an averaged quantity over a frequency window approximately equal to $T$ \cite{zhong2015}. Here, $A(\omega=0)$ is determined directly from the imaginary time Green's function by, \begin{equation} A(\omega=0) = \frac{\beta G(\tau=\frac{1}{2}\beta)}{\pi}, \label{e:A0} \end{equation} where $\beta$ is the inverse temperature in natural units.
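As a numerical check of the Matsubara expression for $n_e$ above (with the $1/\beta$ normalization made explicit), a toy single-level model reproduces the exact Fermi occupation. The parameters are arbitrary, and the $e^{i\omega_n 0^{+}}$ convergence factor is handled by subtracting the $1/(i\omega_n)$ tail, whose regularized sum is $1/2$:

```python
import numpy as np

# Occupation of a single level at energy eps from its Matsubara Green's
# function G(iw_n) = 1/(iw_n - eps). Summing G minus its 1/(iw_n) tail makes
# the truncated sum converge; the tail itself contributes exactly 1/2.
beta, eps = 10.0, 0.5                      # inverse temperature, level energy
n = np.arange(-10_000, 10_000)
iwn = 1j * (2*n + 1) * np.pi / beta        # fermionic Matsubara frequencies
g = 1.0 / (iwn - eps)
occ = 0.5 + np.sum(g - 1.0/iwn).real / beta

fermi = 1.0 / (np.exp(beta * eps) + 1.0)   # exact Fermi-Dirac occupation
```

In the actual calculations this bookkeeping is done internally by the TRIQS density routine; the sketch only illustrates why the convergence factor matters.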
The value of the QP residue $Z$ was determined by \begin{equation} Z = \bigg(1-\frac{\partial \Im[\Sigma(\mathrm{i}\mkern1mu\omega_n)]}{\partial \omega_n}\bigg|_{\omega_n\rightarrow 0^{+}}\bigg)^{-1}, \label{e:Z} \end{equation} where $Z$ is evaluated from the derivative of the imaginary part of the Matsubara self-energy as $\omega_n\rightarrow 0^{+}$. For $U$ values far from the Fermi liquid regime (namely for the 6:5 $Z$ values close to the MIT), the $Z$ values were approximated by using the derivative of the interpolated self-energies at $\omega_n = 0$. There are two ways to realize the insulating solution. First, by a divergence in $\Im[\Sigma(\mathrm{i}\mkern1mu\omega_n)]$, which comes naturally with $Z = 0$. Second, the combination of $\Re[\Sigma(\mathrm{i}\mkern1mu\omega_n)]$ and the chemical potential might move the pole position outside of the non-interacting bandwidth, meaning that no QP peak is possible in the Green's function. In the latter case, we have $A(\omega=0)$ vanishing with non-diverging $\Im[\Sigma(\mathrm{i}\mkern1mu\omega_n)]$. In that case, which we also see here, we set $Z$ to zero manually. From this, the MIT $U$ value ($U_{\rm MIT}$) is defined as the lowest $U$ value at which $A(\omega=0)=0$. The spectral functions, $A(\omega)$, for each impurity were calculated from $G(\tau)$ using the {\em LineFitAnalyzer} technique of the maximum entropy analytic continuation method implemented within the {\em MaxEnt} application of TRIQS \cite{maxent}. The k-resolved spectral functions $A(\textbf{k},\omega)$ were calculated from the analytically continued self-energy.
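The extraction of $Z$ from the low-frequency slope of the Matsubara self-energy can be sketched on a toy model self-energy (all parameters are illustrative stand-ins for the QMC-sampled $\Sigma$):

```python
import numpy as np

# Toy estimate of the quasiparticle residue Z from the slope of Im Sigma at
# the lowest Matsubara frequencies. The model
#   Im Sigma(iw_n) = -(1/Z_true - 1) * w_n + b * w_n**3
# plays the role of the QMC data; b adds a small nonlinearity.
beta, Z_true, b = 200.0, 0.4, 0.05
wn = (2*np.arange(5) + 1) * np.pi / beta          # first few frequencies
im_sigma = -(1.0/Z_true - 1.0) * wn + b * wn**3

# Slope from the first two points, i.e. a linear extrapolation to w_n -> 0+
slope = (im_sigma[1] - im_sigma[0]) / (wn[1] - wn[0])
Z_est = 1.0 / (1.0 - slope)                       # Z = (1 - dImSigma/dw)^-1
```

Because the slope is evaluated at the discrete frequencies $\omega_n=(2n+1)\pi/\beta$, lower temperatures (larger $\beta$) bring the extrapolation closer to the true $\omega_n\rightarrow 0^{+}$ limit.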
The effective and correlation subband mass enhancement factors, $1/Z_{\nu}$ and $1/Z^{\rm c}_{\nu}$, were calculated from the ratios of the Fermi velocities using \begin{equation} Z^{\rm c}_{\nu} = \frac{v^c_{\rm F}}{v^{\rm QTB}_{\rm F}} \end{equation} and \begin{equation} Z_{\nu} = \frac{v^c_{\rm F}}{v^i_{\rm F}}. \end{equation} Here, the Fermi velocities were determined from the gradient of the linearly expanded band dispersions along M-X around ${k}_{\rm F}$ of the DFT+DMFT subbands ($v^c_{\rm F}$), the quantized bands from QTB ($v^{\rm QTB}_{\rm F}$) and the intrinsic (bulk-like) TB bands ($v^i_{\rm F}$). The intrinsic bands were used as they incorporate the effect of renormalization due to strain. Therefore, $Z^{\rm c}_{\nu}$ and $Z_{\nu}$ describe the effect of renormalization from correlations and from the combination of correlations and confinement (band) effects, respectively. The DFT+DMFT subband energy centers, $E_{\nu,\mathbf{k}}$, were calculated using \begin{equation} E_{\nu,\mathbf{k}}=\epsilon_{\nu,\mathbf{k}} - \mu + \Re[\Sigma_{\nu}(\mathbf{k},\omega=E_{\nu,\mathbf{k}})], \label{e:Enu} \end{equation} where $\epsilon_{\nu,\mathbf{k}}$ is the DFT energy, $\mu$ is the chemical potential and $\Re[\Sigma_{\nu}(\mathbf{k},\omega)]$ is the real part of the diagonal upfolded self-energy elements on the real frequency axis. The QP lifetime in the inset of Fig.\ 4 of the main manuscript was determined from the inverse imaginary part of the analytically continued upfolded self-energy. Finally, the subband energies at the $\Gamma$ high symmetry point in Fig.\ 4 of the main manuscript were determined from Eqn.~\ref{e:Enu}. \begin{figure}[t!] \centerline{\includegraphics[width=1.0\linewidth]{sc_chg.pdf}} \caption{ The averaged orbital charge over all layers from the DFT and DFT+DMFT Wannier V $xy$ and $xz(yz)$ orbitals for each SL and bulk. This includes charges from DFT, DFT+DMFT and average orbital charge per layer (orb. ave.)
for each SL and bulk.} \label{f:chg} \end{figure} \subsection{One-shot and FCSC DFT+DMFT results} The main manuscript presents fully charge self-consistent (FCSC) DFT+DMFT calculations. Here, we present the results of one-shot (OS) DFT+DMFT for comparison. Overall, charge self-consistency slightly adjusts some details of the results, but the main conclusions of our study are already present in one-shot calculations. Figure \ref{f:Uimp} shows the $U$-dependent MIT for each SL and different DFT+DMFT methods. The behavior of the OS and FCSC calculations is very similar, exhibiting a similar $U_{\rm MIT}$ with similar characteristics, e.g.\ $A(\omega=0)$ and $Z$. Some differences are observed in the orbital polarization between the two methods, whereby the polarization is somewhat suppressed in the FCSC calculation compared with OS. This behavior, most notable for the 2:7 SL, is consistent with other studies \cite{bhandary2016,schuler2018,hampel2019}, and is caused by the charge redistribution with the rest of the system at the DFT stage. This trend from 2:7 to bulk is also seen in Fig.\ \ref{f:Zcomp} for the orbitally-averaged values of $Z$, where there are some differences in $\bar{Z}$ for the 2:7 SL between OS and FCSC, but the bulk values are very similar. \begin{figure}[t!] \centerline{\includegraphics[width=1.0\linewidth]{Zbar_comp.pdf}} \caption{ The comparison of the orbitally-averaged quasiparticle residue, $\bar{Z}$, between the one-shot (OS) and fully charge self-consistent (FCSC) DFT+DMFT methods. The plot lines are guides to the eye.} \label{f:Zcomp} \end{figure} An important note to make about Fig.\ \ref{f:Uimp} is that $Z$ at the interface (imp 1) for the $xz(yz)$ orbitals tends to zero first for each SL. This suggests that the weight from the $xz(yz)$ QP peak depletes first. Therefore, once the interface $xz(yz)$ QP state has been fully depleted, the SL transitions into the insulating state.
This indicates that the interface between the oxides has a strong influence on the MIT. The $A(\omega=0)$ for imp 1 of the 6:5 also tends to zero, which strengthens this argument at least for that SL. \begin{figure}[t!] \centerline{\includegraphics[width=1.0\linewidth]{Elk_Wien_mono2.pdf}} \caption{The orbital A($\omega$) comparisons between the one-shot (OS) and fully charge self-consistent (FCSC) DFT+DMFT methods from different input DFT codes for monolayer SrVO$_3$.} \label{f:mono} \end{figure} \section{Other Significant SL FCSC Results} Fig.~\ref{f:Aw} shows the A($\omega$) of each correlated impurity orbital in each SL at $U=5.7$ eV (the value used in the theoretical-experimental comparisons). It is evident from the absence or presence of the QP peak at the Fermi level that the 2:7 SL is insulating while the 3:6 and 6:5 SLs are metallic. There are sharp features in the QP peaks around the Fermi level for the 3:6 and 6:5 SLs. Such features are often attributed to spurious noise from the analytic continuation procedure; however, this may not be the case here, as the quantized bands are present around the Fermi level. The peak positions of the Hubbard bands (notably the UHB) are closer in energy to the Fermi level for the interface layer (impurity 1) compared to the other layers for the 3:6 and 6:5. This is another indication that the interface layer is more correlated than the other layers. The splitting of the orbital degeneracy strongly affects the polarization of the orbital charge, as shown in Fig.~\ref{f:chg}. It is interesting to note that reducing the number of layers significantly increases the charge in the $xz(yz)$ orbitals, which appear to tend to half filling (whereas the $xy$ orbitals tend towards zero charge). This is a likely consequence of these orbitals trying to reduce the potential energy, analogous to what is seen in previous monolayer calculations.
The reduction in the orbitally-averaged DFT charge with fewer SrVO$_3$ layers is likely due to hybridization with Ti at the interface. \subsection{Elk-TRIQS interface test: monolayer SrVO$_3$} The results presented used an in-house interface between {\sc Elk} and TRIQS, so this section presents comparison results between {\sc Elk} and Wien2k inputs into TRIQS to show that the interface works for a similar system, namely monolayer SrVO$_3$. The monolayer SrVO$_3$ calculation was set up in the same way and using the same parameters as in Ref.\ \onlinecite{schuler2018}. Figure \ref{f:mono} shows the comparison between $A(\omega)$ calculated from the different DFT code inputs. This comparison shows excellent agreement between the different inputs for the different DFT+DMFT methods, confirming that the interface between {\sc Elk} and TRIQS reliably performs DFT+DMFT calculations. \section{Effects of strain} We performed volume-conserving strain calculations on bulk SrVO$_3$ to investigate the effect the CF splitting has on the MIT while the bandwidths of the $t_{2g}$ orbitals remain approximately unchanged. Compressive strain of 1\% was applied along the $c$-axis; the other axes were tensively strained to conserve volume compared with the bulk. This strain was chosen to yield a CF splitting of 53~meV, slightly larger than but comparable to the CF splitting of the 2:7 SL. The strained FCSC $U_{\text{MIT}}$ is approximately 6.525~eV, the same as for the bulk. The OS strained calculation had a slightly lower $U_{\text{MIT}}$ of 6.475 eV. Due to the small change in $U_{\text{MIT}}$, the CF splitting alone is insufficient to cause the MIT in these SLs.
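As a quick consistency check on the strain geometry above, the tensile in-plane strain required to conserve the cell volume under a 1\% $c$-axis compression follows from $(1+\epsilon_{ab})^2(1-\epsilon_c)=1$. A minimal sketch (assuming a tetragonal cell, for illustration only):

```python
import math

def volume_conserving_inplane_strain(eps_c):
    """Tensile strain on each in-plane axis that conserves the cell
    volume when the c-axis is compressed by eps_c (e.g. 0.01 = 1%).

    Volume conservation for a tetragonal cell:
        (1 + eps_ab)**2 * (1 - eps_c) = 1
    """
    return 1.0 / math.sqrt(1.0 - eps_c) - 1.0

eps_ab = volume_conserving_inplane_strain(0.01)
print(f"{eps_ab:.4%}")  # -> 0.5038%
```

Each in-plane axis is thus stretched by roughly 0.5\% for the 1\% compressive $c$-axis strain used above.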
\section{Introduction} Engineering of local magnetic properties of continuous thin films has long been an important topic in design and fabrication of future spintronic devices, high-density magnetic
data-storage devices, and magnetologic devices. Since Chappert \textit{et al.}~first reported the ability of ion irradiation to modify local magnetic properties without altering topographic features,\cite{Chappert1998} ion-induced magnetic patterning has been investigated intensively and demonstrated via patterned modulation of magnetic anisotropy,\cite{McCord2005, McCord2009, Merkel2008, Jaafar2011} saturation magnetization,\cite{McCord2008,Bali2014} or exchange bias (EB).\cite{Fassbender2008} Most of these modulations rely mainly on changes in the interfacial structures, but, in the case of alloys, also on changes in the chemical ordering.\cite{Ravelosona2000, Menendez2008} Ion irradiation with sufficient energy and fluence has been used in most studies to achieve changes in local magnetic properties for magnetic patterning. By contrast, plasma treatment was utilized in very few experiments for the same purpose.\cite{Menendez2010} Among potential materials for ultrahigh-density magnetic recording media, the FePd ferromagnetic (FM) compound has been investigated intensively in recent years. FePd with a tetragonal \textit{L}1$_0$ phase has attracted much attention in light of its uniaxial magnetic anisotropy (UMA),\cite{Ivanov1973} perpendicular magnetic anisotropy (PMA),\cite{Wei2009} and high resistance to corrosion.\cite{Kryder2008} With particular fabrication methods such as specific deposition geometries or the use of buffer layers, UMA and/or weak PMA are found in some FePd and other Fe-based alloys even without long-range \textit{L}1$_0$ ordering.\cite{Chi2012,Clavero2006,Jaafar2011} Capping with suitable materials such as Pd on FePd has been demonstrated to induce a specific interaction at the interface due to the anisotropy of FePd,\cite{Durr1999,Clavero2008} which we consider a potential means of magnetic engineering.
In this study, we demonstrate a simple method that utilizes e-beam lithography and O$_2$- or Ar-plasma treatment to magnetically pattern an FePd thin film with a surface Fe-oxide layer formed upon ambient exposure. The behaviors of the magnetic microstructures are investigated in detail. It is observed that the plasma treatment promotes further Fe oxidation, which prominently enhances the Pd concentration in the Pd-rich phase\cite{Hsu2017,Cialone2017} formed beneath the Fe oxide on the upper side of the FePd layer. This alters the magnetic properties of the system, leading to interesting effects of domain pinning and competition between the UMA and EB observed in magneto-optic Kerr effect (MOKE) microscopy experiments. \section{Material and methods} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig1m2.pdf} \caption{(Color online) (a)--(g) illustrate the surface patterning process including e-beam lithography and plasma treatment. (h) and (i) are the optical images of the Fe oxide/FePd film at the (d) and (g) stages, respectively. (j) exhibits the atomic force microscopy image of the surface-patterned Fe oxide/FePd film, with the dashed-line profile shown in (k). }\label{Fig1} \end{figure} The 30-nm-thick FePd (50:50) alloy films were deposited on Al$_2$O$_3$(0001) substrates by e-beam heated co-evaporation in an ultrahigh vacuum chamber with a base pressure of $3\times10^{-9}$ mbar. The two evaporation guns were both aligned at 45$^{\circ}$ to the surface normal. This oblique deposition geometry allows UMA to develop in the film plane.\cite{Chi2012} The alloy compositions and film thicknesses were controlled by the respective deposition rates of the elements, and were calibrated by Auger electron spectroscopy, X-ray photoelectron spectroscopy (XPS), atomic force microscopy, and transmission electron microscopy with energy dispersive X-ray spectroscopy.
X-ray diffraction results showed that the films grown do not exhibit long-range \textit{L}1$_0$ ordering. The FePd films were then naturally oxidized after exposure to ambient conditions, which led to the formation of a self-assembled thin Fe-oxide layer on the surfaces. E-beam lithography and O$_2$- or Ar-plasma treatment (Harrick Plasma Cleaner PDC-32G, radio frequency 13.56 MHz) were utilized for the patterning process on the surface of Fe oxide/FePd. The surface morphologies and chemical compositions were examined by atomic force microscopy and XPS, respectively. The microscopic magnetic behaviors, including magnetic hysteresis loops and magnetic domain structures, were monitored by a MOKE microscope in the presence of an in-plane magnetic field $H$.\cite{Chang2018} As shown in Figs.~\ref{Fig1}(a)--1(g), e-beam lithography was performed to create a microstructure pattern in poly(methyl methacrylate) (PMMA) for subsequent selective plasma treatment of the film surface, which results in a patterned Fe-oxide layer on top of the FePd alloy film. The optical images of an Fe oxide/FePd film at the stages of Figs.~\ref{Fig1}(d) and 1(g) are shown in Figs.~\ref{Fig1}(h) and 1(i), respectively. Fig.~\ref{Fig1}(h) shows a patterned PMMA layer with arrays of $50 \times 50$ $\upmu$m$^2$ square holes spaced 50 ${\upmu}$m apart. At a plasma-chamber base pressure of 200 mTorr, 10.5-W O$_2$ plasma was applied for 3 minutes to bombard the square areas, leaving the areas under PMMA intact. The plasma effect on the squares is hardly observable in the optical image in Fig.~\ref{Fig1}(i), but manifests itself in the atomic force microscopy image in Fig.~\ref{Fig1}(j), with the dashed-line profile shown in Fig.~\ref{Fig1}(k). In the intact region outside the squares, the FePd surface is still covered by the Fe-oxide clusters naturally self-assembled after ambient exposure.
Inside the squares, however, the Fe-oxide layer has become much smoother with reduced grain sizes after the O$_2$ plasma treatment, and has been determined by the XPS depth-profiling technique to be only $\sim$1 nm thick. \section{Experimental results and discussion} \subsection{Magnetic patterning with O$_2$- or Ar-plasma treatment} \begin{figure*} \centering \includegraphics[width=1\textwidth]{fig2m2_ang.pdf} \caption{(Color online) (a) \textit{Longitudinal} MOKE hysteresis loops of the surface-patterned Fe oxide/FePd film with the magnetic field along the directions close to the magnetic easy (red curve) and hard (blue curve) axes, respectively. (b) \textit{Transverse} MOKE hysteresis loops of the Fe oxide/FePd film with the magnetic field along the direction close to the hard axis before (black curve) and after (red curve) plasma treatment, respectively. (c) Evolutions of the magnetic domain structures simultaneously recorded with the hysteresis loops in (b). Each image size is $450 \times 350$ $\upmu$m$^2$.}\label{Fig2} \end{figure*} The FePd alloy films reveal UMA induced by the oblique co-deposition. Fig.~\ref{Fig2}(a) shows the MOKE hysteresis loops of a surface-patterned Fe oxide/FePd film measured in the \textit{longitudinal} geometry with the magnetic field applied along the directions close to the magnetic easy (red curve) and hard (blue curve) axes, respectively. (The direction of the applied field or the Kerr sensitivity throughout the experiment is at an angle of $\sim$$15^{\circ}$ to the hard or easy axis, and these directions are labeled in the figures as $x$ and $y$ axes, respectively.) When the field direction is close to the easy axis, a square hysteresis loop is observed with a magnetic coercivity $H_\mathrm{c}$ of $\sim$4 Oe. When the field is instead applied in a direction close to the hard axis, which is in the film plane orthogonal to the easy axis, a slim hysteresis loop with extremely small remanence is observed.
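The contrast between the square easy-axis loop and the slim hard-axis loop is what the standard Stoner--Wohlfarth coherent-rotation model of UMA predicts. The following minimal sketch is illustrative only (reduced units, not fitted to the film); the $\sim$15$^{\circ}$ misalignment of the applied field in our geometry is mimicked by the angle $\phi_h$:

```python
import math

def sw_loop(phi_h, h_max=2.0, n=200, k=0.5):
    """Single-domain Stoner-Wohlfarth field sweep (reduced units).

    Energy: e(theta) = k*sin(theta)**2 - h*cos(theta - phi_h),
    with the easy axis at theta = 0 and the field at angle phi_h.
    Returns (fields, m), where m is the magnetization component
    along the field, traced down and then back up through the loop.
    """
    down = [h_max - 2.0 * h_max * i / (n - 1) for i in range(n)]
    fields = down + down[::-1]
    theta, m = 0.0, []
    for h in fields:
        # relax theta to the nearest local energy minimum
        for _ in range(2000):
            grad = k * math.sin(2.0 * theta) + h * math.sin(theta - phi_h)
            theta -= 0.01 * grad
        m.append(math.cos(theta - phi_h))
    return fields, m
```

With the field $\sim$15$^{\circ}$ from the easy axis the sweep gives an open, square loop with an abrupt switch and near-full remanence; at $\sim$75$^{\circ}$ (i.e.\ close to the hard axis) the rotation is almost reversible, giving a slim loop with small remanence.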
Fig.~\ref{Fig2}(b) shows the MOKE hysteresis loops measured with the magnetic field applied along the direction close to the hard axis in the \textit{transverse} geometry, in which the plane of incidence is orthogonal to the magnetic field. The black and red curves represent the transverse MOKE hysteresis loops measured from the same region of the film before (Fig.~\ref{Fig1}(d)) and after (Fig.~\ref{Fig1}(g)) plasma treatment, respectively. The hysteresis loop becomes wider after plasma treatment. The asymmetry of the hysteresis loops may originate from imperfect alignment between the field and the magnetic structure with higher-order anisotropies in a complex heterostructure.\cite{Camarero2005} Displayed in Fig.~\ref{Fig2}(c) is the evolution of the magnetic domain structures simultaneously recorded with the hysteresis loops in Fig.~\ref{Fig2}(b) with increasing magnetic field, and the effect of surface patterning can be clearly seen in these Kerr images. Both series of the images before and after plasma treatment show the preferred alignment of the magnetic domains along the easy axis. However, the evolution patterns are quite different from each other. Before plasma treatment with the patterned PMMA layer on the surface, there is no clear correlation between the magnetic domain structure and the arrays of $50 \times 50$ $\upmu$m$^2$ squares in PMMA. This indicates that the PMMA capping does not induce any observable change in the magnetic properties of the Fe oxide/FePd film, and thus the magnetic domain structure is irregular. By contrast, after plasma treatment and PMMA removal, a magnetic domain structure with periodic stripes is observed. During the reversal of the magnetization of the film from the negative (dark gray) to the positive (light gray) direction, the magnetic domains in the plasma-treated squares seem to be pinned, presenting a delay in their magnetization reversal, until a larger magnetic field is applied.
Therefore, the field ($\sim$15 Oe) required for the film after patterned plasma treatment to complete the magnetization reversal is higher than that ($\sim$9 Oe) before plasma treatment. This is consistent with the observation in Fig.~\ref{Fig2}(b), where the after-plasma hysteresis loop is wider than the before-plasma loop. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{fig3p_ang.pdf} \caption{(Color online) (a) Magnetic domain structure of the surface-patterned Fe oxide/FePd film at $H=8.5$ Oe. The arrows indicate the antiparallel magnetization directions of the stripe domains. (b) Spatially-resolved transverse MOKE hysteresis loops from areas A (plasma-treated), B (intact), and C (intact between squares along the easy axis) indicated in (a), respectively. (c) \& (d) Zero-field transverse MOKE images at $\phi=45^\circ$ and $-45^\circ$, respectively, where $\phi$ is the angle between the current plane of incidence and that in (a). (e) \& (f) Zero-field polar MOKE images at $\phi=45^\circ$ and $-40^\circ$, respectively. Contrast has been enhanced in (c)--(f) for clarity.}\label{Fig3} \end{figure} Fig.~\ref{Fig3} illustrates the detailed orientations of the magnetic moments of the surface-patterned Fe oxide/FePd film at $H=8.5$ Oe. In the middle of the magnetization reversal when the field is around 8.5 Oe, the magnetic moments still prefer to align along the easy axis instead of along the direction of the magnetic field, but are constructed into an antiparallel configuration with a geometry matching the columns of the patterned square arrays. The plasma treatment inside the squares induces remarkable pinning of magnetic domains. The spatially-resolved transverse MOKE hysteresis loops of the plasma-treated area A and of the intact area B as indicated in Fig.~\ref{Fig3}(a) are plotted in Fig.~\ref{Fig3}(b) for comparison, together with the hysteresis loop of the intact area C between the squares along the easy axis.
The magnetic moments in areas A and C start to flip up gradually (i.e., to turn from wholly to partially dark gray) at a field of around 8.5 Oe, whereas those in area B start to flip up earlier at a lower field of around 6.5 Oe. These fields are indicated on the respective hysteresis loops in Fig.~\ref{Fig3}(b) with solid dots, located at the sharp turning points in the lowest parts of the loop curves. This again suggests that the domain pinning effect is induced by plasma treatment in area A. The magnetic domain in area C between the squares, on the other hand, strongly depends on the anisotropy of the neighboring squares because of lateral modulation along the easy axis.\cite{Jaafar2011,Hierro-Rodriguez2012,Li2002,Menendez2017} Therefore, the field required for area C to reverse the magnetization is about the same as that for area A, which results in dark stripes connecting the squares along the easy axis in the Kerr image. Exhibited in Figs.~\ref{Fig3}(c) and \ref{Fig3}(d) is the observable reversal of the contrast in the transverse MOKE signal between inside and outside the squares at zero field as the plane of incidence changes from $\phi=45^\circ$ to $-45^\circ$, where $\phi$ is the angle of the present plane of incidence to that (i.e., the $y$ axis) in Fig.~\ref{Fig3}(a). (Contrast has been enhanced in Figs.~\ref{Fig3}(c)--\ref{Fig3}(f) for clarity.) This indicates that plasma treatment truly modifies the magnetic anisotropy of the film, not just its surface morphology, and the easy axis of the new anisotropy may be at an angle of $\sim$$-45^\circ$ to the $y$ axis in Fig.~\ref{Fig3}(a), i.e., $\sim$$-30^\circ$ to the original easy axis. Polar MOKE images at different angles are also obtained as shown in Figs.~\ref{Fig3}(e) and \ref{Fig3}(f), which interestingly reveal perpendicular magnetization of the film and again the contrast between inside and outside the squares.
The perpendicular magnetization can be attributed to the anisotropy of FePd, which is known to be a ferromagnet (FM) with weak PMA.\cite{Durr1999} \begin{figure*} \centering \includegraphics[width=1\textwidth]{Fig4a_ang.pdf} \caption{(Color online) Evolutions of the magnetic domain structures as the magnetic field along the direction close to the hard axis is monotonically varied for (a) an O$_2$-plasma treated Fe oxide/FePd film with arrays of $25 \times 25$ $\upmu$m$^2$ squares with central square islands, and for (b) an Ar-plasma treated Fe oxide/FePd film with arrays of $50 \times 50$ $\upmu$m$^2$ squares. Yellow regions in the first image of (a) illustrate some of the plasma-treated areas, with a central $9 \times 9$ $\upmu$m$^2$ square island in each square left untreated under PMMA protection during plasma bombardment. Yellow arrows in the fifth image in (a) indicate magnetization reversals occurring on the central islands.}\label{Fig4a} \end{figure*} The structured magnetic domains observed in the surface-patterned Fe oxide/FePd films demonstrate that the method of surface patterning with plasma treatment can modulate the magnetic behaviors of a continuous film and create designed domains at the micro- or potentially even nanoscale. E-beam lithography provides high pattern resolution and excellent design freedom, which allows great flexibility in future developments and applications of magnetic patterning. Shown in Fig.~\ref{Fig4a}(a) is an example of a different pattern composed of smaller squares of sides 25 $\upmu$m, spaced 25 ${\upmu}$m apart. Each square has a central square island of side 9 $\upmu$m that was previously protected by PMMA from O$_2$ plasma. 
As one can see from the series of images from left to right with increasing magnetic field applied along the direction close to the hard axis, the magnetic patterning and domain pinning phenomena here are similar to those observed in the previous case of $50 \times 50$ $\upmu$m$^2$ squares. In the present case, however, traits of the central-island effect are found: fine light-gray domain stripes (some indicated with yellow arrows) run across the squares through the central islands, showing that the reversal of the magnetization inside a square starts from the \textit{unpinned} central island previously protected from plasma. Fig.~\ref{Fig4a}(b), on the other hand, shows the result of $50 \times 50$ $\upmu$m$^2$ squares treated with Ar plasma instead of O$_2$ plasma, demonstrating that Ar plasma has similar patterning effects to those of O$_2$ plasma. This implies that the sample surface treated with Ar plasma may have undergone structural modification similar to that with O$_2$-plasma treatment. \subsection{Effect of plasma treatment time on magnetic patterning} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig4m_ang.pdf} \caption{(Color online) (a) Transverse MOKE hysteresis loops of a surface-patterned Fe oxide/FePd film with different treatment times of O$_2$ plasma from 0 to 180 seconds. The lower panel offers a zoomed-in view. (b) Kerr images taken simultaneously with the hysteresis loops in (a) from $H=5$ Oe to 9 Oe. Patterned PMMA is left on the sample surface for easy recognition of the squares.}\label{Fig4} \end{figure} The effect of plasma treatment is further examined by studying the evolution of the magnetic domain structure as a function of the treatment time on another Fe oxide/FePd film. Shown in Figs.~\ref{Fig4}(a) and \ref{Fig4}(b) are the transverse MOKE hysteresis loops and the Kerr images after different O$_2$-plasma treatment durations with the field direction close to the hard axis.
The lower panel of Fig.~\ref{Fig4}(a) is a zoomed-in view of the upper panel and focuses on the magnetization reversal process displayed in Fig.~\ref{Fig4}(b). It can be seen that the domain-pinning behavior of this film is most prominent at a treatment time of 90 seconds, from the fact that the periodic dark stripes in Fig.~\ref{Fig4}(b) are most persistent at 90 seconds as the field is increased to 9 Oe. As the treatment time is increased further from this optimal value, the domain-pinning effect becomes less pronounced. The hysteresis loops in the lower panel of Fig.~\ref{Fig4}(a) also show that the Kerr signal of the 90-second curve is smaller than all the others when $H\sim 9$ Oe, presenting the most prominent delay in magnetization reversal. The most probable reason that long treatment times fail to pin domains is that excessive plasma bombardment leads to serious interfacial mixing, which is assumed to reduce the exchange interaction across the interface.\cite{Fassbender2004} As the Fe-oxide layer becomes thinner and thus less protective during longer plasma treatment, ion bombardment of the FePd layer may cause disorder effects that lead to essential changes in the magnetic properties and exchange interaction.\cite{Fassbender2008,Fassbender2003,Ehresmann2011,Schmidt2014,Bennett2018,Gaul2016} To investigate the possibility of the disorder effects in our case, the penetration depths of plasma ions into the film and the displacement damage profile are estimated by \textsc{srim} simulation.\cite{Ziegler2010} The simulation results confirm that the ions are mostly stopped in the oxide layer of the films treated with the optimal plasma treatment time, making the disorder effects negligible in the FePd layer. The mechanisms involved in the magnetic patterning are most likely to occur mainly in the FePd layer, where a Pd-rich phase is formed upon Fe oxidation, as will be discussed in detail in the following sections.
\subsection{XPS analyses}\label{xps} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Fig5x.pdf} \caption{(Color online) (a) Fe 2\textit{p} and (b) O 1\textit{s} XPS of the film surface before and after O$_2$-plasma treatment, respectively. (c) Fe 2\textit{p} and (d) O 1\textit{s} XPS of the film surface before and after Ar-plasma treatment, respectively. The Fe 2\textit{p} peaks of $\alpha$-Fe$_2$O$_3$ are observed at 710.5 eV and 724.0 eV, and the O 1\textit{s} peak around 529.7 eV.}\label{Fig5} \end{figure} The composition of the surface Fe-oxide layer of the film is analyzed using XPS. Displayed in Fig.~\ref{Fig5}(a) are the Fe 2\textit{p} spectra before and after O$_2$-plasma treatment, respectively. It can be seen that, after plasma treatment, the Fe 2\textit{p}$_{3/2}$ peak of $\alpha$-Fe$_2$O$_3$ at 710.5 eV and the Fe 2\textit{p}$_{1/2}$ peak at 724.0 eV are both remarkably enhanced. The satellite peak of Fe 2\textit{p}$_{3/2}$ for $\alpha$-Fe$_2$O$_3$ is also observed around 719.5 eV.\cite{Mills1983} Other forms of Fe oxides such as Fe$_3$O$_4$ may coexist in smaller proportions in the oxide layer, if their signature peaks are hidden by the main $\alpha$-Fe$_2$O$_3$ peaks. The Fe metal signal at 707 eV from the FePd underneath the oxide layer can also be seen. Presented in Fig.~\ref{Fig5}(b) are the O 1\textit{s} spectra respectively before and after O$_2$-plasma treatment, which also demonstrate an obvious enhancement of the $\alpha$-Fe$_2$O$_3$ signal around 529.7 eV. An additional peak is observed around 531.8 eV, which may be attributed to oxygen vacancies\cite{Wang2019} or surface contaminants of carbon and hydroxide.
A very small shift of the $\alpha$-Fe$_2$O$_3$ O 1\textit{s} peak to a higher energy is noticed after plasma treatment, which is probably related to a decrease in the number of oxygen vacancies.\cite{Wang2019} These data indicate that the O$_2$-plasma treatment does not remove the oxide layer, but instead modifies its morphology and composition and further oxidizes Fe in the FePd layer underneath, thus prominently increasing the Pd concentration in the Pd-rich phase beneath the oxide\cite{Hsu2017,Cialone2017} (Fig.~\ref{Fig_interface}) as more Fe oxide forms on the surface. The pronounced increase of the $\alpha$-Fe$_2$O$_3$ signal can be ascribed mainly to the oxidation effect of the O$_2$ plasma, and partially to the more densely packed, finer oxide particles created after plasma treatment and to the reduction of surface contaminants, as identified by separate XPS C 1\textit{s} measurements. Similar XPS features are also observed for the sample treated with Ar plasma, as shown in Figs.~\ref{Fig5}(c) and \ref{Fig5}(d). It is interesting to note that Ar-plasma treatment also results in an enhancement of the oxide signal.
The probable explanation is that Ar-plasma treatment gives rise to surface radicals,\cite{Zhang2005} which then promote oxidation in the presence of water and residual oxygen in the process chamber and on the sample surface.\cite{Surdu-Bob2002} It is known that local variations in the exchange or magnetocrystalline anisotropy energies can attract or even pin the domain walls under suitable conditions.\cite{Kim2005} Similar phenomena have been intensively studied in antiferromagnet (AFM)/ferromagnet (FM) systems with magnetic defects \cite{Fassbender2008,Kim2005,Nikitenko2000,Vallejo-Fernandez2011} or nonmagnetic dilution treatments.\cite{Vallejo-Fernandez2011,Nowak2002,Fecioru-Morariu2007,Miltenyi2000} With $\alpha$-Fe$_2$O$_3$ being a \textit{canted} AFM at room temperature, it was first speculated that the mechanism of the magnetic patterning might lie in the magnetic heterostructure of the $\alpha$-Fe$_2$O$_3$ layer and the ferromagnetic FePd layer. However, it requires well-oriented crystal planes (FM coupling within the (0001) planes, and AFM coupling across neighboring planes)\cite{Catti1995} or low temperatures\cite{Zysler1994} for $\alpha$-Fe$_2$O$_3$ to exhibit stable AFM behaviors, and therefore it is unlikely for the amorphous $\alpha$-Fe$_2$O$_3$/FePd interface to host reliable AFM--FM exchange interaction at room temperature. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig_interface-short.pdf} \caption{(Color online) Schematic diagram of the Pd-richer/FePd interface underneath the oxide layer in the plasma-treated area. An in-plane anisotropy phase is established near the Pd-richer/FePd interface.} \label{Fig_interface} \end{figure} The most plausible approach to interpret the domain-pinning effect is to consider the interaction at the Pd-rich/FePd interface.
FePd is an FM with weak PMA (as revealed in Figs.~\ref{Fig3}(e) and \ref{Fig3}(f)), and Pd capping on FePd has been demonstrated to induce an additional in-plane magnetic anisotropy near the Pd/FePd interface.\cite{Durr1999,Clavero2008} With weak PMA, well-ordered alternating up-and-down magnetic domains\cite{Hierro-Rodriguez2013} are formed in FePd at zero field, and alternating left-and-right in-plane domains appear near the interface.\cite{Durr1999,Sort2004} As plasma treatment leads to segregation of more Fe atoms from the FePd layer to form more Fe oxide on the surface, the Pd-rich phase\cite{Hsu2017,Cialone2017} near the interface of FePd is turned into a Pd-\textit{richer} phase, establishing a Pd-richer/FePd interface underneath, as illustrated in Fig.~\ref{Fig_interface}. The stronger interfacial in-plane anisotropy created inside the separate plasma-treated squares thereby differs from the continuous UMA outside the squares, which leads to the delayed magnetization reversal observed inside the squares. More theoretical and experimental work may be needed to fully understand the underlying physics. \subsection{Interplay between exchange bias and uniaxial magnetic anisotropy} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Fig6m_ang_step.pdf} \caption{(Color online) Spatially-resolved MOKE hysteresis loops of areas A (plasma-treated) and B (intact) on the surface-patterned Fe oxide/FePd film (a) in the \textit{longitudinal} geometry with the external magnetic field along the direction close to the \textit{easy} axis, (b) in the \textit{longitudinal} geometry with the field direction close to the \textit{hard} axis, and (c)--(e) in the \textit{transverse} geometry with the field direction close to the \textit{hard} axis.
Hysteresis loops are obtained with initial magnetization at $H_\mathrm{int}=\pm 3$ kOe along the direction close to the easy axis, and after demagnetization (labeled `Demag.'), respectively. The inset in (a) is a blown-up view of the top and bottom parts of the `Demag.' loop from area A. In (b)--(e), the loops of area A are red, and those of area B are black. In (c)--(e), arrows indicate the sweep directions, and blue crosses indicate the spots where the Kerr images on the right are taken.}\label{Fig6} \end{figure} To further investigate exchange interaction and anisotropy in our system, a high external magnetic field of $\pm$3 kOe is applied to the surface-patterned Fe oxide/FePd film along the direction (i.e., the $y$ axis) close to the easy axis to commence an initial magnetization, followed by measurements under small transient fields of 0 Oe $\rightarrow \pm 15$ Oe $\rightarrow$ 0 Oe along the direction close to the easy axis (Fig.~\ref{Fig6}(a)), or 0 Oe $\rightarrow \pm 50$ Oe $\rightarrow$ 0 Oe along the direction close to the hard axis (Figs.~\ref{Fig6}(b)--(e)). Fig.~\ref{Fig6}(a) displays the longitudinal MOKE hysteresis loops with the field direction close to the easy axis after initial magnetization. The blue loops are obtained after initial magnetization at $-$3 kOe, the red ones are after initial magnetization at $+$3 kOe, and the black ones are after demagnetization following an initial magnetization at 3 kOe. The demagnetization is conducted using an oscillating magnetic field of decreasing magnitude until the magnetization is reduced to zero at zero field. The shifts of the loops with high-field initial magnetization with respect to the demagnetized one demonstrate the existence of EB in our system and show that the directions of the EB fields can be freely controlled by the initial-magnetization directions. These loops shift in the same direction as the initial high field, which is referred to as `positive' spontaneous EB.
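A demagnetization protocol of this kind (an alternating-sign applied field of geometrically decreasing magnitude, ending at zero field) can be sketched as follows; the starting amplitude matches the 3 kOe used here, while the decay factor and cutoff are illustrative assumptions:

```python
def demag_sequence(h0=3000.0, decay=0.9, h_min=1.0):
    """Alternating-sign applied-field steps with geometrically
    decreasing magnitude (values in Oe; h0 matches the 3 kOe used
    above, while the decay factor and cutoff are illustrative)."""
    fields, h, sign = [], h0, +1.0
    while h >= h_min:
        fields.append(sign * h)
        sign, h = -sign, h * decay
    fields.append(0.0)  # finish at zero applied field
    return fields

seq = demag_sequence()
# the sample steps through +3000, -2700, +2430, ... Oe down to ~0
```

Sweeping the applied field through such a sequence leaves the sample with near-zero net magnetization at zero field, which serves as the EB-free reference state for the `Demag.' loops.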
Setting EB usually involves cooling an AFM/FM bilayer below the N\'{e}el temperature in the presence of a magnetic field.\cite{Meiklejohn1957} It has also been shown that a related phenomenon can occur in two coupled FMs with dissimilar magnetic properties,\cite{Fullerton1998,Berger2004} where the role of the AFM is fulfilled by a hard FM layer, and EB setting is realized by applying a sufficiently high initial magnetic field without the need for a cooling step.\cite{Berger2004,Binek2006} This explains the EB effect observed in our system, where the Pd-rich layer with remnants of Fe is a soft FM\cite{Crangle1960}, whereas the FePd layer is an FM with weak PMA and an induced in-plane anisotropy near the interface (Fig.~\ref{Fig_interface}). The initial high magnetic field induces a non-zero net in-plane magnetic moment in the Pd-rich layer, which stems from an enlargement of the closure domains oriented parallel to the initial field at the expense of those oriented antiparallel to the field. The coupling between the Pd-rich layer and the net in-plane magnetization of the FePd layer then accounts for the observed shift of the hysteresis loop.\cite{Sort2004} The positive spontaneous EB arises from the interfacial AFM coupling between FePd and the Pd-rich layer, with net interfacial moments that are not reversed during the measurement of the hysteresis loops.\cite{Won2007,Mangin2006} It is worth noting that many of these hysteresis loops exhibit a pair of small kinks on the top and bottom of each loop near its central field. An example is shown in the inset of the top panel of Fig.~\ref{Fig6}(a), which is a blown-up view of the top and bottom parts of the `Demag.' loop in the panel, with the kinks indicated by the black arrows. These kinks are of a magnitude close to the noise level and therefore can be easily missed, but their reproducibility confirms their existence.
The pair of top and bottom small kinks points to an additional switching field attributed to a minority soft layer in the system,\cite{Fullerton1998,Chen2019} which in our case is the Pd-rich layer. Similar kinks have been observed in the hysteresis loop of a SmCo film in which a mixture of Sm$_x$Co$_y$ phases is present,\cite{Fullerton1998} playing roles similar to those of the FePd phase and the Pd-rich phase in our system. It can be seen in Fig.~\ref{Fig6}(a) that the loops taken inside (labeled `area A') and outside (labeled `area B') the $50 \times 50$ $\upmu$m$^2$ square exhibit positive EB of similar magnitudes. Since the Fe oxide in area B was naturally formed in ambient air without plasma treatment, the Pd-rich layer under the oxide is not as `rich' as that in the plasma-treated area A, but it appears to generate EB of a magnitude similar to that of area A, at least under the condition of initial magnetization at $\pm$3 kOe. However, the EB effect in area A can be easily distinguished from that in area B if longitudinal MOKE hysteresis loops are recorded with the magnetic field applied along the direction close to the \textit{hard} axis, as shown in Fig.~\ref{Fig6}(b). With initial magnetization at $-$3 kOe, the loop of area B (black curve) is noticeably slimmer than that of area A (red curve) on the positive-signal side, revealing a difference in anisotropy between the two areas. The interplay between the EB and the anisotropy further manifests itself in the spatially resolved \textit{transverse} MOKE hysteresis loops shown in Figs.~\ref{Fig6}(c)--(e) with the field direction close to the hard axis. Fig.~\ref{Fig6}(c) shows the loops taken after demagnetization, accompanied by a corresponding Kerr image taken at 0 Oe. The two loops from areas A and B approximately overlap with each other, and thus the stripe domain pattern is barely seen in the Kerr image.
However, if the sample is measured after initial magnetization at a high field without demagnetization, distinct behaviors are observed, as displayed in Figs.~\ref{Fig6}(d) and \ref{Fig6}(e). Whereas the loop of the intact area B approximately follows the route seen in Fig.~\ref{Fig6}(c), the loop of the plasma-treated area A retains the same sign of magnetization along the Kerr-sensitivity direction, which is positive with initial magnetization at a field of $-$3 kOe (Fig.~\ref{Fig6}(d)), and negative with initial magnetization at $+$3 kOe (Fig.~\ref{Fig6}(e)). This again implies the existence of an in-plane AFM coupling in the system. The opposite magnetization in areas A and B as the field is swept to 0 Oe, at the spots marked with the blue crosses on the hysteresis loops, leads to magnetic-domain patterns with clear contrast shown in the Kerr images on the right. This clear contrast at zero field cannot be achieved without the presence of EB. Besides, the stripe domains with EB (Kerr images in Figs.~\ref{Fig6}(d) and \ref{Fig6}(e)) are straighter than those observed without EB (lower panel in Fig.~\ref{Fig2}(c)). The phenomenon observed in Figs.~\ref{Fig6}(c)--\ref{Fig6}(e) may be understood in terms of the competition between the UMA of our sample (i.e., having one easy axis) and the EB set by the high initial external field applied along the $y$ axis, which is at an angle of $\sim$15$^{\circ}$ to the easy axis. When no EB is present, as in the case after demagnetization displayed in Fig.~\ref{Fig6}(c), the hysteresis loops are similar to those observed in Fig.~\ref{Fig2}(b). However, once EB is present, one needs to consider the interplay between EB and UMA. According to theoretical studies,\cite{Meiklejohn1957, Chung2005} the shape of the transverse MOKE hysteresis loop with the field applied orthogonal to the EB direction depends on the relative strength of UMA to EB and the angle between their directions.
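The competition can be illustrated with a minimal single-domain (Stoner--Wohlfarth-type) energy sketch; this is a textbook simplification introduced here for illustration, not the specific model of the cited theoretical studies. Writing $\theta$ for the in-plane magnetization angle, $K_u$ for the UMA constant with easy axis along $\theta_u$, $H_\mathrm{EB}$ for an effective bias field along $\theta_\mathrm{EB}$, $H$ for the applied field along $\theta_H$, and $M_s$ for the saturation magnetization, the energy density is
\[
E(\theta) = K_u \sin^2(\theta - \theta_u) - M_s H_\mathrm{EB} \cos(\theta - \theta_\mathrm{EB}) - M_s H \cos(\theta - \theta_H).
\]
Tracking the local minimum of $E(\theta)$ as $H$ is swept reproduces the two regimes at hand: when $M_s H_\mathrm{EB}$ dominates $K_u$, or when $\theta_u \approx \theta_\mathrm{EB}$, the transverse component keeps a single sign during both reversals, whereas a sufficiently strong $K_u$ at a large angle to $\theta_\mathrm{EB}$ produces transverse branches of opposite sign.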
UMA tends to exert a torque to rotate the moments, and hence should result in opposite signs of the magnetization along the direction orthogonal to the external field when EB is small, provided that the angle between the directions of UMA and EB is large enough ($> \sim$15$^{\circ}$). If the angle between them is too small, or if EB is strong compared to UMA, the direction of the magnetization is determined dominantly by EB instead, leading to the same sign of the magnetization along the Kerr-sensitivity direction (i.e., parallel or antiparallel to EB) during both reversals. For the intact area B, the Kerr signal has the same sign during both reversals of its transverse MOKE hysteresis loop, which is positive in Fig.~\ref{Fig6}(d) and negative in \ref{Fig6}(e). This is probably because the angle between the UMA (i.e., the easy axis) and the EB (set by the initial high field) is small ($\sim$15$^{\circ}$). For the plasma-treated area A, however, each transverse MOKE hysteresis loop exhibits both positive and negative branches. This may be explained by the new UMA created after plasma treatment, as implied in Figs.~\ref{Fig3}(c) and \ref{Fig3}(d), which has a direction at a much larger angle of $\sim$45$^{\circ}$ to the EB. The interplay between EB and UMA further confirms the effectiveness of our microscopic magnetic-property modulation method, which conveniently utilizes lithographed plasma treatment on the Fe oxide/FePd thin films. The drastic difference in the microscopic magnetization behavior between the two cases with and without EB permits a practical switch from one mode of magnetic-domain behavior to another by simply turning EB on or off at room temperature. \section{Conclusions} In summary, we demonstrate control of the magnetic domain structures of magnetically micro-patterned Fe oxide/FePd thin films through e-beam lithography and O$_2$- or Ar-plasma treatment of the surface.
Transverse MOKE measurements with the magnetic field applied along the direction close to the magnetic hard axis reveal that the magnetic field required to reverse the magnetization of the plasma-treated areas inside the $50 \times 50$ $\upmu$m$^2$ squares in periodic arrays is larger than that for the untreated areas, except for the untreated areas that are aligned with the treated squares along the easy axis, which are magnetically coupled to the squares. This results in periodic stripes of domains observed in the transverse MOKE images during magnetization reversal. In addition, an intriguing competition between the uniaxial anisotropy and the exchange bias is observed in our system and can be microscopically controlled by altering the magnetic anisotropy via lithographed plasma treatment. The method of surface patterning with plasma treatment for magnetic-domain engineering may be applied in the design and fabrication of future data-storage and spintronic devices. \section*{Acknowledgment} We acknowledge the group of Prof. Hsiang-Chih Chiu for assisting us with the plasma source. This study is sponsored by the Ministry of Science and Technology of Taiwan under Grants Nos.~MOST 105-2628-M-003-001-MY3, MOST 105-2633-M-003-001, and MOST 107-2112-M-003-004. \section*{References}
\section{Introduction} The following set of features is derived from other packages with slight modifications to make them compatible with the \verb"gji" documentclass. They can be accessed by including the \verb"extra" option to the \verb"gji" document class, e.g. \begin{verbatim} \documentclass[extra,mreferee]{gji} \end{verbatim} \section{Reserved Space and Boxed Figures} It is possible to reserve space for a figure (or to draw a box of a specified size) using the \verb!\figbox! command. The \verb!\figbox! command takes three arguments: the horizontal and vertical sizes of the space that is to be left blank, as well as the contents of the box. For example, \begin{verbatim} \begin{figure} \figbox{6cm}{5cm}{Paste orbits here} \caption{The orbits of some planets} \end{figure} \end{verbatim} \begin{figure*} \figbox{6cm}{5cm}{Paste orbits here} \caption{Illustrating the use of the figbox command for a display of the orbits of some planets} \end{figure*} makes a box 6~cm wide and 5~cm high with the caption text below it. The text `Paste orbits here' is simply a note to the author that is printed centered in the framed box. This third argument could be left empty, or could contain commands for importing a computer plot. There is also a starred version \verb!\figbox*! that behaves exactly the same as \verb!\figbox! except that no frame is drawn around the figure. This is most useful for real figures, whereas the unstarred command is more appropriate for reserving space for glued-in figures. It is possible to have \verb!\figbox! and \verb!\figbox*! scale automatically to the size of their contents, something that is also appropriate when the contents are an imported graphics file. In this case, leave both the dimensions empty. As an example, if the orbit plot exists as an encapsulated PostScript file named \texttt{orbit.eps}, then it could be included directly with the commands \begin{verbatim} \begin{figure} \figbox*{}{\includegraphics[width=4cm]{orbit.eps}} \caption{The orbits of some planets} \end{figure} \end{verbatim} For this to work, you must have loaded the \texttt{graphicx} package with \verb!\usepackage!
at the beginning of the file, and you must have a PostScript driver for the output. (There are other packages with different syntaxes for importing graphics; use the one that you are most familiar with.) \section{Alternative texts for one or two columns} Mathematical formulas often have to be fiddled to fit them into the narrow confines of a single column in two-column format, whereas they will fit with no problem in the wide columns of the manuscript mode. This often results in the author having to massage his formulas when he changes between manuscript and camera-ready options, and then back again when he wants to print the manuscript once more. The special command \verb!\iftwocol! allows both versions of the text to be included in the one document, for automatic selection depending on whether two-column mode is active or not. Its syntax is\label{iftwocol} \begin{quote} \normalsize \verb!\iftwocol{!{\em yes\/}\verb!}{!{\em no\/}\verb!}! \end{quote} where {\em yes\/} is the text that is inserted if two columns are in effect, and {\em no\/} the text that is otherwise taken. This command may be used in other situations, but the main application is in the case of mathematics. \section{Literature citations} \textit{Geophysical Journal International} uses the author-year system of literature citation, something that is not supported by standard \LaTeX. The \verb"gji" documentclass provides partial support but more comprehensive features are available using features from the \verb"egs" package and the \verb"natbib" module developed by P.~W. Daly. Since there are two ways of making a citation in the author-year system, either as Jones et al.\ (1990) or as (Jones et al., 1990), there are two variations of the original \verb!\cite! command. Suppose the key for the above reference is \texttt{jones90}, then use \begin{flushleft} \verb!\citet{jones90}! for Jones et al.\ (1990)\\ \verb!\citep{jones90}! for (Jones et al., 1990)\\ \verb!\citep[p.~32]{jones90}!
for (Jones et al., 1990, p.~32)\\ \verb!\citep[e.g.,][]{jones90}! for (e.g., Jones et al., 1990)\\ \verb!\citep[e.g.,][p.~32]{jones90}! for (e.g., Jones et al., 1990, p.~32). \end{flushleft} Note the use of the optional arguments to add notes within the brackets of the citation: a single note behaves as in standard \LaTeX, as a note \emph{after} the citation; however, with two notes (non-standard), the first goes \emph{before}, the second \emph{after} it. Two other citation commands are available: \begin{flushleft} \verb!\citeauthor{jones90}! prints Jones et al.\\ \verb!\citeyear{jones90}! prints 1990. \end{flushleft} For the above examples to function properly, either the \verb"gji" bibliography style must be used with \btx, or the \texttt{thebibliography} environment must be formatted accordingly. \begin{flushleft} With \btx\\[1ex] \verb!\bibliographystyle{!gji\verb!}! \end{flushleft} \section{Introduction} In addition to the standard submission of hardcopy from authors, \textit{Geophysical Journal International} accepts machine-readable forms of papers in \LaTeX. The layout design for \textit{Geophysical Journal International} has been implemented as a \LaTeXe\ class file derived from the MN style file for Monthly Notices of the Royal Astronomical Society. The GJI classfile is based on the \verb"ARTICLE" style as discussed in the \LaTeX\ manual \cite{la}. Commands which differ from the standard \LaTeX\ interface, or which are provided in addition to the standard interface, are explained in this guide. This guide is not a substitute for the \LaTeX\ manual itself. Authors planning to submit their papers in \LaTeX\ are advised to use \verb"gji.cls" as early as possible in the creation of their files. This guide is modified from that produced by Woollatt et al.\ (1994) to describe the features of the MN style. A very accessible guide to the features of \LaTeXe\ and the differences from the earlier version is provided by Kopka \& Daly \shortcite{kd}.
This reference provides in chapter 9 a summary of \LaTeX\ error messages and also a full description of standard \LaTeX\ commands in Appendix F. \subsection{The GJI document classes} The use of \LaTeX\ document classes allows a simple change of class (or class option) to transform the appearance of your document. The GJI class file preserves the standard \LaTeX\ interface such that any document which can be produced using the standard \LaTeX\ \verb"ARTICLE" class can also be produced with the GJI class. However, the measure (or width of text) is narrower than the default for \verb"ARTICLE", therefore line breaks will change and long equations may need re-setting. \subsection{General style issues} For general style issues, authors are referred to the `Instructions for Authors' on the inside back cover of \textit{Geophysical Journal International}. Authors who are interested in the details of style are referred to Butcher \shortcite{bu} and The Chicago Manual \shortcite{cm}. The language of the journal is British English and spelling should conform to this. Use should be made of symbolic references (\verb"\ref") in order to protect against late changes of order, etc. \subsection{Submission of \LaTeX\ articles to the journal} Papers should initially be submitted in the usual way to: The Executive Secretary, Royal Astronomical Society, {\em or\/} the EGS Editor, {\em or\/} the DGG editor, {\em or\/} the American Editor, {\em or\/} the Pacific Region Editor, as set out in the Instructions for Authors on the inside back cover of each issue of Geophysical Journal International. Four hard copies should be supplied, including figures, normally using the \verb"[referee]" option; for papers with a high mathematical content the \verb"[mreferee]" option is recommended. In each case a separate page of figure captions is preferred. One of the copies should be single-sided, while the others should be weight-reduced, by being either single-spaced or double-sided.
Copies of figures should also be supplied. Authors should ensure that their figures are suitable (in terms of lettering size, etc.) for the reductions they intend; they should not attempt to include their figures inside a \TeX\ or \LaTeX\ file by using \verb"\special" or one of the style files for figure handling. Note that articles, or revised versions thereof, may not currently be submitted by electronic mail. However when the article is accepted for publication the \LaTeX\ file can be sent to the publisher by \verb"ftp" together with appropriate forms of figures. Instructions will be provided following acceptance. \section{Using the GJI class file} If the file \verb"gji.cls" is not already in the appropriate system directory for \LaTeX\ files, either arrange for it to be put there, or copy it to your working directory. The class file and related material, such as this guide, can be accessed via the journal web-site at http://www.blackwellpublishing.com/journals/gji under {\em Author Guidelines}. The GJI document class is implemented as a complete document class, {\em not\/} a document class option. In order to use the GJI style, replace \verb"article" by \verb"gji" in the \verb"\documentclass" command at the beginning of your document: \begin{verbatim} \documentclass{article} \end{verbatim} is replaced by \begin{verbatim} \documentclass{gji} \end{verbatim} In general, the following standard document class options should {\em not\/} be used with the GJI style: \begin{enumerate} \item \texttt{10pt}, \texttt{11pt}, \texttt{12pt} -- unavailable; \item \texttt{twoside} (no associated style file) -- \texttt{twoside} is the default; \item \texttt{fleqn}, \texttt{leqno}, \texttt{titlepage} -- should not be used (\verb"fleqn" is already incorporated into the GJI style); \item \texttt{twocolumn} -- is not necessary as it is the default style. 
\end{enumerate} In \LaTeX2e the use of postscript fonts and the inclusion of non-standard options is carried out through the \verb"\usepackage" command, rather than as options as in earlier versions. Thus the Times font can be used for text by including \begin{verbatim} \usepackage{times} \end{verbatim} on the line immediately after the \verb"\documentclass". If necessary, \texttt{ifthen} and \texttt{bezier} can be included as packages. The GJI class file has been designed to operate with the standard version of \verb"lfonts.tex" that is distributed as part of \LaTeX . If you have access to the source file for this guide, \verb"gjilguid2e.tex", attempt to typeset it. If you find font problems you might investigate whether a non-standard version of \verb"lfonts.tex" has been installed in your system. \subsection{Additional document class options}\label{classoptions} The following additional class options are available with the GJI style: \begin{description} \item \texttt{onecolumn} -- to be used \textit{only} when two-column output is unable to accommodate long equations; \item \texttt{landscape} -- for producing wide figures and tables which need to be included in landscape format (i.e.\ sideways) rather than portrait (i.e.\ upright). This option is described below. \item \texttt{doublespacing} -- this will double-space your article by setting \verb"\baselinestretch" to 2. \item \texttt{referee} -- 12/20pt text size, single column, designed for submission of papers. \item \texttt{mreferee} -- 11/17pt text size, single column designed for submission of papers with mathematical content. \item \texttt{camera} -- designed for use with computer modern fonts to produce a closer representation of GJI style for camera ready material. \item \texttt{galley} -- no running heads, no attempt to align the bottom of columns. 
\end{description} \subsection{Landscape pages} If a table or illustration is too wide to fit the standard measure, it must be turned, with its caption, through 90 degrees anticlockwise. Landscape illustrations and/or tables cannot be produced directly using the GJI style file because \TeX\ itself cannot turn the page, and not all device drivers provide such a facility. The following procedure can be used to produce such pages. \begin{enumerate} \item Use the \verb"table*" or \verb"figure*" environments in your document to create the space for your table or figure on the appropriate page of your document. Include an empty caption in this environment to ensure the correct numbering of subsequent tables and figures. For instance, the following code prints a page with the running head, a message half way down and the figure number towards the bottom. If you are including a plate, the running headline is different, and you need to key in the three marked lines with an appropriate headline. \begin{verbatim} \begin{figure*} \vbox to220mm{\vfil Landscape figure to go here. \vfil} \caption{} \label{landfig} \end{figure*} \end{verbatim} \item Create a separate document with the corresponding document style but also with the \verb"landscape" document style option, and include the \verb"\pagestyle" command, as follows: \begin{verbatim} \documentclass[landscape]{gji} \pagestyle{empty} \end{verbatim} \item Include your complete tables and illustrations (or space for these) with captions using the \verb"table*" and \verb"figure*" environments. \item Before each float environment, use the \verb"\setcounter" command to ensure the correct numbering of the caption.
For example, \begin{verbatim} \setcounter{table}{0} \begin{table*} \begin{minipage}{115mm} \caption{Images of global seismic tomography.} \label{tab1} \begin{tabular}{@{}llllcll} : \end{tabular} \end{minipage} \end{table*} \end{verbatim} The corresponding example for a figure would be: \begin{verbatim} \clearpage \setcounter{figure}{12} \begin{figure*} \vspace{144mm} \caption{Travel times for regional model.} \label{fig13} \end{figure*} \end{verbatim} \end{enumerate} \section{Additional facilities} In addition to all the standard \LaTeX\ design elements, the GJI style includes the following features: \begin{enumerate} \item Extended commands for specifying a short version of the title and author(s) for the running headlines; \item A \verb"summary" environment to produce a suitably indented Summary; \item An \verb"abstract" environment which produces the GJI style of Summary; \item A \verb"keywords" environment and a \verb"\nokeywords" command; \item Use of the \verb"description" environment for unnumbered lists; \item A starred version of the \verb"\caption" command to produce captions for continued figures or tables. \end{enumerate} In general, once you have used the additional \verb"gji.cls" facilities in your document, do not process it with a standard \LaTeX\ style file. \subsection{Titles and author's name} In the GJI style, the title of the article and the author's name (or authors' names) are used both at the beginning of the article for the main title and throughout the article as running headlines at the top of every page. The title is used on odd-numbered pages (rectos) and the author's name appears on even-numbered pages (versos). Although the main heading can run to several lines of text, the running headline must be a single line ($\leqslant 45$ characters). Moreover, the main heading can also incorporate new line commands (e.g. \verb"\\") but these are not acceptable in a running headline.
To enable you to specify an alternative short title and an alternative short author's name, the standard \verb"\title" and \verb"\author" commands have been extended to take an optional argument to be used as the running headline. The running headlines for this guide were produced using the following code: \begin{verbatim} \title[Geophys.\ J.\ Int.: \LaTeXe\ Guide for Authors] {Geophysical Journal International: \LaTeXe\ style guide for authors} \end{verbatim} and \begin{verbatim} \author[B.L.N. Kennett] {B.L.N. Kennett$^1$ \thanks{Pacific Region Office, GJI} \\ $^{1}$Research School of Earth Sciences, Australian National University, Canberra ACT \emph{0200}, Australia } \end{verbatim} The \verb"\thanks" note produces a footnote to the title or author. \subsection{Key words and Summary} At the beginning of your article, the title should be generated in the usual way using the \verb"\maketitle" command. Immediately following the title you should include a Summary followed by a list of key words. The summary should be enclosed within an \verb"summary" environment, followed immediately by the key words enclosed in a \verb"keywords" environment. For example, the titles for this guide were produced by the following source: \begin{verbatim} \maketitle \begin{summary} This guide is for authors who are preparing papers for \textit{Geophysical Journal International} using the \LaTeXe\ document preparation system and the GJI style file. \end{summary} \begin{keywords} \LaTeXe\ -- class files: \verb"gji.cls"\ -- sample text -- user guide. \end{keywords} \section{Introduction} : \end{verbatim} The heading `\textbf{Key words}' is included automatically and the key words are followed by vertical space. If, for any reason, there are no key words, you should insert the \verb"\nokeywords" command immediately after the end of the \verb"summary" or \verb"abstract" environment. 
This ensures that the vertical space after the abstract and/or title is correct and that any \verb"thanks" acknowledgments are correctly included at the bottom of the first column. For example, \begin{verbatim} \maketitle \begin{abstract} : \end{abstract} \nokeywords \section{Introduction} : \end{verbatim} Note that the \verb"summary" and \verb"abstract" environments have the same effect for the documentclass \verb"gji.cls" \subsection{Lists} The GJI style provides numbered lists using the \verb"enumerate" environment and unnumbered lists using the \verb"description" environment with an empty label. Bulleted lists are not part of the GJI style and the \verb"itemize" environment should not be used. The enumerated list numbers each list item with roman numerals: \begin{enumerate} \item first item \item second item \item third item \end{enumerate} Alternative numbering styles can be achieved by inserting a redefinition of the number labelling command after the \verb"\begin{enumerate}". For example, the list \begin{enumerate} \renewcommand{\theenumi}{(\arabic{enumi})} \item first item \item second item \item etc\ldots \end{enumerate} was produced by: \begin{verbatim} \begin{enumerate} \renewcommand{\theenumi}{(\arabic{enumi})} \item first item : \end{enumerate} \end{verbatim} Unnumbered lists are provided using the \verb"description" environment. For example, \begin{description} \item First unnumbered item which has no label and is indented from the left margin. \item Second unnumbered item. \item Third unnumbered item. \end{description} was produced by, \begin{verbatim} \begin{description} \item First unnumbered item... \item Second unnumbered item. \item Third unnumbered item. \end{description} \end{verbatim} \subsection{Captions for continued figures and tables} The \verb"\caption*" command may be used to produce a caption with the same number as the previous caption (for the corresponding type of float). 
For instance, if a very large table does not fit on one page, it must be split into two floats; the second float should use the \verb"caption*" command with a suitable caption: \begin{verbatim} \begin{table} \caption*{-- \textit{continued}} \begin{tabular}{@{}lccll} : \end{tabular} \end{table} \end{verbatim} \begin{figure} \vspace{5.5cm} \caption{An example figure in which space has been left for the artwork.} \label{sample-figure} \end{figure} \section[]{Some guidelines for using\\* standard facilities} The following notes may help you achieve the best effects with the GJI style file. \subsection{Sections} \LaTeX\ provides five levels of section headings and they are all defined in the GJI style file: \begin{description} \item \verb"\section" \item \verb"\subsection" \item \verb"\subsubsection" \item \verb"\paragraph" \item \verb"\subparagraph" \end{description} Section numbers are given for section, subsection, subsubsection and paragraph headings. Section headings are automatically converted to upper case; if you need any other style, see the example in section~\ref{headings}. If you find your section/subsection (etc.)\ headings are wrapping round, you must use the \verb"\\*" to end individual lines and include the optional argument \verb"[]" in the section command. This ensures that the turnover is flushleft. \subsection{Illustrations (or figures)} \begin{figure*} \vspace{5.5cm} \caption{An example figure spanning two-columns in which space has been left for the artwork.} \label{twocol-figure} \end{figure*} The GJI style will cope with positioning of your illustrations and you should not use the positional qualifiers on the \verb"figure" environment which would override these decisions. See `Instructions for Authors' in {\em Geophysical Journal International\/} for submission of artwork. Figure captions should be below the figure itself, therefore the \verb"\caption" command should appear after the figure or space left for an illustration. 
For example, Fig.~\ref{sample-figure} is produced using the following commands: \begin{verbatim} \begin{figure} \vspace{5.5cm} \caption{An example figure in which space has been left for the artwork.} \label{sample-figure} \end{figure} \end{verbatim} Where a figure needs to span two-columns the \verb"figure*" environment should be used as in Fig.~\ref{twocol-figure} using the following commands \begin{verbatim} \begin{figure*} \vspace{5.5cm} \caption{An example figure spanning two-columns in which space has been left for the artwork.} \label{twocol-figure} \end{figure*} \end{verbatim} \subsection{Tables} The GJI style will cope with positioning of your tables and you should not use the positional qualifiers on the \verb"table" environment which would override these decisions. Table captions should be at the top, therefore the \verb"\caption" command should appear before the body of the table. The \verb"tabular" environment can be used to produce tables with single horizontal rules, which are allowed, if desired, at the head and foot only. This environment has been modified for the GJI style in the following ways: \begin{enumerate} \item additional vertical space is inserted on either side of a rule; \item vertical lines are not produced. \end{enumerate} Commands to redefine quantities such as \verb"\arraystretch" should be omitted. For example, Table~\ref{symbols} is produced using the following commands. \begin{table} \caption{Seismic velocities at major discontinuities.} \label{symbols} \begin{tabular}{@{}lcccccc} Class & depth & radius & $\alpha _{-}$ & $\alpha _{+}$ & $\beta _{-}$ & $\beta _{+}$ \\ ICB & 5154 & 1217 & 11.091 & 10.258 & 3.438 & 0. \\ CMB & 2889 & 3482 & 8.009 & 13.691 & 0. & 7.301 \\ \end{tabular} \medskip The ICB represents the boundary between the inner and outer cores and the CMB the boundary between the core and the mantle. 
Velocities with subscript $-$ are evaluated just below the discontinuity and those with subscript $+$ are evaluated just above the discontinuity. \end{table} \begin{verbatim} \begin{table} \caption{Seismic velocities at major discontinuities.} \label{symbols} \begin{tabular}{@{}lcccccc} Class & depth & radius & $\alpha _{-}$ & $\alpha _{+}$ & $\beta _{-}$ & $\beta _{+}$ \\ ICB & 5154 & 1217 & 11.091 & 10.258 & 3.438 & 0. \\ CMB & 2889 & 3482 & 8.009 & 13.691 & 0. & 7.301 \\ \end{tabular} \medskip The ICB represents the boundary ... ... evaluated just above the discontinuity. \end{table} \end{verbatim} If you have a table that is to extend over two columns, you need to use \verb"table*" in a minipage environment, i.e., you can say \begin{verbatim} \begin{table*} \begin{minipage}{80mm} \caption{Caption which will wrap round to the width of the minipage environment.} \begin{tabular} : \end{tabular} \end{minipage} \end{table*} \end{verbatim} The width of the minipage should more or less be the width of your table, so you can only guess on a value on the first pass. The value will have to be adjusted when your article is finally typeset, so don't worry about making it the exact size. \subsection{Running headlines} As described above, the title of the article and the author's name (or authors' names) are used as running headlines at the top of every page. The headline on right pages can list up to three names; for more than three use et~al. The \verb"\pagestyle" and \verb"\thispagestyle" commands should {\em not\/} be used. Similarly, the commands \verb"\markright" and \verb"\markboth" should not be necessary. \subsection{Typesetting mathematics} \subsubsection{Displayed mathematics} The GJI style will set displayed mathematics flush with the left margin, provided that you use the \LaTeX\ standard of open and closed square brackets as delimiters. 
The equation \[ \sum_{i=1}^p \lambda_i = {\mathrm{trace}}(\mathbf{S}) \] was typeset in the GJI style using the commands \begin{verbatim} \[ \sum_{i=1}^p \lambda_i = {\mathrm{trace}}(\mathbf{S}) \] \end{verbatim} This correct positioning should be compared with that for the following centred equation, $$ \alpha_{j+1} > \bar{\alpha}+ks_{\alpha} $$ which was (wrongly) typeset using double dollars as follows: \begin{verbatim} $$ \alpha_{j+1} > \bar{\alpha}+ks_{\alpha} $$ \end{verbatim} Note that \verb"\mathrm" will produce a roman character in math mode. For numbered equations use the \verb"equation" and \verb"eqnarray" environments which will give the correct positioning. If equation numbering by section is required the command \verb"\eqsecnum" should appear after \verb"\begin{document}" at the head of the file. \subsubsection{Bold math italic}\label{boldmathitalic} The class file provides a font \verb"\mitbf" defined as: \begin{verbatim} \newcommand{\mitbf}[1]{ \hbox{\mathversion{bold}$#1$}} \end{verbatim} This can be used as follows: to typeset the equation \begin{equation} d(\mitbf{{s_{t_u}}}) = \langle [RM(\mitbf{{x_y}} + \mitbf{{s_t}}) - RM(\mitbf{{x_y}})]^2 \rangle \end{equation} the input should be \begin{verbatim} \begin{equation} d(\mitbf{{s_{t_u}}}) = \langle [RM(\mitbf{{x_y}} + \mitbf{{s_t}}) - RM(\mitbf{{x_y}})]^2 \rangle \end{equation} \end{verbatim} If you are using version 1 of the New Font Selection Scheme, you may have some messages in your log file that read something like ``Warning: Font/shape `cmm/b/it' in size~\hbox{$< \!\! 9 \!\! >$} not available on input line 649. Warning: Using external font `cmmi9' instead on input line 649.'' If you have such messages, your system will have substituted math italic characters where you wanted bold math italic ones: you are advised to upgrade to version 2. \subsubsection{Bold Greek}\label{boldgreek} To get bold Greek you use the same method as for bold math italic.
Thus you can input \begin{verbatim} \[ \mitbf{{\alpha_{\mu}}} = \mitbf{\Theta} \alpha. \] \end{verbatim} to typeset the equation \[ \mitbf{{\alpha_{\mu}}} = \mitbf{\Theta} \alpha . \] \subsection{Points to note in formatting text}\label{formtext} A number of text characters require special attention so that \LaTeX\ can properly format a file. The following characters must be preceded by a backslash or \LaTeX\ will interpret them as commands: \begin{quote} ~~~~~~~~~\$~~~\&~~~\%~~~\#~~~\_~~~\{~~~and~~~\} \end{quote} must be typed \begin{center} \begin{quote} ~~~~~~\verb"\$"~~~\verb"\&"~~~\verb"\%"~~~\verb"\#" ~~~\verb"\_"~~~\verb"\{"~~~and~~~\verb"\}". \end{quote} \end{center} \LaTeX\ interprets all double quotes as closing quotes. Therefore quotation marks must be typed as pairs of opening and closing single quotes, for example, \texttt{ ``quoted text.''} Note that \LaTeX\ will not recognize greater than or less than symbols unless they are typed within math commands (\verb"$>$" or \verb"$<$"). \subsubsection{Special symbols} The macros for the special symbols in Tables~\ref{mathmode} and~\ref{anymode} have been taken from the Springer Verlag `Astronomy and Astrophysics' design, with their permission. They are directly compatible and use the same macro names. These symbols will work in all text sizes, but are only guaranteed to work in text and displaystyles. Some of the symbols will not get any smaller when they are used in sub- or superscripts, and will therefore be displayed at the wrong size. Don't worry about this as the typesetter will be able to sort this out. 
\begin{table*} \begin{minipage}{106mm} \caption{Special symbols which can only be used in math mode.} \label{mathmode} \begin{tabular}{@{}llllll} Input & Explanation & Output & Input & Explanation & Output\\ \hline \verb"\la" & less or approx & $\la$ & \verb"\ga" & greater or approx & $\ga$\\[2pt] \verb"\getsto" & gets over to & $\getsto$ & \verb"\cor" & corresponds to & $\cor$\\[2pt] \verb"\lid" & less or equal & $\lid$ & \verb"\gid" & greater or equal & $\gid$\\[2pt] \verb"\sol" & similar over less & $\sol$ & \verb"\sog" & similar over greater & $\sog$\\[2pt] \verb"\lse" & less over simeq & $\lse$ & \verb"\gse" & greater over simeq & $\gse$\\[2pt] \verb"\grole" & greater over less & $\grole$ & \verb"\leogr" & less over greater & $\leogr$\\[2pt] \verb"\loa" & less over approx & $\loa$ & \verb"\goa" & greater over approx & $\goa$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}{115mm} \caption{Special symbols which don't have to be used in math mode.} \label{anymode} \begin{tabular}{@{}llllll} Input & Explanation & Output & Input & Explanation & Output\\ \hline \verb"\sun" & sun symbol & $\sun$ & \verb"\earth" & earth symbol & $\earth$ \\[2pt] \verb"\degr" & degree &$\degr$ & \verb"\micron" & \micron & \micron \\[2pt] \verb"\diameter" & diameter & \diameter & \verb"\sq" & square & \squareforqed\\[2pt] \verb"\fd" & fraction of day & \fd & \verb"\fh" & fraction of hour & \fh\\[2pt] \verb"\fm" & fraction of minute & \fm & \verb"\fs" & fraction of second & \fs\\[2pt] \verb"\fdg" & fraction of degree & \fdg & \verb"\fp" & fraction of period & \fp\\[2pt] \verb"\farcs" & fraction of arcsecond & \farcs & \verb"\farcm" & fraction of arcmin & \farcm\\[2pt] \verb"\arcsec" & arcsecond & \arcsec & \verb"\arcmin" & arcminute & \arcmin\\ \hline \end{tabular} \end{minipage} \end{table*} The command \verb"\chemical" is provided to set chemical species with an even level for subscripts (not produced in standard mathematics mode). 
Thus \verb"\chemical{Fe_{2}^{2+}Cr_{2}O_{4}}" will produce \chemical{Fe_{2}^{2+}Cr_{2}O_{4}}. \subsection{Bibliography} Two methods are provided for managing citations and references. The first approach uses the \verb" \section{Introduction: file preparation and submission} The \verb"iopart" \LaTeXe\ article class file is provided to help authors prepare articles for submission to IOP Publishing journals. This document gives advice on preparing your submission, and specific instructions on how to use \verb"iopart.cls" to follow this advice. You do not have to use \verb"iopart.cls"; articles prepared using any other common class and style files can also be submitted. It is not necessary to mimic the appearance of a published article. The advice on \LaTeX\ file preparation in this document applies to the journals listed in table~\ref{jlab1}. If your journal is not listed please go to the journal website via \verb"http://iopscience.iop.org/journals" for specific submission instructions. \begin{table} \caption{\label{jlab1}Journals to which this document applies, and macros for the abbreviated journal names in {\tt iopart.cls}. Macros for other journal titles are listed in appendix\,A.} \footnotesize \begin{tabular}{@{}llll} \br Short form of journal title&Macro name&Short form of journal title&Macro name\\ \mr 2D Mater.&\verb"\TDM"&Mater. Res. Express&\verb"\MRE"\\ Biofabrication&\verb"\BF"&Meas. Sci. Technol.$^c$&\verb"\MST"\\ Bioinspir. Biomim.&\verb"\BB"&Methods Appl. Fluoresc.&\verb"\MAF"\\ Biomed. Mater.&\verb"\BMM"&Modelling Simul. Mater. Sci. Eng.&\verb"\MSMSE"\\ Class. Quantum Grav.&\verb"\CQG"&Nucl. Fusion&\verb"\NF"\\ Comput. Sci. Disc.&\verb"\CSD"&New J. Phys.&\verb"\NJP"\\ Environ. Res. Lett.&\verb"\ERL"&Nonlinearity$^{a,b}$&\verb"\NL"\\ Eur. J. Phys.&\verb"\EJP"&Nanotechnology&\verb"\NT"\\ Inverse Problems&\verb"\IP"&Phys. Biol.$^c$&\verb"\PB"\\ J. Breath Res.&\verb"\JBR"&Phys. Educ.$^a$&\verb"\PED"\\ J. Geophys. Eng.$^d$&\verb"\JGE"&Physiol. 
Meas.$^{c,d,e}$&\verb"\PM"\\ J. Micromech. Microeng.&\verb"\JMM"&Phys. Med. Biol.$^{c,d,e}$&\verb"\PMB"\\ J. Neural Eng.$^c$&\verb"\JNE"&Plasma Phys. Control. Fusion&\verb"\PPCF"\\ J. Opt.&\verb"\JOPT"&Phys. Scr.&\verb"\PS"\\ J. Phys. A: Math. Theor.&\verb"\jpa"&Plasma Sources Sci. Technol.&\verb"\PSST"\\ J. Phys. B: At. Mol. Opt. Phys.&\verb"\jpb"&Rep. Prog. Phys.$^{e}$&\verb"\RPP"\\ J. Phys: Condens. Matter&\verb"\JPCM"&Semicond. Sci. Technol.&\verb"\SST"\\ J. Phys. D: Appl. Phys.&\verb"\JPD"&Smart Mater. Struct.&\verb"\SMS"\\ J. Phys. G: Nucl. Part. Phys.&\verb"\jpg"&Supercond. Sci. Technol.&\verb"\SUST"\\ J. Radiol. Prot.$^a$&\verb"\JRP"&Surf. Topogr.: Metrol. Prop.&\verb"\STMP"\\ Metrologia&\verb"\MET"&Transl. Mater. Res.&\verb"\TMR"\\ \br \end{tabular}\\ $^{a}$UK spelling is required; $^{b}$MSC classification numbers are required; $^{c}$titles of articles are required in journal references; $^{d}$Harvard-style references must be used (see section \ref{except}); $^{e}$final page numbers of articles are required in journal references. \end{table} \normalsize Any special submission requirements for the journals are indicated with footnotes in table~\ref{jlab1}. Journals which require references in a particular format will need special care if you are using BibTeX, and you might need to use a \verb".bst" file that gives slightly non-standard output in order to supply any extra information required. It is not necessary to give references in the exact style of references used in published articles, as long as all of the required information is present. Also note that there is an incompatibility between \verb"amsmath.sty" and \verb"iopart.cls" which cannot be completely worked around. If your article relies on commands in \verb"amsmath.sty" that are not available in \verb"iopart.cls", you may wish to consider using a different class file. 
Whatever journal you are submitting to, please look at recent published articles (preferably articles in your subject area) to familiarize yourself with the features of the journal. We do not demand that your \LaTeX\ file closely resembles a published article---a generic `preprint' appearance of the sort commonly seen on \verb"arXiv.org" is fine---but your submission should be presented in a way that makes it easy for the referees to form an opinion of whether it is suitable for the journal. The generic advice in this document---on what to include in an abstract, how best to present complicated mathematical expressions, and so on---applies whatever class file you are using. \subsection{What you will need to supply} Submissions to our journals are handled via the ScholarOne web-based submission system. When you submit a new article to us you need only submit a PDF of your article. When you submit a revised version, we ask you to submit the source files as well. Upon acceptance for publication we will use the source files to produce a proof of your article in the journal style. \subsubsection{Text.}When you send us the source files for a revised version of your submission, you should send us the \LaTeX\ source code of your paper with all figures read in by the source code (see section \ref{figinc}). Articles can be prepared using almost any version of \TeX\ or \LaTeX{}, not just \LaTeX\ with the class file \verb"iopart.cls". You may split your \LaTeX\ file into several parts, but please show which is the `master' \LaTeX\ file that reads in all of the other ones by naming it appropriately. The `master' \LaTeX\ file must read in all other \LaTeX\ and figure files from the current directory. {\it Do not read in files from a different directory, e.g. \verb"\includegraphics{/figures/figure1.eps}" or \verb"\include{../usr/home/smith/myfiles/macros.tex}"---we store submitted files all together in a single directory with no subdirectories}. 
\begin{itemize} \item {\bf Using \LaTeX\ packages.} Most \LaTeXe\ packages can be used if they are available in common distributions of \LaTeXe; however, if it is essential to use a non-standard package then any extra files needed to process the article must also be supplied. Try to avoid using any packages that manipulate or change the standard \LaTeX\ fonts: published articles use fonts in the Times family, but we prefer that you use \LaTeX\ default Computer Modern fonts in your submission. The use of \LaTeX\ 2.09, and of plain \TeX\ and variants such as AMSTeX is acceptable, but a complete PDF of your submission should be supplied in these cases. \end{itemize} \subsubsection{Figures.} Figures should ideally be included in an article as encapsulated PostScript files (see section \ref{figinc}) or created using standard \LaTeX\ drawing commands. Please name all figure files using the guidelines in section \ref{fname}. We accept submissions that use pdf\TeX\ to include PDF or bitmap figures, but please ensure that you send us a PDF that uses PDF version 1.4 or lower (to avoid problems in the ScholarOne system). You can do this by putting \verb"\pdfminorversion=4" at the very start of your TeX file. \label{fig1}All figures should be included within the body of the text at an appropriate point or grouped together with their captions at the end of the article. A standard graphics inclusion package such as \verb"graphicx" should be used for figure inclusion, and the package should be declared in the usual way, for example with \verb"\usepackage{graphicx}", after the \verb"\documentclass" command. Authors should avoid using special effects generated by including verbatim PostScript code in the submitted \LaTeX\ file. Wherever possible, please try to use standard \LaTeX\ tools and packages. \subsubsection{References.\label{bibby}} You can produce your bibliography in the standard \LaTeX\ way using the \verb"\bibitem" command. 
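For example, a minimal hand-made reference list might read as follows (the two entries shown are placeholders, not real references): \begin{verbatim} \begin{thebibliography}{9} \bibitem{smith2020} Smith A B and Jones C D 2020 {\it J. Phys. A: Math. Theor.} {\bf 53} 123456 \bibitem{lee2019} Lee E F 2019 {\it Example Journal} {\bf 12} 345 \end{thebibliography} \end{verbatim} Each entry is then cited in the text with \verb"\cite{smith2020}"; the argument of \verb"thebibliography" (here \verb"9") should be as wide as the widest entry label.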
Alternatively you can use BibTeX: our preferred \verb".bst" styles are: \begin{itemize} \item For the numerical (Vancouver) reference style we recommend that authors use \verb"unsrt.bst"; this does not quite follow the style of published articles in our journals but this is not a problem. Alternatively \verb"iopart-num.bst" created by Mark A Caprio produces a reference style that closely matches that in published articles. The file is available from \verb"http://ctan.org/tex-archive/biblio/bibtex/contrib/iopart-num/" . \item For alphabetical (Harvard) style references we recommend that authors use the \verb"harvard.sty" in conjunction with the \verb"jphysicsB.bst" BibTeX style file. These, and accompanying documentation, can be downloaded from \penalty-10000 \verb"http://www.ctan.org/tex-archive/macros/latex/contrib/harvard/". Note that the \verb"jphysicsB.bst" bibliography style does not include article titles in references to journal articles. To include the titles of journal articles you can use the style \verb"dcu.bst" which is included in the \verb"harvard.sty" package. The output differs a little from the final journal reference style, but all of the necessary information is present and the reference list will be formatted into journal house style as part of the production process if your article is accepted for publication. \end{itemize} \noindent Please make sure that you include your \verb".bib" bibliographic database file(s) and any \verb".bst" style file(s) you have used. \subsection{\label{copyright}Copyrighted material and ethical policy} If you wish to make use of previously published material for which you do not own the copyright then you must seek permission from the copyright holder, usually both the author and the publisher. It is your responsibility to obtain copyright permissions and this should be done prior to submitting your article. 
If you have obtained permission, please provide full details of the permission granted---for example, copies of the text of any e-mails or a copy of any letters you may have received. Figure captions must include an acknowledgment of the original source of the material even when permission to reuse has been obtained. Please read our ethical policy before writing your article. \subsection{Naming your files} \subsubsection{General.} Please name all your files, both figures and text, as follows: \begin{itemize} \item Use only characters from the set a to z, A to Z, 0 to 9 and underscore (\_). \item Do not use spaces or punctuation characters in file names. \item Do not use any accented characters such as \'a, \^e, \~n, \"o. \item Include an extension to indicate the file type (e.g., \verb".tex", \verb".eps", \verb".txt", etc). \item Use consistent upper and lower case in filenames and in your \LaTeX\ file. If your \LaTeX\ file contains the line \verb"\includegraphics{fig1.eps}" the figure file must be called \verb"fig1.eps" and not \verb"Fig1.eps" or \verb"fig1.EPS". If you are on a Unix system, please ensure that there are no pairs of figures whose names differ only in capitalization, such as \verb"fig_2a.eps" and \verb"fig_2A.eps", as Windows systems will be unable to keep the two files in the same directory. \end{itemize} When you submit your article files, they are manipulated and copied many times across multiple databases and file systems. Including non-standard characters in your filenames will cause problems when processing your article. \subsubsection{\label{fname}Naming your figure files.} In addition to the above points, please give each figure file a name which indicates the number of the figure it contains; for example, \verb"figure1.eps", \verb"figure2a.eps", etc. If the figure file contains a figure with multiple parts, for example figure 2(a) to 2(e), give it a name such as \verb"figure2a_2e.eps", and so forth. 
\subsection{How to send your files} Please send your submission via the ScholarOne submission system. Go to the journal home page, and use the `Submit an article' link on the right-hand side. \section{Preparing your article} \subsection{Sample coding for the start of an article} \label{startsample} The code for the start of a title page of a typical paper in the \verb"iopart.cls" style might read: \small\begin{verbatim} \documentclass[12pt]{iopart} \begin{document} \title[The anomalous magnetic moment of the neutrino]{The anomalous magnetic moment of the neutrino and its relation to the solar neutrino problem} \author{P J Smith$^1$, T M Collins$^2$, R J Jones$^3$\footnote{Present address: Department of Physics, University of Bristol, Tyndalls Park Road, Bristol BS8 1TS, UK.} and Janet Williams$^3$} \address{$^1$ Mathematics Faculty, Open University, Milton Keynes MK7~6AA, UK} \address{$^2$ Department of Mathematics, Imperial College, Prince Consort Road, London SW7~2BZ, UK} \address{$^3$ Department of Computer Science, University College London, Gower Street, London WC1E~6BT, UK} \ead{williams@ucl.ac.uk} \begin{abstract} ... \end{abstract} \keywords{magnetic moment, solar neutrinos, astrophysics} \submitto{\jpg} \maketitle \end{verbatim} \normalsize At the start of the \LaTeX\ source code please include commented material to identify the journal, author, and (if you are sending a revised version or a resubmission) the reference number that the journal has given to the submission. The first non-commented line should be \verb"\documentclass[12pt]{iopart}" to load the preprint class file. The normal text will be in the Computer Modern 12pt font. It is possible to specify 10pt font size by passing the option \verb"[10pt]" to the class file. Although it is possible to choose a font other than Computer Modern by loading external packages, this is not recommended. The article text begins after \verb"\begin{document}". 
Authors of very long articles may find it convenient to separate their article into a series of \LaTeX\ files each containing one section, and each of which is called in turn by the primary file. The files for each section should be read in from the current directory; please name the primary file clearly so that we know to run \LaTeX\ on this file. Authors may use any common \LaTeX\ \verb".sty" files. Authors may also define their own macros and definitions either in the main article \LaTeX\ file or in a separate \verb".tex" or \verb".sty" file that is read in by the main file, provided they do not overwrite existing definitions. It is helpful to the production staff if complicated author-defined macros are explained in a \LaTeX\ comment. The article class \verb"iopart.cls" can be used with other package files such as those loading the AMS extension fonts \verb"msam" and \verb"msbm", which provide the blackboard bold alphabet and various extra maths symbols as well as symbols useful in figure captions. An extra style file \verb"iopams.sty" is provided to load these packages and provide extra definitions for bold Greek letters. \subsection{\label{dblcol}Double-column layout} The \verb"iopart.cls" class file produces single-column output by default, but a two-column layout can be obtained by using \verb"\documentclass[10pt]" at the start of the file and \verb"\ioptwocol" after the \verb"\maketitle" command. Two-column output will begin on a new page (unlike in published double-column articles, where the two-column material starts on the same page as the abstract). In general we prefer to receive submissions in single-column format even for journals published in double-column style; however, the \verb"\ioptwocol" option may be useful to test figure sizes and equation breaks for these journals. When setting material in two columns you can use the asterisked versions of \LaTeX\ commands such as \verb"\begin{figure*} ... 
\end{figure*}" to set figures and tables across two columns. If you have any problems or any queries about producing two-column output, please contact us at \verb"submissions@iop.org". \section{The title and abstract page} If you use \verb"iopart.cls", the code for setting the title page information is slightly different from the normal default in \LaTeX. If you are using a different class file, you do not need to mimic the appearance of an \verb"iopart.cls" title page, but please ensure that all of the necessary information is present. \subsection{Titles and article types} The title is set using the command \verb"\title{#1}", where \verb"#1" is the title of the article. The first letter of the title should be capitalized with the rest in lower case. The title appears in bold case, but mathematical expressions within the title may be left in light-face type. If the title is too long to use as a running head at the top of each page (apart from the first) a short form can be provided as an optional argument (in square brackets) before the full title, i.e.\ \verb"\title[Short title]{Full title}". For article types other than papers, \verb"iopart.cls" has a generic heading \verb"\article[Short title]{TYPE}{Full title}" and some specific definitions given in table~\ref{arttype}. In each case (apart from Letters to the Editor and Fast Track Communications) an optional argument can be used immediately after the control sequence name to specify the short title; where no short title is given, the full title will be used as the running head. Not every article type has its own macro---use \verb"\article" for any not listed. A full list of the types of articles published by a journal is given in the submission information available via the journal home page. The generic heading could be used for articles such as those presented at a conference or workshop, e.g. 
\small\begin{verbatim} \article[Short title]{Workshop on High-Energy Physics}{Title} \end{verbatim}\normalsize Footnotes to titles may be given by using \verb"\footnote{Text of footnote.}" immediately after the title. Acknowledgment of funding should be included in the acknowledgments section rather than in a footnote. \begin{table} \caption{\label{arttype}Types of article defined in the {\tt iopart.cls} class file.} \footnotesize\rm \begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus12pt}}l}} \br Command& Article type\\ \mr \verb"\title{#1}"&Paper (no surtitle on first page)\\ \verb"\ftc{#1}"&Fast Track Communication\\ \verb"\review{#1}"&Review\\ \verb"\topical{#1}"&Topical Review\\ \verb"\comment{#1}"&Comment\\ \verb"\note{#1}"&Note\\ \verb"\paper{#1}"&Paper (no surtitle on first page)\\ \verb"\prelim{#1}"&Preliminary Communication\\ \verb"\rapid{#1}"&Rapid Communication\\ \verb"\letter{#1}"&Letter to the Editor\\ \verb"\article{#1}{#2}"&Other articles\\\ & (use this for any other type of article; surtitle is whatever is entered as {\tt \#1})\\ \br \end{tabular*} \end{table} \subsection{Authors' names and addresses} For the authors' names type \verb"\author{#1}", where \verb"#1" is the list of all authors' names. Western-style names should be written as initials then family name, with a comma after all but the last two names, which are separated by `and'. Initials should {\it not} be followed by full stops. First (given) names may be used if desired. Names in Chinese, Japanese and Korean styles should be written as you want them to appear in the published article. Authors in all IOP Publishing journals have the option to include their names in Chinese, Japanese or Korean characters in addition to the English name: see appendix B for details. If the authors are at different addresses a superscripted number, e.g. $^1$, \verb"$^1$", should be used after each name to reference the author to his/her address. 
If an author has additional information to appear as a footnote, such as a permanent address, a normal \LaTeX\ footnote command should be given after the family name and address marker with this extra information. The authors' affiliations follow the list of authors. Each address is set by using \verb"\address{#1}" with the address as the single parameter in braces. If there is more than one address then the appropriate superscripted number, followed by a space, should come at the start of the address. E-mail addresses are added by inserting the command \verb"\ead{#1}" after the postal address(es) where \verb"#1" is the e-mail address. See section~\ref{startsample} for sample coding. For more than one e-mail address, please use the command \verb"\eads{\mailto{#1}, \mailto{#2}}" with \verb"\mailto" surrounding each e-mail address. Please ensure that, at the very least, you state the e-mail address of the corresponding author. \subsection{The abstract} The abstract follows the addresses and should give readers concise information about the content of the article and indicate the main results obtained and conclusions drawn. It should be self-contained---there should be no references to figures, tables, equations, bibliographic references etc. It should be enclosed between \verb"\begin{abstract}" and \verb"\end{abstract}" commands. The abstract should normally be restricted to a single paragraph of around 200 words. \subsection{Subject classification numbers} We no longer ask authors to supply Physics and Astronomy Classification System (PACS) classification numbers. For submissions to {\it Nonlinearity}\/ we ask that you should supply Mathematics Subject Classification (MSC) codes. MSC numbers are included after the abstract using \verb"\ams{#1}". The command \verb"\submitto{#1}" can be inserted, where \verb"#1" is the journal name written in full or the appropriate control sequence as given in table~\ref{jlab1}. 
This command is not essential to the running of the file and can be omitted. \subsection{Keywords} Keywords are required for all submissions. Authors should supply a minimum of three (maximum seven) keywords appropriate to their article as a new paragraph starting \verb"\noindent{\it Keywords\/}:" after the end of the abstract. \subsection{Making a separate title page} To keep the header material on a separate page from the body of the text insert \verb"\maketitle" (or \verb"\newpage") before the start of the text. If \verb"\maketitle" is not included the text of the article will start immediately after the abstract. \section{The text} \subsection{Sections, subsections and subsubsections} The text of articles may be divided into sections, subsections and, where necessary, subsubsections. To start a new section, end the previous paragraph and then include \verb"\section" followed by the section heading within braces. Numbering of sections is done {\it automatically} in the headings: sections will be numbered 1, 2, 3, etc, subsections will be numbered 2.1, 2.2, 3.1, etc, and subsubsections will be numbered 2.3.1, 2.3.2, etc. Cross references to other sections in the text should, where possible, be made using labels (see section~\ref{xrefs}) but can also be made manually. See section~\ref{eqnum} for information on the numbering of displayed equations. Subsections and subsubsections are similar to sections but the commands are \verb"\subsection" and \verb"\subsubsection" respectively. Sections have a bold heading, subsections an italic heading and subsubsections an italic heading with the text following on directly. \small\begin{verbatim} \section{This is the section title} \subsection{This is the subsection title} \end{verbatim}\normalsize The first section is normally an introduction, which should state clearly the object of the work, its scope and the main advances reported, with brief references to relevant results by other workers. 
In long papers it is helpful to indicate the way in which the paper is arranged and the results presented. Footnotes should be avoided whenever possible and can often be included in the text as phrases or sentences in parentheses. If required, they should be used only for brief notes that do not fit conveniently into the text. The use of displayed mathematics in footnotes should be avoided wherever possible and no equations within a footnote should be numbered. The standard \LaTeX\ macro \verb"\footnote" should be used. Note that in \verb"iopart.cls" the \verb"\footnote" command produces footnotes indexed by a variety of different symbols, whereas in published articles we use numbered footnotes. This is not a problem: we will convert symbol-indexed footnotes to numbered ones during the production process. \subsection{Acknowledgments} Authors wishing to acknowledge assistance or encouragement from colleagues, special work by technical staff or financial support from organizations should do so in an unnumbered `Acknowledgments' section immediately following the last numbered section of the paper. In \verb"iopart.cls" the command \verb"\ack" sets the acknowledgments heading as an unnumbered section. Please ensure that you include all of the sources of funding and the funding contract reference numbers that you are contractually obliged to acknowledge. We often receive requests to add such information very late in the production process, or even after the article is published, and we cannot always do this. Please collect all of the necessary information from your co-authors and sponsors as early as possible. \subsection{Appendices} Technical detail that it is necessary to include, but that interrupts the flow of the article, may be consigned to an appendix. Any appendices should be included at the end of the main text of the paper, after the acknowledgments section (if any) but before the reference list. 
If there are two or more appendices they should be called Appendix A, Appendix B, etc. Numbered equations will be in the form (A.1), (A.2), etc, figures will appear as figure A1, figure B1, etc and tables as table A1, table B1, etc. The command \verb" \section{Abstract} Any search or sampling algorithm for the solution of inverse problems needs guidance to be efficient. Many algorithms collect and apply information about the problem on the fly, and much improvement has been made in this way. However, as a consequence of the No-Free-Lunch Theorem, the only way we can ensure significantly better performance of search and sampling algorithms is to build in as much information about the problem as possible. In the special case of Markov Chain Monte Carlo sampling (MCMC) we review how this is done through the choice of proposal distribution, and we show how this way of adding more information about the problem can be made particularly efficient when based on an approximate physics model of the problem. A highly nonlinear inverse scattering problem with a high-dimensional model space serves as an illustration of the gain of efficiency through this approach. \vspace*{10mm}\noindent {\small {\it Keywords}: Inverse Problems, Seismic Inversion, Probabilistic Inversion, Markov Chain Monte Carlo, Sampling Methods.} \section{Introduction} Over the last 25 years, Monte Carlo methods have been established as a main tool for providing solutions and uncertainty estimates for small- to intermediate-scale, highly nonlinear inverse problems. This development is closely connected to the dramatic increase in computational speed over the last few decades. However, there has also been an increasing demand for solving inverse problems on a larger scale, with more time-consuming forward calculations, e.g., \cite{Fichtner18}, and more complex a priori information, e.g., \cite{Lange12,Grana10}.
In this connection it has become clear that straightforward use of standard Monte Carlo algorithms is unfeasible, and recent years have seen a surge of modified samplers with more and more sophisticated sampling strategies \cite{Tierney99,Haario06,Vrugt16,Ying20}. Useful improvements have been found, but there is a growing impression amongst practitioners that Monte Carlo strategies are fundamentally slow, and that alternatives should be found. This experience has indeed led to improvements where quite efficient solutions, all tailored to the problem at hand through a priori constraints and/or well-chosen simplifying assumptions, have shown promising results (see, e.g., \cite{Fjeldstad18}). Another recent development is an attempt to perform often time-consuming likelihood calculations with neural networks, trained on a very large number of model-data pairs sampled from a prior probability distribution \cite{Andrieu03,Scheidt18,Nawaz19,Holm-Jensen20}. Research in Monte Carlo methods has often been based on a search for new -- often surprising -- inspiration that will allow efficient calculation with simple operations. In the early years of Monte Carlo developments there were many examples of this: Simulated Annealing \cite{Kirkpatrick83}, Hamiltonian Monte Carlo \cite{Duane87}, Simulated Tempering \cite{Marinari92}, Evolutionary Algorithms \cite{Holland92}, etc., all using ideas from other scientific fields to improve sampling, and the benefit has been new ways of building useful intuition to improve our understanding of sampling processes. In recent years we see a continuation of this trend in statistics literature \cite{Roberts09}, and all these methods have brought some success, the degree of which depends on the category of problems they are applied to. The 'race of Monte Carlo ideas' has been accompanied by intense discussions in the research community about the efficiency of algorithms.
Not only have intuitive ideas been held up against each other, but arguments for and against methodologies have also been accompanied by numerical experiments to support the conclusions. This approach apparently rests on a sound basis, but if we take a closer look at the way algorithm comparisons are typically carried out, we discover a common deficiency: In very few cases, if any, are algorithms compared by solving {\em exactly} the same problem. At the surface, test problems look similar, but a closer look reveals that the information available to algorithms in the same test differs significantly. As a result, comparisons often become meaningless, but there is one thing that seems clear from most comparative studies: The more information about the inverse problem we build into the code of an algorithm, the more efficient the algorithm is. The purpose of this paper is to explore how additional information in Monte Carlo sampling may significantly reduce the computational workload. We will first discuss the reasons for the often excessive time-consumption of Monte Carlo strategies. We will then turn to the problem of finding and applying supplementary information to speed up calculations, not from external, independent sources (a priori information), but from the physical problem itself. Our aim will be to apply this information in a way that will not bias the sampling asymptotically. We shall explore and support our findings through numerical experiments. Our test example will be the acoustic inverse scattering problem for a vertical plane wave hitting a horizontally stratified medium with varying acoustic impedance (product of wavespeed and mass density). This problem is highly nonlinear due to internal multiple scattering (echoes) and attenuation in the medium. Since our aim is to evaluate solutions and their uncertainties, we use Markov Chain Monte Carlo (MCMC) for the analysis.
We compare a straightforward MCMC sampling approach, where the proposal distribution is arbitrary, with one where the proposal mechanism is designed from an approximation to the forward relation. The result is a significant improvement in the algorithm's efficiency. \section{Markov Chain Monte Carlo and the Proposal Problem} \subsection{Proposal Distributions} The basic idea behind any implementation of Markov Chain Monte Carlo (MCMC) is an interplay between {\em proposals} and {\em rejections}. In each iteration, sampling from a probability density $f({\bf x})$ over a space ${\cal X}$ proceeds from a current value ${\bf x}$ by first randomly proposing a new value ${{\bf x}}'$ according to the so-called {\em proposal distribution} $q({{\bf x}}' | {\bf x})$, followed by a random decision where ${{\bf x}}'$ is accepted with probability \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}') q({\bf x}|{{\bf x}}')}{f({\bf x}) q({{\bf x}}'|{\bf x})},1 \). \label{eq: acc-prob} \end{equation} This acceptance probability ensures that, once an equilibrium sampling distribution is established, it will be maintained through {\em microscopic reversibility}, because the probability $P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'} q({{\bf x}}'|{\bf x}) f({\bf x})$ of a transition from ${\bf x}$ to ${{\bf x}}'$ equals the probability of the reverse transition, $P_{\rm acc}^{{{\bf x}}' \rightarrow {{\bf x}}} q({\bf x}|{{\bf x}}') f({{\bf x}}')$ \cite{Mosegaard02}. At this point it is important to note that the proposal distribution has no influence on the distribution to which the sampling converges; it only influences the speed of convergence. \bigskip\noindent The two most common types of proposal distributions are: \begin{enumerate} \item {\em Local} proposal distributions $q$, where $q({{\bf x}}'|{\bf x})$ depends on the starting point ${{\bf x}}$.
A frequent assumption is translation invariance where $q({{\bf x}}'|{\bf x}) = q({{\bf x}}' + {\bf a}|{{\bf x}} + {\bf a})$ for any shift ${\bf a}$ in the parameter space. Another common assumption is symmetry: $q({{\bf x}}' | {\bf x}) = q({\bf x} | {{\bf x}}')$, and in this case we get a simpler expression for the acceptance probability (\ref{eq: acc-prob}): \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}')}{f({\bf x})},1 \). \label{eq: acc-prob-sym} \end{equation} \item {\em Global} proposal distributions $q$ that are independent of the starting point ${{\bf x}}$. This means that $q({\bf x}|{{\bf x}}') = h({\bf x})$ where $h({\bf x})$ is fixed during the sampling process. If $h({\bf x})$ is in some sense close to the target distribution $f({\bf x})$, $h$ is often called a "surrogate" (for $f$). \end{enumerate} An MCMC sampler is only efficient if large enough steps (connecting any two areas of high values of $f({\bf x})$ in a few steps) are frequently accepted. This ability critically depends on $q({{\bf x}}' | {\bf x})$, and requires that $q({{\bf x}}' | {\bf x})$ is (at least) locally similar to $f({\bf x}')$.
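The accept/reject rule in (\ref{eq: acc-prob}) can be sketched in a few lines of Python. The Gaussian target and random-walk proposal used for illustration are assumptions of this sketch, not part of the problems discussed in this paper:

```python
import math
import random

def mcmc_step(x, log_f, propose, log_q):
    """One Metropolis-Hastings update with a general (possibly
    asymmetric) proposal.
    log_f(x)    : log of the (unnormalized) target density f
    propose(x)  : draws x' from q(x'|x)
    log_q(a, b) : log q(a|b), up to an additive constant
    """
    x_new = propose(x)
    # log of the ratio f(x') q(x|x') / (f(x) q(x'|x))
    log_ratio = (log_f(x_new) + log_q(x, x_new)
                 - log_f(x) - log_q(x_new, x))
    if math.log(random.random()) < min(log_ratio, 0.0):
        return x_new   # accept
    return x           # reject: the chain stays at x

# Illustration (assumed, for this sketch only): standard Gaussian target
# and a symmetric random walk, for which log_q cancels in the ratio.
log_f = lambda x: -0.5 * x * x
propose = lambda x: x + random.gauss(0.0, 1.0)
log_q = lambda a, b: 0.0

random.seed(1)
x, samples = 0.0, []
for _ in range(20000):
    x = mcmc_step(x, log_f, propose, log_q)
    samples.append(x)
mean = sum(samples) / len(samples)
```

With an asymmetric proposal only `log_q` changes; the correction ratio $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$ is already accounted for in `log_ratio`.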
This is revealed by a close look at the expression for the transition probability from ${\bf x}$ to ${\bf x}'$: \begin{equation} P({{\bf x}}' |{{\bf x}})= q({{\bf x}}'|{\bf x}) \cdot {\rm min} \( \frac{f({{\bf x}}') q({\bf x}|{{\bf x}}')}{f({\bf x}) q({{\bf x}}'|{\bf x})},1 \) \ , \label{eq_accept} \end{equation} showing that, for $f({{\bf x}}') \ge f({{\bf x}})$ and a large $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$, the transition ${{\bf x}} \rightarrow {{\bf x}}'$ is most likely, but for $f({{\bf x}}') < f({{\bf x}})$ it is only likely when \begin{enumerate} \item $f({{\bf x}}')$ and $q({{\bf x}}'|{{\bf x}})$ are both large at ${{\bf x}}'$, and \label{cond-1} \item $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$ is large. \label{cond-2} \end{enumerate} We will now see how implementations of local and global proposals may address these conditions. \subsection{Local proposals} The use of local proposals is an attempt to satisfy the above two conditions: \begin{enumerate} \item This condition is met by aiming to choose a $q({{\bf x}}'|{{\bf x}})$ so narrowly that most of $q$'s support coincides with high values of $f$. The underlying assumption here is that $f$ is somehow smooth in the neighborhood of ${{\bf x}}$. In the absence of external information about the smoothness of $f$, one must usually resort to experimentation with different widths of $q$. \item This condition is usually met by using a symmetric $q$: $q({{\bf x}}'|{\bf x}) = q({{\bf x}}|{\bf x}')$. In this way, the ratio $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$ is always $1$ (and hence never "small"). \end{enumerate} Local proposals are widely used, but they have at least two serious drawbacks. Firstly, if they are too narrow, the proposed steps will be so small that the algorithm needs many iterations to traverse the parameter space. As a result, many iterations are required to produce sufficiently many independent samples from the space.
Secondly, even a very narrow proposal may not approximate the target distribution $f({\bf x})$ very well. To investigate and exemplify the latter problem in high-dimensional spaces, let us consider the case where the target distribution of ${{\bf x}}$ is Gaussian with covariance matrix ${\bf C}$ and mean ${\bf x}_0$: $f({{\bf x}}) = {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})$. Assume for illustration that our proposal distribution is an isotropic Gaussian $q({{\bf x}}|{{\bf x}}_q) = {\cal N}_{{\bf x}} ({\bf x}_q,{\bf C}_q)$ with mean ${{\bf x}}_q$ and covariance matrix ${\bf C}_q$, and that we, in the sampling process, have been fortunate to arrive at a point with a high value of $f({{\bf x}})$, say, for simplicity, at its maximum point ${{\bf x}}_0$. We can now calculate the expected acceptance probability $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ of a point ${{\bf x}}$ proposed in the next step by the algorithm: \begin{equation} \begin{split} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) &= \int_{{\cal X}} \frac{f({{\bf x}})}{f({{\bf x}}_0)} q({{\bf x}}|{{\bf x}}_0) d{{\bf x}} \\ &= \int_{{\cal X}} \frac{{\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C}_q) d{{\bf x}} \\ &= \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})}\int_{{\cal X}} {\cal N}_{{\bf x}} ({\bf x}_1,{\bf C}_1) d{{\bf x}} \end{split} \label{eq: Meanf} \end{equation} where \begin{equation} {\bf x}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} ({\bf C}^{-1} {\bf x}_0 + {\bf C}_q^{-1} {\bf x}_0) = {\bf x}_0 \end{equation} and \begin{equation} {\bf C}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} \, .
\end{equation} Since the last integral in (\ref{eq: Meanf}) is $1$, we have the following expression for the expected acceptance probability: \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} = \( \frac{{\rm det}\( 2\pi {\bf C} \)}{{\rm det}\( 2\pi ({\bf C}+{\bf C}_q) \)} \)^{1/2} \, . \end{equation} Both ${\bf C}_q = \sigma_q^2 {{\bf I}}$ (with $\sigma_q^2 > 0$) and ${\bf C}$ are diagonal in the frame spanned by ${\bf C}$'s eigenvectors, and if we assume that the eigenvalues of ${\bf C}$ are $\sigma_1^2 \ge \dots \ge \sigma_N^2 > 0$, where $N$ is the dimension of ${\cal X}$, the eigenvalues of ${\bf C}+{\bf C}_q$ are $(\sigma_1^2+\sigma_q^2), \dots ,(\sigma_N^2+\sigma_q^2)$. From this we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \prod_{n=1}^N \( \frac{\sigma_n^2}{\sigma_n^2+\sigma_q^2} \)^{1/2} \, . \label{eq-eigenC1} \end{equation} From (\ref{eq-eigenC1}) we see that for any non-zero values of $\sigma_n$ and $\sigma_q$ we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) \rightarrow 0 \quad {\rm for} \quad N \rightarrow \infty \, . \end{equation} expressing the influence from the so-called 'curse of dimensionality' on the sampling process. If the proposed steps are kept very short ($\sigma_q$ is small compared to all $\sigma_n$), the decrease of $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ with $N$ is slow. But this situation is of no practical value, because adequate sampling by the algorithm requires that it can traverse high-probability areas of $f({{\bf x}})$ within a reasonable amount of time. For non-negligible step lengths, the situation is radically different. Indeed, if there exists an integer $K$ and a real constant $k$ such that $\sigma_q > k\sigma_n$ for all $n > K$, then $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ decreases more than exponentially with $N$.
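The speed of this decay is easy to check numerically. A minimal sketch, evaluating (\ref{eq-eigenC1}) in log space for the illustrative spectrum $\sigma_q^2 = 1$, $\sigma_n^2 = 1/n$ (the same choice used in the example below):

```python
import math

def expected_acceptance(N, sigma_q2=1.0):
    # E(P) from eq. (eq-eigenC1) for the assumed eigenvalue spectrum
    # sigma_n^2 = 1/n, accumulated in log space to avoid underflow.
    log_p = 0.0
    for n in range(1, N + 1):
        sigma_n2 = 1.0 / n
        log_p += 0.5 * math.log(sigma_n2 / (sigma_n2 + sigma_q2))
    return math.exp(log_p)

for N in (2, 10, 100):
    print(N, expected_acceptance(N))
```

For this particular spectrum the product collapses to $(1/(N+1)!)^{1/2}$, which makes the super-exponential decay explicit.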
In other words, if the distribution $f({{\bf x}})$ is 'elongated' compared to the proposal $q$, that is, if it is broader than $q$ in only a fixed number $K < N$ of directions/dimensions, the mean number of accepted moves will decrease at least exponentially with the number of dimensions. As an example, let us consider the case where $\sigma_q^2 = 1$ and $\sigma_n^2 = 1/n$. For $N=2$ this gives an expected acceptance probability of $0.4082$, corresponding to a mean waiting time of about $0.4082^{-1} \approx 2.5$ iterations between accepted moves. For $N=10$ the expectation is $1.5828 \cdot 10^{-4}$, and for $N=100$ it decreases to $1.03 \cdot 10^{-80}$, giving a waiting time of about $3 \cdot 10^{63}$ years at one billion iterations per second. The above analysis is carried out under the favorable assumption that the maximum of $f({{\bf x}})$ has been located by the algorithm, and does not even consider the serious difficulties faced by the sampling algorithm in the initial search for points with high values of $f({{\bf x}})$ (the {\em burn-in} phase). Hence, it is clear that the proposal mechanism, as defined by $q$, is the Achilles heel of the standard MCMC approach. \subsection{Global proposals} A global proposal $q({{\bf x}}'|{{\bf x}})$ is independent of ${{\bf x}}$ and hence it can be written $q({{\bf x}}'|{{\bf x}}) = h({{\bf x}}')$. The use of global proposals seeks to meet the requirements of (\ref{cond-1}) and (\ref{cond-2}) by choosing $h({{\bf x}}') \approx f({{\bf x}}')$, ensuring that \begin{enumerate} \item $q$ and $f$ are everywhere similar, and \item when $f({{\bf x}}') \leq f({{\bf x}})$ the condition $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x}) \gtrapprox 1$ is always met. \end{enumerate} In fact, from (\ref{eq_accept}) it is easily seen that global proposals are ideal if they closely resemble the target distribution.
In the ideal case where $h({{\bf x}}') = f({{\bf x}}')$, the transition probability is equal to $f({{\bf x}}')$, and the sampler has no rejected moves. Arbitrarily large steps in the sample space are allowed, and therefore all sample points are statistically independent. However, the problem with global proposals is to find them in the first place. There are, in principle, two approaches: \begin{enumerate} \item Using, as proposal, a local approximation $h({\bf x})$ to $f({\bf x})$, estimated/interpolated from already visited sample points in the neighborhood of ${\bf x}$ \cite{Christen05,Ying20}. This proposal may be consistent with (similar to) $f$ in the neighborhood of existing sample points. \label{local-prop} \item Using a global approximation $h({\bf x})$ derived from external information about $f({\bf x})$, that is, {\em not} derived from already visited sample points. This proposal should be consistent with (similar to) $f$ even far away from existing sample points. \label{global-prop} \end{enumerate} In the following we shall show an example of the use of global proposals in inverse problems. Our global proposal will be constructed from external information about the target distribution $f$ using an approximate forward function that is independent of known values of $f$. However, before we proceed, we shall first understand the fundamental advantage of (\ref{global-prop}) over (\ref{local-prop}). To this end, we shall look into an important theorem, proven in the late 90s, namely the No-Free-Lunch Theorem \cite{Wolpert97}. \section{No-Free-Lunch Theorems and the importance of information} We will now make an important distinction between {\em blind algorithms} and {\em informed algorithms}. We use the following definitions: \begin{enumerate} \item A {\em blind algorithm} is an algorithm whose search or sampling is performed only via an {\em oracle}.
An oracle is a function that, when called by the algorithm, is able to evaluate the target distribution $f$ at a given point ${{\bf x}}$. The oracle is used by the algorithm as a black box: No other properties of $f$ than the corresponding inputs and outputs are used. In computer science, blind algorithms are often called {\em heuristics}. For inversion, there are many well-known examples of blind algorithms in use: Regular MCMC, Simulated Annealing, Genetic Algorithms, Neural Networks, etc. \item An {\em informed algorithm} is an algorithm that, in addition to an oracle, uses known, {\em external} properties of $f$ to guide/improve the search or sampling. By external properties we mean any information about $f$ that is not given by samples from $f$. Examples of informed algorithms used in geophysical inversion are Hamiltonian Monte Carlo, exploiting that for seismic wave fields adjoint methods can be used to efficiently compute misfit gradients \cite{Fichtner18}, and Discriminative Variational Bayesian inversion exploiting knowledge about the statistics of the unknown model in case it is a Markov Random Field \cite{Nawaz19}. \end{enumerate} Based on the No-Free-Lunch Theorem \cite{Wolpert97}, Mosegaard (2010) considered limits for the performance of algorithms designed for solution of inverse problems. The conclusion was that all blind inversion algorithms in finite-dimensional spaces (optimization-based as well as sampling-based) have exactly the same performance, when averaged over all conceivable inverse problems. Only an algorithm that takes into account more characteristics of the "forward model" than given by the oracle can ensure performance that is superior to blind inversion algorithms. We can draw the conclusion that efficient inversion algorithms are the ones that operate in accordance with specific properties of the problems they are aiming to solve.
If the problem is linear with known Gaussian noise statistics and a given Gaussian prior, it can be solved in "one iteration" (applying a closed-form solution formula). If the problem is mildly nonlinear with, e.g., Gaussian noise and Gaussian prior, our knowledge that the posterior probability distribution is unimodal will render the problem solvable in relatively few iterations. For a highly nonlinear problem, the situation is, in principle, the same, except that the term "highly nonlinear" usually signals a lack of knowledge of the shape of the posterior. The posterior may be highly multimodal and possess other pathologies, but we may still have some sparse knowledge about it, for instance that it has a certain smoothness. Irrespective of what we know about the target posterior distribution, we have the option of building this information into the algorithm. If we have plenty of information, we can create an efficient algorithm. If we have sparse information, our algorithm will need more computation time. Countless methods use interpolation to construct local or global approximations to the posterior and to use them as proposals in the sampling process, e.g., \cite{Christen05,Ginting11,Jin11,Stuart19,Ying20}, Laloy et al.\ (2013), and Georgia et al.\ (2019). These methods are useful and may improve performance, but they still suffer from the limitations set by the No-Free-Lunch Theorem, because they do not bring in additional, external information. In the following we will suggest an approach that allows us to design more efficient inversion algorithms through incorporation of additional, external information about the target distribution. The approach is general and can be used in deterministic as well as in sampling approaches. In this exposition we will focus on MCMC sampling, and our approach will be to replace a traditional, blind proposal mechanism with one built from a simplified forward model.
Because the proposal is based on approximate physics, the chance of obtaining a good global approximation to the posterior is high. \section{MCMC with Problem-dependent Proposals} Let us now consider algorithms that bring in new, external information about the target posterior distribution $f({{\bf x}})$. An approximation $\tilde{f} ({{\bf x}}) \approx f({{\bf x}})$, constructed from a simplified version of the physics behind the correct distribution $f$, will be used as a proposal. This proposal will not only be close to $f$ in the neighborhood of points already visited by the algorithm; it is also expected to work well far away from current samples, because it is guided by the physics of the problem. \subsection{Linear, Gaussian Problems} Sampling solutions to a linear Gaussian problem with MCMC is straightforward. Since we have an explicit expression for the Gaussian posterior, the distribution itself can be used as an optimal proposal. Samples ${{\bf z}}$ from an $N$-dimensional standard (isotropic) Gaussian (mean $\bf 0$ and covariance ${\bf I}$) can be generated with, e.g., the Box-M\"uller method, and the desired samples ${{\bf m}}$ from an $N$-dimensional multivariate Gaussian with mean ${{\bf m}}_0$ and covariance ${\bf C}$ can then be calculated as ${{\bf m}} = {{\bf m}}_0 + {{\bf A}}{{\bf z}}$, where ${{\bf A}}{{\bf A}}^T = {{\bf C}}$. The matrix ${{\bf A}}$ can be found by, for instance, Cholesky decomposition. \subsection{Nonlinear Problems} For nonlinear inverse problems, let us consider the general expression for the joint posterior probability in the formulation of Tarantola and Valette (1982): \begin{equation} \sigma({\bf d},{\bf m}) = \frac{\rho({\bf d},{\bf m}) \theta({\bf d},{\bf m})}{\mu({\bf d},{\bf m})} \end{equation} where ${\bf d}$ is the data, ${\bf m}$ the model parameters, and $\rho({\bf d},{\bf m})$ and $\mu({\bf d},{\bf m})$ are the prior and the homogeneous probability densities in the joint $({\bf d},{\bf m})$-space, respectively.
The density $\theta({\bf d},{\bf m})$ expresses the "uncertainty of the forward relation" between ${\bf m}$ and the data ${\bf d}$. For simplicity, let us assume that the homogeneous probability density $\mu({\bf d},{\bf m})$, as well as the marginal prior in the model space $\rho_m ({\bf m})$, is constant, which leads us to the following expression for the joint posterior: \begin{equation} \sigma({\bf d},{\bf m}) = \rho({\bf d}) \theta({\bf d},{\bf m}) \end{equation} Under the further assumption that the observational data uncertainties are small compared to the modelization errors, we arrive at the approximation \begin{equation} \sigma_m({\bf m}) = \sigma({\bf d},{\bf m}) \approx \theta({\bf d}_{obs},{\bf m}) \end{equation} This is a very rough approximation, but it should be remembered that we will not replace the accurate posterior by this expression. The approximation will only be used as a global proposal distribution to speed up the search/sampling from the correct posterior. The question is now how we can find an acceptable expression for $\theta({\bf d}_{obs},{\bf m})$. In this paper we will adopt the following simple procedure: \begin{enumerate} \item Choose a simplified forward function $\tilde{g}({\bf m})$ expressing much of the essential physics, and at the same time allowing an efficient (but probably inaccurate) inversion. This step can be skipped if a direct way to the following step (without a formal inversion) is available. \item Find a solution $\tilde{{\bf m}} = h({\bf d}_{obs})$ to the simplified problem with an acceptable data fit. \item Estimate the modelization error introduced by using $\tilde{g}({\bf m})$ instead of the accurate forward function $g({\bf m})$. This error is quantified by the distribution $\tilde{\theta}({\bf d}_{obs},{\bf m})$, which is also a rough approximation to the posterior ${\tilde{\sigma}}_m({\bf m})$ computed through $\tilde{g}({\bf m})$.
The procedure is: \begin{enumerate} \item The "true" modelization error is $$\delta{{\bf m}}_{true} = \tilde{{\bf m}} - {{\bf m}}_{true} ,$$ but since ${{\bf m}}_{true}$ is unknown, we compute instead an approximate modelization error $$\delta{{\bf m}}_{approx} = \tilde{{\bf m}} - h(g(\tilde{{\bf m}})) .$$ The above formula estimates what the modelization error would have been if $\tilde{{\bf m}}$ had been the true model. If $\tilde{{\bf m}}$ is close to ${{\bf m}}_{true}$, we expect that $\delta{{\bf m}}_{approx}$ will be close to $\delta{{\bf m}}_{true}$. \item Use $\delta{{\bf m}}_{approx}$ to construct a reasonable approximation to the modelization error distribution $\tilde{\theta}({\bf d}_{obs},{\bf m})$, centered at $\tilde{{\bf m}}$. This can be done by assuming a functional form for $\tilde{\theta}({\bf d}_{obs},{\bf m})$ and by using the components of $\delta{{\bf m}}_{approx}$ to obtain the parameters of $\tilde{\theta}({\bf d}_{obs},{\bf m})$. An example of this can be found in the following section. \end{enumerate} \end{enumerate} \bigskip\noindent \section{Numerical Example} To illustrate the gain of computational efficiency obtained by using even a rough approximation to a high-dimensional target posterior as proposal, we shall look at a 1D inverse scattering problem. The unknown model is a horizontally stratified medium with 1000 homogeneous layers. Figure 1B shows the acoustic impedance as a function of distance from the surface. A plane-wave seismic pulse (modeled as a Ricker wavelet) is injected perpendicularly into the medium at the surface, and the data (backscattered waves from the medium) are recorded at the surface (Figure 1A, left). The data are synthetic 1-D full-waveform seismic signals generated by the propagator matrix method, containing all multiple reflections, transmission losses and damping effects, so the inverse problem of recovering the model from the data is highly nonlinear.
For comparison, an approximate seismogram, computed by convolution of the reflectivity with the Ricker wavelet, is shown in Figure 1A (middle), together with its error (deviation from the correct seismogram) to the right. Figure 1C shows an approximate solution to the inverse scattering problem in the absence of noise, computed by deconvolution, and converted to impedance through trace integration and addition of the slowly varying trend from Figure 1B. The approximate solution requires very little computation time, but is clearly inaccurate (compare to the "true" model in Figure 1B). The purpose of the study is to show how the approximate result can be used to efficiently produce a more accurate solution with uncertainty estimates using Markov Chain Monte Carlo (MCMC). \begin{figure} \includegraphics[width=5.0in]{MCinformedStep120520.pdf} \caption{\small (A) Left: Accurate seismogram from B; Center: seismogram computed by convolution; Right: error of the convolution seismogram. (B) True acoustic impedance (C) Acoustic impedance computed by deconvolution (impedance trend from B is added). (D) Envelope of true modelization error (deconvolution impedance minus true impedance). (E) Envelope of estimated modelization error. (F) A sample model from the Informed Proposal Monte Carlo inversion. (G) Median of 10000 sample models.} \label{fig: panel} \end{figure} \begin{figure} \begin{center} \includegraphics[width=2.5in]{IMC-MCMC-iter-2000-Misfits-xx.pdf} \caption{\small Convergence towards equilibrium of a classical MCMC algorithm (upper curve), attempting to sample solutions to our test inverse problem. The lower curve is the fast-converging Informed Proposal Monte Carlo (IPMC) algorithm, which was guided by linearized inversion. 
In this case the convergence of the guided algorithm was between $10^3$ and $10^4$ times faster than the classical MCMC algorithm (with a tuned, isotropic proposal).} \end{center} \end{figure} Our aim is to produce enough samples from the posterior probability distribution in reasonable time, and this raises a well-known problem, namely that the traditional MCMC approach is unfeasible for problems with more than a couple of hundred parameters. Our way of speeding up the sampling is to construct a global proposal distribution for the MCMC sampling using the approximate solution $\tilde{{\bf m}}$. First, we compute the estimated modelization error vector $\delta{{\bf m}}_{approx}$ using the method described in the previous section. Figure 1E shows the envelope of the components of this vector, and for comparison, the true modelization error (known in this synthetic data case) is shown in Figure 1D. The proposal distribution is then built as a Gaussian with mean $\tilde{{\bf m}}$ and a diagonal covariance matrix ${\bf C}_{\theta}$ whose diagonal is the squared components of the envelope function. The 1000-parameter problem is now solved in two ways: (1) via a classical MCMC with an isotropic ad-hoc proposal distribution where the step length is adjusted to obtain an acceptance rate of approximately 50\%, and (2) via an Informed Proposal Monte Carlo (IPMC) algorithm driven by our proposal derived above. Figure 2 (upper curve) shows the slow convergence to equilibrium of the classical MCMC in the first 2000 iterations of the inversion process. The lower curve shows the much faster convergence of the algorithm guided by the linearized solution. The improvement in convergence time is significant, in this case between $10^3$ and $10^4$ times faster when started at the model $\tilde{{\bf m}}$ obtained by linear inversion (deconvolution).
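The proposal construction used in this example can be sketched as follows. The running-maximum envelope and the stand-in solvers `g` (accurate forward) and `h` (approximate, deconvolution-like inverse) are illustrative assumptions of this sketch; only the overall procedure follows the text:

```python
import random

def envelope(x, half_width=10):
    """Crude amplitude envelope of a series: running maximum of |x|.
    (An assumption; an envelope via the Hilbert transform would also work.)"""
    return [max(abs(v) for v in x[max(0, i - half_width):i + half_width + 1])
            for i in range(len(x))]

def build_global_proposal(d_obs, g, h):
    """Build the Gaussian proposal N(m_tilde, C_theta): mean at the
    approximate solution, diagonal C_theta from the estimated modelization
    error.  g: accurate forward solver, h: approximate inverse."""
    m_tilde = h(d_obs)                                     # approximate solution
    d_m = [a - b for a, b in zip(m_tilde, h(g(m_tilde)))]  # delta m_approx
    sigma = envelope(d_m)                                  # sqrt of diag(C_theta)
    def propose():
        return [m + s * random.gauss(0.0, 1.0) for m, s in zip(m_tilde, sigma)]
    return m_tilde, sigma, propose

# Toy stand-ins for the solvers (assumptions, for illustration only):
g = lambda m: [0.9 * v for v in m]          # "accurate" forward relation
h = lambda d: [v / 0.9 + 0.01 for v in d]   # approximate inverse
random.seed(0)
m_tilde, sigma, propose = build_global_proposal([1.0, 2.0, 3.0, 4.0], g, h)
```

Since the resulting proposal is global (independent of the current model), the ratio $q({{\bf x}}|{{\bf x}}')/q({{\bf x}}'|{{\bf x}})$ in (\ref{eq: acc-prob}) does not cancel and must be retained in the acceptance probability.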
\subsection{Discussion} It is important to realize that the significantly improved efficiency provided by the physical proposal in this study does {\em not} result from prior constraints. Priors generally assign different probabilities to different solutions, but this is not the case with a proposal. A proposal only influences the frequency with which models are presented to the acceptance/rejection algorithm. The bias of the proposal will, asymptotically, be neutralized because it is compensated for in the acceptance probability. In this way it will only influence the efficiency of the sampler, not the asymptotic result. It should, however, be remembered that the most serious problem in non-linear inversion is that the number of models we can practically test is limited. And considering that highly non-linear problems are often so complex that they can only be safely solved with a high number of approximately independent samples from the posterior, it is clear that using an efficient proposal will not only be an improvement in speed, but also a potential improvement in the quality of solutions. Simply speaking, we can expect to discover more significantly different solutions (peaks of the target distribution) within the allowed computer resources than with a plain MCMC implementation. We have illustrated how important it is for the proposal to mimic the posterior in MCMC sampling of solutions to inverse problems. However, the idea of using the physics of the problem to build a posterior-like proposal is not restricted to Monte Carlo sampling. Any method depending on a search for sample solutions or good data fits can potentially benefit from this strategy. In an interesting recent paper on variational full-waveform inversion \cite{Zhang20}, it is shown how variational methods may be used to modify samples from the prior into samples of the posterior in the solution of large-scale inverse problems.
It is likely that this class of methods may, in the future, be further improved through application of informed proposal mechanisms. \subsection{Conclusion} We have analyzed the impact of proposal distributions on the performance of MCMC sampling methods when applied to the solution of inverse problems. We concluded that the "small step" strategies used in traditional implementations are relatively efficient because they impose a local consistency between the proposal distribution and the target (posterior) distribution: the target probabilities tend to be large where the proposal probabilities are large. Nevertheless, we showed by a simple analytical example that even local consistency may be difficult to obtain when local "small-step" proposals are arbitrary. Furthermore, a main problem with local proposals is the limited step length, which strongly hampers the exploration of vast, high-dimensional spaces. The volumes of high-probability areas are negligible in such spaces, so burn-in times, and the times needed to pass from one maximum to another, can be prohibitive for small-step algorithms. Our solution to these problems is to use global proposals built from external information about the target distribution. We propose to use simplified physics of the problem to ensure global consistency between the proposal and the target distribution. The efficiency of this approach will be highly problem-dependent and strongly conditioned on the choice of the external proposal, but we successfully carried out a test on a $1000$-parameter, highly nonlinear inverse scattering problem. Our gain in efficiency was in this case of the order of $10^4$. \subsection{Acknowledgments} This work was supported by Innovation Fund Denmark through the OPTION Project (5184-00025B). Klaus Mosegaard would like to thank Dr.
Amir Khan and colleagues at the Department of Earth Sciences, ETH, for their hospitality and inspiring discussions during the fall of 2017, where this work was initiated. \section{Introduction} Markov chain Monte Carlo (MCMC) sampling is the standard tool for characterizing solutions of highly non-linear inverse problems, but its performance is governed by the choice of proposal distribution. In this paper we analyze the impact of proposal distributions on the efficiency of MCMC sampling of solutions to inverse problems, and we propose to use simplified physics of the forward problem to build global, informed proposals that are consistent with the target (posterior) distribution. \section{Classical MCMC} In this section we follow Tarantola and Valette (1982a), whose formulation of the probabilistic inference problem is based on an expression for the conjunction of information given by two probability densities $f_1({\bf x})$ and $f_2({\bf x})$ over the same space ${\cal X}$: $$ (f_1 \wedge f_2) ({\bf x}) = \frac{f_1({\bf x}) f_2({\bf x})}{\mu({\bf x})} \, , $$ where ${\mu({\bf x})}$ is the homogeneous probability density assigning equal probabilities to equal volumes in ${\cal X}$. They adapt this formula to the inference problem by defining ${\bf x} = ({\bf d},{\bf m})$ as a point in the joint data-model space, and obtain the posterior probability distribution: $$ \sigma({\bf d},{\bf m}) = \frac{\rho({\bf d},{\bf m}) \theta({\bf d},{\bf m})}{\mu({\bf d},{\bf m})} \, . $$ Under the assumption that the priors over ${\bf d}$ and ${\bf m}$ are independent, that is, $\rho({\bf d},{\bf m})=\rho_d({\bf d}) \rho_m({\bf m})$, that $\mu({\bf d},{\bf m})$ is constant, and that the physical relation ${\bf d} = {\bf g}({\bf m})$ is exact, such that $\theta({\bf d},{\bf m})=\delta({\bf d} - {\bf g}({\bf m})) \mu({\bf m})$, we arrive at an expression for the posterior in the model space \begin{equation} \sigma_m({\bf m}) = L({\bf m})\rho_m({\bf m}) \label{TV-Bayes} \end{equation} with $L({\bf m})=\rho_d({\bf g}({\bf m}))$. Equation (\ref{TV-Bayes}) is equivalent to the classical Bayes Formula, which in our notation reads $$ \sigma_m({\bf m} | {\bf d}) = \frac{L({\bf d} | {\bf m})\rho_m({\bf m})}{\rho_d({\bf d})}.
$$ The posterior distribution (\ref{TV-Bayes}) is generally impossible to calculate directly, except for linear inverse problems, so one has to rely on Monte Carlo sampling methods to generate realizations from the distribution. A commonly used approach is the Extended Metropolis Algorithm (\cite{MT95,M06}): \bigskip\noindent \begin{Algorithm} \label{algo: ext-Metro} \textbf{(Extended Metropolis).} Given a random algorithm $V\left( \mathbf{m}\right) $ which iteratively samples the prior probability density ${\rho_m (\mathbf{m})}$: \begin{equation} \mathbf{m}^{(n+1)}=V\left( \mathbf{m}^{(n)}\right) , \label{eq: priorwalk} \end{equation} and an algorithm $U\left( 0,1\right) $ producing random numbers from the interval $\left[ 0,1\right]$ with uniform probability, the random algorithm $W$, which iteratively updates the current parameter vector $\mathbf{m}^{(n)}$: \begin{equation} \mathbf{m}^{(n+1)}=W\left( \mathbf{m}^{(n)}\right) =\left\{ \begin{array}{l} V\left( \mathbf{m}^{(n)}\right) \text{ if }U\left( 0,1\right) \leq \min \left[ 1,\frac{L\left( V\left( \mathbf{m}^{(n)}\right) \right) }{L\left( \mathbf{m}^{(n)}\right) }\right] \\ \mathbf{m}^{(n)}\text{ else} \end{array} ,\right. \label{bigwalk} \end{equation} will asymptotically sample the posterior ${\sigma_m (\mathbf{m}) \propto L(\mathbf{m})\rho_m (\mathbf{m})}$. \end{Algorithm} \bigskip\noindent Algorithm~\ref{algo: ext-Metro} works if the prior sampler $V$ is {\em irreducible} and {\em aperiodic} (see, e.g., \cite{MT02}). \bigskip\noindent The extended Metropolis algorithm will, in general, perform better than a regular Metropolis algorithm, since only models that are accepted a priori (by $V$) will be subject to the time-consuming misfit calculation needed to evaluate $L({\bf m})$.
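As an illustration, the algorithm above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; the prior walk $V$ and the Gaussian log-likelihood below are hypothetical stand-ins for a homogeneous prior and a one-parameter problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def extended_metropolis(m0, V, log_L, n_iter):
    """Extended Metropolis: V proposes a priori acceptable moves,
    and acceptance depends only on the likelihood ratio."""
    m, chain = m0, [m0]
    for _ in range(n_iter):
        m_prop = V(m)                      # sample the prior walk
        # accept with probability min(1, L(m')/L(m)), in log form
        if np.log(rng.uniform()) <= log_L(m_prop) - log_L(m):
            m = m_prop
        chain.append(m)
    return np.array(chain)

# hypothetical stand-ins: symmetric walk (flat prior), Gaussian likelihood at 2
V = lambda m: m + 0.5 * rng.standard_normal()
log_L = lambda m: -0.5 * (m - 2.0) ** 2
chain = extended_metropolis(0.0, V, log_L, 20000)
print(chain[5000:].mean(), chain[5000:].var())   # ≈ 2 and ≈ 1
```

Each proposed move is drawn from the prior sampler, so the (possibly expensive) likelihood is evaluated only for a priori acceptable models.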
\section{No-Free-Lunch Theorems and the importance of information} \label{sec:NFL} A central message of the No-Free-Lunch theorems is that, averaged over all possible target functions, no blind search or sampling strategy outperforms any other; substantial gains in efficiency are only possible when information about the specific problem at hand is built into the algorithm. \section{Blind and Informed Proposal Mechanisms} Despite the improved efficiency of Algorithm (\ref{algo: ext-Metro}) gained from the use of a prior sampler $V$ in the initial phase of each iteration, practical experience shows that the efficiency of extended Metropolis is highly dependent on the choice of $V$. The problem is that the prior $\rho$ and the likelihood function $L$ are usually so different that moves in the model space proposed by $V$ will typically lead to points with low values of $L$. As explained earlier, there is little hope that even the best blind sampling strategy will be able to significantly improve on this problem. In the following we shall therefore describe a method that will allow us to inject information about the physical relation between data and model parameters into the sampling strategy. First, however, let us understand why the ``blind'' strategy of classical MCMC is so inefficient. \subsection{Proposal Distributions in Classical MCMC} The basic idea behind any implementation of the Metropolis Algorithm is an interplay between {\em proposals} and {\em rejections}. In each iteration, sampling from a probability density $f({\bf x})$ over a space ${\cal X}$ proceeds from a current value ${\bf x}$ by first proposing a new value ${{\bf x}}'$ from the so-called {\em proposal distribution} $q({{\bf x}}'|{\bf x})$, followed by a random decision where ${{\bf x}}'$ is accepted with probability \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}') q({\bf x}|{{\bf x}}')}{f({\bf x}) q({{\bf x}}'|{\bf x})},1 \).
\end{equation} This follows from the requirement that the probability that a transition takes place from ${\bf x}$ to ${{\bf x}}'$, namely $P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'} q({{\bf x}}'|{\bf x}) f({\bf x})$, must be equal to the probability of the reverse transition $P_{\rm acc}^{{{\bf x}}' \rightarrow {{\bf x}}} q({\bf x}|{{\bf x}}') f({{\bf x}}')$ to attain equilibrium sampling (detailed balance). At this point it is important to note that the proposal distribution has no influence on the distribution to which the sampling converges; it only influences the speed of convergence. It is common practice (but not a necessity) to work with symmetric proposal distributions, $q({{\bf x}}'|{\bf x}) = q({\bf x}|{{\bf x}}')$, and in this case we get the simpler expression \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}')}{f({\bf x})},1 \). \label{eq_accept} \end{equation} The Metropolis sampler is only efficient if large enough steps (connecting any two high-value areas of $f({\bf x})$ in a few steps) are frequently accepted. This ability depends critically on $q({{\bf x}}'|{\bf x})$, and that is the reason why intensive research in recent years has been devoted to finding improved proposal distributions. A close look at the expression for the unconditional transition probability from ${\bf x}$ to ${\bf x}'$ (for symmetric $q$) \begin{equation} P^{{{\bf x}} \rightarrow {{\bf x}}'}= q({{\bf x}}'|{\bf x}) f({\bf x}) P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}=f({\bf x}) \cdot q({{\bf x}}'|{\bf x}) \cdot {\rm min} \( \frac{f({{\bf x}}')}{f({\bf x})},1 \) \end{equation} shows that, if $f({{\bf x}}')$ turns out to be low in places where $q({{\bf x}}'|{\bf x})$ is high, the move ${{\bf x}} \rightarrow {{\bf x}}'$ is very likely to be rejected.
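The general (possibly asymmetric) acceptance rule can be written compactly; the following sketch (a hypothetical target and proposal, not an example from the paper) implements one Metropolis-Hastings step with the full ratio $f({\bf x}')q({\bf x}|{\bf x}')/f({\bf x})q({\bf x}'|{\bf x})$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_step(x, log_f, q_sample, q_logpdf):
    """One Metropolis-Hastings step with an asymmetric proposal.
    Accepts x' with probability min(1, f(x') q(x|x') / (f(x) q(x'|x)))."""
    x_new = q_sample(x)
    log_ratio = (log_f(x_new) + q_logpdf(x, x_new)
                 - log_f(x) - q_logpdf(x_new, x))
    return x_new if np.log(rng.uniform()) <= log_ratio else x

# hypothetical example: standard Gaussian target, asymmetric proposal N(0.9 x, 1)
log_f = lambda x: -0.5 * x * x
q_sample = lambda x: 0.9 * x + rng.standard_normal()
q_logpdf = lambda xp, x: -0.5 * (xp - 0.9 * x) ** 2   # up to a constant

x, xs = 0.0, []
for _ in range(20000):
    x = mh_step(x, log_f, q_sample, q_logpdf)
    xs.append(x)
```

Omitting the $q$-ratio correction for an asymmetric proposal would bias the sampler, which is precisely why symmetric proposals are popular in practice.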
To investigate and exemplify the importance of this problem in high-dimension\-al spaces, let us consider the case where the distribution of ${{\bf x}}$ is Gaussian with covariance matrix ${\bf C}$ and mean ${\bf x}_0$: $f({{\bf x}}) = {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})$. Assume now that our proposal distribution is a Gaussian $q({{\bf x}}|{{\bf x}}_q) = {\cal N}_{{\bf x}} ({\bf x}_q,{\bf C}_q)$ with mean ${{\bf x}}_q$ and covariance matrix ${\bf C}_q$, and that we, in the sampling process, have arrived at a point with a high value of $f({{\bf x}})$, say, for simplicity, at its maximum point ${{\bf x}}_0$. We can now calculate the expected acceptance probability $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ of the point ${{\bf x}}$ proposed in the next step by the algorithm; since ${{\bf x}}_0$ is the maximum point, the acceptance probability of a proposed ${{\bf x}}$ is simply $f({{\bf x}})/f({{\bf x}}_0)$: \begin{align} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) &= \int_{{\cal X}} \frac{f({{\bf x}})}{f({{\bf x}}_0)} q({{\bf x}}|{{\bf x}}_0) d{{\bf x}} \label{eq: Meanf1} \\ &= \int_{{\cal X}} \frac{{\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C}_q) d{{\bf x}} \\ &= \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})}\int_{{\cal X}} {\cal N}_{{\bf x}} ({\bf x}_1,{\bf C}_1) d{{\bf x}} \label{eq: Meanf} \end{align} where \begin{equation*} {\bf x}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} ({\bf C}^{-1} {\bf x}_0 + {\bf C}_q^{-1} {\bf x}_0) = {\bf x}_0 \end{equation*} and \begin{equation*} {\bf C}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} \, . \end{equation*} Since the integral in (\ref{eq: Meanf}) is $1$, we have the following expression for the expected acceptance probability: \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} = \( \frac{{\rm det}\( 2\pi {\bf C} \)}{{\rm det}\( 2\pi ({\bf C}+{\bf C}_q) \)} \)^{1/2} \, .
\end{equation} Both ${\bf C}_q = \sigma_q^2 {{\bf I}}$ (with $\sigma_q^2 > 0$) and ${\bf C}$ are diagonal in the frame spanned by ${\bf C}$'s eigenvectors, and if we assume that the eigenvalues of ${\bf C}$ are $\sigma_1^2 \ge \dots \ge \sigma_N^2 > 0$, where $N$ is the dimension of ${\cal X}$, the eigenvalues of ${\bf C}+{\bf C}_q$ are $(\sigma_1^2+\sigma_q^2), \dots ,(\sigma_N^2+\sigma_q^2)$. From this we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \prod_{n=1}^N \( \frac{\sigma_n^2}{\sigma_n^2+\sigma_q^2} \)^{1/2} \, . \label{eq-eigenC1} \end{equation} From (\ref{eq-eigenC1}) we see that for any non-zero values of $\sigma_n$ and $\sigma_q$ we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) \rightarrow 0 \quad {\rm for} \quad N \rightarrow \infty \, , \end{equation} expressing the influence on sampling of the so-called ``curse of dimensionality''. If the proposed steps are kept very short ($\sigma_q$ is small compared to all $\sigma_n$), the decrease of $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ with $N$ is slow. But this situation is of no practical value, because adequate sampling by the algorithm requires that it can traverse high-probability areas of $f({{\bf x}})$ within a reasonable amount of time. For non-negligible step lengths, the situation is radically different. Indeed, if there exists an integer $K$ and a real constant $k$ such that $\sigma_q > k\sigma_n$ for all $n > K$, then every factor in (\ref{eq-eigenC1}) with $n > K$ is smaller than $(1+k^2)^{-1/2}$, and $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ decreases at least exponentially with $N$. In other words, if the distribution $f({{\bf x}})$ is ``elongated'' compared to the proposal $q$, that is, if it is broader than $q$ in only a finite number of directions/dimensions, the mean number of accepted moves will decrease at least exponentially with the number of dimensions. As an example, let us consider the case where $\sigma_q^2 = 1$, and $\sigma_n^2 = 1/n$.
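Expression (\ref{eq-eigenC1}) is easy to evaluate numerically; the short Python sketch below (an illustration, with $\sigma_q^2 = 1$ and $\sigma_n^2 = 1/n$ as in this example) reproduces the figures quoted in the text:

```python
import numpy as np

def expected_acceptance(N, sigma_q2=1.0):
    """E(P) from eq-eigenC1 with eigenvalues sigma_n^2 = 1/n."""
    n = np.arange(1, N + 1)
    s2 = 1.0 / n
    return float(np.prod(np.sqrt(s2 / (s2 + sigma_q2))))

for N in (2, 10, 100):
    print(N, expected_acceptance(N))
# 2   -> 0.4082...
# 10  -> 1.5828e-04
# 100 -> 1.03e-80
```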
For $N=2$ this gives an expected acceptance probability of $0.4082$, corresponding to a mean waiting time of about $0.4082^{-1} \approx 2.5$ iterations between accepted moves. For $N=10$ the expectation is $1.5828 \cdot 10^{-4}$, and for $N=100$ it decreases to $1.03 \cdot 10^{-80}$, giving a waiting time of about $3 \cdot 10^{63}$ years at $10^9$ iterations per second. The above analysis is carried out under the very favourable situation where the maximum of $f({{\bf x}})$ has been located by the algorithm, and does not even consider the serious difficulties faced by the sampling algorithm in the initial search for points with high values of $f({{\bf x}})$ (the {\em burn in} phase). So it is clear that the proposal mechanism, as defined by $q$, is the Achilles heel of the standard MCMC approach. \subsection{Sampling with Problem-dependent Proposals} The insurmountable difficulties described in the previous section are rooted in the fact that standard MCMC calculations are essentially blind search algorithms supplemented with a few problem-specific features. To evade the fundamental problems of classical MCMC sampling, we shall therefore draw the consequence of the No-Free-Lunch theorems and propose a class of algorithms that explicitly incorporates information about the problem to be solved. To avoid a mismatch between the distribution $f({{\bf x}})$ to be sampled and the proposal distribution $q({\bf x}' | {\bf x})$, we shall in the following look into proposals $q$ that are approximations to $f$, in the sense that \begin{equation} q({\bf x}' | {\bf x}) = \tilde{f} ({{\bf x}'}) \end{equation} where the Kullback-Leibler divergence \begin{equation} D(f,\tilde{f}) = \int_{\cal X} f({{\bf x}}) \log \( \frac{f({{\bf x}})}{\tilde{f}({\bf x})} \) d{{\bf x}} \end{equation} is small. For a given inverse problem, the challenge is to derive $\tilde{f}({{\bf x}})$.
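The Kullback-Leibler divergence above can be estimated by Monte Carlo whenever $f$ can be sampled; the following sketch (two hypothetical one-dimensional Gaussians, not taken from the paper) compares such an estimate with the known closed form for Gaussians:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical target f = N(0, 1) and approximation f_tilde = N(0.5, 1.5^2)
mu1, s1, mu2, s2 = 0.0, 1.0, 0.5, 1.5

def log_gauss(x, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((x - mu) / s) ** 2

# Monte Carlo estimate of D(f, f_tilde) = E_f[log f(x) - log f_tilde(x)]
x = rng.normal(mu1, s1, size=200_000)
D_mc = np.mean(log_gauss(x, mu1, s1) - log_gauss(x, mu2, s2))

# closed form for two univariate Gaussians
D_exact = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
print(D_mc, D_exact)   # both ≈ 0.183
```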
We shall devote the next section to this problem, where the focus will be on the special conditions characterizing inverse problems. \subsection{Inverse Problems: proposing through approximate forwards} When applying an MCMC method to inverse problems, where $f({\bf x})$ is the posterior $\sigma({\bf x})$ or the likelihood $L({\bf x})$, the expected acceptance probability is given by (\ref{eq: Meanf1}). This expression is largest (equal to $1$) when $f({{\bf x}}) = q({{\bf x}}|{{\bf x}}_0)$ for all ${\bf x}$, or, when $q$ is symmetric, when $f({{\bf x}}) = q({{\bf x}}|{{\bf x}}_0)$ for all ${\bf x}_0$. In case $q({{\bf x}}|{{\bf x}}_0)$ deviates from $f({\bf x})$ there will be areas within ${\cal X}$ where $q({{\bf x}}|{{\bf x}}_0)<f({\bf x})$ as well as areas with $q({{\bf x}}|{{\bf x}}_0)>f({\bf x})$, because both $f$ and $q({{\bf x}}|{{\bf x}}_0)$ are positive with integral $1$ over ${\cal X}$. It is, however, the areas with $q({{\bf x}}|{{\bf x}}_0)<f({\bf x})$ that are critical, because they directly reduce the acceptance probability. In the following we exploit this fact to construct approximate likelihood functions or posterior distributions, allowing efficient sampling of solutions to inverse problems. \subsubsection{Approximate posteriors} Consider an inverse problem with observed data ${\bf d}_{obs}$ and computed data \begin{equation*} {{\bf d}} = {{\bf g}}({{\bf m}}) \, . \end{equation*} Here, ${{\bf g}}$ is an exact forward function. Suppose that we also have an approximate forward relation $\tilde{{\bf g}}$ that allows us to efficiently compute approximate data \begin{equation*} \tilde{{\bf d}} = \tilde{{\bf g}}({{\bf m}}) \, , \end{equation*} and that a (possibly simplified) prior probability density $\tilde{\rho}_m({{\bf m}})$ is available.
With these approximations in hand we can form an approximate posterior probability distribution \begin{equation} \tilde{\sigma}({{\bf m}}) = \tilde{L}({{\bf m}}) \tilde{\rho}_m({{\bf m}}) = \rho_d (\tilde{{\bf g}}({{\bf m}})) \tilde{\rho}_m({{\bf m}}) \end{equation} which can be used directly as a proposal distribution: \begin{equation} q({{\bf m}}'|{{\bf m}}) = \tilde{\sigma}({{\bf m}}') \, . \end{equation} Another approach is to use some ``pseudo-inverse'' function ${\bf h}$ (if available) on the observed data ${\bf d}_{obs}$: \begin{equation*} {{\bf m}}_{est} = {{\bf h}}({\bf d}_{obs}) \, , \end{equation*} and to use the image $\sigma_{pseud}$ of the data distribution $\rho_d$ under ${\bf h}$, \begin{equation} \sigma_{pseud}({{\bf m}}) = \left| \det \frac{\partial {{\bf h}}}{\partial {{\bf d}}} \right|^{-1} \rho_d({{\bf d}}) \, , \qquad {{\bf m}} = {{\bf h}}({{\bf d}}) \, , \end{equation} as a proposal distribution \begin{equation} q({{\bf m}}'|{{\bf m}}) = \sigma_{pseud}({{\bf m}}') \, . \end{equation} However, the fact that we can use the above approximations to the posterior as proposals does not mean that they are efficient. In the following sections we shall analyse some necessary conditions for efficiency, and give examples of implementations. The efficiency conditions will, of course, depend on the category of problems we face (linear, mildly non-linear, highly non-linear), so we will consider these situations in turn. \subsubsection{Linear, Gaussian Problems} Sampling the solution of a linear Gaussian problem is straightforward: we have an explicit expression for the Gaussian posterior, and the distribution itself can be used as an exact proposal.
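For the linear Gaussian case, using the posterior itself as the proposal amounts to drawing independent posterior samples; a minimal numpy sketch (the mean ${\bf m}_0$ and covariance ${\bf C}$ below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical 2-D Gaussian posterior with mean m0 and covariance C
m0 = np.array([1.0, -2.0])
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])

A = np.linalg.cholesky(C)               # A A^T = C
z = rng.standard_normal((100_000, 2))   # standard Gaussian samples
samples = m0 + z @ A.T                  # m = m0 + A z, row by row

print(samples.mean(axis=0))   # ≈ m0
print(np.cov(samples.T))      # ≈ C
```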
Samples ${\bf z}$ from an $N$-dimensional standard (isotropic) Gaussian (mean $\bf 0$ and covariance ${\bf I}$) can be generated with the Box-Muller method, and the desired samples ${{\bf m}}$ from an $N$-dimensional multivariate Gaussian with mean ${{\bf m}}_0$ and covariance ${\bf C}$ can then be calculated as ${{\bf m}} ={{\bf m}}_0 + {{\bf A}}{{\bf z}}$, where ${{\bf A}}{{\bf A}}^T = {{\bf C}}$. The matrix ${{\bf A}}$ can be found by, for instance, Cholesky decomposition. \subsubsection{Nonlinear Problems} We shall now propose a procedure for efficient sampling of solutions to nonlinear problems with a homogeneous (constant) prior. \medskip \noindent Consider an inverse problem \begin{equation*} {{\bf d}} = {{\bf g}}({{\bf m}}) \end{equation*} with homogeneous prior, observed data ${{\bf d}}_{obs}$, and data covariance matrix ${\bf C}_d$. The problem is characterized by its likelihood function $L({{\bf m}}) = {\cal N}_{{\bf d}_{obs}}({{\bf g}}({{\bf m}}),{{\bf C}}_d)$ and its prior $\rho({\bf m}) = \mu({\bf m})$, and will in the following be expressed through the notation \begin{equation} {\cal P} = \{ {\cal N}_{{\bf d}_{obs}}({{\bf g}}({{\bf m}}),{{\bf C}}_d), \mu({\bf m}) \} \, .
\end{equation} We shall furthermore consider an approximation to ${\cal P}$, \begin{equation} \tilde{{\cal P}} = \{ {\cal N}_{{\bf d}_{obs}}(\tilde{{\bf g}}({{\bf m}}),{{\bf C}}_d), \mu({\bf m}) \} \, . \end{equation} As a measure of the difference between the exact posterior and the approximate posterior in the joint $({\bf d},{\bf m})$-space, we shall use the Kullback-Leibler divergence of the conditional $\sigma({\bf d} |{\bf m})$: \begin{align} D(\sigma({\bf d} |{\bf m}),\tilde{\sigma}({\bf d} |{\bf m})) &= \int_{\cal X} \sigma({{\bf d}}|{{\bf m}}) \log \( \frac{\sigma({{\bf d}}|{{\bf m}})}{\tilde{\sigma}({{\bf d}}|{\bf m})} \) d{{\bf d}} \label{eq: Kull} \\ &= \int_{\cal X} {\cal N}_{{\bf d}}({{\bf g}}({{\bf m}}),{{\bf C}}_d) \log \( \frac{{\cal N}_{{\bf d}}({{\bf g}}({{\bf m}}),{{\bf C}}_d)}{{\cal N}_{{\bf d}}(\tilde{{\bf g}}({{\bf m}}),{{\bf C}}_d)} \) d{{\bf d}} \\ &= \frac{1}{2} \( {{\bf g}}({{\bf m}})-\tilde{{\bf g}}({{\bf m}}) \)^T {{\bf C}}_d^{-1} \( {{\bf g}}({{\bf m}})-\tilde{{\bf g}}({{\bf m}}) \) \\ &= \frac{1}{2} \| {{\bf g}}({{\bf m}})-\tilde{{\bf g}}({{\bf m}}) \|_{{{\bf C}}_d}^2 \, , \end{align} which is half the squared ${{\cal L}}_2$-distance, weighted by ${{\bf C}}_d^{-1}$, between the approximate computed data $\tilde{{\bf g}}({{\bf m}})$ and the exact computed data ${{\bf g}}({{\bf m}})$. This shows that the amount of information missing in $\tilde{{\cal P}}$, compared to ${\cal P}$ (the Kullback-Leibler divergence), can be measured directly from the difference between ${{\bf g}}({{\bf m}})$ and $\tilde{{\bf g}}({{\bf m}})$. \medskip\noindent This suggests the following: \noindent \begin{Algorithm} Use ${\cal N}_{{\bf d}_{obs}}(\tilde{{\bf g}}({{\bf m}}),{{\bf C}}_d)$ as the proposal distribution in Algorithm (\ref{algo: ext-Metro}).
\label{algo: ext-Metro-KBMC} \end{Algorithm} \subsubsection{Example: A 1000-parameter inverse scattering problem} Figure $\dots$ shows acoustic impedances of a 1000-parameter, horizontally stratified Earth model, and the vertical-incidence seismic data generated from the model. The data are 1D full-waveform seismic signals, generated and recorded at the surface. The full-waveform data contain all multiple reflections, transmission losses and damping effects, so the inverse problem of recovering the model from the data is highly nonlinear. A linearized approximate solution to the inverse scattering problem (in the absence of noise) is shown to the right. This solution requires very little computation time, but is clearly inaccurate (compare to left graph). A more accurate solution, with uncertainty estimates, can be obtained by Markov chain Monte Carlo (MCMC) if enough samples can be obtained in reasonable time. Figure $\dots$ (upper curve) shows the lack of convergence to equilibrium of a classical MCMC in the first 2000 iterations of such an inversion process. The lower curve, however, shows much faster convergence of Algorithm (\ref{algo: ext-Metro-KBMC}), whose random walk is guided by the linearized solution shown in Figure $\dots$. Algorithm (\ref{algo: ext-Metro-KBMC}) uses a forward modeling error distribution centered at the linearized solution to choose samples. The improvement in convergence time is dramatic, in this case of the order of $10^4$ when started at the same model as Algorithm (\ref{algo: ext-Metro}).
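A minimal sketch of Algorithm \ref{algo: ext-Metro-KBMC} on a toy two-parameter problem (all quantities below are hypothetical illustrations, not the seismic example): the Gaussian posterior of a linearized forward is used as an independence proposal, and the exact Hastings correction restores sampling of the exact posterior:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical exact forward g and cheap linearized approximation A m
A = np.array([[1.0, 0.3],
              [0.2, 1.0]])
g = lambda m: A @ m + 0.1 * m**3        # mildly nonlinear "exact" forward
m_true = np.array([0.8, -0.5])
Cd_inv = np.eye(2) / 0.05**2            # data covariance C_d = 0.05^2 I
d_obs = g(m_true) + 0.05 * rng.standard_normal(2)

def log_L(m):                           # exact likelihood (up to a constant)
    r = d_obs - g(m)
    return -0.5 * r @ Cd_inv @ r

# Gaussian proposal = posterior of the linearized problem (flat prior)
C_post_inv = A.T @ Cd_inv @ A
C_post = np.linalg.inv(C_post_inv)
m_post = C_post @ A.T @ Cd_inv @ d_obs
L_chol = np.linalg.cholesky(C_post)
q_sample = lambda: m_post + L_chol @ rng.standard_normal(2)
q_logpdf = lambda m: -0.5 * (m - m_post) @ C_post_inv @ (m - m_post)

# independence Metropolis-Hastings guided by the approximate posterior
m, chain = m_post.copy(), []
for _ in range(20000):
    m_new = q_sample()
    log_ratio = (log_L(m_new) + q_logpdf(m)) - (log_L(m) + q_logpdf(m_new))
    if np.log(rng.uniform()) <= log_ratio:
        m = m_new
    chain.append(m)
chain = np.asarray(chain)
print(chain.mean(axis=0))   # close to m_true
```

Because the proposal mimics the posterior globally, accepted moves are large jumps rather than small local steps, which is the source of the speed-up reported above.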
\begin{figure} \centering \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\textwidth]{TrueModel-xx.png} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{dobs.png} \end{subfigure} ~ \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\textwidth]{ApproxModel.png} \end{subfigure} \caption{A 1000-layer, horizontally stratified Earth model (left), the corresponding vertical-incidence, full-waveform seismic data (middle), and a classical, linearized solution to the inverse problem (right).}\label{fig:animals} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=4in]{Exp-1-Iter-10000-TrueErrorEnvelope.png} \caption{Linear Inversion: Modelization Error: True Error Envelope} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=4in]{Exp-1-Iter-10000-EstErrorEnvelope.png} \caption{Linear Inversion: Modelization Error: Est Error Envelope} \end{center} \end{figure} \vfill \begin{figure}[h] \begin{center} \includegraphics[width=4in]{IMC-MCMC-iter-2000-Misfits-x.png} \caption{Convergence towards equilibrium of a classical MCMC algorithm (upper curve), attempting to sample solutions to the inverse problem described in Figure 5. Lower curve is the fast-converging KBMC algorithm guided by the linearized inversion. In this case the convergence of the KBMC algorithm was of the order of 104 times faster than the MCMC algorithm.} \end{center} \end{figure} \vfill \medskip \noindent \subsubsection{Procedure 2: Sampling with General Prior} ({\dots soon to appear\dots) \clearpage \newcommand{\marginlabel}[1] {\mbox{}\marginpar{\raggedleft\hspace{0pt}#1}} \section{Introduction} The following set of features are derived from other packages with slight modifications to make them compatible with the \verb"gji" documentclass. They can be accessed by including the \verb"extra" option to the \verb"gji" document class e.g. 
\begin{verbatim} \documentclass[extra,mreferee]{gji} \end{verbatim} \section{Reserved Space and Boxed Figures } It is possible to reserve space for a figure (or to draw a box of a specified size) using the \verb!\figbox! command. The \verb!\figbox! command takes 3 arguments: the horizontal and vertical sizes of the space that is to be left blank, as well as the contents of the box. For example, \begin{verbatim} \begin{figure} \figbox{6cm}{5cm}{Paste orbits here} \caption{The orbits of some planets} \end{figure} \end{verbatim} \begin{figure*} \figbox{6cm}{5cm}{Paste orbits here} \caption{Illustrating the use of the figbox command for a display of the orbits of some planets} \end{figure*} makes a box 4~cm wide and 6~cm high with the caption text below it. The text `Paste orbits here' is simply a note to the author that is printed centered in the framed box. This third argument could be left empty, or could contain commands for importing a computer plot. There is also a starred version \verb!\figbox*! that behaves exactly the same as \verb!\figbox! except that no frame is drawn around the figure. This is most useful for real figures, whereas the unstarred command is more appropriate for reserved space for glued-in figures. It is possible to have \verb!\figbox! and \verb!\figbox*! scale automatically to the size of its contents, something that is also appropriate when the contents are an imported graphics file. In this case, leave both the dimensions empty. As an example, if the orbit plot exists as an encapsulated PostScript file named \texttt{orbit.eps}, then it could be included directly with the commands \begin{verbatim} \begin{figure} \figbox*{}{} \includegraphics[width=4cm]{orbit.eps}} \caption{The orbits of some planets} \end{figure} \end{verbatim} For this to work, you must have loaded the \texttt{graphicx} package with \verb!\usepackage! at the beginning of the file, and you must have a PostScript driver for the output. 
(There are other packages with different syntaxes for importing graphics; use the one that you are most familiar with.) \section{Alternative texts for one or two columns} Mathematical formulas often have to be fiddled to fit them into the narrow confines of a single column in two-column format, whereas they will fit with no problem in the wide columns of the manuscript mode. This often results in the author having to massage his formulas when he changes between manuscript and camera-ready options, and then back again when he want to print the manuscript once more. The special command \verb!\iftwocol! allows both versions of the text to be included in the one document, for automatic selection depending on whether two-column mode is active or not. Its syntax is\label{iftwocol} \begin{quote} \normalsize \verb!\iftwocol{!{\em yes\/}\verb!}{!{\em no\/}\verb!}! \end{quote} where {\em yes\/} is the text that is inserted if two-columns are in effect, and {\em no\/} the text that is otherwise taken. This command may be used in other situations, but the main application is in the case of mathematics. \section{Literature citations} \textit{Geophysical Journal International} uses the author-year system of literature citation, something that is not supported by standard \LaTeX. The \verb"gji" documentclass provides partial support but more comprehensive features are available using features from the \verb"egs" package and the \verb"natbib" module developed by P.W.Daly. Since there are two ways of making a citation in the author-year system, either as Jones et al.\ (1990) or as (Jones et al., 1990), there are two variations of the original \verb!\cite! command. Suppose the key for the above reference is \texttt{jones90}, then use \begin{flushleft} \verb!\citet{jones90}! for Jones et al.\ (1990)\\ \verb!\citep{jones90}! for (Jones et al., 1990)\\ \verb!\citep[p.~32]{jones90}! for (Jones et al., 1990, p.~32)\\ \verb!\citep[e.g.,][]{jones90}! 
for (e.g., Jones et al., 1990)\\ \verb!\citep[e.g.,][p.~32]{jones90}! for (e.g., Jones et al., 1990, p.~32). \end{flushleft} Note that the use of the optional arguments to add notes within the brackets of the citation: a single note behaves as in standard \LaTeX, as a note \emph{after} the citation; however, with two notes (non-standard), the first goes \emph{before}, the second \emph{after} it. Two other citation commands are available: \begin{flushleft} \verb!\citeauthor{jones90}! prints Jones et al.\\ \verb!\citeyear{jones90}! prints 1990. \end{flushleft} For the above examples to function properly, either the \verb"gji" bibliography style must be used with \btx, or the \texttt{thebibliography} environment must be formatted accordingly. \begin{flushleft} With \btx\\[1ex] \verb!\bibliographystyle{!gji\verb!}!\\ \verb! \section{Introduction} In addition to the standard submission of hardcopy from authors, \textit{Geophysical Journal International} accepts machine-readable forms of papers in \LaTeX. The layout design for \textit{Geophysical Journal International} has been implemented as a \LaTeXe\ class file derived from the MN style file for Monthly Notices of the Royal Astronomical Society. The GJI classfile is based on the \verb"ARTICLE" style as discussed in the \LaTeX\ manual \cite {la}. Commands which differ from the standard \LaTeX\ interface, or which are provided in addition to the standard interface, are explained in this guide. This guide is not a substitute for the \LaTeX\ manual itself. Authors planning to submit their papers in \LaTeX\ are advised to use \verb"gji.cls" as early as possible in the creation of their files. This guide is modified from that produced by Woollatt et al (1994) to describe the features of the MN style. A very accessible guide to the features of \LaTeXe and the differences from the earlier version is provided by Kopka \& Daly \shortcite{kd}. 
This reference provides in chapter 9 a summary of \LaTeX\ error messages and also a full description of standard \LaTeX\ commands in Appendix F. \subsection{The GJI document classes} The use of \LaTeX\ document classes allows a simple change of class (or class option) to transform the appearance of your document. The GJI class file preserves the standard \LaTeX\ interface such that any document which can be produced using the standard \LaTeX\ \verb"ARTICLE" class can also be produced with the GJI class. However, the measure (or width of text) is narrower than the default for \verb"ARTICLE", therefore line breaks will change and long equations may need re-setting. \subsection{General style issues} For general style issues, authors are referred to the `Instructions for Authors' on the inside back cover of \textit{Geophysical Journal International}. Authors who are interested in the details of style are referred to Butcher \shortcite {bu} and The Chicago Manual \shortcite {cm}. The language of the journal is British English and spelling should conform to this. Use should be made of symbolic references (\verb"\ref") in order to protect against late changes of order, etc. \subsection{Submission of \LaTeX\ articles to the journal} Papers should initially be submitted in the usual way to: The Executive Secretary, Royal Astronomical Society, {\em or\/} the EGS Editor, {\em or\/} the DGG editor, {\em or\/} the American Editor, {\em or\/} the Pacific Region Editor, as set out in the Instructions for Authors on the inside back cover of each issue of Geophysical Journal International. Four hard copies should be supplied including figures, normally using the \verb"[referee]" option, for papers with a high mathematical content the \verb"[mreferee]" option is recommended. In each case a separate page of figure captions is preferred. One of the copies should be single-sided, while the other two should be weight-reduced, by being either single-spaced or double-sided. 
Copies of figures should also be supplied. Authors should ensure that their figures are suitable (in terms of lettering size, etc.) for the reductions they intend; they should not attempt to include their figures inside a \TeX\ or \LaTeX\ file by using \verb"\special" or one of the style files for figure handling. Note that articles, or revised versions thereof, may not currently be submitted by electronic mail. However when the article is accepted for publication the \LaTeX\ file can be sent to the publisher by \verb"ftp" together with appropriate forms of figures. Instructions will be provided following acceptance. \section{Using the GJI class file} If the file \verb"gji.cls" is not already in the appropriate system directory for \LaTeX\ files, either arrange for it to be put there, or copy it to your working directory. The class file and related material, such as this guide, can be accessed via the journal web-site at http://www.blackwellpublishing.com/journals/gji under {\em Author Guidelines}. The GJI document class is implemented as a complete document class, {\em not\/} a document class option. In order to use the GJI style, replace \verb"article" by \verb"gji" in the \verb"\documentclass" command at the beginning of your document: \begin{verbatim} \documentclass{article} \end{verbatim} is replaced by \begin{verbatim} \documentclass{gji} \end{verbatim} In general, the following standard document class options should {\em not\/} be used with the GJI style: \begin{enumerate} \item \texttt{10pt}, \texttt{11pt}, \texttt{12pt} -- unavailable; \item \texttt{twoside} (no associated style file) -- \texttt{twoside} is the default; \item \texttt{fleqn}, \texttt{leqno}, \texttt{titlepage} -- should not be used (\verb"fleqn" is already incorporated into the GJI style); \item \texttt{twocolumn} -- is not necessary as it is the default style. 
\end{enumerate} In \LaTeX2e the use of postscript fonts and the inclusion of non-standard options is carried out through the \verb"\usepackage" command, rather than as options as in earlier versions. Thus the Times font can be used for text by including \begin{verbatim} \usepackage{times} \end{verbatim} on the line immediately after the \verb"\documentclass". If necessary, \texttt{ifthen} and \texttt{bezier} can be included as packages. The GJI class file has been designed to operate with the standard version of \verb"lfonts.tex" that is distributed as part of \LaTeX . If you have access to the source file for this guide, \verb"gjilguid2e.tex", attempt to typeset it. If you find font problems you might investigate whether a non-standard version of \verb"lfonts.tex" has been installed in your system. \subsection{Additional document class options}\label{classoptions} The following additional class options are available with the GJI style: \begin{description} \item \texttt{onecolumn} -- to be used \textit{only} when two-column output is unable to accommodate long equations; \item \texttt{landscape} -- for producing wide figures and tables which need to be included in landscape format (i.e.\ sideways) rather than portrait (i.e.\ upright). This option is described below. \item \texttt{doublespacing} -- this will double-space your article by setting \verb"\baselinestretch" to 2. \item \texttt{referee} -- 12/20pt text size, single column, designed for submission of papers. \item \texttt{mreferee} -- 11/17pt text size, single column designed for submission of papers with mathematical content. \item \texttt{camera} -- designed for use with computer modern fonts to produce a closer representation of GJI style for camera ready material. \item \texttt{galley} -- no running heads, no attempt to align the bottom of columns. 
\end{description} \subsection{Landscape pages} If a table or illustration is too wide to fit the standard measure, it must be turned, with its caption, through 90 degrees anticlockwise. Landscape illustrations and/or tables cannot be produced directly using the GJI style file because \TeX\ itself cannot turn the page, and not all device drivers provide such a facility. The following procedure can be used to produce such pages. \begin{enumerate} \item Use the \verb"table*" or \verb"figure*" environments in your document to create the space for your table or figure on the appropriate page of your document. Include an empty caption in this environment to ensure the correct numbering of subsequent tables and figures. For instance, the following code prints a page with the running head, a message half way down and the figure number towards the bottom. If you are including a plate, the running headline is different, and you need to key in the three marked lines with an appropriate headline. \begin{verbatim} \begin{figure*} \vbox to220mm{\vfil Landscape figure to go here. \vfil} \caption{} \label{landfig} \end{figure*} \end{verbatim} \item Create a separate document with the corresponding document style but also with the \verb"landscape" document style option, and include the \verb"\pagestyle" command, as follows: \begin{verbatim} \documentclass[landscape]{gji} \pagestyle{empty} \end{verbatim} \item Include your complete tables and illustrations (or space for these) with captions using the \verb"table*" and \verb"figure*" environments. \item Before each float environment, use the \verb"\setcounter" command to ensure the correct numbering of the caption.
For example, \begin{verbatim} \setcounter{table}{0} \begin{table*} \begin{minipage}{115mm} \caption{Images of global seismic tomography.} \label{tab1} \begin{tabular}{@{}llllcll} : \end{tabular} \end{minipage} \end{table*} \end{verbatim} The corresponding example for a figure would be: \begin{verbatim} \clearpage \setcounter{figure}{12} \begin{figure*} \vspace{144mm} \caption{Travel times for regional model.} \label{fig13} \end{figure*} \end{verbatim} \end{enumerate} \section{Additional facilities} In addition to all the standard \LaTeX\ design elements, the GJI style includes the following features. \begin{enumerate} \item Extended commands for specifying a short version of the title and author(s) for the running headlines; \item A \verb"summary" environment to produce a suitably indented Summary; \item An \verb"abstract" environment which produces the GJI style of Summary; \item A \verb"keywords" environment and a \verb"\nokeywords" command; \item Use of the \verb"description" environment for unnumbered lists; \item A starred version of the \verb"\caption" command to produce captions for continued figures or tables. \end{enumerate} In general, once you have used the additional \verb"gji.cls" facilities in your document, do not process it with a standard \LaTeX\ style file. \subsection{Titles and author's name} In the GJI style, the title of the article and the author's name (or authors' names) are used both at the beginning of the article for the main title and throughout the article as running headlines at the top of every page. The title is used on odd-numbered pages (rectos) and the author's name appears on even-numbered pages (versos). Although the main heading can run to several lines of text, the running headline must be a single line ($\leqslant 45$ characters). Moreover, the main heading can also incorporate new line commands (e.g. \verb"\\") but these are not acceptable in a running headline.
To enable you to specify an alternative short title and an alternative short author's name, the standard \verb"\title" and \verb"\author" commands have been extended to take an optional argument to be used as the running headline. The running headlines for this guide were produced using the following code: \begin{verbatim} \title[Geophys.\ J.\ Int.: \LaTeXe\ Guide for Authors] {Geophysical Journal International: \LaTeXe\ style guide for authors} \end{verbatim} and \begin{verbatim} \author[B.L.N. Kennett] {B.L.N. Kennett$^1$ \thanks{Pacific Region Office, GJI} \\ $^{1}$Research School of Earth Sciences, Australian National University, Canberra ACT \emph{0200}, Australia } \end{verbatim} The \verb"\thanks" note produces a footnote to the title or author. \subsection{Key words and Summary} At the beginning of your article, the title should be generated in the usual way using the \verb"\maketitle" command. Immediately following the title you should include a Summary followed by a list of key words. The summary should be enclosed within a \verb"summary" environment, followed immediately by the key words enclosed in a \verb"keywords" environment. For example, the titles for this guide were produced by the following source: \begin{verbatim} \maketitle \begin{summary} This guide is for authors who are preparing papers for \textit{Geophysical Journal International} using the \LaTeXe\ document preparation system and the GJI style file. \end{summary} \begin{keywords} \LaTeXe\ -- class files: \verb"gji.cls"\ -- sample text -- user guide. \end{keywords} \section{Introduction} : \end{verbatim} The heading `\textbf{Key words}' is included automatically and the key words are followed by vertical space. If, for any reason, there are no key words, you should insert the \verb"\nokeywords" command immediately after the end of the \verb"summary" or \verb"abstract" environment.
This ensures that the vertical space after the abstract and/or title is correct and that any \verb"\thanks" acknowledgments are correctly included at the bottom of the first column. For example, \begin{verbatim} \maketitle \begin{abstract} : \end{abstract} \nokeywords \section{Introduction} : \end{verbatim} Note that the \verb"summary" and \verb"abstract" environments have the same effect for the documentclass \verb"gji.cls". \subsection{Lists} The GJI style provides numbered lists using the \verb"enumerate" environment and unnumbered lists using the \verb"description" environment with an empty label. Bulleted lists are not part of the GJI style and the \verb"itemize" environment should not be used. The enumerated list numbers each list item with roman numerals: \begin{enumerate} \item first item \item second item \item third item \end{enumerate} Alternative numbering styles can be achieved by inserting a redefinition of the number labelling command after the \verb"\begin{enumerate}". For example, the list \begin{enumerate} \renewcommand{\theenumi}{(\arabic{enumi})} \item first item \item second item \item etc\ldots \end{enumerate} was produced by: \begin{verbatim} \begin{enumerate} \renewcommand{\theenumi}{(\arabic{enumi})} \item first item : \end{enumerate} \end{verbatim} Unnumbered lists are provided using the \verb"description" environment. For example, \begin{description} \item First unnumbered item which has no label and is indented from the left margin. \item Second unnumbered item. \item Third unnumbered item. \end{description} was produced by: \begin{verbatim} \begin{description} \item First unnumbered item... \item Second unnumbered item. \item Third unnumbered item. \end{description} \end{verbatim} \subsection{Captions for continued figures and tables} The \verb"\caption*" command may be used to produce a caption with the same number as the previous caption (for the corresponding type of float).
For instance, if a very large table does not fit on one page, it must be split into two floats; the second float should use the \verb"caption*" command with a suitable caption: \begin{verbatim} \begin{table} \caption*{-- \textit{continued}} \begin{tabular}{@{}lccll} : \end{tabular} \end{table} \end{verbatim} \begin{figure} \vspace{5.5cm} \caption{An example figure in which space has been left for the artwork.} \label{sample-figure} \end{figure} \section[]{Some guidelines for using\\* standard facilities}\label{headings} The following notes may help you achieve the best effects with the GJI style file. \subsection{Sections} \LaTeX\ provides five levels of section headings and they are all defined in the GJI style file: \begin{description} \item \verb"\section" \item \verb"\subsection" \item \verb"\subsubsection" \item \verb"\paragraph" \item \verb"\subparagraph" \end{description} Section numbers are given for section, subsection, subsubsection and paragraph headings. Section headings are automatically converted to upper case; if you need any other style, see the example in section~\ref{headings}. If you find your section/subsection (etc.)\ headings are wrapping round, you must use the \verb"\\*" to end individual lines and include the optional argument \verb"[]" in the section command. This ensures that the turnover is flushleft. \subsection{Illustrations (or figures)} \begin{figure*} \vspace{5.5cm} \caption{An example figure spanning two-columns in which space has been left for the artwork.} \label{twocol-figure} \end{figure*} The GJI style will cope with positioning of your illustrations and you should not use the positional qualifiers on the \verb"figure" environment which would override these decisions. See `Instructions for Authors' in {\em Geophysical Journal International\/} for submission of artwork. Figure captions should be below the figure itself; therefore the \verb"\caption" command should appear after the figure or space left for an illustration.
For example, Fig.~\ref{sample-figure} is produced using the following commands: \begin{verbatim} \begin{figure} \vspace{5.5cm} \caption{An example figure in which space has been left for the artwork.} \label{sample-figure} \end{figure} \end{verbatim} Where a figure needs to span two columns the \verb"figure*" environment should be used as in Fig.~\ref{twocol-figure} using the following commands: \begin{verbatim} \begin{figure*} \vspace{5.5cm} \caption{An example figure spanning two-columns in which space has been left for the artwork.} \label{twocol-figure} \end{figure*} \end{verbatim} \subsection{Tables} The GJI style will cope with positioning of your tables and you should not use the positional qualifiers on the \verb"table" environment which would override these decisions. Table captions should be at the top; therefore the \verb"\caption" command should appear before the body of the table. The \verb"tabular" environment can be used to produce tables with single horizontal rules, which are allowed, if desired, at the head and foot only. This environment has been modified for the GJI style in the following ways: \begin{enumerate} \item additional vertical space is inserted on either side of a rule; \item vertical lines are not produced. \end{enumerate} Commands to redefine quantities such as \verb"\arraystretch" should be omitted. For example, Table~\ref{symbols} is produced using the following commands. \begin{table} \caption{Seismic velocities at major discontinuities.} \label{symbols} \begin{tabular}{@{}lcccccc} Class & depth & radius & $\alpha _{-}$ & $\alpha _{+}$ & $\beta _{-}$ & $\beta _{+}$ \\ ICB & 5154 & 1217 & 11.091 & 10.258 & 3.438 & 0. \\ CMB & 2889 & 3482 & 8.009 & 13.691 & 0. & 7.301 \\ \end{tabular} \medskip The ICB represents the boundary between the inner and outer cores and the CMB the boundary between the core and the mantle.
Velocities with subscript $-$ are evaluated just below the discontinuity and those with subscript $+$ are evaluated just above the discontinuity. \end{table} \begin{verbatim} \begin{table} \caption{Seismic velocities at major discontinuities.} \label{symbols} \begin{tabular}{@{}lcccccc} Class & depth & radius & $\alpha _{-}$ & $\alpha _{+}$ & $\beta _{-}$ & $\beta _{+}$ \\ ICB & 5154 & 1217 & 11.091 & 10.258 & 3.438 & 0. \\ CMB & 2889 & 3482 & 8.009 & 13.691 & 0. & 7.301 \\ \end{tabular} \medskip The ICB represents the boundary ... ... evaluated just above the discontinuity. \end{table} \end{verbatim} If you have a table that is to extend over two columns, you need to use \verb"table*" in a minipage environment, i.e., you can say \begin{verbatim} \begin{table*} \begin{minipage}{80mm} \caption{Caption which will wrap round to the width of the minipage environment.} \begin{tabular} : \end{tabular} \end{minipage} \end{table*} \end{verbatim} The width of the minipage should more or less be the width of your table, so you can only guess at a value on the first pass. The value will have to be adjusted when your article is finally typeset, so don't worry about making it the exact size. \subsection{Running headlines} As described above, the title of the article and the author's name (or authors' names) are used as running headlines at the top of every page. The headline on right pages can list up to three names; for more than three use et~al. The \verb"\pagestyle" and \verb"\thispagestyle" commands should {\em not\/} be used. Similarly, the commands \verb"\markright" and \verb"\markboth" should not be necessary. \subsection{Typesetting mathematics} \subsubsection{Displayed mathematics} The GJI style will set displayed mathematics flush with the left margin, provided that you use the \LaTeX\ standard of open and closed square brackets as delimiters.
The equation \[ \sum_{i=1}^p \lambda_i = {\mathrm{trace}}(\mathbf{S}) \] was typeset in the GJI style using the commands \begin{verbatim} \[ \sum_{i=1}^p \lambda_i = {\mathrm{trace}}(\mathbf{S}) \] \end{verbatim} This correct positioning should be compared with that for the following centred equation, $$ \alpha_{j+1} > \bar{\alpha}+ks_{\alpha} $$ which was (wrongly) typeset using double dollars as follows: \begin{verbatim} $$ \alpha_{j+1} > \bar{\alpha}+ks_{\alpha} $$ \end{verbatim} Note that \verb"\mathrm" will produce a roman character in math mode. For numbered equations use the \verb"equation" and \verb"eqnarray" environments which will give the correct positioning. If equation numbering by section is required the command \verb"\eqsecnum" should appear after \verb"\begin{document}" at the head of the file. \subsubsection{Bold math italic}\label{boldmathitalic} The class file provides a font \verb"\mitbf" defined as: \begin{verbatim} \newcommand{\mitbf}[1]{ \hbox{\mathversion{bold}$#1$}} \end{verbatim} which can be used as follows: to typeset the equation \begin{equation} d(\mitbf{{s_{t_u}}}) = \langle [RM(\mitbf{{x_y}} + \mitbf{{s_t}}) - RM(\mitbf{{x_y}})]^2 \rangle \end{equation} the input should be \begin{verbatim} \begin{equation} d(\mitbf{{s_{t_u}}}) = \langle [RM(\mitbf{{x_y}} + \mitbf{{s_t}}) - RM(\mitbf{{x_y}})]^2 \rangle \end{equation} \end{verbatim} If you are using version 1 of the New Font Selection Scheme, you may have some messages in your log file that read something like ``Warning: Font/shape `cmm/b/it' in size~\hbox{$< \!\! 9 \!\! >$} not available on input line 649. Warning: Using external font `cmmi9' instead on input line 649.'' If you have such messages, your system will have substituted math italic characters where you wanted bold math italic ones: you are advised to upgrade to version 2. \subsubsection{Bold Greek}\label{boldgreek} To get bold Greek you use the same method as for bold math italic.
Thus you can input \begin{verbatim} \[ \mitbf{{\alpha_{\mu}}} = \mitbf{\Theta} \alpha. \] \end{verbatim} to typeset the equation \[ \mitbf{{\alpha_{\mu}}} = \mitbf{\Theta} \alpha . \] \subsection{Points to note in formatting text}\label{formtext} A number of text characters require special attention so that \LaTeX\ can properly format a file. The following characters must be preceded by a backslash or \LaTeX\ will interpret them as commands: \begin{quote} ~~~~~~~~~\$~~~\&~~~\%~~~\#~~~\_~~~\{~~~and~~~\} \end{quote} must be typed \begin{center} \begin{quote} ~~~~~~\verb"\$"~~~\verb"\&"~~~\verb"\%"~~~\verb"\#" ~~~\verb"\_"~~~\verb"\{"~~~and~~~\verb"\}". \end{quote} \end{center} \LaTeX\ interprets all double quotes as closing quotes. Therefore quotation marks must be typed as pairs of opening and closing single quotes, for example, \texttt{ ``quoted text.''} Note that \LaTeX\ will not recognize greater than or less than symbols unless they are typed within math commands (\verb"$>$" or \verb"$<$"). \subsubsection{Special symbols} The macros for the special symbols in Tables~\ref{mathmode} and~\ref{anymode} have been taken from the Springer Verlag `Astronomy and Astrophysics' design, with their permission. They are directly compatible and use the same macro names. These symbols will work in all text sizes, but are only guaranteed to work in text and displaystyles. Some of the symbols will not get any smaller when they are used in sub- or superscripts, and will therefore be displayed at the wrong size. Don't worry about this as the typesetter will be able to sort this out. 
\begin{table*} \begin{minipage}{106mm} \caption{Special symbols which can only be used in math mode.} \label{mathmode} \begin{tabular}{@{}llllll} Input & Explanation & Output & Input & Explanation & Output\\ \hline \verb"\la" & less or approx & $\la$ & \verb"\ga" & greater or approx & $\ga$\\[2pt] \verb"\getsto" & gets over to & $\getsto$ & \verb"\cor" & corresponds to & $\cor$\\[2pt] \verb"\lid" & less or equal & $\lid$ & \verb"\gid" & greater or equal & $\gid$\\[2pt] \verb"\sol" & similar over less & $\sol$ & \verb"\sog" & similar over greater & $\sog$\\[2pt] \verb"\lse" & less over simeq & $\lse$ & \verb"\gse" & greater over simeq & $\gse$\\[2pt] \verb"\grole" & greater over less & $\grole$ & \verb"\leogr" & less over greater & $\leogr$\\[2pt] \verb"\loa" & less over approx & $\loa$ & \verb"\goa" & greater over approx & $\goa$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}{115mm} \caption{Special symbols which don't have to be used in math mode.} \label{anymode} \begin{tabular}{@{}llllll} Input & Explanation & Output & Input & Explanation & Output\\ \hline \verb"\sun" & sun symbol & $\sun$ & \verb"\earth" & earth symbol & $\earth$ \\[2pt] \verb"\degr" & degree &$\degr$ & \verb"\micron" & \micron & \micron \\[2pt] \verb"\diameter" & diameter & \diameter & \verb"\sq" & square & \squareforqed\\[2pt] \verb"\fd" & fraction of day & \fd & \verb"\fh" & fraction of hour & \fh\\[2pt] \verb"\fm" & fraction of minute & \fm & \verb"\fs" & fraction of second & \fs\\[2pt] \verb"\fdg" & fraction of degree & \fdg & \verb"\fp" & fraction of period & \fp\\[2pt] \verb"\farcs" & fraction of arcsecond & \farcs & \verb"\farcm" & fraction of arcmin & \farcm\\[2pt] \verb"\arcsec" & arcsecond & \arcsec & \verb"\arcmin" & arcminute & \arcmin\\ \hline \end{tabular} \end{minipage} \end{table*} The command \verb"\chemical" is provided to set chemical species with an even level for subscripts (not produced in standard mathematics mode). 
Thus \verb"\chemical{Fe_{2}^{2+}Cr_{2}O_{4}}" will produce \chemical{Fe_{2}^{2+}Cr_{2}O_{4}}. \subsection{Bibliography} Two methods are provided for managing citations and references. The first approach uses the standard \LaTeX\ \verb"thebibliography" environment. \section{Introduction: file preparation and submission} The \verb"iopart" \LaTeXe\ article class file is provided to help authors prepare articles for submission to IOP Publishing journals. This document gives advice on preparing your submission, and specific instructions on how to use \verb"iopart.cls" to follow this advice. You do not have to use \verb"iopart.cls"; articles prepared using any other common class and style files can also be submitted. It is not necessary to mimic the appearance of a published article. The advice on \LaTeX\ file preparation in this document applies to the journals listed in table~\ref{jlab1}. If your journal is not listed please go to the journal website via \verb"http://iopscience.iop.org/journals" for specific submission instructions. \begin{table} \caption{\label{jlab1}Journals to which this document applies, and macros for the abbreviated journal names in {\tt iopart.cls}. Macros for other journal titles are listed in appendix\,A.} \footnotesize \begin{tabular}{@{}llll} \br Short form of journal title&Macro name&Short form of journal title&Macro name\\ \mr 2D Mater.&\verb"\TDM"&Mater. Res. Express&\verb"\MRE"\\ Biofabrication&\verb"\BF"&Meas. Sci. Technol.$^c$&\verb"\MST"\\ Bioinspir. Biomim.&\verb"\BB"&Methods Appl. Fluoresc.&\verb"\MAF"\\ Biomed. Mater.&\verb"\BMM"&Modelling Simul. Mater. Sci. Eng.&\verb"\MSMSE"\\ Class. Quantum Grav.&\verb"\CQG"&Nucl. Fusion&\verb"\NF"\\ Comput. Sci. Disc.&\verb"\CSD"&New J. Phys.&\verb"\NJP"\\ Environ. Res. Lett.&\verb"\ERL"&Nonlinearity$^{a,b}$&\verb"\NL"\\ Eur. J. Phys.&\verb"\EJP"&Nanotechnology&\verb"\NT"\\ Inverse Problems&\verb"\IP"&Phys. Biol.$^c$&\verb"\PB"\\ J. Breath Res.&\verb"\JBR"&Phys. Educ.$^a$&\verb"\PED"\\ J. Geophys. Eng.$^d$&\verb"\JGE"&Physiol.
Meas.$^{c,d,e}$&\verb"\PM"\\ J. Micromech. Microeng.&\verb"\JMM"&Phys. Med. Biol.$^{c,d,e}$&\verb"\PMB"\\ J. Neural Eng.$^c$&\verb"\JNE"&Plasma Phys. Control. Fusion&\verb"\PPCF"\\ J. Opt.&\verb"\JOPT"&Phys. Scr.&\verb"\PS"\\ J. Phys. A: Math. Theor.&\verb"\jpa"&Plasma Sources Sci. Technol.&\verb"\PSST"\\ J. Phys. B: At. Mol. Opt. Phys.&\verb"\jpb"&Rep. Prog. Phys.$^{e}$&\verb"\RPP"\\ J. Phys: Condens. Matter&\verb"\JPCM"&Semicond. Sci. Technol.&\verb"\SST"\\ J. Phys. D: Appl. Phys.&\verb"\JPD"&Smart Mater. Struct.&\verb"\SMS"\\ J. Phys. G: Nucl. Part. Phys.&\verb"\jpg"&Supercond. Sci. Technol.&\verb"\SUST"\\ J. Radiol. Prot.$^a$&\verb"\JRP"&Surf. Topogr.: Metrol. Prop.&\verb"\STMP"\\ Metrologia&\verb"\MET"&Transl. Mater. Res.&\verb"\TMR"\\ \br \end{tabular}\\ $^{a}$UK spelling is required; $^{b}$MSC classification numbers are required; $^{c}$titles of articles are required in journal references; $^{d}$Harvard-style references must be used (see section \ref{except}); $^{e}$final page numbers of articles are required in journal references. \end{table} \normalsize Any special submission requirements for the journals are indicated with footnotes in table~\ref{jlab1}. Journals which require references in a particular format will need special care if you are using BibTeX, and you might need to use a \verb".bst" file that gives slightly non-standard output in order to supply any extra information required. It is not necessary to give references in the exact style of references used in published articles, as long as all of the required information is present. Also note that there is an incompatibility between \verb"amsmath.sty" and \verb"iopart.cls" which cannot be completely worked around. If your article relies on commands in \verb"amsmath.sty" that are not available in \verb"iopart.cls", you may wish to consider using a different class file. 
Whatever journal you are submitting to, please look at recent published articles (preferably articles in your subject area) to familiarize yourself with the features of the journal. We do not demand that your \LaTeX\ file closely resembles a published article---a generic `preprint' appearance of the sort commonly seen on \verb"arXiv.org" is fine---but your submission should be presented in a way that makes it easy for the referees to form an opinion of whether it is suitable for the journal. The generic advice in this document---on what to include in an abstract, how best to present complicated mathematical expressions, and so on---applies whatever class file you are using. \subsection{What you will need to supply} Submissions to our journals are handled via the ScholarOne web-based submission system. When you submit a new article to us you need only submit a PDF of your article. When you submit a revised version, we ask you to submit the source files as well. Upon acceptance for publication we will use the source files to produce a proof of your article in the journal style. \subsubsection{Text.}When you send us the source files for a revised version of your submission, you should send us the \LaTeX\ source code of your paper with all figures read in by the source code (see section \ref{figinc}). Articles can be prepared using almost any version of \TeX\ or \LaTeX{}, not just \LaTeX\ with the class file \verb"iopart.cls". You may split your \LaTeX\ file into several parts, but please show which is the `master' \LaTeX\ file that reads in all of the other ones by naming it appropriately. The `master' \LaTeX\ file must read in all other \LaTeX\ and figure files from the current directory. {\it Do not read in files from a different directory, e.g. \verb"\includegraphics{/figures/figure1.eps}" or \verb"\include{../usr/home/smith/myfiles/macros.tex}"---we store submitted files all together in a single directory with no subdirectories}. 
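For instance (the file names here are purely illustrative), a suitably named `master' file that reads the other parts from the current directory might look like:
\begin{verbatim}
% paper_master.tex -- run LaTeX on this file
\documentclass[12pt]{iopart}
\begin{document}
\input{section1}
\input{section2}
\input{bibliography}
\end{document}
\end{verbatim}
Each \verb"\input" argument is a bare file name, so all of the parts are read from the current directory as required above.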
\begin{itemize} \item {\bf Using \LaTeX\ packages.} Most \LaTeXe\ packages can be used if they are available in common distributions of \LaTeXe; however, if it is essential to use a non-standard package then any extra files needed to process the article must also be supplied. Try to avoid using any packages that manipulate or change the standard \LaTeX\ fonts: published articles use fonts in the Times family, but we prefer that you use \LaTeX\ default Computer Modern fonts in your submission. The use of \LaTeX\ 2.09, and of plain \TeX\ and variants such as AMSTeX is acceptable, but a complete PDF of your submission should be supplied in these cases. \end{itemize} \subsubsection{Figures.} Figures should ideally be included in an article as encapsulated PostScript files (see section \ref{figinc}) or created using standard \LaTeX\ drawing commands. Please name all figure files using the guidelines in section \ref{fname}. We accept submissions that use pdf\TeX\ to include PDF or bitmap figures, but please ensure that you send us a PDF that uses PDF version 1.4 or lower (to avoid problems in the ScholarOne system). You can do this by putting \verb"\pdfminorversion=4" at the very start of your TeX file. \label{fig1}All figures should be included within the body of the text at an appropriate point or grouped together with their captions at the end of the article. A standard graphics inclusion package such as \verb"graphicx" should be used for figure inclusion, and the package should be declared in the usual way, for example with \verb"\usepackage{graphicx}", after the \verb"\documentclass" command. Authors should avoid using special effects generated by including verbatim PostScript code in the submitted \LaTeX\ file. Wherever possible, please try to use standard \LaTeX\ tools and packages. \subsubsection{References.\label{bibby}} You can produce your bibliography in the standard \LaTeX\ way using the \verb"\bibitem" command. 
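As a sketch of this standard approach (the authors, volume and page numbers are invented for the example), a short reference list might be keyed as:
\begin{verbatim}
\begin{thebibliography}{9}
\bibitem{smith15} Smith J and Jones R 2015
 \CQG\ {\bf 32} 065007
\bibitem{ken98} Kennett B L N 1998
 {\it Geophys. J. Int.} {\bf 133} 341
\end{thebibliography}
\end{verbatim}
Citations are then made in the text with \verb"\cite{smith15}"; the macro \verb"\CQG" is one of the abbreviated journal-title macros listed in table~\ref{jlab1}.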
Alternatively you can use BibTeX: our preferred \verb".bst" styles are: \begin{itemize} \item For the numerical (Vancouver) reference style we recommend that authors use \verb"unsrt.bst"; this does not quite follow the style of published articles in our journals but this is not a problem. Alternatively \verb"iopart-num.bst" created by Mark A Caprio produces a reference style that closely matches that in published articles. The file is available from \verb"http://ctan.org/tex-archive/biblio/bibtex/contrib/iopart-num/" . \item For alphabetical (Harvard) style references we recommend that authors use the \verb"harvard.sty" in conjunction with the \verb"jphysicsB.bst" BibTeX style file. These, and accompanying documentation, can be downloaded from \penalty-10000 \verb"http://www.ctan.org/tex-archive/macros/latex/contrib/harvard/". Note that the \verb"jphysicsB.bst" bibliography style does not include article titles in references to journal articles. To include the titles of journal articles you can use the style \verb"dcu.bst" which is included in the \verb"harvard.sty" package. The output differs a little from the final journal reference style, but all of the necessary information is present and the reference list will be formatted into journal house style as part of the production process if your article is accepted for publication. \end{itemize} \noindent Please make sure that you include your \verb".bib" bibliographic database file(s) and any \verb".bst" style file(s) you have used. \subsection{\label{copyright}Copyrighted material and ethical policy} If you wish to make use of previously published material for which you do not own the copyright then you must seek permission from the copyright holder, usually both the author and the publisher. It is your responsibility to obtain copyright permissions and this should be done prior to submitting your article. 
If you have obtained permission, please provide full details of the permission granted---for example, copies of the text of any e-mails or a copy of any letters you may have received. Figure captions must include an acknowledgment of the original source of the material even when permission to reuse has been obtained. Please read our ethical policy before writing your article. \subsection{Naming your files} \subsubsection{General.} Please name all your files, both figures and text, as follows: \begin{itemize} \item Use only characters from the set a to z, A to Z, 0 to 9 and underscore (\_). \item Do not use spaces or punctuation characters in file names. \item Do not use any accented characters such as \'a, \^e, \~n, \"o. \item Include an extension to indicate the file type (e.g., \verb".tex", \verb".eps", \verb".txt", etc). \item Use consistent upper and lower case in filenames and in your \LaTeX\ file. If your \LaTeX\ file contains the line \verb"\includegraphics{fig1.eps}" the figure file must be called \verb"fig1.eps" and not \verb"Fig1.eps" or \verb"fig1.EPS". If you are on a Unix system, please ensure that there are no pairs of figures whose names differ only in capitalization, such as \verb"fig_2a.eps" and \verb"fig_2A.eps", as Windows systems will be unable to keep the two files in the same directory. \end{itemize} When you submit your article files, they are manipulated and copied many times across multiple databases and file systems. Including non-standard characters in your filenames will cause problems when processing your article. \subsubsection{\label{fname}Naming your figure files.} In addition to the above points, please give each figure file a name which indicates the number of the figure it contains; for example, \verb"figure1.eps", \verb"figure2a.eps", etc. If the figure file contains a figure with multiple parts, for example figure 2(a) to 2(e), give it a name such as \verb"figure2a_2e.eps", and so forth. 
\subsection{How to send your files} Please send your submission via the ScholarOne submission system. Go to the journal home page, and use the `Submit an article' link on the right-hand side. \section{Preparing your article} \subsection{Sample coding for the start of an article} \label{startsample} The code for the start of a title page of a typical paper in the \verb"iopart.cls" style might read: \small\begin{verbatim} \documentclass[12pt]{iopart} \begin{document} \title[The anomalous magnetic moment of the neutrino]{The anomalous magnetic moment of the neutrino and its relation to the solar neutrino problem} \author{P J Smith$^1$, T M Collins$^2$, R J Jones$^3$\footnote{Present address: Department of Physics, University of Bristol, Tyndalls Park Road, Bristol BS8 1TS, UK.} and Janet Williams$^3$} \address{$^1$ Mathematics Faculty, Open University, Milton Keynes MK7~6AA, UK} \address{$^2$ Department of Mathematics, Imperial College, Prince Consort Road, London SW7~2BZ, UK} \address{$^3$ Department of Computer Science, University College London, Gower Street, London WC1E~6BT, UK} \ead{williams@ucl.ac.uk} \begin{abstract} ... \end{abstract} \keywords{magnetic moment, solar neutrinos, astrophysics} \submitto{\jpg} \maketitle \end{verbatim} \normalsize At the start of the \LaTeX\ source code please include commented material to identify the journal, author, and (if you are sending a revised version or a resubmission) the reference number that the journal has given to the submission. The first non-commented line should be \verb"\documentclass[12pt]{iopart}" to load the preprint class file. The normal text will be in the Computer Modern 12pt font. It is possible to specify 10pt font size by passing the option \verb"[10pt]" to the class file. Although it is possible to choose a font other than Computer Modern by loading external packages, this is not recommended. The article text begins after \verb"\begin{document}". 
Authors of very long articles may find it convenient to separate their article into a series of \LaTeX\ files each containing one section, and each of which is called in turn by the primary file. The files for each section should be read in from the current directory; please name the primary file clearly so that we know to run \LaTeX\ on this file. Authors may use any common \LaTeX\ \verb".sty" files. Authors may also define their own macros and definitions either in the main article \LaTeX\ file or in a separate \verb".tex" or \verb".sty" file that is read in by the main file, provided they do not overwrite existing definitions. It is helpful to the production staff if complicated author-defined macros are explained in a \LaTeX\ comment. The article class \verb"iopart.cls" can be used with other package files such as those loading the AMS extension fonts \verb"msam" and \verb"msbm", which provide the blackboard bold alphabet and various extra maths symbols as well as symbols useful in figure captions. An extra style file \verb"iopams.sty" is provided to load these packages and provide extra definitions for bold Greek letters. \subsection{\label{dblcol}Double-column layout} The \verb"iopart.cls" class file produces single-column output by default, but a two-column layout can be obtained by using \verb"\documentclass[10pt]" at the start of the file and \verb"\ioptwocol" after the \verb"\maketitle" command. Two-column output will begin on a new page (unlike in published double-column articles, where the two-column material starts on the same page as the abstract). In general we prefer to receive submissions in single-column format even for journals published in double-column style; however, the \verb"\ioptwocol" option may be useful to test figure sizes and equation breaks for these journals. When setting material in two columns you can use the asterisked versions of \LaTeX\ commands such as \verb"\begin{figure*} ... 
\end{figure*}" to set figures and tables across two columns. If you have any problems or any queries about producing two-column output, please contact us at \verb"submissions@iop.org". \section{The title and abstract page} If you use \verb"iopart.cls", the code for setting the title page information is slightly different from the normal default in \LaTeX. If you are using a different class file, you do not need to mimic the appearance of an \verb"iopart.cls" title page, but please ensure that all of the necessary information is present. \subsection{Titles and article types} The title is set using the command \verb"\title{#1}", where \verb"#1" is the title of the article. The first letter of the title should be capitalized with the rest in lower case. The title appears in bold case, but mathematical expressions within the title may be left in light-face type. If the title is too long to use as a running head at the top of each page (apart from the first) a short form can be provided as an optional argument (in square brackets) before the full title, i.e.\ \verb"\title[Short title]{Full title}". For article types other than papers, \verb"iopart.cls" has a generic heading \verb"\article[Short title]{TYPE}{Full title}" and some specific definitions given in table~\ref{arttype}. In each case (apart from Letters to the Editor and Fast Track Communications) an optional argument can be used immediately after the control sequence name to specify the short title; where no short title is given, the full title will be used as the running head. Not every article type has its own macro---use \verb"\article" for any not listed. A full list of the types of articles published by a journal is given in the submission information available via the journal home page. The generic heading could be used for articles such as those presented at a conference or workshop, e.g. 
\small\begin{verbatim} \article[Short title]{Workshop on High-Energy Physics}{Title} \end{verbatim}\normalsize Footnotes to titles may be given by using \verb"\footnote{Text of footnote.}" immediately after the title. Acknowledgment of funding should be included in the acknowledgments section rather than in a footnote. \begin{table} \caption{\label{arttype}Types of article defined in the {\tt iopart.cls} class file.} \footnotesize\rm \begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus12pt}}l}} \br Command& Article type\\ \mr \verb"\title{#1}"&Paper (no surtitle on first page)\\ \verb"\ftc{#1}"&Fast Track Communication\\ \verb"\review{#1}"&Review\\ \verb"\topical{#1}"&Topical Review\\ \verb"\comment{#1}"&Comment\\ \verb"\note{#1}"&Note\\ \verb"\paper{#1}"&Paper (no surtitle on first page)\\ \verb"\prelim{#1}"&Preliminary Communication\\ \verb"\rapid{#1}"&Rapid Communication\\ \verb"\letter{#1}"&Letter to the Editor\\ \verb"\article{#1}{#2}"&Other articles\\\ & (use this for any other type of article; surtitle is whatever is entered as {\tt \#1})\\ \br \end{tabular*} \end{table} \subsection{Authors' names and addresses} For the authors' names type \verb"\author{#1}", where \verb"#1" is the list of all authors' names. Western-style names should be written as initials then family name, with a comma after all but the last two names, which are separated by `and'. Initials should {\it not} be followed by full stops. First (given) names may be used if desired. Names in Chinese, Japanese and Korean styles should be written as you want them to appear in the published article. Authors in all IOP Publishing journals have the option to include their names in Chinese, Japanese or Korean characters in addition to the English name: see appendix B for details. If the authors are at different addresses a superscripted number, e.g. $^1$, \verb"$^1$", should be used after each name to reference the author to his/her address. 
If an author has additional information to appear as a footnote, such as a permanent address, a normal \LaTeX\ footnote command should be given after the family name and address marker with this extra information. The authors' affiliations follow the list of authors. Each address is set by using \verb"\address{#1}" with the address as the single parameter in braces. If there is more than one address then the appropriate superscripted number, followed by a space, should come at the start of the address. E-mail addresses are added by inserting the command \verb"\ead{#1}" after the postal address(es) where \verb"#1" is the e-mail address. See section~\ref{startsample} for sample coding. For more than one e-mail address, please use the command \verb"\eads{\mailto{#1}, \mailto{#2}}" with \verb"\mailto" surrounding each e-mail address. Please ensure that, at the very least, you state the e-mail address of the corresponding author. \subsection{The abstract} The abstract follows the addresses and should give readers concise information about the content of the article and indicate the main results obtained and conclusions drawn. It should be self-contained---there should be no references to figures, tables, equations, bibliographic references etc. It should be enclosed between \verb"\begin{abstract}" and \verb"\end{abstract}" commands. The abstract should normally be restricted to a single paragraph of around 200 words. \subsection{Subject classification numbers} We no longer ask authors to supply Physics and Astronomy Classification System (PACS) classification numbers. For submissions to {\it Nonlinearity}\/ we ask that you should supply Mathematics Subject Classification (MSC) codes. MSC numbers are included after the abstract using \verb"\ams{#1}". The command \verb"\submitto{#1}" can be inserted, where \verb"#1" is the journal name written in full or the appropriate control sequence as given in table~\ref{jlab1}. 
This command is not essential to the running of the file and can be omitted. \subsection{Keywords} Keywords are required for all submissions. Authors should supply a minimum of three (maximum seven) keywords appropriate to their article as a new paragraph starting \verb"\noindent{\it Keywords\/}:" after the end of the abstract. \subsection{Making a separate title page} To keep the header material on a separate page from the body of the text insert \verb"\maketitle" (or \verb"\newpage") before the start of the text. If \verb"\maketitle" is not included the text of the article will start immediately after the abstract. \section{The text} \subsection{Sections, subsections and subsubsections} The text of articles may be divided into sections, subsections and, where necessary, subsubsections. To start a new section, end the previous paragraph and then include \verb"\section" followed by the section heading within braces. Numbering of sections is done {\it automatically} in the headings: sections will be numbered 1, 2, 3, etc, subsections will be numbered 2.1, 2.2, 3.1, etc, and subsubsections will be numbered 2.3.1, 2.3.2, etc. Cross references to other sections in the text should, where possible, be made using labels (see section~\ref{xrefs}) but can also be made manually. See section~\ref{eqnum} for information on the numbering of displayed equations. Subsections and subsubsections are similar to sections but the commands are \verb"\subsection" and \verb"\subsubsection" respectively. Sections have a bold heading, subsections an italic heading and subsubsections an italic heading with the text following on directly. \small\begin{verbatim} \section{This is the section title} \subsection{This is the subsection title} \end{verbatim}\normalsize The first section is normally an introduction, which should state clearly the object of the work, its scope and the main advances reported, with brief references to relevant results by other workers. 
In long papers it is helpful to indicate the way in which the paper is arranged and the results presented. Footnotes should be avoided whenever possible and can often be included in the text as phrases or sentences in parentheses. If required, they should be used only for brief notes that do not fit conveniently into the text. The use of displayed mathematics in footnotes should be avoided wherever possible and no equations within a footnote should be numbered. The standard \LaTeX\ macro \verb"\footnote" should be used. Note that in \verb"iopart.cls" the \verb"\footnote" command produces footnotes indexed by a variety of different symbols, whereas in published articles we use numbered footnotes. This is not a problem: we will convert symbol-indexed footnotes to numbered ones during the production process. \subsection{Acknowledgments} Authors wishing to acknowledge assistance or encouragement from colleagues, special work by technical staff or financial support from organizations should do so in an unnumbered `Acknowledgments' section immediately following the last numbered section of the paper. In \verb"iopart.cls" the command \verb"\ack" sets the acknowledgments heading as an unnumbered section. Please ensure that you include all of the sources of funding and the funding contract reference numbers that you are contractually obliged to acknowledge. We often receive requests to add such information very late in the production process, or even after the article is published, and we cannot always do this. Please collect all of the necessary information from your co-authors and sponsors as early as possible. \subsection{Appendices} Technical detail that it is necessary to include, but that interrupts the flow of the article, may be consigned to an appendix. Any appendices should be included at the end of the main text of the paper, after the acknowledgments section (if any) but before the reference list. 
If there are two or more appendices they should be called Appendix A, Appendix B, etc. Numbered equations will be in the form (A.1), (A.2), etc, figures will appear as figure A1, figure B1, etc and tables as table A1, table B1, etc. The command \verb"\appendix" is used to signify the start of the appendices. \section{Abstract} Any search or sampling algorithm for the solution of inverse problems needs guidance to be efficient. Many algorithms collect and apply information about the problem on the fly, and much improvement has been made in this way. However, as a consequence of the No-Free-Lunch Theorem, the only way we can ensure a significantly better performance of search and sampling algorithms is to build in as much information about the problem as possible. In the special case of Markov Chain Monte Carlo sampling (MCMC) we review how this is done through the choice of proposal distribution, and we show how this way of adding more information about the problem can be made particularly efficient when based on an approximate physics model of the problem. A highly nonlinear inverse scattering problem with a high-dimensional model space serves as an illustration of the gain in efficiency obtained through this approach. \vspace*{10mm}\noindent {\small {\it Keywords}: Inverse Problems, Seismic Inversion, Probabilistic Inversion, Markov Chain Monte Carlo, Sampling Methods.} \section{Introduction} Over the last 25 years, Monte Carlo methods have been established as a main tool for providing solutions and uncertainty estimates for small- to intermediate-scale, highly nonlinear inverse problems. This development is closely connected to the dramatic increase in computational speed over the last few decades. However, there has also been an increasing demand for solving inverse problems on a larger scale, with more time-consuming forward calculations, e.g., \cite{Fichtner18}, and more complex a priori information, e.g., \cite{Lange12,Grana10}.
In this connection it has become clear that straightforward use of standard Monte Carlo algorithms is infeasible, and recent years have seen a surge of modified samplers with more and more sophisticated sampling strategies \cite{Tierney99,Haario06,Vrugt16,Ying20}. Useful improvements have been found, but there is a growing impression among practitioners that Monte Carlo strategies are fundamentally slow, and that alternatives should be found. This experience has indeed led to improvements: quite efficient solutions, all tailored to the problem at hand through a priori constraints and/or well-chosen simplifying assumptions, have shown promising results (see, e.g., \cite{Fjeldstad18}). Another recent development is an attempt to perform the often time-consuming likelihood calculations with neural networks, trained on a very large number of model-data pairs sampled from a prior probability distribution \cite{Andrieu03,Scheidt18,Nawaz19,Holm-Jensen20}. Research in Monte Carlo methods has often been based on a search for new -- often surprising -- inspiration that will allow efficient calculation with simple operations. In the early years of Monte Carlo developments there were many examples of this: Simulated Annealing \cite{Kirkpatrick83}, Hamiltonian Monte Carlo \cite{Duane87}, Simulated Tempering \cite{Marinari92}, Evolutionary Algorithms \cite{Holland92}, etc., all using ideas from other scientific fields to improve sampling, and the benefit has been new ways of building useful intuition to improve our understanding of sampling processes. In recent years we see a continuation of this trend in the statistics literature \cite{Roberts09}, and all these methods have brought some success, the degree of which depends on the category of problems they are applied to. The 'race of Monte Carlo ideas' has been accompanied by intense discussions in the research community about the efficiency of algorithms.
Not only have intuitive ideas been held up against each other, but arguments for and against methodologies have also been accompanied by numerical experiments to support the conclusions. This approach apparently rests on a sound basis, but if we take a closer look at the way algorithm comparisons are typically carried out, we discover a common deficiency: in very few cases, if any, are algorithms compared by solving {\em exactly} the same problem. At the surface, test problems look similar, but a closer look reveals that the information available to algorithms in the same test differs significantly. As a result, comparisons often become meaningless, but one thing seems clear from most comparative studies: the more information about the inverse problem we build into the code of an algorithm, the more efficient the algorithm is. The purpose of this paper is to explore how additional information in Monte Carlo sampling may significantly reduce the computational workload. We will first discuss the reasons for the often excessive time consumption of Monte Carlo strategies. We will then turn to the problem of finding and applying supplementary information to speed up calculations, not from external, independent sources (a priori information), but from the physical problem itself. Our aim will be to apply this information in a way that will not bias the sampling asymptotically. We shall explore and support our findings through numerical experiments. Our test example will be the acoustic inverse scattering problem for a vertical plane wave hitting a horizontally stratified medium with varying acoustic impedance (the product of wave speed and mass density). This problem is highly nonlinear due to internal multiple scattering (echoes) and attenuation in the medium. Since our aim is to evaluate solutions and their uncertainties, we use Markov Chain Monte Carlo (MCMC) for the analysis.
We compare a straightforward MCMC sampling approach, where the proposal distribution is arbitrary, with one where the proposal mechanism is designed from an approximation to the forward relation. The result is a significant improvement in the algorithm's efficiency. \section{Markov Chain Monte Carlo and the Proposal Problem} \subsection{Proposal Distributions} The basic idea behind any implementation of Markov Chain Monte Carlo (MCMC) is an interplay between {\em proposals} and {\em rejections}. In each iteration, sampling from a probability density $f({\bf x})$ over a space ${\cal X}$ proceeds from a current value ${\bf x}$ by first randomly proposing a new value ${{\bf x}}'$ according to the so-called {\em proposal distribution} $q({{\bf x}}' | {\bf x})$, followed by a random decision where ${{\bf x}}'$ is accepted with probability \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}') q({\bf x}|{{\bf x}}')}{f({\bf x}) q({{\bf x}}'|{\bf x})},1 \). \label{eq: acc-prob} \end{equation} This acceptance probability ensures that, once an equilibrium sampling distribution is established, it will be maintained through {\em microscopic reversibility}, because the probability $P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'} q({{\bf x}}'|{\bf x}) f({\bf x})$ of a transition from ${\bf x}$ to ${{\bf x}}'$ equals the probability of the reverse transition, $P_{\rm acc}^{{{\bf x}}' \rightarrow {{\bf x}}} q({\bf x}|{{\bf x}}') f({{\bf x}}')$ \cite{Mosegaard02}. At this point it is important to note that the proposal distribution has no influence on the distribution to which the sampling converges; it only influences the speed of convergence. \bigskip\noindent The two most common types of proposal distributions are: \begin{enumerate} \item {\em Local} proposal distributions $q$, where $q({{\bf x}}'|{\bf x})$ depends on the starting point ${{\bf x}}$.
A frequent assumption is translation invariance, where $q({{\bf x}}'|{\bf x}) = q({{\bf x}}' + {\bf a}|{{\bf x}} + {\bf a})$ for any shift ${\bf a}$ in the parameter space. Another common assumption is symmetry: $q({{\bf x}}' | {\bf x}) = q({\bf x} | {{\bf x}}')$, and in this case we get a simpler expression for the acceptance probability (\ref{eq: acc-prob}): \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}')}{f({\bf x})},1 \). \label{eq: acc-symmetric} \end{equation} \item {\em Global} proposal distributions $q$ that are independent of the starting point ${{\bf x}}$. This means that $q({\bf x}|{{\bf x}}') = h({\bf x})$ where $h({\bf x})$ is fixed during the sampling process. If $h({\bf x})$ is in some sense close to the target distribution $f({\bf x})$, $h$ is often called a "surrogate" (for $f$). \end{enumerate} An MCMC sampler is only efficient if large enough steps (connecting any two areas of high values of $f({\bf x})$ in a few steps) are frequently accepted. This ability critically depends on $q({{\bf x}}' | {\bf x})$, and requires that $q({{\bf x}}' | {\bf x})$ is (at least) locally similar to $f({\bf x}')$.
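As a concrete illustration of the symmetric (random-walk) case, the Metropolis rule above can be sketched in a few lines of Python. This is a toy example: the one-dimensional Gaussian target and the step size are placeholders of ours, not the scattering problem treated later in the paper.

```python
import math
import random

def metropolis(logf, x0, step, n, seed=0):
    """Random-walk Metropolis with a symmetric Gaussian proposal.

    Because q(x'|x) = q(x|x'), the acceptance rule reduces to
    min(f(x')/f(x), 1), evaluated here in log space for stability.
    """
    rng = random.Random(seed)
    x, samples, accepted = x0, [], 0
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # symmetric proposal
        if math.log(rng.random()) < logf(xp) - logf(x):
            x, accepted = xp, accepted + 1     # accept the move
        samples.append(x)                      # keep the current state either way
    return samples, accepted / n

# Toy target: standard 1-D Gaussian, log f(x) = -x^2/2 (up to a constant).
samples, acc_rate = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 20000)
```

Tuning the step width trades off the two failure modes discussed below: small steps are almost always accepted but explore slowly, while large steps are rarely accepted.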
This is revealed by a close look at the expression for the transition probability from ${\bf x}$ to ${\bf x}'$: \begin{equation} P({{\bf x}}' |{{\bf x}})= q({{\bf x}}'|{\bf x}) \cdot {\rm min} \( \frac{f({{\bf x}}') q({\bf x}|{{\bf x}}')}{f({\bf x}) q({{\bf x}}'|{\bf x})},1 \) \ , \label{eq_accept} \end{equation} showing that, for $f({{\bf x}}') \ge f({{\bf x}})$ and a large $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$, the transition ${{\bf x}} \rightarrow {{\bf x}}'$ is very likely, but for $f({{\bf x}}') < f({{\bf x}})$ it is only likely when \begin{enumerate} \item $f({{\bf x}}')$ and $q({{\bf x}}'|{{\bf x}})$ are both large at ${{\bf x}}'$, and \label{cond-1} \item $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$ is large. \label{cond-2} \end{enumerate} We will now see how implementations of local and global proposals may address these conditions. \subsection{Local proposals} The use of local proposals is an attempt to satisfy the above two conditions: \begin{enumerate} \item This condition is met by choosing a $q({{\bf x}}'|{{\bf x}})$ narrow enough that most of $q$'s support coincides with high values of $f$. The underlying assumption here is that $f$ is somehow smooth in the neighborhood of ${{\bf x}}$. In the absence of external information about the smoothness of $f$, one must usually resort to experimentation with different widths of $q$. \item This condition is usually met by using a symmetric $q$: $q({{\bf x}}'|{\bf x}) = q({{\bf x}}|{\bf x}')$. In this way, the ratio $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x})$ is always $1$ (and hence never "small"). \end{enumerate} Local proposals are widely used, but they have at least two serious drawbacks. Firstly, if they are too narrow, the proposed steps will be so small that the algorithm needs many iterations to traverse the parameter space. As a result, many iterations are required to produce sufficiently many independent samples from the space.
Secondly, even a very narrow proposal may not approximate the target distribution $f({\bf x})$ very well. To investigate and exemplify the latter problem in high-dimensional spaces, let us consider the case where the target distribution of ${{\bf x}}$ is Gaussian with covariance matrix ${\bf C}$ and mean ${\bf x}_0$: $f({{\bf x}}) = {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})$. Assume for illustration that our proposal distribution is a Gaussian $q({{\bf x}}|{{\bf x}}_q) = {\cal N}_{{\bf x}} ({\bf x}_q,{\bf C}_q)$ with mean ${{\bf x}}_q$ and covariance matrix ${\bf C}_q$, and that we, in the sampling process, have been fortunate enough to arrive at a point with a high value of $f({{\bf x}})$, say, for simplicity, at its maximum point ${{\bf x}}_0$. We can now calculate the expected acceptance probability $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ of the move proposed in the next step by the algorithm: \begin{equation} \begin{split} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) &= \int_{{\cal X}} \frac{f({{\bf x}})}{f({{\bf x}}_0)} q({{\bf x}}|{{\bf x}}_0) d{{\bf x}} \\ &= \int_{{\cal X}} \frac{{\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C}_q) d{{\bf x}} \\ &= \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})}\int_{{\cal X}} {\cal N}_{{\bf x}} ({\bf x}_1,{\bf C}_1) d{{\bf x}} \end{split} \label{eq: Meanf} \end{equation} where \begin{equation} {\bf x}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} ({\bf C}^{-1} {\bf x}_0 + {\bf C}_q^{-1} {\bf x}_0) = {\bf x}_0 \end{equation} and \begin{equation} {\bf C}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} \, .
\end{equation} Since the last integral in (\ref{eq: Meanf}) is $1$, we have the following expression for the expected acceptance probability: \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} = \( \frac{{\rm det}\( 2\pi {\bf C} \)}{{\rm det}\( 2\pi ({\bf C}+{\bf C}_q) \)} \)^{1/2} \, . \end{equation} Both ${\bf C}_q = \sigma_q^2 {{\bf I}}$ (with $\sigma_q^2 > 0$) and ${\bf C}$ are diagonal in the frame spanned by ${\bf C}$'s eigenvectors, and if we assume that the eigenvalues of ${\bf C}$ are $\sigma_1^2 \ge \dots \ge \sigma_N^2 > 0$, where $N$ is the dimension of ${\cal X}$, the eigenvalues of ${\bf C}+{\bf C}_q$ are $(\sigma_1^2+\sigma_q^2), \dots ,(\sigma_N^2+\sigma_q^2)$. From this we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \prod_{n=1}^N \( \frac{\sigma_n^2}{\sigma_n^2+\sigma_q^2} \)^{1/2} \, . \label{eq-eigenC1} \end{equation} From (\ref{eq-eigenC1}) we see that for any non-zero values of $\sigma_n$ and $\sigma_q$ we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) \rightarrow 0 \quad {\rm for} \quad N \rightarrow \infty \, , \end{equation} expressing the influence of the so-called 'curse of dimensionality' on the sampling process. If the proposed steps are kept very short ($\sigma_q$ small compared to all $\sigma_n$), the decrease of $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ with $N$ is slow. But this situation is of no practical value, because adequate sampling by the algorithm requires that it can traverse high-probability areas of $f({{\bf x}})$ within a reasonable amount of time. For non-negligible step lengths, the situation is radically different. Indeed, if there exists an integer $K$ and a real constant $k$ such that $\sigma_q > k\sigma_n$ for all $n > K$, then $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ decreases at least exponentially with $N$.
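The rate of this decay is easy to evaluate numerically. The short sketch below computes the product in (\ref{eq-eigenC1}) in log space (to avoid underflow) for the illustrative eigenvalue choice $\sigma_q^2 = 1$ and $\sigma_n^2 = 1/n$, a choice made here purely for illustration:

```python
import math

def expected_acceptance(N, sigma_q2=1.0):
    """E(P) = prod_{n=1}^{N} sqrt(sigma_n^2 / (sigma_n^2 + sigma_q2)),
    with sigma_n^2 = 1/n, accumulated in log space to avoid underflow."""
    log_e = 0.0
    for n in range(1, N + 1):
        s2 = 1.0 / n
        log_e += 0.5 * (math.log(s2) - math.log(s2 + sigma_q2))
    return math.exp(log_e)

for N in (2, 10, 100):
    print(N, expected_acceptance(N))   # the expected acceptance shrinks dramatically with N
```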
In other words, if the distribution $f({{\bf x}})$ is 'elongated' compared to the proposal $q$, that is, if it is broader than $q$ in only a fixed number $K < N$ of directions/dimensions, the mean number of accepted moves will decrease at least exponentially with the number of dimensions. As an example, let us consider the case where $\sigma_q^2 = 1$ and $\sigma_n^2 = 1/n$. For $N=2$ this gives an expected acceptance probability of $0.4082$, corresponding to a mean waiting time of about $0.4082^{-1} \approx 2.5$ iterations between accepted moves. For $N=10$ the expectation is $1.5828 \cdot 10^{-4}$, and for $N=100$ it decreases to $1.03 \cdot 10^{-80}$, giving a waiting time of about $3.1 \cdot 10^{63}$ years at one billion iterations per second. The above analysis is carried out under the favorable assumption that the maximum of $f({{\bf x}})$ has been located by the algorithm, and does not even consider the serious difficulties faced by the sampling algorithm in the initial search for points with high values of $f({{\bf x}})$ (the {\em burn-in} phase). Hence, it is clear that the proposal mechanism, as defined by $q$, is the Achilles heel of the standard MCMC approach. \subsection{Global proposals} A global proposal $q({{\bf x}}'|{{\bf x}})$ is independent of ${{\bf x}}$ and hence it can be written $q({{\bf x}}'|{{\bf x}}) = h({{\bf x}}')$. The use of global proposals seeks to meet the requirements of (\ref{cond-1}) and (\ref{cond-2}) by choosing $h({{\bf x}}') \approx f({{\bf x}}')$, ensuring that \begin{enumerate} \item $q$ and $f$ are everywhere similar, and \item when $f({{\bf x}}') \leq f({{\bf x}})$ the condition $q({\bf x}|{{\bf x}}')/q({{\bf x}}'|{\bf x}) \gtrapprox 1$ is always met. \end{enumerate} In fact, from (\ref{eq_accept}) it is easily seen that global proposals are ideal if they closely resemble the target distribution.
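An MCMC step with a global proposal (often called an independence sampler) is straightforward to implement once a surrogate $h$ is available. The Python sketch below is a toy illustration with a deliberately imperfect Gaussian surrogate for a Gaussian target; all densities and parameters here are placeholders of ours, not part of the paper's test problem:

```python
import math
import random

def independence_mh(logf, logh, sample_h, n, seed=0):
    """Global-proposal MCMC: candidates are drawn from h independently of the
    current state and accepted with prob. min(f(x')h(x) / (f(x)h(x')), 1)."""
    rng = random.Random(seed)
    x = sample_h(rng)
    out = []
    for _ in range(n):
        xp = sample_h(rng)                       # proposal ignores the current x
        log_ratio = (logf(xp) - logf(x)) + (logh(x) - logh(xp))
        if math.log(rng.random()) < log_ratio:
            x = xp                               # arbitrarily large jumps can be accepted
        out.append(x)
    return out

# Target: standard Gaussian; surrogate: a somewhat-too-wide Gaussian N(0, 1.5^2).
logf = lambda x: -0.5 * x * x
logh = lambda x: -0.5 * (x / 1.5) ** 2
draws = independence_mh(logf, logh, lambda r: r.gauss(0.0, 1.5), 20000)
```

Note that the surrogate should be at least as heavy-tailed as the target; a surrogate that misses part of $f$'s support produces states that, once reached, are hard to leave.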
In the ideal case where $h({{\bf x}}') = f({{\bf x}}')$, the transition probability is equal to $f({{\bf x}}')$, and the sampler has no rejected moves. Arbitrarily large steps in the sample space are allowed, and therefore all sample points are statistically independent. However, the problem with global proposals is to find them in the first place. There are, in principle, two approaches: \begin{enumerate} \item Using, as proposal, a local approximation $h({\bf x})$ to $f({\bf x})$, estimated/interpolated from already visited sample points in the neighborhood of ${\bf x}$ \cite{Christen05,Ying20}. This proposal may be consistent with (similar to) $f$ in the neighborhood of existing sample points. \label{local-prop} \item Using a global approximation $h({\bf x})$ derived from external information about $f({\bf x})$, that is, {\em not} derived from already visited sample points. This proposal should be consistent with (similar to) $f$ even far away from existing sample points. \label{global-prop} \end{enumerate} In the following we shall show an example of the use of global proposals in inverse problems. Our global proposal will be constructed from external information about the target distribution $f$, using an approximate forward function that is independent of known values of $f$. However, before we proceed, we shall first examine the fundamental advantage of (\ref{global-prop}) over (\ref{local-prop}). To this end, we shall look into an important theorem, proven in the late 1990s, namely the No-Free-Lunch Theorem \cite{Wolpert97}. \section{No-Free-Lunch Theorems and the importance of information} We will now make an important distinction between {\em blind algorithms} and {\em informed algorithms}. We use the following definitions: \begin{enumerate} \item A {\em blind algorithm} is an algorithm whose search or sampling is performed only via an {\em oracle}.
An oracle is a function that, when called by the algorithm, is able to evaluate the target distribution $f$ at a given point ${{\bf x}}$. The oracle is used by the algorithm as a black box: no properties of $f$ other than the corresponding inputs and outputs are used. In computer science, blind algorithms are often called {\em heuristics}. For inversion, there are many well-known examples of blind algorithms in use: Regular MCMC, Simulated Annealing, Genetic Algorithms, Neural Networks, etc. \item An {\em informed algorithm} is an algorithm that, in addition to an oracle, uses known, {\em external} properties of $f$ to guide/improve the search or sampling. By external properties we mean any information about $f$ that is not given by samples from $f$. Examples of informed algorithms used in geophysical inversion are Hamiltonian Monte Carlo, exploiting the fact that for seismic wave fields adjoint methods can be used to efficiently compute misfit gradients \cite{Fichtner18}, and Discriminative Variational Bayesian inversion, exploiting knowledge about the statistics of the unknown model in case it is a Markov Random Field \cite{Nawaz19}. \end{enumerate} Based on the No-Free-Lunch Theorem \cite{Wolpert97}, Mosegaard (2010) considered limits for the performance of algorithms designed for the solution of inverse problems. The conclusion was that all blind inversion algorithms in finite-dimensional spaces (optimization-based as well as sampling-based) have exactly the same performance when averaged over all conceivable inverse problems. Only an algorithm that takes into account more characteristics of the "forward model" than given by the oracle can ensure performance that is superior to blind inversion algorithms. We can draw the conclusion that efficient inversion algorithms are the ones that operate in accordance with specific properties of the problems they aim to solve.
If the problem is linear with known Gaussian noise statistics and a given Gaussian prior, it can be solved in ``one iteration'' (applying a closed-form solution formula). If the problem is mildly nonlinear with, e.g., Gaussian noise and Gaussian prior, our knowledge that the posterior probability distribution is unimodal will render the problem solvable in relatively few iterations. For a highly nonlinear problem, the situation is, in principle, the same, except that the term ``highly nonlinear'' usually signals a lack of knowledge of the shape of the posterior. The posterior may be highly multimodal and possess other pathologies, but we may still have some sparse knowledge about it, for instance that it has a certain smoothness. Irrespective of what we know about the target posterior distribution, we have the option of building this information into the algorithm. If we have plenty of information, we can create an efficient algorithm. If we have sparse information, our algorithm will need more computation time. Countless methods use interpolation to construct local or global approximations to the posterior and to use them as proposals in the sampling process (e.g., \cite{Christen05,Ginting11,Jin11,Stuart19,Ying20}; Laloy et al., 2013; Georgia et al., 2019). These methods are useful and may improve performance, but they still suffer from the limitations set by the No-Free-Lunch Theorem, because they do not bring in additional, external information. In the following we will suggest an approach that allows us to design more efficient inversion algorithms through incorporation of additional, external information about the target distribution. The approach is general and can be used in deterministic as well as in sampling approaches. In this exposition we will focus on MCMC sampling, and our approach will be to replace a traditional, blind proposal mechanism with one built from a simplified forward model.
Being based on approximate physics, the chance of obtaining a good global approximation to the posterior is high. \section{MCMC with Problem-dependent Proposals} Let us now consider algorithms that bring in new, external information about the target posterior distribution $f({{\bf x}})$. An approximation $\tilde{f} ({{\bf x}}) \approx f({{\bf x}})$, constructed from a simplified version of the physics behind the correct distribution $f$, will be used as a proposal. This proposal will not only be close to $f$ in the neighborhood of points already visited by the algorithm, it is also expected to work well far away from current samples, because it is guided by the physics of the problem. \subsection{Linear, Gaussian Problems} Sampling of solutions to a linear Gaussian problem through MCMC sampling is straightforward. Since we have an explicit expression for the Gaussian posterior, the distribution itself can be used as an optimal proposal. Samples ${{\bf z}}$ from an $N$-dimensional standard (isotropic) Gaussian (mean $\bf 0$ and covariance ${\bf I}$) can be generated with, e.g., the Box--Muller method, and the desired samples ${{\bf m}}$ from an $N$-dimensional multivariate Gaussian with mean ${{\bf m}}_0$ and covariance ${\bf C}$ can be calculated as ${{\bf m}} ={{\bf m}}_0 + {{\bf A}}{{\bf z}}$, where ${{\bf A}}{{\bf A}}^T = {{\bf C}}$. The matrix ${{\bf A}}$ can be found by, for instance, Cholesky decomposition. \subsection{Nonlinear Problems} For nonlinear inverse problems, let us consider the general expression for the joint posterior probability in the formulation of Tarantola and Valette (1982): \begin{equation} \sigma({\bf d},{\bf m}) = \frac{\rho({\bf d},{\bf m}) \theta({\bf d},{\bf m})}{\mu({\bf d},{\bf m})} \end{equation} where ${\bf d}$ is the data, ${\bf m}$ is the model parameters, and $\rho({\bf d},{\bf m})$ and $\mu({\bf d},{\bf m})$ are the prior and the homogeneous probability densities in the joint $({\bf d},{\bf m})$-space, respectively.
The density $\theta({\bf d},{\bf m})$ expresses the ``uncertainty of the forward relation'' between ${\bf m}$ and the data ${\bf d}$. For simplicity, let us assume that the homogeneous probability density $\mu({\bf d},{\bf m})$, as well as the marginal prior in the model space $\rho_m ({\bf m})$, is constant, which leads us to the following expression for the joint posterior: \begin{equation} \sigma({\bf d},{\bf m}) = \rho({\bf d}) \theta({\bf d},{\bf m}) \end{equation} Under the further assumption that the observational data uncertainties are small, compared to the modelization errors, we arrive at the approximation \begin{equation} \sigma_m({\bf m}) = \sigma({\bf d},{\bf m}) \approx \theta({\bf d}_{obs},{\bf m}) \end{equation} This is a very rough approximation, but it should be remembered that we will not replace the accurate posterior by this expression. The approximation will only be used as a global proposal distribution to speed up the search/sampling from the correct posterior. The question is now how we can find an acceptable expression for $\theta({\bf d}_{obs},{\bf m})$. In this paper we will adopt the following simple procedure: \begin{enumerate} \item Choose a simplified forward function $\tilde{g}({\bf m})$ expressing much of the essential physics, while at the same time allowing an efficient (but probably inaccurate) inversion. This step can be skipped if a direct way to the following step (without a formal inversion) is available. \item Find a solution $\tilde{{\bf m}} = h({\bf d}_{obs})$ to the simplified problem with an acceptable data fit. \item Estimate the modelization error introduced by using $\tilde{g}({\bf m})$ instead of the accurate forward function $g({\bf m})$. This error is quantified by the distribution $\tilde{\theta}({\bf d}_{obs},{\bf m})$, which is also a rough approximation to the posterior ${\tilde{\sigma}}_m({\bf m})$ computed through $\tilde{g}({\bf m})$.
The procedure is: \begin{enumerate} \item The ``true'' modelization error is $$\delta{{\bf m}}_{true} = \tilde{{\bf m}} - {{\bf m}}_{true} ,$$ but since ${{\bf m}}_{true}$ is unknown, we compute instead an approximate modelization error $$\delta{{\bf m}}_{approx} = \tilde{{\bf m}} - h(g(\tilde{{\bf m}})) .$$ The above formula estimates what the modelization error would have been if $\tilde{{\bf m}}$ had been the true model. In case $\tilde{{\bf m}}$ is close to ${{\bf m}}_{true}$, we expect that $\delta{{\bf m}}_{approx}$ will be close to $\delta{{\bf m}}_{true}$. \item Use $\delta{{\bf m}}_{approx}$ to construct a reasonable approximation to the modelization error distribution $\tilde{\theta}({\bf d}_{obs},{\bf m})$, centered at $\tilde{{\bf m}}$. This can be done by assuming a functional form for $\tilde{\theta}({\bf d}_{obs},{\bf m})$ and by using the components of $\delta{{\bf m}}_{approx}$ to obtain the parameters of $\tilde{\theta}({\bf d}_{obs},{\bf m})$. An example of this can be found in the following section. \end{enumerate} \end{enumerate} \bigskip\noindent \section{Numerical Example} To illustrate the gain of computational efficiency obtained by using even a rough approximation to a high-dimensional target posterior as proposal, we shall look at a 1D inverse scattering problem. The unknown model is a horizontally stratified medium with 1000 homogeneous layers. Figure 1B shows the acoustic impedance as a function of distance from the surface. A plane-wave seismic pulse (modeled as a Ricker wavelet) is injected perpendicularly into the medium at the surface, and the data (backscattered waves from the medium) are recorded at the surface (Figure 1A, left). The data are synthetic 1-D full-waveform seismic signals generated by the propagator matrix method, containing all multiple reflections, transmission losses and damping effects, so the inverse problem of recovering the model from the data is highly nonlinear.
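The two-step error estimate described above can be sketched as follows (a minimal illustration; `g`, `h` and the toy functions in the usage below are hypothetical stand-ins for the accurate forward and the approximate inverse):

```python
import numpy as np

def modelization_error(d_obs, g, h):
    """delta_m_approx = m_tilde - h(g(m_tilde)): the error the approximate
    inverse h would make if m_tilde were the true model."""
    m_tilde = h(d_obs)                  # approximate solution of the problem
    delta_m = m_tilde - h(g(m_tilde))   # approximate modelization error
    return m_tilde, delta_m
```

The components of `delta_m` (for instance their envelope) can then parametrize $\tilde{\theta}({\bf d}_{obs},{\bf m})$, e.g., as the standard deviations of a Gaussian centered at $\tilde{{\bf m}}$.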
For comparison, an approximate seismogram, computed by convolution of the reflectivity with the Ricker wavelet, is shown in Figure 1A (middle), together with its error (deviation from the correct seismogram) to the right. Figure 1C shows an approximate solution to the inverse scattering problem in the absence of noise, computed by deconvolution, and converted to impedance through trace integration and addition of the slowly varying trend from Figure 1B. The approximate solution requires very little computation time, but is clearly inaccurate (compare to the "true" model in Figure 1B). The purpose of the study is to show how the approximate result can be used to efficiently produce a more accurate solution with uncertainty estimates using Markov Chain Monte Carlo (MCMC). \begin{figure} \includegraphics[width=5.0in]{MCinformedStep120520.pdf} \caption{\small (A) Left: Accurate seismogram from B; Center: seismogram computed by convolution; Right: error of the convolution seismogram. (B) True acoustic impedance (C) Acoustic impedance computed by deconvolution (impedance trend from B is added). (D) Envelope of true modelization error (deconvolution impedance minus true impedance). (E) Envelope of estimated modelization error. (F) A sample model from the Informed Proposal Monte Carlo inversion. (G) Median of 10000 sample models.} \label{fig: panel} \end{figure} \begin{figure} \begin{center} \includegraphics[width=2.5in]{IMC-MCMC-iter-2000-Misfits-xx.pdf} \caption{\small Convergence towards equilibrium of a classical MCMC algorithm (upper curve), attempting to sample solutions to our test inverse problem. The lower curve is the fast-converging Informed Proposal Monte Carlo (IPMC) algorithm, which was guided by linearized inversion. 
In this case the convergence of the guided algorithm was between $10^3$ and $10^4$ times faster than the classical MCMC algorithm (with a tuned, isotropic proposal).} \end{center} \end{figure} Our aim is to produce enough samples from the posterior probability distribution in reasonable time, and this raises a well-known problem, namely that the traditional MCMC approach is infeasible for problems with more than a couple of hundred parameters. Our way of speeding up the sampling is to construct a global proposal distribution for the MCMC sampling using the approximate solution $\tilde{{\bf m}}$. First, we compute the estimated modelization error vector $\delta{{\bf m}}_{approx}$ using the method described in the previous section. Figure 1E shows the envelope of the components of this vector, and for comparison, the true modelization error (known in this synthetic data case) is shown in Figure 1D. The proposal distribution is then built as a Gaussian with mean $\tilde{{\bf m}}$ and a diagonal covariance matrix ${\bf C}_{\theta}$ whose diagonal is the squared components of the envelope function. The 1000-parameter problem is now solved in two ways: (1) via a classical MCMC with an isotropic ad-hoc proposal distribution where the step length is adjusted to obtain an acceptance rate of approximately 50\%, and (2) via an Informed Proposal Monte Carlo (IPMC) algorithm driven by our proposal derived above. Figure 2 (upper curve) shows the slow convergence to equilibrium of the classical MCMC in the first 2000 iterations of the inversion process. The lower curve shows the much faster convergence of the algorithm guided by the linearized solution. The improvement in convergence time is significant, in this case between $10^3$ and $10^4$ times faster when started at the model $\tilde{{\bf m}}$ obtained by linear inversion (deconvolution).
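The IPMC scheme used here is, in effect, an independence sampler whose global proposal ${\cal N}(\tilde{{\bf m}},{\bf C}_\theta)$ ignores the current state; a minimal sketch (illustrative names, not the code used for the figures) is:

```python
import numpy as np

def ipmc(log_L, m_prop, sigma_prop, n_iter, seed=0):
    """Independence sampler: the proposal is N(m_prop, diag(sigma_prop**2)),
    independent of the current state, so the proposal ratio enters the
    acceptance probability."""
    rng = np.random.default_rng(seed)
    log_q = lambda m: -0.5 * np.sum(((m - m_prop) / sigma_prop) ** 2)
    m = np.array(m_prop, dtype=float)
    chain = []
    for _ in range(n_iter):
        m_new = m_prop + sigma_prop * rng.standard_normal(m.size)
        # asymmetric Metropolis-Hastings acceptance ratio
        log_a = (log_L(m_new) - log_L(m)) + (log_q(m) - log_q(m_new))
        if np.log(rng.random()) < log_a:
            m = m_new
        chain.append(m)
    return np.array(chain)
```

Because the proposal is global, accepted moves can jump directly between distant high-probability regions, which is what produces the fast convergence seen in Figure 2.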
\subsection{Discussion} It is important to realize that the significantly improved efficiency provided by the physical proposal in this study is {\em not} resulting from prior constraints. Priors generally assign different probabilities to different solutions, but this is not the case with a proposal. A proposal only influences the frequency by which models are presented to the acceptance/rejection algorithm. The bias of the proposal will, asymptotically, be neutralized because it is compensated for in the acceptance probability. In this way it will only influence the efficiency of the sampler, not the asymptotic result. It should, however, be remembered that the most serious problem in non-linear inversion is that the number of models we can practically test is limited. And considering that highly non-linear problems are often so complex that they can only be safely solved with a high number of approximately independent samples from the posterior, it is clear that using an efficient proposal will not only be an improvement in speed, but also a potential improvement in quality of solutions. Simply speaking, we can expect to discover more significantly different solutions (peaks of the target distribution) within the allowed computer resources than with a plain MCMC implementation. We have illustrated how important it is for the proposal to mimic the posterior in MCMC sampling of solutions to inverse problems. However, the idea of using the physics of the problem to build a posterior-like proposal is not restricted to Monte Carlo sampling. Any method depending on a search for sample solutions or good data fits can potentially benefit from this strategy. In an interesting recent paper on variational full-waveform inversion \cite{Zhang20}, it is shown how variational methods may be used to modify samples from the prior into samples of the posterior in the solution of large-scale inverse problems. 
It is likely that this class of methods may, in the future, be further improved through application of informed proposal mechanisms. \subsection{Conclusion} We have analyzed the impact of proposal distributions on the performance of MCMC sampling methods when applied to the solution of inverse problems. We concluded that the ``small step'' strategies used in traditional implementations are relatively efficient because they impose a local consistency between the proposal distribution and the target (posterior) distribution: the target probabilities tend to be large where the proposal probabilities are large. Nevertheless, we showed by a simple analytical example that even local consistency may be difficult to obtain when local ``small-step'' proposals are arbitrary. Furthermore, a main problem with local proposals is the limited step length, which strongly hampers the exploration of vast, high-dimensional spaces. The volumes of high-probability areas are negligible in such spaces, so burn-in times, and the times needed to pass from one maximum to another, can be prohibitive for small-step algorithms. Our solution to these problems is to use global proposals built from external information about the target distribution. We propose to use simplified physics of the problem to ensure global consistency between the proposal and the target distribution. The efficiency of this approach will be highly problem-dependent and strongly conditioned on the choice of the external proposal, but we successfully carried out a test on a $1000$-parameter, highly nonlinear inverse scattering problem. Our gain in efficiency was in this case of the order of up to $10^4$. \subsection{Acknowledgments} This work was supported by Innovation Fund Denmark through the OPTION Project (5184-00025B). Klaus Mosegaard would like to thank Dr.
Amir Khan and colleagues at the Department of Earth Sciences, ETH, for their hospitality and inspiring discussions during the fall 2017 where this work was initiated. \section{Classical MCMC} In this section we follow Tarantola and Valette (1982a), whose formulation of the probabilistic inference problem is based on an expression for the conjunction of information given by two probability densities $f_1({\bf x})$ and $f_2({\bf x})$ over the same space ${\cal X}$: $$ (f_1 \wedge f_2) ({\bf x}) = \frac{f_1({\bf x}) f_2({\bf x})}{\mu({\bf x})} \, , $$ where ${\mu({\bf x})}$ is the homogeneous probability density assigning equal probabilities to equal volumes in ${\cal X}$. They adapt this formula to the inference problem by defining ${\bf x} = ({\bf d},{\bf m})$ as a point in the joint data-model space, and obtain the posterior probability distribution: $$ \sigma({\bf d},{\bf m}) = \frac{\rho({\bf d},{\bf m}) \theta({\bf d},{\bf m})}{\mu({\bf d},{\bf m})}. $$ Under the assumption that the priors over ${\bf d}$ and ${\bf m}$ are independent, that is, $\rho({\bf d},{\bf m})=\rho_d({\bf d}) \rho_m({\bf m})$, that $\mu({\bf d},{\bf m})$ is constant, and that the physical relation ${\bf d} = {\bf g}({\bf m})$ is exact, such that $\theta({\bf d},{\bf m})=\delta({\bf d} - {\bf g}({\bf m})) \mu({\bf m})$, we arrive at an expression for the posterior in the model space \begin{equation} \sigma_m({\bf m}) = L({\bf m})\rho_m({\bf m}) \label{TV-Bayes} \end{equation} with $L({\bf m})=\rho_d({\bf g}({\bf m})).$ Equation (\ref{TV-Bayes}) is equivalent to the classical Bayes Formula, which in our notation reads $$ \sigma_m({\bf m} | {\bf d}) = \frac{L({\bf d} | {\bf m})\rho_m({\bf m})}{\rho_d({\bf d})}.
$$ The posterior distribution (\ref{TV-Bayes}) is generally impossible to calculate directly, except for linear inverse problems, so one has to rely on Monte Carlo sampling methods to generate realizations from the distribution. A commonly used approach is the Extended Metropolis Algorithm (\cite{MT95,M06}): \bigskip\noindent \begin{Algorithm} \label{algo: ext-Metro} \textbf{(Extended Metropolis).} Given a random algorithm $V\left( \mathbf{m}\right) $ which iteratively samples the prior probability density ${\rho_m (\mathbf{m})}$: \begin{equation} \mathbf{m}^{(n+1)}=V\left( \mathbf{m}^{(n)}\right) , \label{eq: priorwalk} \end{equation} and an algorithm $U\left( 0,1\right) $ producing random numbers from the interval $\left[ 0,1\right]$ with uniform probability, the random algorithm $W$, which iteratively updates the current parameter vector $\mathbf{m}^{(n)}$: \begin{equation} \mathbf{m}^{(n+1)}=W\left( \mathbf{m}^{(n)}\right) =\left\{ \begin{array}{l} V\left( \mathbf{m}^{(n)}\right) \text{ if }U\left( 0,1\right) \leq \min \left[ 1,\frac{L\left( V\left( \mathbf{m}^{(n)}\right) \right) }{L\left( \mathbf{m}^{(n)}\right) }\right] \\ \mathbf{m}^{(n)}\text{ else} \end{array} ,\right. \label{bigwalk} \end{equation} will asymptotically sample the posterior ${\sigma (\mathbf{m}) \propto L(\mathbf{m})\rho (\mathbf{m})}$. \end{Algorithm} \bigskip\noindent Algorithm~\ref{algo: ext-Metro} works if the prior sampler $V$ is {\em irreducible} and {\em aperiodic} (see, e.g., \cite{MT02}). \bigskip\noindent The extended Metropolis will, in general, perform better than a regular Metropolis, since only models that are accepted a priori (by $V$) will be subject to the time-consuming misfit calculation needed to evaluate $L({\bf m}) $.
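The extended Metropolis iteration can be sketched as follows (an illustrative sketch, not the implementation used in this paper; for a homogeneous prior, a symmetric random walk can play the role of $V$):

```python
import numpy as np

def extended_metropolis(V, log_L, m0, n_iter, seed=0):
    """Extended Metropolis sketch: V proposes a-priori moves, and the
    acceptance test uses only the likelihood ratio, so the chain
    asymptotically samples sigma ∝ L * rho_m."""
    rng = np.random.default_rng(seed)
    m = m0
    chain = [m0]
    for _ in range(n_iter):
        m_new = V(m)                      # a step of the prior sampler V
        dlogL = log_L(m_new) - log_L(m)
        if dlogL >= 0 or rng.random() < np.exp(dlogL):
            m = m_new                     # accept with prob min(1, L'/L)
        chain.append(m)
    return np.array(chain)
```

Note that the expensive forward calculation hides entirely in `log_L`, which is evaluated only for models already accepted by the prior sampler.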
\section{Blind and Informed Proposal Mechanisms} Despite the improved efficiency of Algorithm (\ref{algo: ext-Metro}) gained from the use of a prior sampler $V$ in the initial phase of each iteration, practical experience shows that the efficiency of extended Metropolis is highly dependent on the choice of $V$. The problem is that the prior $\rho$ and the likelihood function $L$ are usually so different that the moves in the model space proposed by $V$ will typically lead to points with low values of $L$. As explained earlier, there is little hope that even the best, blind sampling strategy will be able to significantly improve on this problem. In the following we shall therefore describe a method that will allow us to inject information about the physical relation between data and model parameters into the sampling strategy. First, however, we will examine why the `blind' strategy of the classical MCMC is so inefficient. \subsection{Proposal Distributions in Classical MCMC} The basic idea behind any implementation of the Metropolis Algorithm is an interplay between {\em proposals} and {\em rejections}. In each iteration, sampling from a probability density $f({\bf x})$ over a space ${\cal X}$ proceeds from a current value ${\bf x}$ by first proposing a new value ${{\bf x}}'$ from the so-called {\em proposal distribution} $q({{\bf x}}'|{\bf x})$, followed by a random decision where ${{\bf x}}'$ is accepted with probability \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}') q({\bf x}|{{\bf x}}')}{f({\bf x}) q({{\bf x}}'|{\bf x})},1 \).
\end{equation} This follows from the requirement that the probability that a transition takes place from ${\bf x}$ to ${{\bf x}}'$, namely $P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'} q({{\bf x}}'|{\bf x}) f({\bf x})$, must be equal to the probability of the reverse transition $P_{\rm acc}^{{{\bf x}}' \rightarrow {{\bf x}}} q({\bf x}|{{\bf x}}') f({{\bf x}}')$ in order to attain equilibrium sampling (detailed balance). At this point it is important to note that the proposal distribution has no influence on the distribution to which the sampling converges; it only influences the speed of convergence. It is common practice (but not a necessity) to work with symmetric proposal distributions, $q({{\bf x}}'|{\bf x}) = q({\bf x}|{{\bf x}}')$, and in this case we get the simpler expression \begin{equation} P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}={\rm min} \( \frac{f({{\bf x}}')}{f({\bf x})},1 \). \label{eq_accept} \end{equation} The Metropolis sampler is only efficient if large enough steps (to connect any two high-value areas of $f({\bf x})$ in a few steps) will be frequently accepted. This ability critically depends on $q({{\bf x}}'|{\bf x})$, and that is the reason why intensive research in recent years has been devoted to finding improved proposal distributions. A close look at the expression for the unconditional transition probability from ${\bf x}$ to ${\bf x}'$ (for symmetric $q$) \begin{equation} P^{{{\bf x}} \rightarrow {{\bf x}}'}=f({\bf x}) \, q({{\bf x}}'|{\bf x}) \, P_{\rm acc}^{{{\bf x}} \rightarrow {{\bf x}}'}=f({\bf x}) \cdot q({{\bf x}}'|{\bf x}) \cdot {\rm min} \( \frac{f({{\bf x}}')}{f({\bf x})},1 \) \end{equation} shows that, if $f({{\bf x}}')$ turns out to be low in places where $q({{\bf x}}'|{\bf x})$ is high, the move ${{\bf x}} \rightarrow {{\bf x}}'$ is very likely to be rejected.
To investigate and exemplify the importance of this problem in high-dimension\-al spaces, let us consider the case where the distribution of ${{\bf x}}$ is Gaussian with covariance matrix ${\bf C}$ and mean ${\bf x}_0$: $f({{\bf x}}) = {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})$. Assume now that our proposal distribution is an isotropic Gaussian $q({{\bf x}}|{{\bf x}}_q) = {\cal N}_{{\bf x}} ({\bf x}_q,{\bf C}_q)$ with mean ${{\bf x}}_q$ and covariance matrix ${\bf C}_q$, and that we, in the sampling process, have arrived at a point with a high value of $f({{\bf x}})$, say, for simplicity, at its maximum point ${{\bf x}}_0$. We can now calculate the expected acceptance probability $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ of the move proposed in the next step by the algorithm: \begin{align} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) &= \int_{{\cal X}} \frac{f({{\bf x}})}{f({{\bf x}}_0)} q({{\bf x}}|{{\bf x}}_0) d{{\bf x}} \label{eq: Meanf1} \\ &= \int_{{\cal X}} \frac{{\cal N}_{{\bf x}} ({\bf x}_0,{\bf C})}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} {\cal N}_{{\bf x}} ({\bf x}_0,{\bf C}_q) d{{\bf x}} \\ &= \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})}\int_{{\cal X}} {\cal N}_{{\bf x}} ({\bf x}_1,{\bf C}_1) d{{\bf x}} \label{eq: Meanf} \end{align} where \begin{equation*} {\bf x}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} ({\bf C}^{-1} {\bf x}_0 + {\bf C}_q^{-1} {\bf x}_0) = {\bf x}_0 \end{equation*} and \begin{equation*} {\bf C}_1 = ({\bf C}^{-1} + {\bf C}_q^{-1})^{-1} \, . \end{equation*} Since the integral in (\ref{eq: Meanf}) is $1$, we have the following expression for the expected acceptance probability: \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \frac{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C}+{\bf C}_q)}{{\cal N}_{{\bf x}_0} ({\bf x}_0,{\bf C})} = \( \frac{{\rm det}\( 2\pi {\bf C} \)}{{\rm det}\( 2\pi ({\bf C}+{\bf C}_q) \)} \)^{1/2} \, .
\end{equation} Both ${\bf C}_q = \sigma_q^2 {{\bf I}}$ (with $\sigma_q^2 > 0$) and ${\bf C}$ are diagonal in the frame spanned by the eigenvectors of ${\bf C}$, and if we assume that the eigenvalues of ${\bf C}$ are $\sigma_1^2 \ge \dots \ge \sigma_N^2 > 0$, where $N$ is the dimension of ${\cal X}$, the eigenvalues of ${\bf C}+{\bf C}_q$ are $(\sigma_1^2+\sigma_q^2), \dots ,(\sigma_N^2+\sigma_q^2)$. From this we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) = \prod_{n=1}^N \( \frac{\sigma_n^2}{\sigma_n^2+\sigma_q^2} \)^{1/2} \, . \label{eq-eigenC1} \end{equation} From (\ref{eq-eigenC1}) we see that for any non-zero values of $\sigma_n$ and $\sigma_q$ we have \begin{equation} E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}}) \rightarrow 0 \quad {\rm for} \quad N \rightarrow \infty \, , \end{equation} expressing the influence on sampling of the so-called `curse of dimensionality'. If the proposed steps are kept very short ($\sigma_q$ is small compared to all $\sigma_n$), the decrease of $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ with $N$ is slow. But this situation is of no practical value, because adequate sampling by the algorithm requires that it can traverse high-probability areas of $f({{\bf x}})$ within a reasonable amount of time. For non-negligible step lengths, the situation is radically different. Indeed, if there exists an integer $K$ and a positive constant $k$ such that $\sigma_q > k\sigma_n$ for all $n > K$, then $E(P^{{{\bf x}}_0 \rightarrow {{\bf x}}})$ decreases at least exponentially with $N$. In other words, if the distribution $f({{\bf x}})$ is `elongated' compared to the proposal $q$, that is, if it is broader than $q$ in only a finite number of directions/dimensions, the mean number of accepted moves will decrease at least exponentially with the number of dimensions. As an example, let us consider the case where $\sigma_q^2 = 1$, and $\sigma_n^2 = 1/n$.
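In this case each factor in (\ref{eq-eigenC1}) reduces to $\sqrt{1/(n+1)}$, so the expectation can be evaluated directly (a quick numerical check):

```python
import math

def expected_acceptance(N, sigma_q2=1.0):
    """Evaluate the product formula for E(P) with sigma_n^2 = 1/n."""
    p = 1.0
    for n in range(1, N + 1):
        sn2 = 1.0 / n
        p *= math.sqrt(sn2 / (sn2 + sigma_q2))
    return p   # equals 1/sqrt((N+1)!) for sigma_q2 = 1
```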
For $N=2$ this gives an expected acceptance probability of $0.4082$, corresponding to a mean waiting time of about $0.4082^{-1} \approx 2.5$ iterations between accepted moves. For $N=10$ the expectation is $1.5828 \cdot 10^{-4}$, and for $N=100$ it decreases to $1.03 \cdot 10^{-80}$, giving a waiting time of about $3.0 \cdot 10^{63}$ years at $10^9$ iterations per second. The above analysis is carried out under the very favourable situation where the maximum of $f({{\bf x}})$ has been located by the algorithm, and does not even consider the serious difficulties faced by the sampling algorithm in the initial search for points with high values of $f({{\bf x}})$ (the {\em burn-in} phase). So it is clear that the proposal mechanism, as defined by $q$, is the Achilles heel of the standard MCMC approach. \subsection{Sampling with Problem-dependent Proposals} The insurmountable difficulties described in the previous section are rooted in the fact that standard MCMC calculations are essentially blind search algorithms supplemented with a few problem-specific features. To evade the fundamental problems of classical MCMC sampling, we shall therefore draw the consequence of the No-Free-Lunch theorem and propose a class of algorithms that explicitly incorporates information about the problem to be solved. To avoid a mismatch between the distribution $f({{\bf x}})$ to be sampled and the proposal distribution $q({\bf x}' | {\bf x})$, we shall in the following look into proposals $q$ that are approximations to $f$, in the sense that \begin{equation} q({\bf x}' | {\bf x}) = \tilde{f} ({{\bf x}'}) \end{equation} where the Kullback--Leibler divergence \begin{equation} D(f,\tilde{f}) = \int_{\cal X} f({{\bf x}}) \log \( \frac{f({{\bf x}})}{\tilde{f}({\bf x})} \) d{{\bf x}} \end{equation} is small. For a given inverse problem, the challenge is here to derive $\tilde{f}({{\bf x}})$.
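As a side note, the Kullback--Leibler criterion above can be checked numerically in one dimension by simple grid quadrature (a minimal sketch; function names are illustrative):

```python
import numpy as np

def kl_divergence(f, f_tilde, xs):
    """KL(f || f_tilde) on a grid xs via the trapezoidal rule.
    Both densities must be strictly positive on the grid."""
    ys = f(xs) * np.log(f(xs) / f_tilde(xs))
    return 0.5 * np.sum((ys[1:] + ys[:-1]) * np.diff(xs))
```

For two unit-variance Gaussians with means differing by $\Delta\mu$, this reproduces the analytical value $\Delta\mu^2/2$.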
We shall devote the next section to this problem, where the focus will be on the special conditions characterizing inverse problems. \subsection{Inverse Problems: proposing through approximate forwards} When applying an MCMC method to inverse problems, where $f({\bf x})$ is the posterior $\sigma({\bf x})$ or the likelihood $L({\bf x})$, the expected acceptance probability is given by (\ref{eq: Meanf1}). This expression is largest (equal to $1$) when $q({{\bf x}}|{{\bf x}}_0) = f({{\bf x}})$ for all ${\bf x}$, or, when $q$ is symmetric, $f({{\bf x}}) = q({{\bf x}}|{{\bf x}}_0)$ for all ${\bf x}_0$. In case $q({{\bf x}}|{{\bf x}}_0)$ deviates from $f({\bf x})$, there will be areas within ${\cal X}$ where $q({{\bf x}}|{{\bf x}}_0)<f({\bf x})$ as well as areas with $q({{\bf x}}|{{\bf x}}_0)>f({\bf x})$, because both $f$ and $q({{\bf x}}|{{\bf x}}_0)$ are positive with integral $1$ over ${\cal X}$. It is, however, the areas with $q({{\bf x}}|{{\bf x}}_0)<f({\bf x})$ that are critical, because they directly reduce the acceptance probability. In the following we exploit this fact to construct approximate likelihood functions or posterior distributions, allowing efficient sampling of solutions to inverse problems. \subsubsection{Approximate posteriors} Consider an inverse problem with observational data ${\bf d}_{obs}$ and computed data \begin{equation*} {{\bf d}} = {{\bf g}}({{\bf m}}) \, . \end{equation*} Here, ${{\bf g}}$ is an exact forward function. Suppose that we also have an approximate forward relation $\tilde{{\bf g}}$ that allows us to efficiently compute approximate data \begin{equation*} \tilde{{\bf d}} = \tilde{{\bf g}}({{\bf m}}) \, , \end{equation*} and that a (possibly simplified) prior probability density $\tilde{\rho}_m({{\bf m}})$ is available.
With this method in hand we can form an approximate posterior probability distribution \begin{equation} \tilde{\sigma}({{\bf m}}) = \tilde{L}({{\bf m}}) \tilde{\rho}_m({{\bf m}}) = \rho_d (\tilde{{\bf g}}({{\bf m}})) \tilde{\rho}_m({{\bf m}}) \end{equation} which can be used directly as a proposal distribution: \begin{equation} q({{\bf m}}'|{{\bf m}}) = \tilde{\sigma}({{\bf m}}') \, . \end{equation} Another approach is to use some `pseudo-inverse' function ${\bf h}$ (if available) on the observed data ${\bf d}_{obs}$, \begin{equation*} {{\bf m}}_{est} = {{\bf h}}({\bf d}_{obs}) \, , \end{equation*} and to use the image $\sigma_{pseud}$ of the data distribution $\rho_d$ under ${\bf h}$, \begin{equation} \sigma_{pseud}({{\bf m}}) = \frac{\partial{{\bf h}}}{\partial{{\bf d}}}\rho_d({{\bf d}}) \, , \end{equation} as a proposal distribution \begin{equation} q({{\bf m}}'|{{\bf m}}) = \sigma_{pseud}({{\bf m}}') \, . \end{equation} However, the fact that we can use the above approximations to the posterior as proposals does not mean that they are efficient. In the following sections we shall analyse some necessary conditions for efficiency, and give examples of implementations. The efficiency conditions will, of course, depend on the category of problems we face (linear, mildly non-linear, highly non-linear), so we will consider these situations in turn. \subsubsection{Linear, Gaussian Problems} Sampling of solutions to a linear Gaussian problem through rejection sampling is straightforward. We have an explicit expression for the Gaussian posterior, and the distribution itself can be used as an exact proposal.
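Drawing from this exact Gaussian proposal can be sketched as follows (a minimal illustration, assuming the posterior mean ${\bf m}_0$ and covariance ${\bf C}$ are known):

```python
import numpy as np

def sample_gaussian(m0, C, n, seed=0):
    """Draw n samples from N(m0, C) via m = m0 + A z, with A A^T = C."""
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(C)               # lower-triangular Cholesky factor
    z = rng.standard_normal((n, len(m0)))   # standard Gaussian samples
    return m0 + z @ A.T
```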
Samples ${{\bf z}}$ from an $N$-dimensional standard (isotropic) Gaussian (mean $\bf 0$ and covariance ${\bf I}$) can be generated with the Box--Muller method, and the desired samples ${{\bf m}}$ from an $N$-dimensional multivariate Gaussian with mean ${{\bf m}}_0$ and covariance ${\bf C}$ can be calculated as ${{\bf m}} ={{\bf m}}_0 + {{\bf A}}{{\bf z}}$, where ${{\bf A}}{{\bf A}}^T = {{\bf C}}$. The matrix ${{\bf A}}$ can be found by, for instance, Cholesky decomposition. \subsubsection{Nonlinear Problems} We shall now propose a procedure for efficient sampling of solutions to nonlinear problems with a homogeneous (constant) prior. \medskip \noindent Given an inverse problem \begin{equation*} {{\bf d}} = {{\bf g}}({{\bf m}}) \end{equation*} with homogeneous prior, observed data ${{\bf d}}_{obs}$, and data covariance matrix ${\bf C}_d$, the problem is characterized by its likelihood function $L({{\bf m}}) = {\cal N}_{{\bf d}_{obs}}({{\bf g}}({{\bf m}}),{{\bf C}}_d)$ and its prior $\rho({\bf m}) = \mu({\bf m})$, and will in the following be expressed through the notation \begin{equation} {\cal P} = \{ {\cal N}_{{\bf d}_{obs}}({{\bf g}}({{\bf m}}),{{\bf C}}_d), \mu({\bf m}) \} \, .
\end{equation} We shall furthermore consider an approximation to ${\cal P}$ \begin{equation} \tilde{{\cal P}} = \{ {\cal N}_{{\bf d}_{obs}}(\tilde{{\bf g}}({{\bf m}}),{{\bf C}}_d), \mu({\bf m}) \} \, . \end{equation} As a measure of the difference between the exact posterior and the approximate posterior in the joint $({\bf d},{\bf m})$-space, we shall use the Kullback-Leibler divergence of the conditional $\sigma({\bf d} |{\bf m})$: \begin{align} D(\sigma({\bf d} |{\bf m}),\tilde{\sigma}({\bf d} |{\bf m})) &= \int_{\cal X} \sigma({{\bf d}}|{{\bf m}}) \log \( \frac{\sigma({{\bf d}}|{{\bf m}})}{\tilde{\sigma}({{\bf d}}|{\bf m})} \) d{{\bf d}} \label{eq: Kull} \\ &= \int_{\cal X} {\cal N}_{{\bf d}}({{\bf g}}({{\bf m}}),{{\bf C}}_d) \log \( \frac{{\cal N}_{{\bf d}}({{\bf g}}({{\bf m}}),{{\bf C}}_d)}{{\cal N}_{{\bf d}}(\tilde{{\bf g}}({{\bf m}}),{{\bf C}}_d)} \) d{{\bf d}} \\ &= \frac{1}{2} \( ({{\bf g}}({{\bf m}})-\tilde{{\bf g}}({{\bf m}}))^T {{\bf C}}_d^{-1} ({{\bf g}}({{\bf m}})-\tilde{{\bf g}}({{\bf m}})) \) \\ &= \frac{1}{2} \| {{\bf g}}({{\bf m}})-\tilde{{\bf g}}({{\bf m}}) \|_{{{\bf C}}_d}^2 \end{align} which is the ${{\cal L}}_2$-distance with weight matrix ${{\bf C}}_d^{-1}$ between the approximate, computed data $\tilde{{\bf g}}({{\bf m}})$ and the exact, computed data ${{\bf g}}({{\bf m}})$. This shows that the amount of information missing in $\tilde{{\cal P}}$, compared to ${\cal P}$ (the Kullback-Leibler divergence), can be measured directly from the difference between ${{\bf g}}({{\bf m}})$ and $\tilde{{\bf g}}({{\bf m}})$. \medskip\noindent This suggests the following: \noindent \begin{Algorithm} Use ${\cal N}_{{\bf d}_{obs}}(\tilde{{\bf g}}({{\bf m}}),{{\bf C}}_d)$ as the proposal distribution in Algorithm (\ref{algo: ext-Metro}). 
\label{algo: ext-Metro-KBMC} \end{Algorithm} \subsubsection{Example: A 1000-parameter inverse scattering problem} Figure $\dots$ shows acoustic impedances of a 1000-parameter, horizontally stratified Earth model, and the vertical-incidence seismic data generated from the model. The data are 1D full-waveform seismic signals, generated and recorded at the surface. The full-waveform data contains all multiple reflections, transmission losses and damping effects, so the inverse problem of recovering the model from the data is highly nonlinear. A linearized approximate solution to the inverse scattering problem (in the absence of noise) is shown to the right. This solution requires very little computation time, but is clearly inaccurate (compare to the left graph). A more accurate solution, with uncertainty estimates, can be obtained by Markov chain Monte Carlo (MCMC) if enough samples can be obtained in reasonable time. Figure $\dots$ (upper curve) shows the lack of convergence to equilibrium of a classical MCMC in the first 2000 iterations of such an inversion process. The lower curve, however, shows the much faster convergence of Algorithm (\ref{algo: ext-Metro-KBMC}), whose random walk is guided by the linearized solution shown in Figure $\dots$. Algorithm (\ref{algo: ext-Metro-KBMC}) uses a forward modeling error distribution centered in the linearized solution to choose samples. The improvement in convergence time is dramatic, in this case of the order of $10^4$ when started at the same model as Algorithm (\ref{algo: ext-Metro}). 
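The core mechanism of Algorithm (\ref{algo: ext-Metro-KBMC}), using an approximate posterior as an independence proposal and correcting for the approximation with the Metropolis acceptance step, can be sketched in a toy one-dimensional setting (the target, the proposal, and all numerical values are illustrative assumptions, not taken from the seismic example):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_sigma(m):
    """Unnormalized log of the exact posterior (toy 1D target N(1.0, 0.5^2))."""
    return -0.5 * (m - 1.0) ** 2 / 0.5 ** 2

def log_q(m):
    """Log-density (up to a constant) of the approximate posterior N(0.8, 0.7^2)."""
    return -0.5 * (m - 0.8) ** 2 / 0.7 ** 2

def sample_q():
    return 0.8 + 0.7 * rng.standard_normal()

def independence_sampler(n_steps):
    m = sample_q()
    chain = []
    for _ in range(n_steps):
        m_prop = sample_q()
        # Acceptance ratio for an independence proposal q(m'|m) = q(m'):
        # alpha = [sigma(m') q(m)] / [sigma(m) q(m')]
        log_alpha = (log_sigma(m_prop) - log_sigma(m)) + (log_q(m) - log_q(m_prop))
        if np.log(rng.random()) < log_alpha:
            m = m_prop
        chain.append(m)
    return np.array(chain)

samples = independence_sampler(20_000)
print(samples.mean(), samples.std())  # close to the target mean 1.0 and std 0.5
```

The closer the proposal is to the exact posterior, the higher the acceptance rate; this is the one-dimensional analogue of guiding the chain with the linearized solution.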
\begin{figure} \centering \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\textwidth]{TrueModel-xx.png} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{dobs.png} \end{subfigure} ~ \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\textwidth]{ApproxModel.png} \end{subfigure} \caption{A 1000-layer, horizontally stratified Earth model (left), the corresponding vertical-incidence, full-waveform seismic data (middle), and a classical, linearized solution to the inverse problem (right).}\label{fig:animals} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=4in]{Exp-1-Iter-10000-TrueErrorEnvelope.png} \caption{Linear Inversion: Modelization Error: True Error Envelope} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=4in]{Exp-1-Iter-10000-EstErrorEnvelope.png} \caption{Linear Inversion: Modelization Error: Estimated Error Envelope} \end{center} \end{figure} \vfill \begin{figure}[h] \begin{center} \includegraphics[width=4in]{IMC-MCMC-iter-2000-Misfits-x.png} \caption{Convergence towards equilibrium of a classical MCMC algorithm (upper curve), attempting to sample solutions to the inverse problem described in Figure 5. The lower curve is the fast-converging KBMC algorithm guided by the linearized inversion. In this case the convergence of the KBMC algorithm was of the order of $10^4$ times faster than the MCMC algorithm.} \end{center} \end{figure} \vfill \medskip \noindent \subsubsection{Procedure 2: Sampling with General Prior} (\dots soon to appear\dots) \clearpage \newcommand{\marginlabel}[1] {\mbox{}\marginpar{\raggedleft\hspace{0pt}#1}}
\section{Introduction} Determinantal ideals and their generalizations have been explored extensively both in the context of commutative algebra and also in the study of Schubert varieties in flag varieties. This overlap is to be expected because, for example, \begin{itemize} \item each ideal generated by the $k\times k$ minors of a generic matrix is the defining ideal of an open patch of a Schubert variety in a Grassmannian; \item each one-sided ladder determinantal ideal is a Schubert determinantal ideal for a vexillary (i.e., 2143-avoiding) permutation (see e.g. \cite{KMY09}); \item each two-sided mixed ladder determinantal ideal is a type $A$ Kazhdan-Lusztig ideal (see e.g. \cite{EFRW}); \item each ideal generated by the $k\times k$ minors of a generic symmetric matrix is the defining ideal of an open patch of a Schubert variety in a Lagrangian Grassmannian; \item each defining ideal of a variety of complexes is a type $A$ Kazhdan-Lusztig ideal, up to some extra indeterminate generators (see e.g. \cite[Ch. 17]{MillerSturmfels}). \end{itemize} While similar results on the above-mentioned families of ideals appear in the Schubert variety and commutative algebra literatures, it is often different techniques that are used to obtain them. For example, in \cite{KMY09}, A. Knutson, E. Miller, and A. Yong introduced \emph{geometric vertex decomposition}, a degeneration technique, and used it to study the Gr\"obner geometry of Schubert determinantal ideals for vexillary permutations. See Section \ref{sect:GVD} for background on geometric vertex decomposition. Independently, \emph{liaison-theoretic} methods were used by E. Gorla in \cite{Gor07} and by E. Gorla, J. Migliore, and U. Nagel in \cite{GMN13} to obtain Gr\"obner bases for various classes of ladder determinantal ideals (including one-sided ladder determinantal ideals, also known as Schubert determinantal ideals for vexillary permutations). Roughly speaking, liaison is a theory that aims to transfer information from one subscheme of projective space to another in cases when their union is sufficiently nice. See Section \ref{sect:liaison} for background on liaison. 
In this paper, we establish an explicit connection between \emph{geometric vertex decomposition} and \emph{liaison}, and we study implications of this connection. We have three main goals, which we now outline. \subsubsection*{First goal.} The first goal of this paper is to show that it is no coincidence that geometric vertex decomposition and liaison can be used to obtain similar results for similar classes of ideals. Indeed, we prove the following explicit connection between the two techniques: \begin{mainthm}\label{thm:main1} Under mild hypotheses, every geometric vertex decomposition gives rise to an elementary $G$-biliaison of height $1$. Every sufficiently ``nice'' elementary $G$-biliaison of height $1$ gives rise to a geometric vertex decomposition. \end{mainthm} The first half of this theorem is stated precisely and proved as Corollary \ref{cor:gvdToLia}. The second half is stated precisely and proved as Theorem \ref{thm:linkToGVD}. \subsubsection*{Second goal.} The second motivation for our work comes from a long-standing open question in liaison theory, which asks whether subschemes of $\mathbb{P}^n$ are arithmetically Cohen--Macaulay if and only if they are in the \emph{Gorenstein liaison class of a complete intersection} (often referred to as \emph{glicci}, shorthand introduced in \cite{KM+01}). It is a standard homological argument that every glicci subscheme of $\mathbb{P}^n$ is arithmetically Cohen--Macaulay. Hence, the question may be phrased as follows: \begin{questn}\label{motivation2}\cite[Question 1.6]{KM+01} Is every arithmetically Cohen--Macaulay subscheme of $\mathbb{P}^n$ glicci? \end{questn} \noindent For more background on why this question emerges naturally from the history of liaison and for a summary of partial results already in the literature, see Section \ref{sect:liaison}. 
By combining our main theorem with some straightforward consequences of geometric vertex decomposition, we arrive at the following, which is stated precisely as Corollary \ref{cor:AutomaticallyGlicci}: \begin{coro}\label{cor:mainCor} Let $I$ be a homogeneous ideal in a polynomial ring. If the Lex-initial ideal of $I$ is the Stanley--Reisner ideal of a vertex decomposable simplicial complex and the vertex decomposition is compatible with the order of the variables, then $I$ is glicci. \end{coro} From this corollary, one can quickly deduce that certain well-known classes of varieties are glicci. We discuss three such classes in Section \ref{sect:applications}: matrix Schubert varieties, varieties of complexes, and varieties of graded lower bound cluster algebras. Using the first half of our main theorem, we recover a result of U. Nagel and T. R\"omer from \cite{NR08}, namely that the Stanley--Reisner ideal of a vertex decomposable simplicial complex is glicci. In fact, U. Nagel and T. R\"omer showed, more generally, that the Stanley--Reisner ideal of a \emph{weakly vertex decomposable} simplicial complex is glicci \cite[Theorem 3.3]{NR08}. Taking this as motivation, we define the class of \emph{weakly geometrically vertex decomposable} ideals (Definition \ref{def:weaklyGeometricallyVertexDec}), which includes both the geometrically vertex decomposable ideals and the Stanley--Reisner ideals of weakly vertex decomposable complexes. We show the following, labeled as Corollary \ref{weakGVDimpliesgliggi} in the main body of the paper: \begin{thm} Weakly geometrically vertex decomposable ideals are glicci. \end{thm} \subsubsection*{Third goal.} In \cite[Lemma 1.12]{GMN13}, it is shown that one can use liaison to compare Hilbert functions when the degrees of the isomorphisms of the $G$-biliaisons involved in an inductive argument are known. This approach is employed in many of the determinantal cases treated in the literature (\cite{Gor07, Gor08, GMN13, FK20}). 
It is worth noticing that the isomorphisms employed in these papers all have a similar form. We explain via Theorem \ref{thm:onestep} why this similarity is not a coincidence but, rather, is to be expected. In that theorem, we associate an explicit isomorphism of degree $1$ to a geometric vertex decomposition. In addition to the expository work of describing a unifying structure underlying examples already in the literature, Theorem \ref{thm:onestep} also provides a candidate isomorphism in the style of $G$-biliaison that, in good cases, allows one to use the framework of \cite{GMN13} to prove that a conjectured Gr\"obner basis is, indeed, a Gr\"obner basis. Some consequences of Theorem \ref{thm:onestep} on Gr\"obner bases and degenerations appear in Subsection \ref{sect:GBapplications}. \subsection*{The structure of the paper} In Section \ref{sect:GVD}, we review definitions and key lemmas from \cite{KMY09} on geometric vertex decomposition in the unmixed case and record some additional observations about the structure of a geometrically vertex decomposable ideal. In Section \ref{sect:liaison}, we briefly review background material on Gorenstein liaison. In Sections \ref{sect:GVDGlicci} and \ref{sect:glicciGVD}, we provide a proof of our main theorem (stated above), and related results and examples. In Section \ref{sect:applications}, we prove that certain well-known classes of combinatorially-defined ideals are glicci, via the material in Section \ref{sect:GVDGlicci}. Finally, we devote Section \ref{sect:nonpure} to the not necessarily unmixed case, which we relate to vertex decomposition in the not necessarily pure case. \subsection*{Notational conventions} Throughout the paper, we let $\kappa$ be a field, which can be chosen arbitrarily except in Sections \ref{sect:GVDGlicci}, \ref{sect:applications}, and \ref{sect:nonpure}, where we require that $\kappa$ be infinite. 
\subsection*{Acknowledgements} We thank Sergio Da Silva, Elisa Gorla, Kuei-Nuan Lin, Yi-Huang Shen, Adam Van Tuyl, and Anna Weigandt for helpful conversations. We are also grateful to the anonymous referee for a very careful reading of the paper and for helpful feedback. Part of this work was completed at the Banff International Research Station (BIRS) during the Women in Commutative Algebra workshop in October 2019. We are grateful for the hospitality of the Banff Centre. The second author was partially supported by NSERC grant RGPIN-2017-05732. \section{Geometric vertex decomposition}\label{sect:GVD} In this section we discuss geometric vertex decomposition, introduced by A. Knutson, E. Miller, and A. Yong in \cite{KMY09}. In the first subsection, we recall the basics of vertex decomposition of simplicial complexes and Stanley--Reisner ideals. In the second subsection, we move beyond the monomial-ideal case and recall the basics of \emph{geometric} vertex decomposition from \cite{KMY09}. In the third subsection, we define and study \emph{geometrically vertex decomposable ideals}. Although the material in this last subsection is not known to the authors to be explicitly in the literature, the results that appear will not be surprising to experts. \subsection{Vertex decomposition and Stanley--Reisner ideals}\label{sect:vertDecomp} Let $\Delta$ be a simplicial complex on vertex set $[n] = \{1,2,\dots, n\}$ (without an insistence that every $v\in [n]$ necessarily be a face of $\Delta$). 
Given a vertex $v\in \Delta$, define the following three subcomplexes: \begin{itemize} \item the \textbf{star} of $v$ is the set $\text{star}_\Delta(v):= \{F\in \Delta\mid F\cup \{v\}\in \Delta\}$; \item the \textbf{link} of $v$ is the set $\text{lk}_{\Delta}(v) := \{F\in \Delta\mid F\cup\{v\}\in \Delta, F\cap \{v\}=\emptyset\}$; \item the \textbf{deletion} of $v$ is the set $\text{del}_{\Delta}(v) := \{F\in \Delta\mid F\cap \{v\} = \emptyset\}.$ \end{itemize} Recall that the \textbf{cone} from $v$ on a simplicial complex $\Delta$ is the smallest simplicial complex that contains the set $\{F\cup \{v\}\mid F\in \Delta\}$. Then $\text{star}_\Delta(v)$ is the cone from $v$ on $\text{lk}_\Delta(v)$ and \begin{equation}\label{eq:vertexDecomp} \Delta = \text{star}_{\Delta}(v)\cup \text{del}_\Delta(v). \end{equation} The decomposition of $\Delta$ in \eqref{eq:vertexDecomp} is called a \textbf{vertex decomposition}. A simplicial complex is called \textbf{pure} if all of its facets (i.e., maximal faces) are of the same dimension. A simplicial complex $\Delta$ is \textbf{vertex decomposable} if it is pure and if $\Delta = \emptyset$, or $\Delta$ is a simplex, or there is a vertex $v\in \Delta$ such that $\text{lk}_\Delta(v)$ and $\text{del}_{\Delta}(v)$ are vertex decomposable. Given a simplicial complex $\Delta$ on vertex set $[n]$, one defines the \textbf{Stanley--Reisner ideal} $I_{\Delta}\subseteq \kappa[x_1,\dots, x_n]$ associated to $\Delta$ as $I_\Delta:= \langle \textbf{x}_F\mid F\subseteq [n], F\notin \Delta\rangle$, where $\textbf{x}_F := \prod_{i\in F}x_i$. The association $\Delta\mapsto I_\Delta$ determines a bijection between simplicial complexes on $[n]$ and squarefree monomial ideals in $\kappa[x_1,\dots, x_n]$. We write $\Delta(I)$ for the simplicial complex associated to a squarefree monomial ideal $I$. 
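The recursive definition of vertex decomposability translates directly into code. The sketch below represents a complex as the set of all of its faces (a representation chosen here for simplicity); the brute-force recursion is exponential and only intended for tiny examples:

```python
from itertools import combinations

def complex_from_facets(facets):
    """All faces of the simplicial complex generated by the given facets."""
    cx = set()
    for F in facets:
        for k in range(len(F) + 1):
            cx.update(frozenset(c) for c in combinations(sorted(F), k))
    return cx

def vertices(cx):
    return set().union(*cx) if cx else set()

def link(cx, v):
    return {F for F in cx if v not in F and F | {v} in cx}

def deletion(cx, v):
    return {F for F in cx if v not in F}

def is_pure(cx):
    facets = [F for F in cx if not any(F < G for G in cx)]
    return len({len(F) for F in facets}) <= 1

def is_simplex(cx):
    # a complex is a simplex exactly when its full vertex set is a face
    return bool(cx) and frozenset(vertices(cx)) in cx

def vertex_decomposable(cx):
    if not is_pure(cx):
        return False
    if not cx or is_simplex(cx):
        return True
    return any(vertex_decomposable(link(cx, v)) and
               vertex_decomposable(deletion(cx, v))
               for v in vertices(cx))

triangle_boundary = complex_from_facets([{1, 2}, {2, 3}, {1, 3}])
two_disjoint_edges = complex_from_facets([{1, 2}, {3, 4}])
print(vertex_decomposable(triangle_boundary),   # True
      vertex_decomposable(two_disjoint_edges))  # False
```

On the boundary of a triangle the check succeeds, while two disjoint edges fail: for every vertex $v$, the deletion has facets of different dimensions and so is not pure.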
Notice that if $\Delta = \Delta_1\cup \Delta_2$ is a union of simplicial complexes on $[n]$, then $F$ is a non-face of $\Delta$ if and only if it is a non-face of both $\Delta_1$ and $\Delta_2$. Thus, $I_\Delta = I_{\Delta_1}\cap I_{\Delta_2}$. In particular, if $v$ is a vertex of $\Delta$, we may decompose $\Delta$ as in \eqref{eq:vertexDecomp} to get \[ I_\Delta = I_{\text{star}_\Delta(v)}\cap I_{\text{del}_\Delta(v)}. \] The following is immediate from the definitions. We record it as a lemma for easy reference. \begin{lemma}\label{lem:starDelLink} Let $v\in [n]$ be a vertex of $\Delta$. Write $I_{\Delta} = \langle x_v^{d_i}q_i\mid 1\leq i\leq m\rangle$ where $q_i$ is a squarefree monomial that is not divisible by $x_v$ and $d_i = 0$ or $1$. Then \[ I_{\text{star}_\Delta(v)} = \langle q_i \mid 1\leq i\leq m\rangle,\quad I_{\text{lk}_\Delta(v)} = I_{\text{star}_\Delta(v)}+\langle x_v\rangle, \quad I_{\text{del}_\Delta(v)} = \langle q_i \mid d_i = 0\rangle+\langle x_v\rangle. \] \end{lemma} \subsection{Geometric vertex decomposition} In this subsection, we discuss \emph{geometric vertex decomposition}, introduced by A. Knutson, E. Miller, and A. Yong in \cite{KMY09}. Let $R = \kappa[x_1,\dots, x_n]$ be a polynomial ring in $n$ indeterminates and let $y = x_j$ for some $1 \leq j \leq n$. Define the \textbf{initial $y$-form} $\text{in}_y f$ of a polynomial $f\in R$ to be the sum of all terms of $f$ having the highest power of $y$. That is, if $f = \sum_{i = 0}^d \alpha_i y^i$, where each $\alpha_i\in \kappa[x_1,\dots,\widehat{y},\dots, x_n]$ and $\alpha_d\neq 0$, define $\text{in}_y f :=\alpha_d y^d$, which is usually not a monomial. 
Given an ideal $J\subseteq R$, define $\text{in}_y J$ to be the ideal generated by the initial $y$-forms of the elements of $J$, that is, $\text{in}_y J: = \langle \text{in}_y f \mid f \in J \rangle.$ We say that a monomial order $<$ on $R$ is \textbf{$y$-compatible} if it satisfies $\text{in}_< f = \text{in}_<(\text{in}_y f)$ for every $f\in R$. In this case, one has $\text{in}_<(\text{in}_y J) = \text{in}_< J$ for any ideal $J\subseteq R$. Let $I\subseteq R$ be an ideal and $<$ a $y$-compatible monomial order. With respect to $<$, let $\mathcal{G} = \{y^{d_i}q_i+r_i\mid 1\leq i\leq m\}$ be a Gr\"obner basis of $I$ where $y$ does not divide any $q_i$ and $\text{in}_{y}(y^{d_i}q_i+r_i) = y^{d_i}q_i$. One easily checks that the ideal $\text{in}_y I$ is generated by $\text{in}_y \mathcal{G}:= \{y^{d_i}q_i\mid 1\leq i\leq m\}$. That is, $\text{in}_{y} I = \langle y^{d_i}q_i\mid 1\leq i\leq m\rangle$. \begin{definition}\label{def:gvdKMS}\cite[Section 2.1]{KMY09} Define $C_{y,I} := \langle q_i\mid 1\leq i\leq m\rangle$ and $ N_{y,I} := \langle q_i\mid d_i = 0\rangle.$ When $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$, this decomposition is called a \textbf{geometric vertex decomposition of $I$ with respect to $y$}. \end{definition} The ideals $C_{y,I}$ and $N_{y,I}$ do not depend on the choice of Gr\"obner basis and, in particular, do not depend on the choice of $y$-compatible term order $<$. This follows from the facts that $C_{y,I} = (\text{in}_y I :y^\infty)$ by \cite[Theorem 2.1(d)]{KMY09} and that $N_{y,I}+\langle y\rangle = \text{in}_y I+\langle y\rangle$ by \cite[Theorem 2.1 (a)]{KMY09}, together with the observation that $y$ does not appear in the generators of $N_{y,I}$ given in its definition. We say that a geometric vertex decomposition is \textbf{degenerate} if $\sqrt{C_{y,I}} = \sqrt{N_{y,I}}$ or if $C_{y,I} = \langle 1 \rangle$ and \textbf{nondegenerate} otherwise. 
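As a small computational sketch, the generators of $\text{in}_y I$, $C_{y,I}$, and $N_{y,I}$ can be read off from a Gr\"obner basis computed under a lex order in which $y$ is the largest variable (such an order is $y$-compatible). This uses SymPy, and the example ideal $\langle ya-b\rangle$ is our own illustration; the code does not verify that the decomposition $\text{in}_y I = C_{y,I}\cap(N_{y,I}+\langle y\rangle)$ actually holds:

```python
from sympy import symbols, groebner, Poly

def gvd_ideals(gens, y, rest):
    """Generators of in_y I, C_{y,I}, and N_{y,I}, read off from a lex
    Groebner basis with y as the largest variable."""
    G = groebner(gens, y, *rest, order='lex').exprs
    iny, C, N = [], [], []
    for g in G:
        p = Poly(g, y)                 # view g as a polynomial in y alone
        d, q = p.degree(), p.LC()      # in_y g = q * y**d, with y not dividing q
        iny.append(q * y**d)
        C.append(q)
        if d == 0:
            N.append(q)
    return iny, C, N

y, a, b = symbols('y a b')
iny, C, N = gvd_ideals([y*a - b], y, [a, b])
print(iny, C, N)
```

For $I = \langle ya-b\rangle$ this yields $\text{in}_y I = \langle ya\rangle$, $C_{y,I} = \langle a\rangle$, and $N_{y,I} = \langle 0\rangle$, and indeed $\langle a\rangle\cap\langle y\rangle = \langle ya\rangle$ is a geometric vertex decomposition.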
As we will see through Lemma \ref{lem:form}, if $C_{y,I} = \langle 1 \rangle$, then some polynomial whose initial $y$-form is a unit multiple of $y$ is an element of $I$, in which case $R/I \cong R/(N_{y,I}+\langle y \rangle)$. If $\sqrt{C_{y,I}} = \sqrt{N_{y,I}}$, then $\sqrt{\text{in}_y I} = \sqrt{C_{y,I}} \cap \sqrt{N_{y,I}+\langle y \rangle} = \sqrt{C_{y,I}}$, in which case $\text{in}_y I$, $C_{y,I}$, and $N_{y,I}$ all determine the same variety. In both of these cases, we may often prefer to study $N_{y,I}$ in the smaller polynomial ring that omits $y$. This is especially true when $I$ is radical for the following reason: \begin{proposition}\label{prop:degen-rad} If $I$ is radical and has a degenerate geometric vertex decomposition $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ with $\sqrt{N_{y,I}} = \sqrt{C_{y,I}}$, then the reduced Gr\"obner basis of $I$ does not involve $y$ and $I = \text{in}_y I = C_{y,I} = N_{y,I}$. \end{proposition} \begin{proof} Throughout this argument, we will refer to the reduced Gr\"obner basis of $I$ as the Gr\"obner generators of $I$ and the generators of $N_{y,I}$ obtained in Definition \ref{def:gvdKMS} from the reduced Gr\"obner basis of $I$ as the Gr\"obner generators of $N_{y,I}$. We claim first that $N_{y,I}$ must also be radical. Fix some $g^t \in N_{y,I}$ for $t \geq 1$. Because $N_{y,I}$ has a generating set that does not involve $y$, we may assume without loss of generality that $g$ does not involve $y$. Because $g^t \in N_{y,I} \subseteq I = \sqrt{I}$, we also have $g \in I$, and so $g$ must have a Gr\"obner reduction by elements of the reduced Gr\"obner basis of $I$. Because $g$ does not involve $y$, this reduction must use only those Gr\"obner generators that do not involve $y$, which are exactly the Gr\"obner generators of $N_{y,I}$, and so $g \in N_{y,I}$. Hence, $\sqrt{C_{y,I}} = \sqrt{N_{y,I}} = N_{y,I} \subseteq C_{y,I}$, and so $N_{y,I} = C_{y,I}$. 
Suppose now that the reduced Gr\"obner basis of $I$ has some element of the form $y^dq+r$ for $d>0$. Then $q \in C_{y,I} = N_{y,I}$, and so the lead term of $q$ must be divisible by the lead term of one of the Gr\"obner generators of $N_{y,I}$. But any such generator is also an element of the reduced Gr\"obner basis of $I$ and so cannot divide the lead term of $q$ since then it would divide the lead term of $y^dq+r$. Hence, the reduced Gr\"obner basis of $I$ has no term involving $y$, from which it follows that $I = \text{in}_y I = C_{y,I} = N_{y,I}$. \end{proof} \begin{remark}\label{rem:GVDsimplicial} Let $\Delta$ be a simplicial complex on vertex set $[n]$, and let $v$ be a vertex of $\Delta$. The geometric vertex decomposition of $I_\Delta\subseteq R$ with respect to variable $x_v$ agrees with the decomposition \[ I_{\Delta} = I_{\text{star}_\Delta(v)}\cap I_{\text{del}_\Delta(v)}. \] Indeed, $\text{in}_{x_v}I_\Delta = I_\Delta$, $I_{\text{star}_\Delta(v)} = C_{x_v,I_\Delta}$, and $I_{\text{del}_\Delta(v)} = N_{x_v,I_{\Delta}}+\langle x_v\rangle$ (see the end of Section \ref{sect:vertDecomp}). Observe that since $v\in \Delta$, we have $C_{x_v,I_\Delta} \neq \langle 1\rangle$. Thus, the geometric vertex decomposition is degenerate if and only if $\Delta$ is a cone from $v$ on $\text{lk}_\Delta(v)$. \end{remark} If an ideal $I\subseteq R$ has a generating set $\mathcal{G}$ in which $y^2$ does not divide any term of $g$ for any $g \in \mathcal{G}$, then we say that $I$ is \textbf{squarefree in $y$}. It is easy to see (for example, by considering $S$-pair reductions) that every ideal that is squarefree in $y$ has a Gr\"obner basis, with respect to any $y$-compatible term order, such that $y^2$ does not divide any term of any element of the Gr\"obner basis. 
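The correspondence of Remark \ref{rem:GVDsimplicial} can be verified on a small example. Below, the complex is the boundary of a triangle (our own choice of example), squarefree monomial generators are encoded as the sets of variables dividing them, and for this example the listed generators of the star and deletion ideals happen to be minimal:

```python
from itertools import combinations

def complex_from_facets(facets):
    cx = set()
    for F in facets:
        for k in range(len(F) + 1):
            cx.update(frozenset(c) for c in combinations(sorted(F), k))
    return cx

def minimal_nonfaces(cx, V):
    """Minimal subsets of the vertex set V that are not faces of cx
    (the monomial generators of the Stanley--Reisner ideal)."""
    non = [frozenset(S) for k in range(len(V) + 1)
           for S in combinations(sorted(V), k) if frozenset(S) not in cx]
    return {S for S in non if not any(T < S for T in non)}

V = {1, 2, 3}
cx = complex_from_facets([{1, 2}, {2, 3}, {1, 3}])  # boundary of a triangle
v = 1

star = {F for F in cx if F | {v} in cx}             # star of v
dele = {F for F in cx if v not in F}                # deletion of v

gens = minimal_nonfaces(cx, V)                      # generators x_v^{d_i} q_i
C = {g - {v} for g in gens}                         # generators of C = <q_i>
N = {g for g in gens if v not in g}                 # generators of N = <q_i : d_i = 0>

print(minimal_nonfaces(star, V) == C)                     # I_star equals C
print(minimal_nonfaces(dele, V) == N | {frozenset({v})})  # I_del equals N + <x_v>
```

Here $I_\Delta = \langle x_1x_2x_3\rangle$, $C_{x_1,I_\Delta} = \langle x_2x_3\rangle = I_{\text{star}_\Delta(1)}$, and $N_{x_1,I_\Delta}+\langle x_1\rangle = \langle x_1\rangle = I_{\text{del}_\Delta(1)}$.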
\begin{lemma}\label{lem:form} If $I \subseteq R$ possesses a geometric vertex decomposition with respect to a variable $y = x_j$ of $R$, then $I$ is squarefree in $y$, and the reduced Gr\"obner basis of $I$ with respect to any $y$-compatible term order has the form $\{yq_1+r_1,\dots, yq_k+r_k,h_1,\dots, h_\ell\}$ where $y$ does not divide any term of any $q_i$ or $r_i$ for any $ 1 \leq i \leq k$ nor any $h_j$ for any $1 \leq j \leq \ell$. \end{lemma} \begin{proof} Fix a $y$-compatible term order $<$, and let $\mathcal{G} = \{y^{d_1}q_1+r_1,\dots, y^{d_m}q_m+r_m\}$ be the reduced Gr\"obner basis of $I$ with respect to $<$, where $y^{d_i}q_i = \text{in}_y(y^{d_i}q_i+r_i)$ and $y$ does not divide any term of $q_i$ for any $1 \leq i \leq m$. Observe that $yq_i\in C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ for each $1 \leq i \leq m$, so $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ implies $yq_i\in \text{in}_y I$ for each $1 \leq i \leq m$. Hence, we may assume $d_i \leq 1$ for each $1 \leq i \leq m$. The remaining statements now follow easily. \end{proof} \subsection{Geometrically vertex decomposable ideals} A geometric vertex decomposition of an ideal is analogous to a vertex decomposition of a simplicial complex into a deletion and star (see Remark \ref{rem:GVDsimplicial}). In this subsection, we extend this analogy by considering \emph{geometrically vertex decomposable ideals}, which are analogous to vertex decomposable simplicial complexes. We again let $R = \kappa[x_1,\dots, x_n]$ throughout this subsection. Recall that an ideal $I\subseteq R$ is \textbf{unmixed} if $\dim(R/P) = \dim(R/I)$ for all $P \in \text{Ass}(I)$. 
\begin{definition}\label{def:geometricallyVertexDec} An ideal $I\subseteq R$ is \textbf{geometrically vertex decomposable} if $I$ is unmixed and if \begin{enumerate} \item $I = \langle 1\rangle$ or $I$ is generated by indeterminates in $R$, or \item for some variable $y = x_j$ of $R$, $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y\rangle)$ is a geometric vertex decomposition and the contractions of $N_{y,I}$ and $C_{y,I}$ to $\kappa[x_1,\dots,\widehat{y},\dots, x_n]$ are geometrically vertex decomposable. \qedhere \end{enumerate} \end{definition} We take case $(1)$ to include the zero ideal, whose (empty) generating set vacuously consists only of indeterminates. We will soon need observations about the relative heights of the ideals $I$, $C_{y,I}$, and $N_{y,I} $ in the circumstances of condition $(2)$. The degenerate cases are clear: if $C_{y,I} = \langle 1 \rangle$, then $\mbox{ht}(I) = \mbox{ht}(N_{y,I} )+1$ and, if $\sqrt{C_{y,I}} = \sqrt{N_{y,I}}$, then $\mbox{ht}(I) = \mbox{ht}(\text{in}_y I) = \mbox{ht}(C_{y,I})=\mbox{ht}(N_{y,I} )$. The nondegenerate case is handled by the lemma below. We say that the ring $R/I$ is \emph{equidimensional} if $\dim(R/P) = \dim(R/I)$ for all minimal primes $P$ of $I$ or, equivalently, if all irreducible components of the variety of $I$ have the same dimension. Equidimensionality does not preclude the possibility that $I$ might have embedded primes and so is weaker than unmixedness. \begin{lemma}\label{lem:height} If $I\subseteq R$ is an ideal so that $R/I$ is equidimensional and $\text{in}_y I = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$ is a nondegenerate geometric vertex decomposition with respect to some variable $y = x_j$ of $R$, then $\mbox{ht}(C_{y,I}) = \mbox{ht}(I) = \mbox{ht}(N_{y,I} )+1$. Moreover, $R/C_{y,I}$ is equidimensional. 
\end{lemma} \begin{proof} By Lemma \ref{lem:form}, $I$ has a reduced Gr\"obner basis of the form $\{yq_1+r_1, \ldots, yq_k+r_k, h_1, \ldots, h_\ell\}$ where $y$ does not divide any term of any $q_i$ or $r_i$, $1 \leq i \leq k$, nor any $h_j$, $1 \leq j \leq \ell$. Let $\tilde{I}\subseteq R[t]$ be the ideal $\tilde{I} = \langle yq_1+tr_1, \ldots, yq_k+tr_k, h_1, \ldots, h_\ell\rangle$. Using \cite[Theorem 15.17]{Eis95}, $R[t]/\tilde{I}\otimes_{\kappa[t]}\kappa[t,t^{-1}]\cong (R/I)[t,t^{-1}]$. Clearly, $R/I$ equidimensional implies $(R/I)[t,t^{-1}]$ equidimensional, and so $R[t]/\tilde{I}\otimes_{\kappa[t]}\kappa[t,t^{-1}]$ is equidimensional. By \cite[Theorem 15.17]{Eis95}, $R[t]/\tilde{I}$ is flat as a $\kappa[t]$-module. By this flatness, $t$ is not a zero-divisor on $R[t]/\tilde{I}$, and so no minimal prime of $R[t]/\tilde{I}$ contains $t$. Then because the primes of $R[t]/\tilde{I}\otimes_{\kappa[t]}\kappa[t,t^{-1}]$ are in correspondence with the primes of $R[t]/\tilde{I}$ that do not contain $t$, $R[t]/\tilde{I}$ is equidimensional as well. Finally, because $R[t]/\langle \tilde{I}, t \rangle \cong R/\text{in}_y I$, it suffices to note that every minimal prime over $\langle t \rangle$ in $R[t]/ \tilde{I}$ has height exactly one as a consequence of Krull's principal ideal theorem and the fact that $t$ is not a zero-divisor on $R[t]/ \tilde{I}$. Hence, $R/\text{in}_y I$ is equidimensional. Because $\text{in}_y I = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$, each minimal prime of $\text{in}_y I$ is either a minimal prime of $C_{y,I}$ or of $N_{y,I} + \langle y \rangle$. Conversely, each minimal prime of $N_{y,I}+\langle y \rangle$ either is a minimal prime of $\text{in}_y I$ or contains some minimal prime of $C_{y,I}$. (Because $y \notin C_{y,I}$, no minimal prime of $C_{y,I}$ can contain any minimal prime of $N_{y,I}+\langle y \rangle$.) 
Hence, because $\mbox{ht}(\text{in}_y I )= \mbox{ht} (I)$, we will have $ \mbox{ht}(C_{y,I}) = \mbox{ht}(I) = \mbox{ht}(N_{y,I} )+1$ so long as some minimal prime of $N_{y,I}+\langle y \rangle$ does not contain a minimal prime of $C_{y,I}$, i.e., so long as $\sqrt{C_{y,I}} \not\subseteq \sqrt{N_{y,I}+\langle y \rangle} $. Because $C_{y,I}$ and $N_{y,I}$ have generating sets that do not involve $y$, we cannot have $\sqrt{C_{y,I}} \subseteq \sqrt{N_{y,I}+\langle y \rangle}$ unless $\sqrt{C_{y,I}} \subseteq \sqrt{N_{y,I}}$. But $N_{y,I} \subseteq C_{y,I}$, so $\sqrt{C_{y,I}} \subseteq \sqrt{N_{y,I}}$ would imply $\sqrt{C_{y,I}} = \sqrt{N_{y,I}}$, contradicting the assumption of nondegeneracy. Finally, because every minimal prime of $\sqrt{C_{y,I}}$ is a minimal prime of $\text{in}_y I$, equidimensionality of $R/C_{y,I}$ follows from equidimensionality of $R/\text{in}_y I$. \end{proof} As noted above, the definition of a geometrically vertex decomposable ideal is analogous to the definition of a vertex decomposable simplicial complex. In particular, we have the following proposition, whose proof we leave as an exercise: \begin{proposition}\label{prop:VDiffGVD} Let $\Delta$ be a simplicial complex on vertex set $[n]$. Its Stanley--Reisner ideal $I_\Delta \subseteq R$ is geometrically vertex decomposable if and only if $\Delta$ is vertex decomposable. \end{proposition} \begin{comment} \begin{proposition} Let $\Delta$ be a vertex decomposable simplicial complex on vertex set $[n]$. Its Stanley--Reisner ideal $I_\Delta\subseteq R$ is geometrically vertex decomposable. \end{proposition} \begin{proof} We proceed by induction on the number of vertices $m$ in $\Delta$. When $m = 0, 1$, the result is clear. So suppose $2\leq m\leq n$ is arbitrary. If $\Delta$ is a simplex, there is nothing to show. Otherwise, there is some $v\in \Delta$ such that $\text{lk}_\Delta(v)$ and $\text{del}_\Delta(v)$ are vertex decomposable. 
As discussed in Remark \ref{rem:GVDsimplicial}, the decomposition $I_\Delta = I_{\text{star}_\Delta(v)}\cap I_{\text{del}_\Delta(v)}$ is a geometric vertex decomposition (where $I_{\text{star}_\Delta(v)} = C_{x_v,I_\Delta}$, and $I_{\text{del}_\Delta(v)} = N_{x_v,I_{\Delta}}+\langle x_v\rangle$). Furthermore, thinking of $\text{lk}_{\Delta}(v)$ and $\text{del}_\Delta(v)$ as simplicial complexes on vertex set $\{1,\dots, \widehat{v},\dots, n\}$, the Stanley--Reisner ideals of $\text{lk}_{\Delta}(v)$ and $\text{del}_\Delta(v)$ are the contractions of $C_{x_v, I_\Delta}$ and $N_{x_v, I_\Delta}$ to the ring $\kappa[x_1,\dots,\widehat{x}_v,\dots, x_n]$. Hence, by induction, these contracted ideals are geometrically vertex decomposable. It follows that $I$ is geometrically vertex decomposable. \end{proof} \end{comment} In the remainder of this section, we discuss some properties of geometrically vertex decomposable ideals and further connections to vertex decomposable simplicial complexes. \begin{proposition}\label{prop:radical} A geometrically vertex decomposable ideal is radical. \end{proposition} \begin{proof} Let $I \subseteq R$ be a geometrically vertex decomposable ideal. We proceed by induction on $n = \dim(R)$. We note first that if $I = \langle 0\rangle$, $I = \langle 1\rangle$, or $I$ is generated by indeterminates, the result is immediate. Otherwise, there exists some variable $y = x_j$ such that $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y\rangle)$ is a geometric vertex decomposition and the contractions of $C_{y,I}$ and $N_{y,I}$ to $\kappa[x_1,\dots, \widehat{y},\dots, x_n]$ are geometrically vertex decomposable. These contracted ideals are radical by induction, thus so are $C_{y,I}$ and $N_{y,I}$. Hence, $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y\rangle)$ is radical. Finally, because $\text{in}_y I$ is radical, $I$ must also be radical. 
\end{proof} We remark briefly that Proposition \ref{prop:radical} does not require the unmixedness assumptions on $I$, $C_{y,I}$, or $N_{y,I}$. We next consider geometrically vertex decomposable ideals that have a certain compatibility with a given lexicographic monomial order. The main result in our discussion of these ideals is Proposition \ref{prop:simplicial}, which we will need in Section \ref{sect:applications} on applications. \begin{definition} Fix a lexicographic monomial order $<$ on $R$. We say that an ideal $I\subseteq R$ is \textbf{$<$-compatibly geometrically vertex decomposable} if $I$ satisfies Definition \ref{def:geometricallyVertexDec} upon replacing item (2) with \begin{enumerate} \item[(2*)] for the $<$-largest variable $y$ in $R$, $\text{in}_{y} I = C_{y,I} \cap (N_{y,I}+\langle y\rangle)$ is a geometric vertex decomposition and the contractions of $N_{y,I}$ and $C_{y,I}$ to $\kappa[x_1,\dots,\widehat{y},\dots, x_n]$ are $<$-compatibly geometrically vertex decomposable for the naturally induced monomial order on $\kappa[x_1,\dots, \widehat{y},\dots, x_n]$ (which we also call $<$).\qedhere \end{enumerate} \end{definition} Let $\Delta$ be a simplicial complex on a vertex set $[n]$, and let $<$ be a total order on $[n]$. We say that a simplicial complex $\Delta$ is \textbf{$<$-compatibly vertex decomposable} if either $\Delta = \emptyset$ or $\Delta$ is a simplex or, for the $<$-largest vertex $v\in \Delta$, $\text{del}_{\Delta}(v)$ and $\text{lk}_{\Delta}(v)$ are $<$-compatibly vertex decomposable. The following is an easy consequence of \cite[Theorem 2.1]{KMY09}. \begin{lemma}\label{lem:compatible} Suppose that $I\subseteq R$ is squarefree in $y = x_j$, and suppose that $<$ is a $y$-compatible monomial order on $R$. Then $\text{in}_< I = \text{in}_< C_{y,I}\cap (\text{in}_< N_{y, I}+\langle y\rangle)$. \end{lemma} \begin{proof} Since $I$ is squarefree in $y$, $I$ has a Gr\"obner basis $\{yq_1+r_1,\dots, yq_k+r_k,h_1,\dots, h_\ell\}$ where $y$ does not divide any term of $q_i$, $r_i$, $1 \leq i \leq k$, nor $h_j$, $1 \leq j \leq \ell$. Let $m_i = \text{in}_<q_i$, $1\leq i\leq k$, and $m_{k+i} = \text{in}_< h_{i}$, $1\leq i\leq \ell$.
By \cite[Theorem 2.1(a)]{KMY09}, $\{q_1,\dots, q_{k}, h_1,\dots, h_{\ell}\}$ and $\{h_1,\dots, h_\ell\}$ are Gr\"obner bases for $C_{y,I}$ and $N_{y,I}$ respectively, so $\text{in}_< C_{y,I} = \langle m_i\mid 1\leq i\leq {k+\ell}\rangle$ and $\text{in}_< N_{y,I} = \langle m_{k+i}\mid 1\leq i\leq \ell\rangle$. It is then straightforward to check \begin{equation}\label{eq:initialDecomp} \text{in}_< I = \langle ym_1,\dots, ym_k, m_{k+1},\dots, m_{k+\ell}\rangle = \text{in}_< C_{y,I}\cap (\text{in}_< N_{y,I}+\langle y\rangle). \qedhere \end{equation} \end{proof} We are now ready to prove the main result of our discussion on $<$-compatibly geometrically vertex decomposable ideals. \begin{proposition}\label{prop:simplicial} An ideal $I\subseteq R$ is $<$-compatibly geometrically vertex decomposable for the lexicographic monomial order $x_1>x_2>\cdots>x_n$ if and only if $\text{in}_< I$ is the Stanley--Reisner ideal of a $<$-compatibly vertex decomposable simplicial complex on $[n]$ for the vertex order $1>2>\cdots>n$. \end{proposition} \begin{proof} If $I = \langle 1\rangle$ or $\langle 0\rangle$, there is nothing to show. So suppose that $I$ is nontrivial and proceed by induction on $n = \text{dim}(R)$. The base case $n = 1$ is straightforward. Suppose that $n\geq 2$ is arbitrary. First assume that $I$ is $<$-compatibly geometrically vertex decomposable and let $y = x_1$. If $I$ is generated by indeterminates, there is nothing to show. Otherwise, we have that $\text{in}_{y} I = C_{y, I}\cap (N_{y, I}+\langle y\rangle)$ is a geometric vertex decomposition, and the contractions $N^c$ and $C^c$ of $N_{y, I}$ and $C_{y, I}$ to $\kappa[x_2,\dots, x_n]$ are $<$-compatibly geometrically vertex decomposable. There are two cases. The first case is when $C_{y,I} = \langle 1 \rangle$, which implies that $\text{in}_< I = \text{in}_<N_{y,I}+\langle y\rangle$. By induction, $\text{in}_<N^c$ is the Stanley--Reisner ideal of a $<$-compatibly vertex decomposable simplicial complex. 
Thus, $\text{in}_<I$, which is equal to $\text{in}_<N_{y,I}+\langle y\rangle$, is too. Indeed, the complexes $\Delta(\text{in}_<N^c)$ and $\Delta(\text{in}_<N_{y,I}+\langle y\rangle)$ are the same (though on different ambient vertex sets). Now assume $C_{y, I} \neq \langle 1 \rangle$. By induction, $\text{in}_< N^c$ and $\text{in}_< C^c$ are the Stanley--Reisner ideals of $<$-compatibly vertex decomposable simplicial complexes, thus so are $\text{in}_< N_{y, I}+\langle y\rangle$ and $\text{in}_< C_{y, I}+ \langle y\rangle$. By Lemma \ref{lem:compatible}, we have \begin{equation}\label{eq:squarefreeDecompCN} \text{in}_< I = \text{in}_<C_{y,I}\cap(\text{in}_<N_{y,I}+\langle y\rangle). \end{equation} Thus, $\text{in}_< I$ is a squarefree monomial ideal. Let $\Delta := \Delta(\text{in}_<I)$. Equation \eqref{eq:initialDecomp} and Lemma \ref{lem:starDelLink} imply that $\text{in}_<C_{y,I}$ and $\text{in}_<N_{y,I}+\langle y\rangle$ are the Stanley--Reisner ideals of $\text{star}_\Delta(1)$ and $\text{del}_\Delta(1)$. Thus $\text{in}_<C_{y,I}+\langle y\rangle$ and $\text{in}_<N_{y,I}+\langle y\rangle$ are the Stanley--Reisner ideals of $\text{lk}_\Delta(1)$ and $\text{del}_\Delta(1)$. Hence $\text{lk}_\Delta(1)$ and $\text{del}_\Delta(1)$ are $<$-compatibly vertex decomposable. Thus, $\Delta$ is too. For the converse, assume that $\Delta = \Delta(\text{in}_< I)$ is $<$-compatibly vertex decomposable. Since $y = x_1$ is $<$-largest, and $\text{in}_< I$ is a squarefree monomial ideal by assumption, the reduced Gr\"obner basis of $I$ with respect to $<$ is squarefree in $y$, and so has the form $\mathcal{G} = \{yq_1+r_1,\dots, yq_k+r_k,h_1,\dots, h_\ell\}$, with $y$ not dividing any term of any $q_i, r_i, h_j$, $1\leq i\leq k$, $1\leq j\leq \ell$. So, by \cite[Theorem 2.1 (b)]{KMY09}, $\text{in}_{y}I = C_{y,I}\cap (N_{y,I}+\langle y\rangle)$, is a geometric vertex decomposition. Again, we have two cases. 
If $C_{y,I} = \langle 1 \rangle$, then $\text{in}_<I = \text{in}_< N_{y,I}+\langle y\rangle$. The complex $\Delta(\text{in}_<N_{y,I}+\langle y\rangle)$ is the same as $\Delta(\text{in}_<N^c)$. Thus, $\Delta(\text{in}_<N^c)$ is $<$-compatibly vertex decomposable. So, by induction, $N^c$ is $<$-compatibly geometrically vertex decomposable. Hence, so too is $I$. If $C_{y,I} \neq \langle 1 \rangle$, then we have that $1$ is a vertex of $\Delta$ and, as discussed above, $\text{in}_<C_{y,I}+\langle y\rangle$ and $\text{in}_<N_{y,I}+\langle y\rangle$ are the Stanley--Reisner ideals of $\text{lk}_\Delta(1)$ and $\text{del}_\Delta(1)$. Since $\Delta$ is $<$-compatibly vertex decomposable, so are $\text{lk}_\Delta(1)$ and $\text{del}_\Delta(1)$. Thinking of these complexes as complexes on vertex set $\{2,3,\dots, n\}$, their Stanley--Reisner ideals are $\text{in}_<C^c$ and $\text{in}_<N^c$. So, by induction, $C^c$ and $N^c$ are $<$-compatibly geometrically vertex decomposable. Hence, so too is $I$. \end{proof} We end this section with an example showing that there exist geometrically vertex decomposable ideals that are \emph{not} $<$-compatibly geometrically vertex decomposable for any lexicographic monomial order $<$. \begin{example}\label{ex:nolex} Let $I = \langle y(zs-x^2), ywr, wr(z^2+zx+wr+s^2) \rangle \subseteq \kappa[x,y,z,w,r,s]$. Observe that $I$ is squarefree in $y$, and we have a geometric vertex decomposition with $C_{y,I} = \langle zs-x^2,wr \rangle$ and $N_{y,I} = \langle (wr)(zx+s^2+z^2+wr) \rangle$. Furthermore, the contractions of $C_{y,I}$ and $N_{y,I}$ to $\kappa[x,z,w,r,s]$ are geometrically vertex decomposable. (To see this, let $C^c$ and $N^c$ denote these contracted ideals. Then $C^c$ and $N^c$ are squarefree in $s$ and $x$, respectively, and $\text{in}_s C^c = \langle zs, wr\rangle$ and $\text{in}_x N^c = \langle wrzx\rangle$.) Hence $I$ is geometrically vertex decomposable.
Next, we will observe that $I$ has no squarefree initial ideals, hence cannot be $<$-compatibly geometrically vertex decomposable for any order $<$ by Proposition \ref{prop:simplicial}. To prove this, we first note that the given generating set $\{g_1 := y(zs-x^2), g_2 := ywr, g_3 := wr(z^2+zx+wr+s^2)\}$ of $I$ is a universal Gr\"obner basis. Indeed, fix an arbitrary monomial order, and observe that each $S$-polynomial $S(g_i,g_j)$, $i\neq j$, is divisible by $g_2 = ywr$ and thus reduces to $0$ under division by $\{g_1,g_2,g_3\}$. Consequently, if $I$ has a squarefree initial ideal, there exists a monomial order $<$ such that $\langle \text{in}_< g_1, \text{in}_< g_2, \text{in}_< g_3\rangle$ is a squarefree monomial ideal. Noting that none of the monomials of $g_i$ are divisible by any of the monomials in $g_j$, it follows that $\text{in}_< g_1, \text{in}_<g_2, \text{in}_< g_3$ are minimal generators for $\langle \text{in}_< g_1, \text{in}_<g_2, \text{in}_< g_3\rangle$. Hence, the only way for $\langle \text{in}_< g_1, \text{in}_<g_2, \text{in}_< g_3\rangle$ to be a squarefree monomial ideal is if each of $ \text{in}_< g_1, \text{in}_<g_2, \text{in}_< g_3$ is a squarefree monomial, which would force (i) $\text{in}_< (zs-x^2) = zs$ and (ii) $\text{in}_< (z^2+zx+wr+s^2) = zx$. So, suppose there is some monomial order $<$ that satisfies (i) and (ii). Then we have $zx > z^2$ (by (ii)) and hence $x>z$. We also have $zs>x^2$ (by (i)). So, since $x>z$, we have $zs>x^2>zx$ and so $s>x$. Finally, $zx>s^2$ (by (ii)) together with $s>x$ implies that $zx>s^2>x^2$. Hence $z>x$, which is impossible as we have already concluded that $x>z$. Thus no monomial order $<$ that satisfies both (i) and (ii) exists, and there is no squarefree initial ideal of $I$. \end{example} \section{Background on Liaison}\label{sect:liaison} In this section, we recall some background on liaison theory. The first subsection concerns terminology and results relevant to our work. 
The second subsection provides further context for our second goal from the introduction. \subsection{Liaison theory basics} Here we review standard definitions and lemmas on Gorenstein liaison theory that we will need in this paper. For a more thorough introduction, see \cite{MN01}. We follow definitions and some notation from \cite{GMN13}, which provides a careful discussion of how liaison theory can be used to make inferences about Gr\"obner bases. Throughout this subsection, we let $R = \kappa[x_0, x_1,\dots, x_n]$ with the standard grading. \begin{definition} Let $V_1, V_2, X \subseteq \mathbb{P}^{n}$ be subschemes defined by saturated ideals $I_{V_1}$, $I_{V_2}$, and $I_{X}$ of $R$, respectively, and assume that $X$ is arithmetically Gorenstein. If $I_X \subseteq I_{V_1} \cap I_{V_2}$ and if $[I_X:I_{V_1}] = I_{V_2}$ and $[I_X: I_{V_2}] = I_{V_1}$, then $V_1$ and $V_2$ are \textbf{directly algebraically $G$-linked} by $X$, and we write $I_{V_1} \sim I_{V_2}$. \end{definition} One may generate an equivalence relation using these direct links. \begin{definition} If there is a sequence of links $V_1 \sim \cdots \sim V_k$ for some $k \geq 2$, then we say that $V_1$ and $V_k$ are in the same \textbf{$G$-liaison class (or Gorenstein liaison class)} and that they are \textbf{$G$-linked} in $k-1$ steps. Of particular interest is the case in which $V_k$ is a complete intersection, in which case we say that $V_1$ is \textbf{in the Gorenstein liaison class of a complete intersection (abbreviated glicci)}. \end{definition} We will say that a homogeneous, saturated, unmixed ideal of $R$ is glicci if it defines a glicci subscheme of $\mathbb{P}^{n}$. It is because liaison was developed to study subschemes of projective space that the restriction to homogeneous, saturated ideals is natural. Throughout this paper, we will be interested in $G$-links coming from \emph{elementary $G$-biliaisons}. 
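Before turning to biliaisons, a minimal example of a direct link may be helpful; the example below is our own illustration and is not drawn from the cited sources. In $R = \kappa[x_0,x_1,x_2]$, the degenerate conic $X = V(x_1x_2)\subseteq \mathbb{P}^2$ is a complete intersection, hence arithmetically Gorenstein, and it directly algebraically $G$-links its two components $V_1 = V(x_1)$ and $V_2 = V(x_2)$:

```latex
% Our illustration: two coordinate lines in P^2 are directly G-linked by the
% complete-intersection (hence arithmetically Gorenstein) conic X = V(x_1 x_2).
I_X = \langle x_1x_2 \rangle \subseteq I_{V_1} \cap I_{V_2}
    = \langle x_1 \rangle \cap \langle x_2 \rangle,
\qquad
[I_X : I_{V_1}] = \langle x_2 \rangle = I_{V_2},
\qquad
[I_X : I_{V_2}] = \langle x_1 \rangle = I_{V_1}.
```

Thus $I_{V_1} \sim I_{V_2}$; since each line is itself a complete intersection, both lines are trivially glicci.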
Indeed, it is through elementary $G$-biliaisons that we connect geometric vertex decomposition to liaison theory. Let $S$ be a ring. If $S_P$ is Gorenstein for all prime ideals $P$ of height $0$, then we say that $S$ is $\mathbf{G_0}$. \begin{definition}\label{def:Gbiliaison} Let $I$ and $C$ be homogeneous, saturated, unmixed ideals of $R $ with $\mbox{ht}(I) = \mbox{ht}(C)$. Suppose there exist $\ell \in \mathbb{Z} $, a homogeneous Cohen--Macaulay ideal $N \subseteq I \cap C$ of height $\mbox{ht}(I)-1$, and an isomorphism $I/N \cong [C/N](-\ell)$ as graded $R/N$-modules. If $N$ is $G_0$, then we say that $I$ is obtained from $C$ by an \textbf{elementary $G$-biliaison of height $\ell$}. \end{definition} \begin{theorem}\cite[Theorem 3.5] {Har07} Let $I$ and $C$ be homogeneous, saturated, unmixed ideals defining subschemes $V_I$ and $V_C$, respectively, of $\mathbb{P}^{n}$. If $I$ is obtained from $C$ by an elementary $G$-biliaison, then $V_I$ is $G$-linked to $V_C$ in two steps. \end{theorem} \begin{remark} \textbf{Even $G$-liaison classes} are equivalence classes of subschemes of $\mathbb{P}^{n}$ of a fixed codimension that are $G$-linked to one another in an even number of steps. Two subschemes in the same \emph{even} $G$-liaison class are more closely related to one another than are two subschemes that can be linked to one another but only in an odd number of steps (see \cite[Section 3]{Nag98}). This provides some intuition for why various classes of generalized determinantal varieties can be linked to one another in an even number of steps, via elementary $G$-biliaisons, yet there is no reason to expect that the intermediate links share a similar form, or are at all easy to describe. 
\end{remark} \subsection{Further context on a question in liaison theory} The purpose of this subsection is to recall the motivation for the following question: \begin{quest}\cite[Question 1.6]{KM+01}\label{motivQuest} Is every arithmetically Cohen--Macaulay subscheme of $\mathbb{P}^n$ glicci? \end{quest} Because all complete intersections of a fixed codimension are in the same liaison class, an equivalent formulation of the question is \begin{quest} For each codimension, is there exactly one Gorenstein liaison class containing any (or all) Cohen--Macaulay subschemes of $\mathbb{P}^n$? \end{quest} This question arises by analogy to the special case of complete intersection liaison in codimension 2, where all arithmetically Cohen--Macaulay subschemes are in the (complete intersection and so also Gorenstein) liaison class of a complete intersection \cite[Theorem 3.2]{PS74}. The same is not true in higher codimensions, however. In fact, in higher codimensions there are infinitely many complete intersection liaison classes containing arithmetically Cohen--Macaulay schemes (see \cite[Chapter 7]{KM+01} and, for related ideas, \cite{HU87}). Complete intersection liaison is a well-understood and very satisfying theory in codimension 2, and this failure to generalize to higher codimensions suggests that it is worth searching for a theory that reduces to complete intersection liaison in codimension 2 and also preserves many of its desirable properties in higher codimension. This is one of the motivations for studying Gorenstein liaison, where better control of the liaison classes containing arithmetically Cohen--Macaulay schemes may still be hoped for in all codimensions. In particular, an affirmative answer to Question \ref{motivQuest} would serve, at least in the eyes of some, as an endorsement of the structure of Gorenstein liaison. 
There are partial results in the direction of an affirmative answer to Question \ref{motivQuest}, including the results that standard determinantal schemes \cite[Theorem 1.1]{KM+01}, mixed ladder determinantal schemes from two-sided ladders \cite[Corollary 2.2]{Gor07}, schemes of Pfaffians \cite[Theorem 2.3]{DG09}, wide classes of arithmetically Cohen--Macaulay curves in $\mathbb{P}^4$ \cite{CMR00, CMR01}, and arithmetically Cohen--Macaulay schemes defined by Borel-fixed monomial ideals \cite[Theorem 3.5]{MN02} are all glicci. For more results, see \cite{Cas03, HSS08}. There have also been some quite general discoveries. M. Casanellas, E. Drozd, and R. Hartshorne \cite{CDH05} gave a general characterization of when two subschemes of a normal arithmetically Gorenstein scheme are in the same Gorenstein liaison class and showed that every arithmetically Gorenstein subscheme of $\mathbb{P}^n$ is glicci. In \cite[Theorem 3.1]{Gor08}, E. Gorla obtained the very broad result that every determinantal scheme is glicci, generalizing the results of \cite[Theorem 1.1]{KM+01} and also \cite[Theorem 4.1]{Har07}. Later, J. Migliore and U. Nagel \cite{MN13} showed that every arithmetically Cohen--Macaulay subscheme of $\mathbb{P}^n$ that is generically Gorenstein is actually glicci when viewed as a subscheme of $\mathbb{P}^{n+1}$. One can find both encouragement and cause for trepidation in \cite{Har02}: R. Hartshorne gave positive results for many sets of points in $\mathbb{P}^3$ and curves in $\mathbb{P}^4$ but also produced still-viable candidates for a source of a negative answer. The precision required to study Hartshorne's examples highlights the complexity of Question \ref{motivQuest}.
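As a concrete instance of the determinantal results just cited (the example itself is ours, included only for illustration), consider the scheme of $2\times 3$ matrices of rank at most one, defined by the $2\times 2$ minors of a generic matrix:

```latex
% Our illustration: the ideal of 2 x 2 minors of a generic 2 x 3 matrix.
% It defines a standard determinantal (hence arithmetically Cohen--Macaulay)
% subscheme of P^5 of codimension 2, which is glicci by [KM+01, Theorem 1.1]
% (or by the more general [Gor08, Theorem 3.1]).
I = I_2\begin{pmatrix} x_{11} & x_{12} & x_{13}\\ x_{21} & x_{22} & x_{23} \end{pmatrix}
  = \langle\, x_{11}x_{22}-x_{12}x_{21},\ x_{11}x_{23}-x_{13}x_{21},\ x_{12}x_{23}-x_{13}x_{22}\,\rangle.
```

Because this subscheme has codimension $2$, its glicciness also follows from the codimension-$2$ result \cite[Theorem 3.2]{PS74} discussed above; the cited determinantal results extend such statements to arbitrary codimension.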
By connecting geometric vertex decomposition and liaison, we provide more evidence in favor of an affirmative answer to this question and give a framework for assessing membership in the Gorenstein liaison class of a complete intersection for some arithmetically Cohen--Macaulay schemes arising naturally from combinatorial data. \section{(Weakly) Geometrically vertex decomposable ideals are glicci}\label{sect:GVDGlicci} In Subsection \ref{sect:GVDtoGBiliaison}, we show that under mild hypotheses a geometric vertex decomposition gives rise to an elementary $G$-biliaison of height $1$ (Corollary \ref{cor:gvdToLia}). We use this result in Subsection \ref{sect:GVDtoGlicci} to prove that every geometrically vertex decomposable ideal is glicci (Theorem \ref{thm:glicci}). We also define the class of \emph{weakly geometrically vertex decomposable ideals}, a class that contains the geometrically vertex decomposable ideals, and we prove that each weakly geometrically vertex decomposable ideal is glicci (Corollary \ref{weakGVDimpliesgliggi}). Finally, in Subsection \ref{sect:GBapplications}, we obtain some consequences for Gr\"obner bases and Gr\"obner degenerations. Throughout this section, we assume that the field $\kappa$ is infinite, and we let $R$ denote the standard graded polynomial ring $\kappa[x_1, \ldots, x_n]$. \subsection{An elementary $G$-biliaison arising from a geometric vertex decomposition}\label{sect:GVDtoGBiliaison} We begin by using a geometric vertex decomposition to construct the isomorphism that will constitute an elementary $G$-biliaison when the setting is appropriate. \begin{theorem}\label{thm:onestep} Suppose that $I \subseteq R$ is an unmixed ideal possessing a nondegenerate geometric vertex decomposition with respect to some variable $y = x_j$ of $R$. If $N_{y,I}$ is unmixed, then there is an isomorphism $I/N_{y,I} \cong C_{y,I}/N_{y,I}$ as $R/N_{y,I}$-modules.
If $N_{y,I}$, $C_{y,I}$, and $I$ are homogeneous, then the same map is an isomorphism $I/N_{y,I} \cong [C_{y,I}/N_{y,I}](-1)$ in the category of graded $R/N_{y,I}$-modules. \end{theorem} \begin{proof} Fix a $y$-compatible term order $<$. From Lemma \ref{lem:form}, we know that the reduced Gr\"obner basis of $I$ has the form $\mathcal{G} = \{yq_1+r_1,\dots, yq_k+r_k,h_1,\dots, h_\ell\}$ where $y$ does not divide any term of $q_i$ or of $r_i$ for any $1 \leq i \leq k$ nor any $h_j$ for $1 \leq j \leq \ell$. Let $C = C_{y,I} = \langle q_1,\dots, q_k, h_1,\dots, h_\ell\rangle$ and $N = N_{y,I} = \langle h_1,\dots, h_\ell \rangle$. We first observe that $N\subseteq I \cap C$. To build the desired isomorphism, we will need to find regular elements of $R/N$. Towards that end, we claim that $\langle q_1, \ldots, q_k \rangle \not\subseteq Q$ for any minimal prime $Q$ of $N$. If it were, then we would also have $C \subseteq Q$, which is impossible because $\mbox{ht}(Q) = \mbox{ht}(N)<\mbox{ht}(C)$ by Lemma \ref{lem:height}. Similarly, $\langle yq_1+r_1, \ldots, yq_k+r_k \rangle \not\subseteq Q'$ for any minimal prime $Q'$ of $N$ since, if it were, then we would have $I \subseteq Q'$, in violation of Lemma \ref{lem:height}. Because $\kappa$ is infinite, we may choose scalars $a_1, \ldots, a_k \in \kappa$ so that neither $u:=a_1q_1+\cdots+a_kq_k$ nor $v:=a_1(yq_1+r_1)+\cdots+a_k(yq_k+r_k)$ is an element of any minimal prime of $N$. Because $\min(N) = \ass(N)$, neither $u$ nor $v$ is a zero-divisor on $R/N$. We may now define a map $\phi: C \rightarrow I/N$ given by $f \mapsto \dfrac{fv}{u}$. To see that $\phi$ is well defined, we claim that, for each $f \in C$, there exists a unique $\overline{g} \in I/N$ so that $fv-gu \in N$ (where $\overline{g}$ is the class of $g$ in $I/N$). Suppose that $fv-g_1u = n_1 \in N$ and $fv-g_2u = n_2 \in N$ for some $g_1, g_2 \in I$. Then $(g_1-g_2)u = n_2-n_1 \in N$, and so, because $u$ is not a zero-divisor on $R/N$, $g_1-g_2 \in N$.
Hence, there is at most one such $\overline{g} \in I/N$. To see that there is at least one such $\overline{g} \in I/N$, we note that $\overline{g} = \overline{0}$ is a satisfying choice if $f \in N$ and claim that $\overline{g} = \overline{yq_i+r_i}$ is a satisfying choice if $f = q_i$ for each $1 \leq i \leq k$. Indeed, \begin{align*} (yq_i+r_i)u-q_iv = &(yq_i+r_i)(a_1q_1+\cdots+a_kq_k) - q_i(a_1(yq_1+r_1)+\cdots+a_k(yq_k+r_k)) \\ = &r_i(a_1q_1+\cdots+a_kq_k)-q_i(a_1r_1+\cdots+a_kr_k). \end{align*} Because $yq_i+r_i \in I$ and $v \in I$, we have $r_i(a_1q_1+\cdots+a_kq_k)-q_i(a_1r_1+\cdots+a_kr_k) \in I$. But $y$ does not divide any term of any $q_j$ or any $r_j$ for $1 \leq j \leq k$, and so the leading term of $r_i(a_1q_1+\cdots+a_kq_k)-q_i(a_1r_1+\cdots+a_kr_k)$ is not divisible by the leading term of any $yq_i+r_i$. By the assumptions that $\mathcal{G}$ is a Gr\"obner basis of $I$ and that $<$ is $y$-compatible, it must be that $r_i(a_1q_1+\cdots+a_kq_k)-q_i(a_1r_1+\cdots+a_kr_k)$ has a Gr\"obner basis reduction using only the elements of $\mathcal{G}$ not involving $y$, i.e., using only $h_1,\dots, h_\ell $, which implies that $r_i(a_1q_1+\cdots+a_kq_k)-q_i(a_1r_1+\cdots+a_kr_k) \in N$. Hence, multiplication by $v$ gives a map from $C$ to $u(I/N)$, which maps isomorphically to $I/N$ by multiplication by $1/u$. That is, $\phi$ is indeed a map from $C$ to $I/N$. Because $I$ is generated over $N$ by the $yq_i+r_i$, we have also shown that $\phi$ is surjective. Having established that $\phi(h_i) = \overline{0} \in I/N$, we have $N \subseteq \ker(\phi)$. And $\ker(\phi) \subseteq N$ because $v$ is a non-zero-divisor on $R/N$. Therefore, $\phi$ induces an isomorphism $\overline{\phi}: C/N \rightarrow I/N$. It is clear that whenever $N$ is homogeneous so that discussion of degrees makes sense, $\overline{\phi}$ increases degree by $1$ and so $\overline{\phi}: [C/N](-1) \rightarrow I/N$ is an isomorphism of graded $R/N$-modules. 
\end{proof} Notice that if, in the proof above, one already knows $q_1$ and $yq_1+r_1$ to be non-zero-divisors on $R/N$, for example if $R/N$ is a domain, one may choose $a_1 = 1$ and $a_i = 0$ for $1<i \leq k$. In this case, the map $\phi$ will be of the same form used in \cite{Gor07}, \cite{Gor08}, and \cite{GMN13}. As indicated above, the primary use of Theorem \ref{thm:onestep} is in the setting of liaison theory (Corollary \ref{cor:gvdToLia}). We will need the following straightforward fact about saturation: \begin{lemma}\label{lem:saturated} If $I \subseteq R$ is homogeneous and unmixed, then $\sqrt{I}$ is the homogeneous maximal ideal $m$ or $I$ is saturated. \end{lemma} \begin{proof} Observe that $m$ is an associated prime of $I$ if and only if it is a minimal prime of $I$ if and only if $\sqrt{I} = m$. \end{proof} \begin{corollary}\label{cor:gvdToLia} Let $I$ be a homogeneous, saturated, unmixed ideal of $R$ and $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ a nondegenerate geometric vertex decomposition with respect to some variable $y = x_j$ of $R$. Assume that $N_{y,I}$ is Cohen--Macaulay and $G_0$ and that $C_{y,I}$ is also unmixed. Then $I$ is obtained from $C_{y,I}$ by an elementary $G$-biliaison of height $1$. \end{corollary} \begin{proof} The height conditions required by the definition of elementary $G$-biliaison are given by Lemma \ref{lem:height}, saturation follows from Lemma \ref{lem:saturated}, and the required isomorphism is constructed in Theorem \ref{thm:onestep}. \end{proof} \begin{comment} One may sometimes wish that it were true that the homogenization of a Gr\"obner basis of an ideal $I$ in general gave the Gr\"obner basis of the homogenization of $I$. That equality does not hold in general. However, we will use Corollary \ref{cor:Gb(I)} to show that in context of geometric vertex decomposition, we do have that convenience. 
Fix a variable $y = x_j$ of $R$, and consider the weight vector $w$ of length $n+1$ whose $i^{th}$ and $n+1^{st}$ coordinates are $1$ and whose other coordinates are $0$. Understand this weight vector to be assigning $y$ and a new variable $t$ of $R[t]$ weights $1$ and each $x_j$ a weight of $0$ when $i \neq j$. For an element $f \in R$, let $f^h$ denote its homogenization with respect to $w$ and $I^h = \langle f^h \mid f \in I \rangle$ the ideal generated by the homogenizations of elements of $I$. We first notice that homogenizations of elements of a Gr\"obner basis of $I$ are generators of $I^h$: Fix a $y$-compatible term order $<$, and let $\mathcal{G}$ denote the reduced Gr\"obner basis of $I$ with respect to $<$, and let $\mathcal{G}^h$ denote the set of homogenizations of elements of $\mathcal{G}$. Because $I^h$ is generated by the homogenizations of elements of $I$, it is sufficient to show that $f^h \in \langle \mathcal{G}^h \rangle$ for every $f \in I$. According to Lemma \ref{lem:form}, $\mathcal{G} = \{yq_1+r_1, \ldots, yq_k+r_k, h_1, \ldots, h_k\}$ with $y$ not dividing any term of any $q_i$ or $r_i$ for any $1 \leq i \leq k$ nor any $h_j$ for $1 \leq j \leq \ell$. Using the deterministic division algorithm (see \cite[Section 15.4]{Eis95}), we may express $f = (\sum_{i = 1}^k \alpha_i(yq_i+r_i))+(\sum_{j = 1}^\ell(\beta_j h_j))$ for some $\alpha_i, \beta_j \in R$ for $1 \leq i \leq k$ and $1 \leq j \leq \ell$ where $\text{in}_<(\beta_j h_j) \notin \langle \{ \text{in}_< \alpha_i(yq_i+r_i) \}_{1 \leq i \leq k} \rangle$ for each $1 \leq j \leq \ell$. In particular, no term of any $\beta_j h_j$ is divisible by $y$. Now write each $\alpha_i = \sum_{p = 0}^\infty\gamma_{ip}$ where $\gamma_{ip}$ is divisible by $y^p$ and is not divisible by $y^{p+1}$. (Of course, all but finitely many of the $\gamma_{ip}$ are $0$, and each such sum is actually finite.) 
Then \begin{align*} f^h &= \left(\sum_{i = 1}^k \left( \sum_{p = 0}^\infty \gamma_{ip}\right) \left(yq_i+r_i\right)\right)^h+\left(\sum_{j = 1}^\ell \left(\beta_j h_j\right)\right)\\ & = \left(\sum_{p = 0}^\infty \sum_{i = 1}^k \gamma_{ip} \left(yq_i+r_i\right)\right)^h+\left(\sum_{j = 1}^\ell \left(\beta_j h_j\right)\right)\\ & = \left(\sum_{p = 0}^\infty \sum_{i = 1}^k \gamma_{ip} \left(yq_i+tr_i\right)\right)+\left(\sum_{j = 1}^\ell \left(\beta_j h_j\right)\right) \in \langle \mathcal{G}^h \rangle \end{align*} because each $yq_i+tr_i$ and each $h_j$ is an element of $\mathcal{G}^h$ for each $1 \leq i \leq k$ and $1 \leq j \leq \ell$. Now that we see that $\mathcal{G}^h$ is at least a generating set for $I^h$, we are now ready to use Corollary \ref{cor:Gb(I)} to show that if $I$ is a homogeneous ideal of $R$ (in the standard grading), then $\mathcal{G}^h$ is a Gr\"obner basis of $I^h$. \begin{corollary}\label{cor:homogGB} Let $I$ be an unmixed homogeneous ideal of $R$ that admits a geometric vertex decomposition with respect to some variable $y$ of $R$, and let $\mathcal{G}$ be the reduced Gr\"obner basis of $I$ wth respect to the $y$-compatible term order $<$. Assume that $N_{y,I}$ has no embedded primes. Then $\mathcal{G}^h = \{g^h \mid g \in \mathcal{G} \}$ is a Gr\"obner basis of $I^h$ under any $y$-compatible term order on $R[t]$ refining $<$. \end{corollary} {\color{purple} We're not actually using unmixedness here. It's just that the height lemma in the mixed case comes later. We could either move that lemma up, cite it here even though it comes later in the paper, or just leave things how they are (if we want to keep the corollary at all, that is).} \begin{proof} The degenerate cases are clear, and so we assume that the geometric vertex decomposition of $I$ with respect to $y$ is nondegenerate. For convenience, we consider $I$ as an ideal of $R[t]$ and will refer to the term order of $R[t]$ that refines $<$ also as $<$. 
By considering $s$-pair reductions, it is clear that $\mathcal{G}$ remains a Gr\"obner basis of $I$ in $R[t]$. By Lemma \ref{lem:form}, we now $\mathcal{G} = \{yq_1+r_1, \ldots, yq_k+r_k, h_1,\ldots,h_\ell\}$, where $y$ does not divide $q_i, r_i$ for any $1 \leq i \leq k$ nor any $h_j$ for any $1 \leq j \leq \ell$. By Lemma \ref{lem:height} $\mbox{ht}(I), \mbox{ht}(C_{y,I})>\mbox{ht}(N_{y,I})$. Now $I^h = \langle yq_1+tr_1, \ldots, yq_k+tr_k, h_1, \ldots, h_\ell \rangle$. Then because $C_{y,I} =\langle yq_1, \ldots, yq_k , h_1, \cdots h_\ell \rangle =C_{y,I^h}$ and $N_{y,I} = \langle h_1, \ldots, h_\ell \rangle = N_{y,I^h}$, and the given generating sets of both ideals are Gr\"obner bases by \cite[Theorem 2.1(a)]{KMY09}, we are in the setting of Corollary \ref{cor:Gb(I)}. Let $M = \begin{pmatrix} q_1 & \cdots & q_k\\ tr_1 & \cdots & tr_k \end{pmatrix}$. It remains to show that $I_2(M) \subseteq N_{y,I}$. If $1 \leq i<j \leq k$, then \[ q_i(tr_j)-tr_i(q_j) = t(q_j(yq_i+r_i)-q_i(yq_j+r_j)) \in I \] because $yq_i+r_i$ and $yq_j+r_j$ are elements of $I$. But $y$ does not divide any term of $q_itr_j-tr_iq_j$, and so, by the assumption that $\mathcal{G}$ is a Gr\"obner basis of $I$, $q_itr_j-tr_iq_j$ has a reduction in terms of the given generators of $N_{y,I}$, and so $I_2(M) \subseteq N_{y,I}$. It now follows from Corollary \ref{cor:Gb(I)} that $\mathcal{G}^h$ is a Gr\"obner basis of $I^h$. \end{proof} Notice that the above result shows that $\text{in}_y I = \text{in}_y I^h$ and that $\text{in}_y I +\langle t \rangle = I^h + \langle t \rangle$, from which it follows that $t$ is a non-zero-divisor on $I^h$. \end{comment} \begin{comment}\begin{theorem} If the homogeneous ideal $I$ of the standard graded polynomial ring $R = \kappa[x_1,\ldots,x_n,y]$ admits a geometric vertex decomposition with respect to $y$, then $I$ is Cohen--Macaulay if and only if $\text{in}_y I$ is Cohen--Macaulay. 
\end{theorem} \begin{proof} Let $<$ denote the lexicographic order with respect to $y>x_1>\ldots>x_n$. It is always the case that $\text{in}_y I$ Cohen--Macaulay implies $I$ Cohen--Macaulay, so we only need to show the other implication, and so we assume that $I$ is Cohen--Macaulay. If $m$ is the homogeneous maximal ideal of $R$, because $I$ and so $\text{in}_y I$ are homogeneous, it suffices to show that $(R/\text{in}_y I)_m$ is Cohen--Macaulay. If $\dim(R/I) = 0$, the result is clear, and so we may fix take $d = \mbox{ht}(I)<n+1$ to be arbitrary and proceed by induction on $n+1-d$. Note also that the result is clear if $n = 0$, and so we will assume throughout that $n \geq 1$. Let $S = R[t]$, and let $\mu$ denote the maximal ideal $\langle x_1, \ldots, x_n, t \rangle$ of $S$, and let $\prec$ denote the lexicographic order on $S$ with respect to $y>t>x_1>\ldots>x_n$. By Lemma \ref{lem:form}, the reduced Gr\"obner basis of $I$ with respect to $<$ (and so also $\prec$) has the form $\mathcal{G}_I = \{yq_1+r_1, \ldots, yq_k+r_k, h_1, \ldots, h_\ell\}$. According to Corollary \ref{cor:homogGB}, $\mathcal{G}_{I^h} = \{yq_1+tr_1, \ldots, yq_k+tr_k, h_1, \ldots, h_\ell \}$ is a Gr\"obner basis of $I^h$ with respect to $\prec$. Notice that $\text{in}_y I = \text{in}_y I^h$ and so that $\mbox{ht}(I) = \mbox{ht}(\text{in}_y I) = \mbox{ht}(\text{in}_y I^h) = \mbox{ht}(I^h)$. Notice also that $I^h+\langle t \rangle = \text{in}_y I + \langle t \rangle$. Because $t$ does not divide any term of any elements of the reduced Gr\"obner basis of $\text{in}_y I$, we see \[ \mbox{ht}(I^h+\langle t \rangle) = \mbox{ht}(\text{in}_y I +\langle t \rangle) = \mbox{ht}(\text{in}_y I) + 1 = \mbox{ht}(I^h)+1. \] Hence, $t$ is not in any minimal prime of $I^h$. Because $I$ is Cohen--Macaulay and so unmixed, $\text{in}_y I = \text{in}_y I^h$ is unmixed, and so $I^h$ is unmixed, and $\min(I^h) = \ass(I^h)$ and $t$ is not a zero-divisor on $I^h$. 
It follows that $(S/I^h)_\mu$ is Cohen--Macaulay if and only if $(S/I^h+\langle t \rangle)_\mu \cong (R/\text{in}_y I)_m$ is Cohen--Macaulay. We will now show that $(S/I^h)_\mu$ is Cohen--Macaulay. From now on, we will consider $I$, $I^h$, and $\text{in}_y I$ all as ideals of $S$. Because both $I$, $I^h$, and $\text{in}_y I$ are unmixed and of height $d<n+1$, the ideal $(x_1, \ldots, x_n,t)$ is not associated to either $I$, $I^h$, or $\text{in}_y I$. Hence, using that $|\kappa| = \infty$, for a sufficiently general choice of $(\alpha_1, \ldots, \alpha_n, \beta) \in \kappa^{n+1}$, $f = \alpha_1 x_1+\cdots+\alpha_n x_n + \beta t$ avoids the minimal primes of $I$, $I^h$, and $\text{in}_y I$ and so is a non-zero-divisor on $(S/I)_n$, $(S/I^h)_n$, and $(S/\text{in}_y I)_n$. Because $\beta \neq 0$ is an open condition, we may also assume $\beta \neq 0$. Set $J = I^h+\langle f \rangle$. We claim that $\mathcal{G}_J = \{ yq_1+tr_1, \ldots, yq_k+tr_k, h_1, \ldots, h_\ell, f \}$ is a Gr\"obner basis of $J$. Because $\{ yq_1+tr_1, \ldots, yq_k+tr_k, h_1, \ldots, h_\ell\}$ is a Gr\"obner basis of $I^h$, it suffices to observe that because the leading term of $f$ is $\beta t$, the leading terms of $yq_i+tr_i$ and $f$ have no variables in common for any $1 \leq i \leq k$ and the leading terms of $h_j$ and $f$ have no variables in common for any $1 \leq j \leq \ell$. Similarly, $\{yq_1+r_1, \ldots, yq_k+r_k, h_1, \ldots, h_\ell, f\}$ is a Gr\"obner basis of $I+\langle f \rangle$. Hence, $\text{in}_y J = \langle yq_1, \ldots, yq_k, h_1, \ldots, h_\ell, f \rangle = \text{in}_y (I+\langle f \rangle)$, all of which are equal to $\text{in}_y I+\langle f \rangle$. Because $(S/I)_n$ is Cohen--Macaulay and $f \in n$ is a non-zero-divisor on $(S/I+\langle f \rangle)_n$ is Cohen--Macaulay. Using homogeneity of $I+\langle f \rangle$, we have that $I+\langle f \rangle$ is a Cohen--Macaulay ideal of $S$ of height $d+1$. 
By induction, $\text{in}_y (I+\langle f \rangle)$ is also Cohen--Macaulay, and so \end{proof} \end{comment} \subsection{Geometrically vertex decomposable ideals and the glicci property}\label{sect:GVDtoGlicci} We make two observations about linkage before proceeding. Let $S = R[z]$ for a new variable $z$. \begin{enumerate} \item If $I$ is obtained from $C$ via an elementary $G$-biliaison of height $1$ in $R$, then $IS$ is obtained from $CS$ via an elementary $G$-biliaison of height $1$ in $S$. \label{Observation1} \item If $I$ is obtained from $C$ via an elementary $G$-biliaison of height $1$ in $R$, then $IS+\langle z \rangle$ is obtained from $CS+\langle z \rangle$ via an elementary $G$-biliaison of height $1$ in $S$. \label{Observation2} \end{enumerate} \begin{theorem}\label{thm:glicci} If $I = I_0 \subseteq R$ is a homogeneous, geometrically vertex decomposable proper ideal, then there is a finite sequence of homogeneous, saturated, unmixed ideals $I_1, \ldots, I_t$ so that $I_{j-1}$ is obtained from $I_{j}$ by an elementary $G$-biliaison of height $1$ for every $1 \leq j \leq t$ and $I_t$ is a complete intersection. In particular, $I$ is glicci. \end{theorem} \begin{proof} Clearly, it suffices to prove the first claim. We will proceed by induction on $n = \dim(R)$, noting that the case of a dimension $0$ polynomial ring is trivial. We now take $n \geq 1$ to be arbitrary and assume the result for all proper homogeneous ideals $I$ in polynomial rings of dimension $<n$. If $I$ is a complete intersection, then there is nothing to prove. Otherwise, there exists some variable $y = x_j$ of $R$ for which $\text{in}_y I = C_{y,I} \cap (N_{y,I} +\langle y\rangle)$ is a geometric vertex decomposition with the contractions of $N_{y,I} $ and $C_{y,I}$ to $T = \kappa[x_1,\ldots, \hat{y}, \ldots, x_n]$ geometrically vertex decomposable. 
Suppose first that $C_{y,I} = \langle 1 \rangle$, in which case $I = N_{y,I} +\langle y\rangle$ (possibly after a linear change of variables). By induction, with $\widetilde{I}_0 = N_{y,I} \cap T$, there is a sequence of ideals $\widetilde{I}_1, \ldots, \widetilde{I}_t$ of $T$ so that $\widetilde{I}_{j-1}$ is obtained from $\widetilde{I}_{j}$ by an elementary $G$-biliaison of height $1$ for every $1 \leq j \leq t$ and $\widetilde{I}_t$ is a complete intersection. Setting $I_j = \widetilde{I}_jR+\langle y \rangle$ for every $1 \leq j \leq t$, the result follows from Observation \eqref{Observation2}, above. Similarly, the result is essentially immediate from the inductive hypothesis together with Observation \eqref{Observation1} in the other degenerate case $\sqrt{C_{y,I}} = \sqrt{N_{y,I}}$: indeed, because $I$ is radical by Proposition \ref{prop:radical}, we have $I = C_{y,I} = N_{y,I}$ by Proposition \ref{prop:degen-rad}. Finally, assume that the geometric vertex decomposition of $I$ with respect to $y$ is nondegenerate, in which case we may apply the inductive hypothesis to $\widetilde{I}_1 = C_{y,I} \cap T$. By induction and in parallel with the previous case, there is a finite sequence of homogeneous, saturated, unmixed ideals $\widetilde{I}_2, \ldots, \widetilde{I}_t$ of $T$ so that $\widetilde{I}_{j-1}$ is obtained from $\widetilde{I}_j$ by an elementary $G$-biliaison of height $1$ in $T$ for every $2 \leq j \leq t$ and $\widetilde{I}_t$ is a complete intersection. Let $I_j = \widetilde{I}_jR$ for every $2 \leq j \leq t$. Then with $I_1 = C_{y,I}$, by Observation \eqref{Observation1} above, $I_{j-1}$ is obtained from $I_{j}$ by an elementary $G$-biliaison of height $1$ in $R$ for every $2 \leq j \leq t$ and $I_t$ is a complete intersection. Hence, it suffices to show that $I$ is obtained from $C_{y,I}$ by an elementary $G$-biliaison of height $1$, but this is Corollary \ref{cor:gvdToLia}.
Note that the hypotheses of Corollary \ref{cor:gvdToLia} hold since $C_{y,I}\cap T$ and $N_{y,I}\cap T$ are geometrically vertex decomposable (hence unmixed and radical, and so $G_0$) and glicci (hence Cohen--Macaulay) by induction. \end{proof} \begin{corollary}\label{cor:CM} If $I \subseteq R$ is a homogeneous, geometrically vertex decomposable proper ideal, then $I$ is Cohen--Macaulay. \end{corollary} \begin{proof} Since glicci implies Cohen--Macaulay, this is immediate from Theorem \ref{thm:glicci}. \end{proof} The remainder of this section will concern weakly geometrically vertex decomposable ideals, a direct generalization of the monomial ideals associated to weakly vertex decomposable simplicial complexes in the sense of \cite{NR08} (see Remark \ref{rem:weakVert} below). \begin{definition}\label{def:weaklyGeometricallyVertexDec} An ideal $I\subseteq R$ is \textbf{weakly geometrically vertex decomposable} if $I$ is unmixed and if \begin{enumerate} \item $I = \langle 1\rangle$ or $I$ is generated by indeterminates in $R$, or \item (\emph{degenerate case}) for some variable $y = x_j$ of $R$, $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ is a degenerate geometric vertex decomposition and the contraction of $N_{y,I}$ to the ring $\kappa[x_1,\dots,\widehat{y},\dots, x_n]$ is weakly geometrically vertex decomposable, or \item (\emph{nondegenerate case}) for some variable $y = x_j$ of $R$, $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ is a nondegenerate geometric vertex decomposition, the contraction of $C_{y,I}$ to the ring $\kappa[x_1,\dots,\widehat{y},\dots, x_n]$ is weakly geometrically vertex decomposable, and $N_{y,I}$ is radical and Cohen--Macaulay. \qedhere \end{enumerate} \end{definition} Notice that it makes no difference whether we require $N_{y,I}$ or the contraction of $N_{y,I}$ to $\kappa[x_1,\dots,\widehat{y},\dots, x_n]$ to be radical and Cohen--Macaulay.
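To illustrate Definition \ref{def:weaklyGeometricallyVertexDec} in the smallest possible setting (an example of ours, not taken from the cited sources): in $\kappa[x,y]$ with the lex order $y > x$, consider

```latex
\[
  I=\langle xy\rangle\subseteq\kappa[x,y],\qquad
  \text{in}_y I=\langle xy\rangle
    =\langle x\rangle\cap\langle y\rangle
    =C_{y,I}\cap\bigl(N_{y,I}+\langle y\rangle\bigr),
\]
\[
  C_{y,I}=\langle x\rangle,\qquad N_{y,I}=\langle 0\rangle.
\]
```

Here $\sqrt{C_{y,I}} = \langle x \rangle \neq \langle 0 \rangle = \sqrt{N_{y,I}}$ and $C_{y,I} \neq \langle 1 \rangle$, so the geometric vertex decomposition is nondegenerate; $N_{y,I} = \langle 0 \rangle$ is radical and Cohen--Macaulay, and $C_{y,I}$ contracts to the ideal $\langle x \rangle$ of $\kappa[x]$, which is generated by an indeterminate. Hence $I$ is weakly geometrically vertex decomposable (indeed geometrically vertex decomposable).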
We give two corollaries of Theorem \ref{thm:glicci} concerning weakly geometrically vertex decomposable ideals: \begin{corollary} A geometrically vertex decomposable ideal is weakly geometrically vertex decomposable. \end{corollary} \begin{proof} We will proceed by induction on $\dim(R)$. Suppose that a geometrically vertex decomposable ideal $I$ has the geometric vertex decomposition $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ with respect to some variable $y = x_j$ of $R$. By Proposition \ref{prop:radical}, $I$ is radical, and so, if the geometric vertex decomposition is degenerate, then $I = N_{y,I}+\langle y \rangle$ or $I = N_{y,I}$ by Proposition \ref{prop:degen-rad}, and the result is immediate by induction. Hence, we may assume that the geometric vertex decomposition is nondegenerate. From Theorem \ref{thm:glicci}, we know that $N_{y,I} \cap \kappa[x_1,\ldots,\widehat{y},\ldots,x_n]$ is glicci and so Cohen--Macaulay. Hence, so is $N_{y,I}$. Proposition \ref{prop:radical} tells us that $N_{y,I} \cap \kappa[x_1,\dots,\widehat{y},\dots, x_n]$ is radical. Hence, so is $N_{y,I}$. By induction, $C_{y,I} \cap \kappa[x_1,\dots,\widehat{y},\dots, x_n]$ is weakly geometrically vertex decomposable and so, by Observation \eqref{Observation1}, $C_{y,I}$ is weakly geometrically vertex decomposable, completing the proof. \end{proof} \begin{corollary}\label{weakGVDimpliesgliggi} A weakly geometrically vertex decomposable ideal is both radical and glicci. \end{corollary} \begin{proof} The proofs of Proposition \ref{prop:radical} and Theorem \ref{thm:glicci} easily adapt to the weakly geometrically vertex decomposable setting. In particular, in these proofs, we only used that the ideal $N_{y,I}$ was geometrically vertex decomposable to obtain that $N_{y,I}$ was Cohen--Macaulay, radical, saturated, and unmixed. 
The first two of those properties are automatic from the definition of weakly geometrically vertex decomposable and the last two follow because Cohen--Macaulay ideals are always unmixed and always saturated unless they are the maximal ideal. \end{proof} \begin{remark}\label{rem:weakVert} Let $\Delta$ be a simplicial complex on vertex set $[n]$. As with Proposition \ref{prop:VDiffGVD}, it is a straightforward exercise to show that $\Delta$ is weakly vertex decomposable in the sense of U. Nagel and T. R\"omer (see \cite[Definition 2.2]{NR08}) if and only if $I_\Delta$ is weakly geometrically vertex decomposable. Furthermore, by restricting our proofs of Corollary \ref{weakGVDimpliesgliggi} and Theorem \ref{thm:glicci} to the case of squarefree monomial ideals, we recover \cite[Theorem 3.3]{NR08}, which asserts that $I_\Delta$ is \emph{squarefree glicci} whenever $\Delta$ is a weakly vertex decomposable simplicial complex. \end{remark} We end by showing that the condition of being weakly geometrically vertex decomposable is strictly weaker than that of being geometrically vertex decomposable. \begin{example} This example is a minor modification of Example \ref{ex:nolex}. Take $I$ to be the ideal of $\kappa[x,y,z,w,r,s]$ generated by $\{y(zs-x^2), ywr, wr(x^2+s^2+z^2+wr)\}$, which, following the argument of Example \ref{ex:nolex}, is a universal Gr\"obner basis. Observe that $I$ is squarefree only in $y$, so we must first degenerate with respect to $y$, which yields $C_{y,I} = \langle zs-x^2,wr \rangle$ and $N_{y,I} = \langle (wr)(x^2+s^2+z^2+wr) \rangle$. We saw in Example \ref{ex:nolex} that the contraction of $C_{y,I}$ to $\kappa[x,z,w,r,s]$ was geometrically vertex decomposable. Here the contraction of $N_{y,I}$ to $\kappa[x,z,w,r,s]$ is clearly radical and Cohen--Macaulay but has no geometric vertex decomposition because it is not squarefree in any variable. Hence, $I$ is weakly geometrically vertex decomposable but not geometrically vertex decomposable.
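Both claims about the contraction of $N_{y,I}$ can be checked mechanically; the following SymPy sketch (ours, not part of the example) verifies that $wr(x^2+s^2+z^2+wr)$ is squarefree as a polynomial, so the principal ideal it generates is radical, and that some term of it is divisible by the square of each variable, so the ideal is not squarefree in any variable:

```python
from sympy import symbols, factor_list, Poly

x, z, w, r, s = symbols('x z w r s')
f = w*r*(x**2 + s**2 + z**2 + w*r)

# All factors occur with multiplicity 1, i.e. f is squarefree,
# so the principal ideal <f> is radical.
coeff, factors = factor_list(f)
assert all(mult == 1 for _, mult in factors)

# For each of the five variables, some term of f is divisible by its
# square, so <f> fails to be squarefree in every variable.
p = Poly(f, x, z, w, r, s)
assert all(any(monom[i] >= 2 for monom in p.monoms())
           for i in range(5))
```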
\end{example} \subsection{Applications to Gr\"obner bases and degenerations}\label{sect:GBapplications} One cannot in general transfer the Cohen--Macaulay property from an ideal to its initial ideal or from one component of a variety to the whole variety. However, in the context of geometric vertex decomposition, we can use the combination of Cohen--Macaulayness of a homogeneous ideal $I$ and of the component $N_{y,I}+\langle y \rangle$ (equivalently, of $N_{y,I}$) to infer the same about $\text{in}_y I$. \begin{corollary}\label{cor:allCM} Suppose that $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ is a nondegenerate geometric vertex decomposition of the homogeneous ideal $I\subseteq R$ and that both $N_{y,I}$ and $I$ are Cohen--Macaulay. Then, $C_{y,I}$ and $\text{in}_y I$ are Cohen--Macaulay as well. \end{corollary} \begin{proof} For convenience, write $N = N_{y,I}$ and $C = C_{y,I}$. Because $I$ and $N$ are Cohen--Macaulay, they are unmixed. Hence, we may apply Theorem \ref{thm:onestep} to see that $I/N \cong C/N$. It is easy to see that $C/N \xrightarrow{y} \text{in}_y I/N$ is also an isomorphism, and so $I/N \cong \text{in}_y I/N$. Let $m$ denote the homogeneous maximal ideal of $R$, and let $H^i_m(M)$ denote the $i^{th}$ local cohomology module of the $R$-module $M$ with support in $m$. Because $I$ homogeneous implies $\text{in}_y I$ homogeneous, it is sufficient to check Cohen--Macaulayness at $m$. Let $d = \dim(R/I) = \dim(R/N)-1$. The short exact sequence $0 \rightarrow I/N \rightarrow R/N \rightarrow R/I \rightarrow 0$ tells us that $H^i_m(I/N) \cong H^{i-1}_m(R/I) = 0$ for all $i \leq d$ because $H^i_m(R/N) = 0$ for all $i \leq d$.
Then from the short exact sequence $0 \rightarrow \text{in}_y I/N \rightarrow R/N \rightarrow R/\text{in}_y I \rightarrow 0$ together with the fact that $\text{in}_y I/N \cong I/N$, we have \[ H^{i-1}_m(R/\text{in}_y I) \cong H^i_m(\text{in}_y I/N) \cong H^i_m(I/N) \cong H^{i-1}_m(R/I) = 0 \] for all $i-1<d = \dim(R/\text{in}_y(I))$, and so $R/\text{in}_y(I)$ is Cohen--Macaulay. The argument in the case of $C$ follows the same line using the short exact sequence $0 \rightarrow C/N \rightarrow R/N \rightarrow R/C \rightarrow 0$. \end{proof} One consequence of Corollary \ref{cor:allCM} is that we may omit as a hypothesis that $C_{y,I}$ be unmixed in Corollary \ref{cor:gvdToLia} whenever $I$ is Cohen--Macaulay. We will now describe conditions that allow one to use the map constructed in Theorem \ref{thm:onestep} in order to conclude that a known set of generators for $I$ forms a Gr\"obner basis when Gr\"obner bases for $C_{y,I}$ and $N_{y,I}$ are known. The result complements the framework of \cite{KMY09}, in which one begins with a Gr\"obner basis of $I$ and concludes that the resulting generating sets of $C_{y,I}$ and $N_{y,I}$ are also Gr\"obner bases. For convenience, we recall a lemma from \cite{GMN13}: \begin{lemma}\label{GVLmodified}\cite[Lemma 1.12]{GMN13} Fix a term order $<$ and homogeneous ideals $N$, $C$, $I$, and $\tilde{I}$ in a polynomial ring with $N \subseteq I \cap C$ and $\tilde{I} \subseteq \text{in}_<(I)$. If $I/N \cong [C/N](-1)$ and $\tilde{I}/\text{in}_<(N) \cong [\text{in}_<(C)/\text{in}_<(N)](-1)$, then $\tilde{I} = \text{in}_<I$. \end{lemma} Although the lemma is stated differently in \cite{GMN13}, the proof given there also applies to the conditions as stated above. \begin{corollary}\label{cor:Gb(I)} Let $I = \langle yq_1+r_1,\dots, yq_k+r_k,h_1,\dots, h_\ell \rangle$ be a homogeneous ideal of $R$ with $y = x_j$ some variable of $R$ and $y$ not dividing any term of any $q_i$ for $1 \leq i \leq k$ nor of any $h_j$ for $1 \leq j \leq \ell$.
Fix a term order $<$, and suppose that $\mathcal{G}_C = \{q_1,\dots, q_k,h_1,\dots, h_\ell\}$ and $\mathcal{G}_N = \{h_1,\dots, h_\ell\}$ are Gr\"obner bases for the ideals they generate, which we call $C$ and $N$, respectively. Assume that $\text{in}_<(yq_i+r_i) = y \cdot \text{in}_<q_i$ for all $1 \leq i \leq k$. Assume also that $\mbox{ht}(I)$, $\mbox{ht}(C)>\mbox{ht}(N)$ and that $N$ is unmixed. Let $M = \begin{pmatrix} q_1& \cdots & q_k\\ r_1& \cdots & r_k \end{pmatrix}.$ If the ideal of $2$-minors of $M$ is contained in $N$, then the given generators of $I$ are a Gr\"obner basis. \end{corollary} \begin{proof} Following the proof of Theorem \ref{thm:onestep}, the conditions that $\mbox{ht}(I)$, $\mbox{ht}(C)>\mbox{ht}(N)$ and that $N$ be unmixed imply that the elements $u$ and $v$ of Theorem \ref{thm:onestep} are non-zero-divisors on $R/N$. The condition that the ideal of $2$-minors of $M$ be contained in $N$ implies that \[ r_i(a_1q_1 +\cdots+a_kq_k)-q_i(a_1r_1 +\cdots+a_kr_k) = a_1(r_iq_1-r_1q_i)+\cdots+a_k(r_iq_k-r_kq_i)\in N \] for every $1 \leq i \leq k$. The remainder of the argument from Theorem \ref{thm:onestep} that $\overline{\varphi}: [C/N](-1) \rightarrow I/N$ is an isomorphism remains intact in this setting. Set $\tilde{I} = \langle y \cdot \text{in}_<(q_1), \ldots, y \cdot \text{in}_<(q_k), \text{in}_<(h_1), \ldots, \text{in}_<(h_\ell) \rangle$. Because $\mathcal{G}_C$ and $\mathcal{G}_N$ are Gr\"obner bases, we know that $\text{in}_< C = \langle \text{in}_<(q_1), \ldots, \text{in}_<(q_k), \text{in}_<(h_1), \ldots, \text{in}_<(h_\ell) \rangle$ and that $\text{in}_< N = \langle \text{in}_<(h_1), \ldots, \text{in}_<(h_\ell)\rangle $. Because $\text{in}_< (yq_i+r_i) = y \cdot \text{in}_<(q_i)$ for each $1 \leq i \leq k$, the map $[\text{in}_<C/\text{in}_<N](-1)\xrightarrow{y} \tilde{I}/\text{in}_<N$ is also an isomorphism. It follows from Lemma \ref{GVLmodified} that $\tilde{I} = \text{in}_< I$. 
\end{proof} \begin{example}[The Veronese Embedding] As an application of Corollary \ref{cor:Gb(I)}, we give a concise inductive proof that the usual set of homogeneous equations defining the image of the $d^{\rm{th}}$ Veronese $\nu_d:\mathbb{P}^1 \rightarrow \mathbb{P}^d$ forms a Gr\"obner basis for any $d\geq 1$. With homogeneous coordinates $[s:t]$ on $\mathbb{P}^1$ and $[x_0: \cdots :x_d]$ on $\mathbb{P}^d$, recall that the $d^{\rm{th}}$ Veronese is the map $[s:t] \mapsto [s^d:s^{d-1}t:\cdots:st^{d-1}:t^d]$. Let $A_d = \begin{pmatrix} x_0 & x_1 &\cdots & x_{d-1}\\ x_1 & x_2 &\cdots & x_d \end{pmatrix}$, let $\mathcal{G}_d$ denote the set of $2\times 2$ minors of $A_d$, and let $I = \langle \mathcal{G}_d\rangle$ be the ideal generated by $\mathcal{G}_d$. The image of $\nu_d$ is defined by $I$, which is to say that there is a ring isomorphism $\dfrac{\kappa[x_0, \ldots, x_d]}{I} \rightarrow \kappa[s^d, s^{d-1}t, \ldots, st^{d-1}, t^d]\subseteq \kappa[s,t]$ given by $x_i \mapsto s^{d-i}t^i$ for $0 \leq i \leq d$. We now show that $\mathcal{G}_d$ is a Gr\"obner basis of $I$ with respect to the lexicographic monomial order with $x_d>x_{d-1}>\cdots>x_1>x_0$. We proceed by induction on $d$, noting that $d = 1$ is trivial because in that case $I = \langle 0 \rangle$. For $d \geq 2$ and with notation as in Corollary \ref{cor:Gb(I)}, notice that $C= \langle x_0, \ldots, x_{d-2} \rangle$ and that $N = \langle \mathcal{G}_{d-1}\rangle$, whose given generators are a Gr\"obner basis by induction. Because $N$ is a prime ideal properly contained in $C \cap I$, we know both that $N$ is unmixed and also that $\mbox{ht}(I)$, $\mbox{ht}(C)>\mbox{ht}(N)$. Lastly, observe that the ideal generated by the $2\times 2$ minors of $M = \begin{pmatrix} x_0& x_1 &\cdots & x_{d-2}\\ x_1x_{d-1}& x_2 x_{d-1} &\cdots & x_{d-1}^2 \end{pmatrix}$ is equal to $x_{d-1} \cdot N$ and so is contained in $N$. Thus, the result follows from Corollary \ref{cor:Gb(I)}.
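As a sanity check (ours, not part of the proof), the case $d = 3$, the twisted cubic, can be verified directly in SymPy: the reduced Gr\"obner basis of $\langle \mathcal{G}_3 \rangle$ in the lex order above is exactly $\mathcal{G}_3$.

```python
from sympy import symbols, groebner

# Twisted cubic (d = 3): G_3 is the set of 2x2 minors of
#   [ x0 x1 x2 ]
#   [ x1 x2 x3 ]
x3, x2, x1, x0 = symbols('x3 x2 x1 x0')
minors = [x0*x2 - x1**2, x0*x3 - x1*x2, x1*x3 - x2**2]

# Lex order with x3 > x2 > x1 > x0, as in the example.
G = groebner(minors, x3, x2, x1, x0, order='lex')

# The reduced Groebner basis coincides with the set of minors.
assert set(G.exprs) == set(minors)
```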
\qedhere \end{example} \section{Some well-known families of ideals are glicci}\label{sect:applications} Many well-known classes of ideals Gr\"obner degenerate to Stanley--Reisner ideals of vertex decomposable complexes. In this section, we recall a few of these classes and deduce that they are glicci, thus providing further evidence for an affirmative answer to the question of whether every homogeneous Cohen--Macaulay ideal is glicci \cite[Question 1.6]{KM+01}. As in Section \ref{sect:GVDGlicci}, we will assume throughout this section that the field $\kappa$ is infinite. The main result we need for our applications is as follows. It is immediately obtained by combining Proposition \ref{prop:simplicial} with Corollary \ref{weakGVDimpliesgliggi}. \begin{corollary}\label{cor:AutomaticallyGlicci} Let $I\subseteq \kappa[x_1,\dots, x_n]$ be a homogeneous ideal, and let $<$ denote the lexicographic order with $x_1>x_2>\cdots >x_n$. If $\text{in}_<I$ is the Stanley--Reisner ideal of a $<$-compatibly vertex decomposable simplicial complex on $[n]$ for the vertex order $1>2>\cdots >n$, then $I$ is glicci. \end{corollary} We now discuss three classes of ideals which satisfy the hypotheses of Corollary \ref{cor:AutomaticallyGlicci}. We omit many definitions of the particular ideals in question, and instead provide references. \subsubsection*{Schubert determinantal ideals} Let $X = (x_{ij})$ be an $n\times n$ matrix of variables and let $R = \kappa[x_{ij}]$ be the polynomial ring in the matrix entries of $X$. Given a permutation $w\in S_n$, there is an associated generalized determinantal ideal $I_w\subseteq R$, called a \emph{Schubert determinantal ideal}. Schubert determinantal ideals and their corresponding \emph{matrix Schubert varieties} were introduced by W. Fulton in \cite{Fulton}. Fix the lexicographical monomial order $<$ on $R$ defined by $x_{ij}>x_{kl}$ if $i<k$ or $i = k$ and $j>l$. 
This monomial order is \emph{antidiagonal}, that is, the initial term of the determinant of a submatrix $Y$ of $X$ is the product of the entries along the antidiagonal of $Y$. For this monomial order, $\text{in}_<I_w$ is the Stanley--Reisner ideal of a simplicial complex, called a \emph{subword complex}, which is $<$-compatibly vertex decomposable (see \cite{KnutsonMiller} or \cite[Ch. 16.5]{MillerSturmfels}). Corollary \ref{cor:AutomaticallyGlicci} thus immediately implies: \begin{proposition}\label{prop:SchubGlicci} Schubert determinantal ideals are glicci. \end{proposition} \subsubsection*{Graded lower bound cluster algebras} Cluster algebras are a class of combinatorially-defined commutative algebras that were introduced by S. Fomin and A. Zelevinsky at the turn of the century \cite{FZI}. \emph{Lower bound algebras}, introduced in \cite{BFZ}, are related objects: each lower bound algebra is contained in an associated cluster algebra, and this containment is equality in certain cases (i.e., in the \emph{acyclic} setting, see \cite[Theorem 1.20]{BFZ}). Each (skew-symmetric) lower bound algebra is defined from a quiver. Indeed, given a quiver $Q$, there is an associated polynomial ring $R_Q = \kappa[x_1,\dots, x_n,y_1,\dots, y_n]$ and ideal $K_Q\subseteq R_Q$ such that the lower bound algebra $\mathcal{L}_Q$ associated to $Q$ can be expressed as $\mathcal{L}_Q = R_Q/K_Q$. Fix the lexicographical monomial order with $y_1>\cdots>y_n>x_1>\cdots >x_n$. By \cite[Theorem 1.7]{MRZ} and the proof of \cite[Theorem 3.3]{MRZ}, $\text{in}_<K_Q$ is the Stanley--Reisner ideal of a simplicial complex $\Delta$ on the vertex set $\{y_1,\dots, y_n,x_1,\dots, x_n\}$, which has a vertex decomposition compatible with $<$. Consequently, by Proposition \ref{prop:simplicial}, we have the following: \begin{proposition} The ideal $K_Q$ is geometrically vertex decomposable. When $K_Q$ is homogeneous, it is glicci.
\end{proposition} \begin{remark} It follows from \cite[Theorem 1.7]{MRZ} that $K_Q$ is homogeneous if and only if $Q$ has no \emph{frozen vertices} and $Q$ has exactly two arrows entering each vertex and two arrows exiting each vertex. \end{remark} \subsubsection*{Ideals defining equioriented type $A$ quiver loci} Let $d_0, d_1, \dots, d_n$ be a sequence of positive integers and consider the product of matrix spaces $\texttt{Hom}$ and the product of general linear groups $\texttt{GL}$ defined as follows: \[ \texttt{Hom}:=\oplus_{i = 1}^n \text{Mat}_{d_{i-1}\times d_i}(\kappa), \quad \texttt{GL}:=\oplus_{i = 0}^n \text{GL}_{d_i}(\kappa). \] The group $\texttt{GL}$ acts on $\texttt{Hom}$ on the right by conjugation: $(M_i)_{i=1}^n\bullet (g_i)_{i=0}^n = (g_{i-1}^{-1}M_i g_i)_{i=1}^n$. Closures of $\texttt{GL}$-orbits are called \textbf{equioriented type $A$ quiver loci}. \emph{Buchsbaum--Eisenbud varieties of complexes} are special cases of these quiver loci. An introduction to equioriented type $A$ quiver loci and related combinatorics can be found in \cite[Ch. 17]{MillerSturmfels}. \begin{proposition} Equioriented type $A$ quiver loci are glicci. In particular, varieties of complexes are glicci. \end{proposition} \begin{proof} Let $\Omega\subseteq \texttt{Hom}$ be an equioriented type $A$ quiver locus, and let $I(\Omega)$ be its (homogeneous and prime) defining ideal in the polynomial ring $\kappa[\texttt{Hom}]$. It follows from results of A. Zelevinsky \cite{Zelevinsky} and V. Lakshmibai and P. Magyar \cite{LakshmibaiMagyar} that there is a polynomial ring $R$ with $\kappa[\texttt{Hom}]\subseteq R$, a \emph{Kazhdan--Lusztig ideal} $J\subseteq R$, and an ideal $L$ generated by the indeterminates in $R\setminus \kappa[\texttt{Hom}]$ such that $J = I(\Omega)R+L$. (Here $I(\Omega)R$ denotes the extension of the ideal $I(\Omega)$ to $R$.)
As shown in \cite{WooYongGrobner}, each Kazhdan--Lusztig ideal Gr\"obner degenerates to the Stanley--Reisner ideal of a subword complex, and this degeneration is compatible with the vertex decomposition of the complex. Consequently, $J$ is geometrically vertex decomposable. Thus $I(\Omega)$ is geometrically vertex decomposable, hence glicci. \end{proof} \section{From G-biliaisons to geometric vertex decompositions}\label{sect:glicciGVD} In this section, we give something of a converse to Theorem \ref{thm:onestep}. In that theorem, we showed that, under mild assumptions, a geometric vertex decomposition gives rise to an elementary $G$-biliaison and showed that the isomorphism of that elementary $G$-biliaison has a very particular form. In this section, we show that every elementary $G$-biliaison in which the isomorphism has the same form as the ones constructed in Theorem \ref{thm:onestep} gives rise to a geometric vertex decomposition. The precise statement of the main theorem of this section is below. As usual, throughout this section we will let $R$ denote the polynomial ring $\kappa[x_1,\ldots,x_n]$. \begin{theorem}\label{thm:linkToGVD} Let $I$, $C$, and $N \subseteq I \cap C$ be ideals of $R$, and let $<$ be a $y$-compatible term order. Suppose that $I$ is squarefree in $y$ and that no term of any element of the reduced Gr\"obner basis of $N$ is divisible by $y$. Suppose further that there exists an isomorphism $\phi: C/N \xrightarrow{f/g} I/N$ of $R/N$-modules for some $f, g \in R$ that are non-zero-divisors on $R/N$ and satisfy $\text{in}_y(f)/g = y$. Then $\text{in}_y I = C \cap ( N+\langle y \rangle )$ is a geometric vertex decomposition of $I$. \end{theorem} \begin{proof} Recall that $I$ must have a Gr\"obner basis of the form $\{yq_1+r_1, \ldots, yq_k+r_k, h_1, \ldots, h_\ell\}$ where $y$ does not divide any term of any $q_i$ or $r_i$ for any $1 \leq i \leq k$ nor any $h_j$ for any $1 \leq j \leq \ell$ because $I$ is squarefree in $y$.
Hence, $\text{in}_y I = \langle yq_1, \ldots, yq_k, h_1, \ldots, h_\ell \rangle$, and this generating set is a Gr\"obner basis of $\text{in}_y I$. We claim first that $N = \langle h_1, \ldots, h_\ell \rangle$. Because no term of any element of the reduced Gr\"obner basis of $N \subseteq I$ is divisible by $y$, each such element must be a polynomial combination of the $h_i$ for $1 \leq i \leq \ell$, and so $N \subseteq \langle h_1, \ldots, h_\ell \rangle$, from which it follows that $\text{in}_y(N) = N$ and that $y$ is not a zero-divisor on $R/N$. Conversely, suppose there is some $h_i \in I \setminus N$ for some $1 \leq i \leq \ell$. Then there exists some $c \in C \setminus N$ and $n \in N$ such that $c f = gh_i+n$, where $c$ has been chosen so that the largest $d$ for which $y^d$ divides $\text{in}_y(c)$ is as small as possible. Taking initial $y$-forms yields \[ \text{in}_y(c)yg = \text{in}_y(c)\cdot \text{in}_y(f) =\text{in}_y(c f) = \text{in}_y(gh_i+n). \] Note that $\text{in}_y g = g$ by the assumption that $g$ divides $\text{in}_y f$ and that, if $g \in \langle y^m \rangle \setminus \langle y^{m+1} \rangle$, then $gh_i \in \langle y^m \rangle \setminus \langle y^{m+1} \rangle$ while $\text{in}_y(c)yg \in \langle y^{m+1} \rangle$. Thus, we see that $\text{in}_y(c)yg = \text{in}_y(n)$. Hence, $\text{in}_y(c)yg\in N$ (as $\text{in}_y N = N$) and so $\text{in}_y(c)\in N$ (as $N:\langle yg\rangle = N$). Set $c' = c-\text{in}_y(c) \in C \setminus N$, which has an initial $y$-form not divisible by $y^d$. But $\phi(c'+N) = \phi(c+N) = h_i+N$, contradicting the minimality of $d$. Hence, $N = \langle h_1, \ldots, h_\ell \rangle$. Next, we claim that $C = \langle q_1, \ldots, q_k, h_1, \ldots, h_\ell \rangle$. By assumption, $N \subseteq C$, and so it suffices to show that the $q_i$ for $1 \leq i \leq k$ generate $C$ over $N$. In order to establish this, we will show that $\psi: C/N \xrightarrow{y} \text{in}_y(I)/N$ is an isomorphism.
For each $1 \leq i \leq k$, let $c_i \in C \setminus N$ be a representative of the preimage under $\phi$ of the class of $yq_i+r_i$ in $I/N$ with the smallest possible $d_i$ so that $\text{in}_y(c_i) \notin \langle y^{d_i} \rangle$. We will show that the image under $\psi$ of the class of $c_i$ is the class of $yq_i$. First, we will show that $d_i = 1$ for all $1 \leq i \leq k$. Suppose, for contradiction, that some $d_i > 1$, i.e., that some $\text{in}_y(c_i) \in \langle y \rangle$. Writing $fc_i = g(yq_i+r_i)+n_i$ for some $n_i \in N$, we then have $\text{in}_y(g(yq_i+r_i)+n_i) = \text{in}_y(fc_i) = yg \cdot \text{in}_y(c_i)\in \langle y^2\rangle$. Because neither $q_i$ nor $r_i$ has any term divisible by $y$, we must have \[ y g \text{in}_y(c_i) = \text{in}_y\left(g(yq_i+r_i)+n_i\right) = \text{in}_y(n_i) \in \text{in}_y (N) = N, \] and so $\text{in}_y(c_i) \in N$. Then $c_i' = c_i-\text{in}_y(c_i) \in C\setminus N$ and $\text{in}_y(c_i') \notin \langle y^{d_i-1} \rangle$. But still $c_i'$ represents a preimage of $yq_i+r_i$, contradicting minimality of $d_i$. Hence, $\text{in}_y(c_i) \notin \langle y \rangle$, which establishes that $\text{in}_y(c_i) = c_i$. From the former fact and the relationship $y g \text{in}_y(c_i) = \text{in}_y(g(yq_i+r_i)+n_i)$, we have either $y g c_i = gyq_i$ (if $\text{in}_y(g(yq_i+r_i)+n_i) = \text{in}_y(g(yq_i+r_i))$) or $y g c_i = gyq_i+gyn_i'$ for some nonzero $n_i' \in N$ (using $N:\langle yg\rangle = N$ and $\text{in}_y(N) = N$). In either case, $\psi(c_i)$ is the class of $yq_i$ in $\text{in}_y(I)/N$, which is to say that $\psi$ is surjective. Also, $\psi$ is injective because $y$ is not a zero-divisor on $R/N$. Now because the $yq_i$ for $1 \leq i \leq k$ generate $\text{in}_y(I)$ over $N$ and $\psi$ is an isomorphism under which the preimage of the class of $yq_i$ is the class of $q_i$ for each $1 \leq i \leq k$, it must be that $C$ is generated over $N$ by $\{q_1, \ldots, q_k\}$ and that $C = \langle q_1, \ldots, q_k, h_1, \ldots, h_\ell \rangle$.
By \cite[Theorem 2.1(a)] {KMY09}, the specified generating sets for $\text{in}_y(I)$, $N$, and $C$ are all Gr\"obner bases for them, and so it follows from \cite[Theorem 2.1(b)]{KMY09} that $\text{in}_y(I) = C \cap (N+\langle y \rangle)$ is a geometric vertex decomposition of $I$. \end{proof} \begin{example}\label{ex:standard} To illustrate this correspondence between elementary $G$-biliaison and geometric vertex decomposition, we consider a classical example. If $I$ is the ideal of $2$-minors of the matrix $M = \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \end{pmatrix}$, $C = \langle x_{11}, x_{12} \rangle$, $N = \langle x_{22}x_{11}-x_{21}x_{12} \rangle$, $f = x_{23}x_{12}-x_{22}x_{13}$, and $g = x_{12}$ in $\kappa[x_{11}, \ldots, x_{23}]$, then the multiplication by $f/g$ map $[C/N](-1) \xrightarrow{f/g} I/N$ gives an elementary $G$-biliaison. Using any lexicographic order with $x_{23}$ largest, we take $C = C_{x_{23},I}$ and $N = N_{x_{23},I}$, and then $\text{in}_{x_{23}}(I) = C \cap (N+\langle x_{23} \rangle)$ is a geometric vertex decomposition. \end{example} Notice that in Theorem \ref{thm:linkToGVD} we use only the isomorphism that makes up an elementary $G$-biliaison to construct a geometric vertex decomposition in the sense of \cite{KMY09}. In this direction, we do not need to assume that the ideals $I$, $C$, and $N$ are homogeneous or saturated or even unmixed, nor that $N$ is Cohen--Macaulay or $G_0$. Of course, the isomorphism $\phi$ increases degree by $\deg(y)$ whenever that makes sense. \begin{remark}\label{rmk:invertible} In the notation and under the hypotheses of Theorem \ref{thm:onestep}, the construction in Theorem \ref{thm:onestep} produces an isomorphism $I/N_{y,I} \xrightarrow{v/u} C_{y,I}/N_{y,I}$ with $\dfrac{\text{in}_{y}(v)}{u} = y$. In particular, the hypotheses of Theorem \ref{thm:linkToGVD} are satisfied. 
It is not hard to see that the geometric vertex decomposition produced by Theorem \ref{thm:linkToGVD} is the same one assumed before applying Theorem \ref{thm:onestep}. If we begin, instead, with an isomorphism between $I/N$ and $C/N$ and accompanying hypotheses of Theorem \ref{thm:linkToGVD}, we may first apply Theorem \ref{thm:linkToGVD} to obtain a geometric vertex decomposition satisfying the hypotheses of Theorem \ref{thm:onestep}. If we then apply the construction in Theorem \ref{thm:onestep}, we obtain an isomorphism between $I/N$ and $C/N$, but it need not be the same isomorphism we began with. For example, we may begin with the multiplication by $f/g$ map from Example \ref{ex:standard} but produce the multiplication by $v/u$ map for $v = a_1f+a_2(x_{23}x_{11}-x_{21}x_{13})$ and $u = a_1x_{12}+a_2x_{11}$ for a generic choice of scalars $a_1$ and $a_2$. \end{remark} One has to be quite careful in tracking the correspondence between a particular biliaison and a geometric vertex decomposition. In particular, somewhat surprisingly, the condition that the reduced Gr\"obner basis of $N$ has no term divisible by $y$ cannot be discarded while preserving the canonical mapping noted in Remark \ref{rmk:invertible}. For example, we consider a modification of Example \ref{ex:standard} by letting $I' = I+\langle x_{23}x_{10}-x_{13}x_{20} \rangle$, $N' = N+\langle x_{23}x_{10}-x_{13}x_{20} \rangle$, and $C' = C+\langle x_{23}x_{10}-x_{13}x_{20} \rangle$. We think of this example as naturally occurring from the matrix $M' = \begin{pmatrix} x_{10} & x_{11} & x_{12} & x_{13} \\ x_{20} & x_{21} & x_{22} & x_{23} \end{pmatrix}$, from which the ideal $I'$ is generated by all $2$-minors involving any 2 of the last 3 columns or exactly the first and fourth columns. Here, taking $f' = x_{23}x_{12} - x_{22}x_{13} \in I'\setminus N'$ and $g' = x_{12} \in C' \setminus N'$ yields an isomorphism $C'/N' \xrightarrow{f'/g'} I'/N'$. 
Taking lexicographic order with respect to $x_{23}>x_{13}>x_{22}>\cdots>x_{10}$ and noting that $N'$ is prime, it is not hard to check that the hypotheses of Theorem \ref{thm:linkToGVD} are satisfied aside from the hypothesis that the reduced Gr\"obner basis of $N'$ have no term divisible by $x_{23}$. However, the geometric vertex decomposition of $I'$ with respect to $x_{23}$ is \begin{align*} &\langle x_{21}x_{13}x_{10}-x_{20}x_{13}x_{11}, x_{22}x_{11}-x_{21}x_{12}, x_{22}x_{13}x_{10}-x_{20}x_{13}x_{12}, x_{23}x_{10}, x_{23}x_{11}, x_{23}x_{12} \rangle= \\ &\langle x_{10}, x_{11}, x_{12} \rangle \cap ( \langle x_{21}x_{13}x_{10}-x_{20}x_{13}x_{11}, x_{22}x_{11}-x_{21}x_{12}, x_{22}x_{13}x_{10}-x_{20}x_{13}x_{12}\rangle+\langle x_{23} \rangle ). \end{align*} In particular, $\text{in}_{x_{23}} I' \neq C' \cap (N'+\langle x_{23} \rangle )$. In the other direction, the elementary $G$-biliaison constructed from Theorem \ref{thm:onestep} yields the isomorphism $C'/\tilde{N} \xrightarrow{f'/g'} I'/\tilde{N}$ for \[ \tilde{N} = (x_{21}x_{13}x_{10}-x_{20}x_{13}x_{11}, x_{22}x_{11}-x_{21}x_{12}, x_{22}x_{13}x_{10}-x_{20}x_{13}x_{12}), \] which is not the same elementary $G$-biliaison we began with. Remark \ref{rmk:invertible} gives rise to the question of whether or not there is a sort of moving lemma applicable to this situation that would allow us to replace the module $N$ with a Cohen--Macaulay and $G_0$ module $\tilde{N}$ that also links $C$ to $I$ but does not involve $y$. More precisely: \begin{question} With notation as in Theorem \ref{thm:linkToGVD}, suppose that $I$ is squarefree in $y$ and that there exists an elementary $G$-biliaison given by the isomorphism $\phi: C/N \xrightarrow{f/g} I/N$ of $R/N$-modules for some $ f \in I$, $g \in C$, and $\text{in}_y(f)/g = y$. Do not assume that the reduced Gr\"obner basis of $N$ does not involve $y$. From \cite[Theorem 2.1(b)]{KMY09}, $I$ must have some geometric vertex decomposition with respect to $y$. 
If $\text{in}_y(I) = \tilde{C} \cap (\tilde{N}+\langle y \rangle )$ is a geometric vertex decomposition of $I$, then Theorem \ref{thm:onestep} requires that there be an isomorphism $\tilde{C}/\tilde{N} \rightarrow I/\tilde{N}$. In particular, though, will multiplication by $f/g$ always be an isomorphism from $C/\tilde{N}$ to $I/\tilde{N}$? Need $\tilde{N}$ be Cohen--Macaulay and $G_0$? \end{question} \section{The mixed case and sequential Cohen--Macaulayness}\label{sect:nonpure} A nonpure version of vertex decomposition was introduced in \cite{BW97}, in which the authors study non-pure shellable complexes, including their homotopy types and combinatorially significant direct sum decompositions of their Stanley--Reisner rings. It has been shown that if a simplicial complex is non-pure vertex decomposable, then it is non-pure shellable \cite[Theorem 11.3]{BW97}. And it is not hard to see that a non-pure shellable simplicial complex is sequentially Cohen--Macaulay (i.e., its associated Stanley--Reisner ring is sequentially Cohen--Macaulay). For background on sequential Cohen--Macaulayness, introduced by Stanley, we refer the reader to \cite[Section III.2]{Sta96}. This story parallels the well-known history of the pure case, which is summarized in Section \ref{sect:GVD}. This non-pure version has been applied particularly effectively in the study of edge ideals (see \cite{FVT07, VTH08, FH08, Woo09}). In this section, we compare non-pure vertex decomposition with geometric vertex decomposition when $I$ is not necessarily unmixed, and we describe how geometric vertex decomposition can transfer the structure of sequential Cohen--Macaulayness in a manner similar to how $G$-biliaison transfers Cohen--Macaulayness in the unmixed case. This result is stated precisely as Theorem \ref{thm:SCM}. Throughout this section, we will assume that $\kappa$ is infinite, and we will let $R = \kappa[x_1,\ldots, x_n]$ with the standard grading. 
We begin with the definition of a vertex decomposable complex when the complex is not necessarily pure. \begin{definition}\label{def:nonpurevd}\cite[Definition 11.1]{BW97} A simplicial complex $\Delta$ is \textbf{vertex decomposable} if \begin{enumerate} \item $\Delta$ is a simplex or $\Delta = \{\emptyset\}$, or \item there exists a vertex $v$ of $\Delta$ such that \begin{enumerate} \item $\text{del}_\Delta(v)$ and $\text{lk}_\Delta(v)$ are vertex decomposable and \item no facet of $\text{lk}_\Delta(v)$ is a facet of $\text{del}_\Delta(v)$. \end{enumerate} \end{enumerate} A vertex $v$ as in condition (2) is called a \textbf{shedding vertex}. \end{definition} Let $\Delta$ be a simplicial complex on $[n]$ and $I_\Delta\subseteq R$ its Stanley--Reisner ideal. While any variable $y\in R\setminus I_\Delta$ that divides a minimal generator of $I_\Delta$ gives rise to a nondegenerate geometric vertex decomposition of $I_\Delta$ (see Definition \ref{def:gvdKMS}), $y$ need not correspond to a shedding vertex of $\Delta$. For example, if $I = (xy,xz)$, then $I = \text{in}_y I = C_{y,I} \cap (N_{y,I} +\langle y \rangle) = \langle x \rangle \cap \langle xz, y \rangle$ would be a geometric vertex decomposition, but $y$ is not a shedding vertex of $\Delta = \{\{x\}, \{y,z\}\}$ because $\{z\}$ is a facet of $\text{del}_\Delta(y) = \{\{z\}, \{x\}\}$ that is also a facet of $\text{lk}_\Delta(y)= \{\{z\}\}$. 
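This failure can be checked mechanically. The following sketch (the helper functions are ad-hoc illustrations, not from any library) builds $\Delta = \{\{x\},\{y,z\}\}$ from its facets and confirms that $\{z\}$ is a facet of both $\text{del}_\Delta(y)$ and $\text{lk}_\Delta(y)$, so that condition (2b) fails for $y$:

```python
from itertools import combinations

# Ad-hoc helpers for complexes given by their facets (illustrative, not a library API).
def faces(facets):
    out = set()
    for F in facets:
        for r in range(len(F) + 1):
            out.update(frozenset(s) for s in combinations(F, r))
    return out

def facets_of(fs):
    return {F for F in fs if not any(F < G for G in fs)}

def deletion(fs, v):
    return {F for F in fs if v not in F}

def link(fs, v):
    return {F for F in fs if v not in F and F | {v} in fs}

Delta = faces([{'x'}, {'y', 'z'}])
dl = facets_of(deletion(Delta, 'y'))   # facets {x} and {z}
lk = facets_of(link(Delta, 'y'))       # single facet {z}
shared = dl & lk                       # nonempty, so y is not a shedding vertex
print(frozenset({'z'}) in shared)      # -> True
```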
In order to prevent an ideal from being geometrically vertex decomposable via nondegenerate geometric vertex decompositions at variables that do not correspond to shedding vertices, we propose an alternative definition of geometric vertex decomposition: \begin{altdefinition}\label{def:nonpureGVD} If \begin{enumerate} \item $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$, and \item either $\sqrt{C_{y,I}} = \sqrt{N_{y,I}}$ or no minimal prime of $C_{y,I}$ is a minimal prime of $N_{y,I}$, \end{enumerate} then we say that $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ is a \textbf{geometric vertex decomposition of $I$ with respect to $y$}. \end{altdefinition} As in the original definition, we will call this geometric vertex decomposition \textbf{nondegenerate} if $C_{y,I} \neq \langle 1 \rangle$ and if $\sqrt{C_{y,I}} \neq \sqrt{N_{y,I}}$. In the unmixed case, if $I$ is geometrically vertex decomposable and one step in that decomposition is a nondegenerate geometric vertex decomposition with respect to $y$, it is automatic that the minimal primes of $N_{y,I}$ and $C_{y,I}$ must be disjoint because the minimal primes of the former must all have height one less than those of the latter by virtue of Lemma \ref{lem:height} and unmixedness of $N_{y,I}$ and $C_{y,I}$. Reinterpreting Definition \ref{def:geometricallyVertexDec} in terms of Definition \ref{def:nonpureGVD}, it is a straightforward exercise to see that a squarefree monomial ideal is geometrically vertex decomposable exactly when its Stanley--Reisner complex is vertex decomposable in the sense of Definition \ref{def:nonpurevd}. We will now describe how geometric vertex decomposition behaves somewhat analogously to $G$-biliaison in the not necessarily unmixed case. In particular, we will show in Theorem \ref{thm:SCM} that if $I$ is homogeneous and $R/N_{y,I}$ is Cohen--Macaulay, then $R/I$ is sequentially Cohen--Macaulay if and only if $R/C_{y,I}$ is sequentially Cohen--Macaulay. 
Just as in $G$-biliaison, in which $R/N_{y,I}$ is required to be not only Cohen--Macaulay but also $G_0$ in order to transfer the Cohen-Macaulay property between $R/C_{y,I}$ and $R/I$, we impose a stricter requirement on $R/N_{y,I}$ in Theorem \ref{thm:SCM} than the property we hope to pass between $R/I$ and $R/C_{y,I}$. As in the unmixed case, we begin with a lemma concerning the heights of the ideals involved: \begin{lemma}\label{lem:nonpureheight} If $I \subseteq R$ is a homogeneous ideal with nondegenerate geometric vertex decomposition $\text{in}_y I = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$ in the sense of Definition \ref{def:nonpureGVD}, then $\mbox{ht}(I) = \mbox{ht}(N_{y,I})+1 \leq \mbox{ht}(C_{y,I})$. \end{lemma} \begin{proof} Because $\mbox{ht}(I) = \mbox{ht}(\text{in}_y I)$, it suffices to show that $\mbox{ht}(\text{in}_y I) = \mbox{ht}(N_{y,I})+1$. Because every prime containing $\text{in}_y I$ must contain either $C_{y,I}$ or $N_{y,I}+\langle y \rangle$, we must have \[ \mbox{ht}(\text{in}_y I) = \min\{\mbox{ht}(C_{y,I}), \mbox{ht}(N_{y,I}+\langle y \rangle)\} = \min\{\mbox{ht}(C_{y,I}), \mbox{ht}(N_{y,I})+1\}. \] Suppose $\mbox{ht}(C_{y,I})<\mbox{ht}(N_{y,I})+1$. Then, because $N_{y,I} \subseteq C_{y,I}$, we must have $\mbox{ht}(N_{y,I}) = \mbox{ht}(C_{y,I})$. Fix $P \in \text{Min}(C_{y,I})$ with $\mbox{ht}(P) = \mbox{ht}(C_{y,I})$. Then $N_{y,I} \subseteq P$, and there cannot be a prime $Q \subsetneq P$ with $Q \in \text{Ass}(N_{y,I})$ or else we would have $\mbox{ht}(N_{y,I})<\mbox{ht}(C_{y,I})$, so $P \in \text{Min}(N_{y,I})$ as well, contradicting condition (2) of Definition \ref{def:nonpureGVD}. Hence, we must have \[ \mbox{ht}(I) = \mbox{ht}(\text{in}_y I) = \mbox{ht}(N_{y,I})+1 \leq \mbox{ht}(C_{y,I}).\qedhere \] \end{proof} Unlike in the unmixed case, we cannot hope to give an upper bound on the height of $C_{y,I}$ in terms of the heights of $I$ and $N_{y,I}$. 
For example, if $I = (yx_1, \ldots, yx_d)$ for any $d \geq 1$, then $\mbox{ht}(C_{y,I}) = \mbox{ht}(\langle x_1, \ldots, x_d \rangle) = d$ while $\mbox{ht}(I) = 1 = \mbox{ht}( 0+\langle y \rangle) = \mbox{ht}(N_{y,I}+\langle y \rangle)$. \begin{lemma}\label{lem:nonpureonestep} Suppose that $I$ is a homogeneous ideal of $R$ and that $I$ possesses a nondegenerate geometric vertex decomposition (in the sense of Definition \ref{def:nonpureGVD}) with respect to a variable $y = x_j$ of $R$. If $N_{y,I} $ has no embedded primes, then there is an isomorphism $I/N_{y,I} \cong [C_{y,I}/N_{y,I}](-1)$ as graded $R/N_{y,I}$-modules. \end{lemma} \begin{proof} We will modify the proof of Theorem \ref{thm:onestep}. As there, we have a reduced Gr\"obner basis $\{yq_1+r_1,\dots, yq_k+r_k,h_1,\dots, h_\ell\}$ for $I$, and we let $C = C_{y,I} = \langle q_1,\dots, q_k, h_1,\dots, h_\ell\rangle$ and $N = N_{y,I} = \langle h_1,\dots, h_\ell \rangle$. The modification in the argument comes in the steps showing that neither $C$ nor $I$ is contained in any minimal prime of $N$. Suppose first that $\langle q_1, \ldots, q_k \rangle \subseteq Q$ for some minimal prime $Q$ of $N$. Then also $C \subseteq Q$. Because $N \subseteq C$ and $Q$ is minimal over $N$, $Q$ must also be minimal over $C$, and so $N$ and $C$ share a minimal prime, in violation of Definition \ref{def:nonpureGVD}. Similarly, suppose $\langle yq_1+r_1, \ldots, yq_k+r_k \rangle \subseteq Q'$ for some minimal prime $Q'$ of $N$. Then $I \subseteq Q'$. But $N$ has a generating set that does not involve $y$, and so its minimal primes may be viewed as ideals of the ring $\kappa[x_1, \ldots, \hat{y}, \dots, x_n]$. Then $I \subseteq Q'$ implies that each $yq_i \in Q'$, hence each $q_i \in Q'$. But then again $N$ and $C$ share a minimal prime, in violation of Definition \ref{def:nonpureGVD}. Hence, neither $\langle q_1, \ldots, q_k \rangle$ nor $\langle yq_1+r_1, \ldots, yq_k+r_k \rangle$ is contained in any minimal prime of $N$. 
Because $N$ has no embedded primes, it follows that neither $\langle q_1, \ldots, q_k \rangle$ nor $\langle yq_1+r_1, \ldots, yq_k+r_k \rangle$ is contained in any associated prime of $N$. We now follow the remainder of the argument of Theorem \ref{thm:onestep}. \end{proof} \begin{theorem}{\label{thm:SCM}} Let $I \subseteq R$ be a homogeneous ideal and $\text{in}_y I = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$ a geometric vertex decomposition (in the sense of Definition \ref{def:nonpureGVD}). If $R/N_{y,I}$ is Cohen--Macaulay, then $R/I$ is sequentially Cohen--Macaulay if and only if $R/C_{y,I}$ is. \end{theorem} \begin{proof} Let $N = N_{y,I}$ and $C = C_{y,I}$, both of which are homogeneous because $I$ is. Because the graded $R$-submodules of $R/I$ (respectively, $R/C$) are the same as the graded $R/N$-submodules of $R/I$ (respectively, $R/C$), it suffices to show that $R/I$ is sequentially Cohen--Macaulay as an $R/N$-module if and only if $R/C$ is. Let $S$ denote $R/N$ and $m$ the homogeneous maximal ideal of $S$. Set $d = \dim(S)$. Let $\omega_S$ be the canonical module of $S$ and $M^\vee$ the $S$-Matlis dual of a finitely generated graded $S$-module $M$. By \cite[Theorem 1.4]{HS02}, it suffices to show that $H^i_m(R/I)^\vee = 0$ or $H^i_m(R/I)^\vee$ is Cohen--Macaulay of dimension $i$ for all $0 \leq i \leq \dim(R/I)$ if and only if $H^i_m(R/C)^\vee = 0$ or $H^i_m(R/C)^\vee$ is Cohen--Macaulay of dimension $i$ for all $0 \leq i \leq \dim(R/C)$. We consider the long exact sequences of local cohomology corresponding to the short exact sequences \[ 0 \rightarrow I/N \rightarrow S \rightarrow R/I \rightarrow 0 \] and \[ 0 \rightarrow C/N \rightarrow S \rightarrow R/C \rightarrow 0. \] Now $H^i_m(S) = 0$ for all $i<d$ because $S$ is Cohen--Macaulay. According to Lemma \ref{lem:nonpureonestep}, there is an isomorphism $I/N \cong C/N$. Hence, \[ H^{i-1}_m(R/I) \cong H^i_m(I/N) \cong H^i_m(C/N) \cong H^{i-1}_m(R/C) \] for all $i<d$. 
Hence, $H^i_m(R/C)^\vee$ and $H^i_m(R/I)^\vee$ are zero or nonzero alike and Cohen--Macaulay of dimension $i$ or not Cohen--Macaulay of dimension $i$ alike for all $0 \leq i \leq d-2$. By Lemma \ref{lem:nonpureheight}, $\dim(R/C) \leq \dim(R/I) = d-1$. For all $i>\dim(R/C)$, we know $H^i_m(R/C) = 0$. Hence, it only remains to show that $H^{d-1}_m(R/I)^\vee$ is either $0$ or Cohen--Macaulay of dimension $d-1$ if and only if $H^{d-1}_m(R/C)^\vee$ is either $0$ or Cohen--Macaulay of dimension $d-1$. Because $H^{d-1}_m(R/I)^\vee$ is a Noetherian $R/I$-module and $H^{d-1}_m(R/C)^\vee$ a Noetherian $R/C$-module, both have dimension at most $d-1$, and so it is enough to show that $H^i_m(H^{d-1}_m(R/I)^\vee) = 0$ for all $i<d-1$ if and only if $H^i_m(H^{d-1}_m(R/C)^\vee) = 0$ for all $i<d-1$. We consider the short exact sequences \[ 0 \rightarrow H^{d-1}_m(R/I) \rightarrow H^d_m(I/N) \rightarrow H^d_m(S) \rightarrow 0 \] and \[ 0 \rightarrow H^{d-1}_m(R/C) \rightarrow H^d_m(C/N) \rightarrow H^d_m(S) \rightarrow 0. \] By graded local duality over $S$ (see \cite[Theorem 3.6.19]{BH93}), we have \[ 0 \rightarrow \omega_S \rightarrow H^d_m(I/N)^\vee \rightarrow H^{d-1}_m(R/I)^\vee \rightarrow 0 \] and \[ 0 \rightarrow \omega_S \rightarrow H^d_m(C/N)^\vee \rightarrow H^{d-1}_m(R/C)^\vee \rightarrow 0. \] Recalling that $\omega_S$ is a Cohen--Macaulay module of dimension $d$, we have $H^i_m(\omega_S) = 0$ for all $i \neq d$, and so \[ H^i_m(H^{d-1}_m(R/I)^\vee) \cong H^i_m(H^d_m(I/N)^\vee) \cong H^i_m(H^d_m(C/N)^\vee) \cong H^i_m(H^{d-1}_m(R/C)^\vee) \] for all $i < d-1$. Therefore, $H^i_m(H^{d-1}_m(R/I)^\vee) = 0$ for all $i<d-1$ if and only if $H^i_m(H^{d-1}_m(R/C)^\vee) = 0$ for all $i <d-1$, as desired. 
\end{proof} Recalling that being Cohen--Macaulay is equivalent to being both sequentially Cohen--Macaulay and unmixed, it is not hard to see that Theorem \ref{thm:SCM} recovers the Cohen--Macaulayness implied by Corollary \ref{weakGVDimpliesgliggi} when all ideals appearing in all the vertex decompositions throughout the induction are unmixed. \begin{question} Using Definition \ref{def:nonpureGVD} and its appropriate extension to an alternate definition of geometrically vertex decomposable, is every homogeneous geometrically vertex decomposable ideal sequentially Cohen--Macaulay? Can we weaken the hypothesis in Theorem \ref{thm:SCM} that $R/N_{y,I}$ be Cohen--Macaulay to the hypothesis that it be merely sequentially Cohen--Macaulay? \end{question} \bibliographystyle{plain}
\section{High-harmonic generation in spin-orbit coupled systems - Supplementary material} \section{Selection rules -- a simple example} To illustrate the formalism introduced in the paper on a simple example, we derive the well-known result that inversion symmetry implies only odd order harmonics. We consider the Hamiltonian \begin{equation} \hat{H}(t) = \sum_{{k}} \cos({k} + A_0 \cos(\Omega t)) \hat{c}^{\dagger}_{{k}} \hat{c}_{{k}}, \end{equation} and, since we are interested in HHG, choose as operator $\mathcal O$ the charge velocity \begin{equation} v({k},t) = \frac{\partial}{\partial {k}}h({{k}},t) = -\sin({k} + A_0 \cos(\Omega t)), \end{equation} which yields the current $J(t) = \sum_{{k}} v({k},t) \langle\hat{c}^{\dagger}_{{k}}\hat{c}_{{k}}\rangle$. Whereas $h({k},t) = h(-{k},t)$ in equilibrium ($A_0=0$), in the presence of the drive, we need to extend the symmetry operation to include time as follows \begin{equation} \mathcal{P} \otimes \mathcal{T}_2 \otimes \boldsymbol{1} \equiv \begin{cases} &{k} \rightarrow -{k} \nonumber\\ &t \rightarrow t + T/2 \\ \end{cases}. \end{equation} Clearly, the group generated by this operation is isomorphic to $\boldsymbol{Z}_2$ if we identify $t$ with $t+T$. Labelling the group element above as $g$, we expand both sides of \begin{equation} \hat{g} {v}({k},t) e^{in\Omega t} \hat{g}^{-1} = {v}({k},t) e^{in\Omega t} \end{equation} to obtain \begin{equation} v(-{k},t + T/2) e^{in\Omega (t + T/2)} = v({k},t) e^{in\Omega t}. \end{equation} Since $v(-{k},t + T/2)= -v({k},t)$, $n$ is constrained by $e^{in\pi}=-1$, which implies that $n$ is odd. \section{Consequences of the selection rules} The Hamiltonian for which we will study selection rules is once again written as \begin{align} &\hat{H} = \sum_{\boldsymbol{k}} \Psi_{\boldsymbol{k}}^{\dagger} [\epsilon(\boldsymbol{k})\otimes \sigma_0 -(\alpha \sin(k_y a) -\gamma \sin(k_x a))\otimes \sigma_x \nonumber\\ &\quad \quad +(\alpha \sin(k_x a) -\gamma \sin(k_y a)) \otimes \sigma_y + B \sigma_z] \Psi_{\boldsymbol{k}} , \label{momHam} \end{align} with $\epsilon(\boldsymbol{k})=2t_h(4-\cos(k_x a) - \cos(k_y a))$. 
The band dispersions are \begin{equation} \label{bandDisp} \epsilon_{\pm}(\boldsymbol{k}) = \epsilon(\boldsymbol{k}) \pm \sqrt{ (\alpha \sin(k_y a) -\gamma \sin(k_x a))^2 + (\alpha \sin(k_x a) -\gamma \sin(k_y a))^2 + B^2 }. \end{equation} In the following we set $t_h=a=1$. {\it Linearly polarized light.} For linearly polarized light described by $A_x(t) = A_0 \cos(\Omega t)$, we consider the charge velocity \begin{equation} \label{chargeVelR} \begin{aligned} v_x(\boldsymbol{k},t) =&\frac{\partial }{\partial k_x}h(\boldsymbol{k},t) = 2 \sin(k_x + A_x(t)) \cdot \boldsymbol{1} + \alpha \cos(k_x + A_x(t)) \cdot \sigma_y + \gamma \cos(k_x + A_x(t)) \cdot \sigma_x \\ \end{aligned} \end{equation} and the symmetry ${g}\equiv\mathcal{P} \otimes \mathcal{T}_2 \otimes \mathcal{S}_{(-)}$. Since the spin transformation does not mix the $\boldsymbol{1}$, $\sigma_x$ and $\sigma_y$ matrices, it is sufficient to consider \begin{equation} \begin{aligned} & \hat{g}\cos(k_x + A_x(t))e^{in \Omega t} \sigma_{x,y}\hat{g}^{-1} =\cos(-k_x - A_x(t))e^{in \Omega (t+T/2)} (-\sigma_{x,y}) \stackrel{!}{=}\cos(k_x + A_x(t))e^{in \Omega t} \sigma_{x,y}, \\ \end{aligned} \end{equation} which leads to $e^{in\pi}=-1$, i.e., odd order harmonics only. The non-trivial relation between spin and momentum in spin-orbit coupled systems gives rise to interesting spin dynamics under irradiation. In linearly polarized light, we consider $\mathcal{P} \otimes \mathcal{T}_2 \otimes \mathcal{S}_{(-)}$. That the spin operators in the $x$ and $y$ direction have \emph{odd} harmonics can be easily seen as follows \begin{equation} g\sigma_{x,y}e^{in\Omega t} {g}^{-1} = -\sigma_{x,y}e^{in\Omega (t + T/2)}\stackrel{!}{=} \sigma_{x,y}e^{in\Omega t}, \end{equation} whereas for the $z$-component, we have \begin{equation} {g}\sigma_{z}e^{in\Omega t} {g}^{-1} = \sigma_{z}e^{in\Omega (t + T/2)} \stackrel{!}{=} \sigma_{z}e^{in\Omega t}, \end{equation} which implies \emph{even} harmonics. 
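These selection rules rest on the invariance of $h(\boldsymbol{k},t)$ under $\mathcal{P} \otimes \mathcal{T}_2 \otimes \mathcal{S}_{(-)}$, which can be verified numerically: the spin flip $\mathcal{S}_{(-)}$ is conjugation by $\sigma_z$, so the symmetry reads $\sigma_z\, h(-\boldsymbol{k}, t+T/2)\, \sigma_z = h(\boldsymbol{k},t)$. A minimal sketch with $t_h = a = 1$ and illustrative values of $\alpha$, $\gamma$, $B$ and $A_0$:

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def h(kx, ky, t, alpha, gamma, B, A0=0.2, Om=1.0):
    # Bloch Hamiltonian of Eq. (momHam), minimally coupled to A_x(t) = A0 cos(Om t)
    qx, qy = kx + A0 * np.cos(Om * t), ky
    eps = 2 * (4 - np.cos(qx) - np.cos(qy))
    return (eps * s0
            - (alpha * np.sin(qy) - gamma * np.sin(qx)) * sx
            + (alpha * np.sin(qx) - gamma * np.sin(qy)) * sy
            + B * sz)

# P (x) T_2 (x) S_(-): k -> -k, t -> t + T/2, (sx, sy, sz) -> (-sx, -sy, sz)
rng = np.random.default_rng(0)
kx, ky, t = rng.uniform(-np.pi, np.pi, 3)
T = 2 * np.pi                      # driving period for Om = 1
lhs = sz @ h(-kx, -ky, t + T / 2, 0.5, 0.3, 0.7) @ sz
dev = np.max(np.abs(lhs - h(kx, ky, t, 0.5, 0.3, 0.7)))
print(dev < 1e-12)                 # -> True
```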
An oscillating magnetization will contribute to the power spectrum, on top of any charge current, through $I_{i}(\omega)\equiv \abs{i\omega J_i(\omega) + (\frac{\omega^2}{c})M_i(\omega)}^2$ with $M_i(\omega)=N_{\boldsymbol{k}}^{-1}\sum_{\boldsymbol{k}}\expval{\sigma_i}(\omega)$, where $c$ is the speed of light and $N_{\boldsymbol{k}}$ is the number of ${\boldsymbol{k}}$ points. While the expectation value of $\sigma_z$ is zero in equilibrium states of the Rashba model, circularly polarized light could induce a nonzero expectation value and hence even harmonics - specifically $n=0,4,8,12,...$ from Eqs. \eqref{circSymClockwise} and \eqref{circSymAntiClockwise} below \cite{Zhou_2007}. {\it Circularly polarized light.} To consider the effect of circularly polarized light, we begin by noting that \begin{equation} \label{chargeVelx} \begin{aligned} v_x({\boldsymbol{k}},t) &= 2 \sin(k_x + A_x(t)) \cdot \sigma_0 + \alpha \cos(k_x + A_x(t)) \cdot \sigma_y + \gamma \cos(k_x + A_x(t)) \cdot \sigma_x \\ \end{aligned} \end{equation} and \begin{equation} \label{chargeVely} \begin{aligned} v_y({\boldsymbol{k}},t) &= 2 \sin(k_y + A_y(t)) \cdot \sigma_0 - \alpha \cos(k_y + A_y(t)) \cdot \sigma_x - \gamma \cos(k_y + A_y(t)) \cdot \sigma_y . \end{aligned} \end{equation} \subsection{Rashba model} To investigate the selection rules for circularly polarized harmonics, we define the following symmetry \begin{equation} \label{circSymClockwise} \mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{90^{\circ}} =\begin{cases} & (k_x,k_y) \rightarrow (k_y,-k_x) \\ &t \rightarrow t + T/4 \\ & (\sigma_x, \sigma_y, \sigma_z) \rightarrow (\sigma_y,-\sigma_x, \sigma_z) \end{cases} \end{equation} (also given in the paper) which is a symmetry valid for \begin{align} A_x &= A_0 \sin(\Omega t), \quad A_y = A_0 \cos(\Omega t) \label{circLight} \end{align} and $\gamma=0$. 
Note that the time translation $t \rightarrow t + T/4$ results in the same type of rotation for the vector potential as in the spatial sector and the spin sector, namely $(A_x(t),A_y(t)) \rightarrow (A_y(t),-A_x(t))$. The invariance of $\hat H$ follows from \begin{equation} \begin{aligned} &\hspace{-1cm}[ \mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{90^{\circ}}] {h(\boldsymbol{k}, t)} [\mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{90^{\circ}}]^{-1} \\ =&- (\alpha \sin(-k_x - A_x(t)) - \gamma \sin(k_y + A_y(t))) \otimes \sigma_y \\ &+ (\alpha \sin(k_y + A_y(t)) - \gamma \sin(-k_x - A_x(t))) \otimes (-\sigma_x) \\ \stackrel{!}{=}& - (\alpha \sin(k_y + A_y(t)) - \gamma \sin(k_x + A_x(t))) \otimes \sigma_x \\ &+ (\alpha \sin(k_x + A_x(t)) - \gamma \sin(k_y + A_y(t))) \otimes \sigma_y = {h(\boldsymbol{k}, t)} \\ \end{aligned} \end{equation} implying that $\gamma$ must be zero. Hence, we have found a symmetry of the Rashba model in circularly polarized light. The quantity we are interested in is the emitted radiation with circular polarization for which we define the charge velocities \begin{equation} v_{\pm}(k,t) = v_x(k,t) \pm i v_y(k,t) , \end{equation} where +(-) refers to right and left hand circular polarization, respectively. 
We have for $\gamma=0$ \begin{equation} \begin{aligned} v_{\pm}(k,t) &= 2\big( \sin(k_x + A_x(t)) \pm i \sin(k_y + A_y(t)) \big) \\ &+\alpha \cos(k_x + A_x(t)) \sigma_y \mp i \alpha \cos(k_y + A_y(t)) \sigma_x \\ \end{aligned} \end{equation} and \begin{equation} \begin{aligned} &[ \mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{90^{\circ}}] v_{\pm}(k,t) e^{in\Omega t}[ \mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{90^{\circ}}]^{-1} \\ &= e^{in\Omega(t + T/4)}\{2\big( \sin(k_y + A_y(t)) \pm i \sin(-k_x - A_x(t)) \big) \\ &+\alpha \cos(k_y + A_y(t)) (-\sigma_x) \mp i \alpha \cos(-k_x - A_x(t)) \sigma_y \} \\ &\stackrel{!}{=}v_{\pm}(k,t) e^{in\Omega t}, \\ \end{aligned} \end{equation} from which we obtain the following requirement: \begin{equation} e^{in\frac{\pi}{2}} = \pm i . \label{selection_circular} \end{equation} The symmetries just derived also hold for the model without SOC terms, which has the same selection rules. As detailed in Ref.~\onlinecite{Neufeld_2017}, the power spectrum can be calculated as $I_{\pm}(n\Omega) = \abs{a_{\pm}(n\Omega)}^2$, with \begin{equation}\label{circPolFormula} a_{\pm}(n\Omega) = \mathcal{FT}\Big[\frac{d}{dt}J_{x}(k,t) \pm i \frac{d}{dt}J_{y}(k,t) \Big]. \end{equation} \subsection{Dresselhaus model} For the Dresselhaus model, we have a different symmetry \begin{equation} \label{circSymAntiClockwise} \mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{-90^{\circ}} =\begin{cases} & (k_x,k_y) \rightarrow (k_y,-k_x) \\ &t \rightarrow t + T/4 \\ & (\sigma_x, \sigma_y, \sigma_z) \rightarrow (-\sigma_y, \sigma_x, \sigma_z) \end{cases} \end{equation} and a calculation completely analogous to that of the previous subsection yields $e^{in\frac{\pi}{2}} = \pm i$ as for the Rashba model. 
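As a numerical sanity check, the symmetry \eqref{circSymClockwise} can be verified directly by implementing $\mathcal{S}_{90^{\circ}}$ as conjugation by $U = e^{-i\pi\sigma_z/4}$, and the requirement $e^{in\pi/2} = \pm i$ can be solved for the allowed harmonic orders. A sketch with illustrative parameter values:

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def h_rashba(kx, ky, t, alpha=0.5, B=0.3, A0=0.2, Om=1.0):
    # Rashba model (gamma = 0) driven by the circular field of Eq. (circLight)
    qx, qy = kx + A0 * np.sin(Om * t), ky + A0 * np.cos(Om * t)
    eps = 2 * (4 - np.cos(qx) - np.cos(qy))
    return eps * s0 - alpha * np.sin(qy) * sx + alpha * np.sin(qx) * sy + B * sz

# S_90: (sx, sy, sz) -> (sy, -sx, sz) is conjugation by U = exp(-i pi sz / 4)
U = np.diag(np.exp(np.array([-1j, 1j]) * np.pi / 4))
rng = np.random.default_rng(1)
kx, ky, t = rng.uniform(-np.pi, np.pi, 3)
T = 2 * np.pi
# R_90 (x) T_4 (x) S_90: (kx, ky) -> (ky, -kx) and t -> t + T/4
lhs = U @ h_rashba(ky, -kx, t + T / 4) @ U.conj().T
dev = np.max(np.abs(lhs - h_rashba(kx, ky, t)))

# e^{i n pi/2} = +i gives n = 1, 5, 9, ...; e^{i n pi/2} = -i gives n = 3, 7, 11, ...
plus = [n for n in range(1, 13) if abs(np.exp(1j * n * np.pi / 2) - 1j) < 1e-9]
minus = [n for n in range(1, 13) if abs(np.exp(1j * n * np.pi / 2) + 1j) < 1e-9]
print(dev < 1e-12, plus, minus)    # -> True [1, 5, 9] [3, 7, 11]
```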
\section{Additional Results} \subsection{Harmonic orders} To help with the interpretation of this section, we remind the reader of a symmetry which we expect to hold for both circularly and linearly polarized light \begin{equation} \mathcal{P} \otimes \mathcal{T}_2 \otimes \mathcal{S}_{(-)} =\begin{cases} &\boldsymbol{k} \rightarrow -\boldsymbol{k} \nonumber\\ &t \rightarrow t + T/2 \\ & (\sigma_x, \sigma_y, \sigma_z) \rightarrow (-\sigma_x,-\sigma_y, \sigma_z) \nonumber \end{cases}. \end{equation} The presented selection rule is, however, not the entire story when it comes to the Rashba--Dresselhaus model with $\alpha=\gamma$, \begin{equation} h(\boldsymbol{k}) = \epsilon(\boldsymbol{k})\sigma_0 - \alpha (\sigma_x \pm \sigma_y) (\sin(k_y) \mp \sin(k_x)). \end{equation} To illustrate this, we present results for circular polarization for three different sets of $(\alpha, \gamma)$ in Fig.~\ref{circPolFig}. HHG intensities for right and left hand polarized harmonics are calculated as in Eq.~\eqref{circPolFormula}. The leftmost panel illustrates that the selection rules following from Eq.~(\ref{selection_circular}) hold. When $\alpha=\pm \gamma$, we find the following symmetry \begin{equation} \mathcal{P} \otimes \mathcal{T}_2 \otimes \mathcal{S}_{x,\mp y} =\begin{cases} &\boldsymbol{k} \rightarrow -\boldsymbol{k} \nonumber\\ &t \rightarrow t + T/2 \\ & \sigma_{x(y)} \rightarrow \mp \sigma_{y(x)} \nonumber \end{cases}, \end{equation} and at the same time no symmetry involving $t \rightarrow t + T/4$; from this we would anticipate only odd-order harmonics (and not the pattern seen in the leftmost panel). The observation that the middle panel still bears signatures of such a symmetry is explained by the fact that $(\sigma_x \pm \sigma_y)$ is a constant of motion. Lastly, the rightmost spectrum bears the characteristic of a system with a two-fold rotational symmetry and harmonics of all odd orders are allowed for both chiralities of the emitted radiation. 
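The conservation law invoked here is easy to verify: at $\alpha = \gamma$ the Hamiltonian involves only $\sigma_0$ and $(\sigma_x+\sigma_y)$, so $[h(\boldsymbol{k}), \sigma_x+\sigma_y] = 0$ at every $\boldsymbol{k}$ (here with $B = 0$, since the Zeeman term $B\sigma_z$ does not commute with $\sigma_x+\sigma_y$). A quick check with illustrative values:

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])

def h(kx, ky, alpha, gamma):
    # Eq. (momHam) with B = 0; for alpha = gamma it collapses to
    # eps * s0 - alpha * (sin ky - sin kx) * (sx + sy)
    eps = 2 * (4 - np.cos(kx) - np.cos(ky))
    return (eps * s0
            - (alpha * np.sin(ky) - gamma * np.sin(kx)) * sx
            + (alpha * np.sin(kx) - gamma * np.sin(ky)) * sy)

rng = np.random.default_rng(2)
kx, ky = rng.uniform(-np.pi, np.pi, 2)
H = h(kx, ky, alpha=0.5, gamma=0.5)
comm = H @ (sx + sy) - (sx + sy) @ H
print(np.max(np.abs(comm)) < 1e-12)   # -> True
```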
\begin{figure}[ht] { \begin{minipage}[c][1\width]{ 0.3\textwidth} \centering \includegraphics[width=\textwidth]{circularPolarization1.png} \end{minipage}} \hfill { \begin{minipage}[c][1\width]{ 0.3\textwidth} \centering \includegraphics[width=\textwidth]{circularPolarization2.png} \end{minipage}} \hfill { \begin{minipage}[c][1\width]{ 0.3\textwidth} \centering \includegraphics[width=\textwidth]{circularPolarization3.png} \end{minipage}} \caption{High harmonic spectra for three sets of $(\alpha,\gamma)$ where $\mu=5$ and $E_0=0.2$ and right hand circularly polarized light is applied.} \label{circPolFig} \end{figure} \subsection{Role of filling and chemical potential} In Fig.~\ref{fillingSupFig}, we present two panels where we compare parameter sweeps of $\alpha$ for constant $\mu$ (left panel) and constant filling (right panel). While the intensities of the HHG spectra are different, the cutoff scaling with $\alpha$ is the same in both cases. \begin{figure}[ht] { \begin{minipage}[c][1\width]{ 0.47\textwidth} \centering \includegraphics[width=\textwidth]{HHG_figure_2_png.png} \end{minipage}} \hfill { \begin{minipage}[c][1\width]{ 0.47\textwidth} \centering \includegraphics[width=\textwidth]{HHG_figure_SM_fig2_right.png} \end{minipage}} \caption{Figure showing the influence of keeping a constant chemical potential (left panel) and of keeping the filling constant while changing the chemical potential (right). } \label{fillingSupFig} \end{figure} \subsection{Effect on the magnetization} We find that the ratio $\abs{(\frac{\omega^2}{c}) M(\omega)} /\abs{i\omega J(\omega)}$ is small, and that the total spectrum is not qualitatively changed by it. The following results should thus be regarded as a validation of the selection rules in circularly polarized light presented in the paper and not as an experimentally detectable spectrum. 
Figure~\ref{magCircFig} shows $\abs{(\frac{\omega^2}{c}) M_i(\omega)}^2$ for $i=x,y,z$, where the left and right panels correspond to $\alpha=0.5, \gamma=0.0$ and $\alpha=0.0, \gamma=0.5$, respectively. These figures should be interpreted in light of the selection rules following from $\mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{90^{\circ}}$ in Eq.~\eqref{circSymClockwise}, relevant for the left panel, and from $\mathcal{R}_{90^{\circ}} \otimes \mathcal{T}_4 \otimes \mathcal{S}_{-90^{\circ}}$ in Eq.~\eqref{circSymAntiClockwise}, relevant for the right panel. The observed harmonics are compatible with the selection rules following from these symmetries. \begin{figure}[ht] { \begin{minipage}[c][1\width]{ 0.47\textwidth} \centering \includegraphics[width=\textwidth]{circularPolarizationMag1.png} \end{minipage}} \hfill { \begin{minipage}[c][1\width]{ 0.47\textwidth} \centering \includegraphics[width=\textwidth]{circularPolarizationMag2.png} \end{minipage}} \caption{Figure displaying the harmonic spectra for the magnetizations only. The driving field strength is $E_0=0.2$ and $\mu=4.0$.} \label{magCircFig} \end{figure} \subsection{Anisotropy} Figure~\ref{Band_gaps} aims to clarify why, for certain choices of $\alpha$ and $\gamma$, the harmonic intensity may be enhanced in one direction compared to the orthogonal direction, as shown in Fig.~5 in the paper. To this end, we plot the band gap in the Brillouin zone, which is given by \begin{equation} \label{bandGapEquation} \Delta E = 2\sqrt{ (\alpha \sin(k_y) -\gamma \sin(k_x))^2 + (\alpha \sin(k_x) -\gamma \sin(k_y))^2 + B^2 }. \end{equation} In the two leftmost panels of Fig.~\ref{Band_gaps}, the four-fold rotational symmetry implicit in some of the symmetries in the paper is clearly broken.
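As a numerical illustration of Eq.~\eqref{bandGapEquation}, the gap can be evaluated along the two diagonal cuts through the $\Gamma$ point. This is only a sketch: the lattice constant is set to one, and the parameter values (including the Zeeman term $B$) are illustrative, not taken from the figures.

```python
import numpy as np

def band_gap(kx, ky, alpha, gamma, B):
    """Band gap of the Rashba-Dresselhaus model with a Zeeman term B
    (lattice constant set to one)."""
    return 2.0 * np.sqrt((alpha * np.sin(ky) - gamma * np.sin(kx)) ** 2
                         + (alpha * np.sin(kx) - gamma * np.sin(ky)) ** 2
                         + B ** 2)

# Cuts through the Gamma point along theta_pol = +/- pi/4.
k = np.linspace(-np.pi, np.pi, 401)
alpha, gamma, B = 0.5, 0.5, 0.1  # illustrative parameters
gap_plus = band_gap(k / np.sqrt(2), k / np.sqrt(2), alpha, gamma, B)    # theta = +pi/4
gap_minus = band_gap(k / np.sqrt(2), -k / np.sqrt(2), alpha, gamma, B)  # theta = -pi/4

# For alpha = gamma the gap is flat (equal to 2B) along theta = +pi/4 but
# disperses along theta = -pi/4: the four-fold symmetry is clearly broken.
print(gap_plus.max() - gap_plus.min(), gap_minus.max() - gap_minus.min())
```

For $\gamma=0$ (or $\alpha=0$) the same computation gives identical gaps along both cuts, consistent with the absence of anisotropy noted below.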
Restricting attention to the leftmost panel, a field acceleration along $\theta_{\textrm{pol}}=-\frac{\pi}{4}$ gives rise to enhanced intensity relative to the orthogonal direction: judging from how the expectation value of the spin changes across the Brillouin zone, this direction drives magnetization dynamics, whereas this happens to a much lesser extent along the $\theta_{\textrm{pol}}=\frac{\pi}{4}$ direction (see Fig.~2 in Ref.~\onlinecite{Liu_2006}). While the magnetic dipole contribution to the radiation may be small, the Pauli spin operators enter into the expression for the charge current as a consequence of spin-momentum locking (see for instance Eqs.~\eqref{chargeVelx} and \eqref{chargeVely}). Thus, their dynamics will strongly affect the resulting HHG spectrum. The different band structures for cuts along $\theta_{\textrm{pol}}=\pm\frac{\pi}{4}$ through the $\Gamma$ point also suggest an effect of the polarization direction on the HHG spectrum. Note that this anisotropy is absent if $\gamma=0$ (or $\alpha=0$), and indeed there is no polarization dependence in the HHG spectrum in this case. \begin{figure}[h] \centering \includegraphics[angle=-0, width=\textwidth]{Band_gaps.png} \caption{Plot of the band gaps for various models according to Eq.~\eqref{bandGapEquation}. } \label{Band_gaps} \end{figure} In Fig.~\ref{magSupFig}, two panels corresponding to the charge current (left panel) and the magnetization scaled by a $\big(\frac{\omega^2}{c}\big)$ factor (right panel) are shown. The right panel indicates the important role played by magnetization dynamics in generating the spectra seen in the left panel. Further evidence of the role played by the dynamics of $\sigma_x, \sigma_y$ in the charge current is found by noting the similarity of the cutoff positions in the two panels. In addition, for both $\theta_{\textrm{pol}}=\pm\frac{\pi}{4}$, the upper bound of $\Delta E$ coincides well with the observed cutoffs.
Lastly, to show that this is generic for the given Hamiltonian, we present a simulation with $\mu=4$ in Fig.~\ref{mu4polDir}. \begin{figure}[ht] { \begin{minipage}[c][1\width]{ 0.47\textwidth} \centering \includegraphics[width=\textwidth]{HHG_pol_dir_SM_left_panel.png} \end{minipage}} \hfill { \begin{minipage}[c][1\width]{ 0.47\textwidth} \centering \includegraphics[width=\textwidth]{HHG_pol_dir_SM_right_panel.png} \end{minipage}} \caption{Figure showing the anisotropy for the radiated intensity in the left panel and the magnetization $M_x$ in the right panel. The field strength is $E_0=0.5$ here.} \label{magSupFig} \end{figure} \begin{figure}[h] \centering \includegraphics[angle=-0, width=0.5\textwidth]{HHG_pol_dir_mu_4.png} \caption{Figure showing the anisotropy for radiated intensity with parameters as in Fig. \ref{magSupFig} but with $\mu=4$. } \label{mu4polDir} \end{figure} \end{document}
\section{Introduction} Multiple unmanned aerial vehicles (UAVs) have attracted considerable interest in recent years, with prospective applications including disaster area or maritime surveillance, border patrol, environmental sensing, and delivery services (\cite{jenie2016taxonomy}). This implies that UAVs will fly in an integrated airspace containing a variety of possible conflict objects (see Fig.~\ref{fig:environment}). However, one key issue that limits the extensive application of UAVs and their integration into such a complex, dynamic, integrated airspace system is the collision avoidance problem (\cite{dalamagkidis2008unmanned, dalamagkidis2011integrating, shively2018unmanned}), which is also called conflict detection and resolution in the literature. Various approaches for collision avoidance of UAVs have been developed in recent years. \cite{898217} presented a cohesive discussion and comparative evaluation of 68 modeling methods for conflict detection and resolution. \cite{lalish2012distributed} discussed related approaches based on the degree of centralization, the type of vehicle model, the number of vehicles, and the heterogeneity or homogeneity of the vehicles. \cite{hoy2015algorithms} mainly reviewed the development of model predictive control (MPC), sensor-based boundary following, sensor-based path planning, and some reactive methods for collision avoidance; moving obstacles and multi-vehicle situations were also discussed. \cite{mahjri2015review} summarized the functions of a collision avoidance system in three steps, namely sensing, detection, and resolution, and reviewed the related approaches from these three aspects. Besides, \cite{zhang2018survey} presented an overview of collision avoidance approaches at large, middle, and small scales. The above-mentioned surveys cover the related research from many different aspects, but one common fact they indicate is that most existing approaches are designed for specific conflict scenarios (\cite{garcia2016biologically},\cite{dentler2019collision}).
This means that no single approach can completely solve the problem. To develop a solution for general conflict resolution in the complex, dynamic, integrated airspace, \cite{jenie2016taxonomy} first proposed a taxonomy of conflict detection and resolution approaches for UAVs based on their types of surveillance, coordination, maneuver, and autonomy, and then discussed possible combinations of available approaches for a complete solution. However, specific implementations of such approach combinations were not given. Therefore, this paper aims to design a hierarchical collision avoidance system, capable of detecting and resolving general conflicts, for multiple autonomous fixed-wing UAVs in the complex, dynamic, integrated airspace. The main contribution of this paper is the proposal and implementation of the hierarchical collision avoidance architecture. \begin{itemize} \item[-] Firstly, a three-layered collision avoidance architecture relying on local communication and onboard sensing is proposed for multiple fixed-wing UAVs, by analyzing the characteristics of existing methods and hierarchically modeling the local airspace. \item[-] Then, a specific algorithmic implementation is studied for each layer of the collision avoidance architecture. \item[-] Finally, the effectiveness of the proposed methodology is evaluated and validated by comparative simulations carried out under both deterministic and probabilistic sensing conditions.
\end{itemize} \section{Problem Formulation} \subsection{Preliminary concept definition} Before further discussion, two concepts should be clarified: \begin{definition}[\textbf{Collision}]\label{def_1} For the $i$-th UAV in an $n$-UAV system ($i\in\{1,\cdots,n\}$) and any possible conflict object $o$ in the airspace, a collision happens if \begin{equation}\label{eq:collision_condition} d_{i,o}\leq R_s \end{equation} where $d_{i,o}$ represents the distance between the $i$-th UAV and the conflict object $o$, and $R_s$ denotes the restricted safe radius of the UAVs. \end{definition} \begin{definition} [\textbf{Conflict}] For a UAV, a conflict is detected if a collision is predicted to happen within a specific time period $\tau_w$ in the future, where $\tau_w$ is the early warning time for collision conflicts. \end{definition} The two main functions of collision avoidance control are thus to detect potential conflicts and to take actions to avoid collisions when conflicts are detected. \subsection{Conflict scenarios analysis} A collision avoidance system aims to enable the UAVs to handle all possible collision conflicts so as to ensure safe and orderly operations. To this end, the various possible conflict objects in the complex integrated airspace are first discussed. See Table \ref{tb:obstacle_categorization}.
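The two definitions above translate directly into predicates. The following is a minimal sketch, in which the constant-relative-velocity prediction model, the sampling step, and the numerical values are illustrative assumptions rather than the paper's detection scheme:

```python
import math

def is_collision(d_io, R_s=30.0):
    """Definition 1: a collision happens iff the distance d_io <= R_s."""
    return d_io <= R_s

def is_conflict(p_rel, v_rel, tau_w, R_s=30.0, dt=0.1):
    """Definition 2: a collision is predicted within the warning time tau_w.

    p_rel, v_rel are 2D relative position/velocity tuples; a constant
    relative velocity over the horizon is an illustrative assumption.
    """
    t = 0.0
    while t <= tau_w:
        d = math.hypot(p_rel[0] + v_rel[0] * t, p_rel[1] + v_rel[1] * t)
        if is_collision(d, R_s):
            return True
        t += dt
    return False

# Head-on closing at 38 m/s from 100 m: d = 30 m is reached after ~1.84 s,
# so a conflict is flagged for tau_w = 3 s but not for tau_w = 1 s.
print(is_conflict((100.0, 0.0), (-38.0, 0.0), tau_w=3.0))  # -> True
print(is_conflict((100.0, 0.0), (-38.0, 0.0), tau_w=1.0))  # -> False
```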
\begin{table}[htb] \caption{Classifications of various conflict objects in the integrated airspace}\label{tb:obstacle_categorization} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{threeparttable} \begin{tabular}{c|c|c|c} \hline \multicolumn{2}{c|}{ Classification Principles} & Static &Dynamic \\\hline \multirow{6}{*}{Non-cooperative} & \multirow{3}{*}{Unknown} & \multirow{3}{*}{new buildings} & birds \\ ~ & ~ & ~ & air masses \\ ~ & ~ & ~ & enemy UAVs \\ \cline{2-4} ~ & \multirow{3}{*}{Known} & mountains & ~ \\ ~ & ~ & old buildings & ~\\ ~ & ~ & lighthouses & ~ \\\hline \multirow{3}{*}{Cooperative} &\multirow{2}{*}{Unknown} & ~ &civil aircraft\\ ~ & ~ & ~ & other UAVs \\ \cline{2-4} ~ & Known & ~ & neighbor UAVs \\ \hline \end{tabular} \end{threeparttable} \end{center} \end{table} First, considering their motion states, conflict objects are classified as static or dynamic. Second, according to whether they show active avoidance intention during conflict resolution, they are classified as cooperative or non-cooperative. For example, objects like flying birds, balloons, and air masses, which are likely to disturb the flight but cannot perform active avoidance when conflicts exist, are classified as non-cooperative. Civil aircraft are treated as cooperative because they can generally take active collision avoidance maneuvers based on common rules, although the fact that UAVs are unknown to the civil aircraft, and vice versa, makes the cooperation rather challenging. Third, based on the way information is acquired, objects whose information is obtained through prior knowledge or active communication fall into the known category. Other objects, such as new buildings or other aircraft, which require real-time perception, fall into the unknown category.
\begin{figure}[t] \begin{center} \includegraphics[width=8.4cm]{environment} \caption{Prospective mission airspace and possible conflict objects therein} \label{fig:environment} \end{center} \end{figure} \subsection{Collision avoidance objective} This paper mainly studies real-time online collision avoidance. Therefore, known environmental objects, which can generally be handled before the flight through trajectory pre-planning, are not the focus of this paper. For the remaining conflict objects, taking the $i$-th UAV in an $n$-UAV system as a reference, denote the set of its neighbor UAVs as $\mathcal{N}_i$ and the set of other potential unknown conflict objects as $\mathcal{O}_i$. Then all possible conflict objects of the $i$-th UAV can be represented as the augmented obstacle set: $$\mathcal{O}_i^{aug}\coloneqq \mathcal{N}_i\cup\mathcal{O}_i$$ According to Definition \ref{def_1}, the primary objective of collision avoidance control is to keep the separation of the $i$-th UAV from all obstacles in $\mathcal{O}_i^{aug}$ larger than $R_s$, i.e., to ensure \begin{equation}\label{eq:CA_requirement} d_{i,o} > R_s, \forall o\in\mathcal{O}_i^{aug} \end{equation} Moreover, besides the collision avoidance requirement in \eqref{eq:CA_requirement}, dynamic constraints (a minimum cruising speed and a limited heading rate) as well as the optimization of maneuver energy consumption and of the required task performance index should also be considered in the collision avoidance strategy. \subsection{Kinematics} This paper studies the collision avoidance problem for UAVs implementing planar flights.
Thus the fixed-wing UAVs are modeled by the unicycle kinematics: \begin{equation}\label{eq:kinematics} \begin{cases} \dot x =v\cos\phi \\ \dot y=v\sin\phi \\ \dot\phi = u \end{cases} \end{equation} where $(x,y,\phi)^T$ represents the state vector of the UAV, $(x,y)^T$ denotes the position and $\phi$ the heading angle, $v$ is the cruising speed, which is set to be constant during the flight, and the control input $u = \omega$ denotes the heading rate of the UAV. Meanwhile, the control input is subject to the following constraint: \begin{equation} u \in \mathcal{U}, \mathcal{U}\coloneqq\{\omega|-\omega_{max}\leq \omega \leq \omega_{max}\} \end{equation} where $\omega_{max}$ represents the upper bound of the heading rate. Considering the discrete control process during the flight, we use the second-order Runge-Kutta method to obtain the discrete kinematics model. \section{Hierarchical collision avoidance architecture} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table*}[htb] \caption{Algorithm review}\label{tb:Algorithm_comparation} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{threeparttable} \begin{tabular}{c|c|cc|ccc|c} \hline\hline ~ & ~& \tabincell{c}{Computation \\ complexity}& Optimality & MV & MO & IRM & References \\\hline \multirow{4}{*}{\tabincell{c}{Path \\ planning}} & Graph search approaches & high & $\blacktriangle$ &\cmark&$ \blacktriangle$&$\blacktriangle$&[1],[2]\\ ~ & Mathematical programming & high& \cmark& \cmark & $\blacktriangle$& \cmark&[1],[5] \\ ~ & Artificial heuristic approaches & high & \cmark & \cmark&$\blacktriangle$& \cmark& [3],[5] \\ ~ & Potential field based planning& low & $\blacktriangle$ &\cmark&$\blacktriangle$&$\blacktriangle$& [1]\\ \hline \multirow{2}{*}{\tabincell{c}{Optimized \\ control}} & Game theory based approaches & high & $\blacktriangle$& \cmark& \xmark& \cmark & [6],[7]\\ ~ & Distributed model predictive control & $\bigstar$ & \cmark & \cmark &\cmark&\cmark& [4] \\
\hline \multirow{3}{*}{\tabincell{c}{Reactive \\ approaches}} & Geometric approaches & low & \xmark &$\blacktriangle$&\cmark& $\blacktriangle$ & [4]\\ ~ & Rule-based approaches & low & \xmark&\cmark&\xmark&$\blacktriangle$&[1]\\ ~ & Potential field based reactive approaches & low & \xmark &\cmark&\cmark&$\blacktriangle$& [1],[4]\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] Reference: [1] \cite{zhang2018survey}, [2] \cite{dadkhah2012survey}, [3] \cite{yu2015sense}, [4] \cite{hoy2015algorithms}, [5] \cite{mahmoudzadeh2018online}, [6] \cite{mylvaganam2018autonomous}, [7] \cite{mylvaganam2017differential} \item[*] Key: MV (\textbf{M}ultiple \textbf{V}ehicles), MO (\textbf{M}oving \textbf{O}bstacles), IRM (\textbf{I}nput \textbf{R}estricted \textbf{M}odel) \item[*] Symbols: $\bigstar$ (Not necessarily high), $\blacktriangle$ (With some disadvantages). \end{tablenotes} \end{threeparttable} \end{center} \end{table*} \subsection{Three-layered collision avoidance framework} \begin{figure} \begin{center} \includegraphics[width=6.0cm]{local_airspace_partition} \caption{The three-layered conflict detection region} \label{fig:local_airspace_partition} \end{center} \end{figure} The two main functions of collision avoidance control can be briefly described as conflict detection and resolution. Conflict detection using a single approach can easily fail or be delayed because of sensing inaccuracy and uncertainty, or communication delays and interruptions. Besides, the approaches for conflict resolution in the literature have different advantages and disadvantages in different conflict situations. Therefore, a three-layered collision avoidance architecture, including a three-layered airspace partition for hierarchical conflict detection and a three-layered complementary conflict resolution strategy, is proposed in this subsection.
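For reference, the discrete kinematics mentioned earlier (a second-order Runge-Kutta step applied to the unicycle model in Eq.~\eqref{eq:kinematics}) can be sketched as follows. The paper does not specify which second-order scheme is used, so the midpoint rule shown here is an assumption, and the numerical values are illustrative:

```python
import math

def rk2_step(state, v, u, dt):
    """One midpoint (second-order Runge-Kutta) step of the unicycle model.

    state = (x, y, phi); v is the constant cruising speed, u the heading
    rate, both held constant over the step of length dt.
    """
    x, y, phi = state
    phi_mid = phi + 0.5 * dt * u          # heading at the midpoint of the step
    x_next = x + dt * v * math.cos(phi_mid)
    y_next = y + dt * v * math.sin(phi_mid)
    phi_next = phi + dt * u
    return (x_next, y_next, phi_next)

# Straight flight (u = 0) at v = 19 m/s with dt = 0.1 s: x advances by
# v*dt = 1.9 m per step while y and phi stay at zero.
s = rk2_step((0.0, 0.0, 0.0), v=19.0, u=0.0, dt=0.1)
print(s)
```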
\subsubsection{\textbf{Three-layered airspace for hierarchical conflict detection}} Dynamic properties at different ranges from the UAV can vary greatly. Thus, a conflict detection region $\Omega_c$ is introduced and partitioned into three layers to implement hierarchical conflict detection: \begin{equation} \begin{split} &\qquad\Omega_c=\Omega_o \cup \Omega_m \cup \Omega_i \\ &\begin{cases} \Omega_o& = \{\bm{P}| R_m < d_{\bm{P}} \leq R_o\leq R_d\}\\ \Omega_m &=\{\bm{P} | R_i < d_{\bm{P}} \leq R_m\}\\ \Omega_i &= \{\bm{P} | R_s < d_{\bm{P}} \leq R_i \} \end{cases} \end{split} \label{eq:conflict_region_partition} \end{equation} where $R_o$, $R_m$ and $R_i$ are the radii of the three-layered conflict detection airspace, and $d_{\bm{P}}$ denotes the distance of a point $\bm{P}$ in the nearby airspace from the UAV. See Fig. \ref{fig:local_airspace_partition}. Note that $\Omega_d\supseteq\Omega_c$ is the perceptible area of the UAV. The outer-layer airspace is relatively far from the UAV, which indicates that conflict situations in this area are essentially determined by the reference flight trajectories. The situation in the middle-layer airspace is the most dynamic and complex: variations in the motion states of neighbor UAVs, other aircraft, balloons, and the UAV itself increase the uncertainty of conflict situations in this area. The inner-layer airspace is very close to the UAV, which requires the UAV to detect potential conflicts very quickly so as to leave enough time for collision avoidance actions. Therefore, a hierarchical conflict detection and resolution scheme is developed in consideration of these properties. \subsubsection{\textbf{Three-layered conflict detection and resolution}} Approaches for conflict resolution in the literature can be roughly classified into three categories: path planning, optimized control, and reactive approaches. See Table \ref{tb:Algorithm_comparation}.
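In implementation, the partition in Eq.~\eqref{eq:conflict_region_partition} reduces to a simple range test on the measured distance. A minimal sketch, with the radii taken from the simulation settings reported later in the paper:

```python
def detection_layer(d, R_s=30.0, R_i=55.0, R_m=70.0, R_o=80.0):
    """Classify a distance d (in m) within the three-layered detection region.

    Returns 'inner', 'middle', or 'outer', or None if d lies outside the
    conflict detection region Omega_c (including d <= R_s, i.e. collision).
    """
    if R_s < d <= R_i:
        return 'inner'
    if R_i < d <= R_m:
        return 'middle'
    if R_m < d <= R_o:
        return 'outer'
    return None

print([detection_layer(d) for d in (25.0, 40.0, 60.0, 75.0, 90.0)])
# -> [None, 'inner', 'middle', 'outer', None]
```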
To maximize the advantages of the different algorithms, a hierarchical collision avoidance framework integrating these three types of algorithms is proposed for general conflict scenarios. See Fig. \ref{fig:CA_scheme_framewrk}. Considering the range from the UAV and the level of dynamic complexity, the framework integrates path planning schemes for the outer layer, optimized control for the middle layer, and reactive methods for the inner layer. Notably, the inner-layer reactive control law has the highest priority when it is activated. The middle-layer optimized control scheme has the second priority, and provides better optimization and flexibility for the highly dynamic middle-layer airspace. When no conflict is detected, the UAVs fly according to their scheduled trajectories. \begin{figure}[h] \centering \includegraphics[width=8.4cm]{framework} \caption{The three-layered collision avoidance framework (CR: the abbreviation of "conflict resolution")} \label{fig:CA_scheme_framewrk} \end{figure} \subsection{Methodology} This subsection presents an implementation of the proposed hierarchical collision avoidance framework. \subsubsection{\textbf{Outer-layer path planning using sub-targets and Cubic B-spline}} Path planning approaches have been widely studied for collision avoidance problems. \cite{shuai2014real} proposed a real-time obstacle avoidance method using a sub-targets algorithm and Cubic B-splines for mobile robots that move to a specified target point. Inspired by this work, a conflict detection scheme based on the closest point of environmental obstacles to the reference flight path is developed, taking the flight tracking error into consideration. This approach relies on the onboard sensing system for spatial status information updating. In this way, the sub-targets generation procedure in \cite{shuai2014real} is extended to curved-path following scenarios.
Then a collision-free smooth path is generated using the sub-targets and Cubic B-spline algorithms as in \cite{shuai2014real}. \subsubsection{\textbf{Middle-layer DMPC-based collision avoidance}} Distributed model predictive control (DMPC) can explicitly deal with inter-agent constraints and find approximately optimal solutions for the subsystems. Besides, the state prediction inherent in MPC provides an advantage in conflict detection. Thus a DMPC collision avoidance strategy, which is executed by all the subsystems synchronously, is developed. The distributed controllers rely on the local communication system and the onboard sensing system for environmental information collection. Firstly, the conflict detection procedure based on state prediction is implemented. Since the reference trajectory is already known, the future reference state of each UAV can be computed and transmitted to its neighbor UAVs together with the newest state information. Then, for the $i$-th UAV in an $n$-UAV system, the assumed motion states of all neighbor UAVs in $\mathcal{N}_i$ can be computed. In addition, the sensing system obtains real-time information on the environmental objects in $\mathcal{O}_i$. Thus the distance variations of the UAV from its neighbor UAVs and other environmental objects, i.e., from all obstacles in $\mathcal{O}_i^{aug}$, can be predicted for conflict detection. If any conflict is detected at time interval $k$, the optimal local collision avoidance input sequence $\bm{u}_{i,(k)}^{\ast} = \{u_{i,(k+0|k)}^{\ast},\cdots,u_{i,(k+N-1|k)}^{\ast}\}$ is generated by solving the following optimization problem: \begin{equation} \label{eq:DMPC_scheme} \begin{split} &J_{i,(k)}^{\ast}=\min_{\bm{u}_{i,(k)}}J_{i,(k)}\left(\bm{X}_{i,(k)},\bm{u}_{i,(k)}, \tilde{\bm{X}}_{(k-1)}^{\mathcal{O}_{i}^{aug}}\right) \\ &s.t.
\\ &\quad u_{i,(k+l|k)}\in\mathcal{U}, \forall l = 0,1,\cdots,N-1 \end{split} \end{equation} where $\bm{X}_{i,(k)}$ is the newest state and $\tilde{\bm{X}}_{(k-1)}^{\mathcal{O}_{i}^{aug}}$ represents the predicted motion states of $\mathcal{O}_i^{aug}$. Once the local collision avoidance command sequence $\bm{u}_{i,(k)}^{\ast}$ has been generated, the first item $u_{i,(k+0|k)}^{\ast}$ is applied to the UAV, and the complete sequence is transmitted to its neighbor UAVs for the next conflict detection. The whole process is summarized in Algorithm \ref{alg:DMPC_based_CDR}. Due to space limitations, this algorithm is not described in full rigor here; a complete description and analysis will be given in a separate paper. \begin{algorithm}[htbp] \caption{Middle-layer DMPC-based collision avoidance} \label{alg:DMPC_based_CDR} \begin{algorithmic}[1] \State Parameter initialization: $T$, $N$, $R_m$, $R_s$, etc. \State Spatial status information updating: $\mathcal{O}_i^{aug}$ \State $k \gets k+1$ \State Conflict detection based on motion prediction \Procedure{Conflict Resolution}{} \State Calculate $\bm{u}_{i,(k)}^{\ast}$ by solving \eqref{eq:DMPC_scheme} \State Apply $u_{i,(k+0|k)}^{\ast}$ to the UAV \State Transmit the newest state and the control sequence $\bm{u}_{i,(k)}^{\ast}$ to neighbor UAVs \EndProcedure \State Return to step 2 \end{algorithmic} \end{algorithm} \subsubsection{\textbf{Inner-layer reactive collision avoidance}} Inner-layer conflict detection and resolution provides the last guarantee of flight safety for the UAVs. Thus, for a quick response to conflicts, sufficient conditions for non-conflicting flights of any two UAVs at short distance were derived in previous work (\cite{8996560}), and these conditions are utilized for conflict detection.
Then a reactive collision avoidance control law is first proposed for two-UAV conflicts based on the collision-free conditions: \begin{equation}\label{eq:inner_layer_CA} u_i=\rho k_{\psi}\left(\frac{1}{2}\arccos{\frac{\bm{v}_{ij}\cdot\bm{P}_{ij}}{|\bm{v}_{ij}||\bm{P}_{ij}|}}-\pi/4\right) \end{equation} where $\rho$ is the sign of the turning direction, $k_{\psi}$ (in 1/s) is a constant coefficient that transforms the desired heading change into the desired heading rate, and $\bm{v}_{ij}$ and $\bm{P}_{ij}$ are the relative velocity and position vectors of the $i$-th and the $j$-th UAVs, respectively. Moreover, the collision avoidance control law in \eqref{eq:inner_layer_CA} was further developed, by integrating additional rules on direction choosing, for more complicated conflict scenarios involving more than two UAVs (\cite{8996560}). \subsection{Overall hierarchical algorithm} Finally, the overall implementation of the hierarchical collision avoidance system is obtained by integrating the three approaches described above, as presented in Algorithm \ref{alg:overall_SMS_algorithm}.
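A minimal sketch of the two-UAV reactive law in Eq.~\eqref{eq:inner_layer_CA}. The sign convention for the relative vectors, the gain $k_{\psi}=1$, and the saturation to $\omega_{max}$ are illustrative assumptions, not specified in the paper:

```python
import math

def reactive_heading_rate(p_i, v_i, p_j, v_j, rho=1.0, k_psi=1.0, omega_max=0.6):
    """Heading-rate command u_i of Eq. (eq:inner_layer_CA), saturated to
    the input constraint |u| <= omega_max.

    p_*, v_* are 2D position/velocity tuples; the relative vectors are
    taken here as (i minus j), which is an assumed convention.
    """
    vx, vy = v_i[0] - v_j[0], v_i[1] - v_j[1]   # relative velocity v_ij
    px, py = p_i[0] - p_j[0], p_i[1] - p_j[1]   # relative position P_ij
    cos_angle = (vx * px + vy * py) / (math.hypot(vx, vy) * math.hypot(px, py))
    cos_angle = max(-1.0, min(1.0, cos_angle))  # guard against round-off
    u = rho * k_psi * (0.5 * math.acos(cos_angle) - math.pi / 4.0)
    return max(-omega_max, min(omega_max, u))

# Head-on encounter: v_ij is anti-parallel to P_ij, the angle is pi, and the
# raw command rho*k_psi*(pi/2 - pi/4) = pi/4 saturates to omega_max = 0.6.
u = reactive_heading_rate((0.0, 0.0), (19.0, 0.0), (100.0, 0.0), (-19.0, 0.0))
print(u)  # -> 0.6
```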
\begin{algorithm}[htbp] \caption{The distributed hierarchical collision avoidance for multiple UAVs} \label{alg:overall_SMS_algorithm} \begin{algorithmic}[1] \Procedure{Parameter Initialization}{} \State Initialize $\omega_{max}$, $T$, $R_o$, $R_m$, $R_i$, $R_s$, and $N$; \State $inner\_conflict\_flag \gets 0$ \State $middle\_conflict\_flag \gets 0$ \State $outer\_conflict\_flag \gets 0$ \EndProcedure \State Update data for $\mathcal{O}_{i,(k)}^{aug} = \mathcal{N}_{i,(k)} \cup \mathcal{O}_{i,(k)}$ \State $k \gets k+1$ \Procedure{Conflict Detection}{} \State \textbf{return} $inner\_conflict\_flag$, \State $middle\_conflict\_flag$, and $outer\_conflict\_flag$ \EndProcedure \Procedure{Conflict Resolution}{} \If{$inner\_conflict\_flag==1$} \State \textbf{Do} reactive collision avoidance control \ElsIf{$middle\_conflict\_flag==1$} \State \textbf{Do} DMPC based collision avoidance \ElsIf{$outer\_conflict\_flag==1$} \State \textbf{Do} path-planning based collision avoidance \Else \State \textbf{Do} normal trajectory tracking. \EndIf \EndProcedure \State Return to step 7 \end{algorithmic} \end{algorithm} \section{Simulations} Comparative simulation tests of the proposed hierarchical collision avoidance system are carried out against a DMPC-only collision avoidance approach. The DMPC approach is chosen for comparison because it is a typical algorithm in the literature that can deal with various dynamic conflict scenarios. \subsection{Simulation settings} Simulations are performed in MATLAB 2018. Each UAV runs as a separate MATLAB instance and uses the UDP protocol for local communication, which is set to be fully connected. The impact of communication delays and failures is ignored. The UAVs follow the kinematics in \eqref{eq:kinematics} and are required to track several pre-planned closed triangle-like curved paths at a constant cruising speed using the pure pursuit with line-of-sight approach (\cite{sujit2014unmanned}).
To increase the frequency of conflicts for simulation verification, each reference path is designed to intersect the others. Each circuit of a path is about $1500m$ long. During the simulated flights, the UAVs apply the collision avoidance method under test, i.e., the hierarchical collision avoidance system or the DMPC-only approach, when a conflict is detected. The main parameter settings are presented in Table \ref{tab:list_of_parameters}. \begin{table}[htbp] \caption{Parameter settings in simulation tests} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{0.6cm}|p{1.3cm}|p{5.5cm}} \hline \hline ~& \textbf{Value}& \textbf{Meaning} \\ \hline $V$& $19 (m/s)$& The cruising speed \\ \hline $\omega_{max}$& $0.6 (rad/s)$& The maximum heading rate \\ \hline $R_o$& $80 (m)$ & The outer-layer detection region radius \\ \hline $R_m$& $70 (m)$& The middle-layer detection region radius\\ \hline $R_i$ & $55 (m)$& The inner-layer detection region radius\\ \hline $T$& $0.1 (s)$& The control and sampling period \\ \hline $R_s$& $30 (m) $& The restricted safe radius of the UAV \\ \hline \hline \end{tabular} \label{tab:list_of_parameters} \end{center} \end{table} \subsection{Simulations with deterministic sensing} The simulations are first carried out for 5 UAVs with deterministic sensing, i.e., the information on obstacles is obtained as soon as they enter the perceptible area $\Omega_d$. The UAVs perform conflict detection continuously during the flight and activate the corresponding conflict resolution method when a conflict is detected. In each comparative simulation, the initial positions of the UAVs are the same and are randomly chosen from the non-conflicting points on the reference paths. The operation time is set to 5000 control cycles; thus the flight distance of each UAV in a simulation test is about $9500m$.
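As a sanity check on the reported metric, the average collision-free distance in the tables is consistent with dividing the per-test flight distance of about $9500m$ by the number of failures:

```python
def avg_collision_free_distance(flight_distance_m, failures):
    """Average collision-free distance: total flight distance per failure."""
    return flight_distance_m / failures

# Reproduces Test 1 of the deterministic-sensing table:
print(round(avg_collision_free_distance(9500.0, 74), 2))  # DMPC only    -> 128.38
print(round(avg_collision_free_distance(9500.0, 43), 2))  # hierarchical -> 220.93
```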
Whenever the distance of a UAV from an obstacle becomes less than $R_s$, a conflict resolution failure is recorded. The total number of failures is then calculated for comparison. \begin{table}[htb] \caption{Simulations with deterministic sensing}\label{tb:determinisitic_sensing_5_UAVs} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{c|c|c|c|c} \hline\hline ~& \multicolumn{2}{c|}{ \textbf{Failure times}} & \multicolumn{2}{c}{ \tabincell{c}{\textbf{Average } \\ \textbf{collision-free } \\ \textbf{distance (m)}}} \\ \cline{2-5} ~& \tabincell{c}{DMPC \\ only} & \tabincell{c}{Hierarchical \\ CAS} & \tabincell{c}{DMPC \\ only} & \tabincell{c}{Hierarchical \\ CAS} \\ \hline Test 1 & 74 & 43 & 128.38 & 220.93 \\ Test 2& 69 & 24 & 137.68 & 395.83 \\ Test 3& 66 & 17 & 143.94 & 558.82 \\ Test 4& 77 & 28 & 123.38 & 339.28 \\ Test 5& 56 & 15 & 169.64 & 633.33 \\ \hline Summation & 342 & 127 & \\\hline Mean & ~ & ~ & 140.60 & 429.64 \\\hline\hline \end{tabular} \begin{tablenotes} \footnotesize \item{*} CAS: the abbreviation of "collision avoidance system" \end{tablenotes} \end{center} \end{table} Table \ref{tb:determinisitic_sensing_5_UAVs} presents the results of 5 comparative simulations. It shows that the total number of conflict resolution failures over flights of about $47500m$ is $127$ with the proposed hierarchical collision avoidance system, which is much smaller than that of the DMPC-only method ($342$). Besides, the mean average collision-free distance with the hierarchical collision avoidance system is $429.64m$, which is much longer than that of the DMPC-only method ($140.60m$). \subsection{Simulations with probabilistic sensing} To account for perception uncertainties in reality, simulations are then performed for 5 UAVs with probabilistic sensing, i.e., obstacles are successfully sensed with a probability that increases as their distance from the UAV decreases.
In the simulation tests, the probability of successful perception in the outer-layer conflict detection region is $0.70$, that in the middle-layer region is $0.85$, and that in the inner-layer region is set to $1$. The results of 5 comparative simulations are presented in Table \ref{tb:probabilistic_sensing_5_UAVs}. \begin{table}[htb] \caption{Simulations with probabilistic sensing}\label{tb:probabilistic_sensing_5_UAVs} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{c|c|c|c|c} \hline\hline ~& \multicolumn{2}{c|}{ \textbf{Failure times}} & \multicolumn{2}{c}{ \tabincell{c}{\textbf{Average } \\ \textbf{collision-free } \\ \textbf{distance (m)}}} \\ \cline{2-5} ~& \tabincell{c}{DMPC \\ only} & \tabincell{c}{Hierarchical \\ CAS} & \tabincell{c}{DMPC \\ only} & \tabincell{c}{Hierarchical \\ CAS} \\ \hline Test 1 & 56 & 31 & 169.64 & 306.45 \\ Test 2& 63 & 10 & 150.79 & 950.00 \\ Test 3& 61 & 17 & 155.74 & 558.82 \\ Test 4& 70 & 18 & 135.71 &527.78\\ Test 5& 63 & 24 & 150.79 & 395.83 \\ \hline Summation & 313 & 100 & \\\hline Mean & ~ & ~ & 152.53 & 547.78 \\\hline\hline \end{tabular} \begin{tablenotes} \footnotesize \item{*} CAS: the abbreviation of "collision avoidance system" \end{tablenotes} \end{center} \end{table} Table \ref{tb:probabilistic_sensing_5_UAVs} shows that the average collision-free distance with the hierarchical collision avoidance system ($547.78m$) is more than three times that of the DMPC-only scheme ($152.53m$). This indicates that the proposed hierarchical strategy copes better with the uncertainties of the real world. \section{Conclusion} In conclusion, this paper studied a three-layered collision avoidance architecture for multiple autonomous fixed-wing UAVs. The effectiveness of the hierarchical collision avoidance system was tested via numerical simulations, and the results verified the advantage of the proposed methodology in comparison with the DMPC-only collision avoidance scheme.
This work is a first attempt at combining several different approaches to handle complex conflict scenarios of multiple UAVs. Future work will continue to study the safety management for multiple fixed-wing UAVs. Firstly, the parameters and algorithms involved in the integrated methodology could be further optimized to maximize the effect of each layer of the integrated scheme. Secondly, the study of this issue in three-dimensional space is in progress. Besides, physical experiments are also planned as future work.
{'timestamp': '2020-06-01T02:04:34', 'yymm': '2005', 'arxiv_id': '2005.14340', 'language': 'en', 'url': 'https://arxiv.org/abs/2005.14340'}
\section{Introduction} This work is concerned with the numerical resolution of the stationary Dirac equation with periodic coefficients. The Dirac equation is historically associated with relativistic
quantum mechanics, and is central in condensed matter physics in the study of graphene, topological insulators, and Weyl/Dirac semimetals, etc. This is the main application we consider here. The two-dimensional expression of the (linear) Dirac Hamiltonian reads, with all physical constants set to one, \begin{equation} \label{Dirac} H=\sigma \cdot (-i \nabla +A)+\sigma_z M+ I_2 V, \end{equation} where $\sigma=(\sigma_x,\sigma_y)$, for $\sigma_j$ the Pauli matrices $$ \sigma_x= \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \qquad \sigma_y= \begin{pmatrix} 0&-i\\ i&0 \end{pmatrix} \qquad \sigma_z= \begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix}. $$ Above, $I_2$ is the $2\times 2$ identity matrix, $A=(A_1,A_2)$ is the vector potential, $V$ the scalar potential, and $M$ a ``mass'' term. We focus on the 2D case for simplicity of the exposition and the implementation, but the methods we introduce are readily extended to 3D geometries. In spite of the apparent simplicity of $H$, the discretization of the Hamiltonian is quite subtle. Consider for instance a uniform grid of stepsize $h$ on a square, and discretize the partial derivatives with centered finite differences. When $A$, $M$ and $V$ are all zero, the resulting dispersion relation is, with $\mathbf k=(k_x,k_y)$, $$ \omega^2(\mathbf k)=\left(\frac{\sin(k_x h)}{h}\right)^2+\left(\frac{\sin(k_y h)}{h}\right)^2, \qquad k_x,k_y \in \left[-\pi h^{-1},\pi h^{-1}\right]. $$ \begin{figure}[h!] \centering \includegraphics[height=6cm, width=8cm]{disp.png} \caption{Dispersion relation of the centered finite differences Dirac Hamiltonian.} \label{fig:disp} \end{figure} It is represented in figure \ref{fig:disp} in the plane $k_y=0$ when $h=\pi$. One recovers the expected relation $\omega(\mathbf k)=|\mathbf k|$ when $|\mathbf k| h \ll 1$. The function $\omega(\mathbf k)$ is not monotone with respect to $k_x$ and $k_y$, and as a consequence there are two solutions to the equation $\omega(\mathbf k)=$constant.
The one closest to the origin ($k_\star$ in the figure) is the physical solution approximating the exact solution, while the other one (i.e. $k_\star'$), closer to 1, is unphysical. It is associated with a highly oscillating mode. Indeed, if the square has side $L$ with periodic boundary conditions, and each direction is discretized with $N+1$ points (with e.g. $N$ even), the eigenvalues of $H$ are $$ N \sqrt{\left(\frac{\sin(2 \pi m/N)}{L}\right)^2+\left(\frac{\sin(2 \pi n/N)}{L}\right)^2}, \qquad m,n=-N/2,\cdots, N/2, $$ with eigenvectors $v_{mn}(\ell,\ell')=e^{i 2 \pi (m \ell+n\ell')/N}$, $\ell,\ell'=0,\cdots,N$. Then e.g. $v_{m,n}$ and $v_{N/2-m,n}$ are associated with the same eigenvalue, and the latter is the spurious mode (when $m$ is small). This phenomenon is referred to as the \textit{Fermion doubling problem} in the physics literature, see e.g. \cite{fermion_doubling, fermion_doubling2}. The Schr\"odinger Hamiltonian is immune to the problem as the dispersion relation is monotone, resulting in the eigenvalues $$ \frac{4N^2}{L^2} \left[\left(\sin(\pi m/N)\right)^2+\left(\sin(\pi n/N)\right)^2\right], \qquad m,n=-N/2,\cdots, N/2. $$ The large eigenvalues are not properly approximated, but at least spurious modes associated with small eigenvalues are not created. It is then possible to use the ``squaring trick'' to overcome the doubling issue when the coefficients $M$ and $V$ are constant, that is to square the Dirac operator to recover the Schr\"odinger operator. The procedure does not apply when $M$ and $V$ are variable. A simple way to handle the Fermion doubling problem for linear time evolution problems is to consider initial conditions with only low frequency modes, when possible, that are well propagated by the scheme. The situation is more critical for stationary problems such as band structure calculations where the discrete Hamiltonian is diagonalized. 
In this case, the dimension of the eigenspaces is wrongly doubled and, as a consequence, while the low eigenvalues that are calculated are accurate approximations, some physical ones are left out by the procedure. Besides, the eigenvectors are corrupted since any linear combination of the physical eigenvector and of the spurious one is also an eigenvector. The classical method for band structure calculations found in the physics community consists of using so-called ``plane-wave expansions''. The idea is simply to perform a Fourier transform adapted to the lattice, resulting in an exact dispersion relation. The major disadvantage is that the Fourier coefficients of the functions $A$, $M$, $V$ have to be computed along with some convolutions. It is then customary to find in the literature simple coefficients $A$, $M$, $V$ with just a small number of Fourier modes that are known by construction. This limits the applicability of the method as one would like to use arbitrary coefficients in the Dirac Hamiltonian. It is then desirable to derive methods in real space. One-sided derivatives that are widely used in the context of first-order hyperbolic equations such as the Dirac equation are not appropriate since they break the hermiticity of the Dirac Hamiltonian. Some solutions were proposed in \cite{anton_dirac3D,anton1D} in the time-dependent case, and could be adapted to the periodic stationary picture. They have two main limitations: they are designed for cartesian grids, which is a serious impediment for band structure calculations, and they are quite technically involved. Indeed, the number of unknowns is doubled in these methods, and they are defined on two different staggered grids resulting in a somewhat complicated scheme. We propose in this work a simple real-space scheme overcoming the Fermion doubling problem and applicable to all two-dimensional periodic lattices. The scheme is based on classical spectral methods, see e.g. the monograph \cite{T-Spectral-00}.
These methods rely on Fourier series expansions, and are therefore capturing the dispersion relation exactly and are perfectly adapted to the periodic setting. These methods essentially ``bring back'' the plane-wave expansion methods of the physics community to real space. The Hamiltonian is discretized in the primitive cell of the lattice, which takes the form of a parallelogram, see e.g. the Honeycomb (i.e. hexagonal) and Kagome lattices in figure \ref{fig:lat}. \begin{figure}[h!] \centering \vspace{-3.8cm} \includegraphics[height=12cm, width=9cm]{hex.pdf} \hspace{-3cm} \includegraphics[height=12cm, width=9cm]{kag.pdf} \vspace{-3.8cm} \caption{Left: Hexagonal lattice; Right: Kagome lattice. The primitive cell is depicted in red along with the primitive vectors.} \label{fig:lat} \end{figure} The partial derivatives are replaced by so-called differentiation matrices, and rotations are performed to align the directions of differentiation with the primitive vectors of the lattice. One has to be careful with the construction of these differentiation matrices since, in the standard setting where the number of discretization points is even, the kernels of the matrices are two-dimensional and spanned by highly oscillating sawtooth functions, instead of simply consisting of the constant vectors. This is not a major issue for evolution problems, but is a critical one for eigenvalue calculations as any eigenvector associated with a nonzero eigenvalue $\lambda$ can be multiplied component-wise by an eigenvector in the kernel and still be associated with $\lambda$. The eigenvectors can then be corrupted by eigenvectors in the kernel and exhibit spurious unphysical oscillations. A simple way around this is to consider an \textit{odd} number of discretization points, resulting in differentiation matrices with one-dimensional kernels spanned by constant vectors. We apply our numerical method to the study of flat bands in graphene.
These have received much interest lately as they tend to promote interaction-driven ordering phenomena such as unconventional superconductivity since the kinetic energy of the particles is small. We consider two situations. The first one is graphene in a periodic magnetic field. The second one is twisted bilayer graphene consisting of two graphene sheets on top of each other and rotated by a given angle. The resulting Hamiltonian consists of two coupled Dirac Hamiltonians, and flat bands are observed for a particular set of angles. We will study numerically the stability of these flat bands with respect to random perturbations. Another potential application of our method that is not considered here is the setting of \cite{uri}, where the twisting angle is position-dependent due to experimental uncertainties across the structure. The article is structured as follows: we describe our numerical method in section \ref{meth}, and section \ref{appli} is devoted to the applications. \paragraph{Acknowledgement.} OP is supported by NSF CAREER grant DMS-1452349. HC and MT are supported by the start-up funding from CSU. \section{The numerical method} \label{meth} We start by setting the geometry. \subsection{Preliminaries} We use a particular set of coordinates to derive the scheme, but other choices are possible. We suppose a rotation has been performed beforehand so that one of the primitive vectors is aligned with the $x$ axis. We denote by $\Lambda$ the primitive cell of the lattice, with the lower left node located by convention at $(0,0)$. We denote by $\mathbf a_1$ and $\mathbf a_2$ (not normalized to one) the vectors generating the lattice, and by $\mathbf k_1$, $\mathbf k_2$ the ones generating the reciprocal lattice. They are related by $$ \mathbf k_i \cdot \mathbf a_j= 2 \pi \delta_{ij}, \qquad i,j=1,2.
$$ The vectors $\mathbf k_1$ and $\mathbf k_2$ admit the expressions $$ \mathbf k_1=\frac{2 \pi R \mathbf a_2}{\mathbf a_1 \cdot R \mathbf a_2}, \qquad \mathbf k_2=\frac{2 \pi R \mathbf a_1}{\mathbf a_2 \cdot R \mathbf a_1}, $$ where $R$ denotes the rotation by 90 degrees. The cartesian unit vectors associated with the $x$ and $y$ axes are $\mathbf e_1$ and $\mathbf e_2$, see figure \ref{fig:ax}. \begin{figure}[h!] \centering \vspace{-4.3cm} \includegraphics[height=14cm, width=12cm]{axis.pdf} \vspace{-5.9cm} \caption{Axes} \label{fig:ax} \end{figure} Let $a_1=|\mathbf a_1|$ and $a_2=|\mathbf a_2|$ be the lengths of $\mathbf a_1$ and $\mathbf a_2$, and let $\mathbf u_1=\mathbf a_1/a_1$ and $\mathbf u_2=\mathbf a_2/a_2$ be unit vectors. A point in the primitive cell is denoted by $\mathbf u=u_1 \mathbf u_1+u_2 \mathbf u_2$, with $0\leq u_{1}\leq a_{1}$ and $0\leq u_{2}\leq a_{2}$. If $\theta \neq 0$ is the angle between $\mathbf a_1$ and $\mathbf a_2$, the cartesian coordinates $(x_1,x_2)$ of $\mathbf u=x_1 \mathbf e_1+x_2 \mathbf e_2$ are related to $(u_1,u_2)$ by $$ x_1=u_1+u_2 \cos (\theta), \qquad x_2=u_2 \sin(\theta), $$ or equivalently \begin{equation} \label{CV} u_2=x_2/\sin(\theta), \qquad u_1=x_1-x_2 \cot(\theta). \end{equation} We get in particular the following relations between the partial derivatives that will be used further: $$ \partial_{x_1}=\partial_{u_1}, \qquad \partial_{x_2}=-\cot(\theta) \partial_{u_1}+\frac{1}{\sin(\theta)} \partial_{u_2}. $$ We now turn to the approximation of the Dirac Hamiltonian. \subsection{Discretization} Let $f$ be a periodic function over the lattice such that $\|f\|^2_{L^2(\Lambda)}=\int_{\Lambda} |f(\mathbf u)|^2 d\mathbf u$ is finite.
Then $f$ can be decomposed into the Fourier series $$ f(\mathbf u)=\sum_{\bm \in \mathbb Z^2} e^{i \mathbf K_\bm \cdot \mathbf u} \hat f_\bm, $$ with convergence in $L^2(\Lambda)$. Above, $\mathbf K_\bm=m_1 \mathbf k_1+m_2 \mathbf k_2$ with $\bm=(m_1,m_2) \in \mathbb Z^2$, and the $\hat f_\bm$ are the Fourier coefficients of $f$. In particular, the delta function reads in the distribution sense $$ \delta(\mathbf u)=\sum_{\bm \in \mathbb Z^2} e^{i \mathbf K_\bm \cdot \mathbf u}. $$ Consider now the following discretization of $\Lambda$: for $Q_1$ and $Q_2$ given in $\mathbb N$, let $\bj=(j_1,j_2)$, with $j_1=1,\cdots,N_1$, $j_2=1,\cdots,N_2$, and $N_1=2Q_1+1$, $N_2=2Q_2+1$; a grid point in $\Lambda$ is then denoted by $\mathbf u_{\bj}=j_1 h_1 \mathbf u_1+j_2 h_2 \mathbf u_2$, for $h_1=a_1/N_1$, $h_2=a_2/N_2$. With such a choice for $\bj$, the bottom and left sides of the primitive cell are ignored in the discretization because of the periodicity of the lattice. The next result is standard: we have $$\frac{1}{N_1}\sum_{|m_1|\leq Q_1}e^{\frac{i 2 \pi m_1 u_1}{a_1}} = D_{Q_1}\left(\frac{2 \pi u_1}{a_1}\right), \qquad \textrm{where} \qquad D_n(x)=\frac{\sin\big((n+1/2)x\big)}{(2n+1) \sin(x/2)}. $$ The function $D_n$ is commonly referred to as the Dirichlet kernel (up to the normalization factor). The series defining the delta function is then truncated and $\delta$ is approximated by $$ \delta_a(\mathbf u)=\frac{1}{N_1N_2}\sum_{|m_1|\leq Q_1}\sum_{|m_2| \leq Q_2} e^{\frac{i 2 \pi m_1 u_1}{a_1}} e^{\frac{i 2 \pi m_2 u_2}{a_2}}=D_{Q_1}\left(\frac{2 \pi u_1}{a_1}\right) D_{Q_2}\left(\frac{2 \pi u_2}{a_2}\right).
$$ The normalization is chosen such that $\delta_a(\mathbf 0)=1$, with $\mathbf 0=(0,0)$. We introduce the following notations for simplicity $$ S_1(x)=D_{Q_1}\left(\frac{2 \pi x}{a_1}\right), \qquad S_2(x)=D_{Q_2}\left(\frac{2 \pi x}{a_2}\right). $$ Owing to the relation $$ f_{\bj}:=f(\mathbf u_{\bj})=\sum_{\bm \in \mathbb Z^2} f_{\bm} \delta(\bj-\bm), $$ the function $f$ is then approximated at $\mathbf u \in \Lambda$ by \begin{eqnarray} \label{appf} f_a(\mathbf u)&=& \sum_{|m_1|\leq Q_1}\sum_{|m_2| \leq Q_2} f_{a,\bm} \delta_a(\mathbf u-m_1 h_1 \mathbf u_1-m_2 h_2 \mathbf u_2) \nonumber\\ &=&\sum_{|m_1|\leq Q_1}\sum_{|m_2|\leq Q_2} f_{a,\bm} S_{1}(u_1-m_1 h_1)S_{2}(u_2-m_2 h_2), \end{eqnarray} with $f_{a,\bm}:=f_a(\mathbf u_\bm)$. The coefficients $f_{a,\bm}$ are defined by periodicity for nonpositive indices $m_1$ and $m_2$: with the notation $f_{a,\bm}=f_{a, m_1,m_2}$, we have e.g. $f_{a,\bm}=f_{a,m_1,m_2}=f_{a,N_1+m_1,m_2}$ when $-Q_1 \leq m_1 \leq 0$. We will drop in the sequel the index $a$ with an abuse of notation to lighten the expressions. Hence, $f_{a,\bm}$ will become $f_\bm$. \paragraph{Derivatives.} With \fref{appf}, it is then direct to compute the partial derivatives of the approximate function $f_a$. Indeed, with \fref{CV}, we find \begin{eqnarray*} \partial_{x_1} f_a(\mathbf u)&=&\sum_{|m_1|\leq Q_1}\sum_{|m_2|\leq Q_2} f_{\bm} S'_{1}(u_1-m_1 h_1)S_{2}(u_2-m_2 h_2)\\ \partial_{x_2} f_a(\mathbf u)&=&- \cot(\theta)\sum_{|m_1|\leq Q_1}\sum_{|m_2|\leq Q_2} f_{\bm} S'_{1}(u_1-m_1 h_1)S_{2}(u_2-m_2 h_2)\\ &&+ \frac{1}{\sin(\theta)}\sum_{|m_1|\leq Q_1}\sum_{|m_2|\leq Q_2} f_{\bm} S_{1}(u_1-m_1 h_1)S'_{2}(u_2-m_2 h_2).
\end{eqnarray*} At one of the grid points $\mathbf u=\mathbf u_\bj$, we have $$ S'_{1}(j_1 h_1)=\frac{\pi}{a_1} \frac{(-1)^{j_1}}{\sin(j_1 \pi/N_1)}, \qquad S'_{2}(j_2 h_2)=\frac{\pi}{a_2} \frac{(-1)^{j_2}}{\sin(j_2 \pi/N_2)}, \qquad S_1'(0)=S_2'(0)=0, $$ leading to the expressions, for $j_1=-Q_1,\cdots, Q_1$ and $j_2=-Q_2,\cdots, Q_2$, \begin{eqnarray*} \partial_{x_1} f_a(\mathbf u_\bj)&=&\sum_{|m_1|\leq Q_1}f_{m_1,j_2} S'_{1}\big((j_1-m_1)h_1\big)\\ \partial_{x_2} f_a(\mathbf u_\bj)&=&- \cot(\theta)\sum_{|m_1|\leq Q_1}f_{m_1,j_2} S'_{1}\big((j_1-m_1)h_1\big)\\ &&+\frac{1}{\sin(\theta)}\sum_{|m_2|\leq Q_2} f_{j_1,m_2} S'_{2}\big((j_2-m_2)h_2\big). \end{eqnarray*} As before, $\partial_{x_1} f_a(\mathbf u_\bj)$ is extended by periodicity: e.g. $\partial_{x_1} f_a(\mathbf u_{-j_1,j_2})=\partial_{x_1} f_a(\mathbf u_{N_1-j_1,j_2})$ for $j_1=0,\cdots,Q_1$. In order to write the derivatives in a compact form, the $N_1 \times N_2$ matrix $f_{\bj}$ is stored into a vector $F$ of length $N_1N_2$, with the correspondence $F_{j_1+N_1(j_2-1)}=f_{j_1,j_2}$ for $j_1=1,\cdots,N_1$, $j_2=1,\cdots,N_2$. Introducing the antisymmetric matrices $T_1$ and $T_2$ defined by $(T_1)_{ij}=S_1'\big((i-j) h_1\big)$ and $(T_2)_{ij}=S_2'\big((i-j) h_2\big)$, $T_1$ of size $N_1 \times N_1$ and $T_2$ of size $N_2 \times N_2$, the partial derivatives of a function $f$ at $\mathbf u_\bj \in \Lambda$ are then approximated by $$ \partial_{x_1} f(\mathbf u) \longrightarrow \mathcal D_1 F, \qquad \partial_{x_2} f(\mathbf u) \longrightarrow \mathcal D_2 F, $$ where $$ \mathcal D_1= I_{N_2} \otimes T_1 , \qquad \mathcal D_2=-\cot(\theta)\; I_{N_2} \otimes T_1+\frac{1}{\sin(\theta)} \; T_2 \otimes I_{N_1}.
$$ Above, $I_N$ is the identity matrix of size $N \times N$ and $\otimes$ the tensor product. The matrices $T_j$ are full, while the $\mathcal D_j$ are sparse with $N_1 N_2 N_j$ non-zero elements. How good an approximation the differentiation matrices $T_1$ and $T_2$ provide us with depends on the regularity of $f$. If $f$ is for instance analytic, then there is spectral convergence, i.e. the error decreases exponentially with $N_1$ and $N_2$, see e.g. \cite{T-Spectral-00}, Chapters 1 and 4. The next Lemma shows that the kernel of the differentiation matrix $T_1$ for $N_1$ odd is one-dimensional and spanned by constant vectors. As already mentioned in the introduction, this is to be contrasted with differentiation matrices for $N_1$ and $N_2$ even that have two-dimensional kernels spanned by the highly oscillating sawtooth function, see \cite{T-Spectral-00}, Chapter 3. \begin{lemma} \label{kernel} When $N_1$ (resp. $N_2$) is odd, the kernel of the matrix $T_1$ (resp. $T_2$) consists of constant vectors of length $N_1$ (resp. $N_2$) of the form $C (1, \cdots, 1)$ for $C \in \mathbb R$. \end{lemma} The proof of the Lemma is elementary and based on the discrete Fourier transform. It is given in Appendix for the reader's convenience. From Lemma \ref{kernel}, it follows that the kernels of $\mathcal D_1$ and $\mathcal D_2$ are one-dimensional and spanned by the constant vector. 
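As a concrete check of this construction, the differentiation matrix with the closed-form entries $S'\big((i-j)h\big)$ can be built and tested directly; the following minimal sketch (in Python/NumPy, a stand-in for the MATLAB implementation used later) verifies the antisymmetry, the exact differentiation of resolved trigonometric modes, and the one-dimensional kernel for an odd number of points.

```python
import numpy as np

def diff_matrix(N, a):
    """Periodic spectral differentiation matrix with entries
    T[i, j] = S'((i - j) h) = (pi/a) * (-1)^(i-j) / sin((i-j) pi / N)
    and zero diagonal, for an odd number N of points on a period a."""
    assert N % 2 == 1, "odd N keeps the kernel one-dimensional"
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                d = i - j
                T[i, j] = (np.pi / a) * (-1.0) ** d / np.sin(d * np.pi / N)
    return T

N, a = 25, 2.0 * np.pi
T = diff_matrix(N, a)
u = a * np.arange(N) / N            # grid points j * h, with h = a / N

# Antisymmetric, exact on resolved modes, kernel spanned by constants.
assert np.allclose(T, -T.T)
assert np.allclose(T @ np.sin(3 * u), 3 * np.cos(3 * u))
assert np.allclose(T @ np.ones(N), 0.0)
assert np.linalg.matrix_rank(T) == N - 1
```

The last two assertions are the numerical counterpart of the kernel lemma: for odd $N$ the only null vector is the constant one, so no sawtooth mode can corrupt the eigenvectors.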
\paragraph{The Dirac Hamiltonian.} Using the calculations of the previous section, the partial derivatives in the Dirac operator $H$ defined in \fref{Dirac} are replaced by the appropriate differentiation matrices, and the discrete version of $H$ is $$ \mathcal H= \sigma_1 [-i \mathcal D_1+\mathcal A_1]+ \sigma_2 [-i \mathcal D_{2}+\mathcal A_2]+\sigma_3 [\mathcal M]+\mathcal V I_{2N}, $$ where, for $N=N_1N_2$, $\mathbf 0_{N}$ the zero matrix of size $N \times N$, and $B$ any $N \times N$ matrix, $$ \sigma_1[B]= \begin{pmatrix} \mathbf 0_{N}&B\\ B&\mathbf 0_{N} \end{pmatrix} \qquad \sigma_2[B]=i \begin{pmatrix} \mathbf 0_{N}&-B\\ B&\mathbf 0_{N} \end{pmatrix} \qquad \sigma_3[B]= \begin{pmatrix} B&\mathbf 0_{N}\\ \mathbf 0_{N}&-B \end{pmatrix}, $$ and where $\mathcal A_1$, $\mathcal A_2$, $\mathcal M$, $\mathcal V$ are diagonal matrices with diagonals given by the values of the coefficients $A_1$, $A_2$, $M$, $V$ at the grid points $\mathbf u_\bj$ and stored as explained in the previous paragraph. The $2N \times 2N$ matrix $\mathcal H$ is hermitian by construction. With $\mathcal D_\pm=-i\mathcal D_1\pm\mathcal D_2$, $\mathcal A=\mathcal A_1+i\mathcal A_2$, $\mathcal H$ is recast as $$ \mathcal H= \begin{pmatrix} \mathcal V+\mathcal M&\mathcal D_-+\mathcal A^*\\ \mathcal D_++\mathcal A&\mathcal V-\mathcal M \end{pmatrix}. $$ \paragraph{The band structure.} With $\mathbf u=x_1 \mathbf e_1+x_2 \mathbf e_2$, the band structure is computed by considering the family of Hamiltonians $H(\mathbf k):= e^{-i \mathbf k \cdot \mathbf u} \circ H \circ e^{i \mathbf k \cdot \mathbf u} $ acting on periodic functions on the lattice. The vector $\mathbf k$ is defined by $\mathbf k=k_1 \mathbf e_1+k_2 \mathbf e_2$ and belongs to the first Brillouin zone of the reciprocal lattice.
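A minimal numerical sketch of the block assembly just described, with random antisymmetric stand-ins for $\mathcal D_1$, $\mathcal D_2$ and random coefficients (not the actual spectral operators), confirms the hermiticity of $\mathcal H$ and the symmetry of the spectrum in the chiral case $\mathcal A=\mathcal M=\mathcal V=0$:

```python
import numpy as np

def assemble_H(D1, D2, A1, A2, M, V):
    """Assemble the 2N x 2N discrete Dirac Hamiltonian
    H = [[V + M, D_- + A^*], [D_+ + A, V - M]],
    with D_pm = -i D1 pm D2 and A = diag(A1 + i A2)."""
    A = np.diag(A1 + 1j * A2)
    Dp = -1j * D1 + D2
    Dm = -1j * D1 - D2
    return np.block([[np.diag(V + M), Dm + A.conj().T],
                     [Dp + A,         np.diag(V - M)]])

rng = np.random.default_rng(0)
n = 8
S1 = rng.standard_normal((n, n)); D1 = S1 - S1.T   # antisymmetric stand-ins
S2 = rng.standard_normal((n, n)); D2 = S2 - S2.T
A1, A2, M, V = (rng.standard_normal(n) for _ in range(4))

H = assemble_H(D1, D2, A1, A2, M, V)
assert np.allclose(H, H.conj().T)     # hermitian by construction

# With A = M = V = 0 the spectrum is symmetric about zero.
z = np.zeros(n)
w = np.linalg.eigvalsh(assemble_H(D1, D2, z, z, z, z))
assert np.allclose(w, -w[::-1])
```

The second check follows from $\sigma_3 \mathcal H \sigma_3=-\mathcal H$ when $\mathcal A=\mathcal M=\mathcal V=0$, and it relies only on the antisymmetry of $\mathcal D_1$, $\mathcal D_2$, which the spectral matrices of the previous paragraph satisfy.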
The operator $H(\mathbf k)$ is simply obtained by shifting $A_1$ and $A_2$ by $ k_1$ and $ k_2$, respectively. The corresponding discrete operator is then $$ \mathcal H(\mathbf k)= \sigma_1 [-i \mathcal D_1+\mathcal A_1+k_1 I_{N}]+ \sigma_2 [-i \mathcal D_{2}+\mathcal A_2+k_2I_{N}]+\sigma_3 [\mathcal M]+\mathcal V I_{2N}. $$ With $\mathbf K=(k_1+ik_2) I_{N}$, this can be recast as $$ \mathcal H(\mathbf k)= \begin{pmatrix} \mathcal V+\mathcal M&\mathcal D_-+\mathcal A^*+\mathbf K^*\\ \mathcal D_++\mathcal A+\mathbf K&\mathcal V-\mathcal M \end{pmatrix}. $$ The band structure then follows by diagonalizing $\mathcal H(\mathbf k)$ for each $\mathbf k$. \section{Applications} \label{appli} We apply now the methods of the previous section to the study of flat bands in graphene. The first application is graphene in periodic magnetic fields. \subsection{Asymptotic flat bands in periodic magnetic fields} We follow the setting of \cite{TPC-bands}. Including the physical constants and setting $M=0$ and $V=0$, the Hamiltonian \fref{Dirac} becomes $$ H=v_F \sigma \cdot \Pi, \qquad \Pi=-i\hbar \nabla+e \mathbf A, $$ where $v_F$ is the Fermi velocity, $e$ the absolute value of the electron charge, and $\mathbf A$ the vector potential periodic on a square lattice. We consider two forms for $\mathbf A$. The first one is a simple sinusoidal potential that reads $$ \mathbf A(\mathbf x)=\frac{B}{K} \left(-\sin(K x_2),\sin(K x_1)\right), $$ where $B$ is the strength of the magnetic field and $K$ the wavenumber. The period is then $\lambda=2 \pi/K$. The associated magnetic field has zero average on each cell in the lattice, and we chose here to center the primitive cell at the origin.
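In matrix terms, passing from $\mathcal H$ to $\mathcal H(\mathbf k)$ only adds the constant shift $\mathbf K=(k_1+ik_2)I_N$ to the lower off-diagonal block and $\mathbf K^*$ to the upper one. A hedged sketch of this step (applied to a toy zero Hamiltonian rather than the actual discretization, for which the free Dirac cones $\pm|\mathbf k|$ are recovered exactly):

```python
import numpy as np

def bloch_shift(H, k1, k2):
    """Return H(k): add K = (k1 + i k2) I_N to the lower-left block of
    the 2N x 2N Hamiltonian and its adjoint K^* to the upper-right block."""
    n = H.shape[0] // 2
    Hk = H.astype(complex).copy()
    Hk[:n, n:] += (k1 - 1j * k2) * np.eye(n)   # K^*
    Hk[n:, :n] += (k1 + 1j * k2) * np.eye(n)   # K
    return Hk

n = 6
H0 = np.zeros((2 * n, 2 * n), dtype=complex)
Hk = bloch_shift(H0, 0.3, 0.4)
assert np.allclose(Hk, Hk.conj().T)
# For the zero Hamiltonian the eigenvalues are the free cones +- |k|.
w = np.linalg.eigvalsh(Hk)
assert np.allclose(np.abs(w), np.hypot(0.3, 0.4))
```

For the actual computation, one would sweep $\mathbf k$ along a path in the Brillouin zone and compute a few eigenvalues of the sparse $\mathcal H(\mathbf k)$, e.g. with MATLAB's \texttt{eigs} or SciPy's \texttt{eigsh}.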
Nondimensionalizing and keeping the same notations, the primitive cell is $\Lambda=[-\pi,\pi] \times [-\pi,\pi]$, and we find $$ H=\sigma \cdot \Pi, \qquad \Pi=-i \nabla+\mathbf A, $$ with $$ \mathbf A(\mathbf x)=t (-\sin(x_2) \mathbf e_1+\sin(x_1) \mathbf e_2), \qquad t=\frac{eB}{\hbar K^2}. $$ The second form of $\mathbf A$ is more realistic and suggested in \cite{zhai}, equation 11, and is accompanied by a scalar field $V_s$. We have, for $\mathbf x$ in the same cell $\Lambda$ as above, \begin{eqnarray*} \mathbf A_s(\mathbf x)&=&-\tau g\left(\frac{r}{\sigma}\right) \left(\cos(2\theta) \mathbf e_1-\sin(2\theta) \mathbf e_2\right)\\ V_s(\mathbf x)&=& \eta \tau g\left(\frac{r}{\sigma}\right). \end{eqnarray*} Above, $g(r)=r^2 e^{-r^2}$, and $(r,\theta)$ are the polar coordinates $x_1=r\cos(\theta)$, $x_2=r\sin(\theta)$. These potentials are strain fields induced by the buckling of the structure, and it is not difficult to show that the associated pseudo magnetic field has zero average in $\Lambda$. The fields $\mathbf A_s$ and $V_s$ are not periodic as defined, but assuming that $\sigma$ is sufficiently small, they are close to zero at the edges of the primitive cell $\Lambda$ and can be assumed to be periodic. The parameters $\tau$ and $\eta$ measure the strength of the strain fields, and $\tau$ is proportional to the period $\lambda$ because of the nondimensionalization. The new Hamiltonian is then $$ H_s=\sigma \cdot (-i \nabla+\mathbf A_s)+I_2 V_s. $$ We compute now the band structure of $H$ and $H_s$ using the method of the previous section. Recall that the ``squaring trick'' does not apply here because of the variable potential $V_s$, and we have therefore to work with the Dirac Hamiltonian.
Moreover, even though the Fourier coefficients of $\mathbf A$ are trivial to obtain, one would have to compute those of $\mathbf A_s$ and $V_s$ in order to use the plane-wave expansion method, while this is not needed for our method. We set $N_1=N_2=25$, resulting in a $1250 \times 1250$ sparse matrix $\mathcal H(\mathbf k)$ with at most $2N_1N_2+2N_1N_2(N_1+N_2)=63750$ nonzero elements. We use MATLAB to implement the scheme, and the function \texttt{eigs} to compute only a small number of eigenvalues of interest. In figure \ref{fig:VeffM}, we represent the band structure around the zero energy for the sinusoidal field along the $k_x=k_1$ direction in the Brillouin zone. The Dirac point is located at $\mathbf k=(0,0)$. The flattening of bands around the zero energy is clearly observed as $t$ increases. What makes this behavior interesting from a physical viewpoint is the fact that the magnetic field has zero average. It is indeed well-known that strong uniform magnetic fields tend to confine particles, while here the localization has an additional contribution from the $\pi$ Berry phase of Dirac electrons, especially for small magnetic fields (this leads to the Zeeman potential in the squared Hamiltonian which localizes the wavefunctions for small magnetic fields). The nondimensional parameter $t$ can be increased by increasing the strength $B$ of the magnetic field, but also and more interestingly by decreasing the wavenumber $K$. There is a physical lower bound for $K$ set by the disorder potential in real materials, see \cite{TPC-bands} for more details, and therefore a limit on how flat the band can be. This asymptotic flatness is to be contrasted with the one of Moir\'e structures discussed in the next section, which requires fine tuning for flat bands to appear while here flatness is obtained by increasing $t$ monotonically. \begin{figure}[h!] 
\centering \includegraphics[height=5.5cm, width=7cm]{Band05.png} \includegraphics[height=5.5cm, width=7cm]{Band15.png} \caption{Band structure around the zero energy for the sinusoidal field. The Dirac cone around the origin flattens as $t$ increases.} \label{fig:VeffM} \end{figure} We quantify the flatness in figure \ref{fig:VeffM2} and represent the effective velocity for both the sinusoidal and gaussian fields. If the dispersion relation around the origin is $\omega(\mathbf k)=\alpha |\mathbf k|$ for the nondimensional problem, then the effective velocity is defined by $v^{\rm{eff}}_F=\alpha v_F$. The left panel corresponds to the sinusoidal case, and exhibits the monotonic decrease of the effective velocity as $t$ increases. On the right panel, we represent $v^{\rm{eff}}_F$ when $\eta=0$ (full lines) and $\eta=0.05$ (dashed lines). We only consider small scalar fields as stronger ones would change the location of the Dirac point. The width refers to the parameter $\sigma$ in the definition of the fields, and the period is $2 \pi$. We observe the same monotone behavior as the strength (here $\tau$) increases. In particular, this suggests that asymptotic flatness does not depend on the particular form of the magnetic field. We refer to \cite{TPC-bands} for more comments about flat bands in periodic magnetic fields. \begin{figure}[h!] \centering \includegraphics[height=5.5cm, width=7cm]{VeffPer.png} \includegraphics[height=5.5cm, width=7cm]{VeffGauss.png} \caption{Effective velocity as a function of the field strength. Left: sinusoidal field. Right: gaussian field, with $\eta=0$ (full lines) and $\eta=0.05$ (dashed lines). A similar monotone behavior is observed in both cases.} \label{fig:VeffM2} \end{figure} We turn now to our second application concerned with twisted bilayer graphene. \subsection{Flat bands in twisted bilayer graphene} The model simulated in this section is based on \cite{sanjose}. 
It consists of two graphene sheets on top of each other and twisted by a tunable angle. The twisting creates Moir\'e patterns, and it has been observed that for a set of so-called ``magic angles'', the zero energy band is essentially flat, leading to interesting physical phenomena. In particular, unconventional superconductivity has been observed experimentally at the magic angles \cite{cao-nature,cao-nature2,Yanko}. A theoretical analysis for the flatness was proposed in \cite{McD,tarno}. The model proposed in \cite{sanjose} consists of two Dirac Hamiltonians, each describing a layer, coupled by a periodic potential. The associated periodic lattice is hexagonal, with a larger pattern period than the one of the graphene sheets and dependent on the twisting angle. More precisely, we first rotate the primitive cell depicted in figure \ref{fig:ax} (with $\theta=\pi/3$) by an angle $-\pi/6$, and obtain the primitive vectors $$ \mathbf a_1=L \left(\frac{\sqrt{3}}{2},\frac{1}{2}\right), \qquad \mathbf a_2=L \left(\frac{\sqrt{3}}{2},-\frac{1}{2}\right), \qquad $$ where $L= a_0 \sqrt{1+3n+3n^2}$ for $n \in \mathbb N$ when the twisting is commensurate and minimal. The parameter $a_0$ is the lattice constant of graphene and equal to $2.46$ Angstrom. The $x$ and $y$ axes are then aligned with the diagonals of the rhombus defined by $\mathbf a_1$ and $\mathbf a_2$. We nondimensionalize the latter to obtain unit vectors, and keep the same notations with an abuse. The primitive vectors of the reciprocal lattice are chosen to be $$ \mathbf k_1=2 \pi \left(\frac{1}{\sqrt{3}},1\right), \qquad \mathbf k_2=2 \pi \left(-\frac{1}{\sqrt{3}},1\right). $$ Following \cite{sanjose}, we introduce the operators $$ \partial_\pm= -i \partial_{x_1}+\partial_{x_2} \mp (A_1 + i A_2), \qquad \textrm{with} \qquad \mathbf A=(A_1,A_2) = \left(0,\frac{ 2\pi}{3}\right). 
$$ The coupling potentials $ V_{AA'}$, $V_{BA'}$ and $V_{AB'}$ defined in \cite{sanjose} are related by \begin{equation} \label{VAB} V_{AA'}(\mathbf x)=V(\mathbf x), \qquad V_{BA'}(\mathbf x)=V(\mathbf x-\bv_0),\qquad V_{AB'}(\mathbf x)=V(\mathbf x+\bv_0), \end{equation} where $$ V(\mathbf x)=t (1+e^{i \mathbf k_1 \cdot \mathbf x}+e^{i \mathbf k_2 \cdot \mathbf x}), \qquad \bv_0=(\mathbf a_1+\mathbf a_2)/3. $$ Above, $t$ is the (nondimensional) strength of the coupling and is equal to $0.041 \sqrt{1+3n+3n^2}$. The Hamiltonian obtained in \cite{sanjose} is then $$ H= \begin{pmatrix} 0 & \partial_+ & V_{AA'} & V_{AB'}\\ \partial^*_+ & 0& V_{BA'} & V_{AA'}\\ V^*_{AA'} & V^*_{BA'} & 0 & \partial_-\\ V^*_{AB'} & V^*_{AA'} & \partial^*_- & 0 \end{pmatrix}, $$ where $\partial^*_\pm$ is the adjoint of $\partial_\pm$. We investigate now with our numerical method the stability of the flat bands under perturbations of the coupling potential $V$. We assume that the perturbation is random and consists of randomly located gaussians with random amplitudes, phases, and widths, but still has the same period as the unperturbed structure and satisfies the relation in Eq.~\ref{VAB}. The latter assumption is not essential and may be violated for general disorder potentials, but is used here for computational convenience.
More precisely, we set $$ W(\mathbf x)=\sum_{j=1}^{N_p} \alpha_j e^{i \theta_j} G\left( \frac{\mathbf x-\mathbf z_j}{\sigma_j}\right), \qquad G(\mathbf x)=e^{-|\mathbf x|^2}, $$ where the $\mathbf z_j$ are uniformly drawn in the primitive cell, $\theta_j$, $\sigma_j$, $\alpha_j$ are uniformly distributed in $[0, 2 \pi]$, $[0.025,0.1]$, and $[0,\beta M_V]$ respectively, where $M_V$ is the maximal value of $|V|$, and $\beta$ a parameter that we will vary. The potential $V$ is then replaced by $V+W$ in the definition of the Hamiltonian. Note again that while the Fourier coefficients of $V$ are obvious, those of $W$ would have to be computed, hence limiting the use of the plane-wave expansion method. As in the last section, we set $N_1=N_2=25$. We represent for concreteness in figure \ref{fig:Pot} the absolute value of $V$ and of a realization of $W$ with $N_p=150$ and $\beta=0.15$. \begin{figure}[h!] \centering \includegraphics[height=5.5cm, width=7cm]{Vaa.png} \includegraphics[height=5.5cm, width=7cm]{perturb.png} \caption{Left: $|V(\mathbf x)|$. Right: $|W(\mathbf x)|$ for $N_p=150$ and $\beta=0.015$.} \label{fig:Pot} \end{figure} We will depict the band structure by using the $\Gamma$, $K$ and $M$ points defined in the Brillouin zone that are frequently used in the physics literature. In our setting, they are defined with respect to the single layer Brillouin zone and take the values $$ \Gamma=\left(-\frac{2 \pi}{\sqrt{3}},0\right), \qquad K=\left(0,\frac{2 \pi}{3}\right), \qquad M=\left(-\frac{\pi}{\sqrt{3}},\pi \right). $$ The $K$ point corresponds to one of the Dirac points.
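The coupling potential $V$ and a random perturbation $W$ of the above form can be evaluated on a grid with a few lines of numpy. This is only an illustrative sketch of the construction, not the code used in the paper: the centers $\mathbf z_j$ are drawn in the unit square as a stand-in for the actual primitive cell, the maximum $M_V=3t$ of $|V|$ is hard-coded (it is attained at $\mathbf x=0$), and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

k1 = 2 * np.pi * np.array([1 / np.sqrt(3), 1.0])
k2 = 2 * np.pi * np.array([-1 / np.sqrt(3), 1.0])
n = 35
t = 0.041 * np.sqrt(1 + 3 * n + 3 * n**2)  # coupling strength for n = 35

def V(x):
    # x: array of points with shape (..., 2)
    return t * (1 + np.exp(1j * x @ k1) + np.exp(1j * x @ k2))

def W(x, Np=150, beta=0.15):
    # Random sum of Gaussians with random centers, phases, widths, amplitudes.
    MV = 3 * t                                  # max of |V|, attained at x = 0
    z = rng.uniform(size=(Np, 2))               # centers (stand-in for the cell)
    theta = rng.uniform(0, 2 * np.pi, Np)       # phases in [0, 2*pi]
    sigma = rng.uniform(0.025, 0.1, Np)         # widths in [0.025, 0.1]
    alpha = rng.uniform(0, beta * MV, Np)       # amplitudes in [0, beta*MV]
    d = np.linalg.norm(x[..., None, :] - z, axis=-1)
    return np.sum(alpha * np.exp(1j * theta) * np.exp(-((d / sigma) ** 2)), axis=-1)
```

A single realization of `W` is drawn per call, matching the way one random realization per amplitude is shown in the figures below.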
We move the point $\mathbf k=(k_1,k_2)$ along straight lines connecting $\Gamma$, $K$ and $M$ to compute the band structure, and plot the bands along this path against $k_2=k_y$ in figure \ref{fig:bandt1}. In the left panel of figure \ref{fig:bandt1}, we plot the band structure for $\beta=0$, i.e. without random perturbations. For reference, we depict the Dirac cone at the point $K$ when there is no coupling (dotted lines in green), that is when $t=0$. The red dashed bands correspond to the case $n=20$, and the blue full lines to the flat band case with $n=35$. The choice $n=35$ leads to the smallest effective Fermi velocity at the $K$ point, with a magnitude more than 1000 times smaller than the Fermi velocity in each (uncoupled) layer. \begin{figure}[h!] \centering \includegraphics[height=5.5cm, width=7cm]{bands4.png} \includegraphics[height=5.5cm, width=7cm]{bands3.JPG} \caption{Left panel: Band structure without random perturbations. The dotted lines correspond to the uncoupled case $t=0$, the red dashed bands correspond to the case $n=20$, and the blue full lines to the flat band case with $n=35$. Right panel: Perturbed band structure around the zero energy, for different fluctuations strength $\beta=0.05$ (red), $\beta=0.15$ (green), $\beta=0.25$ (yellow). The blue band corresponds to the unperturbed $\beta=0$ case.} \label{fig:bandt1} \end{figure} In the right panel of figure \ref{fig:bandt1}, we represent the bands around the zero energy in the presence of perturbations with various fluctuation strengths, $\beta=0.05$ (red), $\beta=0.15$ (green), $\beta=0.25$ (yellow), along with the flat band and no coupling potential cases for comparison. One random realization for each amplitude is depicted. We zoom in around the Dirac point in the left panel of figure \ref{fig:bandt2} for a better assessment of the flatness.
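The piecewise-linear path through the high-symmetry points used in these plots can be sampled as follows; a minimal sketch (our own illustration, with an arbitrary number of samples per segment not taken from the paper):

```python
import numpy as np

Gamma = np.array([-2 * np.pi / np.sqrt(3), 0.0])
K = np.array([0.0, 2 * np.pi / 3])
M = np.array([-np.pi / np.sqrt(3), np.pi])

def kpath(points, samples=50):
    """Piecewise-linear path visiting the given k-points in order."""
    s = np.linspace(0.0, 1.0, samples)[:, None]
    segments = [(1 - s) * a + s * b for a, b in zip(points[:-1], points[1:])]
    return np.vstack(segments)

path = kpath([Gamma, K, M])  # k-points along Gamma -> K -> M
```

The band structure is then obtained by diagonalizing the Bloch Hamiltonian at each point of `path` and plotting the eigenvalues against the $k_y$ component.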
In the right panel of figure \ref{fig:bandt2}, we plot several random realizations in the case $\beta=0.05$ to evaluate the statistical stability of the bands. The blue line is the flat band and the red lines are the random realizations. The numerical results show that the random perturbations open a gap around the zero energy. While the zero-energy flat band is not stable, the random bands still remarkably exhibit flatness around the Dirac point, at least when the strength of the perturbations is not too large, say less than $\beta=0.25$. This is clearly observed in the left panel of figure \ref{fig:bandt2}. Flatness is stable in the sense that different realizations with similar strengths, here $\beta=0.05$, all exhibit flatness as shown in the right panel of figure \ref{fig:bandt2}. This stability suggests that the superconducting behavior observed in twisted bilayer graphene is robust, at least under the particular perturbations considered here. \begin{figure}[h!] \centering \includegraphics[height=5.5cm, width=7cm]{bands.png} \includegraphics[height=5.5cm, width=7cm]{bands2.png} \caption{Left panel: right panel of figure \ref{fig:bandt1} zoomed in around the Dirac point. Right panel: depiction of several random realizations of the bands (in red) for $\beta=0.015$. The blue band corresponds to the unperturbed $\beta=0$ case.} \label{fig:bandt2} \end{figure} \section{Conclusion} We derived in this work a numerical method for the diagonalization of periodic Dirac operators. The scheme is based on spectral methods that are immune to the Fermion doubling problem and provide us with high accuracy when the coefficients in the Hamiltonian are regular. The technique is applicable to all two-dimensional periodic lattices. An important point is to choose an odd number of nodes in the spatial discretization in order to obtain differentiation matrices with one-dimensional kernels spanned by constant vectors.
We applied our scheme to the study of flat bands in graphene, and investigated in particular the stability of flat bands in twisted bilayer graphene under random perturbations. Our method can be directly generalized to three-dimensional periodic lattices. \section{Appendix} \subsection{Proof of Lemma \ref{kernel}} Consider a vector $v \in \mathbb R^{N_1}$ with components $v_m$, $m=1,\cdots,N_1=2Q_1+1$, and extend $v$ by periodicity to negative indices, that is $v_{-m}=v_{N_1-m}$ for $0\leq m \leq Q_1$. Then, for $v \in N(T_1)$, the null space of $T_1$, we have \begin{equation} \label{eq1} (T_1 v)_j=\sum_{|m|\leq Q_1}v_{m} S'_{1}\big((j-m)h_1\big)=0, \qquad \textrm{for all} \quad j=-Q_1,\cdots,Q_1, \end{equation} with by definition $(T_1 v)_{-j}=(T_1 v)_{N_1-j}$, for $j=0,\cdots,N_1$. Using periodicity, we rewrite \fref{eq1} as $$ g_j:=(T_1 v)_j=\sum_{m=1}^{N_1} v_{m} S'_{1}\big((j-m)h_1\big)=0, \qquad j=1,\cdots,N_1, $$ and define the discrete Fourier transform of a vector $(f_1,\cdots,f_{N_1})$ by $$ \hat f_k=\sum_{\ell=1}^{N_1} f_\ell e^{- 2 i \pi \ell k /N_1}, \qquad k=1,\cdots,N_1, \qquad \textrm{with} \quad \hat f_{k\pm N_1} =\hat f_k. $$ Using the definition of $S_1$, we find, for $k=-Q_1,\cdots,Q_1$, \begin{eqnarray*}\hat g_k&=&\sum_{\ell=1}^{N_1} \sum_{|m| \leq Q_1} \frac{2 i \pi m}{a_1} e^{2 i \pi \ell (m-k)/N_1} \left(\sum_{n=1}^{N_1} v_n e^{-2 i \pi m n/N_1} \right)\\ &=&\sum_{|m| \leq Q_1} \frac{2 i \pi m}{a_1} G(m-k) \hat v_m=0, \end{eqnarray*} where $$ G(n)=\sum_{\ell=1}^{N_1} e^{2 i \pi \ell n /N_1}= e^{2 i \pi n/N_1} \frac{1-e^{2 i \pi n}}{1-e^{2 i \pi n/N_1}}=N_1 \delta_{0,n}. $$ Hence $$ \hat g_k=N_1 \frac{2 i \pi k}{a_1} \hat v_k=0. $$ As a consequence, $\hat v_k=0$ for $k=-Q_1,\cdots, Q_1$, with $k \neq 0$.
Using the inverse formula $$ v_\ell=\frac{1}{N_1}\sum_{|k| \leq Q_1} \hat v_k e^{2 i \pi k \ell /N_1}, $$ it follows that $$ v_\ell=\frac{1}{N_1} \hat v_0, \qquad \ell=1, \cdots,N_1, $$ which shows that $v_\ell$ is constant and concludes the proof. \bibliographystyle{siam}
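The conclusion of the proof above — for an odd number of nodes, the spectral differentiation matrix has a one-dimensional kernel spanned by the constant vector — can be checked numerically. A small sketch of ours, building the matrix through the FFT with the period normalized to $a_1=1$:

```python
import numpy as np

N = 25                                  # odd number of nodes, N = 2*Q + 1
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers -Q, ..., Q
# Differentiation matrix: D = F^{-1} diag(2*pi*i*k) F, built column by column.
D = np.fft.ifft(2j * np.pi * k[:, None] * np.fft.fft(np.eye(N), axis=0), axis=0)

# The constant vector is annihilated, and it spans the whole kernel:
print(np.allclose(D @ np.ones(N), 0))   # -> True
print(np.linalg.matrix_rank(D))         # -> 24, i.e. N - 1, so dim ker D = 1
```

For an even number of nodes the Nyquist mode must be treated separately, which is the reason the scheme insists on odd $N_1$, $N_2$.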
{'timestamp': '2020-06-01T02:04:51', 'yymm': '2005', 'arxiv_id': '2005.14349', 'language': 'en', 'url': 'https://arxiv.org/abs/2005.14349'}
\section{Introduction} \IEEEPARstart{L}{et} $\mathbb{F}_{q}$ be a finite field with $q$ elements, in which $q$ is a prime or a prime power. $\mathbb{F}_{q^n}$ denotes a finite extension field of $\mathbb{F}_{q}$, which may be seen as an $n-$dimensional vector space over $\mathbb{F}_q$. A polynomial $f(x) \in \mathbb{F}_{q^n} [x]$ is called a \emph{permutation polynomial} of $\mathbb{F}_{q^n}$ if the mapping induced by it \begin{equation} \begin{array}{cccc} \phi_f:&\mathbb{F}_{q^n} &\rightarrow& \mathbb{F}_{q^n}\\ &x &\mapsto& \phi_f (x)=f(x) \end{array} \end{equation} is a bijection of $\mathbb{F}_{q^n}$ on itself. Note that it is always possible to obtain $f(x) \in \mathbb{F}_{q^n} [x]$ from $\phi_f$ by the Lagrange interpolation method. Since $\mathbb{F}_{q^n}$ is finite, simple conditions determine whether $f(x)$ is a permutation polynomial; for instance, it suffices that $f(x)$ be one-to-one. Nonetheless, making explicit conditions on the coefficients of $f(x)$ which guarantee that it is a permutation polynomial is not an easy task. Given a permutation polynomial $f(x)$ of $\mathbb{F}_{q^n}$, the (unique) compositional inverse of $f(x)$ is denoted $f^{-1} (x) \in \mathbb{F}_{q^n} [x]$, in which \begin{equation} f(x) \circ f^{-1}(x)\equiv f^{-1} (x) \circ f(x) \equiv x \mod x^{q^n} -x . \end{equation} Recently, in~\cite{china} the authors have presented the compositional inverses of all permutation polynomials of degree $\leq 6$ over $\mathbb{F}_{q^n}$ and the inverses of permutation polynomials of degree $7$ in characteristic $2$. Based on the concept of complete mappings for groups, complete permutation polynomials~\cite{completemaps} are defined. If $f(x) \in \mathbb{F}_{q^n} [x]$ is a permutation polynomial over $\mathbb{F}_{q^n}$ such that $f(x)+x$ is also a permutation polynomial, then $f(x)$ is called a \emph{complete permutation polynomial} (or complete mapping polynomial). In~\cite{completemaps} several families of complete permutation polynomials have been presented and, in particular, all complete permutation polynomials whose degree is less than $6$ have been classified.
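As a concrete illustration of these definitions (this small script is our own, not taken from any of the cited works), the monomial $f(x)=x^5$ is a permutation polynomial of $\mathbb{F}_7$, since $\gcd(5,7-1)=1$, and because $5\cdot 5\equiv 1 \pmod 6$ it is even its own compositional inverse, i.e. an involution:

```python
p = 7  # work in the prime field F_7 for simplicity

f = lambda a: pow(a, 5, p)  # f(x) = x^5

# f is a permutation polynomial iff it induces a bijection of F_p:
image = {f(a) for a in range(p)}
print(image == set(range(p)))  # -> True, so x^5 permutes F_7

# Brute-force the compositional inverse as the inverse mapping:
inverse = {f(a): a for a in range(p)}
print(all(inverse[f(a)] == a for a in range(p)))  # -> True

# Since 5*5 = 25 is congruent to 1 mod 6, f is its own inverse: an involution.
print(all(f(f(a)) == a for a in range(p)))  # -> True
```

For larger fields one would of course use the algebraic criteria discussed in the sequel rather than brute force.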
Later, in~\cite{bcpp}, $\lambda-$\emph{complete permutation polynomials}, the natural extension of complete permutation polynomials, have been defined: given $\lambda \in \mathbb{F}_{q^n}$, $f(x)$ is a $\lambda-$complete permutation polynomial if $f(x)$ and $f(x)+\lambda x$ are permutation polynomials over $\mathbb{F}_{q^n}$. In~\cite{compinverse2}, the compositional inverses of linear permutations (a particular case of permutation polynomials) of the form $x + x^2 + Tr_{2^n}\left(\frac{x}{a}\right)$ have been presented, in which $a\in \mathbb{F}^{*}_{2^n}$ and $Tr_{2^n}(\cdot)$ denotes the Trace map over $\mathbb{F}_{2^n}$. In~\cite{tuxanidy}, the authors have exhibited compositional inverses of some linear permutation binomials, beyond some $\lambda-$complete permutation polynomials. In particular, permutation polynomials whose compositional inverses are themselves are called \emph{involutions}, i.e., $f(x) \in \mathbb{F}_{q^n}[x]$ is an involution if $(f\circ f)(x)=f^2 (x)\equiv x \mod x^{q^n} -x$. Describing explicit families of permutation polynomials and their compositional inverses is a current research problem, both from a theoretical and from an applied perspective, due to many interesting applications in error-correcting codes, cryptography and combinatorial designs. As one reference attesting to the relevance of permutation polynomial studies, it is worth mentioning~\cite{app}, in which, already in the 1970s, the authors used permutation polynomials/rational functions over finite fields in order to propose some cryptographic systems. Currently, permutation polynomials may be applied to S-boxes in cryptosystems, acting as an extra protection layer, with their compositional inverses working in the decryption process. The use of involutions in this context appears as an interesting solution, since the system is not required to store different permutations for the encryption and decryption processes.
For more recent references, see PRINCE~\cite{prince} (use of linear involutions) and iSCREAM~\cite{grosso} (use of non-linear involutions). Motivated by the applications of involutions in cryptographic problems, and by the development of the theory itself, in~\cite{charpin} the authors have developed a mathematical background for involutions, providing several constructions over finite fields of characteristic $2$, with linear polynomials and $b-$linear translators among the algebraic tools used in that paper. Moreover, the fixed points of some involutions are also analyzed. In~\cite{lucas} the author has proposed some families of linear permutations over $\mathbb{F}_{q^n}$ by making use of a new class of linear polynomials called \emph{nilpotent linear polynomials}, which are defined next: given a positive integer $t\geq 2$, $L(x)\in \mathbb{F}_{q^n} [x]$ is a nilpotent linear polynomial if $L^t (x) =(\underbrace{L\circ L \circ...\circ L}_{t-times})(x)\equiv 0 \mod x^{q^n} - x$. He has also proposed constructions of binary linear involutions with no fixed points. A characterization of when $x^r h\left(x^s \right)$ is an involution over $\mathbb{F}_q$ has been developed in~\cite{zheng}, under some restrictions on $r$, $s$ and $h(x)$. It is obtained from involutions over the set of $d-$th roots of unity, in which $ds=q-1$, together with systems of congruences and linear equations. From the AGW-Criterion~\cite{agw}, in~\cite{niu} some involutions of the form $x^r h(x^{q-1})$ over $\mathbb{F}_{q^2}$ have been constructed. Moreover, the authors have shown how to make explicit the compositional inverse and, in particular cases, the involutory property of the polynomial $f(x)=g\left(x^{q^i}- x +\delta\right) +cx$. Finally, the fixed points of some polynomials described in this paper have also been analyzed.
This work aims to study the behaviour of conventional $q-$associates of linear permutations on the simple components of the $\mathbb{F}_q-$algebra $\displaystyle{R_{q,n}:=\frac{\mathbb{F}_q [x]}{\left\langle x^n -1 \right\rangle}}$ from the primitive idempotent perspective. In particular cases, primitive idempotents are easily described via the $\mathbb{F}_q-$algebra isomorphism between $R_{q,n}$ and the group algebra $\mathbb{F}_q C$, in which $C$ is a cyclic group of order $n$. Based on this new perspective, and on the availability of certain units in $R_{q,n}$, we offer an easily implemented technique which allows us to describe families of linear permutations and their respective (linear) compositional inverses. In particular, families of involutions are also described, whose elements can be used once more in order to provide new involutions. In Section~\ref{basics}, we review the mathematical background needed for understanding linear polynomials (and the particular cases: linear permutations and involutions) and the relationship between linear polynomials and their conventional $q-$associates. Such relationship is explored throughout this work. Further, we define primitive idempotents and analyze some of their properties. In Section~\ref{sectioni}, we study the cyclic shifts of linear permutations, which are linear permutations as well, and whose applications will be used in Section~\ref{construcao}. In this Section, we present linear permutations over $\mathbb{F}_{q^n}$ whose coefficients are in $\mathbb{F}_{q}$, obtained from a rereading of the famous Chinese Remainder Theorem based on primitive idempotents. As a particular case, it is possible to get several linear involutions over $\mathbb{F}_{q^n}$. Finally, conclusions and future problems are drawn in Section~\ref{conclusion}.
\section{Basics}\label{basics} \subsection{Linear Polynomials} Given $f(x) \in \mathbb{F}_{q^n} [x]$ a permutation polynomial over $\mathbb{F}_{q^n}$, there exists $g(x)\in \mathbb{F}_{q^n} [x]$ whose degree is less than $q^n$ and $f(c)=g(c)$ for all $c \in \mathbb{F}_{q^n}$, namely, $f(x)\equiv g(x)\mod x^{q^n} -x$. Thus, it is possible to represent every permutation polynomial in a \emph{reduced degree} version, i.e., by a permutation polynomial whose degree is less than $q^n$. From now on, we will implicitly work only with reduced degree polynomials. In this work, we focus on the class of the \emph{linear/linearized polynomials} (also called $q$-polynomials) over $\mathbb{F}_{q^n}$, which also represent the $\mathbb{F}_q-$linear mappings from $\mathbb{F}_{q^n}$ to itself, seen as an $n-$dimensional $\mathbb{F}_q-$vector space. Such polynomials are described as \begin{equation} F(x)=\sum_{i=0}^{n-1}f_i x^{[i]}\in \mathbb{F}_{q^n} [x], \end{equation} in which $[i]=q^i$, for $0\leq i \leq n-1$. We refer to permutation linear polynomials simply as \emph{linear permutations}. The following properties are verified for linear polynomials over $\mathbb{F}_{q^n}$: \begin{itemize} \item[(i)] $F\left(\alpha + \beta\right)=F\left(\alpha \right)+ F\left(\beta\right)$ and \item[(ii)] $F\left(a\alpha\right)= a F\left(\alpha\right)$, \end{itemize} for all $\alpha,\beta \in \mathbb{F}_{q^m}$ and $a \in \mathbb{F}_q$, in which $\mathbb{F}_{q^{m}}$ is an arbitrary extension of $\mathbb{F}_{q^n}$. For a seminal reference on finite fields and, in particular, on polynomials over finite fields, see~\cite{finitefields}. Besides the applications of (permutation) linear polynomials in block codes, cryptography and combinatorial designs, currently one of their subclasses, the so-called \emph{subspace polynomials}, has been applied in the context of random network coding~\cite{koetterk} in order to obtain good constant dimension codes, which are $k-$dimensional subspace codes.
See~\cite{zhao} for one of the most recent constructions and the references therein. Let \begin{eqnarray} \mathcal{L}_n \left(\mathbb{F}_{q^n}\right)&:=&\frac{\mathbb{F}_{q^n}[x]}{\left\langle x^{q^n} - x \right\rangle}\\ &:=&\left\{f_{n-1}x^{[n-1]}+...+f_0 x^{[0]}: f_i \in \mathbb{F}_{q^n}\right\} \nonumber \end{eqnarray} be the set of linear polynomials over $\mathbb{F}_{q^n}$. In particular, the subset of $\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ formed by polynomials whose coefficients belong to $\mathbb{F}_{q}$ is denoted by $\mathcal{L}_n \left(\mathbb{F}_{q}\right)$. Since the ordinary multiplication of two linear polynomials, in general, does not yield a linear polynomial, one defines the \emph{symbolic multiplication} between two linear polynomials $F(x)$ and $G(x)$ as $F(x) \circ G (x)=F\left(G (x)\right)$, namely, the symbolic multiplication is, in fact, the composition operation. In this sense, considering $F(x), G (x)\in \mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$, $G (x)$ \emph{divides symbolically} $F(x)$ if there is a linear polynomial $H(x)\in \mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ so that $G(x)\circ H(x)=F(x)$. It is easy to notice that $\circ$ is not a commutative operation, but it is associative. Hence, equipped with the usual sum and scalar product, together with the polynomial composition/symbolic multiplication, the set $\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ is in fact a non-commutative $\mathbb{F}_q$-algebra. In particular, $\mathcal{P}_n \left(\mathbb{F}_{q^n}\right)\subset\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ denotes the non-abelian group, under the composition operation, formed by the linear permutations. For more information about the algebraic structure of $\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ and its several isomorphic forms as a group algebra, a matrix algebra, etc., we recommend~\cite{wuliu}.
\begin{definition} The polynomials \begin{equation} f(x)=\sum_{i=0} ^{n-1} f_i x^i\,\,\,\mbox{and}\,\,\, F(x)=\sum_{i=0} ^{n-1} f_i x^{[i]} \end{equation} over $\mathbb{F}_{q^n}$ are called $q-$associates of each other. More specifically, $f(x)$ is \emph{the conventional $q-$associate} of $F(x)$ and $F(x)$ is \emph{the linearized $q-$associate} of $f(x)$. \end{definition} Henceforth, we denote linear polynomials with capital letters and their conventional \linebreak$q-$associates with lowercase letters. \begin{lemma}\cite[Lemma 3.59]{finitefields}\label{lemma2} Let $F (x)$ and $G (x)$ be linear polynomials over $\mathbb{F}_q$ with conventional $q-$associates $f (x)$ and $g (x)$. Then $h(x)=f (x) g (x)$ and $H(x)=F(x) \circ G (x)$ are $q-$associates of each other. \end{lemma} \begin{theorem}\cite[Theorem 3.62]{finitefields}\label{divisao} Let $F (x)$ and $G(x)$ be linear polynomials over $\mathbb{F}_q$ with conventional $q-$associates $f (x)$ and $g(x)$. Then the following properties are equivalent: \begin{itemize} \item[(i)] $F(x)$ symbolically divides $G(x)$; \item[(ii)] $F (x)$ divides $G(x)$ in the ordinary sense; \item[(iii)] $f (x)$ divides $g(x)$. \end{itemize} \end{theorem} It is important to stress that Lemma~\ref{lemma2} and Theorem~\ref{divisao} will be used freely throughout this work. Another useful theoretical consideration comes from the classical Rank-Nullity Theorem, by which the linear polynomial $F(x) \in \mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ is a linear permutation if and only if $0$ is its only root in $\mathbb{F}_{q^n}$. \subsection{Primitive Idempotents} In~\cite{polcycliccodes} the authors used linear polynomials in order to get good cyclic codes~\cite{mac}. In this work, we take, in a sense, the opposite direction: the idempotent generators of cyclic codes are used in order to provide a construction of linear permutations (linear polynomials). From now on, we take $n$ and $q$ so that $\gcd(n,q)=1$.
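Lemma~\ref{lemma2} can be verified computationally in a small case. The following sketch (our own illustration, not part of the cited references) implements $\mathbb{F}_{2^3}$ with the irreducible polynomial $x^3+x+1$ and checks that, for $f(y)=y+1$ and $g(y)=y$, the linearized $q-$associate of $h(y)=f(y)g(y)=y^2+y$ coincides with the symbolic product $F(x)\circ G(x)$:

```python
MOD = 0b1011  # x^3 + x + 1, irreducible over F_2: defines GF(8)

def gf_mul(a, b):
    """Carry-less multiplication in GF(8), reducing modulo x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

# q = 2, so x^{[i]} = x^(2^i); addition in GF(8) is XOR of the bit encodings.
F = lambda a: gf_pow(a, 2) ^ a               # F(x) = x^{[1]} + x,  f(y) = y + 1
G = lambda a: gf_pow(a, 2)                   # G(x) = x^{[1]},      g(y) = y
H = lambda a: gf_pow(a, 4) ^ gf_pow(a, 2)    # linearized associate of y^2 + y

print(all(F(G(a)) == H(a) for a in range(8)))  # -> True
```

The same check works over any $\mathbb{F}_{q^n}$, at the cost of a general field implementation.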
Based on the well-known Chinese Remainder Theorem, the ring $\displaystyle{R_{q,n} =\frac{\mathbb{F}_q [x]}{\left\langle x^n -1 \right\rangle}}$ can be decomposed as \begin{eqnarray}\label{tcr} R_{q,n} =\frac{\mathbb{F}_q [x]}{\left\langle x^n -1 \right\rangle}&\cong& \frac{\mathbb{F}_q [x]}{\left\langle f_1 (x) \right\rangle} \oplus \frac{\mathbb{F}_q [x]}{\left\langle f_2 (x) \right\rangle} \oplus \ldots \oplus \frac{\mathbb{F}_q [x]}{\left\langle f_t (x) \right\rangle},\nonumber\\ &\cong& \mathbb{F}_q\left(\xi_1 \right)\oplus\mathbb{F}_q\left(\xi_2 \right)\oplus\ldots \oplus \mathbb{F}_q\left(\xi_t \right) \end{eqnarray} in which $f_i (x)$ are the distinct irreducible factors of $x^n -1$ and $\mathbb{F}_q \left(\xi_i \right)$ are finite extensions of $\mathbb{F}_q$ given by roots of $f_i (x)$, for $1\leq i \leq t$. The minimal ideals $\displaystyle{\frac{\mathbb{F}_q [x]}{\left\langle f_i (x)\right\rangle}}$ $\left(\mbox{or } \mathbb{F}_q \left(\xi_i\right)\right)$ in the decomposition of $R_{q,n}$ in~\eqref{tcr} are called its \emph{simple components}. \begin{definition} A polynomial $e(x)$ of $R_{q,n}$ is an \emph{idempotent} if $e(x)\equiv e^2 (x)=e(x) e(x) \mod x^n -1.$ \end{definition} Since $R_{q,n}$ is a semisimple ring~\cite{polcino}, there exists a family $\{e_1 (x) , e_2 (x) , ... ,e_t (x)\}$ of non-zero elements in $R_{q,n}$, called (orthogonal) \emph{primitive idempotents of} $R_{q,n}$, so that \begin{itemize} \item [(i)] If $i \neq j $, then \begin{equation}\label{eq7} e_i (x)e_j (x) \equiv 0\mod x^n -1, \end{equation} for $1 \leq i\neq j\leq t$. \item [(ii)] \begin{equation}\label{soma1} e_1 (x) + e_2 (x) + ...+ e_t (x)=1. \end{equation} \item [(iii)] $e_i(x)$ cannot be written as $e_i (x) = \overline{e} (x) + \tilde{e} (x),$ in which $\overline{e} (x)$ and $\tilde{e} (x)$ are non-zero idempotents so that $\overline{e} (x) \tilde{e} (x)=0$, $1 \leq i\leq t$.
\end{itemize} \begin{remark} Notice that every non-zero non-primitive idempotent $e(x)$ can be written as a sum of some primitive idempotents. \end{remark} Idempotents and divisors of $x^n -1$ in $R_{q,n}$ are related. In the next result, $g(x)$ denotes a divisor of $x^n -1$ in $R_{q,n}$. \begin{theorem}\cite[Theorem 1,\, pag. 217]{mac}\label{idempcodc} \begin{itemize} \item[(i)] A cyclic code or ideal $\mathcal{C}=\langle g(x)\rangle$ contains a unique idempotent $e(x)$ so that $\mathcal{C}=\langle e(x)\rangle$. Also $e(x)=p(x)g(x)$ for some polynomial $p(x)$, and $e\left(\alpha^i \right)=0$ iff $g\left(\alpha^i \right)=0$. \item[(ii)] $c(x) \in \mathcal{C}$ if and only if $c(x)e(x)=c(x)$. \end{itemize} \end{theorem} For more information and results about semisimple rings, and other considerations about their algebraic structure based on idempotents, see~\cite{polcino}. \begin{remark} Notice that the simple components of $R_{q,n}$ are generated by primitive idempotents as well. For more information, see~\cite[pg.219]{mac}. \end{remark} \section{$\alpha-$cyclic shift and linear permutations}\label{sectioni} The motivation of this section comes from the fact that the composition of linear permutations is also a linear permutation. From an equivalence relation defined next, we provide large, trivially constructed families of linear permutations. We also analyze under which conditions such an equivalence relation can be used in order to get the corresponding compositional inverses and, in particular, involutions over $\mathbb{F}_{q^n}$. The results of this section may be applied to further constructions, as will be seen in the next Section.
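The primitive idempotents of $R_{q,n}$ and properties (i)--(iii) above can be checked computationally in a small case. The following sketch (our own illustration, not part of the cited references) computes them for $q=2$, $n=7$ via the extended Euclidean algorithm, encoding polynomials over $\mathbb{F}_2$ as machine integers whose bit $i$ is the coefficient of $x^i$:

```python
def pmul(a, b):
    """Multiplication in GF(2)[x] (carry-less product of the bit encodings)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    """Quotient and remainder of a by b in GF(2)[x]."""
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def pegcd(a, b):
    """Extended Euclid in GF(2)[x]: returns (g, s, t) with s*a + t*b = g."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b:
        quo, rem = pdivmod(a, b)
        a, b = b, rem
        s0, s1 = s1, s0 ^ pmul(quo, s1)
        t0, t1 = t1, t0 ^ pmul(quo, t1)
    return a, s0, t0

n = 7
M = (1 << n) | 1                      # x^7 - 1 = x^7 + 1 over F_2
factors = [0b11, 0b1011, 0b1101]      # x+1, x^3+x+1, x^3+x^2+1

idem = []
for f in factors:
    gi = pdivmod(M, f)[0]             # g_i = (x^n - 1)/f_i
    _, _, t = pegcd(f, gi)            # s*f_i + t*g_i = 1
    idem.append(pdivmod(pmul(t, gi), M)[1])   # e_i = t*g_i mod (x^n - 1)

e1, e2, e3 = idem
print(all(pdivmod(pmul(e, e), M)[1] == e for e in idem))   # -> True (e_i^2 = e_i)
print(pdivmod(pmul(e1, e2), M)[1] == 0)                    # -> True (orthogonality)
print(e1 ^ e2 ^ e3 == 1)                                   # -> True (sum = 1)
```

The construction is the usual CRT one: $e_i \equiv 1 \bmod f_i$ and $e_i \equiv 0 \bmod f_j$ for $j\neq i$.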
The following basic result will guide all the discussions proposed in this section. \begin{proposition}\label{deslciclico} If $F(x) \in \mathcal{L}_n\left(\mathbb{F}_{q^n}\right)$ is a linear permutation over $\mathbb{F}_{q^n}$, then $F(x)\circ \alpha x^{[1]}\in \mathcal{L}_n\left(\mathbb{F}_{q^n}\right)$ is also a linear permutation of $\mathbb{F}_{q^n}$, for any $\alpha \in \mathbb{F}^{*}_{q^n}$. \end{proposition} \begin{proof} Since $\gcd\left(q^n -1, q \right)=1$, it follows from~\cite[Theorem 7.8, (ii)]{finitefields} that the monomial $x^{[1]}$ is a linear permutation of $\mathbb{F}_{q^n}$. Hence, $\alpha x^{[1]}$ is one as well, for any $\alpha \in \mathbb{F}^{*}_{q^n}$. Therefore, since $F(x)\circ \alpha x^{[1]}$ is a composition of linear permutations, the result follows. \end{proof} Hence, if $\displaystyle{F(x)=\sum_{i=0}^{n-1}f_i x^{[i]}}\in \mathcal{L}_{n}\left(\mathbb{F}_{q^n}\right)$ and $\alpha \in \mathbb{F}^{*}_{q^n}$, then \begin{equation} F(x)\circ \alpha x^{[1]}=\sum_{i=0}^{n-1}\alpha^{[i]} f_i x^{[i+1]}, \end{equation} in which the superscripts are taken modulo $n$. In~\cite{gilbert}, the concept of \emph{cyclically permutable codes}, which are block codes of length $n$ in $\mathbb{F}_2 ^n$, has been introduced. For a more general definition of cyclically permutable codes and their applications in some communication problems, see~\cite{valdemar} and the references therein. In this section, we adapt the classical definitions of cyclic order and cyclic equivalence class arising from cyclically permutable codes constructed in $\displaystyle{R_{q,n} }$ to linear polynomials in $\displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$. Originally, let $c(x) \in R_{q,n}$. If $S(\cdot)$ denotes the cyclic shift operator, i.e., $S(c(x))= xc(x)$ taken modulo $x^n -1$, then there is a least integer $1\leq m \leq n$ so that $S^m(c(x)) = S^{m-1}\left(S(c(x))\right)\equiv c(x) \mod x^n -1$, which is called the \emph{cyclic order} of $c(x)$.
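Since multiplication by $x$ in $R_{q,n}$ simply rotates the coefficient vector of $c(x)$, the cyclic order can be computed directly; a minimal sketch of ours:

```python
def cyclic_order(coeffs):
    """Least m >= 1 with x^m * c(x) = c(x) mod x^n - 1.

    coeffs[i] is the coefficient of x^i; multiplying by x rotates the list.
    """
    shifted, m = coeffs[-1:] + coeffs[:-1], 1
    while shifted != coeffs:
        shifted = shifted[-1:] + shifted[:-1]
        m += 1
    return m

# Over F_2 with n = 6: c(x) = 1 + x^2 + x^4 has cyclic order 2,
# while c(x) = 1 has the full cyclic order n = 6.
print(cyclic_order([1, 0, 1, 0, 1, 0]))  # -> 2
print(cyclic_order([1, 0, 0, 0, 0, 0]))  # -> 6
```

The cyclic order always divides $n$, since the $n-$fold rotation is the identity.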
If $d(x)\equiv x^t c(x) \mod x^n -1$, $1\leq t\leq n-1$, then $d(x)$ is \emph{cyclically equivalent} to $c(x)$ in $R_{q,n}$. Extending the definitions above to the $\displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$ context, we denote by $S_{\alpha}(\cdot)$ the (left) $\alpha-$cyclic shift operator, for some $\alpha \in \mathbb{F}^{*}_{q^n}$, given by $S_{\alpha}(F(x))\equiv F(x)\circ \alpha x^{[1]}\mod x^{[n]} -x$. In particular, for $\alpha=1$ the corresponding cyclic shift in $\displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$ is equal to its equivalent in $R_{q,n}$ and we will adopt the same notation. The least integer $m$ so that $S_{\alpha}^m(F(x))=\underbrace{\left(S_{\alpha} \circ ...\circ S_{\alpha}\right)}_{m-times}(F(x)) \equiv F(x) \mod x^{[n]}-x$ is called the $\alpha-$\emph{cyclic order} of $F(x)$. For $F(x)$ a linear permutation over $\mathbb{F}_{q^n}$, this $m$ is computed below. \begin{theorem}\label{ordciclica} Let $\beta$ be a primitive element of $\mathbb{F}_{q^n}$. If $F(x) \in \mathcal{L}_n\left(\mathbb{F}_{q^n}\right)$ is a linear permutation over $\mathbb{F}_{q^n}$, $\alpha = \beta^l$, for $1\leq l \leq q^n -2$, and $t$ is the least positive integer so that $lt\equiv 0\mod q-1$, then the $\alpha-$cyclic order of $F(x)$ is $m=tn$. \end{theorem} \begin{proof} Let $t$ be the least positive integer so that $lt\equiv 0\mod q-1$. Since $F(x)$ is a linear permutation, it is clear that the $1-$cyclic order of $F(x)$ is $n$ because, given the least positive integer $m\leq n$ so that $F(x)\circ x^{[m]}\equiv F(x)\mod x^{[n]}-x$, we have \begin{eqnarray} x^{[m]}&\equiv& F^{-1}(x)\circ F(x)\circ x^{[m]} \equiv F^{-1}(x)\circ F(x) \nonumber\\ &\equiv& x\mod x^{[n]}-x, \end{eqnarray} namely, $m=n$. Consequently, \begin{equation} S^{n}_{\alpha}(F(x))=\alpha^{[0]+[1]+...+[n-1]}F(x)=N(\alpha)F(x), \end{equation} in which $N(\cdot)=N_{\mathbb{F}_{q^n}/\mathbb{F}_q}(\cdot)$ denotes the norm function from $\mathbb{F}_{q^n}$ to $\mathbb{F}_q$.
So, after applying $tn$ times the $\alpha-$cyclic shift operator $S_{\alpha} (\cdot)$ over $F(x)$, and writing $lt=M(q-1)$ for some positive integer $M$, one has \begin{eqnarray} S^{tn}_{\alpha}(F(x))&=&\underbrace{\left(S^{n}_{\alpha} \circ \ldots \circ S^{n}_{\alpha}\right)}_{t-times}(F(x))\nonumber\\ &\equiv& N^t(\alpha)F(x)=N\left(\alpha^t \right) F(x) \nonumber\\ &\equiv& N\left(\beta^{M(q-1)}\right) F(x)\nonumber\\ &\equiv& N^M \left(\beta^{q-1}\right)F(x)\nonumber\\ &\equiv&F(x)\mod x^{[n]}-x, \end{eqnarray} since $N(\gamma)=1$ if and only if $\gamma=\delta^{q-1}$ for some $\delta \in \mathbb{F}^{*}_{q^n}$. \end{proof} From now on, we assume that $\beta$ is a primitive element of $\mathbb{F}_{q^n}$. Given a linear permutation $F(x) \in\displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$, notice that its $\alpha-$cyclic order is $tn$, for $1\leq t \leq q-1$, and it depends on $\alpha$ only. We say $F(x)$ has \emph{maximal $\alpha- $cyclic order} $(q-1)n$ when $\alpha=\beta^l$ with $\gcd(l,q-1)=1$. Thus, for simplicity and within the context of this paper, we refer to such $\alpha$ as a maximal cyclic order element of $\mathbb{F}_{q^n}$. Furthermore, $G(x)\in \displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$ is $\alpha-$\emph{cyclically equivalent} to $F(x)$ if $S_{\alpha}^{ml}(G(x))\equiv F(x) \mod x^{[n]} -x$, for some $1\leq m\leq q-1$ and $1\leq l\leq n$. \begin{corollary}\label{corordermax} Given a linear permutation $F(x)\in \displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$ and a maximal cyclic order element $\alpha \in \mathbb{F}^{*}_{q^n}$, the $\alpha-$cyclic equivalence class of $F(x)$ yields $(q-1)n$ distinct linear permutations in $\displaystyle{\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)}$. \end{corollary} \begin{proposition}\label{inverdescic} Let $F(x)\in\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ be a linear permutation, in which $\gcd(n,q-1)=1$.
If its $t-$th $\alpha-$cyclic shift is $G(x)\equiv S^{t}_{\alpha}(F(x))$, for a primitive element $\alpha \in \mathbb{F}^{*}_{q}$, then the corresponding compositional inverse is $G^{-1} (x)\equiv S^{(q-1)n - t}_{\alpha}(F^{-1}(x))$. \end{proposition} \begin{proof} Calling $T(x)=S^{(q-1)n - t}_{\alpha}(F^{-1}(x))$ and $N=(q-1)n$, we have \begin{equation}\label{eq11} G(T(x))\equiv F(x)\circ \underbrace{\alpha x^{[1]}\circ \ldots \circ \alpha x^{[1]}}_{t-times}\circ F^{-1}(x)\circ \underbrace{\alpha x^{[1]}\circ \ldots \circ \alpha x^{[1]}}_{(N-t)-times} \end{equation} Since $\alpha x^{[1]}\in Z\left(\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)\right)$, the center of the ring $\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$, Equation~\eqref{eq11} can be rewritten as \begin{eqnarray} G(T(x))&\equiv& F(x)\circ F^{-1}(x)\circ \underbrace{\alpha x^{[1]}\circ \ldots \circ \alpha x^{[1]}}_{(q-1)n - times}\nonumber\\ &\equiv& F(x)\circ F^{-1} (x)\nonumber\\ &\equiv& x \mod x^{[n]}-x \end{eqnarray} once $F^{-1}(x)\circ \underbrace{\alpha x^{[1]}\circ \ldots \circ \alpha x^{[1]}}_{(q-1)n - times}\equiv S_{\alpha} ^{(q-1)n}\left(F^{-1}(x)\right)\equiv F^{-1} (x)$ by Corollary~\ref{corordermax}. Thus, indeed, $T(x)\equiv G^{-1} (x)\mod x^{[n]}-x$. \end{proof} As a simple but important observation used in the next Section, if $F(x)\in\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ is a linear permutation, then the compositional inverse of $S(F(x))$ is $S^{n-1}(F^{-1}(x))$. \begin{corollary} In addition to the hypotheses of Proposition~\ref{inverdescic}, let $F(x)\in\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ be a linear involution, in which $2|(q-1)n$. Then $S^{\frac{(q-1)n}{2}}_{\alpha}(F(x))$ is also a linear involution in $\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$. \end{corollary} \begin{proof} It is clear from Equation~\eqref{eq11}, taking $\displaystyle{t=\frac{(q-1)n}{2}}$.
\end{proof} As shown in this section, it is possible to obtain several linear permutations in a trivial way: just using the $\alpha-$cyclic shifts of a given linear permutation. This construction is effective for the results proposed in the next section, since it will be necessary to use conventional $q-$associates of known linear permutations in order to get others, this time in a non-trivial way. The same construction can be applied in order to yield their respective compositional inverses and, once again, the $\alpha-$cyclic shifts may play an important role. In particular, these ideas may be applied to involution constructions. \section{A Linear Permutation Construction Based on Primitive Idempotents}\label{construcao} Since linear polynomials in $\mathcal{L}_n \left(\mathbb{F}_{q^n}\right)$ can be seen as linear operators over $\mathbb{F}_{q^n}$, it is well known that they are linear permutations if and only if their kernels are trivial. In the particular case in which their coefficients are in $\mathbb{F}_{q}$, by Theorem~\ref{divisao} this is equivalent to stating that their respective conventional $q-$associates and $x^{n}-1\in \mathbb{F}_q [x]$ are coprime. This well-known fact will be used throughout this section. It is worth recalling (See page 3) $\displaystyle{R_{q,n}:=\frac{\mathbb{F}_q [x]}{\left\langle x^n -1 \right\rangle}}$, in which $\gcd(q,n)=1$. Next, we present the main result of this work. \begin{theorem}\label{teoppidemp} Let $F(x)\in \mathcal{L}_n \left(\mathbb{F}_{q}\right)$, let $f(x)$ be its corresponding conventional $q-$associate and let $E:=\left\{e_1 (x),...,e_t (x)\right\}$ be the set of primitive idempotents in $R_{q,n}$. Then $F(x)$ is a linear permutation over $\mathbb{F}_{q^n}$ if and only if \begin{equation} f(x)e_i (x) \not\equiv 0 \mod x^n -1, \end{equation} for all $1\leq i \leq t$. \end{theorem} \begin{proof} Let $f(x)$ be the conventional $q-$associate of $F(x)$.
According to the discussion above, showing that $F(x)$ is a linear permutation over $\mathbb{F}_{q^n}$ is equivalent to showing that\linebreak $\gcd\left(f(x), x^n -1\right)=1$. Suppose $F(x)$ is not a linear permutation over $\mathbb{F}_{q^n}$, namely, there exists $1\neq g(x)\in R_{q,n}$ such that $\gcd\left(f(x), x^n -1\right)= g(x)$. From Theorem~\ref{idempcodc}, there also exists a unique idempotent $1\neq e(x)\in R_{q,n}$ such that $f(x) \in \left\langle g(x) \right\rangle=\left\langle e(x) \right\rangle$. Without loss of generality, $e(x)=e_1 (x) + e_2 (x)+...+e_l (x)$, $1\leq l<t$, is written as a sum of primitive idempotents. Since \begin{equation} f(x)\equiv h(x)e(x)\mod x^n -1 , \end{equation} for some $h(x) \in R_{q,n}$, then \begin{equation} f(x)e_t (x)\equiv h(x)e(x)e_t (x) \equiv 0\mod x^n -1, \end{equation} (See~\eqref{eq7}) contradicting the hypothesis. Conversely, suppose that $F(x)$ is a linear permutation and that there is a primitive idempotent $e_i (x) \in E$ such that $f(x)e_i (x)\equiv 0 \mod x^n -1.$ Since $(f(x),x^n -1)=1$ and $x^n -1 \mid f(x)e_i (x)$, then $x^n -1\mid e_i (x)$, which is impossible, since the degree of $e_i (x)$ is less than $n$ and $e_i (x)\not\equiv 0$. \end{proof} The following corollary provides a simple criterion for deciding when a linear polynomial is not a linear permutation. \begin{corollary}\label{corpp} Let $F(x)\in \mathcal{L}_n \left(\mathbb{F}_q \right)$. If the sum of the coefficients of $F(x)$ is equivalent to $0$ in $\mathbb{F}_q$, then $F(x)$ is not a linear permutation over $\mathbb{F}_{q^{n}}$. \end{corollary} \begin{proof} By Theorem~\ref{teoppidemp}, in order to conclude that $F(x)$ is not a linear permutation, it suffices to exhibit a primitive idempotent $e(x)$ of $R_{q,n}$ such that the product $f(x)e(x)\equiv 0 \mod x^n -1$. Consider the polynomial $\displaystyle{e(x)=\frac{1}{n}\sum_{i=0}^{n-1} x^i}$.
It is well-known~\cite[Lemma 3.6.6]{polcino} that $e(x)$ is an idempotent in $R_{q,n}$, whose proof we reproduce here for completeness: \begin{eqnarray} e(x)e(x)&\equiv&\left(\frac{1}{n}\sum_{i=0}^{n-1} x^i\right)\left(\frac{1}{n}\sum_{j=0}^{n-1} x^j\right)\nonumber\\ &\equiv&\frac{1}{n^2}\sum_{i=0}^{n-1}x^i \left(\sum_{j=0}^{n-1} x^j\right) \nonumber\\ &\equiv& \frac{1}{n^2}\sum_{i=0}^{n-1} n x^i \nonumber \\ &\equiv& \frac{1}{n}\sum_{i=0}^{n-1} x^i \nonumber\\ &\equiv& e(x) \mod x^n -1. \end{eqnarray} Further, since~\cite[Proposition 3.6.7]{polcino} $\left\langle e(x)\right\rangle\cong \mathbb{F}_q\left(G/G\right)=\mathbb{F}_q$, then $\left\langle e(x)\right\rangle$ is a simple component and, consequently, $e(x)$ is a primitive idempotent in $R_{q,n}$. Hence, $\displaystyle{F(x)=\sum_{i=0}^{n-1}f_i x^{[i]}}$ is not a linear permutation over $\mathbb{F}_{q^n}$ if $f(x)e(x) \equiv 0 \mod x^n -1$, i.e., \begin{eqnarray} \left(\sum_{i=0}^{n-1}f_i x^{i}\right)\left(\frac{1}{n}\sum_{i=0}^{n-1} x^i\right)&\equiv& \frac{1}{n}\sum_{j=0}^{n-1} \left(\sum_{i=0}^{n-1} f_i\right) x^{j}\equiv 0 \mod x^n -1, \nonumber \end{eqnarray} namely, \begin{equation} \sum_{i=0}^{n-1} f_i \equiv 0 \end{equation} in $\mathbb{F}_q$, and the result follows. \end{proof} Based on Theorem~\ref{teoppidemp}, we provide explicit families of linear permutations over $\mathbb{F}_{q^n}$, in which $q$ and $n$ must satisfy some prescribed conditions. In this work, linear permutations are obtained using the construction of primitive idempotents in group algebras given in~\cite{ferraz}, which uses the group structure in order to get them in an uncomplicated way. Before presenting it, we take into consideration the natural $\mathbb{F}_q-$algebra isomorphism between the group algebra $\mathbb{F}_q C$ and $R_{q,n}$, in which $C= \langle c \rangle$ is a cyclic group of order $n$ generated by $c$, i.e.
\begin{equation}\label{isomorphismo} \begin{array}{ccc} \varphi:\mathbb{F}_q C &\rightarrow &R_{q,n} \\ f_0 +...+f_{n-1}c^{n-1}&\mapsto & f_0+...+f_{n-1}x^{n-1}. \end{array} \end{equation} In the next lemma, given $C_i$ a subgroup of $C$, define $\displaystyle{\widehat{C_i}=\frac{1}{\left| C_i \right|}\sum_{c \in C_i} c \in \mathbb{F}_q C}$. The notation of this lemma is slightly altered in order to match the one used in this work, that is, we adapt it to the polynomial context. \begin{lemma}\cite[Lemma 3]{ferraz}\label{idempotentes} Let $\mathbb{F}_q$ be a finite field, let $p$ be a rational prime and let $C=\langle c \rangle$ be a cyclic group of order $p^m$, $m\geq 1$. Let \begin{equation} C=C_0 \supseteq C_1 \supseteq \ldots \supseteq C_m =\{1\} \end{equation} be the descending chain of all subgroups of $C$. Then the elements \begin{equation} e_0 =\widehat{C}\mbox{ and } e_i = \widehat{C_i} -\widehat{C_{i-1}}, 1\leq i\leq m \end{equation} form a set of orthogonal idempotents of $\mathbb{F}_q C$ such that $e_0 +e_1 +...+e_m =1$. \end{lemma} In the next corollary, $U\left(\mathbb{Z}_{p^m}\right)$, $o(\cdot)$ and $\phi(\cdot)$ denote the group of units in $\mathbb{Z}_{p^m}$, the multiplicative order of an element in $U\left(\mathbb{Z}_{p^m}\right)$ and the classical Euler's totient function, respectively. Once more, we slightly alter the writing of this corollary in order to preserve the same notation throughout the work. \begin{corollary}\cite[Corollary 4]{ferraz}\label{corolarioidemp} Let $\mathbb{F}_q$ be a finite field, and let $C$ be a cyclic group of order $p^m$. Then, the set of idempotents given in Lemma~\ref{idempotentes} is the set of primitive idempotents of $\mathbb{F}_q C$ if and only if one of the following holds: \begin{itemize} \item[(i)] $p = 2$, and either $m = 1$ and $q$ is odd or $m = 2$ and $q \equiv 3 \mod 4$ or \item[(ii)] $p$ is an odd prime and $o(q) = \phi\left(p^m \right )$ in $U\left(\mathbb{Z}_{p^m}\right)$.
\end{itemize} \end{corollary} According to~\cite{ferraz}, it is also possible to describe all primitive idempotents of $\mathbb{F}_q C$, in which $C$ is a cyclic group of order $2p^m$. See~\cite[Theorem 3.2]{ferraz}. \begin{remark} Based on isomorphism~\eqref{isomorphismo}, we will adopt the polynomial notation in order to present the primitive idempotents of $R_{q,p^m}$ and, consequently, families of linear permutations over $\mathbb{F}_{q^{p^m}}$. It is important to stress that there are more general constructions of primitive idempotents which may be adapted to this work, providing linear permutations in $\mathcal{L}_n \left(\mathbb{F}_q\right)$, in which $n$ is not restricted to $p^m$ or $2p^m$. \end{remark} \begin{example}\label{exemplo} Let $F(x)=f_0x + f_{25} x^{[25]} + f_{124} x^{[124]} \in \mathcal{L}_{125}\left(\mathbb{F}_3 \right)$. Since $o(3)=100=\phi(125)$ in $U\left(\mathbb{Z}_{125} \right)$, the primitive idempotents of $R_{3,125}$ are \begin{itemize} \item[(i)] $\displaystyle{e_0 (x) =\frac{1}{125}\sum_{i=0}^{124}x^i\equiv 2\sum_{i=0}^{124}x^i}$ \item[(ii)]$\displaystyle{e_1 (x) =\frac{1}{25}\sum_{i=0}^{24}x^{5i}-\frac{1}{125}\sum_{i=0}^{124}x^i\equiv \sum_{i=0}^{24}x^{5i}+\sum_{i=0}^{124}x^i}$ \item[(iii)]$\displaystyle{e_2 (x) =\frac{1}{5}\sum_{i=0}^{4}x^{25i}-\frac{1}{25}\sum_{i=0}^{24}x^{5i}\equiv 2\sum_{i=0}^{4}x^{25i}+2\sum_{i=0}^{24}x^{5i}}$ \item[(iv)]$\displaystyle{e_3 (x) =1-\frac{1}{5}\sum_{i=0}^{4}x^{25i}\equiv 1+\sum_{i=0}^{4}x^{25i}}$. \end{itemize} Taking the conventional $3-$associate of $F(x)$, $f(x)=f_0 + f_{25} x^{25} + f_{124} x^{124}$, and based on the products $f(x)e_i (x)\not\equiv 0\mod x^{125} -1$, for all $0\leq i \leq 3$, Table~\ref{tab:tablei} describes all respective linear permutations (linearized $3-$associates) over $\mathbb{F}_{3^{125}}$ whose coefficients are in $\mathbb{F}_3$.
Since one of the monomials and each linear permutation from the first column are pairwise non-cyclically equivalent, it is still possible to apply the $S^n (\cdot)-$operator to each one in order to get $125$ more linear permutations (the shifts of a monomial repeat the other two monomials). \begin{table}[h!] \centering \caption{Some Linear Permutations over $\mathbb{F}_{3^{125}}$\label{tab:tablei}} \begin{tabular}{c|c} \hline $x$ & $2x$ \\ \hline $x^{[25]}$ & $2x^{[25]}$\\ \hline $x^{[124]}$&$2x^{[124]}$ \\ \hline $x^{[25]}+x$ & $2x^{[25]}+2x$ \\ \hline $x^{[124]}+x$ & $2x^{[124]}+2x$ \\ \hline $x^{[124]} + x^{[25]}$&$2x^{[124]} + 2x^{[25]}$ \\ \hline $2x^{[124]}+x^{[25]}+x$ &$x^{[124]}+2x^{[25]}+2x$ \\ \hline $x^{[124]}+x^{[25]}+2x$& $2x^{[124]}+2x^{[25]}+x$ \\ \hline $x^{[124]}+2x^{[25]}+x$ & $2x^{[124]}+x^{[25]}+2x$ \\ \hline \end{tabular} \end{table} \end{example} \begin{corollary}\label{corpp1} Let $\mathbb{F}_q$ be a finite field and $p$ an odd rational prime such that $o(q) = \phi\left(p^m \right )$ in $U\left(\mathbb{Z}_{p^m}\right)$. Given $\displaystyle{F(x)=\sum_{i=0}^{n-1} f_i x^{[i]} \in \mathcal{L}_{p^m}\left(\mathbb{F}_q\right)}$, if \begin{itemize} \item [(i)] $\displaystyle{\sum_{j=0}^{p^m -1}f_j\not\equiv 0}$ and \item[(ii)] $\displaystyle{-p^{-m+i-1}\sum_{\substack{j=0,\\ p \nmid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1}}+\left(p^{-m+i}-p^{-m+i-1}\right)\sum_{\substack{j=0,\\ p \mid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1}}\not\equiv 0,}$ \end{itemize} over $\mathbb{F}_q$, for all $1\leq i\leq m$, where all subscripts are taken $\mod p^m$, then $F(x)$ is a linear permutation over $\mathbb{F}_{q^{p^m}}$. \end{corollary} \begin{proof} According to Theorem~\ref{teoppidemp}, we should demonstrate that the products of the conventional $q-$associate of $F(x)$, $f(x)$, with all primitive idempotents of $R_{q,p^m }$ are not equivalent to zero.
From Corollary~\ref{corolarioidemp}-(ii), the idempotents \begin{eqnarray} e_0 (x) &=& \widehat{C_0}=p^{-m}\left(1+x+x^2+...+x^{p^m -1}\right)\mbox{ and}\nonumber\\ e_i (x)&=&\widehat{C_i}-\widehat{C_{i-1}}\nonumber\\ &=&p^{-m+i}\left(1+x^{p^i} +...+x^{p^i(p^{m-i} -1)}\right)\nonumber\\ &-&p^{-m+i-1}\left(1+x^{p^{i-1}} +...+x^{p^{i-1}(p^{m-i+1} -1)}\right)\nonumber\\ &=&\sum_{\substack{j=0,\\ p \nmid j}}^{p^{m-i+1} -1} -p^{-m+i-1}x^{jp^{i-1}}+\sum_{\substack{j=0,\\ p\mid j}}^{p^{m-i+1} -1} \left(p^{-m+i}-p^{-m+i-1}\right)x^{jp^{i-1}}, \end{eqnarray} $1\leq i \leq m$, correspond to all primitive idempotents of $R_{q,p^m }$. A trivial way to ensure that the products $f(x)e_i (x)$ are $\not\equiv 0 \mod x^{p^m} -1$, for all $0\leq i\leq m$, is to guarantee, for instance, that the independent term of every product is non-zero over $\mathbb{F}_q$. Defining $\displaystyle{g_i(x)=\sum_{j=0}^{n-1} g_{i,j}x^j}$ and $\displaystyle{e_i(x)=\sum_{j=0}^{n-1} e_{i,j}x^j}$, $0\leq i\leq m$, so that $g_i(x)\equiv f(x)e_i (x)\mod x^{p^m} -1$, we have \begin{eqnarray} g_{0,0}&\equiv& \sum_{j=0}^{p^m -1} f_{p^m-j}e_{0,j}\Rightarrow \sum_{j=0}^{p^m -1}f_j\not\equiv 0 \mbox{ and} \nonumber\\ g_{i,0}&\equiv&\sum_{j=0}^{p^{m-i+1} -1}f_{p^m -jp^{i-1} }e_{i,jp^{i-1}}\nonumber\\ &\equiv& \sum_{\substack{j= 0,\\ p \nmid j}}^{p^{m-i+1} -1}f_{p^m -jp^{i-1} }e_{i,jp^{i-1}}+\sum_{\substack{j=0,\\ p \mid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1} }e_{i,jp^{i-1}}\nonumber\\ &\equiv& -p^{-m+i-1}\sum_{\substack{j=0,\\ p \nmid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1}}+\left(p^{-m+i}-p^{-m+i-1}\right)\sum_{\substack{j=0,\\ p \mid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1}}\nonumber\\ &\not\equiv&0, \end{eqnarray} over $\mathbb{F}_q$, for each $1\leq i \leq m$, where all subscripts in the summations are taken $\mod p^m$.
\end{proof} \begin{remark} It is worth mentioning that the proof of Corollary~\ref{corpp1} has been carried out considering only the coefficients of the independent terms of the polynomials $g_i(x)$, for $1\leq i \leq m$. However, it could be done choosing any other coefficients from these polynomials. \end{remark} \begin{corollary}\label{binppp} Let $F(x)=f_i x^{[i]} + f_j x^{[j]} \in \mathcal{L}_n \left(\mathbb{F}^{*}_q \right)$, in which $0\leq i < j\leq n-1$. $F(x)$ is a linear permutation over $\mathbb{F}_{q^n}$ if and only if $f_i + f_j \not\equiv 0$ over $\mathbb{F}_q$. \end{corollary} \begin{proof} If $F(x)$ is a linear permutation, then $f_i + f_j \not\equiv 0$ in $\mathbb{F}_q$, according to Corollary~\ref{corpp}. Conversely, let $E=\left\{e_1 (x),...,e_t (x)\right\}$ be the set of primitive idempotents of $R_{q,n}$. In order to ensure that $F(x)$ is a linear permutation over $\mathbb{F}_{q^n}$, beyond the condition $f_i + f_j \not\equiv 0$, we have $t-1$ further conditions $\alpha_l f_i + \beta_l f_j =\delta_l \not\equiv 0$, each one seen as a coefficient of the products $f(x)e_l(x) \mod x^n -1$, in which $2\leq l \leq t$ and $\alpha_l, \beta_l, \delta_l \in \mathbb{F}_q$. Thus, we have the following equation system \begin{equation}\label{sistema} \left\{\begin{array}{ccc} f_i + f_j & =&\delta_1 \\ \alpha_2 f_i + \beta_2 f_j & =&\delta_2 \\ &\vdots&\\ \alpha_t f_i + \beta_t f_j & =&\delta_t \end{array}\right.. \end{equation} Applying Gaussian elimination to~\eqref{sistema} when needed, this equation system is reduced to the first equation, and the other equations are simply descriptions of the non-zero element $f_i$ (or $f_j$) in terms of elements of $\mathbb{F}_q$. Therefore, the result follows. \end{proof} \begin{example}\label{ex22} Still working in $R_{3,125}$ as in Example~\ref{exemplo}, consider $F(x)=f_{64}x^{[64]}+f_{63}x^{[63]}+f_{62}x^{[62]}+f_0x \in \mathcal{L}_{125}\left(\mathbb{F}_3 \right)$.
Based on Corollary~\ref{corpp1}, such a linear polynomial is a linear permutation over $\mathbb{F}_{3^{125}}$ if \begin{equation}\label{rest} \left\{\begin{array}{ccccccccc} 2f_0&+&2f_{62}&+&2f_{63}&+&2f_{64}&\not\equiv&0 \\ 2f_0&+&f_{62}&+&f_{63}&+&f_{64}&\not\equiv&0 \\ f_0&&&&&&&\not\equiv&0 \\ 2f_0&&&&&&&\not\equiv&0 \end{array}\right. \end{equation} over $\mathbb{F}_3$. Hence, \begin{eqnarray*} F_1 (x)&=&x^{[64]}+x^{[63]}+x^{[62]}+2x,\\ F_2 (x)&=&2x^{[64]}+ 2x^{[63]}+ 2x^{[62]}+2x,\\ F_3 (x)&=&2x^{[64]}+ x^{[63]}+ x\mbox{ and }\\ F_4 (x)&=&x^{[64]}+ 2x^{[63]}+ x \end{eqnarray*} are some examples of linear permutations over $\mathbb{F}_{3^{125}}$ that satisfy~\eqref{rest}. Moreover, observe that they are pairwise non-$\alpha-$cyclically equivalent, for $\alpha \in \mathbb{F}^{*}_{3^{125}}$. \end{example} Note that the restrictions imposed by Corollary~\ref{corpp1} might be simplified. In Example~\ref{ex22}/Equation~\eqref{rest}, they could be reduced to requiring only that $F(x)\in \mathcal{L}_{125}\left(\mathbb{F}_{3}\right)$ satisfies $f_0 + f_{62}+ f_{63}+ f_{64}\not\equiv 0$ and $f_0\neq 0$. Corollary~\ref{corpp1} is useful to describe some $\lambda-$complete linear permutations~\cite{bcpp} over $\mathbb{F}_{q^{p^m}}$, for $\lambda \in \mathbb{F}_q$. In fact, it inspires a more general definition (given in~\cite{tuxanidy}, though not under this name), which we call $A-$complete linear permutations, in which $A\subset \mathbb{F}_q$. \begin{definition}\label{aclp} Let $f(x)$ be a permutation polynomial over $\mathbb{F}_{q^n}$. Given $A\subset \mathbb{F}_{q^n}$, we say that $f(x)$ is an \emph{$A$-complete permutation polynomial} over $\mathbb{F}_{q^n}$ if $f(x)+\lambda x$ is also a permutation polynomial over $\mathbb{F}_{q^n}$, for all $\lambda \in A$. \end{definition} From the definition above, we may assume $A\neq \emptyset$, since $0$ can always be taken in $A$.
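At this point, the computations behind Theorem~\ref{teoppidemp}, Lemma~\ref{idempotentes} and Example~\ref{exemplo} can be checked mechanically for prime $q$. The following Python sketch is our own illustration (the function names are hypothetical and not part of the paper): it builds the idempotents $e_0=\widehat{C}$ and $e_i=\widehat{C_i}-\widehat{C_{i-1}}$ of $R_{q,p^m}$ as coefficient lists and tests a linear polynomial through its conventional $q$-associate.

```python
def poly_mul_mod(f, g, n, q):
    """Multiply coefficient lists f, g in R_{q,n} = F_q[x]/(x^n - 1), q prime."""
    h = [0] * n
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                h[(i + j) % n] = (h[(i + j) % n] + a * b) % q
    return h

def subgroup_idempotent(p, m, i, q):
    """\\hat{C_i}: the average over the subgroup C_i of order p^(m-i), generated by c^(p^i)."""
    n = p ** m
    size = p ** (m - i)
    inv = pow(size, -1, q)          # 1/|C_i| in F_q (q prime, q != p)
    e = [0] * n
    for j in range(size):
        e[j * p ** i] = inv
    return e

def primitive_idempotents(p, m, q):
    """e_0 = \\hat{C}, e_i = \\hat{C_i} - \\hat{C_{i-1}} as in the lemma above."""
    es = [subgroup_idempotent(p, m, 0, q)]
    for i in range(1, m + 1):
        ci = subgroup_idempotent(p, m, i, q)
        prev = subgroup_idempotent(p, m, i - 1, q)
        es.append([(a - b) % q for a, b in zip(ci, prev)])
    return es

def is_linear_permutation(f, p, m, q):
    """Main criterion: f * e_i != 0 mod x^n - 1 for every primitive idempotent e_i."""
    n = p ** m
    return all(any(poly_mul_mod(f, e, n, q)) for e in primitive_idempotents(p, m, q))
```

For $q=3$, $p=5$, $m=3$ the routine recovers $e_0(x)\equiv 2\sum_{i=0}^{124}x^i$ and, in agreement with Table~\ref{tab:tablei}, confirms that $x^{[124]}+x^{[25]}+2x$ is a linear permutation over $\mathbb{F}_{3^{125}}$, while $x^{[124]}+x^{[25]}+x$, whose coefficients sum to $0$ in $\mathbb{F}_3$, is not (Corollary~\ref{corpp}).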
It is possible to adapt Corollary~\ref{corpp1} in order to offer conditions on $F(x)$ so that it is an $A-$complete linear permutation over $\mathbb{F}_{q^{p^m}}$. \begin{corollary}\label{corpp2} Let $A=\left\{0,\lambda_2, ...,\lambda_m \right\}\subset \mathbb{F}_q$ and let $p$ be an odd rational prime such that $o(q) = \phi\left(p^m \right )$ in $U\left(\mathbb{Z}_{p^m}\right)$. Given $\displaystyle{F(x)=\sum_{i=0}^{n-1} f_i x^{[i]} \in \mathcal{L}_{p^m}\left(\mathbb{F}_q\right)}$, if \begin{itemize} \item [(i)] $\displaystyle{\sum_{j=0}^{p^m -1}f_j\not\equiv -\lambda_l}$ and \item[(ii)] $\displaystyle{-p^{-m+i-1}\sum_{\substack{j=0,\\ p \nmid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1}}+\left(p^{-m+i}-p^{-m+i-1}\right)\sum_{\substack{j=0,\\ p \mid j}}^{p^{m-i+1} -1} f_{p^m -jp^{i-1}}}$\\ $\not\equiv -\left(p^{-m+i}-p^{-m+i-1}\right)\lambda_l,$ \end{itemize} over $\mathbb{F}_q$, for all $1\leq i\leq m$ and all $\lambda_l \in A$, then $F(x)$ is an $A-$complete linear permutation over $\mathbb{F}_{q^{p^m}}$. \end{corollary} \begin{example} We know that $F(x)=f_t x^{[t]}\in \mathcal{L}_{11} \left(\mathbb{F}_8 \right)$ (See~\cite[Theorem 7.8, (ii)]{finitefields}) is a linear permutation over $\mathbb{F}_{8^{11}}$, for any $0 \leq t \leq 10$. Since $o(8)=10=\phi(11)$ in $U\left(\mathbb{Z}_{11} \right)$, the primitive idempotents of $R_{8,11}$ are \begin{equation} e_0 (x)=\sum_{i=0}^{10} x^i \mbox{ and } e_1 (x)=1-e_0 (x). \end{equation} From Corollary~\ref{binppp}, we observe that $F(x)+\lambda x$ is also a linear permutation over $\mathbb{F}_{8^{11}}$ if and only if $\lambda\neq -f_t$, namely, $F(x)$ is an $\mathbb{F}_8 \setminus\left\{-f_t \right\}-$complete linear permutation over $\mathbb{F}_{8^{11}}$. \end{example} Next, we present a way to construct linear permutations and their compositional inverses over $\mathbb{F}_{q^n}$ by the use of units in $R_{q,n}$, equivalently, conventional $q-$associates of linear permutations in $\mathcal{L}_n \left(\mathbb{F}_q\right)$.
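Before the decomposition result, recall that composition of linear polynomials with coefficients in $\mathbb{F}_q$ corresponds to multiplication of their conventional $q$-associates, so computing a compositional inverse amounts to inverting a unit of $R_{q,n}$. The following Python sketch (our own illustration; the names are hypothetical) does this with the extended Euclidean algorithm against $x^n-1$; under these assumptions it reproduces the pair $f_0(x)=x^{20}+2x^{15}+1$ and $f_0^{-1}(x)=2x^{20}+2x^{10}+x^5+2$ in $R_{3,25}$ used later in this section, and it also illustrates Proposition~\ref{inverdescic} for $\alpha=1$: the inverse of $x^t f(x)$ is $x^{n-t}f^{-1}(x)$.

```python
def trim(a):
    """Drop high-degree zero coefficients; lists are ordered from lowest degree."""
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b, q):
    """Quotient and remainder in F_q[x], q prime."""
    a, b = a[:], trim(b[:])
    quo = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], -1, q)
    for k in range(len(a) - len(b), -1, -1):
        c = (a[k + len(b) - 1] * inv) % q
        quo[k] = c
        for j, bj in enumerate(b):
            a[k + j] = (a[k + j] - c * bj) % q
    return trim(quo), trim(a)

def poly_mul(a, b, q):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return trim(out)

def poly_sub(a, b, q):
    out = [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % q
           for i in range(max(len(a), len(b)))]
    return trim(out)

def mul_mod(a, b, n, q):
    """Product in R_{q,n} = F_q[x]/(x^n - 1), padded to length n."""
    modulus = [(-1) % q] + [0] * (n - 1) + [1]
    _, r = poly_divmod(poly_mul(a, b, q), modulus, q)
    return r + [0] * (n - len(r))

def unit_inverse(f, n, q):
    """Inverse of a unit f in R_{q,n} via the extended Euclidean algorithm."""
    modulus = [(-1) % q] + [0] * (n - 1) + [1]     # x^n - 1
    r0, s0 = trim([c % q for c in f]), [1]          # invariant: r == s*f modulo x^n - 1
    r1, s1 = modulus[:], [0]
    while r1 != [0]:
        quo, rem = poly_divmod(r0, r1, q)
        r0, r1 = r1, rem
        s0, s1 = s1, poly_sub(s0, poly_mul(quo, s1, q), q)
    c = pow(r0[0], -1, q)                           # r0 is a nonzero constant when f is a unit
    _, inv = poly_divmod([(c * x) % q for x in s0], modulus, q)
    return inv + [0] * (n - len(inv))
```

Once a single unit $f(x)$ has been inverted this way, the left-cyclic shifts of the pair $\left\{f(x),f^{-1}(x)\right\}$ yield further inverse pairs at no extra cost.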
Once more, the use of primitive idempotents comes as an essential tool, since we analyze the projections of $F(x)\in \mathcal{L}_n \left(\mathbb{F}_q\right)$ over the simple components generated by the primitive idempotents. As will be seen, the task of getting units in $R_{q,n}$ may be easily simplified using some constructions of Section~\ref{sectioni}. \begin{theorem}\label{tp} Let $E=\left\{e_1 (x),..., e_t (x)\right\}$ be the set of primitive idempotents of $R_{q,n}$. Given $F(x)\in \mathcal{L}_{n}\left(\mathbb{F}_q\right)$ a linear permutation over $\mathbb{F}_{q^n}$ and $f(x)$ its conventional $q-$associate, then $f(x)$ can be written as \begin{equation}\label{soma} f(x)\equiv \sum_{i=1}^t f_i (x) e_i (x) \mod x^n -1, \end{equation} in which $\gcd\left(f_i (x) , x^n -1\right)=1$, for $1\leq i\leq t$. Furthermore, the conventional $q-$associate of the compositional inverse of $F(x)$ is given as \begin{equation} f^{-1} (x)\equiv \sum_{i=1}^t f_i^{-1} (x) e_i (x) \mod x^n -1. \end{equation} Every polynomial $f(x)$ (respectively $f^{-1} (x)$) is uniquely determined by the vector \linebreak$(f_1 (x),..., f_t (x))$ (respectively $(f^{-1}_{1} (x),..., f^{-1}_{t} (x))$). \end{theorem} \begin{proof} Given $F(x)\in \mathcal{L}_n \left(\mathbb{F}_q\right)$ a linear permutation over $\mathbb{F}_{q^n}$, from Theorem~\ref{teoppidemp} we have $f(x)e_i (x)\not\equiv 0 \mod x^n -1$, for all $1\leq i \leq t$. We may assume that $ f(x)e_i(x)\equiv f_i(x)e_i(x) \mod x^n -1$ and $\gcd\left(f_i (x), x^n -1\right)=1$, for all $1\leq i \leq t$, since $\left\langle e_i (x) \right\rangle$ corresponds to a simple component (a finite field) in $R_{q,n}$.
Indeed, if $\gcd\left(f_i (x), x^n -1\right) =g(x)\neq 1$, then there is an idempotent $e(x)=e_{i_1} (x)+...+e_{i_j}(x)$, $I:=\left\{i_1 ,..., i_j \right\}\subset \{1,..,t\}$, so that $\langle g(x) \rangle= \langle e(x) \rangle$, and \begin{equation} f_i (x)\equiv h_i (x) g(x) \equiv \overline{h_i(x)} e(x)\mod x^n -1 \end{equation} in which $h_i(x), \overline{h_i(x)}\in R_{q,n}$ and $\gcd\left(h_i (x), x^n -1\right)=\gcd\left(\overline{h_i (x)}, x^n -1\right)=1$. Thus \begin{equation} f_i (x) e_i(x)\equiv \overline{h_i(x)} e(x) e_i(x) \equiv \left\{\begin{array}{l} 0, \mbox{ if } i\not\in I \\ \\ \overline{h_i (x)} e_i (x),\mbox{ otherwise } \end{array} \right. . \end{equation} Notice that the first case above contradicts $f(x)$ being a unit in $R_{q,n}$. Hence, without loss of generality, we may assume $\gcd\left(f_i (x), x^n -1\right)=1$. So, since $\displaystyle{\sum_{i=1}^t e_i (x)= 1}$ (See Equation~\eqref{soma1}) and from \begin{eqnarray}\label{eqequiv} f(x)e_1(x)&\equiv& f_1 (x) e_1 (x)\mod x^n -1 \nonumber \\ &\vdots&\\ f(x)e_t(x)&\equiv& f_t (x) e_t (x)\mod x^n -1 \nonumber, \end{eqnarray} we have $\displaystyle{f(x)\equiv \sum_{i=1}^t f_ i (x) e_i (x)\mod x^n -1.}$ In particular, since $\gcd\left(f_i(x), x^n -1\right)=1$, each equivalence in~\eqref{eqequiv} can be rewritten as $f(x)f_i ^{-1} (x) e_i (x)\equiv e_i (x)\mod x^n -1$, for all $1\leq i \leq t$, consequently, we have \begin{equation} f(x)\sum_{i=1} ^t f_i ^{-1} (x)e_i(x)\equiv \sum_{i=1}^t e_i (x)\equiv 1 \mod x^n -1, \end{equation} therefore, $\displaystyle{f^{-1} (x)\equiv \sum_{i=1} ^t f_i ^{-1} (x)e_i(x) \mod x^n -1}$. Finally, it is clear that the decomposition of $F(x)$ over the simple components of $R_{q,n}$ is unique.
Indeed, by the isomorphism~\eqref{tcr}, given $G(x), F(x)\in \mathcal{L}_n \left(\mathbb{F}_{q}\right)$, their corresponding conventional $q-$associates $f(x)$ and $g(x)$ are equal if and only if their projections over the simple components are all equal, namely, $f_i (x) e_i(x) \equiv g_i (x) e_i(x) \mod x^n -1$, for all $1\leq i \leq t$. \end{proof} Known constructions of linear permutations in $\mathcal{L}_n \left(\mathbb{F}_q\right)$, from the simplest ones, such as the monomials $\lambda x$ with $\lambda \in \mathbb{F}^{*}_q$, to linear permutations of the type $x+x^2 +Tr_{2^n}\left(\frac{x}{a}\right)$ over $\mathbb{F}_{2^n}$~\cite{compinverse2}, in which $a\in \mathbb{F}^{*}_{2^n}$, provide conventional $q-$associates that may be used along with Theorem~\ref{tp} in order to produce new linear permutations. \begin{example} From Corollary~\ref{corolarioidemp}, the respective polynomials below are in fact primitive idempotents in $R_{3,25}$ \begin{eqnarray} e_0 (x)&=&x^{24}+x^{23}+x^{22}+x^{21}+x^{20}+x^{19}+x^{18}+x^{17}\nonumber \\ &+&x^{16}+x^{15}+x^{14}+x^{13}+x^{12}+x^{11}+x^{10}+x^9\nonumber\\ &+&x^8 +x^7 +x^6 +x^5 +x^4 +x^3 +x^2 +x+1 \nonumber\\ e_1 (x)&=&2x^{24}+2x^{23}+2x^{22}+2x^{21}+x^{20}+2x^{19}+2x^{18}\nonumber\\ &+&2x^{17}+2x^{16}+x^{15}+2x^{14}+2x^{13}+2x^{12}+2x^{11}\nonumber\\ &+&x^{10}+2x^9 + 2x^8 +2x^7 +2x^6 +x^5+ 2x^4 \nonumber\\ &+&2x^3 +2x^2 + 2x+1 \mbox{ and}\nonumber\\ e_2(x)&=&x^{20}+x^{15}+x^{10}+x^5 +2. \end{eqnarray} According to Theorem~\ref{tp}, in order to obtain a linear permutation $F(x)$ and its respective compositional inverse, it is enough to get units $f_0 (x), f_1 (x), f_2 (x) \in R_{3,25}$, since the conventional $3-$associate of $F(x)$ is given by $\displaystyle{f(x)\equiv \sum_{i=0}^2 f_ i (x) e_i (x)\mod x^{25} -1.}$ Exchanging the order of the polynomials $f_i (x)$ in this sum provides other conventional $3-$associates of linear permutations.
Indeed, take \begin{equation} f_0 (x)= x^{20}+2x^{15}+1, \end{equation} which is a unit in $R_{3,25}$. Take also $f_1 (x)=x^{21}+2x^{16}+x$ and $f_2 (x) =x^{22}+2x^{17}+x^2$, which are units in $R_{3,25}$ as well, since they are left-cyclic shifts of $f_0 (x)$ (See Proposition~\ref{deslciclico}). Since $f_0 ^{-1} (x)= 2x^{20}+2x^{10}+x^5 +2$, it follows that \begin{eqnarray} f_1 ^{-1} (x)&=& 2x^{24}+2x^{19}+2x^9 +x^4\equiv x^{24}f_0 ^{-1} (x)\mbox{ and }\nonumber \\ f_2 ^{-1} (x)&=& 2x^{23}+2x^{18}+2x^8 +x^3\equiv x^{23}f_0 ^{-1} (x)\nonumber , \end{eqnarray} so the polynomials \begin{equation} f_{\sigma} (x)\equiv \sum_{i=0}^2 f_{\sigma(i)} (x) e_i (x)\mod x^{25} -1, \end{equation} for $\sigma \in S_3$ (the symmetric group), and \begin{equation} f_{\sigma} ^{-1} (x)\equiv \sum_{i=0}^2 f_{\sigma(i)}^{-1} (x) e_i (x)\mod x^{25} -1 \end{equation} are inverses of each other. The corresponding linearized $3-$associates and their respective compositional inverses are described in Table~\ref{tab:tabelaii}. \begin{table*}[htb!]
\centering \caption{Some linear permutations over $\mathbb{F}_{3^{25}}$ and their compositional inverses\label{tab:tabelaii}} \begin{tabular}{ccc} \hline $S_3$&$F(x)$ & $F^{-1}(x)$ \\ \hline \multirow{2}{*}{$Id$} & $2x^{[22]}+2x^{[21]}+2x^{[16]}+x^{[12]}+2x^{[11]}$ & $2x^{[24]}+2x^{[19]}+2x^{[14]}+x^{[13]}+2x^{[9]} $ \\ & $+x^{[7]} +2x^{[6]} +2x^{[2]} +2x^{[1]}$ & $ +2x^{[4]} +2x^{[3]}$ \\ \hline \multirow{2}{*}{$(012)$} & $2x^{[22]}+2x^{[20]}+2x^{[17]}+2x^{[12]}+x^{[10]}$ & $2x^{[23]}+2x^{[18]}+x^{[15]}+2x^{[13]}+2x^{[8]} $ \\ & $+2x^{[7]} +x^{[5]} +2x^{[2]} +2x$ & $+2x^{[5]} +2x^{[3]}$ \\ \hline \multirow{2}{*}{$(021)$} & $2x^{[21]}+2x^{[20]}+2x^{[15]}+x^{[11]}+2x^{[10]}$ & $2x^{[20]}+2x^{[15]}+x^{[14]}+2x^{[10]}+2x^{[5]} $ \\ & $+x^{[6]} +2x^{[5]}+ 2x^{[1]}+2x$ & $+2x^{[4]} +2x$ \\ \hline \multirow{2}{*}{$(02)$}& $2x^{[21]}+2x^{[20]}+2x^{[16]}+2x^{[11]}+x^{[10]}$ & $2x^{[24]}+2x^{[19]}+x^{[15]}+2x^{[14]}+2x^{[9]}$ \\ & $+2x^{[6]}+x^{[5]}+2x^{[1]}+2x$ & $+2x^{[5]}+2x^{[4]}$ \\ \hline \multirow{2}{*}{$(12)$} & $2x^{[22]}+2x^{[21]}+2x^{[17]}+2x^{[12]}+x^{[11]}$ & $2x^{[23]}+2x^{[18]}+x^{[14]}+2x^{[13]}+2x^{[8]}$ \\ & $+2x^{[7]}+x^{[6]}+2x^{[2]}+2x^{[1]}$ & $+2x^{[4]}+2x^{[3]}$ \\ \hline \multirow{2}{*}{$(01)$}& $2x^{[22]}+2x^{[20]}+2x^{[15]}+x^{[12]}+2x^{[10]}$ & $2x^{[20]}+2x^{[15]}+x^{[13]}+2x^{[10]}+2x^{[5]}$\\ & $+x^{[7]}+2x^{[5]}+2x^{[2]}+2x$ & $+2x^{[3]}+2x$\\ \hline \end{tabular} \vspace{0.1 cm} \end{table*} \end{example} In particular, Theorem~\ref{tp} is also a very useful tool in the search for involutions over $\mathbb{F}_{q^n}$. \begin{corollary}\label{involucoes} Let $E=\left\{e_1 (x),..., e_t (x)\right\}$ be the set of primitive idempotents of $R_{q,n}$. $F(x)\in \mathcal{L}_n\left(\mathbb{F}_q \right)$ is a linear involution over $\mathbb{F}_{q^n}$ if and only if its conventional $q-$associate is \begin{equation} f(x)\equiv \sum_{i=1}^t f_i (x) e_i (x) \mod x^n -1, \end{equation} in which $f_i ^2 (x)\equiv 1\mod x^n -1$, for all $1\leq i \leq t$.
\end{corollary} Notice that $R_{q,n}$ is not a finite field; in fact, it is a ring with zero divisors. Hence, a polynomial of degree $n$ may have more than $n$ roots in $R_{q,n}[X]$. In particular, $f_i ^2 (x)-1=0$ has at least two solutions in $R_{q,n}$: $f_i (x)\in \{1,-1\} \subset \mathbb{F}_q$. With this trivial information, it becomes very simple to construct some involutions over $\mathbb{F}_{q^n}$, namely, \begin{equation}\label{maisoumenosinv} \pm e_0 (x)\pm e_1 (x)\pm \ldots \pm e_t (x) \mod x^n -1 \end{equation} describes several conventional $q-$associates of involutions over $\mathbb{F}_{q^n}$. In addition, the involutions obtained in~\eqref{maisoumenosinv} may be used to get other linear involutions over $\mathbb{F}_{q^n}$. \begin{example} From Corollary~\ref{corolarioidemp}, the respective polynomials below are in fact primitive idempotents in $R_{11,9}$ \begin{eqnarray*} e_0 (x)&=& 5x^8 +5x^7 +5x^6 +5x^5 +5x^4 +5x^3 +5x^2\nonumber \\ &+& 5x+5\\ e_1 (x)&=& 6x^8 +6x^7 +10x^6 +6x^5+ 6x^4 +10x^3 +6x^2 \nonumber\\ &+&6x+10\mbox{ and }\\ e_2 (x)&=& 7x^6 +7x^3 +8. \end{eqnarray*} As noted before, $\displaystyle{f(x)\equiv \sum_{i=0}^2 f_i (x) e_i (x)}\mod x^{9}-1$, in which $f_i (x)\in\{1, 10\}\subset \mathbb{F}_{11}$, describes conventional $11-$associates of involutions $F(x) \in \mathcal{L}_9 \left(\mathbb{F}_{11}\right)$ over $\mathbb{F}_{11^{9}}$. Table~\ref{tab:tabelaiii} presents all involutions obtained in this way. Such involutions may be applied again to Corollary~\ref{involucoes} in order to get other ones. \begin{table*}[h!]
\centering \caption{Involutions over $\mathbb{F}_{11^{9}}$\label{tab:tabelaiii}} \begin{tabular}{cc} \hline $f_0 (x),f_1 (x),f_2 (x)$&$F(x)$ \\ \hline $1,1,1$& $x$ \\ $1,1,10$ & $8x^{[6]}+8x^{[3]}+7x$ \\ $1,10,10$ & $10x^{[8]}+10x^{[7]}+10x^{[6]}+10x^{[5]}+10x^{[4]}+10x^{[3]}+10x^{[2]}+10x^{[1]}+9x$ \\ $1,10,1$& $10x^{[8]}+10x^{[7]}+2x^{[6]}+10x^{[5]}+10x^{[4]}+2x^{[3]}+10x^{[2]}+10x^{[1]}+3x$ \\ $10,1,1$& $x^{[8]}+x^{[7]}+x^{[6]}+x^{[5]}+x^{[4]}+x^{[3]}+x^{[2]}+x^{[1]}+2x$ \\ $10,10,1$& $3x^{[6]}+3x^{[3]}+4x$ \\ $10,1,10$& $x^{[8]}+x^{[7]}+9x^{[6]}+x^{[5]}+x^{[4]}+9x^{[3]}+x^{[2]}+x^{[1]}+8x$ \\ $10,10,10$& $10x$ \\ \hline \end{tabular} \vspace{0.1 cm} \end{table*} \end{example} \section{Conclusion}\label{conclusion} In this work, we have presented a simple way to construct a large number of linear permutations over $\mathbb{F}_{q^n}$ and their respective compositional inverses, by examining the projections of the corresponding conventional $q-$associates over the simple components generated by primitive idempotents of $R_{q,n}$. In particular, the matter of obtaining involutions has also been addressed here. Such a construction is simple and very effective since, for instance, in possession of only a pair $\left\{f(x),f^{-1}(x)\right\}$ in $R_{q,n}$ and its left-cyclic shifts, it is possible to provide several linear permutations and their compositional inverses. As a future problem, it will be interesting to analyze how many fixed points these linear permutations/involutions have and how they behave, since this is a current research problem due to applications in cryptography. \section*{Acknowledgment} The authors would like to thank... \ifCLASSOPTIONcaptionsoff \fi
\section{Introduction} The holographic approach based on the string/gauge duality is a powerful method to study various thermodynamical and non-perturbative properties of the Yang-Mills theory,
especially when the chemical potential ($\mu$) of the fundamental fermions plays an important role. Various results have been obtained in such cases, and they give us many kinds of insight into the phase diagram of QCD. (See for example \cite{mateos2,Horigome,GKTT,GKTTT}.) On the other hand, many non-perturbative investigations in QCD have been performed by the lattice gauge theory. However, the analysis has been restricted to the case of the imaginary chemical potential, $\mu_I$, to avoid the sign problem of the fermion determinant (See for example \cite{Bilgici:2008qy,Bilgici:2009tx}). In QCD with $\mu_I$, the Roberge-Weiss (RW) phase transition and its periodicity with respect to $\mu_I$ have been pointed out as remarkable features \cite{rw}. This observation is understood from the periodicity of the partition function. It would be meaningful to see how this point is realized in the holographic approach, in order to clarify its validity. Such a holographic investigation was in fact given ten years ago, based on the Euclidean space-time geometry \cite{Aarts,Raff,Morita}. In Ref.~\cite{Aarts}, the periodic RW transition has been shown by adding the two-form Kalb-Ramond field $B$ in the D3/D7-brane system of the type IIB model. After that, a similar analysis has been performed in the D4/D8-brane system of the type IIA model in Ref.~\cite{Raff}, and also in Refs.~\cite{Bigazzi:2014qsa, Morita} with a slightly different method. In these approaches, the essential point is the introduction of the $B$ field with $dB=0$ and its potential $V_{A}(\alpha)$, where $\alpha$ corresponds to the phase of the Polyakov loop \cite{witten1,Aharony-w}. It is introduced as \footnote{The disc $D_2$ is a part of the black hole geometry considered here.} \beq\label{B-alpha} \alpha = \int_{D_2} {B\over 2\pi\alpha'}\, .
\eeq This parameter $\alpha$ discriminates the periodic vacua in the deconfinement phase with spontaneously broken $Z_N$ symmetry. On the other hand, $\mu_I$ comes from the bulk U(1) gauge field $F(=F_{\mu\nu}dx^{\mu}\wedge dx^{\nu})$, and it appears in the theory combined with $\alpha$ as \beq\label{Bfield} \alpha-{\mu_I \over T} = \int_{D_2}\left(F+ {B\over 2\pi\alpha'}\right)\, . \eeq The important point is that the potential $V_{A}(\alpha)$ is periodic under $\alpha \to \alpha +2\pi /N_c$ due to the gauge symmetry of the boundary SYM theory \cite{rw} \footnote{See also the appendix A.}. As a result, the total effective potential, the sum of $V_{A}(\alpha)$ and the probe action, is also periodic under $\mu_I/T \to \mu_I/T +2\pi /N_c$, since a finite shift of $\mu_I/T$ can be absorbed into $\alpha$, as understood from (\ref{Bfield}). The role of $\mu_I$ in the probe action is to control the minimum of the total effective potential of $\alpha$, as seen in the RW phase transition \cite{rw, Aarts}. The purpose of this paper is to extend the analysis performed for $\mu_I$ in the top-down models to a bottom-up model which has been used to study the color superconductivity in QCD \cite{Gkntt,Fadafan,Basu}. Through this extension, we obtain phase diagrams for our model in the region of $\mu_I$, including the RW transitions, and we discuss an implication of our holographic model. In particular, when the back-reaction of flavor fermions is included, we find a $\mu_I$-dependent critical curve of the confinement/deconfinement transition. In this case, we can show the usefulness of a simple continuation, $\mu\to i \mu_I$, in terms of the critical curve obtained for real $\mu$. This usefulness is supported by the fact that we can set $\alpha =0$ near $\mu_I=0$. \vspace{.3cm} In the next section, the extended bottom-up model is proposed and the actions are estimated for the confinement and deconfinement phases.
In Sec.~3, the RW transitions are investigated in the probe approximation and for the back-reacted case. Then the phase diagrams are given. In Sec.~4, the validity of the continuation near $\mu=0$ is discussed by comparing the critical curve near $\mu=0$ for the holographic model with the one of the dual QCD theories. A problem related to a wider periodicity of the potential of $\mu_I$ is discussed in Sec.~5. Our summary is given in the final section. \vspace{.3cm} \vspace{.3cm} \section{A bottom-up model} A bottom-up model, which was used before to study the superconductivity of QCD \cite{Gkntt}, is given here in a slightly modified form by the following action in the Euclidean space-time, in order to investigate QCD with the imaginary chemical potential. \beq S = S_{\rm bu}+S_{F_{(4)}}\,, \eeq where the first term is given as \bea\label{bottom-up} S_{\rm bu} &=& \int d^{6} x \sqrt{-g}\, \left({\cal L}_\mathrm{Gravity}+\tilde{\cal L}_\mathrm{CSC} \right) \,, \\ {\cal L}_\mathrm{Gravity}&=& {1\over 2\kappa_6^2}\left( {\cal R} + {20 \over L^2}\right)\,, \label{bulk-L} \\ \tilde{ {\cal L}}_\mathrm{CSC} &=& - {1 \over 4} \tilde{F}^2 - |D_{_\mu} \psi|^2 - m^2 |\psi|^2\, , \label{probe-L} \\ \tilde{F}_{\mu\nu}&=& \partial_\mu A_\nu-\partial_\nu A_\mu+{B_{\mu\nu}\over 2\pi\alpha'}\,,\quad D_{\mu} \psi = (\partial_{\mu}-iqA_{\mu})\psi\, . \eea This action $S_{\rm bu}$ is proposed as a model dual to the SYM theory with strongly interacting flavor fermions with the chemical potential $\mu$ when the space-time is Lorentzian and $B_{\mu\nu}$ is neglected. In the gravitational action ${\cal L}_\mathrm{Gravity}$, the scale $L$ denotes the AdS radius. In the present case, the theory is set in the Euclidean space-time. It is obtained by the Wick rotation of both the time and the fields. Furthermore, the Kalb-Ramond field is added through $\tilde{F}_{\mu\nu}$. This form of $\tilde{F}_{\mu\nu}$ is implied from the D-brane action.
And $\psi$ denotes a charged scalar, which is supposed to be dual to the Cooper pair of the color-charged fermions. Its baryon number charge is assigned as $q$. We could show that there is no non-trivial solution for $\psi$ in the region of small and negative $\mu^2$ \cite{Gkntt}. Since $\mu$ is imaginary for negative $\mu^2$, we can neglect $\psi$ hereafter because we are considering the case with the imaginary chemical potential. Then, the system can be solved by setting $\tilde{A}_0=\tilde{\phi}$, where $\tilde{A}_\mu$ is defined by \beq \tilde{F}_{\mu\nu} \equiv \partial_\mu \tilde{A}_\nu-\partial_\nu \tilde{A}_\mu\,. \eeq This replacement can be justified since $B$ is introduced with $dB=0$. In this case, we find the same form of the equations of motion as in the real-$\mu$ theory given in the Lorentzian space-time. However, in the present case, we must notice that the solution $\tilde{\phi}$ is not simply a chemical potential but a combination of the chemical potential and $\alpha$, as found from (\ref{Bfield}). This fact implies that we can obtain the solutions with $\tilde{\phi}$ for $\mu_I$ from the ones of the real-$\mu$ theory by the replacement $\mu/T\to i(\mu_I/T-\alpha)$. \vspace{.3cm} As for the action $S_{F_{(4)}}$, it is necessary for studying the potential of $\alpha$. Its explicit form and an effective potential of the $B$ field are given in Sec.~2.2. \subsection{Bulk solutions} As mentioned above, neglecting $\psi$, we can obtain solutions with the imaginary chemical potential $\mu_I$. We give three solutions dual to the ground states of the pure YM fields, and they are compared. Two of them are solutions of ${\cal L}_\mathrm{Gravity}$ only, so they are independent of $\mu$. The third solution is constructed by considering the back-reaction from $\tilde{F}^2$.
\vspace{.3cm} \noindent (1) AdS soliton solution: This represents the low temperature confinement phase, and it is given as \beq\label{Soliton} ds^2=r^2(\delta_{\mu\nu}dx^{\mu}dx^{\nu}+f(r)dw^2)+{dr^2\over r^2f(r)}\, , \eeq where \beq f(r)=1-\left({r_0\over r}\right)^5\, , \quad r_0={2\over 5R_w}\, , \eeq and $2\pi R_w$ denotes the compactified length of $w$. \vspace{.3cm} \noindent (2) The AdS-Schwarzschild solution: This solution corresponds to the high temperature deconfinement phase, \beq\label{Sch} ds^2=r^2(fdt^2+\sum_{i=1}^{3} (dx^i)^2 +dw^2)+{dr^2\over r^2f(r)}\, , \eeq where \beq f(r)=1-\left({r_0\over r}\right)^5\, , \quad r_0={2\over 5R_w}\, . \eeq \vspace{.3cm} \noindent (3) Reissner-Nordstr\"om (RN) solution: In this case, the back-reaction of the flavor is taken into account. It represents the high temperature deconfinement phase. The RN background is given as the solution of the following action, \beq\label{B-F} S_{G} = \int d^{6} x \sqrt{-g} \left\{{1\over 2\kappa_6^2}\left( {\cal R} + {20 \over L^2}\right)- {1 \over 4} \tilde{F}^2 \right\}\, , \eeq which includes the flavor part. We get the following RN solution, \beq\label{RN} ds^2=r^2(gdt^2+\sum_{i=1}^{3} (dx^i)^2 +dw^2)+{dr^2\over r^2g(r)}\, , \eeq \beq g=1-\left(1-{3\tilde{\mu}^2\over 8 r_+^2}\right)\left({r_+\over r}\right)^5-{3\tilde{\mu}^2r_+^6\over 8r^8}\, , \label{RN2} \eeq \beq\label{RN3} \tilde{A}_0=\tilde{\phi}=\tilde{\mu}\left(1-{r_+^3\over r^3}\right)\, . \eeq Here $r_+$ denotes the horizon of the charged black hole, and the temperature is given as \beq\label{T-RN} T={1\over 4\pi}\left(5r_++{9\tilde{\mu}^2\over 8r_+}\right)\, . \eeq Here, $\tilde{\mu}$ is defined by \beq\label{tildmu} -{ \tilde{\mu}\over T}= \int_{D_2} \tilde{F} = \int_{D_2}\left(F+ {B\over 2\pi\alpha'}\right) = \alpha-{\mu_I \over T} \, .
\eeq \vspace{.3cm} The action densities for these solutions, (1) AdS soliton, (2) AdS-Schwarzschild, and (3) RN, are given as \bea S_{1}/V_3 &=& -r_0^5v_2 =-r_0^5{4\pi\over 5r_0}{1\over T}\,, \\ S_{2}/V_3 &=& -r_0^5v_2=-r_0^5\left({4\pi\over 5r_0}\right)^2\,, \\ S_{3}/V_3 &=& -r_+^5\left(1-{3\tilde{\mu}^2\over 8r_+^2}\right)v_2= -r_+^5\left(1-{3\tilde{\mu}^2\over 8r_+^2}\right){4\pi\over 5r_0}{1\over T}\,, \label{RN-potential} \eea where $v_2=\int_0^{\beta} d\tau\int dw$ and $V_3=\int dxdydz$. In the $\mu_I$--$T$ plane, we can find the phase diagram by comparing the above three actions. We find that the phase of solution (2) is not realized when solution (3) is added. \vspace{.5cm} \subsection{Potential of the Kalb-Ramond field} Consider the Kalb-Ramond field $B$, which is introduced in terms of $\alpha$ defined by (\ref{B-alpha}). The present bottom-up model would be related to the D4/D8 model of the type IIA string theory. Then the bulk action, which could provide the potential of $\alpha$, might be given as \be\label{Pot-1} S_{F_{(4)}}=-{1\over 2\kappa_6^2}\left(\int d^{6}x\,\sqrt{g}\, \frac{1}{12} F_{(4)}^2 - \int B\wedge F_{(4)}\right ). \ee In the RN background, (\ref{RN})-(\ref{T-RN}), this action is estimated for constant $F_{123w}$ and $\alpha$. We obtain \be\label{Pot-2} S_{F_{(4)}}=-{V_4\over 2\kappa_6^2}\left({\beta\over 6r_+^3} F_{123w}^2-\alpha F_{123w} \right ), \ee where $V_4=\int d^3x~ dw=\beta V_3$, $V_3=\int d^3x$, and $r_+$ is used as the lower limit of the radial integration $\int_{r_+}^{\infty} dr$. It should be replaced by $r_0$ in the case of solutions (1) and (2). {Solving the equation of motion of $F_{123w}$, we find the solution $F_{123w}=3 r_+^3 \alpha/\beta$}. Then the potential is obtained as \be\label{Pot-3} S_{F_{(4)}}/V_3 \equiv V_A={1\over 2\kappa_6^2} {3r_+^3\over 2} \alpha^2 .
\ee Due to the gauge symmetry of the dual SYM theory, the above result should be written as \be\label{Pot-4} V_A=\min_{n\in {\mathbb Z}}{1\over 2\kappa_6^2}{3r_+^3\over 2} \left(\alpha - {2\pi n\over N_c}\right)^2. \ee \section{Roberge-Weiss transitions} \subsection{Probe approximation} In the probe approximation, the gauge term $\tilde{\cal L}_\mathrm{CSC}$ is treated as a probe in the background given by the gravitational part. In this case, the background actions are given by $S_{1}$ and $S_{2}$, and we find the critical line of the confinement/deconfinement transition by comparing the two bulk actions. The critical line is found as \beq T={5r_0\over 4\pi}\, . \eeq This is independent of $\mu_I$, so the critical line is common with the case of real $\mu$. The probe part defined by (\ref{probe-L}) is solved in these backgrounds. The equation of motion of $\tilde{\phi}$ is obtained with the ansatz $\tilde{A}=\tilde{A}_{\mu}dx^{\mu}=\tilde{\phi}(r)\,dt$. In the confinement phase, the background is given by solution (1), and the equation for $\tilde{\phi}$ is given as \begin{equation} \tilde{\phi}''+\left(\frac{4}{r}+\frac{f'}{f}\right)\tilde{\phi}' =0\;. \label{eq2} \end{equation} We find that the allowed solution of this equation is $\tilde{\phi}={\rm const}$. This solution gives no contribution to the free energy. On the other hand, the Kalb-Ramond field has no meaning in the confinement phase. So there is no new phase transition in this phase. \vspace{.3cm} An interesting phenomenon is observed in the deconfinement phase with the background of solution (2). In this case, we have the equation \begin{equation} \tilde{\phi}''+\frac{4}{r}\tilde{\phi}'=0\;. \label{eq22} \end{equation} This equation is solved as \be\label{probe2} \tilde{\phi}=\tilde{\mu} \left( 1-{r_0^3\over r^3}\right)\, . \ee This solution provides a non-trivial contribution to the free energy, as shown below.
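Both statements above — the $\mu_I$-independent critical line obtained from $S_1=S_2$, and the claim that (\ref{probe2}) solves (\ref{eq22}) — can be checked numerically. The following Python sketch uses our own function names and a finite-difference residual; it is an illustration, not part of the model:

```python
import math

def s1_over_v3(r0, T):
    # AdS-soliton action density: S_1/V_3 = -r0^5 * (4*pi/(5*r0)) / T
    return -r0**5 * (4 * math.pi / (5 * r0)) / T

def s2_over_v3(r0):
    # AdS-Schwarzschild action density: S_2/V_3 = -r0^5 * (4*pi/(5*r0))^2
    return -r0**5 * (4 * math.pi / (5 * r0))**2

def critical_temperature(r0):
    # S_1 = S_2 gives 1/T = 4*pi/(5*r0), i.e. T = 5*r0/(4*pi)
    return 5 * r0 / (4 * math.pi)

def probe_residual(r, mu=1.0, r0=1.0, h=1e-4):
    # numerical residual of phi'' + (4/r) phi' for phi = mu*(1 - r0^3/r^3);
    # it should vanish if the solution of Eq. (probe2) indeed solves Eq. (eq22)
    phi = lambda x: mu * (1.0 - r0**3 / x**3)
    d1 = (phi(r + h) - phi(r - h)) / (2 * h)
    d2 = (phi(r + h) - 2 * phi(r) + phi(r - h)) / h**2
    return d2 + (4.0 / r) * d1
```

Equating the two action densities reproduces $T_c=5r_0/(4\pi)$, and the residual of the probe equation vanishes to numerical accuracy.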
The probe action is given as \bea S_{\rm CS}^{\rm E} &=& -\int d^6x \sqrt{-g} \left(- {1 \over 4} \tilde{F}^2 \right)\\ &=&\int d^5x \int_{r_0}^{\infty} dr {r^4\over 2}(\tilde{\phi}')^2 \\ &=&\int d^3x~ V_f\,, \eea where \be S_{\rm CS}^{\rm E}/V_3 = V_f={3\over 2}\left({4\pi\over 5}\right)^3 T \tilde{\mu}^2\, . \ee Then $\tilde{\mu}$ is replaced by $\mu_I$ and $\alpha$ through (\ref{tildmu}), and we obtain \be V_{\rm f}={3\over 2}\left({4\pi T\over 5}\right)^3 (\alpha-\mu_I/T)^2\, . \ee \begin{figure}[htbp] \vspace{.3cm} \begin{center} \includegraphics[width=7.0cm,height=7cm]{Probe-Veff0.eps} \includegraphics[width=7.0cm,height=7cm]{Probe-Veff06.eps} \caption{$V_{\rm eff}$ for $\mu_I/T=0$ (left) and $\mu_I/T=0.6a$ (right). } \label{Phase-diagram-3-2} \end{center} \end{figure} \begin{figure}[htbp] \vspace{.3cm} \begin{center} \includegraphics[width=10.0cm,height=7cm]{ima-ph-dg6.eps} \caption{Phase diagram for the probe approximation. The horizontal critical line separates the confinement and deconfinement phases. In the large-$T$ deconfinement phase, the RW transitions are shown by the vertical critical lines. The points $A_1 \sim B_2$ represent tri-critical points.} \label{Phase-diagram-3-3} \end{center} \end{figure} \vspace{.3cm} Now, from this $V_{\rm f}$ and (\ref{Pot-4}), where $r_+$ is replaced by $r_0(=4\pi T/5)$, we find \be\label{Pot-5} V_{\rm eff} = \,V_A+V_f(\alpha)\,=\, \min_{n\in {\mathbb Z}}{1\over 2\kappa_6^2}{3\over 2}\left({4\pi T\over 5}\right)^3 \left(\alpha - {2\pi n\over N_c}\right)^2 + {3\over 2}\left({4\pi T\over 5}\right)^3 (\alpha-\mu_I/T)^2\, . \ee From this effective potential we can see the Roberge-Weiss transition for the state defined by the value of $\alpha$. An example of this transition can be read from Fig.~\ref{Phase-diagram-3-2}, in which we can see the transition from the $\langle\alpha\rangle=0$ to the $\langle\alpha\rangle=2\pi/3$ vacuum state, namely from the phase (b0) to (b1) in Fig.~\ref{Phase-diagram-3-3}.
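The RW jump encoded in (\ref{Pot-5}) can be made explicit by minimizing the potential numerically. In this sketch we take $T=1$, $N_c=3$, and $1/2\kappa_6^2=10$ (the value used for the figures); the crude grid search is our own choice, not part of the model:

```python
import math

KAPPA_TERM = 10.0   # 1/(2*kappa_6^2), the value chosen for the figures
NC = 3

def v_eff(alpha, muI_over_T, T=1.0, nmax=5):
    # Eq. (Pot-5): min over n of the B-field term, plus the probe term
    c = 1.5 * (4 * math.pi * T / 5)**3
    v_a = min(KAPPA_TERM * c * (alpha - 2 * math.pi * n / NC)**2
              for n in range(-nmax, nmax + 1))
    v_f = c * (alpha - muI_over_T)**2
    return v_a + v_f

def argmin_alpha(muI_over_T, lo=-math.pi, hi=math.pi, steps=20001):
    # crude grid search for the vacuum value <alpha>
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda a: v_eff(a, muI_over_T))
```

Scanning $\mu_I/T$ through $\pi/N_c$, the minimizer $\langle\alpha\rangle$ jumps from near $0$ to near $2\pi/N_c$, which is the RW transition between the phases (b0) and (b1).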
The resultant phase diagram obtained from the above $V_{\rm eff}$ is shown in Fig.~\ref{Phase-diagram-3-3}. {Here, the relative ratio of the probe term and the $B$-field term $V_A$ is set by the relation $1/2\kappa^2_6=10$ for simplicity.} \vspace{.5cm} Finally, we give an effective potential under the quenched approximation of the gauge field configurations which provide the real Polyakov loop. This potential is obtained from (\ref{Pot-5}) by considering the functions at $\alpha=2\pi n$, where ${n\in {\mathbb Z}}$, and it is found by picking up the minimum parts, \beq V_{\rm eff}^{(0)} = \min_{n\in {\mathbb Z}} {3\over 2}\left({4\pi T\over 5}\right)^3 (2\pi n-\mu_I/T)^2\, . \eeq This potential has the period $2\pi$ with respect to $\mu_I/T$, as expected, and it is shown in Fig.~\ref{Potential-2}. This period can be understood from the phase of the boundary condition imposed on the fundamental fermions of the theory. In the present article, this potential is not used; however, this periodicity is seen, for example, in the calculation of the chiral condensate in the gauge configurations of the real Polyakov loops \cite{Bilgici:2008qy,Bilgici:2009tx}. This point is discussed further in Sec.~5. \begin{figure}[htbp] \vspace{.3cm} \begin{center} \includegraphics[width=10.0cm,height=7cm]{Polyakov-1.eps} \caption{$V_{\rm eff}^{(0)}$ } \label{Potential-2} \end{center} \end{figure} \vspace{.3cm} \subsection{Back-reacted case} When the back-reaction of the flavor part is taken into account, the deconfinement background is replaced by the RN solution (3) since $S_3<S_2$. The action $S_3$ is given in (\ref{RN-potential}). We notice that the $\tilde{\mu}$ dependence of $S_3$ also comes from $r_+$. Actually, by using (\ref{T-RN}), $r_+$ is written as \beq\label{r-p} r_+={2\pi T\over 5}\left(1+\sqrt{1-{45\tilde{\mu}^2\over 8}/(2\pi T)^2}\right)\, . \eeq This solution is valid for $|\tilde{\mu}|/T<\sqrt{32\pi^2/45}(=2.65)$ due to the reality of $r_+$.
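Equation (\ref{r-p}) is just the larger root of the quadratic relation implied by (\ref{T-RN}); the inversion and the reality bound quoted above can be checked directly (function names are ours, in units where the AdS radius is set to one):

```python
import math

def temperature(r_plus, mu):
    # Hawking temperature of the RN solution, Eq. (T-RN)
    return (5 * r_plus + 9 * mu**2 / (8 * r_plus)) / (4 * math.pi)

def horizon(T, mu):
    # larger root of 5 r_+^2 - 4 pi T r_+ + 9 mu^2 / 8 = 0, Eq. (r-p)
    disc = 1.0 - (45 * mu**2 / 8) / (2 * math.pi * T)**2
    return (2 * math.pi * T / 5) * (1 + math.sqrt(disc))
```

Inserting the root back into the temperature formula recovers the input $T$, and the discriminant becomes negative exactly at $|\tilde{\mu}|/T=\sqrt{32\pi^2/45}\approx 2.65$.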
For $|\tilde{\mu}|/T>\sqrt{32\pi^2/45}$, the system becomes unstable and decays to the stable confinement phase expressed by solution (1), the AdS-soliton background. \begin{figure}[htbp] \vspace{.3cm} \begin{center} \includegraphics[width=10.0cm,height=7cm]{im-ph-dg3.eps} \caption{Phase diagram of the back-reacted case for $r_0=1$ and $\alpha=0$. The lines (a), (b) and (c) represent $|\mu_I|/T=\pi/3, \pi/2,$ and $\pi$, respectively. } \label{Phase-diagram-2} \end{center} \end{figure} To observe the RW transitions, it is helpful to see the phase diagram for $\alpha=0$, namely for $\tilde{\mu}=\mu_I$. This diagram is obtained by comparing $S_3$ with $S_1$ for $\alpha=0$, and it is shown in Fig.~\ref{Phase-diagram-2}. The unstable region of solution (3) mentioned above is shown as region (B) in this figure for $\alpha=0$; region (A) denotes the Reissner-Nordstr\"om deconfinement phase and region (C) the AdS soliton confinement phase. Since the unstable region (B) must be replaced by the confinement phase, the deconfinement region (A) is restricted to a definite region of $|\mu_I|/T$, $|\mu_I|/T\leq\sqrt{32\pi^2/45}$, as mentioned above. This restriction comes from the back-reaction. \vspace{.3cm} Now we study the RW transitions in the deconfinement phase of region (A) by reviving $\alpha$. In the present case, the effective potential is given as \be V_{\rm eff} = \,V_A+V_f^{\rm RN}\,=\,\min_{n\in {\mathbb Z}}{1\over 2\kappa_6^2}{3r_+^3\over 2} \left(\alpha - {2\pi n\over N_c}\right)^2 + V_f^{\rm RN}\, . \label{genericmass-1} \ee The first term $V_A$ is obtained in (\ref{Pot-4}) for the RN background, and the second term is given as \beq\label{Vf} S^{\rm RN} ={1\over 2\kappa_6^2} \int d^{6} x \sqrt{-g} \left\{ {\cal R} + {20 \over L^2}- {1 \over 4} \tilde{F}^2 \right\}=\int d^3x ~V_f^{\rm RN}\, .
\eeq Using (\ref{RN-potential}), we find \be V_f^{\rm RN} = -{1\over 2\kappa_6^2}r_+^5\left(1-{3\tilde{\mu}^2\over 8r_+^2}\right){4\pi\over 5r_0}{1\over T}\, . \ee This potential is the part dual to the combined system of the SYM fields and the flavor fermions with an imaginary chemical potential $\mu_I$. \vspace{.3cm} Here, we notice that $V_A$ also depends on $\tilde{\mu}$ through $r_+$. This fact can be interpreted as a kind of back-reaction to the Kalb-Ramond potential from the flavor fermions. In order to understand this back-reaction, we restrict ourselves to the region of small $|\mu_I/T-\alpha|$. Then we can expand $V_{\rm eff}$ in powers of $(\mu_I/T-\alpha)^2$. The expanded potential is retained up to the order of $|\mu_I/T-\alpha|^2$, and we obtain \bea V_A &=&\, \min_{n\in {\mathbb Z}}{1\over 2\kappa_6^2} \left({64\pi^3\over 125}-{27\pi\over 50} \left(\alpha-{\mu_I\over T} \right)^2\right) T^3 \left(\alpha - {2\pi n\over N_c}\right)^2+ \cdots \label{VA1} \\ V_f^{\rm RN} &= &{1\over 2\kappa_6^2} \left(- {1024\pi^5\over 3125}+{96\pi^3\over 125}\left(\alpha-\frac{\mu_I}{T}\right)^2 \right) T^3 +\cdots\,. \label{genericmass-2} \eea We find that this result is almost equal to the probe approximation, except that the coefficient of $V_A$ is slightly modified. In fact, we can see a similar behavior of the potential to that of the probe approximation. Then we find the expected RW transitions in this case also. Furthermore, the qualitative behaviors of the potential are maintained even if the full form of the potential is used. So we show here the RW transitions in terms of the full form of the potential. \begin{figure}[htbp] \vspace{.3cm} \begin{center} \includegraphics[width=7.0cm,height=7cm]{RN-RW-Veff-f0.eps} \includegraphics[width=7.0cm,height=7cm]{RN-RW-Veff-f06.eps} \caption{Full form of $V_{\rm eff}$ for the RN back-reacted case.
Left is for $\mu_I/T=0$, and right is for $\mu_I/T=0.6a$ with the period $a=2\pi/25$.} \label{Phase-diagram-6} \end{center} \end{figure} \vspace{.3cm} \begin{figure}[htbp] \vspace{.3cm} \begin{center} \includegraphics[width=10.0cm,height=7cm]{ima-ph-dg5.eps} \caption{Phase diagram for the RW transitions under the back-reacted background solution for $N_c=3$. In the RN deconfinement phase, the phases are separated by the vertical critical lines into the regions (b-1) $\sim$ (b2) for $\langle\alpha\rangle=-2\pi/3,~0,~2\pi/3, ~4\pi/3$. The area (a) corresponds to the AdS soliton confinement phase. The points $A_1$ and $A_2$ represent the tri-critical points.} \label{Phase-diagram-4} \end{center} \end{figure} In Fig.~\ref{Phase-diagram-6}, the effective potential is shown for two values of $\mu_I/T$ to see the RW transition from $\langle\alpha\rangle =0$ to $\langle\alpha\rangle =a$ with the period $a$, which should be set as $2\pi/N_c$. It is easy to find other periodic transitions. The resultant phase diagram with the RW transitions is shown in Fig.~\ref{Phase-diagram-4}. Here, we should notice that the periodicity of $\mu_I/T$ in $V_{\rm eff}$ implies $|\mu_I/T|<\pi/N_c$. This and the constraint $|\mu_I|/T\leq\sqrt{32\pi^2/45}$ given above lead to the constraint \beq N_c>1.18. \eeq In spite of the fact that $N_c$ must be a large integer to justify the holographic approach, we are allowed to use the holography down to $N_c=2$ at this stage. \section{Comparison with QCD near $\mu=0$} As shown above, the $\mu$-dependent critical line obtained for real $\mu$ can be continued to the imaginary-$\mu$ region and used there. The form of the critical line near $\mu=0$ is given as \cite{EN} \beq\label{smallmu} {T\over T_0}=1-a \left({\mu \over T_0}\right)^2+\cdots\, , \eeq where $T_0$ denotes the critical temperature at $\mu=0$, and $a$ is a dimensionless constant, which depends on the parameters of the theory.
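The bound $N_c>1.18$ quoted above is simply the requirement that the first RW transition point, $|\mu_I|/T=\pi/N_c$, lies inside the deconfinement window $|\mu_I|/T\leq\sqrt{32\pi^2/45}$; the arithmetic is one line:

```python
import math

# pi/N_c < sqrt(32*pi^2/45)  =>  N_c > sqrt(45/32) ~ 1.186
nc_min = math.pi / math.sqrt(32 * math.pi**2 / 45)
```

The factors of $\pi$ cancel, so the bound reduces to the pure number $\sqrt{45/32}$.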
It has also been obtained in lattice QCD for $\mu^2<0$, without suffering from the sign problem. It is therefore meaningful to compare the result obtained in lattice QCD with the holographic result given here. \vspace{.3cm} For the probe approximation, we find $a=0$ since the critical line is independent of $\mu$. So we consider the RN background case, where the $\mu$-dependent critical curve is obtained from the equation $S_1=S_3$, and we find \beq\label{small-mu-b} a= {15 \over 32\pi^2 }= 0.0475 \, . \eeq \vspace{.3cm} In lattice QCD, the coefficient is obtained in the form \beq\label{alatt} a=\kappa N_c^2\, . \eeq For the confinement/deconfinement transition, many simulation results are available. We pick up several examples to compare with the holographic result, where $N_c$ is, however, assumed to be large. Examples of estimations of $\kappa$ from lattice QCD data in {the $2+1$ and $2+1+1$ flavor systems} are {$0.0066 \pm 0.0020$~\cite{Endrodi:2011gv},} $0.013 \pm 0.003$~\cite{Bonati:2014rfa}, $0.0135 \pm 0.002$~\cite{Bonati:2015bha}, $0.0149 \pm 0.0021$~\cite{Bellwied:2015rza}, and $0.020 \pm 0.004$~\cite{Cea:2015cya}. Comparing these with (\ref{small-mu-b}) and using (\ref{alatt}), we find $N_c=1.76$ {for $\kappa \sim 0.0153$}. This is consistent with the result $N_c\geq 1.2$ obtained in our back-reacted case. \vspace{.3cm} In addition, there is an estimation of $\kappa$ by using the Polyakov-loop extended Nambu--Jona-Lasinio (NJL) model with the mean-field approximation, {$\kappa_\chi = 0.017 \pm 0.001$~\cite{Kashiwa:2017yvy}}; it should be noted that this value is estimated from the isospin chemical potential, but $\kappa$ should be exactly the same in both cases, at least in the mean-field approximation.
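The numbers quoted in this comparison follow from (\ref{small-mu-b}) and (\ref{alatt}); the short computation below uses $\kappa\sim0.0153$ as the representative lattice value adopted in the text:

```python
import math

a_holo = 15 / (32 * math.pi**2)            # curvature coefficient, Eq. (small-mu-b)
kappa = 0.0153                             # representative lattice value from the text
nc_effective = math.sqrt(a_holo / kappa)   # inverting a = kappa * Nc^2, Eq. (alatt)
```

This reproduces $a\simeq0.0475$ and the effective value $N_c\simeq1.76$ quoted above.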
{In the case of the PNJL model, there are estimations of $\kappa$ for the confinement-deconfinement crossover line as $0.004 \pm 0.001$ and $0.003 \pm 0.001$; the former is evaluated from the Polyakov loop and the latter from the quark number holonomy~\cite{Kashiwa:2016vrl}.} \section{More about the periodicity} In the previous section, we discussed possible connections with lattice QCD simulations near $\mu^2=0$. In this section, we discuss deeper properties of the periodicity appearing in QCD. In full QCD, we should have the RW periodicity, as explained and demonstrated above. It is, however, well known that we should have the $2 \pi$ periodicity for $\mu_I/T=\theta$ instead of the Roberge-Weiss (RW) periodicity in the lattice QCD simulation when we fix gauge configurations at $\mu=0$ or take the pure gauge limit where dynamical quarks are not taken into account; the latter corresponds to the quenched limit. In this case, quantities such as the pressure, the entropy density, and the cumulants lose the RW periodicity, and thus the minimal period becomes $2\pi$, because the grand canonical partition function does not have the RW periodicity. It should be noted that we need the Polyakov-loop phase-flip to recover the RW periodicity in these limits, as discussed in Refs.~\cite{Doi:2017dyc,Doi:2017jad}. The $2\pi$ periodicity has been used in the calculation of the dual quark condensate. The actual definition of the dual quark condensate is given by \beq \Sigma^{(n)} = \int_0^{2\pi} \frac{d \phi}{2 \pi}\, \sigma(\phi)\, e^{i n \phi}\, , \eeq where $\sigma(\phi)$ is the chiral condensate with the phase of the boundary condition, $0 \le \phi \le 2 \pi$, which is related to the dimensionless imaginary chemical potential as \beq \phi = \theta + \pi. \eeq In the heavy quark mass regime, it has a clear relation with the Polyakov loop via the Dirac mode expansion~\cite{Bilgici:2008qy,Bilgici:2009tx}.
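The definition above is a Fourier projection over the boundary angle, and it can be evaluated numerically for any profile $\sigma(\phi)$; the sketch below uses a periodic Riemann sum, and the toy profile in the usage example is purely illustrative (not a QCD result):

```python
import math

def dual_condensate(sigma, n, samples=20000):
    # Sigma^(n) = int_0^{2 pi} dphi/(2 pi) sigma(phi) e^{i n phi},
    # approximated by a uniform sum over one full period
    total = complex(0.0, 0.0)
    for k in range(samples):
        phi = 2 * math.pi * k / samples
        total += sigma(phi) * complex(math.cos(n * phi), math.sin(n * phi))
    return total / samples
```

For instance, for a toy condensate $\sigma(\phi)=1+0.3\cos\phi$ the projection picks out $\Sigma^{(0)}=1$ and $\Sigma^{(1)}=0.15$, illustrating how only the $n$-th Fourier mode of the $\phi$ dependence survives.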
Since the dual quark condensate is calculated from the chiral condensate, this quantity may bridge the chiral and the Polyakov-loop dynamics in QCD. Details of the dual quark condensate have been discussed in the lattice QCD simulation~\cite{Bilgici:2008qy,Bilgici:2009tx}, the Dyson-Schwinger equation~\cite{Fischer:2009wc}, and QCD effective models~\cite{Kashiwa:2009ki,Xu:2011pz,Benic:2013zaa}. In principle, we can investigate the Polyakov-loop behavior at $\mu=0$ from the $\phi$-dependent chiral condensate even if the calculation of the Polyakov loop is difficult or impossible. In the strict probe limit of the holographic model, we face the same situation as the lattice QCD simulation in the quenched limit; we only have the trivial $2\pi$ periodicity. For the consistency between the holographic model and the lattice QCD simulation in both limits, we should reproduce the $2 \pi$ periodicity in addition to the RW periodicity. Also, it would be good to calculate the dual quark condensate in the holographic model to discuss the relation between the chiral condensate and the Polyakov loop. In addition, if we can understand how to control the boundary condition in the holographic model, we can make contact with ${\cal Z}_{N_c}$-twisted QCD, which is an interesting QCD-like theory from the viewpoint of the sign problem appearing in the lattice simulation. {One possibility to introduce the trivial $2\pi$ periodicity in the probe limit is imposing a $2\pi$-periodic form of $\theta$ in the mapping of $\mu$ from $A_0$: we need special care for the $2\pi$ periodicity issue of the dimensionless imaginary chemical potential.} \vspace{1cm} \section{Summary} We have studied here the phase structure and phase transition behaviors ranging from the real chemical potential to the imaginary one, using a bottom-up holographic model that was introduced to investigate color superconductivity in QCD.
From the general framework of QCD, one knows that the QCD partition function possesses a certain periodicity, the Roberge-Weiss (RW) periodicity, in the imaginary chemical potential region. Our interest was to see how the analytic continuation of the chemical potential works. To this end, we have computed the effective potential of the model by including the Kalb-Ramond field in the bulk. Unlike the previous studies based on the top-down approach, our bottom-up approach has the advantage that one can evaluate the effect of the back-reaction. As a result, we have observed the RW periodicity as well as the $2\pi$ periodicity appropriately. We have further investigated the behavior of the critical line near $\mu=0$ and examined the validity of our analysis. Our results have been compared with those obtained from lattice QCD and from effective models such as the Polyakov-loop extended NJL model. \vspace{.3cm} \section*{Acknowledgments} The authors would like to thank Takeshi Morita for helpful discussions. {K.K. is supported by the Grants-in-Aid for Scientific Research from JSPS (No. 18K03618).} \newpage \vspace{.5cm}
\section{Introduction} \label{Sec:intro} The bottom-quark production in hadron-hadron collisions is an important test of perturbative quantum chromodynamics (pQCD) calculations. Because of its large mass, $m_b \gg \Lambda_{QCD}$, the $b$-quark production cross section can be reliably calculated by including next-to-leading order (NLO) processes, especially at high center of mass energies~\cite{Mangano1992295}. The measurement of the $b\bar{b}$\xspace production cross section over a wide range of colliding energies in hadron-hadron collisions provides a meaningful test of pQCD theory calculations and a baseline measurement for studying modifications of heavy quark production in heavy ion collisions. Cross section measurements for bottom production in hadron-hadron collision experiments have been made from lower energy fixed-target experiments~\cite{PhysRevLett.82.41,PhysRevLett.74.3118,PhysRevD.73.052005} ($\sqrt{s} < 45$~GeV) up to collider energies ($\sqrt{s} > 100$~GeV). It was found that pQCD predictions match experimental results well at energies greater than $\sqrt{s} = 1$~TeV~\cite{PhysRevD.71.032001,Aaij:2010gn,Aaij:2013noa,Aad:2012jga,ALICE_bb,Khachatryan:2011mk,Chatrchyan:2011pw,Chatrchyan:2012hw}, but less so at lower energies. Results at the wide range of collision energies of the Relativistic Heavy Ion Collider explore an important gap between the low-energy fixed-target and TeV-energy regimes. Without displaced vertex $b$-tagging capability at PHENIX, $b$-quark production has been studied using unlike-sign dileptons from heavy quark decays~\cite{Adare:2014iwg}. The PHENIX and STAR collaborations have previously measured the bottom cross section in $p$+$p$\xspace collisions at $\sqrt{s} =200$~GeV using electron-hadron correlations~\cite{PhysRevLett.103.082002,Aggarwal:2010xp} and using dilepton invariant mass and momentum distributions~\cite{physRevC.96.064901,Adare2009313,PhysRevD.99.072003}.
Like-sign dimuons have previously been used to investigate the phenomenon of neutral $B$ meson oscillations in $e^+e^-$ collisions by the CLEO Collaboration~\cite{PhysRevLett.58.183}, the ARGUS Collaboration~\cite{Albrecht1987245}, and the ALEPH Collaboration~\cite{Buskulic1994441}, and in $p+\bar{p}$ collisions by the UA1 Collaboration~\cite{Albajar1987247}. In this measurement, we use the yield of like-sign dimuons along with the properties of neutral $B$ meson oscillation to determine the $b\bar{b}$\xspace cross section. The correlated like-sign pairs at high mass (5--10 GeV/$c^2$) are dominated by the semileptonic decay of open bottom pairs, while the other correlated sources (i.e., dijets or punch-through hadrons) amount to less than 10\%; they therefore provide a clean probe of $b\bar{b}$\xspace production. In the Standard Model, neutral $B$ meson oscillation is a result of higher order weak interactions that transform a neutral $B$ meson into its antiparticle, $B^{0}\rightarrow\bar{B}^{0}$, because the flavor eigenstates differ from the physical mass eigenstates of the meson-antimeson system~\cite{Glashow:1961tr,Abe:1999ds}. In the absence of oscillation, as shown in Fig.~\ref{fig:decays}(a), primary-primary decays, where the lepton's direct parent is the $B$ meson, can only produce unlike-sign lepton pairs, for example $b\rightarrow \bar{B}(B^-,\bar{B}^0,\bar{B}^0_{s},..)\rightarrow l^-$ and $\bar{b}\rightarrow B(B^+,B^0,B^0_{s},..)\rightarrow l^+$, while like-sign lepton pairs can result from a mixture of primary and secondary decays (decay chain). \begin{figure} \centering \includegraphics[width=1\linewidth]{BBdecays.pdf} \caption{\label{fig:decays} Example diagrams of lepton pair sources. (a) Like-sign primary-secondary or unlike-sign primary-primary dileptons from $B$ decay chain. (b) Primary-primary dileptons from neutral $B$ meson oscillation.
} \end{figure} However, if oscillation occurs, as is the case for neutral $B$ mesons ($B^0_d$ and $B^0_s$), the $\bar{B}^0$ meson can spontaneously change into a $B^0$ meson, as shown in Fig.~\ref{fig:decays}(b). Unless otherwise noted, we denote $B$($\bar{B}$) as a generic admixture of bottom (antibottom) hadrons with production ratios, from weak decays ($i.e.$ $Z\rightarrow b\bar{b}$), of: $B^+(B^-)=40.4\pm0.9\%$, $B^0(\bar{B}^0) = 40.4\pm0.9\%$, $B^0_s(\bar{B}^0_s) = 10.3\pm0.9\%$, and $b(\bar{b}$)-baryon $= 8.9\pm 1.5\%$~\cite{Agashe:2014kda}. The $B_c$ production ratio is negligible (0.2\%), less than the uncertainties associated with the bottom hadrons listed above. The time-integrated probability for a neutral $B$ meson to oscillate before it decays is defined as \begin{equation} \chi_{d/s} = \frac{1}{2} \frac{(\Delta m /\Gamma)^2}{1+(\Delta m/\Gamma)^2} \; , \end{equation} where $\Delta m$ is the mass difference between the heavy and light mass eigenstates and $\Gamma$ is the decay rate of the weak eigenstates. These values are found to be $\chi_{d} \approx 0.1874\pm 0.0018$ and $\chi_{s} \approx 0.499311\pm 0.000007$ for the $B^0_d$ and $B^0_s$ mesons, respectively~\cite{Agashe:2014kda}. This process can result in a like-sign dilepton event from a primary-primary decay, as shown in Fig.~\ref{fig:decays}(b). Given the large branching ratio of the $B\rightarrow\mu$ decay channel ($\approx 10.99\%$)~\cite{Agashe:2014kda}, like-sign dileptons from primary-primary decays provide a unique opportunity for extracting the $b\bar{b}$\xspace cross section. In this paper, we present measurements of the $b\bar{b}$\xspace production cross section through like-sign dimuon decays, together with the azimuthal opening angle between the muon pair and their \mbox{$p_T$}\xspace distributions, in $p$+$p$\xspace collisions at $\sqrt{s} = 510$~GeV at forward ($1.2<\!y<\!2.2$) and backward ($-2.2\!<y\!<-1.2$) rapidities.
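The mixing probability $\chi$ and its average over the bottom-hadron admixture can be sketched numerically. The ratios $x_d=\Delta m_d/\Gamma_d$ and $x_s=\Delta m_s/\Gamma_s$ below are assumed PDG-like inputs (the text quotes $\chi_d$ and $\chi_s$ directly, not these ratios), while the production fractions are those listed above:

```python
def chi_mix(x):
    # time-integrated oscillation probability for x = Delta m / Gamma
    return 0.5 * x**2 / (1.0 + x**2)

# assumed PDG-like mixing ratios (illustrative inputs, not quoted in the text)
x_d, x_s = 0.770, 26.8
# production fractions of B0_d and B0_s quoted in the text
f_d, f_s = 0.404, 0.103
chi_bar = f_d * chi_mix(x_d) + f_s * chi_mix(x_s)
```

With these inputs one recovers $\chi_d\approx0.186$ and $\chi_s\approx0.499$, close to the quoted values, and an admixture-averaged $\bar{\chi}\approx0.127$ that controls how often a primary-primary pair emerges like-sign.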
The azimuthal opening angle and pair \mbox{$p_T$}\xspace distributions are compared to distributions generated using {\sc pythia6} with a parton-shower ({\sc ps}) model~\cite{PYTHIA6}. The model approximates the correction to all higher orders (almost next-to-leading-log) for $b\bar{b}$\xspace production, which includes flavor creation, flavor excitation, and gluon splitting. The extrapolated total cross section, using {\sc ps pythia6}~\cite{PYTHIA6}, {\sc pythia8}~\cite{SJOSTRAND2015159}, and MC$@$NLO~\cite{Frixione:2002ik} calculations, is also presented and compared to a pQCD calculation. The paper is organized as follows: The PHENIX apparatus is described in Sec.~\ref{sec:apparatus}. The data samples used for this analysis and the analysis procedure are presented in Sec.~\ref{sec:dataAna}. The results are presented and discussed in Sec.~\ref{sec:results}. The summary and conclusions are presented in Sec.~\ref{sec:summary}. \section{Experimental Setup} \label{sec:apparatus} \begin{figure}[htp!] \centering \includegraphics[width=0.99\linewidth,trim={0 10 0 413},clip]{Phenix_2012.pdf} \caption{\label{fig:Detector} A side view of the PHENIX detector, concentrating on the muon arm instrumentation.} \end{figure} A complete description of the PHENIX detector can be found in Ref.~\cite{Adcox:2003p2584}. We briefly describe here only the detector subsystems used in these measurements. The relevant systems, which are shown in Fig.~\ref{fig:Detector}, include the PHENIX muon spectrometers covering forward and backward rapidities and the full azimuth. Each muon spectrometer comprises a hadronic absorber, a magnet, a muon tracker (MuTr), and a muon identifier (MuID). The absorbers comprise layers of copper, iron, and stainless steel and have about 7.2 interaction lengths. Following the absorber in each muon arm is the MuTr, which comprises three stations of cathode strip chambers in a radial magnetic field with an integrated bending power of 0.8~T$\cdot$m.
The MuID comprises five alternating steel absorbers and Iarocci tubes. The composite momentum resolution, $\delta p/p$, of particles in the analyzed momentum range is about 5\%, independent of momentum and dominated by multiple scattering. Muon candidates are identified by reconstructed tracks in the muon spectrometers. Another detector system relevant to this analysis is the beam-beam counter (BBC), consisting of two arrays of 64~\v{C}erenkov counters, located on both sides of the interaction point and covering the pseudorapidity range $3.1<|\eta|<3.9$. The BBC system was used to measure the $p$+$p$\xspace collision vertex position along the beam axis ($z_{\rm vtx}$), with 2 cm resolution, and the initial collision time. It was also used to measure the beam luminosity and form a minimum-bias (MB) trigger. The MB trigger requires at least one hit in each BBC on either side of the interaction point. \section{Data Analysis} \label{sec:dataAna} \subsection{Data set and quality cuts} \label{subsec:muid} The data set for this analysis was collected by PHENIX during the 2013 $p$+$p$\xspace run at $\sqrt{s} = 510$~GeV. Events, in coincidence with the MB trigger, containing a muon pair within the acceptance of the spectrometer are selected by the level-1 dimuon trigger (MuIDLL1-2D), which requires that at least two tracks penetrate through the MuID to its last two layers. After applying a vertex cut of $|z_{\rm vtx}| < 30$ cm and extensive quality assurance checks, the remaining data correspond to $3.02\times10^{12}$ MB events or to an integrated luminosity of 94.4~pb$^{-1}$. A set of cuts was used to select good muon candidates and improve the signal-to-background ratio. Hits in the MuTr are used to make MuTr tracks, and hits in the MuID are used to make MuID roads. The MuTr track is required to have more than 9 hits out of the maximum possible of 16, while the MuID road is required to have more than 6 hits out of the maximum possible of 10.
An additional $\chi^2$ cut, calculated from the difference between the measured hit positions of the track and the subsequent fit, is applied to each MuTr track. MuTr tracks are then projected to the MuID at the first MuID gap and matched to MuID roads by applying cuts on maximum position and angle differences. Muon candidates are required to have a $p_T$ greater than 1~GeV/$c$. This cut improves the sample quality by reducing background from pions and kaons. A minimum of 3.0~GeV/$c$ is applied to the single-muon momentum along the beam axis, $p_z$, which is reconstructed and energy-loss corrected at the collision vertex; this corresponds to the momentum cut effectively imposed by the absorbers. Muon candidates are further restricted to the rapidity range of $-2.2<y<-1.2$ for the south muon arm and $1.2 < y < 2.2$ for the north muon arm. Additionally, a cut is applied on the $\chi^2$ of the fit of the two candidate muon tracks to a common vertex near the interaction point. \subsection{Detector acceptance and reconstruction efficiency} \label{subsect:acc_eff} The product of the acceptance and reconstruction efficiency ($A\epsilon$) is determined using Monte Carlo (MC) simulation. The $A\epsilon$ is defined as the ratio of the number of dimuons reconstructed in the muon spectrometers to the number of dimuons generated in the same kinematic region. The kinematic distributions of {\sc pythia}\footnote{We used {\sc pythia6} (ver 6.421), with parton distribution functions given by CTEQ6LL. The following parameters were modified: MSEL = 0, MSUB(86) = 1, PARP(91) = 2.1, MSTP(51) = 10041, MDME(858,1) = 0, MDME(859,1) = 1, MDME(860,1) = 0, and Tune A.}~\cite{Field:2005sa} generated $p_T$, rapidity, and $b\bar{b}$\xspace mass shape were used as input into a full PHENIX {\sc geant4} simulation~\cite{AGOSTINELLI2003250}. The $p_T$ and rapidity distributions were tuned such that the reconstructed distributions match those of the 2013 data.
Variations within the uncertainties of the data are taken as a systematic uncertainty. The detector response in the simulation is tuned to a set of characteristics (dead and hot channel maps, gains, noise, etc.) that describes the performance of each detector subsystem. The simulated vertex distribution is also tuned to match that of the 2013 data. The simulated events are further embedded with real data to account for the effects of detector noise and other background tracks, and then are reconstructed in the same manner as the real data. A final cross check was done on the $J/\psi$ invariant yield after $A\epsilon$ correction, which matched well within statistical uncertainties in all $p_T$ and rapidity bins~\cite{PhysRevD.101.052006}. Figure~\ref{fig:AccEff} shows the $A\epsilon$ as a function of (a) dimuon mass $m_{\mu\mu}$, (b) dimuon opening angle $\delta\Phi$, and (c) dimuon $p_T$. The relative difference in $A\epsilon$ between the two spectrometers is due to different detection efficiencies of the MuTr and MuID systems and different amounts of absorber material. \begin{figure} \centering \includegraphics[width=1\linewidth]{accEff_bbshape.pdf} \caption{\label{fig:AccEff} $A\epsilon$ as a function of (a) invariant mass for like-sign dimuons, (b) dimuon azimuthal opening angle, and (c) dimuon \mbox{$p_T$}\xspace. Shown are the weighted averages of $\mu^+\mu^+$ and $\mu^-\mu^-$ distributions. } \end{figure} \subsection{Raw yield extraction} We measure like-sign dimuons, with both muons in the same muon arm, that have an invariant mass between 5 and 10~GeV/$c^2$. In this mass range, the correlated pairs in the dimuon spectrum are dominated by the semileptonic decay of open bottom pairs, either from the primary-secondary decay chain as shown in Fig.~\ref{fig:decays}(a) or from primary-primary pairs produced via neutral $B$ meson oscillation as shown in Fig.~\ref{fig:decays}(b). Dileptons from the Drell-Yan process and quarkonia decays can only yield unlike-sign pairs.
$D$ mesons can produce like-sign pairs through their decay chain. For example, $c\rightarrow D^+ \rightarrow \mu^+ + \mbox{anything}$, while the other charm quark decays as $\bar{c} \rightarrow D^- \rightarrow K^+ + \mbox{anything} \rightarrow \mu^+\nu_{\mu}$. However, in the mass range of interest the like-sign pairs from $D$ mesons are negligible. The contribution from neutral $D$ meson oscillation to the like-sign signal is expected to be very small because the oscillation probability is less than $10^{-2}$~\cite{PhysRevLett.110.101802}; therefore, it is not included. \subsubsection{Correlated background} An additional contribution to the correlated pairs could originate from correlated sources such as dijets or punch-through hadrons. Hadrons (particularly $\pi^\pm$ and $K^\pm$) can punch through to the last gap of the MuID or decay to muons, creating a background to the correlated like-sign signal. These contributions are estimated using MC simulation by determining the \mbox{$p_T$}\xspace-dependent survival probability that a hadron will traverse the muon arm detectors and applying it to {\sc pythia} generated dihadron pairs to get the yield expected at the back of the muon arm detectors. $\pi^\pm$ and $K^\pm$ are generated with {\sc pythia}\footnote{Non-default parameters were used in the Multiparton Interaction (MPI) ``Tune-A'' {\sc ps pythia6} simulation for hadron and jet production. The following parameters were modified: MSEL = 1, PMAS(5,1) = 4.1, PYTUNE 100, and PARP(90) = 0.25.}~\cite{Field:2005sa,PhysRevD.99.072003,PhysRevD.84.012006} and then run through the PHENIX detector simulation chain to determine a $p_T$-dependent probability that the hadrons penetrate the last gap of the MuID. To get a better estimate of the survival probability, the hadron simulation is run using two different hadron interaction packages for {\sc geant}: {\sc fluka} and {\sc geisha}~\cite{Brun:1994aa,PhysRevC.86.024909}.
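The weighting step described above can be sketched as follows. Both the survival-probability shape and the generated pairs here are hypothetical placeholders (the real analysis uses the $p_T$-dependent probabilities obtained from the {\sc geant} simulation), so this only illustrates how per-pair weights enter the background spectrum:

```python
import math

def survival_prob(pt):
    # Hypothetical pT-dependent probability that a hadron penetrates to
    # the last MuID gap (placeholder shape, NOT the simulated curve).
    return 0.002 * (1.0 - math.exp(-pt / 2.0))

def weighted_pair_yield(pairs):
    # Each generated dihadron pair enters the background spectrum with a
    # weight equal to the product of the two hadrons' survival probabilities.
    return sum(survival_prob(pt1) * survival_prob(pt2) for pt1, pt2 in pairs)

# Toy generated dihadron pairs (pT1, pT2) in GeV/c:
pairs = [(1.5, 2.0), (3.0, 1.2), (2.5, 2.5)]
yield_weighted = weighted_pair_yield(pairs)
```

Because both hadrons of a pair must survive, the pair weight is quadratically suppressed, which is why the hadronic contribution stays below 10\% of the correlated signal.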
Figure~\ref{fig:Jet_sim} shows the simulated invariant mass spectra from the irreducible background, which are fitted with an exponential function of the form $\exp(a + b m + c m^2)$ between 5 and 10 GeV/$c^2$, where $m$ is the invariant mass and $a$, $b$, and $c$ are fit parameters. The average of the indicated results from {\sc geisha} and {\sc fluka} is used to subtract the hadronic background from the like-sign pairs, while the difference is considered as a systematic uncertainty. \begin{figure} \centering \includegraphics[width=1\linewidth]{hbMass.pdf} \caption{\label{fig:Jet_sim} Like-sign invariant mass distribution from jet background simulation in the north and south arms. The solid lines are fits to the data with an exponential function between 5 and 10 GeV/$c^2$, while the dashed lines represent the averages of the resulting fits. } \end{figure} The invariant mass distribution for like-sign pairs is then constructed from {\sc pythia} generated dihadron pairs within the same event and from mixed events, with each entry weighted by the survival probability. The event-mixing procedure is discussed in the next section. Just as with the data, the correlated like-sign signal is obtained by subtracting the mixed-event spectrum from the like-sign spectrum, providing the correlated like-sign signal due to dijets or punch-through hadrons. The sum of the $\pi$ and $K$ correlated like-sign signals is weighted based on their $p_T$-dependent cross sections~\cite{PhysRevD.98.032007, NuclPhysB.335.261}. Fake like-sign pairs due to charge misidentification, as well as like-sign pairs formed from Drell-Yan or quarkonia decay muons together with muon-decayed or punch-through hadrons, were also studied and found to be negligible.
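Because the fit function is the exponential of a quadratic, the fit can equivalently be performed as a quadratic polynomial fit to the logarithm of the counts. A minimal sketch on synthetic data (the coefficients are arbitrary illustrative values, not the ones obtained from the {\sc geisha} or {\sc fluka} spectra):

```python
import numpy as np

# Synthetic background spectrum following exp(a + b*m + c*m^2):
a_true, b_true, c_true = 2.0, -0.8, 0.02   # arbitrary illustrative values
m = np.linspace(5.0, 10.0, 20)             # invariant mass, GeV/c^2
counts = np.exp(a_true + b_true * m + c_true * m**2)

# exp(a + b*m + c*m^2) is a quadratic in log(counts); np.polyfit returns
# coefficients from the highest degree down: [c, b, a].
c_fit, b_fit, a_fit = np.polyfit(m, np.log(counts), 2)
```

On noisy data a direct nonlinear fit weighted by the bin uncertainties would be preferred; the log-space fit above simply makes the functional form transparent.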
\subsubsection{Uncorrelated background} The uncorrelated pair contribution is estimated using an event-mixing technique~\cite{Crochet2002564}, where like-sign pairs are constructed by pairing muons in the current event with those of the same sign and same arm in previous events with a $z$-vertex position within 2 cm. The mixed-event pairs ($N_{++}^{BG}$ and $N_{--}^{BG}$) form the uncorrelated background spectrum, which is normalized to the foreground ($N_{++}^{FG}$ and $N_{--}^{FG}$) using a normalization factor ($\sqrt{N_{++}^{FG}N_{--}^{FG}}/\sqrt{N_{++}^{BG}N_{--}^{BG}}\;$). The normalization factor requires that the integrated counts from event mixing equal those from the like-sign spectrum in the low-mass region, where the correlated pairs are expected to be negligible~\cite{Crochet2002564}. The normalized like-sign pairs from event mixing are given as: \begin{equation} \label{eq:Nbckd} N_{\pm\pm}^{BG} = \left( N_{++}^{BG} + N_{--}^{BG} \right) \frac{\sqrt{N_{++}^{FG}N_{--}^{FG}}}{\sqrt{N_{++}^{BG}N_{--}^{BG}}}. \end{equation} However, the specific range where the signal of interest is negligible is not well known, and the average of normalization factors over five mass ranges (0.6--2.6~GeV/$c^2$, 1.0--2.0~GeV/$c^2$, 1.6--3.2~GeV/$c^2$, 2.6--3.8~GeV/$c^2$, and 0.6--4.2~GeV/$c^2$) is used. The correlated like-sign signal ($N_{\pm\pm}^{\rm cor}$) is then isolated by subtracting the mixed-event spectrum ($N_{\pm\pm}^{BG}$) from the ``foreground'' like-sign pairs ($N_{\pm\pm}^{FG}$) according to the following, \begin{equation} \label{eq:Ncorr} N_{\pm\pm}^{\rm cor} = N_{\pm\pm}^{FG}-N_{\pm\pm}^{BG} . \end{equation} To further improve the normalization process, the $b\bar{b}$\xspace invariant mass distribution shape from the {\sc ps pythia6} simulation is utilized. This is done by normalizing the integral of the {\sc ps pythia6} distribution to the result of Eq.~(\ref{eq:Ncorr}), over the signal mass range 5--10~GeV/$c^2$.
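In code, Eqs.~(\ref{eq:Nbckd}) and (\ref{eq:Ncorr}) amount to scaling the summed mixed-event spectrum by the geometric-mean ratio of foreground to mixed-event pair counts and subtracting. A minimal sketch with hypothetical histogram arrays; for simplicity the ratio is computed from the full histograms, whereas the real analysis integrates over the low-mass normalization windows:

```python
import numpy as np

def correlated_signal(fg_pp, fg_mm, bg_pp, bg_mm):
    # Normalization factor: sqrt(N++FG * N--FG) / sqrt(N++BG * N--BG),
    # here computed from the full histograms for simplicity (the real
    # analysis integrates over low-mass normalization windows).
    norm = np.sqrt(fg_pp.sum() * fg_mm.sum()) / np.sqrt(bg_pp.sum() * bg_mm.sum())
    n_fg = fg_pp + fg_mm                 # foreground like-sign spectrum
    n_bg = (bg_pp + bg_mm) * norm        # normalized mixed-event spectrum
    return n_fg - n_bg                   # correlated like-sign signal

# Hypothetical like-sign mass histograms (same binning for all four):
fg_pp = np.array([120.0, 90.0, 60.0, 35.0, 20.0])
fg_mm = np.array([115.0, 85.0, 55.0, 40.0, 25.0])
bg_pp = np.array([1100.0, 800.0, 500.0, 300.0, 180.0])
bg_mm = np.array([1050.0, 820.0, 520.0, 310.0, 190.0])
n_cor = correlated_signal(fg_pp, fg_mm, bg_pp, bg_mm)
```

If the foreground were purely uncorrelated (a scaled copy of the mixed-event spectrum), this construction would return zero in every bin, which is the defining property of the normalization.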
The integral of the normalized $b\bar{b}$\xspace mass distribution is then subtracted from the background distribution in Eq.~(\ref{eq:Nbckd}) for each of the background ranges, and the normalization factor is recalculated. The second step is then repeated until the value of the mixed-events normalization factor converges. \begin{figure} \includegraphics[width=1\linewidth]{normdNdm5.pdf} \caption{\label{fig:spectra1} Invariant mass spectra for like-sign pairs from the same event ($N_{\pm\pm}^{FG}$, solid black points), event-mixing ($N_{\pm\pm}^{BG}$, red band), and the difference between the two ($N_{\pm\pm}^{\rm cor}$, empty blue pluses) for the (a) north arm and (b) south arm. These distributions are corrected with $A\epsilon$. The solid green triangles show the {\sc pythia} $b\bar{b}$\xspace shape. } \end{figure} Figure~\ref{fig:spectra1} shows the resulting distributions of $N_{\pm\pm}^{FG}$, $N_{\pm\pm}^{BG}$, and $N_{\pm\pm}^{\rm cor}$ as a function of the invariant mass of the pairs. These distributions are corrected with $A\epsilon$. To extract the $b\bar{b}$\xspace distribution as a function of the azimuthal opening angle between muon pairs ($\Delta\phi$\xspace) and their \mbox{$p_T$}\xspace, the normalization factors obtained previously are used to normalize the $\Delta\phi$\xspace and \mbox{$p_T$}\xspace mixed-event distributions, which are then subtracted from the $\Delta\phi$\xspace and \mbox{$p_T$}\xspace foreground distributions, respectively. \subsection{Systematic uncertainties} Table~\ref{tab:sysUncer} summarizes the systematic uncertainties.
Evaluated as standard deviations, they are divided into three categories based upon the effect each source has on the measured results: \begin{description} \item[Type-A] Point-to-point uncorrelated uncertainties that allow the data points to move independently with respect to one another and are added in quadrature with statistical uncertainties; however, no systematic uncertainties of this type are associated with this measurement. \item[Type-B] Point-to-point correlated uncertainties, which allow the data points to move coherently within the quoted range to some degree. These systematic uncertainties include a 4\% uncertainty from the MuID tube efficiency and an 8.2\% (2.8\%) uncertainty from the MuTr overall efficiency for the north (south) arm. The systematic uncertainty associated with $A\epsilon$ includes the uncertainties on the input \mbox{$p_T$}\xspace and rapidity distributions, which are extracted by varying these distributions over the range of the statistical uncertainty of the data, yielding 4.4\% (5.0\%) for the north (south) arm. To be consistent with the real data analysis, a trigger emulator was used to match the MuIDLL1-2D trigger for the data. The efficiency of the trigger emulator was studied by comparing the dimuon mass spectrum obtained by requiring that the dimuon pass the trigger emulator to that obtained by requiring that the dimuon pass the MuIDLL1-2D trigger, which resulted in a 1.5\% (2.0\%) uncertainty for the north (south) arm. An additional 11.2\% (8.8\%) systematic effect for the north (south) arm was also considered to account for the difference in the azimuthal angle distribution between data and simulation. The systematic uncertainty in signal extraction arises from the normalization of mixed events, which could come from the choice of the normalization ranges in the mixed events or from the {\sc pythia} $b\bar{b}$\xspace shape used to guide the signal extraction.
A 1.9\% uncertainty on the mixed-events normalization was observed from using each of the five normalization windows by itself as well as the different combinations of these normalization windows. The {\sc pythia} $b\bar{b}$\xspace shape is the sum of three subprocesses: flavor creation, flavor excitation, and gluon splitting. A maximum variation of 3.1\% on the extracted signal was observed from choosing each of the subprocesses by itself as the source of the $b\bar{b}$\xspace shape. Added in quadrature, they result in a 3.6\% uncertainty on signal extraction. The systematic uncertainty associated with correlated backgrounds could come from the input \mbox{$p_T$}\xspace distribution, differences between {\sc geisha} and {\sc fluka}, and differences between {\sc geant3} and {\sc geant4}. The {\sc pythia} \mbox{$p_T$}\xspace distributions of $\pi^\pm$ and $K^\pm$ were compared separately to fits of UA1 data~\cite{PhysRevD.98.032007, NuclPhysB.335.261}, and an overall difference of 18\% was observed. Differences of up to 30\% and 20\% between {\sc fluka} and {\sc geisha}, see Fig.~\ref{fig:Jet_sim}, were observed in the north and south arms, respectively. An additional 15\% was considered to account for the difference between {\sc geant3} and {\sc geant4}. Added in quadrature, all three sources give an overall effect on the hadronic background of 39\% (31\%) for the north (south) arm for the mass and $\Delta\phi$ distributions. For the \mbox{$p_T$}\xspace distribution, a \mbox{$p_T$}\xspace-dependent correction was used for the effect of the input \mbox{$p_T$}\xspace spectra, and the other two sources gave an overall effect on the hadronic background of 34\% (25\%) for the north (south) arm.
To extract the systematic uncertainty associated with the cross section (or invariant yields) for all distributions (mass, $\Delta\phi$, and \mbox{$p_T$}\xspace), the hadronic background was varied between the limits listed above, which resulted in an overall systematic uncertainty of 5.1\% (4.5\%) for the north (south) arm. The Type-B systematic uncertainties are added in quadrature and amount to 16.4\% (12.8\%) for the north (south) arm. They are shown as shaded bands on the associated data points. \item[Type-C] An overall (global) normalization uncertainty of 10\% was assigned for the BBC cross section and efficiency uncertainties~\cite{PhysRevLett.91.241803}, which allows the data points to move together by a common multiplicative factor. \end{description} \begin{table}[ht!] \caption{\label{tab:sysUncer} Systematic uncertainties associated with the differential cross section calculation in the north (south) arm.} \begin{ruledtabular} \begin{tabular}{ccc} Type & Origin & North (South)\\ \hline B & MuID hit efficiency & 4.0\% (4.0\%)\\ B & MuTr hit efficiency & 8.2\% (2.8\%)\\ B & $A\epsilon$\xspace \mbox{$p_T$}\xspace and $y$ input distributions & 4.4\% (5.0\%)\\ B & $A\epsilon$\xspace trigger emulator & 1.5\% (2.0\%)\\ B & $A\epsilon$\xspace $\phi$ distribution & 11.2\% (8.8\%)\\ B & Signal extraction & 3.6\% (3.6\%)\\ B & Correlated background & 5.1\% (4.5\%)\\ B & Quadratic sum & 16.4\% (12.8\%)\\ C & MB trigger efficiency & 10\%\\ \end{tabular} \end{ruledtabular} \end{table} \section{Results and Discussion} \label{sec:results} \subsection{Differential cross section} The differential yield and cross section of $B$ meson pairs decaying into like-sign dimuons as a function of mass are calculated according to the following relations, \begin{equation} \label{eq:invYield} \frac{d^2N}{dydm}= \frac{1}{\Delta y\Delta m} \frac{N_{\mu\mu}}{A\epsilon(y,m)}\frac{\epsilon_{\rm BBC}^{\rm MB}}{\epsilon^{\rm BBC}N_{\rm MB}} \; , \end{equation} \begin{equation} \label{eq:diff_xs}
\frac{d^2\sigma}{dydm}= \frac{d^2N}{dydm}\frac{\sigma_{\rm BBC}^{pp}}{\epsilon_{\rm BBC}^{\rm MB}} \; , \end{equation} where $N_{\mu\mu}/A\epsilon(y,m)$ is the yield of like-sign dimuons from $B$ meson decay normalized by $A\epsilon(y,m)$ in each $y$ and $m$ bin with widths $\Delta y$ and $\Delta m$, respectively. The yields of the north and south arms are calculated independently and are consistent within statistical uncertainties; therefore, the weighted average~\cite{EPJC.74.3004} is used in the differential yield calculation. $\sigma_{\rm BBC}^{pp} = 32.5 \pm 3.2$ mb is the cross section seen by the BBC in $p$+$p$\xspace collisions at $\sqrt{s} = 510$~GeV, which is determined from the van der Meer scan technique~\cite{PhysRevLett.106.062001}. $\epsilon_{\rm BBC}^{\rm MB} = 0.53 \pm 0.02$ is the fraction of inelastic $p$+$p$\xspace collisions recorded by the BBC~\cite{PhysRevD.79.012003}. $\epsilon^{\rm BBC}=0.91 \pm 0.04$ is the efficiency of the MB trigger for events containing a hard scattering~\cite{PhysRevD.101.052006}. $N_{\rm MB}$ is the number of MB events. The differential cross section of like-sign dimuons from $B$ meson decay is shown in Fig.~\ref{fig:diffXsec_BBbar_osc}. The gray shaded bands represent the weighted average of the quadratic sum of type-B systematic uncertainties of the north and south arms, $\approx$10.1\%. The average is weighted based on the statistical uncertainties of each arm. In addition to the type-B systematic uncertainties, we have a 10\% global systematic uncertainty from the BBC cross section and efficiencies~\cite{PhysRevLett.91.241803}. \begin{figure}[htp!] \includegraphics[width=1\linewidth]{d2sigdydm_wtAve.pdf} \caption{\label{fig:diffXsec_BBbar_osc} Differential cross section of like-sign dimuons from $B$ meson decay. The error bars represent the statistical uncertainties, and the gray shaded bands represent the quadratic sum of type-B systematic uncertainties.
} \end{figure} The total cross section, $d\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu^\pm\mu^\pm }/dy$, within the mass range $5 < m_{\mu^\pm\mu^\pm} < 10$ GeV/$c^2$ and the rapidity and \mbox{$p_T$}\xspace ranges $1.2<|y|<2.2$ and \mbox{$p_T$}\xspace $>$ 1~GeV/$c$, respectively, is extracted by integrating $d^2\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu^\pm\mu^\pm }/dydm$, which results in $d\sigma_{b\bar{b}\rightarrow\mu^\pm\mu^\pm }/dy~(1.2<|y|<2.2,~p_T>1~\mbox{GeV}/c,~5<m_{\mu^\pm\mu^\pm}<10~\mbox{GeV}/c^2) = 0.16\pm0.01 ~\mbox{(stat)}\pm0.02~\mbox{(type-B syst)} \pm 0.02~(\mbox{global syst})$ nb. To obtain the differential cross section of all $B$ meson pairs that decay into dimuons, regardless of the muon pair charge, the differential cross section of like-sign dimuons from $B$ meson decay is scaled by the ratio of the total number of all $B$ meson pairs that decay into dimuons, regardless of their sign, to those of like-sign. For clarity, the process is divided into two separate steps defined by two variables, $\alpha(m)$ and $\beta$, both of which depend on the signal from like-sign dimuons due to oscillation. The ratio of like-sign dimuons at mass $m$ from primary-primary decays due to $B^0$ oscillation to like-sign muon pairs resulting from primary-primary or a mixture of primary-secondary decays is defined as: \begin{equation}\label{EQ:ALPHA} \alpha(m)=\frac{b\bar{b}\rightarrow B\bar{B} \rightarrow\mu^{\pm}\mu^{\pm} \mbox{ (osc)} }{b\bar{b}\rightarrow B\bar{B}\rightarrow \mu^{\pm}\mu^{\pm} }, \end{equation} which is calculated in the mass range $5 < m < 10~\mbox{GeV}/c^2$ at $1.2<|y|<2.2$ and \mbox{$p_T$}\xspace $>$ 1 GeV/$c$ and extrapolates the correlated like-sign signal to an open bottom signal from oscillation, $N_{\pm\pm}^{osc}$.
The $\alpha(m)$ is obtained using open bottom events from three model calculations: {\sc mc@nlo} (ver 4.10), {\sc ps pythia6} (ver 6.421) and {\sc pythia8} (ver 8.205) as shown in Fig.~\ref{fig:alpha}. The red line is a second-order polynomial fit with $\chi^2/ndf$ of 3.8/4. The shaded boxes represent the uncertainty based on the three model calculations. \begin{figure} \includegraphics[width=1\linewidth]{alpha_nopTcut.pdf} \caption{\label{fig:alpha} Fraction of like-sign dimuons from neutral $B$ meson oscillation ($\alpha(m)$) from {\sc mc@nlo} (blue points) , {\sc ps pythia6} (magenta points) and {\sc pythia8} (green points) within the PHENIX muon-arms acceptance. Cyan data points are the RMS average of the three model calculations. The shaded boxes are the associated errors based on the three model calculations. The red curve is a second-order polynomial fit to the RMS data points. } \includegraphics[width=1\linewidth]{d2sigdydmFinal.pdf} \caption{\label{fig:diffXsec_BBbar} Differential cross section of all dimuons from $B$ meson decay. The error bars represent the statistical uncertainties, and the gray shaded band represents the quadratic sum of type-B systematic uncertainties. } \end{figure} \begin{figure*} \begin{minipage}{0.75\linewidth} \includegraphics[width=0.99\linewidth]{worlddataXS.pdf} \end{minipage} \begin{minipage}{0.23\linewidth} \caption{\label{fig:totXsec} (a) Bottom cross section, $\sigma_{b\bar{b}}$ as a function of $\sqrt{s}$. The curves are NLO pQCD calculation~\cite{Vogt} with the dashed lines being error bands obtained by varying the renormalization scale, factorization scale and bottom quark mass. (b) Ratio of data to NLO pQCD calculation. } \end{minipage} \end{figure*} \begin{figure*}[ht] \begin{minipage}{0.48\linewidth} \includegraphics[width=0.99\linewidth]{dNddphi_bbshape.pdf} \caption{\label{fig:dndphi_theory1} Like-sign $\mu\mu$ yield as a function of the azimuthal opening angle. 
The data are compared to the distributions calculated with {\sc ps pythia6}. The model calculations are normalized to the data. For {\sc ps pythia6} the $\mu\mu$ pair yield is broken down into contributions from flavor creation, flavor excitation, and gluon splitting. } \end{minipage} \hspace{0.4cm} \begin{minipage}{0.48\linewidth} \includegraphics[width=0.99\linewidth]{dNdpt_bbshape.pdf} \caption{\label{fig:dndpt_theory} Like-sign $\mu\mu$ yield as a function of the pair \mbox{$p_T$}\xspace. The data are compared to the distributions calculated with {\sc ps pythia6}. The model calculations are normalized to the data. For {\sc ps pythia6} the $\mu\mu$ pair yield is broken down into contributions from flavor creation, flavor excitation, and gluon splitting.} \end{minipage} \end{figure*} $\beta$ is the ratio of primary-primary like-sign dimuons due to $B^0$ oscillation to all $B$ meson pairs that decay into primary-primary dimuons with all possible muon charge pairs ($++$, $--$, and $+-$). $\beta$ converts the number of muon pairs from oscillation into all $B$ meson pairs and is defined as: \begin{equation}\label{EQ:BETA} \beta = \frac{b\bar{b}\rightarrow B\bar{B} \rightarrow \mu^{\pm}\mu^{\pm} \mbox{ (osc)}}{b\bar{b}\rightarrow B\bar{B} \rightarrow \mu\mu} . \end{equation} The value of $\beta$ is $0.22\pm0.01$, which is the RMS value calculated from the three model simulations described above. The uncertainty on $\beta$ is the standard deviation of the three model calculations, which represents the model-dependent uncertainty. The differential cross section of all $B$ meson pairs that decay into a primary-primary dimuon, regardless of the muon pair charge, is then calculated as follows: \begin{equation}\label{BB_eqn} \frac{d^2\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu\mu }}{dydm}=\frac{\alpha(m)}{\beta}\frac{d^2\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu^{\pm}\mu^{\pm}}}{dydm}.
\end{equation} Figure~\ref{fig:diffXsec_BBbar} shows the differential cross section of all $B$ meson pairs that decay into a primary-primary dimuon. Additional type-B systematic uncertainties associated with this measurement, due to $\alpha(m)$ and $\beta$ and amounting to 1.9\% and 4.5\%, respectively, are included. This brings the type-B systematic uncertainties on $d^2\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu\mu }/dydm$ to 11.2\%. The total cross section, $d\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu\mu }/dy$, within the mass range $5.0 < m_{\mu\mu} < 10.0$ GeV/$c^2$ and the rapidity and \mbox{$p_T$}\xspace ranges $1.2<|y|<2.2$ and \mbox{$p_T$}\xspace $>$ 1~GeV/$c$, respectively, is extracted by integrating $d^2\sigma_{b\bar{b}\rightarrow B\bar{B}\rightarrow\mu\mu }/dydm$, which results in $d\sigma_{b\bar{b}\rightarrow\mu\mu }/dy~(1.2<|y|<2.2,~p_T>1~\mbox{GeV}/c,~5<m_{\mu\mu}<10~\mbox{GeV}/c^2) = 0.31\pm0.01 ~\mbox{(stat)}\pm0.04~\mbox{(type-B syst)} \pm 0.03~(\mbox{global syst})$ nb. \subsection{Total cross section} To extrapolate from the $b\bar{b}$\xspace differential cross section in the muon decay channel within the acceptance of the muon arms to a total $b\bar{b}$\xspace cross section, the differential cross section is scaled by the ratio of $B$ pairs that decay to dimuons within the measured region to those over the entire kinematic range. This method is similar to that found in Ref.~\cite{ZEUS}.
The total cross section, $\sigma_{b\bar{b}}$, is extrapolated and corrected for the semileptonic branching ratio in the following manner: \begin{equation} \sigma_{b\bar{b}} = \frac{d\sigma_{b\bar{b}\rightarrow\mu\mu}}{dy} \times \frac{1}{scale} \times \frac{1}{(BR_{B\rightarrow\mu})^{2}} \; , \end{equation} where $BR_{B\rightarrow\mu}$ is the branching ratio of $B$ to muon through the primary decay channel (=10.99$\%$), and $scale$ is defined as: \begin{equation}\label{eq_scale} scale = \frac{B\bar{B}\rightarrow\mu\mu (1.2 < |y| < 2.2 ; p_T>1; 5<m_{\mu\mu}<10)}{B\bar{B}\rightarrow\mu\mu (all)} , \end{equation} which is a factor that converts from the visible kinematic region to the full phase space. The $scale$ factor is determined from {\sc pythia} and {\sc mc@nlo} simulations. It is taken as the average value, $1.96\times10^{-3}$, of the {\sc ps pythia6} (CTEQ6LL), {\sc ps pythia6} (CTEQ5M1), {\sc pythia8} (CTEQ6LL), and {\sc mc@nlo} (CTEQ5M) values listed in Table~\ref{tab:scale_sys}. \begin{table}[ht] \caption{\label{tab:scale_sys} Values of the scale factor as found using {\sc ps pythia6}~\protect\cite{PYTHIA6}, {\sc pythia8}~\protect\cite{SJOSTRAND2015159}, and MC$@$NLO~\protect\cite{Frixione:2002ik}. } \begin{ruledtabular} \begin{tabular}{lc} Simulation & Scale Factor \\ \hline {\sc pythia8} (CTEQ6LL) & 0.00210\\ {\sc ps pythia6} (CTEQ6LL) & 0.00207\\ {\sc ps pythia6} (CTEQ5M1) & 0.00255\\ {\sc mc@nlo} (CTEQ5M) & 0.00113\\ Average Value & 0.00196\\ \end{tabular} \end{ruledtabular} \end{table} The difference in the $scale$ factor due to the different models and parton distribution functions is considered to be a global type-C uncertainty, which amounts to 18.1\%. This results in a total cross section of $13.1 \pm 0.6~(\mbox{stat}) \pm 1.5~(\mbox{type-B syst}) \pm 2.7~(\mbox{global syst})~\mu$b.
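The extrapolation arithmetic can be verified directly from the numbers quoted above ($d\sigma_{b\bar{b}\rightarrow\mu\mu}/dy = 0.31$ nb, the four model scale factors, and $BR_{B\rightarrow\mu} = 10.99\%$); the tolerances below only account for rounding of the published values:

```python
# Inputs quoted in the text (dsigma/dy converted from nb to microbarns):
dsigma_dy_ub = 0.31e-3                                # 0.31 nb
br_b_to_mu = 0.1099                                   # BR(B -> mu)
scale_factors = [0.00210, 0.00207, 0.00255, 0.00113]  # model scale factors

scale = sum(scale_factors) / len(scale_factors)       # average, ~1.96e-3
sigma_bb_ub = dsigma_dy_ub / scale / br_b_to_mu**2    # ~13.1 microbarns
```

The branching ratio enters squared because both $B$ mesons of the pair must decay to a muon.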
Type-B systematic uncertainties are from the differential cross section, while global uncertainties are the quadrature sum of the type-C uncertainty from the differential cross section and uncertainties arising from the extrapolation. The $\sigma_{b\bar{b}}$ measured at $\sqrt{s} = 510$~GeV is shown in Fig.~\ref{fig:totXsec} and compared to measurements from other experiments~\cite{Adare2009313,PhysRevLett.82.41,PhysRevLett.74.3118,PhysRevD.73.052005,Albajar1991121,ALICE_bb}. The solid line is the cross section from NLO pQCD calculations~\cite{Vogt}, and the dashed lines are error bands obtained by varying the renormalization scale, factorization scale, and bottom quark mass. At $\sqrt{s} = 510$~GeV, the NLO pQCD calculation predicts $\sigma_{b\bar{b}} = 11.5^{+6.5}_{-3.9}~\mu$b, which is consistent within uncertainties with the total cross section extrapolated from the current dimuon analysis. Figure~\ref{fig:totXsec} also shows the ratio of data to theory. \subsection{Azimuthal correlations and pair \mbox{$p_T$}\xspace} The like-sign $\mu\mu$ pair yield from $b\bar{b}$\xspace decays is shown in Fig.~\ref{fig:dndphi_theory1} and Fig.~\ref{fig:dndpt_theory} as a function of $\Delta\phi$ and pair \mbox{$p_T$}\xspace, respectively. The spectra are compared to model calculations based on {\sc ps pythia6} that are normalized by fitting the sum of the subprocesses to the data~\cite{PhysRevD.99.072003}. The generated pairs are filtered with the same kinematic cuts that are applied in the data analysis. The azimuthal opening angle distribution from {\sc ps pythia6} shows a similar pattern to that of the data, an increase up to $\approx$2.6 rad followed by a drop, and it is consistent with the data ($\chi^2/ndf{\approx}27/28$) when considering the quadrature sum of the statistical and systematic uncertainties.
The data show a steeper \mbox{$p_T$}\xspace dependence than that of {\sc ps pythia6}, but they are still consistent when considering the large statistical and systematic uncertainties. We note that flavor creation fits the data much better than any other subprocess, with $\chi^2/ndf{\approx}8.4/7$. These results show similar behavior to that observed at 200 GeV~\cite{PhysRevD.99.072003}, where the data favor a dominant mix of flavor creation and flavor excitation subprocesses over gluon splitting. \section{Summary and Conclusion} \label{sec:summary} In summary, we presented the first measurements of the differential cross section for dimuons from bottom quark-antiquark production in $p$+$p$\xspace collisions at $\sqrt{s}=510$ GeV, which we found to be: $d\sigma_{b\bar{b}\rightarrow \mu^\pm\mu^\pm}/dy = 0.16 \pm 0.01~(\mbox{stat}) \pm 0.02~(\mbox{syst}) \pm 0.02~(\mbox{global})$ nb. The analysis technique is based on the yield of high-mass correlated like-sign dimuons from parity-violating decays of $B$ meson pairs at forward and backward rapidities. Using a model-dependent extrapolation, the measured differential cross section is converted to a total cross section of $13.1 \pm 0.6~(\mbox{stat}) \pm 1.5~(\mbox{syst}) \pm 2.7~(\mbox{global})~\mu$b. This extrapolated total cross section is consistent with NLO pQCD calculations within uncertainties. This agreement with NLO pQCD calculations at $\sqrt{s}=510$ GeV is better than what was observed at 200 GeV~\cite{PhysRevD.99.072003}, possibly indicating a better match at higher energies. However, the measurement at $\sqrt{s}=200$ GeV used the unlike-sign pairs method and could be impacted by the presence of the Drell-Yan process and resonances. The azimuthal opening angle between the muons from $b\bar{b}$\xspace decays and the pair \mbox{$p_T$}\xspace distributions are compared to distributions generated using {\sc ps pythia6}~\cite{PYTHIA6}, which includes NLO processes.
While the data tend to have a wider azimuthal distribution than {\sc ps pythia6} calculations and present a steeper \mbox{$p_T$}\xspace distribution, both are still consistent within uncertainties with {\sc ps pythia6}, where flavor creation and flavor excitation subprocesses are dominant. This is similar to what was observed at 200 GeV~\cite{PhysRevD.99.072003}. \begin{acknowledgments} We thank the staff of the Collider-Accelerator and Physics Departments at Brookhaven National Laboratory and the staff of the other PHENIX participating institutions for their vital contributions. We acknowledge support from the Office of Nuclear Physics in the Office of Science of the Department of Energy, the National Science Foundation, Abilene Christian University Research Council, Research Foundation of SUNY, and Dean of the College of Arts and Sciences, Vanderbilt University (U.S.A), Ministry of Education, Culture, Sports, Science, and Technology and the Japan Society for the Promotion of Science (Japan), Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol{\'o}gico and Funda\c c{\~a}o de Amparo {\`a} Pesquisa do Estado de S{\~a}o Paulo (Brazil), Natural Science Foundation of China (People's Republic of China), Croatian Science Foundation and Ministry of Science and Education (Croatia), Ministry of Education, Youth and Sports (Czech Republic), Centre National de la Recherche Scientifique, Commissariat {\`a} l'{\'E}nergie Atomique, and Institut National de Physique Nucl{\'e}aire et de Physique des Particules (France), Bundesministerium f\"ur Bildung und Forschung, Deutscher Akademischer Austausch Dienst, and Alexander von Humboldt Stiftung (Germany), J. 
Bolyai Research Scholarship, EFOP, the New National Excellence Program ({\'U}NKP), NKFIH, and OTKA (Hungary), Department of Atomic Energy and Department of Science and Technology (India), Israel Science Foundation (Israel), Basic Science Research and SRC(CENuM) Programs through NRF funded by the Ministry of Education and the Ministry of Science and ICT (Korea). Physics Department, Lahore University of Management Sciences (Pakistan), Ministry of Education and Science, Russian Academy of Sciences, Federal Agency of Atomic Energy (Russia), VR and Wallenberg Foundation (Sweden), the U.S. Civilian Research and Development Foundation for the Independent States of the Former Soviet Union, the Hungarian American Enterprise Scholarship Fund, the US-Hungarian Fulbright Foundation, and the US-Israel Binational Science Foundation. \end{acknowledgments}
\section{Introduction} Recently, the speech community is seeing a significant trend of moving from deep neural network based hybrid modeling \cite{DNN4ASR-hinton2012} to end-to-end (E2E) modeling \cite{miao2015eesen, chan2016listen, prabhavalkar2017comparison, battenberg2017exploring, rao2017exploring, chiu2018state, Li18CTCnoOOV, he2019streaming, Li2020Developing} for automatic speech recognition (ASR). While hybrid models require disjoint optimization of separate constituent models such as the acoustic and language models, E2E ASR systems directly translate an input speech sequence into an output token (sub-words, or even words) sequence using a single network. Some widely used contemporary E2E approaches for sequence-to-sequence transduction are: (a) Connectionist Temporal Classification (CTC) \cite{graves2006connectionist, Graves-E2EASR}, (b) recurrent neural network Transducer (RNN-T) \cite{Graves-RNNSeqTransduction}, and (c) Attention-based Encoder-Decoder (AED) \cite{Attention-bahdanau2014, Attention-speech-chorowski2015, chan2016listen}. Among these three approaches, CTC was the earliest and can map the input speech signal to target labels without requiring any external alignments. However, it also suffers from the conditional frame-independence assumption. RNN-T extends CTC modeling by changing the objective function and the model architecture to remove the frame-independence assumption. Because of its streaming nature, RNN-T has received a lot of attention for industrial applications and has also managed to replace traditional hybrid models in some cases \cite{he2019streaming, Sainath19, Li2019RNNT, jain2019rnn}. AED is a general family of models that was initially proposed for machine translation \cite{bahdanau2014neural} but has shown success in other domains (including ASR \cite{Attention-bahdanau2014, Attention-speech-chorowski2015, chan2016listen}) as well. These models are not streaming by default, but there are several studies in that direction, such as monotonic chunkwise attention \cite{chiu2017monotonic} and triggered attention \cite{moritz2019triggered}.
The early AED models used RNNs as a building block for the encoder and decoder modules. We refer to them as RNN-AED in this study. More recently, the Transformer architecture with self-attention \cite{vaswani2017attention} has also become prevalent and is being used as a fundamental building block for encoder and decoder modules \cite{dong2018speech, zhou2018syllable, karita2019comparative}. We refer to such a model as Transformer-AED in this paper. Given the fast-evolving landscape of E2E technology, it is timely to compare the most popular and promising E2E technologies for ASR in the field, shaping the future research direction. This paper focuses on the comparison of the currently most promising E2E technologies, namely RNN-T, RNN-AED, and Transformer-AED, in both non-streaming and streaming modes. All models are trained with 65 thousand hours of Microsoft anonymized training data. As E2E models are data hungry, it is better to compare their power with such a large amount of training data. To the best of our knowledge, no such detailed comparison exists. In a recent work \cite{Sainath19}, the streaming RNN-T model was compared with the non-streaming RNN-AED. In \cite{chiu2019comparison}, streaming RNN-AED is compared with streaming RNN-T for long-form speech recognition. In \cite{karita2019comparative}, RNN-AED and Transformer-AED are compared in a non-streaming mode, with training data up to 960 hours. As industrial applications usually require the ASR service to run in a streaming mode, we put further effort into developing these E2E models in a streaming mode. While it has been shown in \cite{sainath2020streaming} that combining RNN-T and RNN-AED in a two-pass decoding configuration can surpass an industry-grade state-of-the-art hybrid model, this study shows that a single streaming E2E model, either RNN-T or Transformer-AED, can also surpass a state-of-the-art hybrid model \cite{li2020high, li2019improving}.
In addition to performing a detailed comparison of these promising E2E models for the first time, the other contributions of this paper are: 1) We propose a multi-layer context modeling scheme to explore future context with significant gains; 2) The cross entropy (CE) initialization is shown to be much more effective than CTC initialization to boost RNN-T models; 3) For streaming Transformer-AED, we show that chunk-based future context integration is more effective than the lookahead method; 4) We release our Transformer-related code with reproducible results on Librispeech at \cite{Wang2020Transformer} to facilitate future research. \section{Popular End-to-End Models} \label{sec:e2e} In this section, we give a brief introduction to the current popular E2E models: RNN-T, RNN-AED, and Transformer-AED. These models have an acoustic encoder that generates a high-level representation of speech and a decoder, which autoregressively generates output tokens in the linguistic domain. While the acoustic encoders can be the same, the decoders of RNN-T and AED are different. In RNN-T, the generation of the next label is only conditioned on the label outputs at previous steps, while the decoder of AED conditions the next output on acoustics as well. More importantly, RNN-T works in a frame-synchronized way while AED works in a label-synchronized fashion. \subsection{RNN transducer} The encoder network converts the acoustic feature $x_{1:T}$ into a high-level representation $h_{1:T}^{enc}$. The decoder, called the prediction network, produces a high-level representation $h_u^{pre}$ by consuming the previous non-blank target $y_{u-1}$. Here $u$ denotes the output label index. The joint network is a feed-forward network that combines the encoder network output $h_t^{enc}$ and the prediction network output $h_u^{pre}$ to generate the joint matrix $h_{t,u}$, which is used to calculate the softmax output. Here $t$ denotes the time index.
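The joint network described above can be sketched in plain Python as a small feed-forward combination of the two representations (a minimal illustration with made-up dimensions and weights, not the actual model):

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def joint(h_enc_t, h_pre_u, W_out):
    """Combine encoder output h_t^enc and prediction output h_u^pre,
    project to output tokens (incl. blank), and apply softmax."""
    # element-wise addition followed by tanh, one common choice for the joint
    h_tu = [math.tanh(a + b) for a, b in zip(h_enc_t, h_pre_u)]
    logits = [sum(w * h for w, h in zip(row, h_tu)) for row in W_out]
    return softmax(logits)

# toy 3-dim representations and a 4-token output space (3 labels + blank)
h_enc_t = [0.5, -0.2, 0.1]
h_pre_u = [0.1, 0.3, -0.4]
W_out = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0],
         [0.5, 0.5, 0.5]]
p = joint(h_enc_t, h_pre_u, W_out)
```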
The encoder and prediction networks are usually realized using RNNs with LSTM \cite{Hochreiter1997long} units. When the encoder is a unidirectional LSTM-RNN as in Eq. \eqref{eq:uni_enc}, RNN-T works in streaming mode by default. \begin{equation} h_t^{enc} = LSTM (x_t, h_{t-1}^{enc}) \label{eq:uni_enc} \end{equation} However, when the underlying LSTM-RNN encoder is a bi-directional model as in Eq. \eqref{eq:bi_enc}, it is a non-streaming E2E model. \begin{equation} h_t^{enc} = [LSTM (x_t, h_{t-1}^{enc}), LSTM (x_t, h_{t+1}^{enc})] \label{eq:bi_enc} \end{equation} When implemented with LSTM-RNN, the prediction network formulation is \begin{equation} h_u^{pre} = LSTM (y_{u-1}, h_{u-1}^{pre}). \label{eq:dec} \end{equation} Motivated by the success of Transformer models, recent works replace the LSTM-RNN in the encoder with the Transformer model to construct the Transformer transducer \cite{zhang2020transformer} and the Conformer transducer \cite{gulati2020conformer}. \subsection{Attention-based Encoder-Decoder} While RNN-T has received more attention from the industry due to its streaming nature, the Attention-based Encoder-Decoder (AED) models attract more research from academia because of their powerful attention structure. RNN-AED and Transformer-AED differ in the realization of the encoder and decoder, using LSTM-RNN and Transformer, respectively. \subsubsection{RNN-AED} The encoder of RNN-AED can have the same structure as that of RNN-T, as in Eq. \eqref{eq:uni_enc} and Eq. \eqref{eq:bi_enc}. However, the attention-enhanced decoder operates differently, as below: \begin{equation} h_u^{dec} = LSTM (c_u, y_{u-1}, h_{u-1}^{dec}). \label{eq:rnnaed_dec} \end{equation} Here, $c_u$ is the context vector obtained by a weighted combination of the encoder output. $c_u$ is supposed to contain the acoustic information necessary to emit the next token. It is calculated with the help of the attention mechanism \cite{Attention-bahdanau2014, bahdanau2016end}.
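The context vector $c_u$ can be sketched as a softmax-weighted sum of the encoder outputs. The following minimal illustration uses toy, hand-picked attention scores; the real model computes them with a learned, location-aware scoring function:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def context_vector(scores, h_enc):
    """c_u = sum_t a_t * h_t^enc with a = softmax(scores)."""
    a = softmax(scores)
    dim = len(h_enc[0])
    return [sum(a[t] * h_enc[t][i] for t in range(len(h_enc)))
            for i in range(dim)]

# toy encoder outputs for T=3 frames (2-dim) and unnormalized toy scores
h_enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [0.1, 0.2, 0.3]
c_u = context_vector(scores, h_enc)
```

Because the weights are a softmax, $c_u$ always lies in the convex hull of the encoder states.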
\subsubsection{Transformer-AED} Even though RNNs can capture long-term dependencies, Transformer-based models \cite{vaswani2017attention} can do so more effectively because the attention mechanism sees all context directly. Specifically, the encoder is composed of a stack of Transformer blocks, where each block has a multi-head self-attention layer and a feed-forward layer. Suppose that the input of a Transformer block can be linearly transformed to $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$. Then, the output of a multi-head self-attention layer is \begin{flalign}\label{eq:transformer} \text{Multihead}(\mathbf{Q,K,V}) &= [\mathbf{H_1} \ldots \mathbf{H}_{d_{head}}]\mathbf{W}^{head}\\ \text{where~} \mathbf{H_i} &= \text{softmax}(\frac{\mathbf{Q_iK_i^{T}}}{\sqrt{d_k}})\mathbf{V_i}, \nonumber \\ \mathbf{Q_i} = \mathbf{Q} W^{Q_i}, \mathbf{K_i}& = \mathbf{K} W^{K_i}, \mathbf{V_i} = \mathbf{V} W^{V_i}. \nonumber \end{flalign} Here $d_{head}$ is the number of attention heads and $d_k$ is the dimension of the feature vector for each head. This output is fed to the feed-forward layer. Residual connections \cite{RESNET-he2015} and layer normalization (LN) \cite{ba2016layer} are indispensable when we connect different layers and blocks. In addition to the two layers in an encoder block, the Transformer decoder also has a third layer that performs multi-head attention over the output of the encoder. This is similar to the attention mechanism in RNN-AED. \section{Our Models} \subsection{Model building block} \label{ssec:block} The encoder and decoder of E2E models are constructed as a stack of multiple building blocks described in this section. For the models using LSTM-RNN, we explore two structures. The first one, LSTM\_cuDNN, directly calls the Nvidia cuDNN library \cite{chetlur2014cudnn} for the LSTM implementation. We build every block by concatenating a cuDNN LSTM layer and a linear projection layer to reduce the model size, followed by LN.
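The post-processing inside each LSTM\_cuDNN block (a linear projection followed by LN) can be sketched as follows (a minimal illustration; the dimensions and weights are made up):

```python
import math

def linear(x, W):
    """Linear projection: y_j = sum_i W[j][i] * x[i]."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """LN over the feature dimension: zero mean, unit variance, then scale/shift."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]

# toy 4-dim LSTM output projected down to 2 dims, then normalized
lstm_out = [1.0, 2.0, 3.0, 4.0]
W_proj = [[0.25, 0.25, 0.25, 0.25],
          [0.5, 0.0, 0.0, 1.0]]
block_out = layer_norm(linear(lstm_out, W_proj))
```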
Calling the Nvidia cuDNN implementation enables fast experimentation when comparing different models. The second structure, LSTM\_Custom, puts LN and the projection layer inside the LSTM, as it was indicated in \cite{he2019streaming} that they are important for better RNN-T model training. Hence, we only use this structure for RNN-T by customizing the LSTM function. The detailed formulations are in \cite{Li2019RNNT}. However, this slows down the model training speed by 50\%. For the Transformer-AED models, we remove the position embedding part \cite{wang2020semantic} and use a VGG-like convolution module \cite{simonyan2014very} to pre-process the speech feature $x_{1:T}$ before the Transformer blocks. The LN is put before the multi-head attention layer (Pre-LN), which makes the gradients well-behaved at the early stage of training. \subsection{Non-streaming models} We achieve non-streaming behavior in RNN-T by adding bidirectionality in the encoder. The encoder of this RNN-T is composed of multiple blocks of bi-directional LSTM\_cuDNN as described in Section \ref{ssec:block}. The prediction network is realized with multiple uni-directional blocks of LSTM\_cuDNN. Similar to RNN-T, the non-streaming RNN-AED investigated in this study also uses multiple blocks of bi-directional LSTM\_cuDNN in the encoder and uni-directional LSTM\_cuDNN in the decoder. This decoder works together with a location-aware softmax attention \cite{Attention-speech-chorowski2015}. No multi-task training or joint decoding with CTC is used for RNN-AED. Following \cite{karita2019comparative}, the Transformer-AED model uses multi-task training and joint decoding of CTC/attention. The training objective function is \begin{equation} \mathcal{L} = - \alpha \log p_{ctc}(y|x_{1:T}) - (1-\alpha) \log p_{att}(y|x_{1:T}).
\end{equation} The log-likelihood of the next subword $\log p(y_u|x_{1:t},y_{1:u})$ in the joint decoding is formulated as \begin{equation} \label{eq:joint_decoding} \log p_{ctc}(y_u|x_{1:t},y_{1:u}) + \beta_1 \log p_{att}(y_u|x_{1:t},y_{1:u}). \end{equation} In practice, we first use the attention model to select the top-k candidates and then re-rank them with Eq. \ref{eq:joint_decoding}. \subsection{Streaming models} \label{sec:streaming_model} The streaming RNN-T model has a uni-directional encoder. While we can directly incorporate a standard LSTM as the building block with either LSTM\_cuDNN or LSTM\_Custom as described in Section \ref{ssec:block}, incorporating future context into the encoder structure can significantly improve the ASR accuracy, as shown in \cite{Li2019RNNT}. However, different from \cite{Li2019RNNT}, which explores future context frames together with the layer trajectory structure, in this study we propose to only use context modeling. We do this to save model parameters. Future context is modeled using the simple equation below. \begin{equation} {\zeta}_{t}^{l} = \sum_{\delta =0}^{\tau} {q}_{\delta}^{l} \odot {g}_{t+\delta}^{l}. \label{eq:context} \end{equation} Because $\odot$ is the element-wise product, Eq. \eqref{eq:context} only increases the number of model parameters very slightly. It transforms a lower-layer vector $g_t^l$ together with its future vectors $g_{t+\delta}^{l}$ into a new vector ${\zeta}_{t}^{l}$, where $\delta$ is the future frame index. We modify the block of LSTM\_cuDNN or LSTM\_Custom with the context modeling. \begin{itemize} \item LSTM\_cuDNN\_Context: the block is constructed with an Nvidia cuDNN LSTM layer, followed by a linear projection layer, then the context modeling layer, and finally a LN layer. \item LSTM\_Custom\_Context: the block is constructed with the layer-normalized LSTM layer with projection, followed by the context modeling layer.
\end{itemize} A similar concept of context modeling was applied to RNNs in \cite{wang2016lookahead} as a lookahead convolution layer. However, it was only applied to the top layer of a multi-layer RNN. In contrast, in this study we apply context modeling to every block of LSTM\_cuDNN or LSTM\_Custom, and also investigate its effectiveness in the context of E2E modeling. For RNN-T, we also investigate initializing the encoder with either CTC \cite{rao2017exploring} or CE training \cite{Hu2020}. RNN-AED models use blocks of LSTM\_cuDNN\_Context as the encoder. Experiments with LSTM\_Custom\_Context will be a part of a future study. The streaming mechanism we have chosen for this study is Monotonic Chunkwise Attention (MoChA) \cite{mocha}. MoChA consists of a monotonic attention mechanism \cite{monotonic_attention} which scans the encoder output in a left-to-right order and selects a particular encoder state when it decides to trigger the decoder. This selection is made by sampling from a parameterized Bernoulli random variable. Once a trigger point is detected, MoChA also uses an additional lookback window and applies a regular softmax attention over it. Note that we have a sampling operation here, which precludes the use of standard backpropagation. Therefore, we train with respect to the expected values of the context vectors. Please refer to \cite{mocha} for more details. To enable the streaming scenario in Transformer-AED models, we borrow the idea of triggered attention (TA) \cite{moritz2019triggered}, where the CTC conducts frame-synchronized decoding to select the top-k candidates for each frame, and the attention model is then leveraged to jointly re-rank the candidates using Eq. \ref{eq:joint_decoding} once a new subword is triggered by the CTC. Since the Transformer encoder is deeper than the LSTM one, the lookahead method may not be the best solution. We compare the chunk-based method and the lookahead-based method.
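The per-block context modeling of Eq.~\eqref{eq:context} is just an element-wise weighted sum over the current frame and the next $\tau$ frames. A minimal sketch with toy dimensions and weights (the edge handling, repeating the last frame, is our own illustrative assumption):

```python
def context_modeling(g, q, t):
    """zeta_t = sum_{delta=0..tau} q[delta] (element-wise *) g[t+delta]."""
    tau = len(q) - 1
    dim = len(g[0])
    zeta = [0.0] * dim
    for delta in range(tau + 1):
        # repeat the last frame at the sequence edge (illustrative choice)
        frame = g[min(t + delta, len(g) - 1)]
        for i in range(dim):
            zeta[i] += q[delta][i] * frame[i]
    return zeta

# toy sequence of 2-dim lower-layer vectors and per-delta weights (tau = 2)
g = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
q = [[0.5, 0.5], [0.3, 0.3], [0.2, 0.2]]
zeta_0 = context_modeling(g, q, 0)
```

Because the combination is element-wise, the extra parameters per block are only $(\tau+1)$ vectors of the layer width.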
The chunk-based method segments the entire input into several fixed-length chunks and then feeds them into the model chunk by chunk, while the lookahead-based method is exactly the same as the one used in RNN-T and RNN-AED. For the chunk-based encoder, the decoder can see up to the end of a chunk. For the lookahead-based encoder, we set a fixed window size for the decoder. \section{Experiments} In this section, we evaluate the effectiveness of all models by training them with 65 thousand (K) hours of transcribed Microsoft data. The test sets cover 13 application scenarios such as Cortana and far-field speech, containing a total of 1.8 million (M) words. We report the word error rate (WER) averaged over all test scenarios. All the training and test data are anonymized with personally identifiable information removed. For a fair comparison, all E2E models built for this study have around 87 M parameters. The input feature is an 80-dimension log Mel filter bank with a stride of 10 milliseconds (ms). Three of them are stacked together to form a 240-dimension super-frame. This is fed to the encoder networks of RNN-T and RNN-AED, while Transformer-AED directly consumes the 10 ms features. All E2E models use the same 4 K word piece units as the output target. \subsection{Non-streaming E2E models} As described in Section \ref{ssec:block}, the non-streaming RNN-T model uses bi-directional LSTMs with the Nvidia cuDNN library in its encoder. The LSTM memory cell size is 780. The LSTM outputs from the forward and backward directions are concatenated to a total dimension of 1560 and then linearly projected to dimension 780, followed by a LN layer. There are 6 stacked blocks of such operation in total. The prediction network has 2 stacked blocks, each of which contains a uni-directional cuDNN LSTM with a memory cell size of 1280, followed by a linear projection layer to reduce the dimension to 640, and then a LN layer.
The non-streaming RNN-AED model uses exactly the same encoder and decoder structures as the non-streaming RNN-T model. Similar to \cite{bahdanau2016end}, a location-aware attention mechanism is used. In addition to the encoder and decoder hidden states, this mechanism also takes the alignments from the previous decoder step as inputs. The attention dimension is 512. The Transformer-AED model has 18 Transformer blocks in the encoder and 6 Transformer blocks in the decoder. Before the Transformer blocks in the encoder, we use a 4-layer VGG network to pre-process the speech feature with a total stride of 4. The number of attention heads is 8 and the attention dimension of each head is 64. The dimension of the feed-forward layer in the Transformer blocks is 2048. The combination weights of joint training and decoding (i.e., $\alpha$ and $\beta_1$) are both 0.3. As shown in Table \ref{tab:wer_non-streaming}, the non-streaming AED models have a clear advantage over the non-streaming RNN-T model due to the power of attention modeling. Transformer-AED improves over RNN-AED by a 2.7\% relative WER reduction. \begin{table}[t] \caption{Average WER of all non-streaming E2E models on 13 test sets containing 1.8 M words. } \label{tab:wer_non-streaming} \centering \begin{tabular}{l|c} \hline non-streaming models & WER \\ \hline RNN-T (cuDNN) & 9.25 \\ RNN-AED (cuDNN) & 8.05 \\ Transformer-AED & 7.83 \\ \hline \hline \end{tabular} \end{table} \subsection{Surpassing hybrid model with streaming E2E models} In \cite{li2020high} we reported results from our best hybrid model, called the contextual layer trajectory LSTM (cltLSTM) \cite{li2019improving}. The cltLSTM was trained with a three-stage optimization process. This model was able to obtain a 16.2\% relative WER reduction over the CE baseline. Introducing 24 frames of total future context further yields an 18.7\% relative WER reduction. The encoder latency is only 480 ms (24$\times$20 ms = 480 ms; the stride per frame is 20 ms due to frame skipping \cite{Miao16}).
Hence, this cltLSTM model (Table \ref{tab:wer_streaming}) presents a very challenging streaming hybrid model to beat. This model has 65 M parameters, and is decoded with a 5-gigabyte 5-gram decoding graph. We list the results for all streaming E2E models in Table \ref{tab:wer_streaming}. The baseline RNN-T implementation uses unidirectional cuDNN LSTMs in both the encoder and the decoder. The encoder has 6 stacked blocks of LSTM\_cuDNN. Each block has a unidirectional cuDNN LSTM with 1280 memory cells, whose output is projected to 640 dimensions and followed by LN. The prediction and joint networks are the same as in the non-streaming RNN-T model. This RNN-T model obtains a 12.16\% test WER. The second RNN-T model inserts the context modeling layer (Eq. \eqref{eq:context}) after the linear projection layer in each block. The context modeling has a 4-frame lookahead at each block, and therefore the encoder has a $4\times6=24$ frame lookahead. Because the frame shift is 30 ms, the total encoder lookahead is 720 ms. The lookahead brings a large WER improvement, obtaining a 10.65\% WER. This is a 12.4\% relative WER reduction from the first RNN-T model without any lookahead. We also followed the lookahead convolution proposed in \cite{wang2016lookahead} by using a 24-frame lookahead only on the topmost RNN block. This model gives an 11.19\% WER, showing that our proposed context modeling, which allocates lookahead frames equally at each block, is better than lookahead convolution \cite{wang2016lookahead}, which simply puts all lookahead frames on the top layer only. Next, we look at the impact of encoder initialization for RNN-T. As shown in Table \ref{tab:wer_streaming}, the CTC initialization of the RNN-T encoder does not help much, while the CE initialization significantly reduces the WER to 9.80\%. This is an 8.0\% relative WER reduction from the randomly initialized model.
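The lookahead latency and the relative reductions quoted above follow from simple arithmetic:

```python
def rel_wer_reduction(baseline, improved):
    """Relative WER reduction in percent."""
    return 100.0 * (baseline - improved) / baseline

# context modeling vs. no lookahead, and CE init vs. random init
ctx_gain = rel_wer_reduction(12.16, 10.65)  # ~12.4% relative
ce_gain = rel_wer_reduction(10.65, 9.80)    # ~8.0% relative

# encoder lookahead: 4 frames per block * 6 blocks * 30 ms frame shift
encoder_lookahead_ms = 4 * 6 * 30           # 720 ms
```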
The CTC initialization makes the encoder emit token spikes together with many blanks, while the CE initialization enables the encoder to learn the time alignment. Given the gain with CE initialization, we believe the encoder of RNN-T functions more like an acoustic model in the hybrid setup. Note that CE pre-training needs time alignments, which are hard to get for word piece units, as many of them do not have a phoneme realization. However, the time alignment for words is still accurate. We make an approximation and obtain alignments for a word piece by simply segmenting the duration of its word equally among its constituent word pieces. For the last RNN-T model, we put the projection layer and LN inside the LSTM cell (LSTM\_Custom), and then insert the context modeling layer after it. Putting the projection layer inside allows us to use a larger number of memory cells while keeping a similar model size to the LSTM\_cuDNN setup. This LSTM has 2048 memory cells and the projection layer reduces the output size to 640. This model finally gives a 9.27\% WER, which is slightly better than our best hybrid model. \begin{table}[t] \caption{Average WERs of streaming models on 13 test sets containing 1.8 M words.} \label{tab:wer_streaming} \centering \begin{tabular}{l|c|c} \hline streaming models & WER & encoder lookahead \\ \hline hybrid & & \\ \quad cltLSTM & 9.34 & 480 ms \\ \hline \hline RNN-T & & \\ \quad cuDNN & 12.16 & 0 ms \\ \quad cuDNN+Context & 10.65 & 720 ms \\ \quad cuDNN+convolution \cite{wang2016lookahead} & 11.19 & 720 ms \\ \quad cuDNN+Context+CTC init. & 10.62 & 720 ms \\ \quad cuDNN+Context+CE init. & 9.80 & 720 ms \\ \quad Custom+Context+CE init.
& 9.27 & 720 ms \\ \hline RNN-AED &&\\ \quad cuDNN+Context & 9.61 & 720 ms \\ \hline Transformer-AED && \\ \quad Lookahead method & 10.26 & 720 ms \\ \quad Chunk-based method & 9.16 & 720 ms \\ \hline \end{tabular} \end{table} With the same encoder architecture as the cuDNN RNN-T, the MoChA-based streaming RNN-AED model gives impressive results. Unlike RNN-T, it does not need any initialization and is still able to slightly outperform RNN-T in an apples-to-apples comparison (9.61\% vs 9.80\%). To the best of our knowledge, this is the first time a streaming RNN-AED has outperformed RNN-T on a large-scale task. Note that our previous study did not observe an accuracy improvement for RNN-AED with CE initialization \cite{Hirofumi2020streaming}. We will investigate whether RNN-AED can also benefit from the customized LSTM function in a future study. The architecture of the streaming Transformer-AED model is the same as the non-streaming one. For the lookahead context-modeling method, each encoder block looks ahead 1 frame. Considering that the total stride of the VGG network is 4 and the frame shift is 10 ms, the encoder has $1\times18\times4\times10~\text{ms}=720~\text{ms}$ latency. The decoder of the lookahead method introduces an extra 240 ms latency. The chunk-based method considers future context within a fixed-length chunk. The latency of each frame is in the range of $[480~\text{ms}, 960~\text{ms}]$, resulting in a 720 ms average latency without extra decoder latency. The chunk-based method obtains a 9.16$\%$ WER, significantly outperforming the lookahead method, mainly because the bottom Transformer blocks of the lookahead approach cannot enjoy the full advantage provided by the right context. \section{Conclusions} This work presents the first large-scale comparative study of three popular E2E models (RNN-T, RNN-AED, and Transformer-AED). The models are compared in both streaming and non-streaming modes. All models are trained with 65K hours of Microsoft's internal anonymized data.
We observe that with the same encoder structure, AED is better than RNN-T for both non-streaming and streaming models. With the customized LSTM and CE initialization for the encoder, the RNN-T model becomes better than RNN-AED. Among all models, Transformer-AED obtained the best WERs in both streaming and non-streaming modes. In this study, both streaming RNN-T and Transformer-AED outperformed a highly-optimized hybrid model. There are several significant factors contributing to this success. For streaming RNN-T, the proposed context modeling reduces the WER by 12.4\% relative compared to the model without any lookahead. The CE initialization for RNN-T improves over the random initialization baseline by an 8.0\% relative WER reduction. This shows pretraining is helpful even on a large-scale task. To utilize future context for streaming Transformer-AED, we show that the chunk-based method is better than the lookahead method by 10.7\% relative. \bibliographystyle{IEEEtran}